In our last post I provided an introduction to Ansible and some use cases for it. Today I will cover inventory configuration, both simple and advanced.
In Ansible, your inventory is essentially just a listing of the systems to be managed, including groupings and variables as needed. Ansible uses this information to decide which systems to perform tasks on. Multiple inventory files can be defined and used independently or together in the same run. Ansible also supports what is called Dynamic Inventory, which allows inventory to be pulled from dynamic sources, typically cloud platforms.
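As a quick illustration of the format, a small inventory file might define an ungrouped host, a group, and a few variables. The host names and variables below are placeholders, not part of our actual setup:

web1.example.com ansible_ssh_port=2222

[dbservers]
db1.example.com
db2.example.com

[dbservers:vars]
ansible_ssh_user=dbadmin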
The following examples assume you have created the standard project layout as covered in the last post, though technically that is not required.
On all systems being managed, you must have Python installed. The exact version rarely matters for the basic modules we'll use here. Most Linux distros include it by default nowadays, but some platforms, FreeBSD for instance, do not.
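If you do need to bootstrap Python onto a system that lacks it, Ansible's raw module runs commands over plain SSH without requiring Python on the target. Once you have an inventory in place (we'll create one below), something along these lines should do it on FreeBSD (adjust the package command for your platform):

[brian@client ~/ansible]$ ansible offsite -i hosts -s -m raw -a "pkg install -y python"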
Before continuing much further, there is one requirement that must be met. Ansible does all of its work over SSH, so every system being managed must allow key-based SSH authentication from the controlling system as the user you connect with. Assuming you are not managing your systems as root (not a good practice, but not my place to tell you not to), the user Ansible runs as and connects with must have sudo access for any system-level changes. See our post titled Configuring Key-Based SSH Authentication for instructions on configuring key-based SSH authentication.
It is our recommendation that, if you do not already have one, you create a dedicated user just for Ansible (or systems management in general) on both the controlling machine and all systems being managed. Configure key-based SSH authentication for this user rather than managing the keys and sudo access of multiple users.
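You can sanity check both pieces by hand before involving Ansible at all. Assuming a dedicated user named ansible, a command like this verifies key-based login and passwordless sudo in one shot (sudo -n fails instead of prompting if a password would be required):

[brian@client ~]$ ssh ansible@offsite1.kissitconsulting.com 'sudo -n whoami'
root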
Let's get started with a simple inventory consisting of a single file and a handful of systems by creating a file named hosts in the base project directory. In my case I have three systems defined, two of which are in a group named offsite.
[brian@client ~/ansible]$ cat hosts
onsite.kissitconsulting.com

[offsite]
offsite1.kissitconsulting.com
offsite2.kissitconsulting.com
Ok, now for a quick test of our SSH authentication and sudo configuration.
[brian@client ~/ansible]$ ansible all -i hosts -s -m ping
onsite.kissitconsulting.com | success >> {
    "changed": false,
    "ping": "pong"
}

offsite1.kissitconsulting.com | success >> {
    "changed": false,
    "ping": "pong"
}

offsite2.kissitconsulting.com | success >> {
    "changed": false,
    "ping": "pong"
}
Success! Let's break apart that ansible command and explain what we're doing:

- all: run against every host in the inventory
- -i hosts: use the hosts file we just created as the inventory
- -s: perform the operation using sudo (which also exercises our sudo configuration)
- -m ping: run the ping module, a simple test that confirms Ansible can connect to and execute on each host
Now, for a more concrete example, let's say I want to restart Apache on both of my offsite servers. It is as simple as this:
[brian@client ~/ansible]$ ansible offsite -i hosts -s -m service -a "name=httpd state=restarted"
offsite1.kissitconsulting.com | success >> {
"changed": true,
"name": "httpd",
"state": "started"
}
offsite2.kissitconsulting.com | success >> {
"changed": true,
"name": "httpd",
"state": "started"
}
Success! In this command, we have some of the same parameters, but also some new ones:

- offsite: target only the hosts in the offsite group rather than all hosts
- -m service: use the service module, which manages system services
- -a "name=httpd state=restarted": the arguments to pass to the module, here the service name and the desired state
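The same pattern works with any module. As a quick sketch, here's how you might make sure Apache is installed on the offsite group using the yum module (this assumes RHEL-family hosts; use apt or your platform's package module otherwise):

[brian@client ~/ansible]$ ansible offsite -i hosts -s -m yum -a "name=httpd state=present"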
Ok, so the above, while kind of cool, is also rather boring. Sure, if you're managing a reasonable number of static servers that rarely change, this approach works great. But imagine for a second that you're managing tens, hundreds, or even more AWS instances on a daily basis, some of which may not be static (for example, a portion of your fleet that you spin down during off-peak times). It quickly becomes clear that while you're gaining efficiency through automation, you're losing some of it just maintaining the inventory configuration. Enter Ansible's support for Dynamic Inventory.
Ansible supports various types of dynamic inventory. In our case we're going to focus on the AWS EC2 inventory script. We've also used the Rackspace Cloud module (rax) with great success, but to avoid confusion we'll stick to AWS.
First, we need to install the EC2 plugin into our Ansible project. It does not come prepackaged with an installer, so a few manual steps are required:
[brian@client ~/]$ cd ~/ansible
[brian@client ~/ansible]$ mkdir inventory
[brian@client ~/ansible]$ wget -O inventory/ec2.py https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
[brian@client ~/ansible]$ wget -O inventory/ec2.ini https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
[brian@client ~/ansible]$ chmod 755 inventory/ec2.py
In order for the plugin to retrieve the inventory from AWS, you must configure your API credentials. To do this, add the following entries to your .bashrc file (or the equivalent for your shell and platform). If you need help finding your keys, see the AWS documentation on access keys.
export AWS_SECRET_ACCESS_KEY=<your_aws_secret_access_key>
export AWS_ACCESS_KEY_ID=<your_aws_access_key_id>
Once done, log out and back in (or source the file) to pick up the change.
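You can confirm the variables made it into your new session before moving on:

[brian@client ~/ansible]$ echo $AWS_ACCESS_KEY_ID
<your_aws_access_key_id>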
You can quickly test that the plugin is working as expected by doing the following:
[brian@client ~/ansible]$ ./inventory/ec2.py
Assuming it was successful, after a few moments information about your various AWS EC2 assets will be output as JSON. If you see errors instead, the most likely culprits are your AWS access key configuration or a missing Python dependency (the script relies on the boto library).
Once the plugin is working, you can run ansible against your entire EC2 fleet without maintaining a single entry in a static inventory file. Right now that may not seem like much, but we'll build on it.
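For example, this pings every instance the script discovers. Note that the remote user depends on your AMIs (ec2-user is typical for Amazon Linux), so adjust -u accordingly:

[brian@client ~/ansible]$ ansible all -i inventory/ec2.py -u ec2-user -s -m ping

The script also builds groups from your EC2 metadata automatically, things like regions, security groups, and tags (for example, tag_Name_web for instances tagged Name=web), so you can target slices of your fleet the same way we targeted the offsite group earlier.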
At this point, we've covered the Ansible basics as well as inventory concepts. Stay tuned for the upcoming posts in this series in which we'll build on the usage of our EC2 plugin and begin to cover common configuration management examples, common build examples, and other advanced topics for managing your EC2 environment using Ansible.