Ansible - Building AWS EC2 Instances

Posted on 01/14/2016 by Brian Carey

In our last Ansible post I covered managing Ansible inventory, both manually and with the standard EC2 plug-in.  Today I will cover how you can use Ansible to automate building the AWS EC2 instances that make up your inventory, including adding tags for grouping and creating DNS entries for your new instances in Route 53.

Before going any further, one new concept that will be introduced is the Ansible playbook.  Essentially, a playbook is a set of tasks to perform against the host(s) targeted in a run of Ansible.  Using the various modules provided, you can build playbooks that do a single simple task, include that task in other playbooks, or define a larger playbook for one complete process.  Once you have your playbook, it is run using the ansible-playbook command, similar to the ansible command used in our previous posts.
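
As a quick illustration of the shape of a playbook, here is a trivial example (it has nothing to do with the EC2 build we're working toward, and the filename used is arbitrary):

---
- name: "A trivial example playbook"
  hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: "Say hello"
      debug: msg="Hello from Ansible"

Saved as hello.yml, it would be run with ansible-playbook hello.yml.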

Starting our playbook

First, we need to start our new playbook.  In this case, we aren't operating on a group of existing servers; instead, we run against our local machine and use the EC2 modules to set up our instance.  For example, we'll start with something like this:

---
- name: "Playbook to spin up new AWS instances"
  hosts: localhost
  connection: local
  gather_facts: False 
  vars:
    - aws_key_name: <your_ec2_ssh_key_name>
    - default_region: us-west-2
    - default_type: t2.micro
    - default_security_group: default
    - default_subnet: subnet-<nnnnnnnn>

Here, we tell the playbook to run against our localhost and, more importantly, we specify some default variables to use if certain items aren't passed in on the command line when we request a build.  For simplicity in the example I placed them here, but in a real-world setting these variables would be more appropriately defined in one or more group_vars configurations so that you could set different defaults for various situations.  They should be changed to values that match your environment.
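
For example, a group_vars/all file carrying these defaults might look like the following (the key name and subnet here are placeholders):

# group_vars/all -- example defaults, adjust the values for your environment
aws_key_name: my-ec2-key
default_region: us-west-2
default_type: t2.micro
default_security_group: default
default_subnet: subnet-nnnnnnnn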

Next, we start the tasks section of the playbook and include a task that simply checks that our required instance details were passed in.  This may or may not be required in your case, but if nothing else it's a good way to show how you can check for variables being set.

  tasks:
    - name: "Validate that the required parameters were passed"
      fail: msg="Please pass the required parameters (name, image, role)"
      when: "name is not defined or name == \"\" or image is not defined or image == \"\" or role is not defined or role == \"\""

Creating our new EC2 instance

Next, we need to create a new task to initiate the build of our instance using the AWS API.  For this we use the built-in Ansible module named ec2, as follows:

    - name: "Create the new instance(s)"
      ec2:
        region: "{{ region | default(default_region) }}"
        key_name: "{{ aws_key_name }}"
        instance_type: "{{ instance_type | default(default_type) }}"
        image: "{{ image }}"
        wait: "{{ wait | default('no') }}"
        group: "{{ group | default(default_security_group) }}"
        instance_tags: 
          Name: "{{ name }}"
          role: "{{ role }}"
        user_data: "{{ name }}"
        vpc_subnet_id: "{{ subnet_id | default(default_subnet) }}"
      register: ec2

Most of the above options should hopefully be self-explanatory to anyone remotely familiar with EC2.  However, a few items are worth noting:

  • You may notice the use of the Jinja2 default filter.  We do this to allow us to use variables we pass in when running the playbook, but if we don't provide them we use the default values set in our variables above.  
  • You may also notice we set a few tags here in the instance_tags setting; these are what let us keep using Ansible across our infrastructure as we build more instances, introduce different roles, etc.  Tags are arbitrary, and you don't need to use those exact ones, or any at all.
  • We pass in our server name for the user_data option.  This is then fed into the instance's user data so that our instance will come up with a friendly name.  More details on this can be found in our post Dynamically Assign Host Names to your EC2 Instances.
  • Finally, we register the output of the call to build the instance in a variable named ec2 for use in the next steps of our playbook (a quick way to inspect it is shown just after this list).
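
If you're curious exactly what ends up in that ec2 variable, a quick way to inspect it while developing is to drop a temporary debug task in right after the build step:

    - name: "Show what the ec2 module registered (temporary, for debugging)"
      debug: var=ec2.instances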

Tagging our instance

We talked about tags a moment ago; here is a bit more about them, since they have become central to how we use Ansible to manage large fleets of instances.  As part of the previous task of creating our instance, we set a few tags that we call our standard tags; every instance gets these.  However, most instances get more, and this next task uses the Ansible ec2_tag module to dynamically add those extras if they are passed into our build; if not, we skip the step.

    - name: "Tag instance(s)"
      ec2_tag:
        region: "{{ region | default(default_region) }}"
        resource: "{{ item.id }}"
        state: present
        tags: "{{ tags }}"
      with_items: ec2.instances
      when: "tags is defined"

Again, I hope the above is fairly self-explanatory, but we do have some new concepts here:

  • You'll notice the with_items keyword, which loops over an Ansible list.  In this case, the list has only a single entry: the instance information from the previous step that we registered in the ec2 variable.  This allows us to access the details of the newly built instance, namely our instance ID referenced by item.id.
  • We specify when: "tags is defined".  This is a conditional that says to only run this task if we have a tags variable defined with a list of tags.  You'll see how we pass these in below.
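
For reference, the tags variable this task expects is just a dictionary of key/value pairs; in YAML form (if you were setting it in a vars file rather than on the command line) it might look like:

tags:
  redis: ""
  nagios_hostgroups: "prod,redis"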

Adding a DNS entry to our new instance

For the final step in our creation process, we'll automatically create a DNS entry for our new instance in the AWS Route 53 DNS system using the Ansible route53 module.  In this case, we'll just add a record to our internal-only DNS zone, accessible only from within our VPC, but the process would be much the same for a public zone (a sketch of that follows at the end of this section).  This does require that the zone you're adding the record to already exists.

    - name: "Create DNS record for kiss.int"
      route53:
        command: create
        private_zone: true
        overwrite: yes
        record: "{{ name }}.kiss.int"
        zone: "kiss.int"
        type: A
        ttl: 300
        value: "{{ item.private_ip }}"
      with_items: ec2.instances

By now you know the drill; some points worth noting:

  • Once again you see the with_items.  We use this to pull out the private IP of our new instance.
  • We specify the overwrite: yes option here; this ensures that if the name was previously registered we will update it with the new IP.
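
For a public zone the task would look almost identical; here is a rough sketch, assuming a hypothetical public zone named example.com (note that you would likely want wait: yes on the ec2 task so the public IP is assigned before this runs):

    - name: "Create public DNS record (sketch -- example.com is a placeholder)"
      route53:
        command: create
        overwrite: yes
        record: "{{ name }}.example.com"
        zone: "example.com"
        type: A
        ttl: 300
        value: "{{ item.public_ip }}"
      with_items: ec2.instances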

Putting it all together

Now that we have our playbook written, it's time to build some instances.  For reference, the playbook assembled from the pieces above can be found here in its entirety.

In its simplest form, we can run it like so, passing in the minimum required information and allowing the defaults to do the rest:

[brian@freebsd-local ~/kiss-ops/ansible]$ ansible-playbook create_ec2_instance.yml --extra-vars='{"name":"test1", "image":"ami-nnnnnnnn", "role":"web"}'

PLAY [Playbook to spin up new AWS instances] **********************************

TASK: [Validate that the required parameters were passed] *********************
skipping: [localhost]

TASK: [Create the new instance(s)] ********************************************
changed: [localhost]

TASK: [Tag instance(s)] *******************************************************
skipping: [localhost] => (item={'ramdisk': None, 'kernel': None, 'root_device_type': 'ebs', .......})

TASK: [Create DNS record for kiss.int] ****************************************
changed: [localhost] => (item={'ramdisk': None, 'kernel': None, 'root_device_type': ........})

PLAY RECAP ********************************************************************
localhost                  : ok=3    changed=2    unreachable=0    failed=0

Here, we pass in our name, image, and role parameters using the --extra-vars option of ansible-playbook.  In a similar fashion you could pass in any of the other options that are available; this is common in a larger environment where you may need to build, say, five servers in each of two different regions.  Also, if you look at the output, you'll notice that the Tag instance(s) task was skipped.  This is because we did not pass in any extra tags.
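
For example, to override a couple of the defaults at the same time, a run might look like this (the region and instance type are just illustrative):

[brian@freebsd-local ~/kiss-ops/ansible]$ ansible-playbook create_ec2_instance.yml --extra-vars='{"name":"test3", "image":"ami-nnnnnnnn", "role":"web", "region":"us-east-1", "instance_type":"t2.small"}'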

Now, for a more complex run, let's pass in some extra tags so that we can try out the task that was skipped last time:

[brian@freebsd-local ~/kiss-ops/ansible]$ ansible-playbook create_ec2_instance.yml --extra-vars='{"name":"test2", "image":"ami-nnnnnnnn", "role":"web", "tags":{"redis":"", "nagios_hostgroups": "prod,redis"}}'

PLAY [Playbook to spin up new AWS instances] **********************************

TASK: [Validate that the required parameters were passed] *********************
skipping: [localhost]

TASK: [Create the new instance(s)] ********************************************
changed: [localhost]

TASK: [Tag instance(s)] *******************************************************
changed: [localhost] => (item={'ramdisk': None, 'kernel': None, 'root_device_type': 'ebs', ..........})

TASK: [Create DNS record for kiss.int] ****************************************
changed: [localhost] => (item={'ramdisk': None, 'kernel': None, 'root_device_type': 'ebs', ..........})

PLAY RECAP ********************************************************************
localhost                  : ok=3    changed=3    unreachable=0    failed=0

The result looks much like last time, with similar output on the screen, but this time you'll note that the Tag instance(s) task did run.  If I check my new instance, I can see those tags.  Why those particular tags, you ask?  Maybe we'll cover that another time.

What's next?

At this point, we've covered the Ansible basics, inventory management, and now building cloud environment components.  Stay tuned for the upcoming posts in this series, in which we'll continue to build on the EC2 plugin and cover common configuration management examples and other advanced topics for managing your EC2 environment with Ansible.