So far in this book you have created your new instances and installed the necessary software on them by hand. There’s nothing wrong with that, and it’s probably the way you’ll wind up configuring your virtual machines as you go on. That said, there is another—and potentially easier—way to accomplish the same thing. You can actually import precreated instances of your software straight into your VPC using the Amazon EC2 API Command Line Tools.
Over the last several years, virtual machines have been growing in popularity as a way to quickly create and scale infrastructure resources. In fact, each of the EC2 instances you’ve been creating in this book is actually a virtual machine hosted in Amazon’s cloud. The two most common formats for virtual machines are the Virtual Machine Disk (VMDK) format, created by VMware, and the Virtual Hard Disk (VHD) format, an invention of Microsoft. Each format has gained a significant following, and there are now many vendors that package preconfigured versions of their software in one or both of these formats.
The advantage of using one of these preconfigured images is that you can directly import them into the Amazon EC2 infrastructure as a fully ready instance. You can even import them into your existing VPC—if you know the magic words, that is! The way to accomplish these feats of IT magic is with the Amazon EC2 API Command Line Tools.
Amazon is a very developer-friendly entity. For just about every service they offer, they also offer some kind of SDK or other developer tool. The AWS services are no exception. For the purpose of automating common AWS functions—like creating VPCs, starting and stopping an EC2 instance, or just about any other thing you can think of—Amazon has provided a wonderful set of command-line tools. The set you care about in this section of the book are those having to do with EC2 instances and VPCs. These collections of functions all exist in one set of command-line tools known as the EC2 API Command Line Tools.
There are too many to list completely here, but they cover approximately these functional areas:
Availability zones and regions
Elastic block store
Elastic IP addresses
Elastic network interfaces
Virtual machine (VM) import and export
Virtual private gateways
Yikes! That’s a lot of stuff!
For your purposes, you’re only going to be concerned with three specific functions from the VM import function group.
The ec2-import-instance command does exactly what it sounds like: it imports a virtual machine you have on a local computer and converts the VM to a valid EC2 instance. This process is called a conversion task. It therefore stands to reason that ec2-describe-conversion-tasks gets information about your currently running tasks and ec2-cancel-conversion-task cancels a task that’s in progress.
Before you can use these tools, you need to install them on your local machine.
In this chapter, I’m going to assume that you’re installing these tools on a Windows machine. I make this assumption because a) that’s the dominant desktop platform in IT and b) it’s the trickiest to get working.
The EC2 Command Line Tools can be found at the Amazon developer site. They come in a ZIP file, so be sure to unzip them someplace you can easily remember.
The next thing you need is a current version of a Java runtime environment (JRE), which you can get from the Oracle Java site. Once you’ve downloaded and run the installer for the JRE, you can continue. For this chapter I’m going to use the installation path of my JRE: c:\Program Files (x86)\Java\jre7\.
Many of the EC2 command-line tools also require a client certificate to identify you. This is for your protection, I promise. Since you probably don’t have such a pair from Amazon yet, let’s get those now.
Go to the main Amazon developer portal.
Select the pull-down in the upper right titled My Account/Console, and select Security Credentials.
You might be prompted to sign in to your Amazon developer account, so do that. If not, just continue.
In the Access Credentials part of the page, click the tab marked X.509 Certificates.
If this book is your first experience with Amazon AWS, you will need to create a certificate pair. This pair consists of two parts: a private-key file that you must store locally and a certificate file that you can always redownload if you need to.
You will get one—and only one—opportunity to save the private-key file associated with your certificate. It will automatically download through your browser when you create a new pair.
Save this file someplace safe and memorable.
I cannot stress this warning enough: if you misplace this file (as I have) you will need to invalidate the certificate it corresponds to and create a new pair.
Click the Create a New Certificate link.
A new window will pop up with two buttons: one to download the private-key file and one to download the new certificate. Click each button in turn, and save each file someplace safe.
Click the Access Keys tab.
Copy the Access Key ID to a text file someplace safe and private.
Click Show and copy the Secret Access Key value to the same file.
With the client certificate out of the way, you need to set some very handy environment variables. On your Windows machine, right-click My Computer and select Properties → Advanced Settings → Environment Variables.
You need to set the following system variables:

EC2_HOME
The location on disk where you unzipped the EC2 tools. You want the directory that contains the bin subdirectory.

JAVA_HOME
Where the current JRE is installed. In my case that’s c:\Program Files (x86)\Java\Jre7.

EC2_PRIVATE_KEY
The full path to the private-key file you downloaded earlier.

EC2_CERT
The full path to the X.509 certificate you also saved.

ACCESS_KEY_ID
The value of the Access Key ID you saved earlier.

SECRET_ACCESS_KEY
The value of the Secret Access Key you saved earlier.

Path
Append the EC2 tools’ bin directory to your existing Path value, for example: c:\Windows System;c:\someplace else;%EC2_HOME%\bin
If you’re running on Mac OS X (as I am) or on Linux, you probably just want to create a simple shell script that exports these variables, or you can define them in a well-known place for whichever shell you use—in my case .bash-profile, because my native shell is Bash.
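Such a script might look like the following sketch. EC2_HOME, JAVA_HOME, EC2_PRIVATE_KEY, and EC2_CERT are the variable names the tools expect, while ACCESS_KEY_ID and SECRET_ACCESS_KEY are the two referenced later in this chapter; every path and key value here is a placeholder to replace with your own:

```shell
#!/usr/bin/env bash
# ec2-env.sh -- environment for the EC2 API Command Line Tools.
# All paths and key values below are placeholders, not working credentials.
export EC2_HOME="$HOME/ec2-api-tools"            # the directory containing bin/
export JAVA_HOME="/usr/lib/jvm/default-java"     # wherever your JRE lives
export EC2_PRIVATE_KEY="$HOME/.aws/pk-XXXXXXXX.pem"
export EC2_CERT="$HOME/.aws/cert-XXXXXXXX.pem"
export ACCESS_KEY_ID="AKIA-EXAMPLE"
export SECRET_ACCESS_KEY="example-secret-key"
export PATH="$PATH:$EC2_HOME/bin"
```

Source it with `. ~/ec2-env.sh` (or add that line to your startup file) so the variables are available in every new session.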
Now that your tools are downloaded and configured, it’s time to have some fun!
Since the point of this chapter is to teach you how to upload your own VMs as instances in your VPC, you should first start with a test image.
Although the EC2 service supports both Linux and Windows Server instances, at the time of this writing you can import only images built on Windows Server 2008 and 2008 R2 through the command-line tools. It’s a bummer, I know, but I’m sure Amazon will get around to rectifying it in the near future. They tend to be pretty good at that stuff.
Not all of you will already have a handy VHD or VMDK to test with; if you don’t, you can go get one from our friends at Microsoft:
You may need to sign in with a valid Microsoft Live account. Get one if you don’t have one.
Download all the files from the page.
Run the executable once all the files are downloaded.
The VHD image is actually wrapped in a self-extracting RAR archive.
If you’re doing this on a Mac, don’t worry that one of the files is a Windows executable. Just grab a copy of the free version of the Stuffit Expander utility and select the .exe file. It will expand just fine from there. Alternatively, there’s always the great open source Rar Expander.
My machine extracted the VHD to ExchangeDemos\SLC-DC01\Virtual Hard Disks.
Change directory into the extraction directory.
The command you’re interested in is ec2-import-instance. A shortened version of its syntax and options appears in Table 4-1.
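Paraphrasing the tool’s own usage text, the general shape of a call looks roughly like this (a sketch, not the complete synopsis; brackets mark optional arguments):

```
ec2-import-instance DISK_IMAGE_FILENAME -f format -a architecture
    --bucket s3_bucket -o owner_access_key_id -w owner_secret_access_key
    [-t instance_type] [-z availability_zone] [--subnet subnet_id]
```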
Table 4-1. ec2-import-instance options

-t, --instance-type instance_type
Specifies the type of instance to be launched.

-g, --group group
The security group within which the instances should be run. Determines the ingress firewall rules that are applied to the launched instances. Only one security group is supported for an instance.
Default: Your default security group

-f, --format file_format
The file format of the disk image.

-a, --architecture architecture
The architecture of the image.
Condition: Required if instance type is specified; otherwise defaults to i386.

-b, --bucket s3_bucket_name
The Amazon S3 destination bucket for the manifest.

-o, --owner-akid access_key_id
Access key ID of the bucket owner.

-w, --owner-sak secret_access_key
Secret access key of the bucket owner.

--prefix prefix
Prefix for the manifest file and disk image file parts within the Amazon S3 bucket.

--manifest-url url
The URL for an existing import manifest file already uploaded to Amazon S3.
Default: None. This option cannot be specified if a disk image is also being uploaded.

-s, --volume-size volume_size
The size of the Amazon Elastic Block Store volume, in gibibytes (2^30 bytes), that will hold the converted image. If not specified, EC2 calculates the value using the disk image file.

-z, --availability-zone zone
The Availability Zone for the converted VM.

-d, --description description
An optional, free-form comment returned verbatim during subsequent calls to ec2-describe-conversion-tasks.
Constraint: Maximum length of 255 characters

--user-data user_data
User data to be made available to the imported instance.

--user-data-file file_name
The file containing user data made available to the imported instance.

--subnet subnet_id
If you’re using Amazon Virtual Private Cloud, this specifies the ID of the subnet into which you’re launching the instance.

--private-ip-address ip_address
If you’re using Amazon Virtual Private Cloud, this specifies the specific IP address within the subnet to assign to the instance.

--monitor
Enables monitoring of the specified instance(s).

--instance-initiated-shutdown-behavior behavior
If an instance shutdown is initiated, this determines whether the instance stops or terminates.

-x, --expires days
Validity period for the signed Amazon S3 URLs that allow EC2 to access the manifest.
Default: 30 days

--ignore-region-affinity
Ignores the verification check to determine that the bucket’s Amazon S3 Region matches the EC2 Region where the conversion task is created.

--dry-run
Does not create an import task; only validates that the disk image matches a known type.

--no-upload
Does not upload a disk image to Amazon S3; only creates an import task. To complete the import task and upload the disk image, use ec2-resume-import.

--dont-verify-format
Does not verify the file format. We don’t recommend this option because it can result in a failed conversion.
You don’t need all of these options, of course. Since you want to launch your new instance inside of your existing VPC, your command will take the form of:

ec2-import-instance -o %ACCESS_KEY_ID% -w %SECRET_ACCESS_KEY% -f VHD -a x86_64 --bucket your_bucket -z your_availability_zone --subnet your_subnet_id your_image.vhd
In my particular case, my VPC is in availability zone us-east-1a and my subnet ID is subnet-8aafcce2. I also created an S3 bucket named dkr_imports as temporary storage for my import jobs.
With all this in mind, my command will be:
ec2-import-instance -o %ACCESS_KEY_ID% -w %SECRET_ACCESS_KEY% -f VHD -a x86_64 --bucket dkr_imports -z us-east-1a --subnet subnet-8aafcce2 SLC-DC01.vhd
You need a valid S3 bucket for this process because that’s where Amazon stores your uploaded image while it converts it into an EC2 instance. No worries if you don’t already have one created. As long as you specify a name, the command will create a bucket with that name for you.
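Because the upload itself can take a long time, it can be worth a validation pass with --dry-run first (see Table 4-1), which checks that the image is a known type without creating a task. One way to keep the validation and the real run in sync, sketched here with the same example bucket, zone, subnet, and filename used in this chapter, is to build the command once and print it for review before pasting it into your shell:

```shell
#!/usr/bin/env bash
# Print the import invocation (plus any extra flags) instead of running it,
# so the dry-run and the real run can't drift apart. All IDs are examples.
import_cmd() {
  echo ec2-import-instance -o "$ACCESS_KEY_ID" -w "$SECRET_ACCESS_KEY" \
       -f VHD -a x86_64 --bucket dkr_imports \
       -z us-east-1a --subnet subnet-8aafcce2 "$@" SLC-DC01.vhd
}

import_cmd --dry-run   # run this output first: validates the image only
import_cmd             # then the real upload
```

This is purely a convenience wrapper; typing the two commands by hand works just as well.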
Done. Average speed was 4.852 MBps. The disk image for import-i-fgspm29j has been uploaded to Amazon S3 where it is being converted into an EC2 instance. You may monitor the progress of this task by running ec2-describe-conversion-tasks. When the task is completed, you may use ec2-delete-disk-image to remove the image from S3.
The parts highlighted in bold are the important bits of this message. First, it has given you the name of the import task: import-i-fgspm29j. Second, it has told you that you can check on the status of your import by running ec2-describe-conversion-tasks; finally, it conveys that you can delete your intermediate file from your S3 bucket using the ec2-delete-disk-image command.
As you may have inferred from these messages, the import process is actually three separate processes:

Upload
This is the step you just completed, where your VHD was uploaded to Amazon for conversion into a runnable instance in your VPC.

Conversion
Once the image is uploaded, Amazon needs to convert the image to an intermediate format it uses to populate an EBS volume that it will attach to your new instance.

Instance creation
The final stage of the process is where Amazon creates an instance and attaches the new EBS volume to it as its root storage device.
If you run ec2-describe-conversion-tasks on task import-i-fgspm29j, you might see something like:
I’ve highlighted some important parts. The first thing to notice is that you’re getting two statuses with this command. The higher one in the output is the instance creation status, while the lower one is the conversion status. Since I ran this command immediately after the upload, you can see that both tasks are still pending.
If I wait a little while and run the command again, I get the following:
Notice here that the creation status is still pending while the conversion status is at 50 percent.
Next, I wait 30 more minutes and get the following:
The instance creation process is at 19 percent and conversion process is marked as completed.
In general, expect to wait at least an hour after your upload for the instance to be created and online.
For those of you running on either Linux or Mac OS X, you can check on the status of your import every 5 minutes by running this handy command:
while true; do ec2dct import-i-fgspm29j | grep StatusMessage | cut -f 7-10; sleep 300; clear; done
Of course you need to substitute your import task ID for mine, but you get the point.
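If you’d rather have the loop stop on its own once the task is done, here is a small sketch of the same idea as a function. The describe command is passed in as a parameter purely to make the loop easy to test; in real use you’d call it as wait_for_conversion ec2-describe-conversion-tasks followed by your own task ID:

```shell
#!/usr/bin/env bash
# Poll a conversion task until its status output no longer mentions
# "pending" (matched case-insensitively), then report completion.
# $1 = describe command, $2 = task ID, $3 = seconds between polls (default 300).
wait_for_conversion() {
  local describe=$1 task=$2 interval=${3:-300}
  while "$describe" "$task" | grep -qi pending; do
    sleep "$interval"
  done
  echo "task $task finished"
}
```

This sketch assumes the word “pending” disappears from the describe output once both the conversion and the instance creation are complete, which matches the status messages shown above.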
Eventually, the process will complete and you will be able to start your new instance from the EC2 instance list in the AWS Management Console.
This instance we’ve uploaded happens to be an entire IT infrastructure on one machine: domain controller, Active Directory, and Exchange server. If you really wanted to get up and running fast, you could just import this image as a regular EC2 instance and call it a day, but you want more than this single box provides. (And doing so would make this book wicked short!)
Please note that when this instance starts you will be able to RDP in to it from your gateway machine, but you’ll need special credentials to get in. More on that in a minute.
While you’re waiting for your import to finish, you should go over some dos and don’ts as they relate to importing an instance into AWS.
Amazon has a very detailed guide on how to prepare your existing VMs (Citrix, VMWare, and Microsoft Hyper-V). Read it twice. It will shortcut lots of potential problems. In the meantime, here are some highlights:
If RDP is not already enabled on the image you are importing, you will have no way to connect to it once it’s an EC2 instance.
There’s no point enabling RDP if the firewall won’t allow public IP addresses to use it.
You can’t RDP into a Windows Server machine if autologon is enabled, because there’s no username/password prompting.
When AWS first boots the new instance, the very last thing you want to happen is for it to start into an update cycle.
While you’re waiting for your import and conversion task to complete, I’d like to specifically thank Peter Beckman from the EC2 Import/Export team at Amazon Web Services. My first couple of attempts to do this failed for some subtle reasons, and he was super helpful while I was troubleshooting. If you get a chance, drop him a shout-out and tell him “thanks” for helping your humble author!
If everything goes according to plan, you should see this:
When you go back to your AWS console and look at your EC2 instances, you should see one like this:
The instance has no label because you didn’t give it one during the import. You could have, but you already had a bunch of command-line options to manage. Go ahead and click the field and give it a meaningful name—maybe something like The Instance I’m Going to Delete in 2 Minutes. Then, right-click and start it up.
Now you should verify that everything is working OK.
Find the internal IP address of the instance from its details page. In my case it’s 10.0.0.31.
Connect to the gateway via your VPN.
Using your RDP client, connect to the gateway machine you created.
C:\Users\Administrator>ping 10.0.0.31

Pinging 10.0.0.31 with 32 bytes of data:
Reply from 10.0.0.31: bytes=32 time=7ms TTL=128
Reply from 10.0.0.31: bytes=32 time<1ms TTL=128
Reply from 10.0.0.31: bytes=32 time<1ms TTL=128
Reply from 10.0.0.31: bytes=32 time=43ms TTL=128

Ping statistics for 10.0.0.31:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 43ms, Average = 12ms
mstsc /v:10.0.0.31 /console /admin
Again, use your IP address.
This particular image from Microsoft is already its own domain. You will need the following credentials to log in to it:
Accept the certificate if asked.
If all goes well, you will have successfully connected to your newly imported instance. Congratulations!
Go ahead and poke around the instance for a bit if you like, but eventually you’re going to have to terminate it. You don’t need it and it’s not a good idea to have two different domains and controllers on the same subnet.
Once you’ve had your fun with your new instance, it’s time to clean up. You need to do two things:
Clean up the temporary import files created for you in S3.
Terminate the instance.
ec2-delete-disk-image -t import-i-ffsicbou -o %ACCESS_KEY_ID% -w %SECRET_ACCESS_KEY%
0% |--------------------------------------------------| 100%
   |==================================================| Done
This command deletes all temporary files associated with the import task specified after the -t option.
The final thing to do is to terminate the instance by right-clicking it in the AWS console and selecting Terminate.
So ... what have you learned?
If you already have Windows Server-based virtual machines running in your existing infrastructure, you can easily and securely import them into the AWS cloud. This is very handy when migrating from a traditional on-site infrastructure to a cloud-based architecture.
As it happens, you can also do this process in reverse. You can create and configure an instance in the AWS cloud and then export it as a VM to use on a physical machine—maybe a demo laptop, for example.
Of course, we’ve only scratched the surface of what the EC2 command-line tools can do. You can also, for instance, upload a raw disk image and attach it to an already configured instance as another disk drive—and that’s just the beginning. But this is a book about IT virtualization, not scripting or programming, and that subject alone could go a solid hundred pages.