Merge lp:~james-w/pkgme-service/fab-deploy-tasks into lp:pkgme-service
Status: Merged
Approved by: Jonathan Lange
Approved revision: 32
Merged at revision: 26
Proposed branch: lp:~james-w/pkgme-service/fab-deploy-tasks
Merge into: lp:pkgme-service
Prerequisite: lp:~james-w/pkgme-service/puppet
Diff against target: 296 lines (+285/-0), 2 files modified:
  fabtasks/__init__.py (+1/-0)
  fabtasks/deploy.py (+284/-0)
To merge this branch: bzr merge lp:~james-w/pkgme-service/fab-deploy-tasks
Related bugs: none
Reviewer: Jonathan Lange (community), status: Approve
Review via email: mp+89165@code.launchpad.net
This proposal supersedes a proposal from 2012-01-18.
Commit message
Description of the change
Hi,
This adds some fab tasks to help with spinning up dev instances.
There are no tests, unfortunately.
There's a command to deploy to an existing instance, and one to spin
up an ec2 instance and deploy to it (like Launchpad's "ec2 demo").
There are also a couple of commands to help with managing the ec2 instances.
The prerequisites are rather convoluted, but that seems inevitable with ec2,
unfortunately; they are documented as best I know how at this time.
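One of those prerequisites is the ~/.ec2/aws_id credentials file the tasks read (access key id on the first line, secret access key on the second). A minimal standalone reader for that layout can be sketched as follows; the function name `read_aws_credentials` is illustrative, not part of the branch:

```python
import os


def read_aws_credentials(path="~/.ec2/aws_id"):
    """Read AWS credentials from a two-line file.

    The layout matches the branch's convention: the "Access Key ID" on
    the first line and the "Secret Access Key" on the second, with
    nothing else on either line.

    :returns: a tuple of (access key id, secret access key)
    """
    with open(os.path.expanduser(path)) as f:
        lines = [line.strip() for line in f]
    if len(lines) < 2 or not lines[0] or not lines[1]:
        raise AssertionError("Malformed credentials file: %s" % path)
    return lines[0], lines[1]
```

Reading both lines up front and validating them together avoids the two separate "missing key" checks the branch performs.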
Thanks,
James
Jonathan Lange (jml) wrote:
I consistently get this error:
$ fab deploy_to_ec2
Waiting for instance i-3c7d7f5e to start...
Instance started as ec2-23-
Waiting for ssh to come up
[<email address hidden>:22] run: sudo add-apt-repository ppa:canonical-
Fatal error: Timed out trying to connect to ec2-23-
Aborting.
James Westby (james-w) wrote:
On Thu, 19 Jan 2012 10:13:31 -0000, Jonathan Lange <email address hidden> wrote:
> It's a bit disappointing that there's not a Python library that provides some of the stuff that you hand-roll in the first few functions.
Indeed; I don't think that boto has a standard location it looks in, or
knows about the Ubuntu-specific script I use. I don't think it's worth
turning it into a library at this point, though.
> Any ideas on how we can prevent this from sprawling into a project the size of Launchpad's ec2test?
I wouldn't mind something the size of Launchpad's ec2test if it didn't
live in this project, or at least the parts that aren't specific to
pkgme-service.
Probably we'll abandon it all in favour of juju in a few months though.
> Also, when can we target this at a local virtual machine / chroot / LXC thing?
If you have one up, deploy_to_existing should work OK (it could do with
handling a non-fresh install a little better if you want to re-use an
instance).
We could add lxc support without too much work, I expect, but I haven't
played with lxc directly yet to know what needs doing.
Thanks,
James
- 29. By James Westby: Merge puppet improvements.
- 30. By James Westby: Merge puppet fixes.
- 31. By James Westby: Set the security group of the created instance so it is accessible.
James Westby (james-w) wrote:
On Thu, 19 Jan 2012 10:59:59 -0000, Jonathan Lange <email address hidden> wrote:
> I consistently get this error:
>
> $ fab deploy_to_ec2
> Waiting for instance i-3c7d7f5e to start...
> Instance started as ec2-23-
> Waiting for ssh to come up
> [<email address hidden>:22] run: sudo add-apt-repository ppa:canonical-
>
> Fatal error: Timed out trying to connect to ec2-23-
>
> Aborting.
Hi,
This should now work with the latest changes pushed, as it now creates a
security group for the instance that gives the world access to ports 22,
80 and 443. (I didn't think that locking it down to specific IP addresses
gained us much at all.)
Thanks,
James
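The fix described above, a per-instance security group opening ssh, http and https to the world, can be sketched against any object offering boto's `create_security_group(name, description)` interface. The helper name and the stub connection below are illustrative, not part of the branch:

```python
ALL_HOSTS = "0.0.0.0/0"
OPEN_PORTS = (22, 80, 443)


def make_open_security_group(conn, name):
    """Create a security group opening ssh/http/https to the world.

    `conn` needs only a boto-style create_security_group(name, desc)
    method returning a group object with an authorize() method.
    """
    group = conn.create_security_group(
        name, "Access to the fab pkgme-service ec2 deployment %s" % name)
    for port in OPEN_PORTS:
        # from_port == to_port authorizes a single port for ALL_HOSTS
        group.authorize('tcp', port, port, ALL_HOSTS)
    return group
```

Creating one group per instance (named after the instance, as the branch does) lets teardown delete the group along with the instance.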
- 32. By James Westby: Delete the security group before the instance.
Jonathan Lange (jml) wrote:
On Thu, Jan 19, 2012 at 3:06 PM, James Westby <email address hidden> wrote:
> On Thu, 19 Jan 2012 10:59:59 -0000, Jonathan Lange <email address hidden> wrote:
>> I consistently get this error:
>>
>> $ fab deploy_to_ec2
>> Waiting for instance i-3c7d7f5e to start...
>> Instance started as ec2-23-
>> Waiting for ssh to come up
>> [<email address hidden>:22] run: sudo add-apt-repository ppa:canonical-
>>
>> Fatal error: Timed out trying to connect to ec2-23-
>>
>> Aborting.
>
> Hi,
>
> This should now work with the latest changes pushed, as it now creates a
> security group for the instance that gives the world access to 22,80,443
> (I didn't think that locking it down to specific IP addresses gained us
> much at all.)
>
Works now. Thanks.
jml
Preview Diff
=== modified file 'fabtasks/__init__.py'
--- fabtasks/__init__.py	2011-08-29 23:50:07 +0000
+++ fabtasks/__init__.py	2012-01-19 18:09:25 +0000
@@ -1,2 +1,3 @@
 from .bootstrap import *
+from .deploy import *
 from .django import *

=== added file 'fabtasks/deploy.py'
--- fabtasks/deploy.py	1970-01-01 00:00:00 +0000
+++ fabtasks/deploy.py	2012-01-19 18:09:25 +0000
@@ -0,0 +1,284 @@
+from datetime import datetime
+import os
+import subprocess
+import time
+
+from fabric.api import env, run
+from fabric.utils import abort, puts
+from fabric.operations import open_shell as _open_shell
+
+from boto import ec2
+
+
+RELEASE = 'lucid'
+ARCH = 'amd64'
+FS_TYPE = 'ebs'
+INSTANCE_TYPE = 't1.micro'
+USERNAME = 'ubuntu'
+REGION_NAME = 'us-east-1'
+
+NAME_KEY = "name"
+NAME_PREFIX = "fab/pkgme-service"
+
+ALL_HOSTS = "0.0.0.0/0"
+
+
+def open_shell():
+    """Open a shell on the remote host."""
+    _open_shell()
+
+
+def _get_aws_credentials():
+    """Get the AWS credentials for the user from ~/.ec2/aws_id.
+
+    :returns: a tuple of access key id, secret access key
+    """
+    key_id = None
+    secret_access_key = None
+    with open(os.path.expanduser("~/.ec2/aws_id")) as f:
+        for i, line in enumerate(f.readlines()):
+            if i == 0:
+                key_id = line.strip()
+            if i == 1:
+                secret_access_key = line.strip()
+    if key_id is None:
+        raise AssertionError("Missing key id in ~/.ec2/aws_id")
+    if secret_access_key is None:
+        raise AssertionError("Missing secret access key in ~/.ec2/aws_id")
+    return key_id, secret_access_key
+
+
+def _get_ami_id(region_name):
+    """Get the AMI to use for a particular region.
+
+    This consults the ubuntu-cloudimg-query tool to find out the best
+    AMI to use for a particular region.
+
+    :returns: the ami id (as a string)
+    """
+    proc = subprocess.Popen(
+        ['ubuntu-cloudimg-query', RELEASE, ARCH, FS_TYPE, region_name],
+        stdout=subprocess.PIPE)
+    stdout, _ = proc.communicate()
+    if proc.returncode != 0:
+        raise AssertionError("calling ubuntu-cloudimg-query failed")
+    return stdout.strip()
+
+
+def _get_security_group(conn, name):
+    security_group = conn.create_security_group(name,
+        "Access to the fab pkgme-service ec2 deployment %s" % name)
+    security_group.authorize('tcp', 22, 22, ALL_HOSTS)
+    security_group.authorize('tcp', 80, 80, ALL_HOSTS)
+    security_group.authorize('tcp', 443, 443, ALL_HOSTS)
+    return security_group
+
+
+def _new_ec2_instance(keypair):
+    """Starts a new ec2 instance, giving the specified keypair access.
+
+    This will use the AWS credentials from ~/.ec2/aws_id, and the
+    AMI recommended by ubuntu-cloudimg-query.
+
+    :returns: the boto Instance object of the launched instance. It
+        will be in a state where it can be accessed over ssh.
+    """
+    key_id, secret_access_key = _get_aws_credentials()
+    ami_id = _get_ami_id(REGION_NAME)
+    conn = ec2.connect_to_region(REGION_NAME, aws_access_key_id=key_id,
+        aws_secret_access_key=secret_access_key)
+    image = conn.get_image(ami_id)
+    now = datetime.utcnow()
+    name = "%s/%s" % (NAME_PREFIX, now.isoformat())
+    security_group = _get_security_group(conn, name)
+    reservation = image.run(instance_type=INSTANCE_TYPE, key_name=keypair,
+        security_groups=[security_group.name])
+    instance = reservation.instances[0]
+    puts("Waiting for instance %s to start..." % instance.id)
+    while True:
+        time.sleep(10)
+        instance.update()
+        if instance.state != 'pending':
+            break
+    if instance.state != 'running':
+        raise AssertionError("Instance failed to start")
+    puts("Instance started as %s" % instance.dns_name)
+    puts("Waiting for ssh to come up")
+    # FIXME: use something better than a sleep to determine this.
+    time.sleep(30)
+    instance.add_tag(NAME_KEY, name)
+    return instance
+
+
+def deploy_to_ec2(branch="lp:pkgme-service", use_staging_deps=True, pkgme_branch="lp:pkgme", pkgme_binary_branch="lp:pkgme-binary", keypair='ec2-keypair'):
+    """Deploy to a new ec2 instance.
+
+    This command will spin up an ec2 instance, and deploy to it.
+
+    To run this command you must first set up an ec2 account.
+
+        http://aws.amazon.com/ec2/
+
+    Once you have that create access keys for this to use. Go to
+
+        https://aws-portal.amazon.com/gp/aws/developer/account?ie=UTF8&action=access-key
+
+    and create an access key. Then create a file at ~/.ec2/aws_id and
+    put the "Access Key ID" on the first line, and the "Secret Access Key" on
+    the second line (with nothing else on either line.)
+
+    Next you will need an ec2 keypair. Go to the EC2 console at
+
+        https://console.aws.amazon.com/ec2/home?region=us-east-1
+
+    click on "Key Pairs" and then "Create Key Pair", name the keypair "ec2-keypair".
+    Save the resulting file as
+
+        ~/.ec2/ec2-keypair.pem
+
+    Now you are ready to deploy, so run
+
+        fab deploy_to_ec2:keypair=ec2-keypair -i ~/.ec2/ec2-keypair.pem
+
+    and wait.
+
+    Note that you will be responsible for terminating the instance after
+    use (see the destroy_ec2_instances command.)
+
+    You can also specify the following arguments:
+
+      * branch: the branch to deploy, defaults to lp:pkgme-service. To
+        test some in-progress work push to lp:~you/pkgme-service/something
+        and then specify that as the branch.
+
+      * pkgme_branch: the branch of pkgme to deploy, defaults to
+        lp:pkgme.
+
+      * pkgme_binary_branch: the branch of pkgme-binary to deploy,
+        defaults to lp:pkgme-binary.
+
+      * use_staging_deps: whether to use the staging PPA for
+        dependencies, as well as the production one, defaults to True.
+
+    Arguments are all specified by attaching them to the command name, e.g.
+
+        fab deploy_to_ec2:keypair=ec2-keypair,branch=lp:~me/pkgme-service/something
+    """
+    instance = _new_ec2_instance(keypair)
+    _set_instance_as_host(instance)
+    deploy_to_existing(branch=branch, use_staging_deps=use_staging_deps, pkgme_branch=pkgme_branch, pkgme_binary_branch=pkgme_binary_branch)
+
+
+def _set_instance_as_host(instance):
+    """Set the host to be acted on by fab to this instance."""
+    env.host_string = "%s@%s:22" % (USERNAME, instance.dns_name)
+
+
+def deploy_to_existing(branch="lp:pkgme-service", use_staging_deps=True, pkgme_branch="lp:pkgme", pkgme_binary_branch="lp:pkgme-binary"):
+    """Deploy to an existing instance.
+
+    This command will deploy to an existing instance. Don't use it on an
+    instance you care about as it may overwrite anything.
+
+    To specify the host to deploy to use the -H option of fab:
+
+        fab -H ubuntu@<host> deploy_to_existing
+
+    You can also specify the following arguments:
+
+      * branch: the branch to deploy, defaults to lp:pkgme-service. To
+        test some in-progress work push to lp:~you/pkgme-service/something
+        and then specify that as the branch.
+
+      * pkgme_branch: the branch of pkgme to deploy, defaults to
+        lp:pkgme.
+
+      * pkgme_binary_branch: the branch of pkgme-binary to deploy,
+        defaults to lp:pkgme-binary.
+
+      * use_staging_deps: whether to use the staging PPA for
+        dependencies, as well as the production one, defaults to True.
+
+    Arguments are all specified by attaching them to the command name, e.g.
+
+        fab deploy_to_ec2:keypair=ec2-keypair,branch=lp:~me/pkgme-service/something
+    """
+    # Get the puppet used by IS
+    run('sudo add-apt-repository ppa:canonical-sysadmins/puppet')
+    # Add our dependency PPA
+    run('sudo add-apt-repository ppa:canonical-ca-hackers/production')
+    if use_staging_deps:
+        # Add our dependency staging PPA
+        run('sudo add-apt-repository ppa:canonical-ca-hackers/staging')
+    run('sudo apt-get update -q')
+    # Upgrade the base system in case we are shipping any updates in
+    # our PPAs
+    run('sudo apt-get dist-upgrade -q -y --force-yes')
+    # Avoid a debconf note when installing rabbitmq-server on lucid
+    run('echo "rabbitmq-server rabbitmq-server/upgrade_previous note" | sudo debconf-set-selections')
+    # Install the dependencies needed to get puppet going
+    # TODO: move the rest of the dependencies to puppet
+    run('sudo apt-get install -q -y --force-yes pkgme-service-dependencies bzr apache2 libapache2-mod-wsgi rabbitmq-server postgresql-8.4 puppet')
+    # Grab the branches
+    # TODO: investigate re-using IS' config-manager config
+    run('bzr branch -q %s pkgme-service' % branch)
+    run('bzr branch -q %s pkgme-service/sourcecode/pkgme' % pkgme_branch)
+    run('bzr branch -q %s pkgme-service/sourcecode/pkgme-binary' % pkgme_binary_branch)
+    run('cd pkgme-service/sourcecode/pkgme && python setup.py build')
+    run('cd pkgme-service/sourcecode/pkgme-binary && python setup.py build')
+    # Grab canonical-memento and use it?
+    # Run puppet to set everything else up.
+    run('./pkgme-service/dev_config/apply')
+
+
+def _connect_to_ec2():
+    """Get a connection to ec2, using the credentials on disk."""
+    key_id, secret_access_key = _get_aws_credentials()
+    conn = ec2.connect_to_region(REGION_NAME, aws_access_key_id=key_id,
+        aws_secret_access_key=secret_access_key)
+    return conn
+
+
+def _get_started_ec2_instances(conn):
+    """Get the ec2 instances started from here."""
+    for reservation in conn.get_all_instances():
+        for instance in reservation.instances:
+            if instance.state == 'running':
+                tags = getattr(instance, "tags", {})
+                name = tags.get(NAME_KEY, None)
+                if name and name.startswith(NAME_PREFIX):
+                    yield instance
+
+
+def destroy_ec2_instances():
+    """Destroy any ec2 instances created by this deployment."""
+    conn = _connect_to_ec2()
+    for instance in _get_started_ec2_instances(conn):
+        puts("Stopping %s" % instance.id)
+        conn.delete_security_group(name=instance.tags[NAME_KEY])
+        instance.terminate()
+
+
+def last_ec2_launched():
+    """Cause other commands to act on the last ec2 instance launched from here.
+
+    Use this in a list of commands to have the rest of the commands act on the
+    last ec2 instance launched with deploy_to_ec2, e.g.
+
+        fab -i ~/.ec2/ec2-keypair.pem last_ec2_launched -- ls
+
+    This will then run ls on that instance.
+    """
+    def get_date(instance):
+        return instance.tags[NAME_KEY][len(NAME_PREFIX):]
+    last_instance = None
+    conn = _connect_to_ec2()
+    for instance in _get_started_ec2_instances(conn):
+        if last_instance is None:
+            last_instance = instance
+        else:
+            if get_date(instance) > get_date(last_instance):
+                last_instance = instance
+    if last_instance is None:
+        abort("No instances found.")
+    _set_instance_as_host(last_instance)
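The selection in last_ec2_launched works because each instance's name tag ends in an ISO-8601 timestamp, which sorts lexicographically in date order. The comparison can be exercised standalone with stub instances; the helper and stub names here are illustrative:

```python
NAME_KEY = "name"
NAME_PREFIX = "fab/pkgme-service"


def pick_last_launched(instances):
    """Return the instance whose name tag has the latest timestamp.

    The tag value is NAME_PREFIX followed by "/" and an ISO-8601
    timestamp, so comparing the suffix as a string compares dates.
    """
    def get_date(instance):
        return instance.tags[NAME_KEY][len(NAME_PREFIX):]
    last_instance = None
    for instance in instances:
        if last_instance is None or get_date(instance) > get_date(last_instance):
            last_instance = instance
    return last_instance
```

Unlike the fab task, this sketch returns None rather than aborting when no instances match, so it can be tested without Fabric.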
Jonathan Lange (jml) wrote:
This all looks OK to me.
It's a bit disappointing that there's not a Python library that provides some of the stuff that you hand-roll in the first few functions.
Any ideas on how we can prevent this from sprawling into a project the size of Launchpad's ec2test?
Also, when can we target this at a local virtual machine / chroot / LXC thing?