Merge ~powersj/cloud-init:cii-enable-ec2 into cloud-init:master
Status: | Merged |
---|---|
Approved by: | Scott Moser |
Approved revision: | 0ccf7852289ff0274feb8c7b9d0920ddfefa7ad2 |
Merged at revision: | 34595e9b4abacc10ac599aad97c95861af34ea54 |
Proposed branch: | ~powersj/cloud-init:cii-enable-ec2 |
Merge into: | cloud-init:master |
Diff against target: 1211 lines (+784/-95), 17 files modified:
- .pylintrc (+1/-1)
- doc/rtd/topics/tests.rst (+32/-6)
- tests/cloud_tests/collect.py (+12/-7)
- tests/cloud_tests/platforms.yaml (+6/-5)
- tests/cloud_tests/platforms/__init__.py (+2/-0)
- tests/cloud_tests/platforms/ec2/image.py (+109/-0)
- tests/cloud_tests/platforms/ec2/instance.py (+126/-0)
- tests/cloud_tests/platforms/ec2/platform.py (+231/-0)
- tests/cloud_tests/platforms/ec2/snapshot.py (+66/-0)
- tests/cloud_tests/platforms/instances.py (+69/-1)
- tests/cloud_tests/platforms/nocloudkvm/instance.py (+35/-53)
- tests/cloud_tests/platforms/nocloudkvm/platform.py (+4/-0)
- tests/cloud_tests/platforms/platforms.py (+69/-0)
- tests/cloud_tests/releases.yaml (+6/-2)
- tests/cloud_tests/setup_image.py (+0/-18)
- tests/cloud_tests/util.py (+15/-2)
- tox.ini (+1/-0)
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Scott Moser | Approve | ||
Server Team CI bot | continuous-integration | Approve | |
Chad Smith | Approve | ||
Review via email: mp+335186@code.launchpad.net |
Commit message
tests: Enable AWS EC2 Integration Testing
This enables integration tests to utilize AWS EC2 as a testing platform by
utilizing the boto3 Python library.
Usage will create and delete a custom VPC for every run. All resources will
be tagged with the ec2 tag, 'cii', and the date (e.g. cii-20171220-
The VPC is set up with both IPv4 and IPv6 capabilities, but will only hand out
IPv4 addresses by default. Instances will have complete Internet access and
have full ingress and egress access (i.e. no firewall).
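The "no firewall" behavior boils down to a single permissive security-group rule. A minimal boto3 sketch (the function name and wiring are illustrative, not the branch's actual code):

```python
def open_all_traffic(security_group, cidr_v4='0.0.0.0/0', cidr_v6='::/0'):
    """Allow all ingress traffic for IPv4 and IPv6 on a security group.

    Egress on a newly created security group already allows everything,
    so only ingress needs an explicit rule.
    """
    security_group.authorize_ingress(IpPermissions=[{
        'IpProtocol': '-1',  # -1 means all protocols, all ports
        'IpRanges': [{'CidrIp': cidr_v4}],
        'Ipv6Ranges': [{'CidrIpv6': cidr_v6}],
    }])
```

`security_group` here would be a boto3 `ec2.SecurityGroup` resource belonging to the run's VPC.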
SSH keys are generated with each run of the integration tests with the
key getting uploaded to AWS at the start of tests and deleted on exit.
To enable key creation when the platform is set up, the SSH generation code
was moved from image setup into platform setup. The nocloud-kvm platform
was updated with this change.
Creating a custom image will utilize the same clean script,
boot_clean_script, that the LXD platform uses as well. The custom AMI
is generated, used, and de-registered after a test run.
The default instance type is set to t2.micro. This is one of the smallest instance types and is free tier eligible.
The default timeout for ec2 was increased to 300 from 120 as many tests
hit up against the 2 minute timeout and depending on region load can
go over.
Documentation for the AWS platform was added with the expected
configuration files for the platform to be used. There are some
additional whitespace changes included as well.
A pylint exception was added for paramiko and simplestreams. These were not
previously flagged because the subdirectories of the files that use them had
no __init__.py. boto3 was added to the list of dependencies in the tox
ci-test runner.
Description of the change
All Tests on Bionic + Xenial with image version of cloud-init
---
ec2: https:/
ec2: https:/
lxd: https:/
nocloud-kvm: busted daily image
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform ec2 --preserve-data --data-dir results_ec2 --verbose
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform lxd --preserve-data --data-dir results_lxd --verbose
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform nocloud-kvm --preserve-data --data-dir results_nocloud_kvm --verbose
All Tests on Bionic + Xenial with Bionic deb of cloud-init
---
ec2: https:/
lxd: https:/
nocloud-kvm: https:/
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform ec2 --preserve-data --data-dir results_ec2 --verbose --deb cloud-init_
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform lxd --preserve-data --data-dir results_lxd --verbose --deb cloud-init_
$ python3 -m tests.cloud_tests run --os-name bionic --os-name xenial --platform nocloud-kvm --preserve-data --data-dir results_nocloud_kvm --verbose --deb cloud-init_
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:ec97516df65
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
i have not gotten all the way through, but some things in line.
thanks Josh!
Ryan Harper (raharper) wrote : | # |
"AWS at the start of tests and deleted on exit."
I think the keys will need to always be deleted no matter whether we finish; unless we specifically have some flag the user set to keep keys.
I don't think we want automatic runs of this to fail at any stage after injecting keys and not have it ensure it removes the keys.
Something to the effect of a trap_on_
which removes the keys unless the flag to keep them is present on the command line.
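Ryan's suggestion amounts to wrapping the run so key deletion happens on every exit path, not just a clean finish. A hedged sketch of that shape (names are illustrative; the branch may structure this differently):

```python
def run_with_ephemeral_key(ec2_client, key_name, public_key, test_run,
                           keep_key=False):
    """Upload an SSH key, run the tests, and always remove the key.

    The finally block guarantees deletion even when test_run raises,
    unless the caller explicitly asked to keep the key.
    """
    ec2_client.import_key_pair(KeyName=key_name,
                               PublicKeyMaterial=public_key)
    try:
        return test_run()
    finally:
        if not keep_key:
            ec2_client.delete_key_pair(KeyName=key_name)
```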
Ryan Harper (raharper) wrote : | # |
Some in-line comments and questions.
Ryan Harper (raharper) wrote : | # |
Replaced my pop() with next() in the in-line comments.
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3f76bbf7b58
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Joshua Powers (powersj) wrote : | # |
Added commit for using stream image data and comments with more questions from initial reviews of Scott and Ryan. Thank you both!
Ryan Harper (raharper) : | # |
Scott Moser (smoser) wrote : | # |
Thank you for a good text commit message that makes reading things easier.
* t2.micro for default instance type
the smallest type that will work for both ebs(ssd) and instance-store
from a quick look, m4.large (2 cpu / 8G) at $0.10/hr is the cheapest
instance-store-capable type you can find that exists in all regions
(many do not have m3.medium).
I guess at least just comment somewhere about ebs affecting instance types.
* boot_clean script: at some point it makes sense to use 'cloud-init clean'
* can you mention in your commit message that you removed trailing
whitespace? Just looking, it's hard to see the difference, and I
assumed that I was just missing it with my eyes. ie:
-The integration testing suite can automatically build a deb based on the
+The integration testing suite can automatically build a deb based on the
* vpc creation. You create a single vpc with 'cloud-
you create ephemeral ssh keys, and properly clean up afterwards.
Is there any reason not to create ephemeral vpc and cleanup?
After typing that I googled. There is a default limit of 5 vpc per region.
http://
* what will happen with vpc (or other) resources when multiple instances of the test harness are running at the same time? Will all be ok? Particularly since we're not re-creating vpc and gateway and such, it could be problematic.
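On the concurrency point, the branch sidesteps VPC collisions by giving every run a unique timestamped tag; this mirrors the tag generation visible in the preview diff:

```python
from datetime import datetime


def make_run_tag(base_tag):
    """Return a per-run tag such as 'cii-20171220123045'.

    Timestamping the tag lets concurrent test-harness runs create and
    tear down their own VPC, gateway, and keys without colliding.
    """
    return '%s-%s' % (base_tag, datetime.now().strftime('%Y%m%d%H%M%S'))
```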
Scott Moser (smoser) wrote : | # |
wrt apt config here is a tested example that does work
#cloud-config
apt:
  primary:
    - arches: [default]
      uri: http://
  security:
    - arches: [default]
      uri: http://
Chad Smith (chad.smith) wrote : | # |
Looks great Josh, thanks. I have some inline comments to resolve regarding using filters to find existing cloud-init resources in ec2 and using waiter methods. I've added a couple of discussion points about the potential of reusing or updating a named/tagged ssh key instead of creating new ones. But, maybe we need to have that ssh discussion as a squad instead of on this branch.
Also a patch for better informing us about the required aws configuration files:
http://
Scott Moser (smoser) : | # |
Chad Smith (chad.smith) : | # |
Joshua Powers (powersj) wrote : | # |
Added fixes from reviews and updated commit message.
The VPC is still created and destroyed for every run. Reusing a single long-lived VPC leads to conflicts and issues when trying to destroy specific components, causes problems when config changes are needed, and leaves resources around between runs.
There are 4 additional merges required:
* I pulled out the testcase change as we can follow up on that in a separate merge
* Update ssh key names: don't use id_rsa
* Add integration-
* Refactor properties function, put in superclass and change family --> os
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:322b9f9de21
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:0b61ef9c235
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
* (from the commit message).
"The default instance type is set to t2.micro. This is the smallest type
that will work for both ebs and instance-store."
that is not true. per https:/
t2.* are 'EBS-only'.
* user-data, I think probably you should pass None if there is None.
I expect that boto is not just checking a true-ish value.
I know from experience that running an instance with '' as user-data
differs from None (in the former you do not get a 'user-data' field
in the meta-data service, in the latter you do get one).
it looks like you need to not pass UserData in the kwargs at all.
https:/
* last... I don't need this, but it would be nice if you (re)started the EC2Image.
that also tells us passing bytes (rather than string) is ok, and that
boto will always base64 encode for you.
* Lets add a console_log() to the EC2Instance please.
http://
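The console_log() requested here lands in the preview diff with exactly this shape: EC2 buffers console output and may not have produced any yet, in which case boto3's response simply lacks the 'Output' key:

```python
def console_log(instance):
    """Return an instance's console log as bytes.

    EC2 buffers console output and may not have any yet; in that case
    the 'Output' key is absent from the response and b'' is returned.
    """
    try:
        return instance.console_output()['Output'].encode()
    except KeyError:
        return b''
```

`instance` would be a boto3 `ec2.Instance` resource.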
Joshua Powers (powersj) wrote : | # |
I think I've addressed all the required fixes and review comments at this time. The commit message was updated to note why t2.micro was chosen (small + free tier).
Thanks!
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:677c60738d0
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Chad Smith (chad.smith) wrote : | # |
Minor nits from me, otherwise +1 here. Ran a few tests against my own ec2 account to validate behavior. Approving, assuming the minor changes are made.
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:f439c1e5600
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
FAILED: Continuous integration, rev:9780f62550a
https:/
Executed test runs:
SUCCESS: Checkout
FAILED: Unit & Style Tests
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:9f0942c324a
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) wrote : | # |
I filed upstream boto issue https:/
Joshua Powers (powersj) : | # |
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:74ffc5e7669
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Scott Moser (smoser) wrote : | # |
thanks for the fixes. i think we're down to just the destroy of an image destroying (the last) snapshot that was created from it.
then i think this is good.
Can you put a comment in 'Re-start instance in the case additional customization required'?
Just mention there that it'd be better to only re-start on demand in execute.
And on console output... I *think* that, generally speaking, when an instance shuts down that signals to ec2 that it should update its console output. I'm interested in seeing how this all works, but if we get console output after stop/terminate of an instance, we should get some data.
Joshua Powers (powersj) : | # |
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:3793d3d2450
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:751e31fe8ff
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:44cf1af45fd
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:14beefdcc7d
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Chad Smith (chad.smith) wrote : | # |
Thanks for the reworks Josh. Looks great.
Joshua Powers (powersj) wrote : | # |
On Fri, Jan 5, 2018 at 11:24 AM, Chad Smith <email address hidden>
wrote:
> > +
> > + def _execute(self, *args, **kwargs):
> > + """Execute command in image, modifying image."""
> > + self._instance.
>
> I think the double _instance.start call path should be fine because the
> initial call is called with wait=True as well which will block until
> instance is in running state, which instance.start noops on.
>
agreed and tried it both ways. This seemed cleaner.
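The no-op behavior being discussed is visible in the diff's start(): a second start() on an already-running instance returns immediately. A toy model of that control flow (not the branch's actual class):

```python
class ToyInstance:
    """Toy stand-in demonstrating the idempotent start() discussed above."""

    def __init__(self):
        self.state = {'Name': 'stopped'}
        self.start_calls = 0

    def start(self, wait=True):
        # Starting an already-running instance is a no-op, which is what
        # makes the double _instance.start call path safe.
        if self.state['Name'] == 'running':
            return
        self.start_calls += 1
        self.state = {'Name': 'running'}
```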
Server Team CI bot (server-team-bot) wrote : | # |
PASSED: Continuous integration, rev:0ccf7852289
https:/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
SUCCESS: Ubuntu LTS: Integration
SUCCESS: MAAS Compatibility Testing
IN_PROGRESS: Declarative: Post Actions
Click here to trigger a rebuild:
https:/
Scott Moser (smoser) : | # |
Scott Moser (smoser) wrote : | # |
I've rebased, squashed, and updated the commit message a bit.
tox && centos check in progress, and then pushing.
Preview Diff
1 | diff --git a/.pylintrc b/.pylintrc |
2 | index 3ad3692..05a086d 100644 |
3 | --- a/.pylintrc |
4 | +++ b/.pylintrc |
5 | @@ -46,7 +46,7 @@ reports=no |
6 | # (useful for modules/projects where namespaces are manipulated during runtime |
7 | # and thus existing member attributes cannot be deduced by static analysis. It |
8 | # supports qualified module names, as well as Unix pattern matching. |
9 | -ignored-modules=six.moves,pkg_resources,httplib,http.client |
10 | +ignored-modules=six.moves,pkg_resources,httplib,http.client,paramiko,simplestreams |
11 | |
12 | # List of class names for which member attributes should not be checked (useful |
13 | # for classes with dynamically set attributes). This supports the use of |
14 | diff --git a/doc/rtd/topics/tests.rst b/doc/rtd/topics/tests.rst |
15 | index d668e3f..bf04bb3 100644 |
16 | --- a/doc/rtd/topics/tests.rst |
17 | +++ b/doc/rtd/topics/tests.rst |
18 | @@ -118,19 +118,19 @@ TreeRun and TreeCollect |
19 | |
20 | If working on a cloud-init feature or resolving a bug, it may be useful to |
21 | run the current copy of cloud-init in the integration testing environment. |
22 | -The integration testing suite can automatically build a deb based on the |
23 | +The integration testing suite can automatically build a deb based on the |
24 | current working tree of cloud-init and run the test suite using this deb. |
25 | |
26 | The ``tree_run`` and ``tree_collect`` commands take the same arguments as |
27 | -the ``run`` and ``collect`` commands. These commands will build a deb and |
28 | -write it into a temporary file, then start the test suite and pass that deb |
29 | +the ``run`` and ``collect`` commands. These commands will build a deb and |
30 | +write it into a temporary file, then start the test suite and pass that deb |
31 | in. To build a deb only, and not run the test suite, the ``bddeb`` command |
32 | can be used. |
33 | |
34 | Note that code in the cloud-init working tree that has not been committed |
35 | when the cloud-init deb is built will still be included. To build a |
36 | cloud-init deb from or use the ``tree_run`` command using a copy of |
37 | -cloud-init located in a different directory, use the option ``--cloud-init |
38 | +cloud-init located in a different directory, use the option ``--cloud-init |
39 | /path/to/cloud-init``. |
40 | |
41 | .. code-block:: bash |
42 | @@ -383,7 +383,7 @@ Development Checklist |
43 | * Valid unit tests validating output collected |
44 | * Passes pylint & pep8 checks |
45 | * Placed in the appropriate sub-folder in the test cases directory |
46 | -* Tested by running the test: |
47 | +* Tested by running the test: |
48 | |
49 | .. code-block:: bash |
50 | |
51 | @@ -392,6 +392,32 @@ Development Checklist |
52 | --test modules/your_test.yaml \ |
53 | [--deb <build of cloud-init>] |
54 | |
55 | + |
56 | +Platforms |
57 | +========= |
58 | + |
59 | +EC2 |
60 | +--- |
61 | +To run on the EC2 platform it is required that the user has an AWS credentials |
62 | +configuration file specifying his or her access keys and a default region. |
63 | +These configuration files are the standard that the AWS cli and other AWS |
64 | +tools utilize for interacting directly with AWS itself and are normally |
65 | +generated when running ``aws configure``: |
66 | + |
67 | +.. code-block:: bash |
68 | + |
69 | + $ cat $HOME/.aws/credentials |
70 | + [default] |
71 | + aws_access_key_id = <KEY HERE> |
72 | + aws_secret_access_key = <KEY HERE> |
73 | + |
74 | +.. code-block:: bash |
75 | + |
76 | + $ cat $HOME/.aws/config |
77 | + [default] |
78 | + region = us-west-2 |
79 | + |
80 | + |
81 | Architecture |
82 | ============ |
83 | |
84 | @@ -455,7 +481,7 @@ replace the default. If the data is a dictionary then the value will be the |
85 | result of merging that dictionary from the default config and that |
86 | dictionary from the overrides. |
87 | |
88 | -Merging is done using the function |
89 | +Merging is done using the function |
90 | ``tests.cloud_tests.config.merge_config``, which can be examined for more |
91 | detail on config merging behavior. |
92 | |
93 | diff --git a/tests/cloud_tests/collect.py b/tests/cloud_tests/collect.py |
94 | index bb72245..33acbb1 100644 |
95 | --- a/tests/cloud_tests/collect.py |
96 | +++ b/tests/cloud_tests/collect.py |
97 | @@ -28,12 +28,18 @@ def collect_script(instance, base_dir, script, script_name): |
98 | |
99 | |
100 | def collect_console(instance, base_dir): |
101 | - LOG.debug('getting console log') |
102 | + """Collect instance console log. |
103 | + |
104 | + @param instance: instance to get console log for |
105 | + @param base_dir: directory to write console log to |
106 | + """ |
107 | + logfile = os.path.join(base_dir, 'console.log') |
108 | + LOG.debug('getting console log for %s to %s', instance, logfile) |
109 | try: |
110 | data = instance.console_log() |
111 | except NotImplementedError: |
112 | data = b'instance.console_log: not implemented' |
113 | - with open(os.path.join(base_dir, 'console.log'), "wb") as fp: |
114 | + with open(logfile, "wb") as fp: |
115 | fp.write(data) |
116 | |
117 | |
118 | @@ -89,12 +95,11 @@ def collect_test_data(args, snapshot, os_name, test_name): |
119 | test_output_dir, script, script_name)) |
120 | for script_name, script in test_scripts.items()] |
121 | |
122 | - console_log = partial( |
123 | - run_single, 'collect console', |
124 | - partial(collect_console, instance, test_output_dir)) |
125 | - |
126 | res = run_stage('collect for test: {}'.format(test_name), |
127 | - [start_call] + collect_calls + [console_log]) |
128 | + [start_call] + collect_calls) |
129 | + |
130 | + instance.shutdown() |
131 | + collect_console(instance, test_output_dir) |
132 | |
133 | return res |
134 | |
135 | diff --git a/tests/cloud_tests/platforms.yaml b/tests/cloud_tests/platforms.yaml |
136 | index fa4f845..cb1c904 100644 |
137 | --- a/tests/cloud_tests/platforms.yaml |
138 | +++ b/tests/cloud_tests/platforms.yaml |
139 | @@ -6,8 +6,13 @@ default_platform_config: |
140 | get_image_timeout: 300 |
141 | # maximum time to create instance (before waiting for cloud-init) |
142 | create_instance_timeout: 60 |
143 | - |
144 | + private_key: id_rsa |
145 | + public_key: id_rsa.pub |
146 | platforms: |
147 | + ec2: |
148 | + enabled: true |
149 | + instance-type: t2.micro |
150 | + tag: cii |
151 | lxd: |
152 | enabled: true |
153 | # overrides for image templates |
154 | @@ -61,9 +66,5 @@ platforms: |
155 | {{ config_get("user.vendor-data", properties.default) }} |
156 | nocloud-kvm: |
157 | enabled: true |
158 | - private_key: id_rsa |
159 | - public_key: id_rsa.pub |
160 | - ec2: {} |
161 | - azure: {} |
162 | |
163 | # vi: ts=4 expandtab |
164 | diff --git a/tests/cloud_tests/platforms/__init__.py b/tests/cloud_tests/platforms/__init__.py |
165 | index 92ed162..a01e51a 100644 |
166 | --- a/tests/cloud_tests/platforms/__init__.py |
167 | +++ b/tests/cloud_tests/platforms/__init__.py |
168 | @@ -2,10 +2,12 @@ |
169 | |
170 | """Main init.""" |
171 | |
172 | +from .ec2 import platform as ec2 |
173 | from .lxd import platform as lxd |
174 | from .nocloudkvm import platform as nocloudkvm |
175 | |
176 | PLATFORMS = { |
177 | + 'ec2': ec2.EC2Platform, |
178 | 'nocloud-kvm': nocloudkvm.NoCloudKVMPlatform, |
179 | 'lxd': lxd.LXDPlatform, |
180 | } |
181 | diff --git a/tests/cloud_tests/platforms/ec2/image.py b/tests/cloud_tests/platforms/ec2/image.py |
182 | new file mode 100644 |
183 | index 0000000..53706b1 |
184 | --- /dev/null |
185 | +++ b/tests/cloud_tests/platforms/ec2/image.py |
186 | @@ -0,0 +1,109 @@ |
187 | +# This file is part of cloud-init. See LICENSE file for license information. |
188 | + |
189 | +"""EC2 Image Base Class.""" |
190 | + |
191 | +from ..images import Image |
192 | +from .snapshot import EC2Snapshot |
193 | +from tests.cloud_tests import LOG |
194 | + |
195 | + |
196 | +class EC2Image(Image): |
197 | + """EC2 backed image.""" |
198 | + |
199 | + platform_name = 'ec2' |
200 | + |
201 | + def __init__(self, platform, config, image_ami): |
202 | + """Set up image. |
203 | + |
204 | + @param platform: platform object |
205 | + @param config: image configuration |
206 | + @param image_ami: string of image ami ID |
207 | + """ |
208 | + super(EC2Image, self).__init__(platform, config) |
209 | + self._img_instance = None |
210 | + self.image_ami = image_ami |
211 | + |
212 | + @property |
213 | + def _instance(self): |
214 | + """Internal use only, returns a running instance""" |
215 | + if not self._img_instance: |
216 | + self._img_instance = self.platform.create_instance( |
217 | + self.properties, self.config, self.features, |
218 | + self.image_ami, user_data=None) |
219 | + self._img_instance.start(wait=True, wait_for_cloud_init=True) |
220 | + return self._img_instance |
221 | + |
222 | + @property |
223 | + def properties(self): |
224 | + """Dictionary containing: 'arch', 'os', 'version', 'release'.""" |
225 | + return { |
226 | + 'arch': self.config['arch'], |
227 | + 'os': self.config['family'], |
228 | + 'release': self.config['release'], |
229 | + 'version': self.config['version'], |
230 | + } |
231 | + |
232 | + def destroy(self): |
233 | + """Delete the instance used to create a custom image.""" |
234 | + if self._img_instance: |
235 | + LOG.debug('terminating backing instance %s', |
236 | + self._img_instance.instance.instance_id) |
237 | + self._img_instance.instance.terminate() |
238 | + self._img_instance.instance.wait_until_terminated() |
239 | + |
240 | + super(EC2Image, self).destroy() |
241 | + |
242 | + def _execute(self, *args, **kwargs): |
243 | + """Execute command in image, modifying image.""" |
244 | + self._instance.start(wait=True) |
245 | + return self._instance._execute(*args, **kwargs) |
246 | + |
247 | + def push_file(self, local_path, remote_path): |
248 | + """Copy file at 'local_path' to instance at 'remote_path'.""" |
249 | + self._instance.start(wait=True) |
250 | + return self._instance.push_file(local_path, remote_path) |
251 | + |
252 | + def run_script(self, *args, **kwargs): |
253 | + """Run script in image, modifying image. |
254 | + |
255 | + @return_value: script output |
256 | + """ |
257 | + self._instance.start(wait=True) |
258 | + return self._instance.run_script(*args, **kwargs) |
259 | + |
260 | + def snapshot(self): |
261 | + """Create snapshot of image, block until done. |
262 | + |
263 | + Will return base image_ami if no instance has been booted, otherwise |
264 | + will run the clean script, shutdown the instance, create a custom |
265 | + AMI, and use that AMI once available. |
266 | + """ |
267 | + if not self._img_instance: |
268 | + return EC2Snapshot(self.platform, self.properties, self.config, |
269 | + self.features, self.image_ami, |
270 | + delete_on_destroy=False) |
271 | + |
272 | + if self.config.get('boot_clean_script'): |
273 | + self._img_instance.run_script(self.config.get('boot_clean_script')) |
274 | + |
275 | + self._img_instance.shutdown(wait=True) |
276 | + |
277 | + LOG.debug('creating custom ami from instance %s', |
278 | + self._img_instance.instance.instance_id) |
279 | + response = self.platform.ec2_client.create_image( |
280 | + Name='%s-%s' % (self.platform.tag, self.image_ami), |
281 | + InstanceId=self._img_instance.instance.instance_id |
282 | + ) |
283 | + image_ami_edited = response['ImageId'] |
284 | + |
285 | + # Create image and wait until it is in the 'available' state |
286 | + image = self.platform.ec2_resource.Image(image_ami_edited) |
287 | + image.wait_until_exists() |
288 | + waiter = self.platform.ec2_client.get_waiter('image_available') |
289 | + waiter.wait(ImageIds=[image.id]) |
290 | + image.reload() |
291 | + |
292 | + return EC2Snapshot(self.platform, self.properties, self.config, |
293 | + self.features, image_ami_edited) |
294 | + |
295 | +# vi: ts=4 expandtab |
296 | diff --git a/tests/cloud_tests/platforms/ec2/instance.py b/tests/cloud_tests/platforms/ec2/instance.py |
297 | new file mode 100644 |
298 | index 0000000..4ba737a |
299 | --- /dev/null |
300 | +++ b/tests/cloud_tests/platforms/ec2/instance.py |
301 | @@ -0,0 +1,126 @@ |
302 | +# This file is part of cloud-init. See LICENSE file for license information. |
303 | + |
304 | +"""Base EC2 instance.""" |
305 | +import os |
306 | + |
307 | +import botocore |
308 | + |
309 | +from ..instances import Instance |
310 | +from tests.cloud_tests import LOG, util |
311 | + |
312 | + |
313 | +class EC2Instance(Instance): |
314 | + """EC2 backed instance.""" |
315 | + |
316 | + platform_name = "ec2" |
317 | + _ssh_client = None |
318 | + |
319 | + def __init__(self, platform, properties, config, features, |
320 | + image_ami, user_data=None): |
321 | + """Set up instance. |
322 | + |
323 | + @param platform: platform object |
324 | + @param properties: dictionary of properties |
325 | + @param config: dictionary of configuration values |
326 | + @param features: dictionary of supported feature flags |
327 | + @param image_ami: AWS AMI ID for image to use |
328 | + @param user_data: test user-data to pass to instance |
329 | + """ |
330 | + super(EC2Instance, self).__init__( |
331 | + platform, image_ami, properties, config, features) |
332 | + |
333 | + self.image_ami = image_ami |
334 | + self.instance = None |
335 | + self.user_data = user_data |
336 | + self.ssh_ip = None |
337 | + self.ssh_port = 22 |
338 | + self.ssh_key_file = os.path.join( |
339 | + platform.config['data_dir'], platform.config['private_key']) |
340 | + self.ssh_pubkey_file = os.path.join( |
341 | + platform.config['data_dir'], platform.config['public_key']) |
342 | + |
343 | + def console_log(self): |
344 | + """Collect console log from instance. |
345 | + |
346 | + The console log is buffered and not always present, therefore |
347 | + may return empty string. |
348 | + """ |
349 | + try: |
350 | + return self.instance.console_output()['Output'].encode() |
351 | + except KeyError: |
352 | + return b'' |
353 | + |
354 | + def destroy(self): |
355 | + """Clean up instance.""" |
356 | + if self.instance: |
357 | + LOG.debug('destroying instance %s', self.instance.id) |
358 | + self.instance.terminate() |
359 | + self.instance.wait_until_terminated() |
360 | + |
361 | + self._ssh_close() |
362 | + |
363 | + super(EC2Instance, self).destroy() |
364 | + |
365 | + def _execute(self, command, stdin=None, env=None): |
366 | + """Execute command on instance.""" |
367 | + env_args = [] |
368 | + if env: |
369 | + env_args = ['env'] + ["%s=%s" % (k, v) for k, v in env.items()] |
370 | + |
371 | + return self._ssh(['sudo'] + env_args + list(command), stdin=stdin) |
372 | + |
373 | + def start(self, wait=True, wait_for_cloud_init=False): |
374 | + """Start instance on EC2 with the platform's VPC.""" |
375 | + if self.instance: |
376 | + if self.instance.state['Name'] == 'running': |
377 | + return |
378 | + |
379 | + LOG.debug('starting instance %s', self.instance.id) |
380 | + self.instance.start() |
381 | + else: |
382 | + LOG.debug('launching instance') |
383 | + |
384 | + args = { |
385 | + 'ImageId': self.image_ami, |
386 | + 'InstanceType': self.platform.instance_type, |
387 | + 'KeyName': self.platform.key_name, |
388 | + 'MaxCount': 1, |
389 | + 'MinCount': 1, |
390 | + 'SecurityGroupIds': [self.platform.security_group.id], |
391 | + 'SubnetId': self.platform.subnet.id, |
392 | + 'TagSpecifications': [{ |
393 | + 'ResourceType': 'instance', |
394 | + 'Tags': [{ |
395 | + 'Key': 'Name', 'Value': self.platform.tag |
396 | + }] |
397 | + }], |
398 | + } |
399 | + |
400 | + if self.user_data: |
401 | + args['UserData'] = self.user_data |
402 | + |
403 | + try: |
404 | + instances = self.platform.ec2_resource.create_instances(**args) |
405 | + except botocore.exceptions.ClientError as error: |
406 | + error_msg = error.response['Error']['Message'] |
407 | + raise util.PlatformError('start', error_msg) |
408 | + |
409 | + self.instance = instances[0] |
410 | + |
411 | + LOG.debug('instance id: %s', self.instance.id) |
412 | + if wait: |
413 | + self.instance.wait_until_running() |
414 | + self.instance.reload() |
415 | + self.ssh_ip = self.instance.public_ip_address |
416 | + self._wait_for_system(wait_for_cloud_init) |
417 | + |
418 | + def shutdown(self, wait=True): |
419 | + """Shutdown instance.""" |
420 | + LOG.debug('stopping instance %s', self.instance.id) |
421 | + self.instance.stop() |
422 | + |
423 | + if wait: |
424 | + self.instance.wait_until_stopped() |
425 | + self.instance.reload() |
426 | + |
427 | +# vi: ts=4 expandtab |
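The env-prefix construction in `_execute` above can be sketched as a standalone helper (note the list comprehension must interpolate each key/value pair, i.e. `"%s=%s" % (k, v)`; `build_env_command` is an illustrative name, not part of the branch):

```python
def build_env_command(command, env=None):
    """Prefix a command with sudo and 'env KEY=VALUE ...' entries."""
    env_args = []
    if env:
        # sorted() only to make the result deterministic for this example
        env_args = ['env'] + ['%s=%s' % (k, v) for k, v in sorted(env.items())]
    return ['sudo'] + env_args + list(command)

cmd = build_env_command(['cloud-init', 'status'], {'LANG': 'C'})
# cmd is ['sudo', 'env', 'LANG=C', 'cloud-init', 'status']
```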
428 | diff --git a/tests/cloud_tests/platforms/ec2/platform.py b/tests/cloud_tests/platforms/ec2/platform.py |
429 | new file mode 100644 |
430 | index 0000000..fdb17ba |
431 | --- /dev/null |
432 | +++ b/tests/cloud_tests/platforms/ec2/platform.py |
433 | @@ -0,0 +1,231 @@ |
434 | +# This file is part of cloud-init. See LICENSE file for license information. |
435 | + |
436 | +"""Base EC2 platform.""" |
437 | +from datetime import datetime |
438 | +import os |
439 | + |
440 | +import boto3 |
441 | +import botocore |
442 | + |
443 | +from ..platforms import Platform |
444 | +from .image import EC2Image |
445 | +from .instance import EC2Instance |
446 | +from tests.cloud_tests import LOG |
447 | + |
448 | + |
449 | +class EC2Platform(Platform): |
450 | + """EC2 test platform.""" |
451 | + |
452 | + platform_name = 'ec2' |
453 | + ipv4_cidr = '192.168.1.0/20' |
454 | + |
455 | + def __init__(self, config): |
456 | + """Set up platform.""" |
457 | + super(EC2Platform, self).__init__(config) |
458 | + # Used for unique VPC, SSH key, and custom AMI generation naming |
459 | + self.tag = '%s-%s' % ( |
460 | + config['tag'], datetime.now().strftime('%Y%m%d%H%M%S')) |
461 | + self.instance_type = config['instance-type'] |
462 | + |
463 | + try: |
464 | + self.ec2_client = boto3.client('ec2') |
465 | + self.ec2_resource = boto3.resource('ec2') |
466 | + self.ec2_region = boto3.Session().region_name |
467 | + self.key_name = self._upload_public_key(config) |
468 | + except botocore.exceptions.NoRegionError: |
469 | + raise RuntimeError( |
470 | + 'Please configure default region in $HOME/.aws/config') |
471 | + except botocore.exceptions.NoCredentialsError: |
472 | + raise RuntimeError( |
473 | + 'Please configure ec2 credentials in $HOME/.aws/credentials') |
474 | + |
475 | + self.vpc = self._create_vpc() |
476 | + self.internet_gateway = self._create_internet_gateway() |
477 | + self.subnet = self._create_subnet() |
478 | + self.routing_table = self._create_routing_table() |
479 | + self.security_group = self._create_security_group() |
480 | + |
481 | + def create_instance(self, properties, config, features, |
482 | + image_ami, user_data=None): |
483 | +        """Create an instance. |
484 | + |
486 | + @param properties: image properties |
487 | + @param config: image configuration |
488 | + @param features: image features |
489 | + @param image_ami: string of image ami ID |
490 | + @param user_data: test user-data to pass to instance |
491 | + @return_value: cloud_tests.instances instance |
492 | + """ |
493 | + return EC2Instance(self, properties, config, features, |
494 | + image_ami, user_data) |
495 | + |
496 | + def destroy(self): |
497 | + """Delete SSH keys, terminate all instances, and delete VPC.""" |
498 | + for instance in self.vpc.instances.all(): |
499 | + LOG.debug('waiting for instance %s termination', instance.id) |
500 | + instance.terminate() |
501 | + instance.wait_until_terminated() |
502 | + |
503 | + if self.key_name: |
504 | + LOG.debug('deleting SSH key %s', self.key_name) |
505 | + self.ec2_client.delete_key_pair(KeyName=self.key_name) |
506 | + |
507 | + if self.security_group: |
508 | + LOG.debug('deleting security group %s', self.security_group.id) |
509 | + self.security_group.delete() |
510 | + |
511 | + if self.subnet: |
512 | + LOG.debug('deleting subnet %s', self.subnet.id) |
513 | + self.subnet.delete() |
514 | + |
515 | + if self.routing_table: |
516 | + LOG.debug('deleting routing table %s', self.routing_table.id) |
517 | + self.routing_table.delete() |
518 | + |
519 | + if self.internet_gateway: |
520 | + LOG.debug('deleting internet gateway %s', self.internet_gateway.id) |
521 | + self.internet_gateway.detach_from_vpc(VpcId=self.vpc.id) |
522 | + self.internet_gateway.delete() |
523 | + |
524 | + if self.vpc: |
525 | + LOG.debug('deleting vpc %s', self.vpc.id) |
526 | + self.vpc.delete() |
527 | + |
528 | + def get_image(self, img_conf): |
529 | + """Get image using specified image configuration. |
530 | + |
531 | + Hard coded for 'amd64' based images. |
532 | + |
533 | + @param img_conf: configuration for image |
534 | + @return_value: cloud_tests.images instance |
535 | + """ |
536 | + if img_conf['root-store'] == 'ebs': |
537 | + root_store = 'ssd' |
538 | + elif img_conf['root-store'] == 'instance-store': |
539 | + root_store = 'instance' |
540 | + else: |
541 | + raise RuntimeError('Unknown root-store type: %s' % |
542 | + (img_conf['root-store'])) |
543 | + |
544 | + filters = [ |
545 | + 'arch=%s' % 'amd64', |
546 | + 'endpoint=https://ec2.%s.amazonaws.com' % self.ec2_region, |
547 | + 'region=%s' % self.ec2_region, |
548 | + 'release=%s' % img_conf['release'], |
549 | + 'root_store=%s' % root_store, |
550 | + 'virt=hvm', |
551 | + ] |
552 | + |
553 | + LOG.debug('finding image using streams') |
554 | + image = self._query_streams(img_conf, filters) |
555 | + |
556 | + try: |
557 | + image_ami = image['id'] |
558 | + except KeyError: |
559 | + raise RuntimeError('No images found for %s!' % img_conf['release']) |
560 | + |
561 | + LOG.debug('found image: %s', image_ami) |
562 | + image = EC2Image(self, img_conf, image_ami) |
563 | + return image |
564 | + |
565 | + def _create_internet_gateway(self): |
566 | + """Create Internet Gateway and assign to VPC.""" |
567 | + LOG.debug('creating internet gateway') |
568 | + internet_gateway = self.ec2_resource.create_internet_gateway() |
569 | + internet_gateway.attach_to_vpc(VpcId=self.vpc.id) |
570 | + self._tag_resource(internet_gateway) |
571 | + |
572 | + return internet_gateway |
573 | + |
574 | + def _create_routing_table(self): |
575 | +        """Create a route table and associate it with the subnet. |
576 | + |
577 | +        This sets up internet access from the VPC via the internet gateway |
578 | +        by configuring default IPv4 and IPv6 routes. |
579 | + """ |
580 | + LOG.debug('creating routing table') |
581 | + route_table = self.vpc.create_route_table() |
582 | + route_table.create_route(DestinationCidrBlock='0.0.0.0/0', |
583 | + GatewayId=self.internet_gateway.id) |
584 | + route_table.create_route(DestinationIpv6CidrBlock='::/0', |
585 | + GatewayId=self.internet_gateway.id) |
586 | + route_table.associate_with_subnet(SubnetId=self.subnet.id) |
587 | + self._tag_resource(route_table) |
588 | + |
589 | + return route_table |
590 | + |
591 | + def _create_security_group(self): |
592 | +        """Create a security group allowing all ingress traffic.""" |
593 | + LOG.debug('creating security group') |
594 | + security_group = self.vpc.create_security_group( |
595 | + GroupName=self.tag, Description='integration test security group') |
596 | + security_group.authorize_ingress( |
597 | + IpProtocol='-1', FromPort=-1, ToPort=-1, CidrIp='0.0.0.0/0') |
598 | + self._tag_resource(security_group) |
599 | + |
600 | + return security_group |
601 | + |
602 | + def _create_subnet(self): |
603 | +        """Create a subnet with both IPv4 and IPv6 ranges.""" |
604 | + ipv6_cidr = self.vpc.ipv6_cidr_block_association_set[0][ |
605 | + 'Ipv6CidrBlock'][:-2] + '64' |
606 | + |
607 | + LOG.debug('creating subnet with following ranges:') |
608 | + LOG.debug('ipv4: %s', self.ipv4_cidr) |
609 | + LOG.debug('ipv6: %s', ipv6_cidr) |
610 | + subnet = self.vpc.create_subnet(CidrBlock=self.ipv4_cidr, |
611 | + Ipv6CidrBlock=ipv6_cidr) |
612 | + modify_subnet = subnet.meta.client.modify_subnet_attribute |
613 | + modify_subnet(SubnetId=subnet.id, |
614 | + MapPublicIpOnLaunch={'Value': True}) |
615 | + self._tag_resource(subnet) |
616 | + |
617 | + return subnet |
618 | + |
619 | + def _create_vpc(self): |
620 | +        """Create an AWS EC2 VPC with IPv4 and IPv6 support.""" |
621 | + LOG.debug('creating new vpc') |
622 | + try: |
623 | + vpc = self.ec2_resource.create_vpc( |
624 | + CidrBlock=self.ipv4_cidr, |
625 | + AmazonProvidedIpv6CidrBlock=True) |
626 | + except botocore.exceptions.ClientError as e: |
627 | + raise RuntimeError(e) |
628 | + |
629 | + vpc.wait_until_available() |
630 | + self._tag_resource(vpc) |
631 | + |
632 | + return vpc |
633 | + |
634 | + def _tag_resource(self, resource): |
635 | + """Tag a resource with the specified tag. |
636 | + |
637 | +        This makes it easier to find and delete the resources used by |
638 | +        this test run. |
639 | + |
640 | + @param resource: resource to tag |
641 | + """ |
642 | + tag = { |
643 | + 'Key': 'Name', |
644 | + 'Value': self.tag |
645 | + } |
646 | + resource.create_tags(Tags=[tag]) |
647 | + |
648 | + def _upload_public_key(self, config): |
649 | + """Generate random name and upload SSH key with that name. |
650 | + |
651 | + @param config: platform config |
652 | + @return: string of ssh key name |
653 | + """ |
654 | + key_file = os.path.join(config['data_dir'], config['public_key']) |
655 | + with open(key_file, 'r') as file: |
656 | + public_key = file.read().strip('\n') |
657 | + |
658 | + LOG.debug('uploading SSH key %s', self.tag) |
659 | + self.ec2_client.import_key_pair(KeyName=self.tag, |
660 | + PublicKeyMaterial=public_key) |
661 | + |
662 | + return self.tag |
663 | + |
664 | +# vi: ts=4 expandtab |
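A minimal sketch of the string slicing `_create_subnet` above uses to derive its IPv6 range: the VPC's Amazon-provided block (a /56) is trimmed to a /64 for the subnet. The address below is an invented example:

```python
def subnet_ipv6_cidr(vpc_ipv6_cidr):
    """Turn the VPC's /56 IPv6 CIDR into a /64 subnet CIDR.

    Mirrors the '[:-2] + "64"' slicing in _create_subnet: drop the
    two-character prefix length and append '64'.
    """
    return vpc_ipv6_cidr[:-2] + '64'

derived = subnet_ipv6_cidr('2600:1f18:1234:5600::/56')
# derived is '2600:1f18:1234:5600::/64'
```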
665 | diff --git a/tests/cloud_tests/platforms/ec2/snapshot.py b/tests/cloud_tests/platforms/ec2/snapshot.py |
666 | new file mode 100644 |
667 | index 0000000..2c48cb5 |
668 | --- /dev/null |
669 | +++ b/tests/cloud_tests/platforms/ec2/snapshot.py |
670 | @@ -0,0 +1,66 @@ |
671 | +# This file is part of cloud-init. See LICENSE file for license information. |
672 | + |
673 | +"""Base EC2 snapshot.""" |
674 | + |
675 | +from ..snapshots import Snapshot |
676 | +from tests.cloud_tests import LOG |
677 | + |
678 | + |
679 | +class EC2Snapshot(Snapshot): |
680 | + """EC2 image copy backed snapshot.""" |
681 | + |
682 | + platform_name = 'ec2' |
683 | + |
684 | + def __init__(self, platform, properties, config, features, image_ami, |
685 | + delete_on_destroy=True): |
686 | + """Set up snapshot. |
687 | + |
688 | + @param platform: platform object |
689 | + @param properties: image properties |
690 | + @param config: image config |
691 | + @param features: supported feature flags |
692 | + @param image_ami: string of image ami ID |
693 | + @param delete_on_destroy: boolean to delete on destroy |
694 | + """ |
695 | + super(EC2Snapshot, self).__init__( |
696 | + platform, properties, config, features) |
697 | + |
698 | + self.image_ami = image_ami |
699 | + self.delete_on_destroy = delete_on_destroy |
700 | + |
701 | + def destroy(self): |
702 | + """Deregister the backing AMI.""" |
703 | + if self.delete_on_destroy: |
704 | + image = self.platform.ec2_resource.Image(self.image_ami) |
705 | + snapshot_id = image.block_device_mappings[0]['Ebs']['SnapshotId'] |
706 | + |
707 | + LOG.debug('removing custom ami %s', self.image_ami) |
708 | + self.platform.ec2_client.deregister_image(ImageId=self.image_ami) |
709 | + |
710 | + LOG.debug('removing custom snapshot %s', snapshot_id) |
711 | + self.platform.ec2_client.delete_snapshot(SnapshotId=snapshot_id) |
712 | + |
713 | + def launch(self, user_data, meta_data=None, block=True, start=True, |
714 | + use_desc=None): |
715 | + """Launch instance. |
716 | + |
717 | + @param user_data: user-data for the instance |
718 | + @param meta_data: meta_data for the instance |
719 | + @param block: wait until instance is created |
720 | + @param start: start instance and wait until fully started |
721 | + @param use_desc: string of test name |
722 | + @return_value: an Instance |
723 | + """ |
724 | + if meta_data is not None: |
725 | +            raise ValueError("metadata not supported on EC2") |
726 | + |
727 | + instance = self.platform.create_instance( |
728 | + self.properties, self.config, self.features, |
729 | + self.image_ami, user_data) |
730 | + |
731 | + if start: |
732 | + instance.start() |
733 | + |
734 | + return instance |
735 | + |
736 | +# vi: ts=4 expandtab |
737 | diff --git a/tests/cloud_tests/platforms/instances.py b/tests/cloud_tests/platforms/instances.py |
738 | index 8c59d62..3bad021 100644 |
739 | --- a/tests/cloud_tests/platforms/instances.py |
740 | +++ b/tests/cloud_tests/platforms/instances.py |
741 | @@ -1,14 +1,21 @@ |
742 | # This file is part of cloud-init. See LICENSE file for license information. |
743 | |
744 | """Base instance.""" |
745 | +import time |
746 | + |
747 | +import paramiko |
748 | +from paramiko.ssh_exception import ( |
749 | + BadHostKeyException, AuthenticationException, SSHException) |
750 | |
751 | from ..util import TargetBase |
752 | +from tests.cloud_tests import LOG, util |
753 | |
754 | |
755 | class Instance(TargetBase): |
756 | """Base instance object.""" |
757 | |
758 | platform_name = None |
759 | + _ssh_client = None |
760 | |
761 | def __init__(self, platform, name, properties, config, features): |
762 | """Set up instance. |
763 | @@ -26,6 +33,11 @@ class Instance(TargetBase): |
764 | self.features = features |
765 | self._tmp_count = 0 |
766 | |
767 | + self.ssh_ip = None |
768 | + self.ssh_port = None |
769 | + self.ssh_key_file = None |
770 | + self.ssh_username = 'ubuntu' |
771 | + |
772 | def console_log(self): |
773 | """Instance console. |
774 | |
775 | @@ -47,7 +59,63 @@ class Instance(TargetBase): |
776 | |
777 | def destroy(self): |
778 | """Clean up instance.""" |
779 | - pass |
780 | + self._ssh_close() |
781 | + |
782 | + def _ssh(self, command, stdin=None): |
783 | + """Run a command via SSH.""" |
784 | + client = self._ssh_connect() |
785 | + |
786 | + cmd = util.shell_pack(command) |
787 | + fp_in, fp_out, fp_err = client.exec_command(cmd) |
788 | + channel = fp_in.channel |
789 | + |
790 | + if stdin is not None: |
791 | + fp_in.write(stdin) |
792 | + fp_in.close() |
793 | + |
794 | + channel.shutdown_write() |
795 | + rc = channel.recv_exit_status() |
796 | + |
797 | + return (fp_out.read(), fp_err.read(), rc) |
798 | + |
799 | + def _ssh_close(self): |
800 | + if self._ssh_client: |
801 | + try: |
802 | + self._ssh_client.close() |
803 | + except SSHException: |
804 | + LOG.warning('Failed to close SSH connection.') |
805 | + self._ssh_client = None |
806 | + |
807 | + def _ssh_connect(self): |
808 | + """Connect via SSH.""" |
809 | + if self._ssh_client: |
810 | + return self._ssh_client |
811 | + |
812 | + if not self.ssh_ip or not self.ssh_port: |
813 | +            raise ValueError('ssh_ip and ssh_port must be set') |
814 | + |
815 | + client = paramiko.SSHClient() |
816 | + client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) |
817 | + private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file) |
818 | + |
819 | + retries = 30 |
820 | + while retries: |
821 | + try: |
822 | + client.connect(username=self.ssh_username, |
823 | + hostname=self.ssh_ip, port=self.ssh_port, |
824 | + pkey=private_key, banner_timeout=30) |
825 | + self._ssh_client = client |
826 | + return client |
827 | + except (ConnectionRefusedError, AuthenticationException, |
828 | + BadHostKeyException, ConnectionResetError, SSHException, |
829 | +                    OSError): |
830 | + retries -= 1 |
831 | + time.sleep(10) |
832 | + |
833 | + ssh_cmd = 'Failed ssh connection to %s@%s:%s after 300 seconds' % ( |
834 | + self.ssh_username, self.ssh_ip, self.ssh_port |
835 | + ) |
836 | + raise util.InTargetExecuteError(b'', b'', 1, ssh_cmd, 'ssh') |
837 | |
838 | def _wait_for_system(self, wait_for_cloud_init): |
839 | """Wait until system has fully booted and cloud-init has finished. |
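The retry loop in `_ssh_connect` above (30 attempts, 10 seconds apart, i.e. up to the 300 seconds mentioned in the error message) can be sketched generically. `connect_with_retries` and `fake_connect` are hypothetical names for illustration only, and paramiko-specific exceptions are omitted to keep the sketch stdlib-only:

```python
import time

def connect_with_retries(connect, retries=30, delay=10, sleep=time.sleep):
    """Retry a connect callable, mirroring _ssh_connect's loop.

    Returns the first successful result; re-raises the last error once
    all attempts fail (30 x 10s = 300 seconds by default).
    """
    last_error = None
    while retries:
        try:
            return connect()
        except OSError as error:  # ConnectionRefusedError et al. are OSErrors
            last_error = error
            retries -= 1
            sleep(delay)
    raise last_error

# Simulate a server that refuses the first two connection attempts.
attempts = []

def fake_connect():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionRefusedError('sshd not up yet')
    return 'client'

result = connect_with_retries(fake_connect, sleep=lambda _: None)
# result is 'client' after three attempts
```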
840 | diff --git a/tests/cloud_tests/platforms/nocloudkvm/instance.py b/tests/cloud_tests/platforms/nocloudkvm/instance.py |
841 | index 9bb2425..932dc0f 100644 |
842 | --- a/tests/cloud_tests/platforms/nocloudkvm/instance.py |
843 | +++ b/tests/cloud_tests/platforms/nocloudkvm/instance.py |
844 | @@ -4,7 +4,6 @@ |
845 | |
846 | import copy |
847 | import os |
848 | -import paramiko |
849 | import socket |
850 | import subprocess |
851 | import time |
852 | @@ -13,7 +12,7 @@ import uuid |
853 | from ..instances import Instance |
854 | from cloudinit.atomic_helper import write_json |
855 | from cloudinit import util as c_util |
856 | -from tests.cloud_tests import util |
857 | +from tests.cloud_tests import LOG, util |
858 | |
859 | # This domain contains reverse lookups for hostnames that are used. |
860 | # The primary reason is so sudo will return quickly when it attempts |
861 | @@ -26,7 +25,6 @@ class NoCloudKVMInstance(Instance): |
862 | """NoCloud KVM backed instance.""" |
863 | |
864 | platform_name = "nocloud-kvm" |
865 | - _ssh_client = None |
866 | |
867 | def __init__(self, platform, name, image_path, properties, config, |
868 | features, user_data, meta_data): |
869 | @@ -39,6 +37,10 @@ class NoCloudKVMInstance(Instance): |
870 | @param config: dictionary of configuration values |
871 | @param features: dictionary of supported feature flags |
872 | """ |
873 | + super(NoCloudKVMInstance, self).__init__( |
874 | + platform, name, properties, config, features |
875 | + ) |
876 | + |
877 | self.user_data = user_data |
878 | if meta_data: |
879 | meta_data = copy.deepcopy(meta_data) |
880 | @@ -66,6 +68,7 @@ class NoCloudKVMInstance(Instance): |
881 | meta_data['public-keys'] = [] |
882 | meta_data['public-keys'].append(self.ssh_pubkey) |
883 | |
884 | + self.ssh_ip = '127.0.0.1' |
885 | self.ssh_port = None |
886 | self.pid = None |
887 | self.pid_file = None |
888 | @@ -73,8 +76,33 @@ class NoCloudKVMInstance(Instance): |
889 | self.disk = image_path |
890 | self.meta_data = meta_data |
891 | |
892 | - super(NoCloudKVMInstance, self).__init__( |
893 | - platform, name, properties, config, features) |
894 | + def shutdown(self, wait=True): |
895 | + """Shutdown instance.""" |
896 | + |
897 | + if self.pid: |
898 | + # This relies on _execute which uses sudo over ssh. The ssh |
899 | + # connection would get killed before sudo exited, so ignore errors. |
900 | + cmd = ['shutdown', 'now'] |
901 | + try: |
902 | + self._execute(cmd) |
903 | + except util.InTargetExecuteError: |
904 | + pass |
905 | + self._ssh_close() |
906 | + |
907 | + if wait: |
908 | + LOG.debug("Executed shutdown. waiting on pid %s to end", |
909 | + self.pid) |
910 | + time_for_shutdown = 120 |
911 | + give_up_at = time.time() + time_for_shutdown |
912 | + pid_file_path = '/proc/%s' % self.pid |
913 | + msg = ("pid %s did not exit in %s seconds after shutdown." % |
914 | + (self.pid, time_for_shutdown)) |
915 | + while True: |
916 | + if not os.path.exists(pid_file_path): |
917 | + break |
918 | + if time.time() > give_up_at: |
919 | +                    raise util.PlatformError("shutdown", msg) |
919 | +                time.sleep(1) |
920 | +            self.pid = None |
921 | |
922 | def destroy(self): |
923 | """Clean up instance.""" |
924 | @@ -88,9 +116,7 @@ class NoCloudKVMInstance(Instance): |
925 | os.remove(self.pid_file) |
926 | |
927 | self.pid = None |
928 | - if self._ssh_client: |
929 | - self._ssh_client.close() |
930 | - self._ssh_client = None |
931 | + self._ssh_close() |
932 | |
933 | super(NoCloudKVMInstance, self).destroy() |
934 | |
935 | @@ -99,7 +125,7 @@ class NoCloudKVMInstance(Instance): |
936 | if env: |
937 |             env_args = ['env'] + ["%s=%s" % (k, v) for k, v in env.items()] |
938 | |
939 | - return self.ssh(['sudo'] + env_args + list(command), stdin=stdin) |
940 | + return self._ssh(['sudo'] + env_args + list(command), stdin=stdin) |
941 | |
942 | def generate_seed(self, tmpdir): |
943 | """Generate nocloud seed from user-data""" |
944 | @@ -125,50 +151,6 @@ class NoCloudKVMInstance(Instance): |
945 | s.close() |
946 | return num |
947 | |
948 | - def ssh(self, command, stdin=None): |
949 | - """Run a command via SSH.""" |
950 | - client = self._ssh_connect() |
951 | - |
952 | - cmd = util.shell_pack(command) |
953 | - try: |
954 | - fp_in, fp_out, fp_err = client.exec_command(cmd) |
955 | - channel = fp_in.channel |
956 | - if stdin is not None: |
957 | - fp_in.write(stdin) |
958 | - fp_in.close() |
959 | - |
960 | - channel.shutdown_write() |
961 | - rc = channel.recv_exit_status() |
962 | - return (fp_out.read(), fp_err.read(), rc) |
963 | - except paramiko.SSHException as e: |
964 | - raise util.InTargetExecuteError( |
965 | - b'', b'', -1, command, self.name, reason=e) |
966 | - |
967 | - def _ssh_connect(self, hostname='localhost', username='ubuntu', |
968 | - banner_timeout=120, retry_attempts=30): |
969 | - """Connect via SSH.""" |
970 | - if self._ssh_client: |
971 | - return self._ssh_client |
972 | - |
973 | - private_key = paramiko.RSAKey.from_private_key_file(self.ssh_key_file) |
974 | - client = paramiko.SSHClient() |
975 | - client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) |
976 | - while retry_attempts: |
977 | - try: |
978 | - client.connect(hostname=hostname, username=username, |
979 | - port=self.ssh_port, pkey=private_key, |
980 | - banner_timeout=banner_timeout) |
981 | - self._ssh_client = client |
982 | - return client |
983 | - except (paramiko.SSHException, TypeError): |
984 | - time.sleep(1) |
985 | - retry_attempts = retry_attempts - 1 |
986 | - |
987 | - error_desc = 'Failed command to: %s@%s:%s' % (username, hostname, |
988 | - self.ssh_port) |
989 | - raise util.InTargetExecuteError('', '', -1, 'ssh connect', |
990 | - self.name, error_desc) |
991 | - |
992 | def start(self, wait=True, wait_for_cloud_init=False): |
993 | """Start instance.""" |
994 | tmpdir = self.platform.config['data_dir'] |
995 | diff --git a/tests/cloud_tests/platforms/nocloudkvm/platform.py b/tests/cloud_tests/platforms/nocloudkvm/platform.py |
996 | index 8593346..a7e6f5d 100644 |
997 | --- a/tests/cloud_tests/platforms/nocloudkvm/platform.py |
998 | +++ b/tests/cloud_tests/platforms/nocloudkvm/platform.py |
999 | @@ -21,6 +21,10 @@ class NoCloudKVMPlatform(Platform): |
1000 | |
1001 | platform_name = 'nocloud-kvm' |
1002 | |
1003 | + def __init__(self, config): |
1004 | + """Set up platform.""" |
1005 | + super(NoCloudKVMPlatform, self).__init__(config) |
1006 | + |
1007 | def get_image(self, img_conf): |
1008 | """Get image using specified image configuration. |
1009 | |
1010 | diff --git a/tests/cloud_tests/platforms/platforms.py b/tests/cloud_tests/platforms/platforms.py |
1011 | index 2897536..d4e5c56 100644 |
1012 | --- a/tests/cloud_tests/platforms/platforms.py |
1013 | +++ b/tests/cloud_tests/platforms/platforms.py |
1014 | @@ -1,6 +1,12 @@ |
1015 | # This file is part of cloud-init. See LICENSE file for license information. |
1016 | |
1017 | """Base platform class.""" |
1018 | +import os |
1019 | + |
1020 | +from simplestreams import filters, mirrors |
1021 | +from simplestreams import util as s_util |
1022 | + |
1023 | +from cloudinit import util as c_util |
1024 | |
1025 | |
1026 | class Platform(object): |
1027 | @@ -11,6 +17,7 @@ class Platform(object): |
1028 | def __init__(self, config): |
1029 | """Set up platform.""" |
1030 | self.config = config |
1031 | + self._generate_ssh_keys(config['data_dir']) |
1032 | |
1033 | def get_image(self, img_conf): |
1034 | """Get image using specified image configuration. |
1035 | @@ -24,4 +31,66 @@ class Platform(object): |
1036 | """Clean up platform data.""" |
1037 | pass |
1038 | |
1039 | + def _generate_ssh_keys(self, data_dir): |
1040 | + """Generate SSH keys to be used with image.""" |
1041 | + filename = os.path.join(data_dir, 'id_rsa') |
1042 | + |
1043 | + if os.path.exists(filename): |
1044 | + c_util.del_file(filename) |
1045 | + |
1046 | + c_util.subp(['ssh-keygen', '-t', 'rsa', '-b', '4096', |
1047 | + '-f', filename, '-P', '', |
1048 | + '-C', 'ubuntu@cloud_test'], |
1049 | + capture=True) |
1050 | + |
1051 | + @staticmethod |
1052 | + def _query_streams(img_conf, img_filter): |
1053 | + """Query streams for latest image given a specific filter. |
1054 | + |
1055 | + @param img_conf: configuration for image |
1056 | +        @param img_filter: list of filter strings in 'key=value' format |
1057 | + @return: dictionary with latest image information or empty |
1058 | + """ |
1059 | + def policy(content, path): |
1060 | + return s_util.read_signed(content, keyring=img_conf['keyring']) |
1061 | + |
1062 | + (url, path) = s_util.path_from_mirror_url(img_conf['mirror_url'], None) |
1063 | + smirror = mirrors.UrlMirrorReader(url, policy=policy) |
1064 | + |
1065 | + config = {'max_items': 1, 'filters': filters.get_filters(img_filter)} |
1066 | + tmirror = FilterMirror(config) |
1067 | + tmirror.sync(smirror, path) |
1068 | + |
1069 | + try: |
1070 | + return tmirror.json_entries[0] |
1071 | + except IndexError: |
1072 | + raise RuntimeError('no images found with filter: %s' % img_filter) |
1073 | + |
1074 | + |
1075 | +class FilterMirror(mirrors.BasicMirrorWriter): |
1076 | + """Taken from sstream-query to return query result as json array.""" |
1077 | + |
1078 | + def __init__(self, config=None): |
1079 | + super(FilterMirror, self).__init__(config=config) |
1080 | + if config is None: |
1081 | + config = {} |
1082 | + self.config = config |
1083 | + self.filters = config.get('filters', []) |
1084 | + self.json_entries = [] |
1085 | + |
1086 | + def load_products(self, path=None, content_id=None): |
1087 | + return {'content_id': content_id, 'products': {}} |
1088 | + |
1089 | + def filter_item(self, data, src, target, pedigree): |
1090 | + return filters.filter_item(self.filters, data, src, pedigree) |
1091 | + |
1092 | + def insert_item(self, data, src, target, pedigree, contentsource): |
1093 | + # src and target are top level products:1.0 |
1094 | + # data is src['products'][ped[0]]['versions'][ped[1]]['items'][ped[2]] |
1095 | + # contentsource is a ContentSource if 'path' exists in data or None |
1096 | + data = s_util.products_exdata(src, pedigree) |
1097 | + if 'path' in data: |
1098 | + data.update({'item_url': contentsource.url}) |
1099 | + self.json_entries.append(data) |
1100 | + |
1101 | # vi: ts=4 expandtab |
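A simplified stand-in for what the streams query does with the `'key=value'` filter strings built in `get_image` and parsed by `filters.get_filters`; this is not the simplestreams API itself, and the AMI IDs below are invented:

```python
def parse_filters(filter_strings):
    """Parse 'key=value' strings into a dict (value may itself contain '=')."""
    return dict(f.split('=', 1) for f in filter_strings)

def matches(item, filter_strings):
    """True when every filter key appears in item with the expected value."""
    return all(item.get(k) == v
               for k, v in parse_filters(filter_strings).items())

images = [
    {'release': 'xenial', 'arch': 'amd64', 'id': 'ami-1111'},
    {'release': 'artful', 'arch': 'amd64', 'id': 'ami-2222'},
]
found = [i for i in images if matches(i, ['release=artful', 'arch=amd64'])]
# found contains only the artful/amd64 entry
```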
1102 | diff --git a/tests/cloud_tests/releases.yaml b/tests/cloud_tests/releases.yaml |
1103 | index e593380..48f903b 100644 |
1104 | --- a/tests/cloud_tests/releases.yaml |
1105 | +++ b/tests/cloud_tests/releases.yaml |
1106 | @@ -27,10 +27,14 @@ default_release_config: |
1107 | # features groups and additional feature settings |
1108 | feature_groups: [] |
1109 | features: {} |
1110 | - nocloud-kvm: |
1111 | mirror_url: https://cloud-images.ubuntu.com/daily |
1112 | - mirror_dir: '/srv/citest/nocloud-kvm' |
1113 | + mirror_dir: '/srv/citest/images' |
1114 | keyring: /usr/share/keyrings/ubuntu-cloudimage-keyring.gpg |
1115 | + ec2: |
1116 | + # Choose from: [ebs, instance-store] |
1117 | + root-store: ebs |
1118 | + boot_timeout: 300 |
1119 | + nocloud-kvm: |
1120 | setup_overrides: null |
1121 | override_templates: false |
1122 | # lxd specific default configuration options |
1123 | diff --git a/tests/cloud_tests/setup_image.py b/tests/cloud_tests/setup_image.py |
1124 | index 179f40d..6d24211 100644 |
1125 | --- a/tests/cloud_tests/setup_image.py |
1126 | +++ b/tests/cloud_tests/setup_image.py |
1127 | @@ -5,7 +5,6 @@ |
1128 | from functools import partial |
1129 | import os |
1130 | |
1131 | -from cloudinit import util as c_util |
1132 | from tests.cloud_tests import LOG |
1133 | from tests.cloud_tests import stage, util |
1134 | |
1135 | @@ -192,20 +191,6 @@ def enable_repo(args, image): |
1136 | image.execute(cmd, description=msg) |
1137 | |
1138 | |
1139 | -def generate_ssh_keys(data_dir): |
1140 | - """Generate SSH keys to be used with image.""" |
1141 | - LOG.info('generating SSH keys') |
1142 | - filename = os.path.join(data_dir, 'id_rsa') |
1143 | - |
1144 | - if os.path.exists(filename): |
1145 | - c_util.del_file(filename) |
1146 | - |
1147 | - c_util.subp(['ssh-keygen', '-t', 'rsa', '-b', '4096', |
1148 | - '-f', filename, '-P', '', |
1149 | - '-C', 'ubuntu@cloud_test'], |
1150 | - capture=True) |
1151 | - |
1152 | - |
1153 | def setup_image(args, image): |
1154 | """Set up image as specified in args. |
1155 | |
1156 | @@ -239,9 +224,6 @@ def setup_image(args, image): |
1157 | LOG.info('setting up %s', image) |
1158 | res = stage.run_stage( |
1159 | 'set up for {}'.format(image), calls, continue_after_error=False) |
1160 | - LOG.debug('after setup complete, installed cloud-init version is: %s', |
1161 | - installed_package_version(image, 'cloud-init')) |
1162 | - generate_ssh_keys(args.data_dir) |
1163 | return res |
1164 | |
1165 | # vi: ts=4 expandtab |
1166 | diff --git a/tests/cloud_tests/util.py b/tests/cloud_tests/util.py |
1167 | index 2aedcd0..6ff285e 100644 |
1168 | --- a/tests/cloud_tests/util.py |
1169 | +++ b/tests/cloud_tests/util.py |
1170 | @@ -321,9 +321,9 @@ class TargetBase(object): |
1171 | rcs = (0,) |
1172 | |
1173 | if description: |
1174 | - LOG.debug('Executing "%s"', description) |
1175 | + LOG.debug('executing "%s"', description) |
1176 | else: |
1177 | - LOG.debug("Executing command: %s", shell_quote(command)) |
1178 | + LOG.debug("executing command: %s", shell_quote(command)) |
1179 | |
1180 | out, err, rc = self._execute(command=command, stdin=stdin, env=env) |
1181 | |
1182 | @@ -447,6 +447,19 @@ class InTargetExecuteError(c_util.ProcessExecutionError): |
1183 | reason=reason) |
1184 | |
1185 | |
1186 | +class PlatformError(IOError): |
1187 | + """Error type for platform errors.""" |
1188 | + |
1189 | + default_desc = 'unexpected error in platform.' |
1190 | + |
1191 | + def __init__(self, operation, description=None): |
1192 | + """Init error and parent error class.""" |
1193 | + description = description if description else self.default_desc |
1194 | + |
1195 | + message = '%s: %s' % (operation, description) |
1196 | + IOError.__init__(self, message) |
1197 | + |
1198 | + |
1199 | class TempDir(object): |
1200 | """Configurable temporary directory like tempfile.TemporaryDirectory.""" |
1201 | |
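To show how the new `PlatformError` reads when raised (as `EC2Instance.start` does with a botocore error message), here is a self-contained replica of the class added above; the error text is an invented example:

```python
class PlatformError(IOError):
    """Error type for platform errors (replica of the class in util.py)."""

    default_desc = 'unexpected error in platform.'

    def __init__(self, operation, description=None):
        # Fall back to the generic description when none is given.
        description = description if description else self.default_desc
        message = '%s: %s' % (operation, description)
        IOError.__init__(self, message)

try:
    raise PlatformError('start', 'InvalidAMIID.NotFound')
except PlatformError as error:
    text = str(error)
# text is 'start: InvalidAMIID.NotFound'
```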
1202 | diff --git a/tox.ini b/tox.ini |
1203 | index fdc8a66..88b82dc 100644 |
1204 | --- a/tox.ini |
1205 | +++ b/tox.ini |
1206 | @@ -134,4 +134,5 @@ passenv = HOME |
1207 | deps = |
1208 | pylxd==2.2.4 |
1209 | paramiko==2.3.1 |
1210 | + boto3==1.4.8 |
1211 | bzr+lp:simplestreams |
FAILED: Continuous integration, rev:ec97516df6546c1092eaf89960942d6064d435e7
https://jenkins.ubuntu.com/server/job/cloud-init-ci/627/
Executed test runs:
SUCCESS: Checkout
SUCCESS: Unit & Style Tests
SUCCESS: Ubuntu LTS: Build
FAILED: Ubuntu LTS: Integration
Click here to trigger a rebuild:
https://jenkins.ubuntu.com/server/job/cloud-init-ci/627/rebuild