Merge ~utlemming/cloud-init/+git/cloud-init:digitalocean_networking into cloud-init:master

Proposed by Ben Howard on 2016-08-19
Status: Merged
Merged at revision: 9f83bb8e80806d3dd79ba426474dc3c696e19a41
Proposed branch: ~utlemming/cloud-init/+git/cloud-init:digitalocean_networking
Merge into: cloud-init:master
Diff against target: 625 lines (+485/-34)
3 files modified
cloudinit/sources/DataSourceDigitalOcean.py (+44/-26)
cloudinit/sources/helpers/digitalocean.py (+187/-0)
tests/unittests/test_datasource/test_digitalocean.py (+254/-8)
Reviewer: Scott Moser
Date requested: 2016-08-19
Status: Needs Information on 2016-09-28
Review via email: mp+303471@code.launchpad.net

Description of the Change

On DigitalOcean, the meta-data service provides network configuration over a link-local address (ip4LL). This change uses Cloud-init's ability to set up networking from data provided in the meta-data.

Rather than using avahi-autoipd (and thus adding an additional dependency to Cloud-init), a random link-local address is chosen. DigitalOcean's networking guarantees that the entire 169.254.0.0/16 net block is private to the fabric and the droplet. Therefore, any arbitrary address in 169.254.0.0/16 can be used (with the notable exception of the metadata address itself, 169.254.169.254). That said, the datasource selects a random address in 169.254.[1-168].[0-255], queries the metadata service, and then flushes the interface.
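The address selection described above can be sketched in a few lines (a minimal illustration; `pick_link_local_address` is a hypothetical name, the real helper lives in cloudinit/sources/helpers/digitalocean.py):

```python
import random

def pick_link_local_address():
    """Pick a random ip4LL address in CIDR notation.

    Restricting the third octet to 1-168 guarantees the result can
    never collide with the metadata service at 169.254.169.254.
    """
    return "169.254.{0}.{1}/16".format(random.randint(1, 168),
                                       random.randint(0, 255))

addr = pick_link_local_address()
```

Because the whole /16 is private to the droplet, no duplicate-address detection is needed before assigning the result to the NIC.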

Included in this merge proposal is an updated test case to use actual meta-data from a droplet.

Scott Moser (smoser) wrote :

Overall, this is really nice. It's mostly exactly what I would like to have network datasources doing.

General flow in the 'get' for your datasource then looks like:

a.) quickly check (i.e., via DMI info) if you are on the cloud; if not, return
b.) you are in all likelihood on your cloud, so this datasource is in charge
c.) configure local networking transiently, as you did
d.) read the network metadata service
e.) undo your network changes
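The five steps above can be sketched roughly like this (all helper bodies here are stubs for illustration; the real ones shell out to `ip` and fetch the metadata URL):

```python
MD_URL = "http://169.254.169.254/metadata/v1.json"

def read_sysinfo():
    # a) cheap check via DMI/SMBIOS; stubbed as "yes, droplet 123"
    return True, "123"

def assign_ipv4_link_local():
    # c) would add a random 169.254.x.y/16 address and bring the NIC up
    return "eth0"

def read_metadata(url):
    # d) would fetch the metadata JSON over the link-local route
    return {"droplet_id": "123"}

def del_ipv4_link_local(nic):
    # e) would flush the transient address from the NIC
    pass

def get_data():
    is_do, droplet_id = read_sysinfo()
    if not is_do:
        # b) not this cloud: some other datasource is in charge
        return False
    nic = assign_ipv4_link_local()
    try:
        md = read_metadata(MD_URL)
    finally:
        # undo the network changes even if the metadata fetch fails
        del_ipv4_link_local(nic)
    return bool(md)
```

Wrapping the teardown in `finally` is what keeps step e) from being skipped when the fetch raises.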

there are some nits inline below, and please make sure that in the local datasource 'get', you're un-configuring your ipv4 link-local addresses.

Also, this needs some unit tests. The unit tests can focus on parse_network_configuration, as that is the brunt of the work.

review: Needs Fixing
Scott Moser (smoser) wrote :

fyi, you can push to
 git+ssh://<email address hidden>/~utlemming/cloud-init

then it's referenceable via the shorter url of
 https://code.launchpad.net/~utlemming/cloud-init/
rather than
 https://code.launchpad.net/~utlemming/cloud-init/

see Repository Urls at
  https://help.launchpad.net/Code/Git
for more information.

Scott Moser (smoser) wrote :

fixing previous comment

fyi, you can push to
 git+ssh://<email address hidden>/~utlemming/cloud-init

then it's referenceable via the shorter url of
 https://code.launchpad.net/~utlemming/cloud-init/
compared to
 https://code.launchpad.net/~utlemming/cloud-init/+git/cloud-init

The first is the "default" cloud-init repository for you. You can have multiple repositories under +git/.

see Repository Urls at
  https://help.launchpad.net/Code/Git
for more information.

using the shorter form allows me to do something like:

for user in lpuser1 lpuser2 utlemming; do
   git remote add $user git://git.launchpad.net/~$user/cloud-init
   git fetch $user
done

739d113... by Ben Howard on 2016-08-30

Minor changes per the MP.
- fixed function names
- changed scope of some messages

Ben Howard (utlemming) wrote :

Thanks, Scott, for the review. I just got back from vacation. I've fixed the nitpicks and will look at getting the unit tests done shortly.

In the meantime, I've updated my branch.

4d78b9f... by Ben Howard on 2016-08-31

DigitalOcean: add test cases for networking config

Ben Howard (utlemming) wrote :

Added test cases for the networking configuration.

Ben Howard (utlemming) wrote :

Also, I've pushed the branch to:
git+ssh://<email address hidden>/~utlemming/cloud-init

I figured I would leave this MP open rather than creating a new one so that the conversation wasn't lost.

Scott Moser (smoser) wrote :

Some changes I made at http://paste.ubuntu.com/23146037/ that I'd like you to take.
I've pasted the comments at http://paste.ubuntu.com/23146043/ in case this doesn't format well.

Overall, this is good, and thank you.

Comments below. Where I put 'FIX', I'd like you to make sure that things are as you'd expect,
and where I put a 'TEST' comment, please test that.

 - DataSourceDigitalOcean.get()
 - explicitly populate self.metadata from keys in the full
     DigitalOcean metadata rather than assuming that values are there, and
     not having an explicit list of what we expect. FIX: there may be other
     values that were implied by copying the metadata to .metadata, please
     check that.

   this also allowed me to drop the get_public_ssh_keys and
     availability_zone methods.

   - changed
        dns = self.metadata.get('dns')
        nameservers = dns.get('nameservers')
        self._network_config = do_helper.parse_network_configuration(
     to
        nameservers = self.metadata_full['dns']['nameservers']

     that would fail badly if the metadata does not have a 'dns' entry,
     but really no worse than before, since if 'dns' was None you'd
     get a TypeError on None.get('nameservers')

   - put the reading of the metadata into a function in do_helper.
     The value of doing this is that mocking DataSource.get() is
     much easier, as you can just mock read_metadata() and have it return
     what you expect.

   FIX: change your tests to just use mock, and mock that rather than
     using httpretty.

 - fixed a logexc usage, improved another one with more useful information.
 - dropped use of 'util.which("ip")' in one case, rather just let
   execution find that.
 - renamed do_helper.parse_network_configuration to
   convert_network_configuration

 - TEST:
 You can easily add tests of the network configuration conversion
   that do not need the datasource infrastructure in place. It's good to
   have the test of the datasource, but you can test more network config
   parsing directly by just calling it with DigitalOcean values and
   making assertions on the response.

  See my example TestNetworkConvert and how it's easier than dealing with
  the datasource.
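The mock-based testing suggested in this review might look like the following (a sketch using a stand-in namespace for the helper module; the point is just to patch `read_metadata()` and hand back canned data instead of running an HTTP stub):

```python
import types
from unittest import mock

# stand-in for cloudinit.sources.helpers.digitalocean (hypothetical here)
do_helper = types.SimpleNamespace(
    read_metadata=lambda url, **kwargs: {"from": "network"})

FAKE_MD = {"droplet_id": "123", "hostname": "cloudinit-test"}

with mock.patch.object(do_helper, "read_metadata", return_value=FAKE_MD):
    # inside the patch, code under test sees the canned metadata
    md = do_helper.read_metadata("http://169.254.169.254/metadata/v1.json")

# outside the patch, the original function is restored automatically
restored = do_helper.read_metadata("ignored")
```

Compared to httpretty, this keeps the test focused on the datasource logic rather than on URL handling, and removes a test dependency.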

review: Needs Fixing
de8b146... by Ben Howard on 2016-09-13

Applied proposed patches from Scott Moser in MP

528541a... by Ben Howard on 2016-09-13

updated test cases to not use httpretty

a9d8221... by Ben Howard on 2016-09-14

explicitly test actual JSON for expected types and values

f274cfc... by Ben Howard on 2016-09-14

updated patch from MP for pylint/pep8 compliance

Ben Howard (utlemming) wrote :

okay, I think the issues are addressed. I added a test case to explicitly test the JSON that is returned from the metadata service; the logic being that using actual JSON gives a better, real-world test. I can change it to be explicit, but I would rather be testing against real data, not mocked-up data.

This code is all tested as well.

Scott Moser (smoser) wrote :

I'm going to merge this now. But for your reference, the changes I've made are below.

They generally include:
 - move '_get_sysinfo' implementation into do_helper.read_sysinfo() just
   for easier patching
 - some modifications to tests
 - remove httpretty completely
 - drop your test 'TestDataSourceDigitalOcean'. I think you misunderstood
   what I was asking. I just wanted to actually use a response from
   the metadata service.
 - use of .copy() on DO_META so it didn't inadvertently get updated.
 - drop no longer used 'apply_filter' on get_data()
 - drop 'launch_index' as that is inherited from DataSource.
 - flake8 related cleanups.
 - assertEqual rather than assertEquals
 - add negative test (test_returns_false_not_on_docean)
 - log critical *and* raise an exception if DigitalOcean is found as the
   system manufacturer and droplet_id is not found.
 - some LOG usage cleanups (use LOG.debug("msg %s", key) rather than
   LOG.debug("msg {}".format(key)), as the latter always requires rendering/
   formatting 'key' to a string, and is also harder to make work
   with long messages).
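The difference between the two LOG.debug styles above can be seen in a small sketch (the buffer handler is only here so the output can be inspected):

```python
import io
import logging

LOG = logging.getLogger("demo")
LOG.setLevel(logging.DEBUG)
buf = io.StringIO()
LOG.addHandler(logging.StreamHandler(buf))

key = "droplet_id"
# preferred: %-style args defer rendering until the record is emitted
LOG.debug("found key %s", key)
# discouraged: .format() renders the string even when DEBUG is disabled
LOG.debug("found key {}".format(key))
```

Both calls produce the same message when DEBUG is enabled; the %-style form simply skips the formatting work when the level filters the record out.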

diff --git a/cloudinit/sources/DataSourceDigitalOcean.py b/cloudinit/sources/DataSourceDigitalOcean.py
index 95a939f..c5770d5 100644
--- a/cloudinit/sources/DataSourceDigitalOcean.py
+++ b/cloudinit/sources/DataSourceDigitalOcean.py
@@ -54,38 +54,17 @@ class DataSourceDigitalOcean(sources.DataSource):
         self._network_config = None

     def _get_sysinfo(self):
- # DigitalOcean embeds vendor ID and instance/droplet_id in the
- # SMBIOS information
+ return do_helper.read_sysinfo()

- LOG.debug("checking if instance is a DigitalOcean droplet")
-
- # Detect if we are on DigitalOcean and return the Droplet's ID
- vendor_name = util.read_dmi_data("system-manufacturer")
- if vendor_name != "DigitalOcean":
- return (False, None)
-
- LOG.info("running on DigitalOcean")
-
- droplet_id = util.read_dmi_data("system-serial-number")
- if droplet_id:
- LOG.debug(("system identified via SMBIOS as DigitalOcean Droplet"
- "{}").format(droplet_id))
- else:
- LOG.critical(("system identified via SMBIOS as a DigitalOcean "
- "Droplet, but did not provide an ID. Please file a "
- "support ticket at: "
- "https://cloud.digitalocean.com/support/tickets/"
- "new"))
-
- return (True, droplet_id)
-
- def get_data(self, apply_filter=False):
+ def get_data(self):
         (is_do, droplet_id) = self._get_sysinfo()

         # only proceed if we know we are on DigitalOcean
         if not is_do:
             return False

+ LOG.info("Running on digital ocean. droplet_id=%s" % droplet_id)
+
         ipv4LL_nic = None
         if self.use_ip4LL:
             ipv4LL_nic = do_helper.assign_ipv4_link_local()
@@ -108,10 +87,6 @@ class DataSourceDigitalOcean(sources.DataSource):

         return True

- @property
- def launch_index(self):
- return None
-
     def check_instance_id(self, sys_cfg):
         return sources.instance_id_ma...

review: Approve
Scott Moser (smoser) wrote :

Wow.
More work... further reading of this.

I have a branch I'm happy with, and have proposed merging it into this one...
 https://code.launchpad.net/~smoser/cloud-init/+git/cloud-init/+merge/307093

the one question I have is in NIC_MAP.
the network data we get from digital ocean is like:
 {
  'public': [{'mac': '04:01...', ...}],
  'private': [{'mac': '05:01...', ...}],
 }

the top level is a dictionary, but the second is a list.
Currently you assign the name by a map.
What if there are 2 entries in 'public'?

At this point they'd both get 'eth0' as their name.

review: Needs Information
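The naming concern raised above can be shown in a few lines (the second 'public' entry is hypothetical; DigitalOcean states it will not occur):

```python
NIC_MAP = {'public': 'eth0', 'private': 'eth1'}

interfaces = {
    'public': [{'mac': '04:01:00:00:00:01'},
               {'mac': '04:01:00:00:00:02'}],  # hypothetical second NIC
}

# a plain type->name map hands every NIC of a given type the same name
names = [NIC_MAP.get(nic_type)
         for nic_type, nics in interfaces.items()
         for _nic in nics]
```

With two entries under 'public', both NICs would be named 'eth0'; a counter per type or a fall-through to the sysfs name would disambiguate.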
Ben Howard (utlemming) wrote :

> the one question i have is in NIC_MAP.
> the network data we get from digital ocean is like:
> {
> 'public': [{'mac': '04:01...', ...}],
> 'private': [{'mac': '05:01...', ...}],
> }
>
> the top level is a dictionary, but the second is a list.
> Currently you assign the name by a map.
> What if there are 2 entries in 'public'?

I can certify that there won't be, and if there are, then this is a _huge_ bug. We have noted that changes to the meta-data should be checked against Cloud-init first.

That said, I've double-checked and I'm seeing (from actual droplets, i.e. 'curl http://169.254.169.254/metadata/v1.json | jq')

  "interfaces": {
    "public": [
      {
        "ipv4": {
          "ip_address": "REDACTED",
          "netmask": "255.255.240.0",
          "gateway": "159.203.96.1"
        },
        "anchor_ipv4": {
          "ip_address": "10.17.0.5",
          "netmask": "255.255.0.0",
          "gateway": "10.17.0.1"
        },
        "mac": "1e:9a:51:a2:e4:0a",
        "type": "public"
      }
    ]
  },

And
  "interfaces": {
    "private": [
      {
        "ipv4": {
          "ip_address": "10.132.84.146",
          "netmask": "255.255.0.0",
          "gateway": "10.132.0.1"
        },
        "mac": "96:b5:74:db:dd:85",
        "type": "private"
      }
    ],
    "public": [
      {
        "ipv4": {
          "ip_address": "REDACTED",
          "netmask": "255.255.240.0",
          "gateway": "159.203.96.1"
        },
        "ipv6": {
          "ip_address": "REDACTED",
          "cidr": 64,
          "gateway": "2604:A880:0800:00A1:0000:0000:0000:0001"
        },
        "anchor_ipv4": {
          "ip_address": "10.17.0.6",
          "netmask": "255.255.0.0",
          "gateway": "10.17.0.1"
        },
        "mac": "5a:b1:61:a0:b8:98",
        "type": "public"
      }
    ]
  },

Scott Moser (smoser) wrote :

Yeah. Just odd that it is a list if there cannot be more than one entry.

Preview Diff

1diff --git a/cloudinit/sources/DataSourceDigitalOcean.py b/cloudinit/sources/DataSourceDigitalOcean.py
2index fc596e1..95a939f 100644
3--- a/cloudinit/sources/DataSourceDigitalOcean.py
4+++ b/cloudinit/sources/DataSourceDigitalOcean.py
5@@ -18,13 +18,12 @@
6 # DigitalOcean Droplet API:
7 # https://developers.digitalocean.com/documentation/metadata/
8
9-import json
10-
11 from cloudinit import log as logging
12 from cloudinit import sources
13-from cloudinit import url_helper
14 from cloudinit import util
15
16+import cloudinit.sources.helpers.digitalocean as do_helper
17+
18 LOG = logging.getLogger(__name__)
19
20 BUILTIN_DS_CONFIG = {
21@@ -36,11 +35,13 @@ BUILTIN_DS_CONFIG = {
22 MD_RETRIES = 30
23 MD_TIMEOUT = 2
24 MD_WAIT_RETRY = 2
25+MD_USE_IPV4LL = True
26
27
28 class DataSourceDigitalOcean(sources.DataSource):
29 def __init__(self, sys_cfg, distro, paths):
30 sources.DataSource.__init__(self, sys_cfg, distro, paths)
31+ self.distro = distro
32 self.metadata = dict()
33 self.ds_cfg = util.mergemanydict([
34 util.get_cfg_by_path(sys_cfg, ["datasource", "DigitalOcean"], {}),
35@@ -48,7 +49,9 @@ class DataSourceDigitalOcean(sources.DataSource):
36 self.metadata_address = self.ds_cfg['metadata_url']
37 self.retries = self.ds_cfg.get('retries', MD_RETRIES)
38 self.timeout = self.ds_cfg.get('timeout', MD_TIMEOUT)
39+ self.use_ip4LL = self.ds_cfg.get('use_ip4LL', MD_USE_IPV4LL)
40 self.wait_retry = self.ds_cfg.get('wait_retry', MD_WAIT_RETRY)
41+ self._network_config = None
42
43 def _get_sysinfo(self):
44 # DigitalOcean embeds vendor ID and instance/droplet_id in the
45@@ -83,32 +86,27 @@ class DataSourceDigitalOcean(sources.DataSource):
46 if not is_do:
47 return False
48
49- LOG.debug("reading metadata from {}".format(self.metadata_address))
50- response = url_helper.readurl(self.metadata_address,
51- timeout=self.timeout,
52- sec_between=self.wait_retry,
53- retries=self.retries)
54+ ipv4LL_nic = None
55+ if self.use_ip4LL:
56+ ipv4LL_nic = do_helper.assign_ipv4_link_local()
57
58- contents = util.decode_binary(response.contents)
59- decoded = json.loads(contents)
60+ md = do_helper.read_metadata(
61+ self.metadata_address, timeout=self.timeout,
62+ sec_between=self.wait_retry, retries=self.retries)
63
64- self.metadata = decoded
65- self.metadata['instance-id'] = decoded.get('droplet_id', droplet_id)
66- self.metadata['local-hostname'] = decoded.get('hostname', droplet_id)
67- self.vendordata_raw = decoded.get("vendor_data", None)
68- self.userdata_raw = decoded.get("user_data", None)
69- return True
70+ self.metadata_full = md
71+ self.metadata['instance-id'] = md.get('droplet_id', droplet_id)
72+ self.metadata['local-hostname'] = md.get('hostname', droplet_id)
73+ self.metadata['interfaces'] = md.get('interfaces')
74+ self.metadata['public-keys'] = md.get('public_keys')
75+ self.metadata['availability_zone'] = md.get('region', 'default')
76+ self.vendordata_raw = md.get("vendor_data", None)
77+ self.userdata_raw = md.get("user_data", None)
78
79- def get_public_ssh_keys(self):
80- public_keys = self.metadata.get('public_keys', [])
81- if isinstance(public_keys, list):
82- return public_keys
83- else:
84- return [public_keys]
85+ if ipv4LL_nic:
86+ do_helper.del_ipv4_link_local(ipv4LL_nic)
87
88- @property
89- def availability_zone(self):
90- return self.metadata.get('region', 'default')
91+ return True
92
93 @property
94 def launch_index(self):
95@@ -118,10 +116,30 @@ class DataSourceDigitalOcean(sources.DataSource):
96 return sources.instance_id_matches_system_uuid(
97 self.get_instance_id(), 'system-serial-number')
98
99+ @property
100+ def network_config(self):
101+ """Configure the networking. This needs to be done each boot, since
102+ the IP information may have changed due to snapshot and/or
103+ migration.
104+ """
105+
106+ if self._network_config:
107+ return self._network_config
108+
109+ interfaces = self.metadata.get('interfaces')
110+ LOG.debug(interfaces)
111+ if not interfaces:
112+ raise Exception("Unable to get meta-data from server....")
113+
114+ nameservers = self.metadata_full['dns']['nameservers']
115+ self._network_config = do_helper.convert_network_configuration(
116+ interfaces, nameservers)
117+ return self._network_config
118+
119
120 # Used to match classes to dependencies
121 datasources = [
122- (DataSourceDigitalOcean, (sources.DEP_FILESYSTEM, sources.DEP_NETWORK)),
123+ (DataSourceDigitalOcean, (sources.DEP_FILESYSTEM, )),
124 ]
125
126
127diff --git a/cloudinit/sources/helpers/digitalocean.py b/cloudinit/sources/helpers/digitalocean.py
128new file mode 100644
129index 0000000..2e112a2
130--- /dev/null
131+++ b/cloudinit/sources/helpers/digitalocean.py
132@@ -0,0 +1,187 @@
133+# vi: ts=4 expandtab
134+#
135+# Author: Ben Howard <bh@digitalocean.com>
136+
137+# This program is free software: you can redistribute it and/or modify
138+# it under the terms of the GNU General Public License version 3, as
139+# published by the Free Software Foundation.
140+#
141+# This program is distributed in the hope that it will be useful,
142+# but WITHOUT ANY WARRANTY; without even the implied warranty of
143+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
144+# GNU General Public License for more details.
145+#
146+# You should have received a copy of the GNU General Public License
147+# along with this program. If not, see <http://www.gnu.org/licenses/>.
148+
149+import logging
150+import random
151+import json
152+
153+from cloudinit import net as cloudnet
154+from cloudinit import util
155+from cloudinit import url_helper
156+
157+NIC_MAP = {'public': 'eth0', 'private': 'eth1'}
158+
159+LOG = logging.getLogger(__name__)
160+
161+
162+def assign_ipv4_link_local(nic=None):
163+ """Bring up NIC using an address using link-local (ip4LL) IPs. On
164+ DigitalOcean, the link-local domain is per-droplet routed, so there
165+ is no risk of collisions. However, to be more safe, the ip4LL
166+ address is random.
167+ """
168+
169+ if not nic:
170+ for cdev in sorted(cloudnet.get_devicelist()):
171+ if cloudnet.is_physical(cdev):
172+ nic = cdev
173+ LOG.debug("assigned nic '%s' for link-local discovery", nic)
174+ break
175+
176+ if not nic:
177+ raise RuntimeError("unable to find interfaces to access the"
178+ "meta-data server. This droplet is broken.")
179+
180+ addr = "169.254.{0}.{1}/16".format(random.randint(1, 168),
181+ random.randint(0, 255))
182+
183+ ip_addr_cmd = ['ip', 'addr', 'add', addr, 'dev', nic]
184+ ip_link_cmd = ['ip', 'link', 'set', 'dev', nic, 'up']
185+
186+ if not util.which('ip'):
187+ raise RuntimeError("No 'ip' command available to configure ip4LL "
188+ "address")
189+
190+ try:
191+ (result, _err) = util.subp(ip_addr_cmd)
192+ LOG.debug("assigned ip4LL address '%s' to '%s'", addr, nic)
193+
194+ (result, _err) = util.subp(ip_link_cmd)
195+ LOG.debug("brought device '%s' up", nic)
196+ except Exception:
197+ util.logexc(LOG, "ip4LL address assignment of '%s' to '%s' failed."
198+ " Droplet networking will be broken", addr, nic)
199+ raise
200+
201+ return nic
202+
203+
204+def del_ipv4_link_local(nic=None):
205+ """Remove the ip4LL address. While this is not necessary, the ip4LL
206+ address is extraneous and confusing to users.
207+ """
208+ if not nic:
209+ LOG.debug("no link_local address interface defined, skipping link "
210+ "local address cleanup")
211+ return
212+
213+ LOG.debug("cleaning up ipv4LL address")
214+
215+ ip_addr_cmd = ['ip', 'addr', 'flush', 'dev', nic]
216+
217+ try:
218+ (result, _err) = util.subp(ip_addr_cmd)
219+ LOG.debug("removed ip4LL addresses from %s", nic)
220+
221+ except Exception as e:
222+ util.logexc(LOG, "failed to remove ip4LL address from '%s'.", nic, e)
223+
224+
225+def convert_network_configuration(config, dns_servers):
226+ """Convert the DigitalOcean Network description into Cloud-init's netconfig
227+ format.
228+
229+ Example JSON:
230+ {'public': [
231+ {'mac': '04:01:58:27:7f:01',
232+ 'ipv4': {'gateway': '45.55.32.1',
233+ 'netmask': '255.255.224.0',
234+ 'ip_address': '45.55.50.93'},
235+ 'anchor_ipv4': {
236+ 'gateway': '10.17.0.1',
237+ 'netmask': '255.255.0.0',
238+ 'ip_address': '10.17.0.9'},
239+ 'type': 'public',
240+ 'ipv6': {'gateway': '....',
241+ 'ip_address': '....',
242+ 'cidr': 64}}
243+ ],
244+ 'private': [
245+ {'mac': '04:01:58:27:7f:02',
246+ 'ipv4': {'gateway': '10.132.0.1',
247+ 'netmask': '255.255.0.0',
248+ 'ip_address': '10.132.75.35'},
249+ 'type': 'private'}
250+ ]
251+ }
252+ """
253+
254+ def _get_subnet_part(pcfg, nameservers=None):
255+ subpart = {'type': 'static',
256+ 'control': 'auto',
257+ 'address': pcfg.get('ip_address'),
258+ 'gateway': pcfg.get('gateway')}
259+
260+ if nameservers:
261+ subpart['dns_nameservers'] = " ".join(nameservers)
262+
263+ if ":" in pcfg.get('ip_address'):
264+ subpart['address'] = "{0}/{1}".format(pcfg.get('ip_address'),
265+ pcfg.get('cidr'))
266+ else:
267+ subpart['netmask'] = pcfg.get('netmask')
268+
269+ return subpart
270+
271+ all_nics = config.get('public')
272+ all_nics.extend(config.get('private'))
273+ macs_to_nics = cloudnet.get_interfaces_by_mac()
274+ nic_configs = []
275+
276+ for nic in all_nics:
277+
278+ mac_address = nic.get('mac')
279+ sysfs_name = macs_to_nics.get(mac_address)
280+ nic_type = nic.get('type', 'unknown')
281+ if_name = NIC_MAP.get(nic_type, sysfs_name)
282+
283+ LOG.debug("mapped {0} interface to {1}, assigning name of {2}".format(
284+ mac_address, sysfs_name, if_name))
285+
286+ ncfg = {'type': 'physical',
287+ 'mac_address': mac_address,
288+ 'name': if_name}
289+
290+ subnets = []
291+ for netdef in ('ipv4', 'ipv6', 'anchor_ipv4', 'anchor_ipv6'):
292+ raw_subnet = nic.get(netdef, None)
293+ if not raw_subnet:
294+ continue
295+
296+ sub_part = _get_subnet_part(raw_subnet)
297+ if nic_type == 'public' and 'anchor' not in netdef:
298+ # add DNS resolvers to the public interfaces only
299+ sub_part = _get_subnet_part(raw_subnet, dns_servers)
300+ else:
301+ # remove the gateway any non-public interfaces
302+ if 'gateway' in sub_part:
303+ del sub_part['gateway']
304+
305+ subnets.append(sub_part)
306+
307+ ncfg['subnets'] = subnets
308+ nic_configs.append(ncfg)
309+ LOG.debug("nic {0} configuration: {1}".format(if_name, ncfg))
310+
311+ return {'version': 1, 'config': nic_configs}
312+
313+
314+def read_metadata(url, timeout=2, sec_between=2, retries=30):
315+ response = url_helper.readurl(url, timeout=timeout,
316+ sec_between=sec_between, retries=retries)
317+ if not response.ok():
318+ raise RuntimeError("unable to read metadata at %s" % url)
319+ return json.loads(response.contents.decode())
320diff --git a/tests/unittests/test_datasource/test_digitalocean.py b/tests/unittests/test_datasource/test_digitalocean.py
321index f5d2ef3..2e27f89 100644
322--- a/tests/unittests/test_datasource/test_digitalocean.py
323+++ b/tests/unittests/test_datasource/test_digitalocean.py
324@@ -20,6 +20,7 @@ import json
325 from cloudinit import helpers
326 from cloudinit import settings
327 from cloudinit.sources import DataSourceDigitalOcean
328+from cloudinit.sources.helpers import digitalocean
329
330 from .. import helpers as test_helpers
331 from ..helpers import HttprettyTestCase
332@@ -30,14 +31,66 @@ DO_MULTIPLE_KEYS = ["ssh-rsa AAAAB3NzaC1yc2EAAAA... test1@do.co",
333 "ssh-rsa AAAAB3NzaC1yc2EAAAA... test2@do.co"]
334 DO_SINGLE_KEY = "ssh-rsa AAAAB3NzaC1yc2EAAAA... test@do.co"
335
336-DO_META = {
337- 'user_data': 'user_data_here',
338- 'vendor_data': 'vendor_data_here',
339- 'public_keys': DO_SINGLE_KEY,
340- 'region': 'nyc3',
341- 'id': '2000000',
342- 'hostname': 'cloudinit-test',
343+# the following JSON was taken from droplet (that's why its a string)
344+DO_META = json.loads("""
345+{
346+ "droplet_id": "22532410",
347+ "hostname": "utl-96268",
348+ "vendor_data": "vendordata goes here",
349+ "user_data": "userdata goes here",
350+ "public_keys": "",
351+ "auth_key": "authorization_key",
352+ "region": "nyc3",
353+ "interfaces": {
354+ "private": [
355+ {
356+ "ipv4": {
357+ "ip_address": "10.132.6.205",
358+ "netmask": "255.255.0.0",
359+ "gateway": "10.132.0.1"
360+ },
361+ "mac": "04:01:57:d1:9e:02",
362+ "type": "private"
363+ }
364+ ],
365+ "public": [
366+ {
367+ "ipv4": {
368+ "ip_address": "192.0.0.20",
369+ "netmask": "255.255.255.0",
370+ "gateway": "104.236.0.1"
371+ },
372+ "ipv6": {
373+ "ip_address": "2604:A880:0800:0000:1000:0000:0000:0000",
374+ "cidr": 64,
375+ "gateway": "2604:A880:0800:0000:0000:0000:0000:0001"
376+ },
377+ "anchor_ipv4": {
378+ "ip_address": "10.0.0.5",
379+ "netmask": "255.255.0.0",
380+ "gateway": "10.0.0.1"
381+ },
382+ "mac": "04:01:57:d1:9e:01",
383+ "type": "public"
384+ }
385+ ]
386+ },
387+ "floating_ip": {
388+ "ipv4": {
389+ "active": false
390+ }
391+ },
392+ "dns": {
393+ "nameservers": [
394+ "2001:4860:4860::8844",
395+ "2001:4860:4860::8888",
396+ "8.8.8.8"
397+ ]
398+ }
399 }
400+""")
401+
402+DO_META['public_keys'] = DO_SINGLE_KEY
403
404 MD_URL = 'http://169.254.169.254/metadata/v1.json'
405
406@@ -50,6 +103,86 @@ def _request_callback(method, uri, headers):
407 return (200, headers, json.dumps(DO_META))
408
409
410+class TestJSONValid(test_helpers.TestCase):
411+ """
412+ Test that the JSON is valid. While the JSON above is actual JSON is
413+ ACTUAL JSON returned from the meta-data server on DigitalOcean, we
414+ should check for explicit values. This is likely an exercise in pendantry,
415+ (it would be easier to just mock out the data structure, but this gives
416+ us the advantage of testing an actual meta-data response
417+ """
418+
419+ def testExpectedKeys(self):
420+ required = ['droplet_id', 'hostname', 'user_data', 'vendor_data',
421+ 'public_keys', 'region', 'interfaces', 'dns',
422+ 'floating_ip']
423+
424+ for req in required:
425+ self.assertIn(req, DO_META)
426+
427+ def testExpectedStrings(self):
428+ should_be_strings = ["hostname", "vendor_data", "user_data",
429+ "auth_key", "region", "droplet_id"]
430+
431+ for sbs in should_be_strings:
432+ print("checking type %s" % sbs)
433+ self.assertIs(str, type(DO_META[sbs]))
434+
435+ def testInterfaceTypes(self):
436+ '''test for expected types in interface definitions'''
437+
438+ expected_ip_strings = {
439+ 'anchor_ipv4': ["ip_address", "netmask", "gateway"],
440+ 'ipv4': ["ip_address", "netmask", "gateway"],
441+ 'ipv6': ["ip_address", "cidr", "gateway"]
442+ }
443+
444+ expected_int_strings = ["mac", "type"]
445+
446+ expected_nets = {
447+ 'public': ["ipv4", "ipv6", "anchor_ipv4"],
448+ 'private': ["private"]
449+ }
450+
451+ interfaces = DO_META['interfaces']
452+ self.assertIs(dict, type(interfaces))
453+
454+ # make sure that expected interfaces types are there
455+ for if_type in expected_nets:
456+ int_def = interfaces[if_type]
457+ self.assertIs(list, type(int_def))
458+ self.assertIn(if_type, expected_nets)
459+
460+ for enet in expected_nets[if_type]:
461+ print(enet)
462+ for sub in int_def:
463+ for sub_key in sub:
464+ sk = sub[sub_key]
465+ if sk is str:
466+ self.assertIn(sub_key, expected_int_strings)
467+ elif sk is dict:
468+ self.assertIn(sk, expected_ip_strings)
469+ for ssk in expected_ip_strings[sk]:
470+ self.assertIs(str, type(sk[ssk]))
471+
472+ def testfloatingIP(self):
473+ '''test to make sure the floating ip is defined'''
474+ floating_ip = DO_META['floating_ip']
475+
476+ self.assertIs(dict, type(floating_ip))
477+ self.assertIs(dict, type(floating_ip['ipv4']))
478+ self.assertIs(bool, type(floating_ip['ipv4']['active']))
479+
480+ def testDNS(self):
481+ '''test to make sure that DNS is defined'''
482+ dns = DO_META['dns']
483+ self.assertIs(dict, type(dns))
484+ self.assertIs(list, type(dns['nameservers']))
485+
486+ for servers in dns['nameservers']:
487+ self.assertIs(str, type(servers))
488+
489+
490 class TestDataSourceDigitalOcean(HttprettyTestCase):
491 """
492 Test reading the meta-data
493@@ -59,6 +192,7 @@ class TestDataSourceDigitalOcean(HttprettyTestCase):
494 self.ds = DataSourceDigitalOcean.DataSourceDigitalOcean(
495 settings.CFG_BUILTIN, None,
496 helpers.Paths({}))
497+ self.ds.use_ip4LL = False
498 self.ds._get_sysinfo = _mock_dmi
499 super(TestDataSourceDigitalOcean, self).setUp()
500
501@@ -87,7 +221,7 @@ class TestDataSourceDigitalOcean(HttprettyTestCase):
502 self.assertEqual(DO_META.get('region'),
503 self.ds.availability_zone)
504
505- self.assertEqual(DO_META.get('id'),
506+ self.assertEqual(DO_META.get('droplet_id'),
507 self.ds.get_instance_id())
508
509 self.assertEqual(DO_META.get('hostname'),
510@@ -112,3 +246,115 @@ class TestDataSourceDigitalOcean(HttprettyTestCase):
511 self.ds.get_public_ssh_keys())
512
513 self.assertIsInstance(self.ds.get_public_ssh_keys(), list)
514+
515+
516+class TestNetworkConvert(test_helpers.TestCase):
517+
518+ def _get_networking(self):
519+ netcfg = digitalocean.convert_network_configuration(
520+ DO_META['interfaces'], DO_META['dns']['nameservers'])
521+ self.assertIn('config', netcfg)
522+ return netcfg
523+
524+ def test_networking_defined(self):
525+ netcfg = self._get_networking()
526+ self.assertIsNotNone(netcfg)
527+
528+ for nic_def in netcfg.get('config'):
529+ print(json.dumps(nic_def, indent=3))
530+ n_type = nic_def.get('type')
531+ n_subnets = nic_def.get('type')
532+ n_name = nic_def.get('name')
533+ n_mac = nic_def.get('mac_address')
534+
535+ self.assertIsNotNone(n_type)
536+ self.assertIsNotNone(n_subnets)
537+ self.assertIsNotNone(n_name)
538+ self.assertIsNotNone(n_mac)
539+
540+ def _get_nic_definition(self, int_type, expected_name):
541+ """helper function to return if_type (i.e. public) and the expected
542+ name used by cloud-init (i.e eth0)"""
543+ netcfg = self._get_networking()
544+ meta_def = (DO_META.get('interfaces')).get(int_type)[0]
545+
546+ self.assertEquals(int_type, meta_def.get('type'))
547+
548+ for nic_def in netcfg.get('config'):
549+ print(nic_def)
550+ if nic_def.get('name') == expected_name:
551+ return nic_def, meta_def
552+
553+ def _get_match_subn(self, subnets, ip_addr):
554+ """get the matching subnet definition based on ip address"""
555+ for subn in subnets:
556+ address = subn.get('address')
557+ self.assertIsNotNone(address)
558+
559+ # equals won't work because of ipv6 addressing being in
560+ # cidr notation, i.e fe00::1/64
561+ if ip_addr in address:
562+ print(json.dumps(subn, indent=3))
563+ return subn
564+
565+ def test_public_interface_defined(self):
566+ """test that the public interface is defined as eth0"""
567+ (nic_def, meta_def) = self._get_nic_definition('public', 'eth0')
568+ self.assertEquals('eth0', nic_def.get('name'))
569+ self.assertEquals(meta_def.get('mac'), nic_def.get('mac_address'))
570+ self.assertEquals('physical', nic_def.get('type'))
571+
572+ def test_private_interface_defined(self):
573+ """test that the private interface is defined as eth1"""
574+ (nic_def, meta_def) = self._get_nic_definition('private', 'eth1')
575+ self.assertEquals('eth1', nic_def.get('name'))
576+ self.assertEquals(meta_def.get('mac'), nic_def.get('mac_address'))
577+ self.assertEquals('physical', nic_def.get('type'))
578+
579+ def _check_dns_nameservers(self, subn_def):
580+ self.assertIn('dns_nameservers', subn_def)
581+ expected_nameservers = DO_META['dns']['nameservers']
582+ nic_nameservers = subn_def.get('dns_nameservers').split()
583+ for dns_server in expected_nameservers:
584+ self.assertIn(dns_server, nic_nameservers)
585+
586+ def test_public_interface_ipv6(self):
587+ """test public ipv6 addressing"""
588+ (nic_def, meta_def) = self._get_nic_definition('public', 'eth0')
589+ ipv6_def = meta_def.get('ipv6')
590+ self.assertIsNotNone(ipv6_def)
591+
592+ subn_def = self._get_match_subn(nic_def.get('subnets'),
593+ ipv6_def.get('ip_address'))
594+
595+ cidr_notated_address = "{0}/{1}".format(ipv6_def.get('ip_address'),
596+ ipv6_def.get('cidr'))
597+
598+ self.assertEquals(cidr_notated_address, subn_def.get('address'))
599+ self.assertEquals(ipv6_def.get('gateway'), subn_def.get('gateway'))
600+ self._check_dns_nameservers(subn_def)
601+
602+ def test_public_interface_ipv4(self):
603+ """test public ipv4 addressing"""
604+ (nic_def, meta_def) = self._get_nic_definition('public', 'eth0')
605+ ipv4_def = meta_def.get('ipv4')
606+ self.assertIsNotNone(ipv4_def)
607+
608+ subn_def = self._get_match_subn(nic_def.get('subnets'),
609+ ipv4_def.get('ip_address'))
610+
611+ self.assertEquals(ipv4_def.get('netmask'), subn_def.get('netmask'))
612+ self.assertEquals(ipv4_def.get('gateway'), subn_def.get('gateway'))
613+ self._check_dns_nameservers(subn_def)
614+
615+ def test_public_interface_anchor_ipv4(self):
616+ """test public ipv4 addressing"""
617+ (nic_def, meta_def) = self._get_nic_definition('public', 'eth0')
618+ ipv4_def = meta_def.get('anchor_ipv4')
619+ self.assertIsNotNone(ipv4_def)
620+
621+ subn_def = self._get_match_subn(nic_def.get('subnets'),
622+ ipv4_def.get('ip_address'))
623+
624+ self.assertEquals(ipv4_def.get('netmask'), subn_def.get('netmask'))
625+ self.assertNotIn('gateway', subn_def)
