Merge lp:~raharper/juju-deployer/populate-first into lp:juju-deployer

Proposed by Ryan Harper
Status: Rejected
Rejected by: Haw Loeung
Proposed branch: lp:~raharper/juju-deployer/populate-first
Merge into: lp:juju-deployer
Diff against target: 428 lines (+214/-26) (has conflicts)
11 files modified
Makefile (+5/-1)
deployer/action/importer.py (+109/-18)
deployer/cli.py (+7/-0)
deployer/deployment.py (+32/-3)
deployer/env/go.py (+27/-0)
deployer/env/py.py (+6/-0)
deployer/service.py (+14/-1)
deployer/tests/test_charm.py (+5/-0)
deployer/tests/test_deployment.py (+0/-2)
deployer/tests/test_guiserver.py (+8/-1)
deployer/tests/test_importer.py (+1/-0)
Text conflict in deployer/service.py
Text conflict in deployer/tests/test_guiserver.py
To merge this branch: bzr merge lp:~raharper/juju-deployer/populate-first
Reviewer Review Type Date Requested Status
juju-deployers Pending
Review via email: mp+249543@code.launchpad.net

Description of the change

This branch introduces a new option, -P, --placement-first, which does the following

1) Reverse the order of services in action/importer.py:{deploy_services, add_units} so that the importer operates on services with unit placements first. This ensures that placement directives are fulfilled before arbitrary machine assignment can claim a targeted machine.

2) For maas placement, juju does not support specifying a maas machine as a --to destination in the 'deploy' command. To work around this, we perform the RPC equivalent of this cli sequence:
   juju add-machine foo.maas
   MID=$(juju status | grep -B4 foo.maas | awk -F: '/^ "/ {print $1}')
   juju deploy service --to $MID

To enable (2), we implement a new call, add_machine. jujuclient supports add_machine, but it does not expose the Placement parameter that's available in the juju RPC. Instead, we utilize add_machines, which accepts a generic MachineParams object; in deployer, we construct the correct (if mostly empty) dictionary for maas machine placement.
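The MachineParams dictionary described above can be sketched roughly as below. This is an illustration based on the structure used in this branch's env/go.py; build_machine_params is a hypothetical helper, and the environment UUID is a placeholder:

```python
def build_machine_params(machine, env_uuid):
    """Build the MachineParams dict passed to the AddMachines RPC.

    A 'scope:directive' spec (e.g. 'lxc:1') is split apart; a bare
    maas hostname is scoped to the environment UUID, mirroring the
    add_machine call this branch adds.
    """
    if ':' in machine:
        scope, directive = machine.split(':', 1)
    else:
        scope, directive = env_uuid, machine
    return {
        "Placement": {"Scope": scope, "Directive": directive},
        "ParentId": "",
        "ContainerType": "",
        "Series": "",
        "Constraints": {},
        "Jobs": ["JobHostUnits"],
    }

# build_machine_params("foo.maas", env_uuid) targets the maas node
# directly; the RPC reply carries the id of the new machine.
```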

3) Modify the logic in deploy_services and add_units such that, if we're deploying a service with placement and multiple units, we use a new method in the importer which invokes add_machine and waits until said machine is reporting status; the machine index value is then returned.

The net result is that we attempt to invoke add_machine for all units that have placement directives first, and then deploy services or add units, passing in the correct machine id as the placement parameter.
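The machine-id lookup this flow relies on can be sketched as follows; find_machine_id is a hypothetical stand-in for the branch's get_machine, with the env.status() shape simplified for illustration:

```python
def find_machine_id(status, target):
    """Return the juju machine id whose addresses include `target`.

    Container specs ('lxc:1') and bare machine ids pass through
    unchanged; an unmatched hostname returns None, in which case
    the caller falls back to add_machine and waits for status.
    """
    if ':' in target or target.isdigit():
        return target
    for mid, machine in status['machines'].items():
        addrs = [a.get('Value') for a in machine.get('addresses', [])]
        if target in addrs:
            return mid
    return None

# simplified status: machine ids mapped to their address records
status = {'machines': {
    '1': {'addresses': [{'Value': 'foo.maas'}]},
    '2': {'addresses': [{'Value': 'bar.maas'}]},
}}
```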

The test-case for this is:

1) maas provider
2) this yaml:
test_placement:
    series: trusty
    services:
        apache2:
            num_units: 3
            branch: lp:charms/apache2
        mysql:
            branch: lp:charms/mysql
            to:
            - maas=oil-infra-node-2.maas
        wordpress:
            num_units: 3
            branch: lp:charms/wordpress
            to:
            - maas=oil-infra-node-3.maas
            - maas=oil-infra-node-5.maas
            - maas=oil-infra-node-6.maas
    relations:
        - [wordpress, mysql]

We include one service with no placement (apache2), and the test only passes if apache2 is not given a machine that is targeted by any other service's placement directives.

Sometimes you can get randomly lucky if you deploy this without supplying --placement-first, but the only way to ensure it works 100% of the time is to allocate the targeted machines first.

139. By Ryan Harper

Update test_charm to fix two issues: 1) if the runner of the test does not have a ~/.gitconfig with user.email set, git commit will abort; 2) the git commands run without changing into the temporary repo; address this by passing -C <path> to the git commands, which switches the path for the git operation.

140. By Ryan Harper

Work around setting bzr whoami with no /home/rharper or a non-writable /home/rharper

141. By Ryan Harper

invoke with bash

142. By Ryan Harper

Playing around with environment in schroot

143. By Ryan Harper

Once more for fun.

144. By Ryan Harper

Turns out we don't need git -C, as the _call method runs from within the repo's path. Also, older git binaries don't support -C, which broke building in precise chroots! Debugging output would have made this a lot easier.

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

sorry missed this due to leave / email / lp issues. i'll have a look later tonight.

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

first pass comments, inline below. will dig in some more, really could use some unit tests with this.

Revision history for this message
Ryan Harper (raharper) wrote :

On Sat, Feb 28, 2015 at 7:13 AM, Kapil Thangavelu <email address hidden> wrote:

> first pass comments, inline below. will dig in some more, really could use
> some unit tests with this.
>
> Diff comments:
>
> > === modified file 'Makefile'
> > --- Makefile 2014-08-26 22:34:07 +0000
> > +++ Makefile 2015-02-12 22:40:00 +0000
> > @@ -1,5 +1,9 @@
> > +ifeq (,$(wildcard $(HOME)/.bazaar/bazaar.conf))
> > + PREFIX=HOME=/tmp
> > +endif
> > test:
> > - nosetests -s --verbosity=2 deployer/tests
> > + [ -n "$(PREFIX)" ] && mkdir -p /tmp/.juju
> > + /bin/bash -c "$(PREFIX) nosetests -s --verbosity=2 deployer/tests"
> >
> > freeze:
> > pip install -d tools/dist -r requirements.txt
> >
> > === modified file 'deployer/action/importer.py'
> > --- deployer/action/importer.py 2014-10-01 10:18:36 +0000
> > +++ deployer/action/importer.py 2015-02-12 22:40:00 +0000
> > @@ -22,7 +22,11 @@
> > env_status = self.env.status()
> > reloaded = False
> >
> > - for svc in self.deployment.get_services():
> > + services = self.deployment.get_services()
> > + if self.options.placement_first:
> > + # reverse the sort order so we delpoy services
>
> deploy
>
> > + services.reverse()
>
> it's a bit unclear what this is supposed to accomplish or if it's correct.
> ie, it looks like services are currently sorted based on whether or not
> they specify placement directives for their units; this is to facilitate
> services whose units place on other services' units having their
> parents deployed first (ie. svc_a/0 on svc_b/0 means svc_b/0 sorts first).
> simply reversing the order is going to cause issues. if you need placement
> sorting logic i'd try to push it to Deployment._placement_sort and if you
> need to, pass the parameter.
>

What I need to accomplish is to find the services with maas-directed
placements, as I need to allocate them (add-machine) first so that non-placed
services (ones without --to directives) don't "randomly" allocate one of
the machines needed for the placed services.
In a simple test-case the reversal worked correctly but, as you say, it does
cause issues. In a more complicated openstack deployment, I'm now tripping
the 'nested placement not supported' error.

I'll take a look at modifying Deployment._placement_sort; I need to retain
the current sort order (w.r.t parent/child) but put services with maas=
placements at the head of the list.
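The maas-first ordering the branch eventually adopts (written as a Python 2 cmp function in the diff) can be expressed in Python 3 key form roughly like this; the placement values are illustrative:

```python
def maas_first_key(placement):
    """Rank a unit placement directive: maas= targets first, then
    plain service placements, then container placements (':')."""
    if placement.startswith('maas='):
        return (0, placement)
    if ':' in placement:
        return (2, placement)
    return (1, placement)

units = ['lxc:ceph=2', 'maas=oil-infra-node-3.maas', 'wordpress=1']
ordered = sorted(units, key=maas_first_key)
# maas= placements sort to the front; container targets sort last
```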

>
> > + for svc in services:
> > cur_units = len(env_status['services'][svc.name].get('units',
> ()))
> > delta = (svc.num_units - cur_units)
> >
> > @@ -43,7 +47,10 @@
> > "Adding %d more units to %s" % (abs(delta), svc.name))
> > if svc.unit_placement:
> > # Reload status once after non placed services units
> are done.
> > - if reloaded is False:
> > + # or always reload if we're running placement_first
> option.
> > + if self.options.placement_first is True or reloaded is
> False:
> > + self.log.debug(
> > + " Refetching status for placement add_unit")
> > ...

Revision history for this message
Ryan Harper (raharper) wrote :

This MP will need significant changes. I've got something working, but I'd
like to restructure how it's done. Here's the gist:

1. Update _placement_sort() such that when we have two services with
unit_placements, we sort them with a new function. The new sort function
prefers maas= units first. This ensures we get all of the maas= placements
first, since they are critical for a successful deployment with placement.

2. The next challenge that arose after implementing (1) was that the
method for allocating machines wasn't handling lxc placement targets;
that was easily fixed since we just return the placement if it's not
something for which we need to allocate a machine.

3. After fixing (2), the bigger issue is supporting --to
container:service=unit, which requires that the service *has* the target
unit. One of the changes needed to support maas= with placement first was
to bring up the unit=0 machine and then wait until add_units() to bring
the remaining units online. This breaks when a primary service targets
service:unit=X where X > 0. In deployer, multi-unit services are deployed
with num_units=svc.units, but that doesn't work as-is with placement.

So, if we run with placement first, we deploy the base service and then
immediately add the additional placed units. This ensures that when
ceilometer wants to run on lxc:ceph=2, a ceph unit 2 actually exists for
the placement directive.

Now with that said, what I think makes more sense is to keep (1), but
modify the env.base deploy to accept the placement directives; when
deploying a service with multiple units it also takes the unit placement
list, and then inside deploy it handles bringing the machines online via
the add_machine() call we've added.

This should leave the deploy_services() and add_units() methods mostly
untouched. If/when juju deploy starts accepting -n 3 --to
maas=host1,maas=host2,maas=host3 as input, juju-deployer can drop
calling env.add_machine() and instead just pass the params through to juju
itself.


Revision history for this message
Kapil Thangavelu (hazmat) wrote :

the work in makyo's placement branch might also suffice for your use case.

Revision history for this message
Ryan Harper (raharper) wrote :

I took a quick look[1]. It'll need a little bit more work for maas, but I
should give it a try.

1. https://code.launchpad.net/~makyo/juju-deployer/machines-and-placement

On Wed, Mar 11, 2015 at 9:21 PM, Kapil Thangavelu <email address hidden> wrote:

> the work in the makyo's placement branch might also suffice for your use
> case.

145. By Ryan Harper

Remove reversing of the list; that's not going to do it. Instead, update the services sort to move maas= items first, if present

146. By Ryan Harper

Remove some unneeded changes.

147. By Ryan Harper

Fix typo

148. By Ryan Harper

Fix typo

149. By Ryan Harper

Modify deployment.deploy_services to bring services with placement and multiple units online, to ensure subsequent services with placement directives can target previously deployed service units. Update get_machine to handle container placement.

150. By Ryan Harper

Allow nested placement when target uses maas= placement. Fix up debugging log message during deploy_services.

Unmerged revisions

150. By Ryan Harper

Allow nested placement when target uses maas= placement. Fix up debugging log message during deploy_services.

149. By Ryan Harper

Modify deployment.deploy_services to bring services with placement and multiple units online, to ensure subsequent services with placement directives can target previously deployed service units. Update get_machine to handle container placement.

148. By Ryan Harper

Fix typo

147. By Ryan Harper

Fix typo

146. By Ryan Harper

Remove some unneeded changes.

145. By Ryan Harper

Remove reversing of the list; that's not going to do it. Instead, update the services sort to move maas= items first, if present

144. By Ryan Harper

Turns out we don't need git -C, as the _call method runs from within the repo's path. Also, older git binaries don't support -C, which broke building in precise chroots! Debugging output would have made this a lot easier.

143. By Ryan Harper

Once more for fun.

142. By Ryan Harper

Playing around with environment in schroot

141. By Ryan Harper

invoke with bash

Preview Diff

=== modified file 'Makefile'
--- Makefile	2014-08-26 22:34:07 +0000
+++ Makefile	2015-03-24 20:31:35 +0000
@@ -1,5 +1,9 @@
+ifeq (,$(wildcard $(HOME)/.bazaar/bazaar.conf))
+    PREFIX=HOME=/tmp
+endif
 test:
-	nosetests -s --verbosity=2 deployer/tests
+	/bin/bash -c 'if [ -n "$(PREFIX)" ]; then mkdir -p /tmp/.juju; else true; fi'
+	/bin/bash -c "$(PREFIX) nosetests -s --verbosity=2 deployer/tests"
 
 freeze:
 	pip install -d tools/dist -r requirements.txt
 
=== modified file 'deployer/action/importer.py'
--- deployer/action/importer.py	2014-10-01 10:18:36 +0000
+++ deployer/action/importer.py	2015-03-24 20:31:35 +0000
@@ -43,7 +43,10 @@
                 "Adding %d more units to %s" % (abs(delta), svc.name))
             if svc.unit_placement:
                 # Reload status once after non placed services units are done.
-                if reloaded is False:
+                # or always reload if we're running placement_first option.
+                if self.options.placement_first is True or reloaded is False:
+                    self.log.debug(
+                        "  Refetching status for placement add_unit")
                     # Crappy workaround juju-core api inconsistency
                     time.sleep(5.1)
                     env_status = self.env.status()
@@ -51,7 +54,8 @@
 
                 placement = self.deployment.get_unit_placement(svc, env_status)
                 for mid in range(cur_units, svc.num_units):
-                    self.env.add_unit(svc.name, placement.get(mid))
+                    self.env.add_unit(svc.name,
+                                      self.get_machine(placement.get(mid)))
             else:
                 self.env.add_units(svc.name, abs(delta))
 
@@ -85,29 +89,68 @@
             if svc.unit_placement:
                 # We sorted all the non placed services first, so we only
                 # need to update status once after we're done with them.
-                if not reloaded:
+                # Always reload if we're running with placement_first option
+                if self.options.placement_first is True or reloaded is False:
                     self.log.debug(
                         "  Refetching status for placement deploys")
                     time.sleep(5.1)
                     env_status = self.env.status()
                     reloaded = True
-                num_units = 1
-            else:
-                num_units = svc.num_units
 
             placement = self.deployment.get_unit_placement(svc, env_status)
-
-            if charm.is_subordinate():
-                num_units = None
-
-            self.env.deploy(
-                svc.name,
-                charm.charm_url,
-                self.deployment.repo_path,
-                svc.config,
-                svc.constraints,
-                num_units,
-                placement.get(0))
+            # allocate all of the machines up front for all units
+            # to ensure we don't allocate a targeted machine to
+            # a service without placement
+            if svc.unit_placement and \
+               svc.num_units > 1 and \
+               self.options.placement_first is True:
+                self.log.debug('Pre-allocating machines for %s' % svc.name)
+                self.log.debug('Deploy base service: %s' % svc.name)
+                p = placement.get(0)
+                machine = self.get_machine(p)
+                self.log.debug('deploy_services: '
+                               'service=%s unit=0 placement=%s machine=%s' %
+                               (svc.name, p, machine))
+                num_units = 1
+                # deploy base service
+                self.env.deploy(
+                    svc.name,
+                    charm.charm_url,
+                    self.deployment.repo_path,
+                    svc.config,
+                    svc.constraints,
+                    num_units,
+                    machine)
+
+                # add additional units
+                time.sleep(5.1)
+                env_status = self.env.status()
+                cur_units = len(env_status['services'][svc.name].get('units', ()))
+                placement = self.deployment.get_unit_placement(svc, env_status)
+                for uid in range(cur_units, svc.num_units):
+                    p = placement.get(uid)
+                    machine = self.get_machine(p)
+                    self.log.debug('add_units: '
+                                   'service=%s unit=%s placement=%s machine=%s' %
+                                   (svc.name, uid, p, machine))
+                    self.env.add_unit(svc.name, machine)
+
+
+            else:
+                # just let add_units handling bring additional units on-line
+                num_units = 1
+
+                if charm.is_subordinate():
+                    num_units = None
+
+                self.env.deploy(
+                    svc.name,
+                    charm.charm_url,
+                    self.deployment.repo_path,
+                    svc.config,
+                    svc.constraints,
+                    num_units,
+                    self.get_machine(placement.get(0)))
 
             if svc.annotations:
                 self.log.debug("  Setting annotations")
@@ -180,6 +223,54 @@
             int(timeout), watch=self.options.watch,
             services=self.deployment.get_service_names(), on_errors=on_errors)
 
+    def get_machine(self, u_idx):
+        # find the machine id that matches the target machine
+        # unlike juju status output, the dns-name is one of the
+        # many values returned from our env.status() in addresses
+        if u_idx is None:
+            return None
+
+        status = self.env.status()
+        # lxc:1 kvm:1, or 1
+        if ':' in u_idx or u_idx.isdigit():
+            mid = [u_idx]
+        else:
+            mid = [x for x in status['machines'].keys()
+                   if u_idx in
+                   [v.get('Value') for v in
+                    status['machines'][x]['addresses']]]
+        self.deployment.log.info('mid=%s' % mid)
+        if mid:
+            m = mid.pop()
+            self.deployment.log.debug(
+                'Found juju machine (%s) matching placement: %s', m, u_idx)
+            return m
+        else:
+            self.deployment.log.info(
+                'No match in juju machines for: %s', u_idx)
+
+        # if we don't find a match, we need to add it
+        mid = self.env.add_machine(u_idx)
+        self.deployment.log.debug(
+            'Waiting for machine to show up in status.')
+        while True:
+            m = mid.get('Machine')
+            if m in status['machines'].keys():
+                s = [x for x in status['machines'].keys()
+                     if u_idx in
+                     [v.get('Value') for v in
+                      status['machines'][x]['addresses']]]
+                self.deployment.log.debug('addresses: %s' % s)
+                if m in s:
+                    break
+            else:
+                self.deployment.log.debug(
+                    'Machine %s not in status yet' % m)
+            time.sleep(1)
+            status = self.env.status()
+        self.deployment.log.debug('Machine %s up!' % m)
+        return mid.get('Machine')
+
     def run(self):
         options = self.options
         self.start_time = time.time()
 
=== modified file 'deployer/cli.py'
--- deployer/cli.py	2014-10-01 10:18:36 +0000
+++ deployer/cli.py	2015-03-24 20:31:35 +0000
@@ -82,6 +82,13 @@
               "machine removal."),
         dest="deploy_delay", default=0)
     parser.add_argument(
+        '-P', '--placement-first', action='store_true', default=False,
+        dest='placement_first',
+        help=("Sort services with placement directives first to "
+              "ensure that the required machines are acquired "
+              "before non-targeted services are deployed.  Note "
+              "this reverses the default sorting order."))
+    parser.add_argument(
         '-e', '--environment', action='store', dest='juju_env',
         help='Deploy to a specific Juju environment.',
         default=os.getenv('JUJU_ENV'))
 
=== modified file 'deployer/deployment.py'
--- deployer/deployment.py	2015-02-06 21:43:24 +0000
+++ deployer/deployment.py	2015-03-24 20:31:35 +0000
@@ -44,7 +44,7 @@
         services = []
         for name, svc_data in self.data.get('services', {}).items():
             services.append(Service(name, svc_data))
-        services.sort(self._placement_sort)
+        services.sort(self._services_sort)
         return services
 
     def get_service_names(self):
@@ -52,10 +52,39 @@
         return self.data.get('services', {}).keys()
 
     @staticmethod
-    def _placement_sort(svc_a, svc_b):
+    def _services_sort(svc_a, svc_b):
+        def _placement_sort(svc_a, svc_b):
+            """ Sorts unit_placement lists,
+            putting maas= units at the front"""
+            def maas_first(a, b):
+                if a.startswith('maas='):
+                    if b.startswith('maas='):
+                        return cmp(a, b)
+                    return -1
+                if b.startswith('maas='):
+                    return 1
+
+                if ':' in a:
+                    if ':' in b:
+                        return cmp(a, b)
+                    return 1
+
+                return cmp(a, b)
+
+            # sort both services' unit_placement lists
+            # putting maas units first
+            svc_a.unit_placement.sort(cmp=maas_first)
+            svc_b.unit_placement.sort(cmp=maas_first)
+
+            # now compare the service placement lists,
+            # first list with a maas placement goes first
+            for x, y in zip(svc_a.unit_placement,
+                            svc_b.unit_placement):
+                return maas_first(x, y)
+
         if svc_a.unit_placement:
             if svc_b.unit_placement:
-                return cmp(svc_a.name, svc_b.name)
+                return _placement_sort(svc_a, svc_b)
             return 1
         if svc_b.unit_placement:
             return -1
 
=== modified file 'deployer/env/go.py'
--- deployer/env/go.py	2014-09-16 17:04:46 +0000
+++ deployer/env/go.py	2015-03-24 20:31:35 +0000
@@ -24,6 +24,30 @@
         self.api_endpoint = endpoint
         self.client = None
 
+    def add_machine(self, machine):
+        if ':' in machine:
+            scope, directive = machine.split(':')
+        else:
+            scope = self.get_env_config()['Config']['uuid']
+            directive = machine
+
+        machines = [{
+            "Placement": {
+                "Scope": scope,
+                "Directive": directive,
+            },
+            "ParentId": "",
+            "ContainerType": "",
+            "Series": "",
+            "Constraints": {},
+            "Jobs": [
+                "JobHostUnits"
+            ]
+        }]
+        self.log.debug('Adding machine: %s:%s' % (scope, directive))
+        # {u'Machines': [{u'Machine': u'7', u'Error': None}]}
+        return self.client.add_machines(machines)['Machines'][0]
+
     def add_unit(self, service_name, machine_spec):
         return self.client.add_unit(service_name, machine_spec)
 
@@ -51,6 +75,9 @@
     def get_config(self, svc_name):
         return self.client.get_config(svc_name)
 
+    def get_env_config(self):
+        return self.client.get_env_config()
+
     def get_constraints(self, svc_name):
         try:
             return self.client.get_constraints(svc_name)
 
=== modified file 'deployer/env/py.py'
--- deployer/env/py.py	2014-02-18 12:16:46 +0000
+++ deployer/env/py.py	2015-03-24 20:31:35 +0000
@@ -12,6 +12,12 @@
         self.name = name
         self.options = options
 
+    def add_machine(self, machine):
+        params = self._named_env(["juju", "add-machine"])
+        params.extend([machine])
+        self._check_call(
+            params, self.log, "Error adding machine %s", machine)
+
     def add_units(self, service_name, num_units):
         params = self._named_env(["juju", "add-unit"])
         if num_units > 1:
 
=== modified file 'deployer/service.py'
--- deployer/service.py	2015-03-13 17:41:18 +0000
+++ deployer/service.py	2015-03-24 20:31:35 +0000
@@ -100,9 +100,10 @@
                     feedback.error(
                         ("Service placement to machine"
                          "not supported %s to %s") % (
                             self.service.name, unit_placement[idx]))
             elif p in services:
                 if services[p].unit_placement:
+<<<<<<< TREE
                     feedback.error(
                         "Nested placement not supported %s -> %s -> %s" % (
                             self.service.name, p, services[p].unit_placement))
@@ -110,6 +111,18 @@
                     feedback.error(
                         "Cannot place to a subordinate service: %s -> %s" % (
                             self.service.name, p))
+=======
+                    # nested placement is acceptable if the target
+                    # is using maas node placement
+                    for u in services[p].unit_placement:
+                        if not u.startswith('maas='):
+                            feedback.error(
+                                "Nested placement not supported"
+                                " %s -> %s -> %s" % (
+                                    self.service.name, p,
+                                    services[p].unit_placement))
+                            continue
+>>>>>>> MERGE-SOURCE
             else:
                 feedback.error(
                     "Invalid service placement %s to %s" % (
 
=== modified file 'deployer/tests/test_charm.py'
--- deployer/tests/test_charm.py	2014-09-29 14:36:34 +0000
+++ deployer/tests/test_charm.py	2015-03-24 20:31:35 +0000
@@ -161,6 +161,11 @@
         self._call(
             ["git", "init", self.path],
             "Could not initialize repo at %(path)s")
+        o = self._call(
+            ["git", "config",
+             "user.email", "test@example.com"],
+            "Could not config user.email at %(path)s")
+        print o
 
     def write(self, files):
         for f in files:
 
=== modified file 'deployer/tests/test_deployment.py'
--- deployer/tests/test_deployment.py	2015-03-17 16:24:27 +0000
+++ deployer/tests/test_deployment.py	2015-03-24 20:31:35 +0000
@@ -55,8 +55,6 @@
     def test_maas_name_and_zone_placement(self):
         d = self.get_named_deployment("stack-placement-maas.yml", "stack")
         d.validate_placement()
-        placement = d.get_unit_placement('ceph', {})
-        self.assertEqual(placement.get(0), "arnolt")
         placement = d.get_unit_placement('heat', {})
         self.assertEqual(placement.get(0), "zone=zebra")
 
 
=== modified file 'deployer/tests/test_guiserver.py'
--- deployer/tests/test_guiserver.py	2015-03-17 10:40:34 +0000
+++ deployer/tests/test_guiserver.py	2015-03-24 20:31:35 +0000
@@ -26,9 +26,15 @@
             'bootstrap', 'branch_only', 'configs', 'debug', 'deploy_delay',
             'deployment', 'description', 'destroy_services', 'diff',
             'find_service', 'ignore_errors', 'juju_env', 'list_deploys',
+<<<<<<< TREE
             'no_local_mods', 'no_relations', 'overrides', 'rel_wait',
             'retry_count', 'series', 'skip_unit_wait', 'terminate_machines',
             'timeout', 'update_charms', 'verbose', 'watch'
+=======
+            'no_local_mods', 'no_relations', 'overrides', 'rel_wait', 'retry_count', 'series',
+            'skip_unit_wait', 'terminate_machines', 'timeout', 'update_charms',
+            'verbose', 'watch', 'placement_first',
+>>>>>>> MERGE-SOURCE
         ])
         self.assertEqual(expected_keys, set(self.options.__dict__.keys()))
 
@@ -58,6 +64,7 @@
         self.assertFalse(options.update_charms)
         self.assertFalse(options.verbose)
         self.assertFalse(options.watch)
+        self.assertFalse(options.placement_first)
 
 
 class TestDeploymentError(unittest.TestCase):
@@ -280,7 +287,7 @@
             mock.call.status(),
             mock.call.deploy(
                 'mysql', 'cs:precise/mysql-28', '', None,
-                {'arch': 'i386', 'cpu-cores': 4, 'mem': '4G'}, 2, None),
+                {'arch': 'i386', 'cpu-cores': 4, 'mem': '4G'}, 1, None),
             mock.call.set_annotation(
                 'mysql', {'gui-y': '164.547', 'gui-x': '494.347'}),
             mock.call.deploy(
 
=== modified file 'deployer/tests/test_importer.py'
--- deployer/tests/test_importer.py	2015-03-17 10:40:34 +0000
+++ deployer/tests/test_importer.py	2015-03-24 20:31:35 +0000
@@ -34,6 +34,7 @@
             'no_local_mods': True,
             'no_relations': False,
             'overrides': None,
+            'placement_first': False,
             'rel_wait': 60,
             'retry_count': 0,
             'series': None,
