Merge lp:~raharper/juju-deployer/populate-first into lp:juju-deployer

Proposed by Ryan Harper
Status: Rejected
Rejected by: Haw Loeung
Proposed branch: lp:~raharper/juju-deployer/populate-first
Merge into: lp:juju-deployer
Diff against target: 428 lines (+214/-26) (has conflicts)
11 files modified
Makefile (+5/-1)
deployer/action/importer.py (+109/-18)
deployer/cli.py (+7/-0)
deployer/deployment.py (+32/-3)
deployer/env/go.py (+27/-0)
deployer/env/py.py (+6/-0)
deployer/service.py (+14/-1)
deployer/tests/test_charm.py (+5/-0)
deployer/tests/test_deployment.py (+0/-2)
deployer/tests/test_guiserver.py (+8/-1)
deployer/tests/test_importer.py (+1/-0)
Text conflict in deployer/service.py
Text conflict in deployer/tests/test_guiserver.py
To merge this branch: bzr merge lp:~raharper/juju-deployer/populate-first
Reviewer Review Type Date Requested Status
juju-deployers Pending
Review via email: mp+249543@code.launchpad.net

Description of the change

This branch introduces a new option, -P, --placement-first, which does the following:

1) Reverse the order of services in action/importer.py:{deploy_services, add_units} so that the importer operates on services with unit placements first. This ensures that services with placement directives are fulfilled first, preventing arbitrary machine placement from grabbing a machine that is targeted.

2) For maas placement, juju does not support specifying a maas machine as a --to destination in the 'deploy' command. To work around this, we perform the RPC equivalent of this CLI sequence:
   juju add-machine foo.maas
   MID=$(juju status | grep -B4 foo.maas | awk -F: '/^ "/ {print $1}')
   juju deploy service --to $MID

To enable (2), we implement a new call, add_machine. jujuclient supports add_machine, but it does not expose the Placement parameter available in the juju RPC. Instead, we use add_machines, which accepts a generic MachineParams object; in deployer, we construct the correct (if mostly empty) dictionary for maas machine placement.
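The MachineParams construction described above can be sketched as follows. This is a hedged sketch mirroring the structure shown in the diff; the helper name is hypothetical, and the real code obtains the environment UUID from get_env_config() rather than a placeholder:

```python
def maas_machine_params(placement):
    """Build the MachineParams entry that jujuclient's add_machines expects.

    `placement` is either "scope:directive" (e.g. "lxc:1") or a bare MAAS
    hostname; for the latter, the environment UUID is used as the scope,
    mirroring what `juju add-machine foo.maas` does. The UUID below is a
    placeholder for illustration only.
    """
    if ':' in placement:
        scope, directive = placement.split(':', 1)
    else:
        scope = 'env-uuid-placeholder'  # real code: get_env_config()['Config']['uuid']
        directive = placement
    return {
        "Placement": {"Scope": scope, "Directive": directive},
        "ParentId": "",
        "ContainerType": "",
        "Series": "",
        "Constraints": {},
        "Jobs": ["JobHostUnits"],
    }
```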

3) Modify the logic in deploy_services and add_units so that when deploying a service with placement and multiple units, we use a new method in the importer that invokes add_machine and waits until the machine reports status; the machine index value is then returned.

The net result is that we invoke add_machine for all units with placement directives first, and then deploy services or add units, passing in the correct machine id as the placement parameter.
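Resolving a placement directive to a machine id relies on matching the MAAS hostname against the address list that env.status() returns for each machine (unlike the CLI's single dns-name field). A minimal sketch of that lookup, with a hypothetical helper name:

```python
def find_machine_id(status, hostname):
    """Return the juju machine id whose address list contains `hostname`.

    `status` is the dict returned by env.status(); each machine entry
    carries an `addresses` list of {"Value": ...} dictionaries. Returns
    None when no machine matches (the caller would then add_machine).
    """
    for mid, machine in status.get('machines', {}).items():
        values = [a.get('Value') for a in machine.get('addresses', [])]
        if hostname in values:
            return mid
    return None
```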

The test-case for this is:

1) maas provider
2) this yaml:
test_placement:
    series: trusty
    services:
        apache2:
            num_units: 3
            branch: lp:charms/apache2
        mysql:
            branch: lp:charms/mysql
            to:
            - maas=oil-infra-node-2.maas
        wordpress:
            num_units: 3
            branch: lp:charms/wordpress
            to:
            - maas=oil-infra-node-3.maas
            - maas=oil-infra-node-5.maas
            - maas=oil-infra-node-6.maas
    relations:
        - [wordpress, mysql]

We include one service with no placement (apache2); the test passes only if apache2 is not given a machine that is allocated to any other service's placement directives.

Sometimes you can get randomly lucky deploying this without supplying --placement-first, but the only way to ensure it works 100% of the time is to allocate the targeted machines first.
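The maas-first ordering that the branch adds to the services sort can be expressed as a sort key. This is a Python 3 rendition for illustration (the branch itself uses a cmp-style comparator inside _services_sort, and the helper name here is hypothetical):

```python
def placement_rank(placement):
    """Sort key for unit placements: maas= directives sort first,
    plain machine/service targets next, and container placements
    (lxc:..., kvm:...) last; ties break lexicographically."""
    if placement.startswith('maas='):
        return (0, placement)
    if ':' in placement:
        return (2, placement)
    return (1, placement)
```

With this key, `sorted(placements, key=placement_rank)` puts the maas-targeted entries at the head of the list so their machines are acquired before anything else is placed.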

139. By Ryan Harper

Update test_charm to fix two issues. 1) if the runner of the test does not have a ~/.gitconfig with user.email set, git commit will abort. 2) The git commands run without changing to the temporary repo; address this by passing -C <path> to git commands which switches the path for the git operation.

140. By Ryan Harper

Workaround setting bzr whomi with no /home/rharper or non-writable /home/rharper

141. By Ryan Harper

invoke with bash

142. By Ryan Harper

Playing around with environment in schroot

143. By Ryan Harper

Once more for fun.

144. By Ryan Harper

Turns out we don't need git -C, since the _call method runs from within the repo's path. Also, older git binaries don't support -C, which broke building in precise chroots! Debugging output would have made this a lot easier.

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

sorry missed this due to leave / email / lp issues. i'll have a look later tonight.

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

first pass comments, inline below. will dig in some more, really could use some unit tests with this.

Revision history for this message
Ryan Harper (raharper) wrote :

On Sat, Feb 28, 2015 at 7:13 AM, Kapil Thangavelu <email address hidden> wrote:

> first pass comments, inline below. will dig in some more, really could use
> some unit tests with this.
>
> Diff comments:
>
> > === modified file 'Makefile'
> > --- Makefile 2014-08-26 22:34:07 +0000
> > +++ Makefile 2015-02-12 22:40:00 +0000
> > @@ -1,5 +1,9 @@
> > +ifeq (,$(wildcard $(HOME)/.bazaar/bazaar.conf))
> > + PREFIX=HOME=/tmp
> > +endif
> > test:
> > - nosetests -s --verbosity=2 deployer/tests
> > + [ -n "$(PREFIX)" ] && mkdir -p /tmp/.juju
> > + /bin/bash -c "$(PREFIX) nosetests -s --verbosity=2 deployer/tests"
> >
> > freeze:
> > pip install -d tools/dist -r requirements.txt
> >
> > === modified file 'deployer/action/importer.py'
> > --- deployer/action/importer.py 2014-10-01 10:18:36 +0000
> > +++ deployer/action/importer.py 2015-02-12 22:40:00 +0000
> > @@ -22,7 +22,11 @@
> > env_status = self.env.status()
> > reloaded = False
> >
> > - for svc in self.deployment.get_services():
> > + services = self.deployment.get_services()
> > + if self.options.placement_first:
> > + # reverse the sort order so we delpoy services
>
> deploy
>
> > + services.reverse()
>
> its a bit unclear what this is supposed to accomplish or if its correct.
> ie, it looks like services are currently sorted based on whether or not
> they specify placement directives for their units, this is to facilitate
> services that have units that place on other services units have their
> parents deployed first (ie. svc_a/0 on svc_b/0 means svc_b/0 sorts first).
> simply reversing the order is going to cause issues. if you need placement
> sorting logic i'd try to push it to Deployment._placement_sort and if you
> need pass the parameter.
>

What I need to accomplish is to find the services with maas-directed
placements, as I need to allocate them (add-machine) first so that
non-placed services (ones without --to directives) don't "randomly" grab
one of the machines needed for the placed services.
In a simple test case the reverse worked correctly, but as you say, it does
cause issues. In a more complicated OpenStack deployment, I'm now tripping
the 'nested placement not supported' error.

I'll take a look at modifying Deployment._placement_sort; I need to retain
the current sort order (w.r.t. parent/child) but put services with maas=
placements at the head of the list.

>
> > + for svc in services:
> > cur_units = len(env_status['services'][svc.name].get('units',
> ()))
> > delta = (svc.num_units - cur_units)
> >
> > @@ -43,7 +47,10 @@
> > "Adding %d more units to %s" % (abs(delta), svc.name))
> > if svc.unit_placement:
> > # Reload status once after non placed services units
> are done.
> > - if reloaded is False:
> > + # or always reload if we're running placement_first
> option.
> > + if self.options.placement_first is True or reloaded is
> False:
> > + self.log.debug(
> > + " Refetching status for placement add_unit")
> > ...

Revision history for this message
Ryan Harper (raharper) wrote :

This MP will need significant changes. I've got something working, but I'd
like to restructure how it's done. Here's the gist:

1. Update _placement_sort() such that when we have two services with
unit placements, we sort them in a new function. The new sort function
prefers maas= units first, ensuring we handle all of the maas= placements
first, since they are critical for a successful deployment with placement.

2. The next challenge after implementing (1) was that the method for
allocating machines wasn't handling lxc placement targets. That was easily
fixed: we just return the placement unchanged if it's not something we need
to allocate a machine for.

3. After fixing (2), the bigger issue is supporting --to
container:service=unit, which requires that the service *has* the target
unit. One of the changes needed to support maas= with placement first was
to bring up the unit 0 machine and then wait until add_units() to bring
the remaining units online. This breaks when a service targets
service:unit=X where X > 0. In deployer, multi-unit services are deployed
with num_units=svc.units, but that doesn't work as-is with placement.

So, if we run with placement first, we deploy the base service and then
immediately add the additional placed units. This ensures that when
ceilometer wants to run on lxc:ceph=2, we actually have ceph unit 2
available for placement directives.

Now with that said, what I think makes more sense is to keep (1), but
modify the env.base deploy to accept the placement directives: when
deploying a service with multiple units, it also takes the unit placement
list, and inside deploy it handles bringing the machines online via the
add_machine() call we've added.

This should leave the deploy_services() and add_units() methods mostly
untouched. If/when juju deploy starts accepting -n 3 --to
maas=host1,maas=host2,maas=host3 as input, juju-deployer can drop
calling env.add_machine() and instead just pass the params through to juju
itself.
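The deploy-the-base-service-then-add-placed-units flow described above can be sketched as follows. This is a simplified illustration, not deployer's actual code: the function and RecordingEnv names are hypothetical, and the env.deploy signature is reduced from the real one (which also takes repo path, config, and constraints):

```python
def deploy_with_placed_units(env, name, charm_url, placements):
    """Deploy unit 0 first, then immediately add the remaining placed
    units so that later services can target them (e.g. lxc:ceph=2
    requires a ceph unit 2 to already exist)."""
    env.deploy(name, charm_url, num_units=1, placement=placements[0])
    for placement in placements[1:]:
        env.add_unit(name, placement)


class RecordingEnv:
    """Minimal stand-in for deployer's environment object, used only to
    demonstrate call ordering; real deployments go through env/go.py."""
    def __init__(self):
        self.calls = []

    def deploy(self, name, charm_url, num_units, placement):
        self.calls.append(('deploy', name, num_units, placement))

    def add_unit(self, name, placement):
        self.calls.append(('add_unit', name, placement))
```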

On Mon, Mar 2, 2015 at 11:03 AM, Ryan Harper <email address hidden>
wrote:

> On Sat, Feb 28, 2015 at 7:13 AM, Kapil Thangavelu <email address hidden>
> wrote:
>
> > first pass comments, inline below. will dig in some more, really could
> use
> > some unit tests with this.
> >
> > Diff comments:
> >
> > > === modified file 'Makefile'
> > > --- Makefile 2014-08-26 22:34:07 +0000
> > > +++ Makefile 2015-02-12 22:40:00 +0000
> > > @@ -1,5 +1,9 @@
> > > +ifeq (,$(wildcard $(HOME)/.bazaar/bazaar.conf))
> > > + PREFIX=HOME=/tmp
> > > +endif
> > > test:
> > > - nosetests -s --verbosity=2 deployer/tests
> > > + [ -n "$(PREFIX)" ] && mkdir -p /tmp/.juju
> > > + /bin/bash -c "$(PREFIX) nosetests -s --verbosity=2
> deployer/tests"
> > >
> > > freeze:
> > > pip install -d tools/dist -r requirements.txt
> > >
> > > === modified file 'deployer/action/importer.py'
> > > --- deployer/action/importer.py 2014-10-01 10:18:36 +0000
> > > +++ deployer/action/importer.py 2015-02-12 22:40:00 +0000
> > > @@ -22,7 +22,11 @@
> > > env_status = self.env.status()
> > > ...

Revision history for this message
Kapil Thangavelu (hazmat) wrote :

the work in makyo's placement branch might also suffice for your use case.

Revision history for this message
Ryan Harper (raharper) wrote :

I took a quick look[1]. It'll need a little bit more work for maas, but I
should give it a try.

1. https://code.launchpad.net/~makyo/juju-deployer/machines-and-placement

On Wed, Mar 11, 2015 at 9:21 PM, Kapil Thangavelu <email address hidden> wrote:

> the work in the makyo's placement branch might also suffice for your use
> case.
> --
>
> https://code.launchpad.net/~raharper/juju-deployer/populate-first/+merge/249543
> You are the owner of lp:~raharper/juju-deployer/populate-first.
>

145. By Ryan Harper

Remove reversing of the list; that's not going to do it. Instead, update the services sort to move maas= items first, if present

146. By Ryan Harper

Remove some unneeded changes.

147. By Ryan Harper

Fix typo

148. By Ryan Harper

Fix typo

149. By Ryan Harper

Modify deployment.deploy_services to bring services with placement and multiple units online to ensure subsequent services with placement can target previously deployed service units. Update get_machine to handle container placement.

150. By Ryan Harper

Allow nested placement when target uses maas= placement. Fix up debugging log message during deploy_services.

Unmerged revisions

150. By Ryan Harper

Allow nested placement when target uses maas= placement. Fix up debugging log message during deploy_services.

149. By Ryan Harper

Modify deployment.deploy_services to bring services with placement and multiple units online to ensure subsequent services with placement can target previously deployed service units. Update get_machine to handle container placement.

148. By Ryan Harper

Fix typo

147. By Ryan Harper

Fix typo

146. By Ryan Harper

Remove some unneeded changes.

145. By Ryan Harper

Remove reversing of the list; that's not going to do it. Instead, update the services sort to move maas= items first, if present

144. By Ryan Harper

Turns out we don't need git -C, since the _call method runs from within the repo's path. Also, older git binaries don't support -C, which broke building in precise chroots! Debugging output would have made this a lot easier.

143. By Ryan Harper

Once more for fun.

142. By Ryan Harper

Playing around with environment in schroot

141. By Ryan Harper

invoke with bash

Preview Diff

1=== modified file 'Makefile'
2--- Makefile 2014-08-26 22:34:07 +0000
3+++ Makefile 2015-03-24 20:31:35 +0000
4@@ -1,5 +1,9 @@
5+ifeq (,$(wildcard $(HOME)/.bazaar/bazaar.conf))
6+ PREFIX=HOME=/tmp
7+endif
8 test:
9- nosetests -s --verbosity=2 deployer/tests
10+ /bin/bash -c 'if [ -n "$(PREFIX)" ]; then mkdir -p /tmp/.juju; else true; fi'
11+ /bin/bash -c "$(PREFIX) nosetests -s --verbosity=2 deployer/tests"
12
13 freeze:
14 pip install -d tools/dist -r requirements.txt
15
16=== modified file 'deployer/action/importer.py'
17--- deployer/action/importer.py 2014-10-01 10:18:36 +0000
18+++ deployer/action/importer.py 2015-03-24 20:31:35 +0000
19@@ -43,7 +43,10 @@
20 "Adding %d more units to %s" % (abs(delta), svc.name))
21 if svc.unit_placement:
22 # Reload status once after non placed services units are done.
23- if reloaded is False:
24+ # or always reload if we're running placement_first option.
25+ if self.options.placement_first is True or reloaded is False:
26+ self.log.debug(
27+ " Refetching status for placement add_unit")
28 # Crappy workaround juju-core api inconsistency
29 time.sleep(5.1)
30 env_status = self.env.status()
31@@ -51,7 +54,8 @@
32
33 placement = self.deployment.get_unit_placement(svc, env_status)
34 for mid in range(cur_units, svc.num_units):
35- self.env.add_unit(svc.name, placement.get(mid))
36+ self.env.add_unit(svc.name,
37+ self.get_machine(placement.get(mid)))
38 else:
39 self.env.add_units(svc.name, abs(delta))
40
41@@ -85,29 +89,68 @@
42 if svc.unit_placement:
43 # We sorted all the non placed services first, so we only
44 # need to update status once after we're done with them.
45- if not reloaded:
46+ # Always reload if we're running with placement_first option
47+ if self.options.placement_first is True or reloaded is False:
48 self.log.debug(
49 " Refetching status for placement deploys")
50 time.sleep(5.1)
51 env_status = self.env.status()
52 reloaded = True
53- num_units = 1
54- else:
55- num_units = svc.num_units
56
57 placement = self.deployment.get_unit_placement(svc, env_status)
58-
59- if charm.is_subordinate():
60- num_units = None
61-
62- self.env.deploy(
63- svc.name,
64- charm.charm_url,
65- self.deployment.repo_path,
66- svc.config,
67- svc.constraints,
68- num_units,
69- placement.get(0))
70+ # allocate all of the machines up front for all units
71+ # to ensure we don't allocate a targeted machine to
72+ # a service without placement
73+ if svc.unit_placement and \
74+ svc.num_units > 1 and \
75+ self.options.placement_first is True:
76+ self.log.debug('Pre-allocating machines for %s' % svc.name)
77+ self.log.debug('Deploy base service: %s' % svc.name)
78+ p = placement.get(0)
79+ machine = self.get_machine(p)
80+ self.log.debug('deploy_services: '
81+ 'service=%s unit=0 placement=%s machine=%s' %
82+ (svc.name, p, machine))
83+ num_units = 1
84+ # deploy base service
85+ self.env.deploy(
86+ svc.name,
87+ charm.charm_url,
88+ self.deployment.repo_path,
89+ svc.config,
90+ svc.constraints,
91+ num_units,
92+ machine)
93+
94+ # add additional units
95+ time.sleep(5.1)
96+ env_status = self.env.status()
97+ cur_units = len(env_status['services'][svc.name].get('units', ()))
98+ placement = self.deployment.get_unit_placement(svc, env_status)
99+ for uid in range(cur_units, svc.num_units):
100+ p = placement.get(uid)
101+ machine = self.get_machine(p)
102+ self.log.debug('add_units: '
103+ 'service=%s unit=%s placement=%s machine=%s' %
104+ (svc.name, uid, p, machine))
105+ self.env.add_unit(svc.name, machine)
106+
107+
108+ else:
109+ # just let add_units handling bring additional units on-line
110+ num_units = 1
111+
112+ if charm.is_subordinate():
113+ num_units = None
114+
115+ self.env.deploy(
116+ svc.name,
117+ charm.charm_url,
118+ self.deployment.repo_path,
119+ svc.config,
120+ svc.constraints,
121+ num_units,
122+ self.get_machine(placement.get(0)))
123
124 if svc.annotations:
125 self.log.debug(" Setting annotations")
126@@ -180,6 +223,54 @@
127 int(timeout), watch=self.options.watch,
128 services=self.deployment.get_service_names(), on_errors=on_errors)
129
130+ def get_machine(self, u_idx):
131+ # find the machine id that matches the target machine
132+ # unlike juju status output, the dns-name is one of the
133+ # many values returned from our env.status() in addresses
134+ if u_idx is None:
135+ return None
136+
137+ status = self.env.status()
138+ # lxc:1 kvm:1, or 1
139+ if ':' in u_idx or u_idx.isdigit():
140+ mid = [u_idx]
141+ else:
142+ mid = [x for x in status['machines'].keys()
143+ if u_idx in
144+ [v.get('Value') for v in
145+ status['machines'][x]['addresses']]]
146+ self.deployment.log.info('mid=%s' % mid)
147+ if mid:
148+ m = mid.pop()
149+ self.deployment.log.debug(
150+ 'Found juju machine (%s) matching placement: %s', m, u_idx)
151+ return m
152+ else:
153+ self.deployment.log.info(
154+ 'No match in juju machines for: %s', u_idx)
155+
156+ # if we don't find a match, we need to add it
157+ mid = self.env.add_machine(u_idx)
158+ self.deployment.log.debug(
159+ 'Waiting for machine to show up in status.')
160+ while True:
161+ m = mid.get('Machine')
162+ if m in status['machines'].keys():
163+ s = [x for x in status['machines'].keys()
164+ if u_idx in
165+ [v.get('Value') for v in
166+ status['machines'][x]['addresses']]]
167+ self.deployment.log.debug('addresses: %s' % s)
168+ if m in s:
169+ break
170+ else:
171+ self.deployment.log.debug(
172+ 'Machine %s not in status yet' % m)
173+ time.sleep(1)
174+ status = self.env.status()
175+ self.deployment.log.debug('Machine %s up!' % m)
176+ return mid.get('Machine')
177+
178 def run(self):
179 options = self.options
180 self.start_time = time.time()
181
182=== modified file 'deployer/cli.py'
183--- deployer/cli.py 2014-10-01 10:18:36 +0000
184+++ deployer/cli.py 2015-03-24 20:31:35 +0000
185@@ -82,6 +82,13 @@
186 "machine removal."),
187 dest="deploy_delay", default=0)
188 parser.add_argument(
189+ '-P', '--placement-first', action='store_true', default=False,
190+ dest='placement_first',
191+ help=("Sort services with placement directives first to "
192+ "ensure that the required machines are acquired "
193+ "before non-targeted services are deployed. Note "
194+ "this reverses the default sorting order."))
195+ parser.add_argument(
196 '-e', '--environment', action='store', dest='juju_env',
197 help='Deploy to a specific Juju environment.',
198 default=os.getenv('JUJU_ENV'))
199
200=== modified file 'deployer/deployment.py'
201--- deployer/deployment.py 2015-02-06 21:43:24 +0000
202+++ deployer/deployment.py 2015-03-24 20:31:35 +0000
203@@ -44,7 +44,7 @@
204 services = []
205 for name, svc_data in self.data.get('services', {}).items():
206 services.append(Service(name, svc_data))
207- services.sort(self._placement_sort)
208+ services.sort(self._services_sort)
209 return services
210
211 def get_service_names(self):
212@@ -52,10 +52,39 @@
213 return self.data.get('services', {}).keys()
214
215 @staticmethod
216- def _placement_sort(svc_a, svc_b):
217+ def _services_sort(svc_a, svc_b):
218+ def _placement_sort(svc_a, svc_b):
219+ """ Sorts unit_placement lists,
220+ putting maas= units at the front"""
221+ def maas_first(a, b):
222+ if a.startswith('maas='):
223+ if b.startswith('maas='):
224+ return cmp(a, b)
225+ return -1
226+ if b.startswith('maas='):
227+ return 1
228+
229+ if ':' in a:
230+ if ':' in b:
231+ return cmp(a, b)
232+ return 1
233+
234+ return cmp(a, b)
235+
236+ # sort both services' unit_placement lists
237+ # putting maas units first
238+ svc_a.unit_placement.sort(cmp=maas_first)
239+ svc_b.unit_placement.sort(cmp=maas_first)
240+
241+ # now compare the service placement lists,
242+ # first list with a maas placement goes first
243+ for x, y in zip(svc_a.unit_placement,
244+ svc_b.unit_placement):
245+ return maas_first(x, y)
246+
247 if svc_a.unit_placement:
248 if svc_b.unit_placement:
249- return cmp(svc_a.name, svc_b.name)
250+ return _placement_sort(svc_a, svc_b)
251 return 1
252 if svc_b.unit_placement:
253 return -1
254
255=== modified file 'deployer/env/go.py'
256--- deployer/env/go.py 2014-09-16 17:04:46 +0000
257+++ deployer/env/go.py 2015-03-24 20:31:35 +0000
258@@ -24,6 +24,30 @@
259 self.api_endpoint = endpoint
260 self.client = None
261
262+ def add_machine(self, machine):
263+ if ':' in machine:
264+ scope, directive = machine.split(':')
265+ else:
266+ scope = self.get_env_config()['Config']['uuid']
267+ directive = machine
268+
269+ machines = [{
270+ "Placement": {
271+ "Scope": scope,
272+ "Directive": directive,
273+ },
274+ "ParentId": "",
275+ "ContainerType": "",
276+ "Series": "",
277+ "Constraints": {},
278+ "Jobs": [
279+ "JobHostUnits"
280+ ]
281+ }]
282+ self.log.debug('Adding machine: %s:%s' % (scope, directive))
283+ # {u'Machines': [{u'Machine': u'7', u'Error': None}]}
284+ return self.client.add_machines(machines)['Machines'][0]
285+
286 def add_unit(self, service_name, machine_spec):
287 return self.client.add_unit(service_name, machine_spec)
288
289@@ -51,6 +75,9 @@
290 def get_config(self, svc_name):
291 return self.client.get_config(svc_name)
292
293+ def get_env_config(self):
294+ return self.client.get_env_config()
295+
296 def get_constraints(self, svc_name):
297 try:
298 return self.client.get_constraints(svc_name)
299
300=== modified file 'deployer/env/py.py'
301--- deployer/env/py.py 2014-02-18 12:16:46 +0000
302+++ deployer/env/py.py 2015-03-24 20:31:35 +0000
303@@ -12,6 +12,12 @@
304 self.name = name
305 self.options = options
306
307+ def add_machine(self, machine):
308+ params = self._named_env(["juju", "add-machine"])
309+ params.extend([machine])
310+ self._check_call(
311+ params, self.log, "Error adding machine %s", machine)
312+
313 def add_units(self, service_name, num_units):
314 params = self._named_env(["juju", "add-unit"])
315 if num_units > 1:
316
317=== modified file 'deployer/service.py'
318--- deployer/service.py 2015-03-13 17:41:18 +0000
319+++ deployer/service.py 2015-03-24 20:31:35 +0000
320@@ -100,9 +100,10 @@
321 feedback.error(
322 ("Service placement to machine"
323 "not supported %s to %s") % (
324- self.service.name, unit_placement[idx]))
325+ self.service.name, unit_placement[idx]))
326 elif p in services:
327 if services[p].unit_placement:
328+<<<<<<< TREE
329 feedback.error(
330 "Nested placement not supported %s -> %s -> %s" % (
331 self.service.name, p, services[p].unit_placement))
332@@ -110,6 +111,18 @@
333 feedback.error(
334 "Cannot place to a subordinate service: %s -> %s" % (
335 self.service.name, p))
336+=======
337+ # nested placement is acceptable if the target
338+ # is using maas node placement
339+ for u in services[p].unit_placement:
340+ if not u.startswith('maas='):
341+ feedback.error(
342+ "Nested placement not supported"
343+ " %s -> %s -> %s" % (
344+ self.service.name, p,
345+ services[p].unit_placement))
346+ continue
347+>>>>>>> MERGE-SOURCE
348 else:
349 feedback.error(
350 "Invalid service placement %s to %s" % (
351
352=== modified file 'deployer/tests/test_charm.py'
353--- deployer/tests/test_charm.py 2014-09-29 14:36:34 +0000
354+++ deployer/tests/test_charm.py 2015-03-24 20:31:35 +0000
355@@ -161,6 +161,11 @@
356 self._call(
357 ["git", "init", self.path],
358 "Could not initialize repo at %(path)s")
359+ o = self._call(
360+ ["git", "config",
361+ "user.email", "test@example.com"],
362+ "Could not config user.email at %(path)s")
363+ print o
364
365 def write(self, files):
366 for f in files:
367
368=== modified file 'deployer/tests/test_deployment.py'
369--- deployer/tests/test_deployment.py 2015-03-17 16:24:27 +0000
370+++ deployer/tests/test_deployment.py 2015-03-24 20:31:35 +0000
371@@ -55,8 +55,6 @@
372 def test_maas_name_and_zone_placement(self):
373 d = self.get_named_deployment("stack-placement-maas.yml", "stack")
374 d.validate_placement()
375- placement = d.get_unit_placement('ceph', {})
376- self.assertEqual(placement.get(0), "arnolt")
377 placement = d.get_unit_placement('heat', {})
378 self.assertEqual(placement.get(0), "zone=zebra")
379
380
381=== modified file 'deployer/tests/test_guiserver.py'
382--- deployer/tests/test_guiserver.py 2015-03-17 10:40:34 +0000
383+++ deployer/tests/test_guiserver.py 2015-03-24 20:31:35 +0000
384@@ -26,9 +26,15 @@
385 'bootstrap', 'branch_only', 'configs', 'debug', 'deploy_delay',
386 'deployment', 'description', 'destroy_services', 'diff',
387 'find_service', 'ignore_errors', 'juju_env', 'list_deploys',
388+<<<<<<< TREE
389 'no_local_mods', 'no_relations', 'overrides', 'rel_wait',
390 'retry_count', 'series', 'skip_unit_wait', 'terminate_machines',
391 'timeout', 'update_charms', 'verbose', 'watch'
392+=======
393+ 'no_local_mods', 'no_relations', 'overrides', 'rel_wait', 'retry_count', 'series',
394+ 'skip_unit_wait', 'terminate_machines', 'timeout', 'update_charms',
395+ 'verbose', 'watch', 'placement_first',
396+>>>>>>> MERGE-SOURCE
397 ])
398 self.assertEqual(expected_keys, set(self.options.__dict__.keys()))
399
400@@ -58,6 +64,7 @@
401 self.assertFalse(options.update_charms)
402 self.assertFalse(options.verbose)
403 self.assertFalse(options.watch)
404+ self.assertFalse(options.placement_first)
405
406
407 class TestDeploymentError(unittest.TestCase):
408@@ -280,7 +287,7 @@
409 mock.call.status(),
410 mock.call.deploy(
411 'mysql', 'cs:precise/mysql-28', '', None,
412- {'arch': 'i386', 'cpu-cores': 4, 'mem': '4G'}, 2, None),
413+ {'arch': 'i386', 'cpu-cores': 4, 'mem': '4G'}, 1, None),
414 mock.call.set_annotation(
415 'mysql', {'gui-y': '164.547', 'gui-x': '494.347'}),
416 mock.call.deploy(
417
418=== modified file 'deployer/tests/test_importer.py'
419--- deployer/tests/test_importer.py 2015-03-17 10:40:34 +0000
420+++ deployer/tests/test_importer.py 2015-03-24 20:31:35 +0000
421@@ -34,6 +34,7 @@
422 'no_local_mods': True,
423 'no_relations': False,
424 'overrides': None,
425+ 'placement_first': False,
426 'rel_wait': 60,
427 'retry_count': 0,
428 'series': None,
