Merge lp:~ltrager/maas/region_refresh into lp:~maas-committers/maas/trunk
Status: | Merged |
---|---|
Approved by: | Lee Trager |
Approved revision: | no longer in the source branch. |
Merged at revision: | 5126 |
Proposed branch: | lp:~ltrager/maas/region_refresh |
Merge into: | lp:~maas-committers/maas/trunk |
Prerequisite: | lp:~ltrager/maas/push_rack_refresh |
Diff against target: |
1815 lines (+708/-338) 16 files modified
src/maasserver/models/node.py (+73/-16)
src/maasserver/models/tests/test_node.py (+183/-1)
src/maasserver/regiondservices/tests/test_networks_monitoring.py (+2/-0)
src/maasserver/rpc/regionservice.py (+8/-62)
src/maasserver/rpc/tests/test_regionservice.py (+63/-172)
src/maasserver/start_up.py (+65/-0)
src/maasserver/tests/test_start_up.py (+126/-2)
src/maasserver/views/tests/test_rpc.py (+6/-1)
src/metadataserver/api.py (+1/-1)
src/metadataserver/tests/test_api.py (+1/-1)
src/provisioningserver/events.py (+4/-4)
src/provisioningserver/refresh/__init__.py (+28/-4)
src/provisioningserver/refresh/tests/test_refresh.py (+117/-22)
src/provisioningserver/rpc/cluster.py (+1/-0)
src/provisioningserver/rpc/clusterservice.py (+6/-35)
src/provisioningserver/rpc/tests/test_clusterservice.py (+24/-17)
To merge this branch: | bzr merge lp:~ltrager/maas/region_refresh |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Gavin Panella (community) | Approve | ||
Blake Rouse (community) | Approve | ||
Lee Trager | Pending | ||
Review via email: mp+297015@code.launchpad.net |
This proposal supersedes a proposal from 2016-06-09.
Commit message
Refresh region hardware and networking information on start
Description of the change
Blake Rouse (blake-rouse): Posted in a previous version of this proposal
Lee Trager (ltrager) wrote: Posted in a previous version of this proposal
Blake Rouse (blake-rouse) wrote:
Looks good. Not going to block you on anything. Please fix the issues I pointed out and respond to my questions before landing.
Lee Trager (ltrager) wrote:
Thanks for the review. I've responded to your comments below and corrected the issues you found.
Gavin Panella (allenap) wrote:
This needs a lot of work! I've put lots of comments inline.
Lee Trager (ltrager) wrote:
I've moved the region creation and refresh to happen during start_up and made the corrections you've suggested.
To answer your question about logging: it was originally done because the user could run a refresh over the API/websocket. The only other reason to log it as a request_event is to have a database record of when the details were last refreshed. I could just remove that and leave logging about the refresh to syslog.
Gavin Panella (allenap) wrote:
It's getting close, but no cigar just yet. The stickiest bits are the newest bits: the stuff in start_up.py. I've put my suggested changes into lp:~allenap/maas/ltrager--region_refresh, so just merge that and see what I think needs to change. I didn't update any tests or write new ones though (and I noticed that get_running_
Lee Trager (ltrager) wrote:
Thanks for the review. I'm not sure we want to move creating the region object to Controller.
Gavin Panella (allenap) wrote:
There are a few more issues that need clearing up, but +1 because
they're small.
> Thanks for the review. I'm not sure we want to move creating the
> region object to Controller.
> get_running_
> If a user deletes /var/lib/
> error, while MAAS is running, a call to get_running_
> create a new object without any hardware or networking details, nor
> will it have any connections registered. This could cause really weird
> errors. I think it's safer to only create region objects in start_up()
> and if a user deletes /var/lib/
> fixed everything else.
There are only three call-sites for get_running_
of agree with your point. I think create_region_obj belongs elsewhere,
but leave it for now: I have a branch that moves it elsewhere which I'll
propose after this lands.
[...]
> The caller isn't necessarily wrong. Calling RackController.
> without this will work; it's just that it will go over RPC. If the
> maas_url for the rack controller is configured with a VIP the node
> result data may then go over the network and be processed by a foreign
> region controller. I want to prevent that and use the shortest and
> quickest path for refresh available.
I think you're missing my point (and also trying to over-optimise).
The caller may be explicitly getting a RackController:
rack = typecast_node(node, RackController)
return rack.refresh()
but the work is then being silently redirected to the region. There's no
way to say "do this on the *rack* dammit".
One problem is that refreshing a rack-not-region controller returns
rapidly with the basic system info, whereas refreshing a region waits
for all the refresh scripts to run too. The caller doesn't know this for
sure. All in, I think refreshing is poorly factored: getting basic
system info should be one task (done synchronously during start-up of a
region, and done during the first handshake with a rack), and running
the commissioning scripts a separate task, for both region and rack.
Don't change that here though; enough has changed already!
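For what it's worth, the two-task split described above could be sketched roughly like this (a minimal illustration; `get_basic_sys_info` and `run_commissioning_scripts` are hypothetical names, not the actual MAAS API):

```python
import platform
import socket


def get_basic_sys_info():
    """Gather the cheap, synchronous facts about this host.

    This is the part that could run during region start-up, or during
    the first handshake with a rack, without waiting on any scripts.
    """
    return {
        'hostname': socket.gethostname(),
        'architecture': platform.machine(),
    }


def run_commissioning_scripts(system_id):
    """Run the slow commissioning scripts as a separate task.

    A real implementation would execute the commissioning scripts and
    upload their results; this placeholder just returns an empty set.
    """
    return {'system_id': system_id, 'results': []}


def refresh(system_id):
    # Basic info first, then the long-running scripts; a caller that
    # only needs the basic info can stop after the first step.
    info = get_basic_sys_info()
    results = run_commissioning_scripts(system_id)
    return info, results
```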
An underlying problem is that there's no RegionRackContr
corresponding manager. If there was, it would make sense for its refresh
method to choose the region if that's where it's running, else ask the
rack. Getting RackController's refresh to alter the method's dispatch is
black-market polymorphism.
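The kind of dispatching refresh suggested here might look like this (a hypothetical sketch; the class and method names are illustrative, not actual MAAS code):

```python
class Controller:
    def __init__(self, system_id):
        self.system_id = system_id


class RegionRackController(Controller):
    """A controller that is both a region and a rack.

    Its refresh chooses the local path when it is the running region,
    and otherwise asks the rack over RPC -- the dispatch lives in the
    class that actually represents both roles, rather than being
    smuggled into RackController.
    """

    def __init__(self, system_id, running_region_id):
        super().__init__(system_id)
        self._running_region_id = running_region_id

    def refresh(self):
        if self.system_id == self._running_region_id:
            return self._refresh_locally()
        return self._refresh_via_rpc()

    def _refresh_locally(self):
        # Stand-in for running the refresh in the region process.
        return 'local'

    def _refresh_via_rpc(self):
        # Stand-in for the RPC round-trip to the rack controller.
        return 'rpc'
```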
> The reason I put that logic in RackController.
> on the caller is its something that could be easily overlooked. For
> example if we decide to bring back the rack refresh API and websocket
> the developer will have to remember to put this logic in both places.
There's a lot that can be overlooked; that's one reason for tests :)
In addition to the missing RegionRackContr
with the use of typecast_node and node_type:
- typecast_node can be used to cast a Node to any node class, whether it
makes sense or not.
- node_type can be modified so that the model does not match.
This is a hard...
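The typecast hazard can be illustrated like so (a hypothetical sketch mirroring how a `typecast_node`-style helper behaves, not the actual MAAS implementation):

```python
class Node:
    def __init__(self, node_type):
        self.node_type = node_type


class RackController(Node):
    pass


class RegionController(Node):
    pass


def typecast_node(node, cls):
    # Swap the instance's class in place; note that nothing checks
    # whether node.node_type actually matches the target class.
    node.__class__ = cls
    return node


# A plain machine can be "cast" to a RackController even though its
# node_type says otherwise -- the class and the model now disagree.
machine = Node(node_type='machine')
rack = typecast_node(machine, RackController)
```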
Preview Diff
1 | === modified file 'src/maasserver/models/node.py' |
2 | --- src/maasserver/models/node.py 2016-06-10 23:22:35 +0000 |
3 | +++ src/maasserver/models/node.py 2016-06-16 01:30:55 +0000 |
4 | @@ -160,6 +160,10 @@ |
5 | ) |
6 | from provisioningserver.logger import get_maas_logger |
7 | from provisioningserver.power import QUERY_POWER_TYPES |
8 | +from provisioningserver.refresh import ( |
9 | + get_sys_info, |
10 | + refresh, |
11 | +) |
12 | from provisioningserver.rpc.cluster import ( |
13 | AddChassis, |
14 | DisableAndShutoffRackd, |
15 | @@ -179,6 +183,7 @@ |
16 | ) |
17 | from provisioningserver.utils.enum import map_enum_reverse |
18 | from provisioningserver.utils.env import get_maas_id |
19 | +from provisioningserver.utils.fs import NamedLock |
20 | from provisioningserver.utils.twisted import ( |
21 | asynchronous, |
22 | callOut, |
23 | @@ -189,6 +194,7 @@ |
24 | inlineCallbacks, |
25 | succeed, |
26 | ) |
27 | +from twisted.internet.threads import deferToThread |
28 | |
29 | |
30 | maaslog = get_maas_logger("node") |
31 | @@ -546,7 +552,7 @@ |
32 | ]} |
33 | |
34 | def get_running_controller(self): |
35 | - return self.get(system_id=get_maas_id()) |
36 | + return self.get(system_id=get_maas_id(), owner_id__isnull=False) |
37 | |
38 | |
39 | class RackControllerManager(ControllerManager): |
40 | @@ -3629,30 +3635,28 @@ |
41 | @transactional |
42 | def _signal_start_of_refresh(self): |
43 | self._register_request_event( |
44 | - self.owner, EVENT_TYPES.REQUEST_RACK_CONTROLLER_REFRESH, |
45 | + self.owner, EVENT_TYPES.REQUEST_CONTROLLER_REFRESH, |
46 | action='starting refresh') |
47 | |
48 | @transactional |
49 | - def _process_refresh(self, response, previous_result_ids): |
50 | - # Avoid circular imports. |
51 | - from metadataserver.models import NodeResult |
52 | - |
53 | - # Delete existing results. This is a bit shonk because it assumes that |
54 | - # the commissioning scripts will run to completion on the controller. |
55 | - # After deletion there will be a period when the controller has no |
56 | - # results in the database while waiting to recieve the new data. If |
57 | - # uploading fails this will leave the controller with no NodeResults. |
58 | - NodeResult.objects.filter(id__in=previous_result_ids).delete() |
59 | - |
60 | + def _process_sys_info(self, response): |
61 | + update_fields = [] |
62 | + if response['hostname'] != '': |
63 | + self.hostname = response['hostname'] |
64 | + update_fields.append('hostname') |
65 | if response['architecture'] != '': |
66 | self.architecture = response['architecture'] |
67 | + update_fields.append('architecture') |
68 | if response['osystem'] != '': |
69 | self.osystem = response['osystem'] |
70 | + update_fields.append('osystem') |
71 | if response['distro_series'] != '': |
72 | self.distro_series = response['distro_series'] |
73 | + update_fields.append('distro_series') |
74 | if response['interfaces'] != {}: |
75 | self.update_interfaces(response['interfaces']) |
76 | - self.save() |
77 | + if len(update_fields) > 0: |
78 | + self.save(update_fields=update_fields) |
79 | |
80 | |
81 | class RackController(Controller): |
82 | @@ -3667,6 +3671,19 @@ |
83 | super(RackController, self).__init__( |
84 | node_type=NODE_TYPE.RACK_CONTROLLER, *args, **kwargs) |
85 | |
86 | + @transactional |
87 | + def _delete_node_result(self, ids): |
88 | + # Avoid circular imports. |
89 | + from metadataserver.models import NodeResult |
90 | + |
91 | + # Delete existing node results. This is a bit shonky for rack |
92 | + # controllers because it assumes that the commissioning scripts will |
93 | + # run to completion on rack controller. After deletion there will be a |
94 | + # period when the rack controller has no results in the database while |
95 | + # waiting to recieve the new data. If uploading fails this will leave |
96 | + # the rack controller with no NodeResults. |
97 | + NodeResult.objects.filter(id__in=ids).delete() |
98 | + |
99 | @inlineCallbacks |
100 | def refresh(self): |
101 | """Refresh the hardware and networking columns of the rack controller. |
102 | @@ -3674,6 +3691,13 @@ |
103 | :raises NoConnectionsAvailable: If no connections to the cluster |
104 | are available. |
105 | """ |
106 | + if self.system_id == get_maas_id(): |
107 | + # If the refresh is occuring on the running region execute it using |
108 | + # the region process. This avoids using RPC and sends the node |
109 | + # results back to this host when in HA. |
110 | + yield typecast_node(self, RegionController).refresh() |
111 | + return |
112 | + |
113 | client = yield getClientFor(self.system_id, timeout=1) |
114 | |
115 | token = yield deferToDatabase(self._get_token_for_controller) |
116 | @@ -3690,10 +3714,11 @@ |
117 | token_key=token.key, token_secret=token.secret) |
118 | except RefreshAlreadyInProgress: |
119 | # If another refresh is in progress let the other process |
120 | - # handle it and don't update the database |
121 | + # handle it and don't update the database. |
122 | return |
123 | else: |
124 | - yield deferToDatabase(self._process_refresh, response, result_ids) |
125 | + yield deferToDatabase(self._delete_node_result, result_ids) |
126 | + yield deferToDatabase(self._process_sys_info, response) |
127 | |
128 | def add_chassis( |
129 | self, user, chassis_type, hostname, username=None, password=None, |
130 | @@ -3915,6 +3940,38 @@ |
131 | else: |
132 | super().delete() |
133 | |
134 | + @transactional |
135 | + def _delete_node_results(self): |
136 | + # Avoid circular imports. |
137 | + from metadataserver.models import NodeResult |
138 | + |
139 | + NodeResult.objects.filter(node=self).delete() |
140 | + |
141 | + @inlineCallbacks |
142 | + def refresh(self): |
143 | + """Refresh the region controller.""" |
144 | + # XXX ltrager 2016-05-25 - MAAS doesn't have an RPC method between |
145 | + # region controllers. If this method refreshes a foreign region |
146 | + # controller the foreign region controller will contain the running |
147 | + # region's hardware and networking information. |
148 | + if self.system_id != get_maas_id(): |
149 | + raise NotImplementedError( |
150 | + 'Can only refresh the running region controller') |
151 | + |
152 | + try: |
153 | + with NamedLock('refresh'): |
154 | + token = yield deferToDatabase(self._get_token_for_controller) |
155 | + yield deferToDatabase(self._signal_start_of_refresh) |
156 | + yield deferToDatabase(self._delete_node_results) |
157 | + sys_info = yield deferToThread(get_sys_info) |
158 | + yield deferToDatabase(self._process_sys_info, sys_info) |
159 | + yield deferToThread( |
160 | + refresh, self.system_id, token.consumer.key, token.key, |
161 | + token.secret, 'http://127.0.0.1:5240/MAAS') |
162 | + except NamedLock.NotAvailable: |
163 | + # Refresh already running. |
164 | + pass |
165 | + |
166 | |
167 | class Device(Node): |
168 | """A non-installable node.""" |
169 | |
170 | === modified file 'src/maasserver/models/tests/test_node.py' |
171 | --- src/maasserver/models/tests/test_node.py 2016-06-10 01:48:36 +0000 |
172 | +++ src/maasserver/models/tests/test_node.py 2016-06-16 01:30:55 +0000 |
173 | @@ -156,6 +156,7 @@ |
174 | map_enum, |
175 | map_enum_reverse, |
176 | ) |
177 | +from provisioningserver.utils.fs import NamedLock |
178 | from testtools import ExpectedException |
179 | from testtools.matchers import ( |
180 | AfterPreprocessing, |
181 | @@ -6918,10 +6919,18 @@ |
182 | token.consumer.key # Fetch this now while we're in the database. |
183 | return token |
184 | |
185 | + def test_refresh_calls_region_refresh_when_on_node(self): |
186 | + rack = factory.make_RackController() |
187 | + self.patch(node_module, 'get_maas_id').return_value = rack.system_id |
188 | + mock_refresh = self.patch(node_module.RegionController, 'refresh') |
189 | + rack.refresh() |
190 | + self.assertThat(mock_refresh, MockCalledOnce()) |
191 | + |
192 | @wait_for_reactor |
193 | @defer.inlineCallbacks |
194 | def test_refresh_issues_rpc_call(self): |
195 | self.protocol.RefreshRackControllerInfo.return_value = defer.succeed({ |
196 | + 'hostname': self.rackcontroller.hostname, |
197 | 'architecture': self.rackcontroller.architecture, |
198 | 'osystem': '', |
199 | 'distro_series': '', |
200 | @@ -6942,6 +6951,7 @@ |
201 | @defer.inlineCallbacks |
202 | def test_refresh_logs_user_request(self): |
203 | self.protocol.RefreshRackControllerInfo.return_value = defer.succeed({ |
204 | + 'hostname': self.rackcontroller.hostname, |
205 | 'architecture': self.rackcontroller.architecture, |
206 | 'osystem': '', |
207 | 'distro_series': '', |
208 | @@ -6954,13 +6964,14 @@ |
209 | yield self.rackcontroller.refresh() |
210 | self.assertThat(register_event, MockCalledOnceWith( |
211 | self.rackcontroller.owner, |
212 | - EVENT_TYPES.REQUEST_RACK_CONTROLLER_REFRESH, |
213 | + EVENT_TYPES.REQUEST_CONTROLLER_REFRESH, |
214 | action='starting refresh')) |
215 | |
216 | @wait_for_reactor |
217 | @defer.inlineCallbacks |
218 | def test_refresh_clears_node_results(self): |
219 | self.protocol.RefreshRackControllerInfo.return_value = defer.succeed({ |
220 | + 'hostname': self.rackcontroller.hostname, |
221 | 'architecture': self.rackcontroller.architecture, |
222 | 'osystem': '', |
223 | 'distro_series': '', |
224 | @@ -6988,6 +6999,7 @@ |
225 | @wait_for_reactor |
226 | @defer.inlineCallbacks |
227 | def test_refresh_sets_extra_values(self): |
228 | + hostname = factory.make_hostname() |
229 | osystem = factory.make_name('osystem') |
230 | distro_series = factory.make_name('distro_series') |
231 | interfaces = { |
232 | @@ -7001,6 +7013,7 @@ |
233 | } |
234 | |
235 | self.protocol.RefreshRackControllerInfo.return_value = defer.succeed({ |
236 | + 'hostname': hostname, |
237 | 'architecture': self.rackcontroller.architecture, |
238 | 'osystem': osystem, |
239 | 'distro_series': distro_series, |
240 | @@ -7010,6 +7023,7 @@ |
241 | yield self.rackcontroller.refresh() |
242 | rackcontroller = yield deferToDatabase( |
243 | reload_object, self.rackcontroller) |
244 | + self.assertEquals(hostname, rackcontroller.hostname) |
245 | self.assertEquals(osystem, rackcontroller.osystem) |
246 | self.assertEquals(distro_series, rackcontroller.distro_series) |
247 | |
248 | @@ -7377,3 +7391,171 @@ |
249 | region = factory.make_RegionController() |
250 | region.delete() |
251 | self.assertIsNone(reload_object(region)) |
252 | + |
253 | + |
254 | +class TestRegionControllerRefresh(MAASTransactionServerTestCase): |
255 | + |
256 | + @wait_for_reactor |
257 | + @defer.inlineCallbacks |
258 | + def test__only_runs_on_running_region(self): |
259 | + region = yield deferToDatabase(factory.make_RegionController) |
260 | + |
261 | + with ExpectedException(NotImplementedError): |
262 | + yield region.refresh() |
263 | + |
264 | + @wait_for_reactor |
265 | + @defer.inlineCallbacks |
266 | + def test__acquires_and_releases_lock(self): |
267 | + def mock_refresh(*args, **kwargs): |
268 | + lock = NamedLock('refresh') |
269 | + self.assertTrue(lock.is_locked()) |
270 | + self.patch(node_module, 'refresh', mock_refresh) |
271 | + region = yield deferToDatabase(factory.make_RegionController) |
272 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
273 | + self.patch(node_module, 'get_sys_info').return_value = { |
274 | + 'hostname': region.hostname, |
275 | + 'architecture': region.architecture, |
276 | + 'osystem': '', |
277 | + 'distro_series': '', |
278 | + 'interfaces': {}, |
279 | + } |
280 | + yield region.refresh() |
281 | + lock = NamedLock('refresh') |
282 | + self.assertFalse(lock.is_locked()) |
283 | + |
284 | + @wait_for_reactor |
285 | + @defer.inlineCallbacks |
286 | + def test__lock_released_on_error(self): |
287 | + exception = factory.make_exception() |
288 | + self.patch(node_module, 'refresh').side_effect = exception |
289 | + region = yield deferToDatabase(factory.make_RegionController) |
290 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
291 | + self.patch(node_module, 'get_sys_info').return_value = { |
292 | + 'hostname': region.hostname, |
293 | + 'architecture': region.architecture, |
294 | + 'osystem': '', |
295 | + 'distro_series': '', |
296 | + 'interfaces': {}, |
297 | + } |
298 | + with ExpectedException(type(exception)): |
299 | + yield region.refresh() |
300 | + lock = NamedLock('refresh') |
301 | + self.assertFalse(lock.is_locked()) |
302 | + |
303 | + @wait_for_reactor |
304 | + @defer.inlineCallbacks |
305 | + def test__does_nothing_when_locked(self): |
306 | + region = yield deferToDatabase(factory.make_RegionController) |
307 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
308 | + mock_deferToDatabase = self.patch(node_module, 'deferToDatabase') |
309 | + with NamedLock('refresh'): |
310 | + yield region.refresh() |
311 | + self.assertThat(mock_deferToDatabase, MockNotCalled()) |
312 | + |
313 | + @wait_for_reactor |
314 | + @defer.inlineCallbacks |
315 | + def test__logs_user_request(self): |
316 | + region = yield deferToDatabase(factory.make_RegionController) |
317 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
318 | + self.patch(node_module, 'refresh') |
319 | + self.patch(node_module, 'get_sys_info').return_value = { |
320 | + 'hostname': region.hostname, |
321 | + 'architecture': region.architecture, |
322 | + 'osystem': '', |
323 | + 'distro_series': '', |
324 | + 'interfaces': {}, |
325 | + } |
326 | + register_event = self.patch(region, '_register_request_event') |
327 | + |
328 | + yield region.refresh() |
329 | + self.assertThat(register_event, MockCalledOnceWith( |
330 | + region.owner, |
331 | + EVENT_TYPES.REQUEST_CONTROLLER_REFRESH, |
332 | + action='starting refresh')) |
333 | + |
334 | + @wait_for_reactor |
335 | + @defer.inlineCallbacks |
336 | + def test__runs_refresh(self): |
337 | + def get_token_for_controller(region): |
338 | + token = NodeKey.objects.get_token_for_node(region) |
339 | + token.consumer.key # Fetch this now while we're in the database. |
340 | + return token |
341 | + |
342 | + region = yield deferToDatabase(factory.make_RegionController) |
343 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
344 | + mock_refresh = self.patch(node_module, 'refresh') |
345 | + self.patch(node_module, 'get_sys_info').return_value = { |
346 | + 'hostname': region.hostname, |
347 | + 'architecture': region.architecture, |
348 | + 'osystem': '', |
349 | + 'distro_series': '', |
350 | + 'interfaces': {}, |
351 | + } |
352 | + yield region.refresh() |
353 | + token = yield deferToDatabase(get_token_for_controller, region) |
354 | + |
355 | + self.expectThat( |
356 | + mock_refresh, |
357 | + MockCalledOnceWith( |
358 | + region.system_id, token.consumer.key, token.key, |
359 | + token.secret, 'http://127.0.0.1:5240/MAAS')) |
360 | + |
361 | + @wait_for_reactor |
362 | + @defer.inlineCallbacks |
363 | + def test__clears_node_results(self): |
364 | + region = yield deferToDatabase(factory.make_RegionController) |
365 | + node_result = yield deferToDatabase( |
366 | + factory.make_NodeResult_for_installation, node=region) |
367 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
368 | + self.patch(node_module, 'refresh') |
369 | + self.patch(node_module, 'get_sys_info').return_value = { |
370 | + 'hostname': region.hostname, |
371 | + 'architecture': region.architecture, |
372 | + 'osystem': '', |
373 | + 'distro_series': '', |
374 | + 'interfaces': {}, |
375 | + } |
376 | + yield region.refresh() |
377 | + |
378 | + def has_results(): |
379 | + return NodeResult.objects.filter(id=node_result.id).exists() |
380 | + |
381 | + self.assertFalse((yield deferToDatabase(has_results))) |
382 | + |
383 | + @wait_for_reactor |
384 | + @defer.inlineCallbacks |
385 | + def test__sets_extra_values(self): |
386 | + region = yield deferToDatabase(factory.make_RegionController) |
387 | + self.patch(node_module, 'get_maas_id').return_value = region.system_id |
388 | + self.patch(node_module, 'refresh') |
389 | + hostname = factory.make_hostname() |
390 | + osystem = factory.make_name('osystem') |
391 | + distro_series = factory.make_name('distro_series') |
392 | + interfaces = { |
393 | + "eth0": { |
394 | + "type": "physical", |
395 | + "mac_address": factory.make_mac_address(), |
396 | + "parents": [], |
397 | + "links": [], |
398 | + "enabled": True, |
399 | + } |
400 | + } |
401 | + self.patch(node_module, 'get_sys_info').return_value = { |
402 | + 'hostname': hostname, |
403 | + 'architecture': region.architecture, |
404 | + 'osystem': osystem, |
405 | + 'distro_series': distro_series, |
406 | + 'interfaces': interfaces, |
407 | + } |
408 | + yield region.refresh() |
409 | + region = yield deferToDatabase(reload_object, region) |
410 | + self.assertEquals(hostname, region.hostname) |
411 | + self.assertEquals(osystem, region.osystem) |
412 | + self.assertEquals(distro_series, region.distro_series) |
413 | + |
414 | + def has_nic(region): |
415 | + mac_address = interfaces["eth0"]["mac_address"] |
416 | + return Interface.objects.filter( |
417 | + node=region, mac_address=mac_address).exists() |
418 | + |
419 | + self.assertTrue((yield deferToDatabase(has_nic, region))) |
420 | |
421 | === modified file 'src/maasserver/regiondservices/tests/test_networks_monitoring.py' |
422 | --- src/maasserver/regiondservices/tests/test_networks_monitoring.py 2016-06-08 16:52:07 +0000 |
423 | +++ src/maasserver/regiondservices/tests/test_networks_monitoring.py 2016-06-16 01:30:55 +0000 |
424 | @@ -36,6 +36,8 @@ |
425 | @inlineCallbacks |
426 | def test_updates_interfaces_in_database(self): |
427 | region = yield deferToDatabase(factory.make_RegionController) |
428 | + region.owner = yield deferToDatabase(factory.make_admin) |
429 | + yield deferToDatabase(region.save) |
430 | # Declare this region controller as the one running here. |
431 | self.useFixture(MAASIDFixture(region.system_id)) |
432 | |
433 | |
434 | === modified file 'src/maasserver/rpc/regionservice.py' |
435 | --- src/maasserver/rpc/regionservice.py 2016-06-08 16:11:14 +0000 |
436 | +++ src/maasserver/rpc/regionservice.py 2016-06-16 01:30:55 +0000 |
437 | @@ -22,16 +22,12 @@ |
438 | ) |
439 | import threading |
440 | |
441 | -from django.db.models import Q |
442 | from maasserver import ( |
443 | eventloop, |
444 | locks, |
445 | ) |
446 | from maasserver.bootresources import get_simplestream_endpoint |
447 | -from maasserver.enum import ( |
448 | - NODE_TYPE, |
449 | - SERVICE_STATUS, |
450 | -) |
451 | +from maasserver.enum import SERVICE_STATUS |
452 | from maasserver.models.node import ( |
453 | Node, |
454 | RackController, |
455 | @@ -61,7 +57,6 @@ |
456 | from maasserver.security import get_shared_secret |
457 | from maasserver.utils import synchronised |
458 | from maasserver.utils.orm import ( |
459 | - get_one, |
460 | transactional, |
461 | with_connection, |
462 | ) |
463 | @@ -77,12 +72,7 @@ |
464 | from provisioningserver.rpc.exceptions import NoSuchCluster |
465 | from provisioningserver.rpc.interfaces import IConnection |
466 | from provisioningserver.security import calculate_digest |
467 | -from provisioningserver.utils.env import ( |
468 | - get_maas_id, |
469 | - set_maas_id, |
470 | -) |
471 | from provisioningserver.utils.events import EventGroup |
472 | -from provisioningserver.utils.ipaddr import get_mac_addresses |
473 | from provisioningserver.utils.network import get_all_interface_addresses |
474 | from provisioningserver.utils.twisted import ( |
475 | asynchronous, |
476 | @@ -116,7 +106,6 @@ |
477 | from twisted.internet.error import ConnectionClosed |
478 | from twisted.internet.protocol import Factory |
479 | from twisted.internet.task import LoopingCall |
480 | -from twisted.internet.threads import deferToThread |
481 | from twisted.protocols import amp |
482 | from twisted.python import log |
483 | from zope.interface import implementer |
484 | @@ -576,8 +565,7 @@ |
485 | yield deferToDatabase( |
486 | registerConnection, region_id, rack_controller, self.host) |
487 | |
488 | - # Rack controller is now registered. Log this status and refresh |
489 | - # the information about the rack controller if needed. |
490 | + # Rack controller is now registered. Log this status. |
491 | log.msg( |
492 | "Rack controller '%s' has been registered." % self.ident) |
493 | |
494 | @@ -1014,10 +1002,10 @@ |
495 | """ |
496 | while self.running: |
497 | try: |
498 | - advertising_info = yield deferToThread( |
499 | - self._getAdvertisingInfo) |
500 | - advertising = yield deferToDatabase( |
501 | - RegionAdvertising.promote, *advertising_info) |
502 | + advertising = yield deferToDatabase(RegionAdvertising.promote) |
503 | + except RegionController.DoesNotExist: |
504 | + # Wait for the region controller object to be created |
505 | + yield pause(1) |
506 | except: |
507 | log.err(None, ( |
508 | "Promotion of %s failed; will try again in " |
509 | @@ -1030,14 +1018,6 @@ |
510 | # so raise CancelledError to prevent callbacks from being called. |
511 | raise defer.CancelledError() |
512 | |
513 | - @staticmethod |
514 | - def _getAdvertisingInfo(): |
515 | - """Return the info required to promote this regiond. |
516 | - |
517 | - :return: A ``(region-id, hostname, mac-addresses)`` tuple. |
518 | - """ |
519 | - return get_maas_id(), gethostname(), get_mac_addresses() |
520 | - |
521 | def _promotionOkay(self, advertising): |
522 | """Store the region advertising object. |
523 | |
524 | @@ -1047,7 +1027,6 @@ |
525 | |
526 | :param advertising: An instance of `RegionAdvertising`. |
527 | """ |
528 | - set_maas_id(advertising.region_id) # Create `maas_id` file. |
529 | self.advertising.set(advertising) |
530 | |
531 | def _promotionFailed(self, failure): |
532 | @@ -1184,7 +1163,7 @@ |
533 | @with_connection # Needed by the following lock. |
534 | @synchronised(locks.eventloop) |
535 | @transactional |
536 | - def promote(cls, region_id, hostname, mac_addresses): |
537 | + def promote(cls): |
538 | """Promote this regiond to active duty. |
539 | |
540 | Ensure that `RegionController` and `RegionControllerProcess` records |
541 | @@ -1192,17 +1171,7 @@ |
542 | |
543 | :return: An instance of `RegionAdvertising`. |
544 | """ |
545 | - region_filter = Q() if region_id is None else Q(system_id=region_id) |
546 | - region_filter |= Q(hostname=hostname) |
547 | - region_filter |= Q(interface__mac_address__in=mac_addresses) |
548 | - region_obj = get_one(Node.objects.filter(region_filter).distinct()) |
549 | - if region_obj is None: |
550 | - # This is the first time MAAS has run on this node. |
551 | - region_obj = RegionController.objects.create(hostname=hostname) |
552 | - else: |
553 | - # Already exists. Make sure it's configured as a region. |
554 | - region_obj = fix_node_for_region(region_obj, hostname) |
555 | - |
556 | + region_obj = RegionController.objects.get_running_controller() |
557 | # Create the process for this region. This process object will be |
558 | # updated by calls to `update`, which is the responsibility of the |
559 | # rpc-advertise service. Calls to `update` also remove old process |
560 | @@ -1329,26 +1298,3 @@ |
561 | for region_obj in Node.objects.filter(system_id=self.region_id): |
562 | RegionControllerProcess.objects.get( |
563 | region=region_obj, pid=os.getpid()).delete() |
564 | - |
565 | - |
566 | -@transactional |
567 | -def fix_node_for_region(region_obj, hostname): |
568 | - """Fix the `node_type` and `hostname` fields on `region_obj`. |
569 | - |
570 | - This method only updates the database if it has changed. |
571 | - """ |
572 | - update_fields = [] |
573 | - if region_obj.node_type not in [ |
574 | - NODE_TYPE.REGION_CONTROLLER, |
575 | - NODE_TYPE.REGION_AND_RACK_CONTROLLER]: |
576 | - if region_obj.node_type == NODE_TYPE.RACK_CONTROLLER: |
577 | - region_obj.node_type = NODE_TYPE.REGION_AND_RACK_CONTROLLER |
578 | - else: |
579 | - region_obj.node_type = NODE_TYPE.REGION_CONTROLLER |
580 | - update_fields.append("node_type") |
581 | - if region_obj.hostname != hostname: |
582 | - region_obj.hostname = hostname |
583 | - update_fields.append("hostname") |
584 | - if len(update_fields) > 0: |
585 | - region_obj.save(update_fields=update_fields) |
586 | - return region_obj |
587 | |
588 | === modified file 'src/maasserver/rpc/tests/test_regionservice.py' |
589 | --- src/maasserver/rpc/tests/test_regionservice.py 2016-06-08 16:11:14 +0000 |
590 | +++ src/maasserver/rpc/tests/test_regionservice.py 2016-06-16 01:30:55 +0000 |
591 | @@ -24,10 +24,7 @@ |
592 | |
593 | from crochet import wait_for |
594 | from django.db import IntegrityError |
595 | -from maasserver import ( |
596 | - eventloop, |
597 | - locks, |
598 | -) |
599 | +from maasserver import eventloop |
600 | from maasserver.enum import ( |
601 | NODE_TYPE, |
602 | SERVICE_STATUS, |
603 | @@ -67,7 +64,6 @@ |
604 | IsFiredDeferred, |
605 | IsUnfiredDeferred, |
606 | MockAnyCall, |
607 | - MockCalledOnce, |
608 | MockCalledOnceWith, |
609 | MockCallsMatch, |
610 | Provides, |
611 | @@ -91,6 +87,7 @@ |
612 | from provisioningserver.rpc.testing import call_responder |
613 | from provisioningserver.rpc.testing.doubles import DummyConnection |
614 | from provisioningserver.utils import events |
615 | +from provisioningserver.utils.testing import MAASIDFixture |
616 | from provisioningserver.utils.twisted import ( |
617 | callInReactorWithTimeout, |
618 | DeferredValue, |
619 | @@ -553,27 +550,6 @@ |
620 | yield deferToDatabase( |
621 | RackController.objects.get, hostname=hostname) |
622 | |
623 | - @skip("XXX: GavinPanella 2016-03-09 bug=1555236: Fails spuriously.") |
624 | - @wait_for_reactor |
625 | - @inlineCallbacks |
626 | - def test_register_calls_refresh_when_needed(self): |
627 | - protocol = self.make_Region() |
628 | - protocol.transport = MagicMock() |
629 | - mock_gethost = self.patch(protocol.transport, 'getHost') |
630 | - mock_gethost.return_value = IPv4Address( |
631 | - type='TCP', host=factory.make_ipv4_address(), |
632 | - port=random.randint(1, 65535)) |
633 | - mock_refresh = self.patch(RackController, 'refresh') |
634 | - self.patch(regionservice, 'registerConnection') |
635 | - hostname = factory.make_hostname() |
636 | - yield call_responder( |
637 | - protocol, RegisterRackController, { |
638 | - "system_id": None, |
639 | - "hostname": hostname, |
640 | - "interfaces": {}, |
641 | - }) |
642 | - self.assertThat(mock_refresh, MockCalledOnce()) |
643 | - |
644 | @wait_for_reactor |
645 | @inlineCallbacks |
646 | def test_register_raises_CannotRegisterRackController_when_it_cant(self): |
647 | @@ -1087,20 +1063,12 @@ |
648 | run_tests_with = MAASCrochetRunTest |
649 | |
650 | def setUp(self): |
651 | - super(TestRegionAdvertisingService, self).setUp() |
652 | - self.maas_id = None |
653 | - |
654 | - def set_maas_id(maas_id): |
655 | - self.maas_id = maas_id |
656 | - |
657 | - self.set_maas_id = self.patch(regionservice, "set_maas_id") |
658 | - self.set_maas_id.side_effect = set_maas_id |
659 | - |
660 | - def get_maas_id(): |
661 | - return self.maas_id |
662 | - |
663 | - self.get_maas_id = self.patch(regionservice, "get_maas_id") |
664 | - self.get_maas_id.side_effect = get_maas_id |
665 | + super().setUp() |
666 | + # Create the running region controller object |
667 | + region = factory.make_RegionController() |
668 | + region.owner = factory.make_admin() |
669 | + region.save() |
670 | + self.useFixture(MAASIDFixture(region.system_id)) |
671 | |
672 | def test_init(self): |
673 | ras = RegionAdvertisingService() |
674 | @@ -1164,10 +1132,8 @@ |
675 | region_ids = yield deferToDatabase(lambda: { |
676 | region.system_id for region in RegionController.objects.all()}) |
677 | self.assertThat(region_ids, HasLength(1)) |
678 | - # The maas_id file has been created too. |
679 | + # Finally, the advertising value has been set. |
680 | region_id = region_ids.pop() |
681 | - self.assertThat(self.set_maas_id, MockCalledOnceWith(region_id)) |
682 | - # Finally, the advertising value has been set. |
683 | advertising = yield service.advertising.get(0.0) |
684 | self.assertThat( |
685 | advertising, MatchesAll( |
686 | @@ -1191,18 +1157,18 @@ |
687 | service = RegionAdvertisingService() |
688 | # Prevent real pauses. |
689 | self.patch_autospec(regionservice, "pause").return_value = None |
690 | - # Make service._getAdvertisingInfo fail the first time it's called. |
691 | + # Make service.promote fail the first time it's called. |
692 | exceptions = [ValueError("You don't vote for kings!")] |
693 | - original = service._getAdvertisingInfo |
694 | + original = regionservice.RegionAdvertising.promote |
695 | |
696 | - def _getAdvertisingInfo(): |
697 | + def promote(): |
698 | if len(exceptions) == 0: |
699 | return original() |
700 | else: |
701 | raise exceptions.pop(0) |
702 | |
703 | - gao = self.patch(service, "_getAdvertisingInfo") |
704 | - gao.side_effect = _getAdvertisingInfo |
705 | + fake_promote = self.patch(regionservice.RegionAdvertising, "promote") |
706 | + fake_promote.side_effect = promote |
707 | # Capture all Twisted logs. |
708 | logger = self.useFixture(TwistedLoggerFixture()) |
709 | |
710 | @@ -1218,6 +1184,31 @@ |
711 | finally: |
712 | yield service.stopService() |
713 | |
714 | + @wait_for_reactor |
715 | + @inlineCallbacks |
716 | + def test_start_up_waits_for_region_obj(self): |
717 | + service = RegionAdvertisingService() |
718 | + # Prevent real pauses. |
719 | + self.patch_autospec(regionservice, "pause").return_value = None |
720 | + # Make service.promote fail the first time it's called. |
721 | + exceptions = [RegionController.DoesNotExist()] |
722 | + original = regionservice.RegionAdvertising.promote |
723 | + |
724 | + def promote(): |
725 | + if len(exceptions) == 0: |
726 | + return original() |
727 | + else: |
728 | + raise exceptions.pop(0) |
729 | + |
730 | + fake_promote = self.patch(regionservice.RegionAdvertising, "promote") |
731 | + fake_promote.side_effect = promote |
732 | + |
733 | + yield service.startService() |
734 | + try: |
735 | + self.assertThat(fake_promote, MockCallsMatch(call(), call())) |
736 | + finally: |
737 | + yield service.stopService() |
738 | + |
739 | def test_stopping_waits_for_startup(self): |
740 | service = RegionAdvertisingService() |
741 | synchronise = threading.Condition() |
742 | @@ -1226,14 +1217,15 @@ |
743 | service._startAdvertising = lambda: None |
744 | service._stopAdvertising = lambda: None |
745 | |
746 | - # Prevent the service's _getAdvertisingInfo method - which is deferred |
747 | - # to a thread - from completing while we hold the lock. |
748 | - def _getAdvertisingInfo(original=service._getAdvertisingInfo): |
749 | + # Prevent the service's promote method - which is deferred to a thread |
750 | + # - from completing while we hold the lock. |
751 | + def promote(original=regionservice.RegionAdvertising.promote): |
752 | with synchronise: |
753 | synchronise.notify() |
754 | synchronise.wait(2.0) |
755 | return original() |
756 | - service._getAdvertisingInfo = _getAdvertisingInfo |
757 | + fake_promote = self.patch(regionservice.RegionAdvertising, "promote") |
758 | + fake_promote.side_effect = promote |
759 | |
760 | with synchronise: |
761 | # Start the service, but stop it again before promote is able to |
762 | @@ -1374,11 +1366,13 @@ |
763 | |
764 | hostname = gethostname() |
765 | |
766 | - def promote(self, region_id=None, hostname=hostname, mac_addresses=None): |
767 | - """Convenient wrapper around `RegionAdvertising.promote`.""" |
768 | - return RegionAdvertising.promote( |
769 | - factory.make_name("region-id") if region_id is None else region_id, |
770 | - hostname, [] if mac_addresses is None else mac_addresses) |
771 | + def setUp(self): |
772 | + super().setUp() |
773 | + # Create the running region controller object |
774 | + region = factory.make_RegionController() |
775 | + region.owner = factory.make_admin() |
776 | + region.save() |
777 | + self.useFixture(MAASIDFixture(region.system_id)) |
778 | |
779 | def make_addresses(self): |
780 | """Return a set of a couple of ``(addr, port)`` tuples.""" |
781 | @@ -1396,107 +1390,8 @@ |
782 | for endpoint in process.endpoints.all() |
783 | } |
784 | |
785 | - def test_promote_new_region(self): |
786 | - # Before promotion there are no RegionControllers. |
787 | - self.assertEquals( |
788 | - 0, RegionController.objects.count(), |
789 | - "No RegionControllers should exist.") |
790 | - |
791 | - advertising = self.promote() |
792 | - |
793 | - # Now a RegionController exists for the given hostname. |
794 | - region = RegionController.objects.get(hostname=gethostname()) |
795 | - self.assertThat(advertising.region_id, Equals(region.system_id)) |
796 | - |
797 | - def test_promote_converts_from_node(self): |
798 | - node = factory.make_Node(interface=True) |
799 | - interfaces = [ |
800 | - factory.make_Interface(node=node), |
801 | - factory.make_Interface(node=node), |
802 | - ] |
803 | - mac_addresses = [ |
804 | - str(interface.mac_address) |
805 | - for interface in interfaces |
806 | - ] |
807 | - |
808 | - self.promote(node.system_id, self.hostname, mac_addresses) |
809 | - |
810 | - # Node should have been converted to a RegionController. |
811 | - node = reload_object(node) |
812 | - self.assertEquals(NODE_TYPE.REGION_CONTROLLER, node.node_type) |
813 | - # The hostname has also been set. |
814 | - self.assertEquals(self.hostname, node.hostname) |
815 | - |
816 | - def test_promote_converts_from_rack(self): |
817 | - node = factory.make_Node( |
818 | - interface=True, node_type=NODE_TYPE.RACK_CONTROLLER) |
819 | - interface = node.get_boot_interface() |
820 | - mac_address = str(interface.mac_address) |
821 | - |
822 | - self.promote(node.system_id, self.hostname, [mac_address]) |
823 | - |
824 | - # Node should have been converted to a RegionRackController. |
825 | - node = reload_object(node) |
826 | - self.assertEquals(NODE_TYPE.REGION_AND_RACK_CONTROLLER, node.node_type) |
827 | - # The hostname has also been set. |
828 | - self.assertEquals(self.hostname, node.hostname) |
829 | - |
830 | - def test_promote_sets_region_hostname(self): |
831 | - node = factory.make_Node(node_type=NODE_TYPE.REGION_CONTROLLER) |
832 | - |
833 | - self.promote(node.system_id, self.hostname) |
834 | - |
835 | - # The hostname has been set. |
836 | - self.assertEquals(self.hostname, reload_object(node).hostname) |
837 | - |
838 | - def test_promote_holds_startup_lock(self): |
839 | - # Creating tables in PostgreSQL is a transactional operation like any |
840 | - # other. If the isolation level is not sufficient it is susceptible to |
841 | - # races. Using a higher isolation level may lead to serialisation |
842 | - # failures, for example. However, PostgreSQL provides advisory locking |
843 | - # functions, and that's what RegionAdvertising.promote takes advantage |
844 | - # of to prevent concurrent creation of the region controllers. |
845 | - |
846 | - # A record of the lock's status, populated when a custom |
847 | - # patched-in _do_create() is called. |
848 | - locked = [] |
849 | - |
850 | - # Capture the state of `locks.eventloop` while `promote` is running. |
851 | - original_fix_node_for_region = regionservice.fix_node_for_region |
852 | - |
853 | - def fix_node_for_region(*args, **kwargs): |
854 | - locked.append(locks.eventloop.is_locked()) |
855 | - return original_fix_node_for_region(*args, **kwargs) |
856 | - |
857 | - fnfr = self.patch(regionservice, "fix_node_for_region") |
858 | - fnfr.side_effect = fix_node_for_region |
859 | - |
860 | - # `fix_node_for_region` is only called for preexisting nodes. |
861 | - node = factory.make_Node(node_type=NODE_TYPE.REGION_CONTROLLER) |
862 | - |
863 | - # The lock is not held before and after `promote` is called. |
864 | - self.assertFalse(locks.eventloop.is_locked()) |
865 | - self.promote(node.system_id) |
866 | - self.assertFalse(locks.eventloop.is_locked()) |
867 | - |
868 | - # The lock was held when `fix_node_for_region` was called. |
869 | - self.assertEqual([True], locked) |
870 | - |
871 | - def test_update_updates_region_hostname(self): |
872 | - advertising = self.promote() |
873 | - |
874 | - region = RegionController.objects.get(system_id=advertising.region_id) |
875 | - region.hostname = factory.make_name("host") |
876 | - region.save() |
877 | - |
878 | - advertising.update(self.make_addresses()) |
879 | - |
880 | - # RegionController should have hostname updated. |
881 | - region = reload_object(region) |
882 | - self.assertEquals(self.hostname, region.hostname) |
883 | - |
884 | def test_update_creates_process_when_removed(self): |
885 | - advertising = self.promote() |
886 | + advertising = RegionAdvertising.promote() |
887 | |
888 | region = RegionController.objects.get(system_id=advertising.region_id) |
889 | [process] = region.processes.all() |
890 | @@ -1511,7 +1406,7 @@ |
891 | self.assertEquals(process.pid, os.getpid()) |
892 | |
893 | def test_update_removes_old_processes(self): |
894 | - advertising = self.promote() |
895 | + advertising = RegionAdvertising.promote() |
896 | |
897 | old_time = now() - timedelta(seconds=90) |
898 | region = RegionController.objects.get(system_id=advertising.region_id) |
899 | @@ -1532,7 +1427,7 @@ |
900 | current_time = now() |
901 | self.patch(timestampedmodel, "now").return_value = current_time |
902 | |
903 | - advertising = self.promote() |
904 | + advertising = RegionAdvertising.promote() |
905 | |
906 | old_time = current_time - timedelta(seconds=90) |
907 | region = RegionController.objects.get(system_id=advertising.region_id) |
908 | @@ -1554,14 +1449,14 @@ |
909 | def test_update_creates_endpoints_on_process(self): |
910 | addresses = self.make_addresses() |
911 | |
912 | - advertising = self.promote() |
913 | + advertising = RegionAdvertising.promote() |
914 | advertising.update(addresses) |
915 | |
916 | saved_endpoints = self.get_endpoints(advertising.region_id) |
917 | self.assertEqual(addresses, saved_endpoints) |
918 | |
919 | def test_update_does_not_insert_endpoints_when_nothings_listening(self): |
920 | - advertising = self.promote() |
921 | + advertising = RegionAdvertising.promote() |
922 | advertising.update(set()) # No addresses. |
923 | |
924 | saved_endpoints = self.get_endpoints(advertising.region_id) |
925 | @@ -1572,7 +1467,7 @@ |
926 | addresses_one = self.make_addresses().union(addresses_common) |
927 | addresses_two = self.make_addresses().union(addresses_common) |
928 | |
929 | - advertising = self.promote() |
930 | + advertising = RegionAdvertising.promote() |
931 | |
932 | advertising.update(addresses_one) |
933 | self.assertEqual( |
934 | @@ -1583,7 +1478,7 @@ |
935 | addresses_two, self.get_endpoints(advertising.region_id)) |
936 | |
937 | def test_update_sets_regiond_degraded_with_less_than_4_processes(self): |
938 | - advertising = self.promote() |
939 | + advertising = RegionAdvertising.promote() |
940 | advertising.update(self.make_addresses()) |
941 | |
942 | region = RegionController.objects.get(system_id=advertising.region_id) |
943 | @@ -1594,7 +1489,7 @@ |
944 | status_info="1 process running but 4 were expected.")) |
945 | |
946 | def test_update_sets_regiond_running_with_4_processes(self): |
947 | - advertising = self.promote() |
948 | + advertising = RegionAdvertising.promote() |
949 | |
950 | region = RegionController.objects.get(system_id=advertising.region_id) |
951 | [process] = region.processes.all() |
952 | @@ -1610,7 +1505,7 @@ |
953 | status=SERVICE_STATUS.RUNNING, status_info="")) |
954 | |
955 | def test_update_calls_mark_dead_on_regions_without_processes(self): |
956 | - advertising = self.promote() |
957 | + advertising = RegionAdvertising.promote() |
958 | |
959 | other_region = factory.make_RegionController() |
960 | mock_mark_dead = self.patch(ServiceModel.objects, "mark_dead") |
961 | @@ -1622,29 +1517,25 @@ |
962 | MockCalledOnceWith(other_region, dead_region=True)) |
963 | |
964 | def test_demote(self): |
965 | - region_id = factory.make_name("region-id") |
966 | - hostname = gethostname() |
967 | addresses = { |
968 | (factory.make_ipv4_address(), factory.pick_port()), |
969 | (factory.make_ipv4_address(), factory.pick_port()), |
970 | } |
971 | - advertising = RegionAdvertising.promote(region_id, hostname, []) |
972 | + advertising = RegionAdvertising.promote() |
973 | advertising.update(addresses) |
974 | advertising.demote() |
975 | self.assertItemsEqual([], advertising.dump()) |
976 | |
977 | def test_dump(self): |
978 | - region_id = factory.make_name("region-id") |
979 | - hostname = gethostname() |
980 | addresses = { |
981 | (factory.make_ipv4_address(), factory.pick_port()), |
982 | (factory.make_ipv4_address(), factory.pick_port()), |
983 | } |
984 | - advertising = RegionAdvertising.promote(region_id, hostname, []) |
985 | + advertising = RegionAdvertising.promote() |
986 | advertising.update(addresses) |
987 | |
988 | expected = [ |
989 | - ("%s:pid=%d" % (hostname, os.getpid()), addr, port) |
990 | + ("%s:pid=%d" % (gethostname(), os.getpid()), addr, port) |
991 | for (addr, port) in addresses |
992 | ] |
993 | self.assertItemsEqual(expected, advertising.dump()) |
994 | |
995 | === modified file 'src/maasserver/start_up.py' |
996 | --- src/maasserver/start_up.py 2016-06-04 01:35:39 +0000 |
997 | +++ src/maasserver/start_up.py 2016-06-16 01:30:55 +0000 |
998 | @@ -8,30 +8,48 @@ |
999 | ] |
1000 | |
1001 | import logging |
1002 | +from socket import gethostname |
1003 | |
1004 | from django.db import connection |
1005 | +from django.db.models import Q |
1006 | from django.db.utils import DatabaseError |
1007 | from maasserver import ( |
1008 | is_master_process, |
1009 | locks, |
1010 | security, |
1011 | ) |
1012 | +from maasserver.enum import NODE_TYPE |
1013 | from maasserver.fields import register_mac_type |
1014 | from maasserver.models.domain import dns_kms_setting_changed |
1015 | +from maasserver.models.node import ( |
1016 | + Node, |
1017 | + RegionController, |
1018 | + typecast_node, |
1019 | +) |
1020 | from maasserver.utils import synchronised |
1021 | from maasserver.utils.orm import ( |
1022 | + get_one, |
1023 | get_psycopg2_exception, |
1024 | + post_commit_do, |
1025 | transactional, |
1026 | with_connection, |
1027 | ) |
1028 | from maasserver.utils.threads import deferToDatabase |
1029 | +from maasserver.worker_user import get_worker_user |
1030 | from provisioningserver.logger import get_maas_logger |
1031 | +from provisioningserver.utils.env import ( |
1032 | + get_maas_id, |
1033 | + set_maas_id, |
1034 | +) |
1035 | +from provisioningserver.utils.ipaddr import get_mac_addresses |
1036 | from provisioningserver.utils.twisted import ( |
1037 | asynchronous, |
1038 | FOREVER, |
1039 | pause, |
1040 | ) |
1041 | +from twisted.internet import reactor |
1042 | from twisted.internet.defer import inlineCallbacks |
1043 | +from twisted.python import log |
1044 | |
1045 | |
1046 | maaslog = get_maas_logger("start-up") |
1047 | @@ -98,8 +116,55 @@ |
1048 | # Register our MAC data type with psycopg. |
1049 | register_mac_type(connection.cursor()) |
1050 | |
1051 | + def refresh_region(region_obj): |
1052 | + # The RegionAdvertisingService uses |
1053 | + # RegionController.objects.get_running_controller() which calls |
1054 | + # get_maas_id() to find the system_id of the running region. If this |
1055 | + # isn't done here RegionAdvertisingService won't be able to figure out |
1056 | + # the region object for the running machine. |
1057 | + set_maas_id(region_obj.system_id) |
1058 | + d = region_obj.refresh() |
1059 | + d.addErrback(log.err, "Failure when refreshing region") |
1060 | + return d |
1061 | + |
1062 | # Only perform the following if the master process for the |
1063 | # region controller. |
1064 | if is_master_process(): |
1065 | # Freshen the kms SRV records. |
1066 | dns_kms_setting_changed() |
1067 | + |
1068 | + region_obj = create_region_obj() |
1069 | + post_commit_do(reactor.callLater, 0, refresh_region, region_obj) |
1070 | + # Trigger post commit |
1071 | + region_obj.save() |
1072 | + |
1073 | + |
1074 | +def create_region_obj(): |
1075 | + region_id = get_maas_id() |
1076 | + hostname = gethostname() |
1077 | + update_fields = [] |
1078 | + region_filter = Q() if region_id is None else Q(system_id=region_id) |
1079 | + region_filter |= Q(hostname=hostname) |
1080 | + region_filter |= Q(interface__mac_address__in=get_mac_addresses()) |
1081 | + region_obj = get_one(Node.objects.filter(region_filter).distinct()) |
1082 | + if region_obj is None: |
1083 | + # This is the first time MAAS has run on this node. |
1084 | + region_obj = RegionController.objects.create(hostname=hostname) |
1085 | + else: |
1086 | + # Already exists. Make sure it's configured as a region. |
1087 | + if region_obj.node_type == NODE_TYPE.RACK_CONTROLLER: |
1088 | + region_obj.node_type = NODE_TYPE.REGION_AND_RACK_CONTROLLER |
1089 | + update_fields.append("node_type") |
1090 | + elif not region_obj.is_region_controller: |
1091 | + region_obj.node_type = NODE_TYPE.REGION_CONTROLLER |
1092 | + update_fields.append("node_type") |
1093 | + region_obj = typecast_node(region_obj, RegionController) |
1094 | + |
1095 | + if region_obj.owner is None: |
1096 | + region_obj.owner = get_worker_user() |
1097 | + update_fields.append("owner") |
1098 | + |
1099 | + if len(update_fields) > 0: |
1100 | + region_obj.save(update_fields=update_fields) |
1101 | + |
1102 | + return region_obj |
1103 | |
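The new `create_region_obj` identifies the running host among existing nodes by OR-ing three Django `Q` clauses: the stored `maas_id`, the hostname, and the host's MAC addresses. A plain-Python sketch of that matching rule (dicts stand in for the queryset; the helper name is illustrative, and the real code queries the database and expects at most one distinct match via `get_one`):

```python
def find_existing_controller(nodes, maas_id, hostname, mac_addresses):
    """Return the node record this host corresponds to, or None.

    A node matches if its system_id equals the stored maas_id, OR its
    hostname matches this host, OR it owns any of this host's MACs -
    the same three clauses OR-ed together in create_region_obj()'s
    Q filter.
    """
    macs = set(mac_addresses)
    for node in nodes:
        if maas_id is not None and node["system_id"] == maas_id:
            return node
        if node["hostname"] == hostname:
            return node
        if macs & set(node["macs"]):
            return node
    return None
```

When nothing matches, `create_region_obj` falls through to creating a fresh `RegionController`; otherwise it upgrades the node's type in place, which is what the conversion tests below exercise.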
1104 | === modified file 'src/maasserver/tests/test_start_up.py' |
1105 | --- src/maasserver/tests/test_start_up.py 2016-06-04 01:35:39 +0000 |
1106 | +++ src/maasserver/tests/test_start_up.py 2016-06-16 01:30:55 +0000 |
1107 | @@ -12,15 +12,24 @@ |
1108 | locks, |
1109 | start_up, |
1110 | ) |
1111 | +from maasserver.enum import ( |
1112 | + NODE_TYPE, |
1113 | + NODE_TYPE_CHOICES, |
1114 | +) |
1115 | +from maasserver.models.node import RegionController |
1116 | from maasserver.models.signals import bootsources |
1117 | from maasserver.testing.eventloop import RegionEventLoopFixture |
1118 | from maasserver.testing.factory import factory |
1119 | from maasserver.testing.testcase import MAASServerTestCase |
1120 | +from maasserver.utils.orm import post_commit_hooks |
1121 | +from maasserver.worker_user import get_worker_user |
1122 | from maastesting.matchers import ( |
1123 | + MockCalledOnce, |
1124 | MockCalledOnceWith, |
1125 | MockCallsMatch, |
1126 | MockNotCalled, |
1127 | ) |
1128 | +from provisioningserver.utils.testing import MAASIDFixture |
1129 | |
1130 | |
1131 | class LockChecker: |
1132 | @@ -64,6 +73,17 @@ |
1133 | self.assertEqual(1, lock_checker.call_count) |
1134 | self.assertEqual(True, lock_checker.lock_was_held) |
1135 | |
1136 | + def test_refresh_on_master(self): |
1137 | + # Disable boot source cache signals. |
1138 | + self.addCleanup(bootsources.signals.enable) |
1139 | + bootsources.signals.disable() |
1140 | + |
1141 | + self.patch(start_up, 'is_master_process').return_value = True |
1142 | + self.patch(start_up, 'register_mac_type') |
1143 | + mock_refresh = self.patch_autospec(RegionController, 'refresh') |
1144 | + start_up.start_up() |
1145 | + self.assertThat(mock_refresh, MockCalledOnce()) |
1146 | + |
1147 | def test_start_up_retries_with_wait_on_exception(self): |
1148 | inner_start_up = self.patch(start_up, 'inner_start_up') |
1149 | inner_start_up.side_effect = [ |
1150 | @@ -94,10 +114,114 @@ |
1151 | |
1152 | def test__calls_dns_kms_setting_changed_if_master(self): |
1153 | self.patch(start_up, "is_master_process").return_value = True |
1154 | - start_up.inner_start_up() |
1155 | + self.patch(start_up, "post_commit_do") |
1156 | + with post_commit_hooks: |
1157 | + start_up.inner_start_up() |
1158 | self.assertThat(start_up.dns_kms_setting_changed, MockCalledOnceWith()) |
1159 | |
1160 | def test__doesnt_call_dns_kms_setting_changed_if_not_master(self): |
1161 | self.patch(start_up, "is_master_process").return_value = False |
1162 | - start_up.inner_start_up() |
1163 | + with post_commit_hooks: |
1164 | + start_up.inner_start_up() |
1165 | self.assertThat(start_up.dns_kms_setting_changed, MockNotCalled()) |
1166 | + |
1167 | + def test__creates_region_obj_if_master(self): |
1168 | + self.patch(start_up, "is_master_process").return_value = True |
1169 | + self.patch(start_up, "set_maas_id") |
1170 | + mock_create_region_obj = self.patch_autospec( |
1171 | + start_up, "create_region_obj") |
1172 | + with post_commit_hooks: |
1173 | + start_up.inner_start_up() |
1174 | + self.assertThat(mock_create_region_obj, MockCalledOnce()) |
1175 | + |
1176 | + def test__doesnt_create_region_obj_if_not_master(self): |
1177 | + self.patch(start_up, "is_master_process").return_value = False |
1178 | + mock_create_region_obj = self.patch_autospec( |
1179 | + start_up, "create_region_obj") |
1180 | + with post_commit_hooks: |
1181 | + start_up.inner_start_up() |
1182 | + self.assertThat(mock_create_region_obj, MockNotCalled()) |
1183 | + |
1184 | + def test__creates_maas_id_file(self): |
1185 | + self.patch(start_up, "is_master_process").return_value = True |
1186 | + mock_set_maas_id = self.patch_autospec(start_up, "set_maas_id") |
1187 | + self.patch(start_up.RegionController, 'refresh') |
1188 | + with post_commit_hooks: |
1189 | + start_up.inner_start_up() |
1190 | + self.assertThat(mock_set_maas_id, MockCalledOnce()) |
1191 | + |
1192 | + def test__doesnt_create_maas_id_file_if_not_master(self): |
1193 | + self.patch(start_up, "is_master_process").return_value = False |
1194 | + mock_set_maas_id = self.patch_autospec(start_up, "set_maas_id") |
1195 | + with post_commit_hooks: |
1196 | + start_up.inner_start_up() |
1197 | + self.assertThat(mock_set_maas_id, MockNotCalled()) |
1198 | + |
1199 | + |
1200 | +class TestCreateRegionObj(MAASServerTestCase): |
1201 | + |
1202 | + """Tests for the actual work done in `create_region_obj`.""" |
1203 | + |
1204 | + def test__creates_obj(self): |
1205 | + region = start_up.create_region_obj() |
1206 | + self.assertIsNotNone(region) |
1207 | + self.assertIsNotNone( |
1208 | + RegionController.objects.get(system_id=region.system_id)) |
1209 | + |
1210 | + def test__finds_region_by_maas_id(self): |
1211 | + region = factory.make_RegionController() |
1212 | + self.useFixture(MAASIDFixture(region.system_id)) |
1213 | + self.assertEquals(region, start_up.create_region_obj()) |
1214 | + |
1215 | + def test__finds_region_by_hostname(self): |
1216 | + region = factory.make_RegionController() |
1217 | + mock_gethostname = self.patch_autospec(start_up, "gethostname") |
1218 | + mock_gethostname.return_value = region.hostname |
1219 | + self.assertEquals(region, start_up.create_region_obj()) |
1220 | + |
1221 | + def test__finds_region_by_mac(self): |
1222 | + region = factory.make_RegionController() |
1223 | + factory.make_Interface(node=region) |
1224 | + mock_get_mac_addresses = self.patch_autospec( |
1225 | + start_up, "get_mac_addresses") |
1226 | + mock_get_mac_addresses.return_value = [ |
1227 | + nic.mac_address.raw |
1228 | + for nic in region.interface_set.all() |
1229 | + ] |
1230 | + self.assertEquals(region, start_up.create_region_obj()) |
1231 | + |
1232 | + def test__converts_rack_to_region_rack(self): |
1233 | + rack = factory.make_RackController() |
1234 | + self.useFixture(MAASIDFixture(rack.system_id)) |
1235 | + region_rack = start_up.create_region_obj() |
1236 | + self.assertEquals(rack, region_rack) |
1237 | + self.assertEquals( |
1238 | + region_rack.node_type, NODE_TYPE.REGION_AND_RACK_CONTROLLER) |
1239 | + |
1240 | + def test__converts_node_to_region_rack(self): |
1241 | + node = factory.make_Node( |
1242 | + node_type=factory.pick_choice( |
1243 | + NODE_TYPE_CHOICES, |
1244 | + but_not=[ |
1245 | + NODE_TYPE.REGION_CONTROLLER, |
1246 | + NODE_TYPE.RACK_CONTROLLER, |
1247 | + NODE_TYPE.REGION_AND_RACK_CONTROLLER, |
1248 | + ])) |
1249 | + self.useFixture(MAASIDFixture(node.system_id)) |
1250 | + region = start_up.create_region_obj() |
1251 | + self.assertEquals(node, region) |
1252 | + self.assertEquals(region.node_type, NODE_TYPE.REGION_CONTROLLER) |
1253 | + |
1254 | + def test__sets_owner_if_none(self): |
1255 | + region = factory.make_RegionController() |
1256 | + self.useFixture(MAASIDFixture(region.system_id)) |
1257 | + self.assertEquals( |
1258 | + get_worker_user(), start_up.create_region_obj().owner) |
1259 | + |
1260 | + def test__leaves_owner_if_set(self): |
1261 | + region = factory.make_RegionController() |
1262 | + self.useFixture(MAASIDFixture(region.system_id)) |
1263 | + user = factory.make_User() |
1264 | + region.owner = user |
1265 | + region.save() |
1266 | + self.assertEquals(user, start_up.create_region_obj().owner) |
1267 | |
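Several of the tests above lean on `MAASIDFixture` to pin the `system_id` the code under test will observe. Roughly, it installs a value for the duration of a test and restores the previous one on cleanup; a hypothetical in-memory sketch (the real fixture persists the id to the on-disk maas_id file so `get_maas_id()` picks it up):

```python
import contextlib

_maas_id = None  # stand-in for the on-disk maas_id file


def get_maas_id():
    """Return the currently installed system_id, if any."""
    return _maas_id


@contextlib.contextmanager
def maas_id_fixture(system_id):
    """Install a system_id for a test, restoring the old value after.

    In-memory analogue of MAASIDFixture; illustrative only.
    """
    global _maas_id
    previous, _maas_id = _maas_id, system_id
    try:
        yield
    finally:
        _maas_id = previous
```

Restoring on exit is what lets tests like `test__finds_region_by_maas_id` run in any order without leaking identity state between cases.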
1268 | === modified file 'src/maasserver/views/tests/test_rpc.py' |
1269 | --- src/maasserver/views/tests/test_rpc.py 2016-05-11 19:01:48 +0000 |
1270 | +++ src/maasserver/views/tests/test_rpc.py 2016-06-16 01:30:55 +0000 |
1271 | @@ -12,10 +12,11 @@ |
1272 | from maasserver import eventloop |
1273 | from maasserver.rpc import regionservice |
1274 | from maasserver.testing.eventloop import RegionEventLoopFixture |
1275 | +from maasserver.testing.factory import factory |
1276 | from maasserver.testing.testcase import MAASTransactionServerTestCase |
1277 | -from maastesting.factory import factory |
1278 | from netaddr import IPAddress |
1279 | from provisioningserver.utils.network import get_all_interface_addresses |
1280 | +from provisioningserver.utils.testing import MAASIDFixture |
1281 | from testtools.matchers import ( |
1282 | Equals, |
1283 | GreaterThan, |
1284 | @@ -108,6 +109,10 @@ |
1285 | self.assertEqual({"eventloops": None}, info) |
1286 | |
1287 | def test_rpc_info_when_rpc_advertise_running(self): |
1288 | + region = factory.make_RegionController() |
1289 | + self.useFixture(MAASIDFixture(region.system_id)) |
1290 | + region.owner = factory.make_admin() |
1291 | + region.save() |
1292 | self.useFixture(RegionEventLoopFixture("rpc", "rpc-advertise")) |
1293 | |
1294 | eventloop.start().wait(5) |
1295 | |
1296 | === modified file 'src/metadataserver/api.py' |
1297 | --- src/metadataserver/api.py 2016-05-12 19:07:37 +0000 |
1298 | +++ src/metadataserver/api.py 2016-06-16 01:30:55 +0000 |
1299 | @@ -166,7 +166,7 @@ |
1300 | elif node.node_type in [ |
1301 | NODE_TYPE.RACK_CONTROLLER, |
1302 | NODE_TYPE.REGION_AND_RACK_CONTROLLER]: |
1303 | - type_name = EVENT_TYPES.REQUEST_RACK_CONTROLLER_REFRESH |
1304 | + type_name = EVENT_TYPES.REQUEST_CONTROLLER_REFRESH |
1305 | else: |
1306 | type_name = EVENT_TYPES.NODE_STATUS_EVENT |
1307 | |
1308 | |
1309 | === modified file 'src/metadataserver/tests/test_api.py' |
1310 | --- src/metadataserver/tests/test_api.py 2016-06-11 22:29:53 +0000 |
1311 | +++ src/metadataserver/tests/test_api.py 2016-06-16 01:30:55 +0000 |
1312 | @@ -210,7 +210,7 @@ |
1313 | self.assertIn(origin, event.description) |
1314 | self.assertIn(description, event.description) |
1315 | self.assertEqual( |
1316 | - EVENT_TYPES.REQUEST_RACK_CONTROLLER_REFRESH, event.type.name) |
1317 | + EVENT_TYPES.REQUEST_CONTROLLER_REFRESH, event.type.name) |
1318 | |
1319 | |
1320 | def make_node_client(node=None): |
1321 | |
1322 | === modified file 'src/provisioningserver/events.py' |
1323 | --- src/provisioningserver/events.py 2016-04-26 19:52:34 +0000 |
1324 | +++ src/provisioningserver/events.py 2016-06-16 01:30:55 +0000 |
1325 | @@ -81,7 +81,7 @@ |
1326 | REQUEST_NODE_START = "REQUEST_NODE_START" |
1327 | REQUEST_NODE_STOP = "REQUEST_NODE_STOP" |
1328 | # Rack controller request events |
1329 | - REQUEST_RACK_CONTROLLER_REFRESH = "REQUEST_RACK_CONTROLLER_REFRESH" |
1330 | + REQUEST_CONTROLLER_REFRESH = "REQUEST_CONTROLLER_REFRESH" |
REQUEST_RACK_CONTROLLER_ADD_CHASSIS = "REQUEST_RACK_CONTROLLER_ADD_CHASSIS"


@@ -202,9 +202,9 @@
         description="User powering down node",
         level=INFO,
     ),
-    EVENT_TYPES.REQUEST_RACK_CONTROLLER_REFRESH: EventDetail(
-        description=("Starting refresh of rack controller hardware and "
-                     "networking information"),
+    EVENT_TYPES.REQUEST_CONTROLLER_REFRESH: EventDetail(
+        description=("Starting refresh of controller hardware and networking "
+                     "information"),
         level=INFO,
     ),
     EVENT_TYPES.REQUEST_RACK_CONTROLLER_ADD_CHASSIS: EventDetail(

=== modified file 'src/provisioningserver/refresh/__init__.py'
--- src/provisioningserver/refresh/__init__.py	2016-06-06 23:56:40 +0000
+++ src/provisioningserver/refresh/__init__.py	2016-06-16 01:30:55 +0000
@@ -9,13 +9,13 @@
 import tempfile
 import urllib

-from provisioningserver.config import ClusterConfiguration
 from provisioningserver.logger.log import get_maas_logger
 from provisioningserver.refresh.maas_api_helper import (
     encode_multipart_data,
     geturl,
 )
 from provisioningserver.refresh.node_info_scripts import NODE_INFO_SCRIPTS
+from provisioningserver.utils.network import get_all_interfaces_definition
 from provisioningserver.utils.shell import (
     call_and_check,
     ExternalProcessError,
@@ -52,6 +52,31 @@
     return os_release


+@synchronous
+def get_sys_info():
+    """Return basic system information in a dictionary."""
+    os_release = get_os_release()
+    if 'ID' in os_release:
+        osystem = os_release['ID']
+    elif 'NAME' in os_release:
+        osystem = os_release['NAME']
+    else:
+        osystem = ''
+    if 'UBUNTU_CODENAME' in os_release:
+        distro_series = os_release['UBUNTU_CODENAME']
+    elif 'VERSION_ID' in os_release:
+        distro_series = os_release['VERSION_ID']
+    else:
+        distro_series = ''
+    return {
+        'hostname': socket.gethostname().split('.')[0],
+        'architecture': get_architecture(),
+        'osystem': osystem,
+        'distro_series': distro_series,
+        'interfaces': get_all_interfaces_definition(),
+    }
+
+
 def signal(
         url, creds, status, message, files={}, script_result=None,
         extra_headers=None):
@@ -91,13 +116,12 @@


 @synchronous
-def refresh(system_id, consumer_key, token_key, token_secret):
+def refresh(system_id, consumer_key, token_key, token_secret, maas_url=None):
     """Run all builtin commissioning scripts and report results to region."""
     maaslog.info(
         "Refreshing rack controller hardware information.")

-    with ClusterConfiguration.open() as config:
-        url = "%s/metadata/status/%s/latest" % (config.maas_url, system_id)
+    url = "%s/metadata/status/%s/latest" % (maas_url, system_id)

     creds = {
         'consumer_key': consumer_key,
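The fallback chain `get_sys_info()` introduces above (prefer `ID` over `NAME` for the OS, `UBUNTU_CODENAME` over `VERSION_ID` for the series, empty string otherwise) can be sketched standalone; the helper name and sample dicts below are illustrative, not MAAS code:

```python
def pick_os_fields(os_release):
    """Mirror get_sys_info's fallbacks: prefer ID/UBUNTU_CODENAME,
    fall back to NAME/VERSION_ID, and default to empty strings."""
    osystem = os_release.get('ID', os_release.get('NAME', ''))
    distro_series = os_release.get(
        'UBUNTU_CODENAME', os_release.get('VERSION_ID', ''))
    return osystem, distro_series

# Ubuntu-style /etc/os-release: both preferred keys present.
print(pick_os_fields({'ID': 'ubuntu', 'UBUNTU_CODENAME': 'xenial'}))
# → ('ubuntu', 'xenial')
# A distro without those keys falls back to NAME/VERSION_ID.
print(pick_os_fields({'NAME': 'Fedora', 'VERSION_ID': '23'}))
# → ('Fedora', '23')
# Nothing usable: empty strings, matching test_get_sys_info_empty below.
print(pick_os_fields({}))
# → ('', '')
```

The nested `.get()` calls encode the same if/elif/else precedence as the diff, just more compactly.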
=== modified file 'src/provisioningserver/refresh/tests/test_refresh.py'
--- src/provisioningserver/refresh/tests/test_refresh.py	2016-06-06 23:56:40 +0000
+++ src/provisioningserver/refresh/tests/test_refresh.py	2016-06-16 01:30:55 +0000
@@ -15,6 +15,7 @@

 from maastesting.factory import factory
 from maastesting.matchers import (
+    Equals,
     MockAnyCall,
     MockCalledWith,
 )
@@ -31,7 +32,7 @@
     def test_get_architecture_returns_arch_with_generic(self):
         arch = random.choice(['i386', 'amd64', 'arm64', 'ppc64el'])
         subarch = factory.make_name('subarch')
-        self.patch(refresh, 'call_and_check').return_value = (
+        self.patch_autospec(refresh, 'call_and_check').return_value = (
             "%s/%s" % (arch, subarch)).encode('utf-8')
         ret_arch = refresh.get_architecture()
         self.assertEquals("%s/generic" % arch, ret_arch)
@@ -40,7 +41,7 @@
         arch = factory.make_name('arch')
         subarch = factory.make_name('subarch')
         architecture = "%s/%s" % (arch, subarch)
-        self.patch(refresh, 'call_and_check').return_value = (
+        self.patch_autospec(refresh, 'call_and_check').return_value = (
             architecture.encode('utf-8'))
         ret_arch = refresh.get_architecture()
         self.assertEquals(architecture, ret_arch)
@@ -60,6 +61,73 @@
         self.assertIn('UBUNTU_CODENAME', os_release)
         self.assertIn('VERSION_ID', os_release)

+    def test_get_sys_info(self):
+        hostname = factory.make_hostname()
+        osystem = factory.make_name('id')
+        distro_series = factory.make_name('ubuntu_codename')
+        architecture = factory.make_name('architecture')
+        interfaces = factory.make_name('interfaces')
+        self.patch(refresh.socket, 'gethostname').return_value = hostname
+        self.patch_autospec(refresh, 'get_os_release').return_value = {
+            'ID': osystem,
+            'UBUNTU_CODENAME': distro_series,
+        }
+        self.patch_autospec(
+            refresh, 'get_architecture').return_value = architecture
+        self.patch_autospec(
+            refresh, 'get_all_interfaces_definition').return_value = interfaces
+        self.assertThat({
+            'hostname': hostname,
+            'architecture': architecture,
+            'osystem': osystem,
+            'distro_series': distro_series,
+            'interfaces': interfaces,
+        }, Equals(refresh.get_sys_info()))
+
+    def test_get_sys_info_on_host(self):
+        self.assertNotIn(None, refresh.get_sys_info())
+
+    def test_get_sys_info_alt(self):
+        hostname = factory.make_hostname()
+        osystem = factory.make_name('name')
+        distro_series = factory.make_name('version_id')
+        architecture = factory.make_name('architecture')
+        interfaces = factory.make_name('interfaces')
+        self.patch(refresh.socket, 'gethostname').return_value = hostname
+        self.patch_autospec(refresh, 'get_os_release').return_value = {
+            'NAME': osystem,
+            'VERSION_ID': distro_series,
+        }
+        self.patch_autospec(
+            refresh, 'get_architecture').return_value = architecture
+        self.patch_autospec(
+            refresh, 'get_all_interfaces_definition').return_value = interfaces
+        self.assertThat({
+            'hostname': hostname,
+            'architecture': architecture,
+            'osystem': osystem,
+            'distro_series': distro_series,
+            'interfaces': interfaces,
+        }, Equals(refresh.get_sys_info()))
+
+    def test_get_sys_info_empty(self):
+        hostname = factory.make_hostname()
+        architecture = factory.make_name('architecture')
+        interfaces = factory.make_name('interfaces')
+        self.patch(refresh.socket, 'gethostname').return_value = hostname
+        self.patch_autospec(refresh, 'get_os_release').return_value = {}
+        self.patch_autospec(
+            refresh, 'get_architecture').return_value = architecture
+        self.patch_autospec(
+            refresh, 'get_all_interfaces_definition').return_value = interfaces
+        self.assertThat({
+            'hostname': hostname,
+            'architecture': architecture,
+            'osystem': '',
+            'distro_series': '',
+            'interfaces': interfaces,
+        }, Equals(refresh.get_sys_info()))
+

 class TestSignal(MAASTestCase):
     def test_signal_formats_params(self):
@@ -181,6 +249,29 @@
             })
         ])

+    def test_refresh_accepts_defined_url(self):
+        signal = self.patch(refresh, 'signal')
+        self.patch_scripts_success()
+
+        system_id = factory.make_name('system_id')
+        consumer_key = factory.make_name('consumer_key')
+        token_key = factory.make_name('token_key')
+        token_secret = factory.make_name('token_secret')
+        url = factory.make_url()
+
+        refresh.refresh(system_id, consumer_key, token_key, token_secret, url)
+        self.assertThat(signal, MockAnyCall(
+            "%s/metadata/status/%s/latest" % (url, system_id),
+            {
+                'consumer_secret': '',
+                'consumer_key': consumer_key,
+                'token_key': token_key,
+                'token_secret': token_secret,
+            },
+            'WORKING',
+            'Starting test_script [1/1]',
+        ))
+
     def test_refresh_signals_starting(self):
         signal = self.patch(refresh, 'signal')
         self.patch_scripts_success()
@@ -189,10 +280,11 @@
         consumer_key = factory.make_name('consumer_key')
         token_key = factory.make_name('token_key')
         token_secret = factory.make_name('token_secret')
+        url = factory.make_url()

-        refresh.refresh(system_id, consumer_key, token_key, token_secret)
-        self.assertItemsEqual([
-            "http://localhost:5240/MAAS/metadata/status/%s/latest" % system_id,
+        refresh.refresh(system_id, consumer_key, token_key, token_secret, url)
+        self.assertThat(signal, MockAnyCall(
+            "%s/metadata/status/%s/latest" % (url, system_id),
             {
                 'consumer_secret': '',
                 'consumer_key': consumer_key,
@@ -200,8 +292,8 @@
                 'token_secret': token_secret,
             },
             'WORKING',
-            'Starting test_script [1/1]'],
-            signal.call_args_list[0][0])
+            'Starting test_script [1/1]',
+        ))

     def test_refresh_signals_results(self):
         signal = self.patch(refresh, 'signal')
@@ -211,10 +303,11 @@
         consumer_key = factory.make_name('consumer_key')
         token_key = factory.make_name('token_key')
         token_secret = factory.make_name('token_secret')
+        url = factory.make_url()

-        refresh.refresh(system_id, consumer_key, token_key, token_secret)
-        self.assertItemsEqual([
-            "http://localhost:5240/MAAS/metadata/status/%s/latest" % system_id,
+        refresh.refresh(system_id, consumer_key, token_key, token_secret, url)
+        self.assertThat(signal, MockAnyCall(
+            "%s/metadata/status/%s/latest" % (url, system_id),
             {
                 'consumer_secret': '',
                 'consumer_key': consumer_key,
@@ -227,8 +320,8 @@
                 'test_script.out': b'test script\n',
                 'test_script.err': b'',
             },
-            0],
-            signal.call_args_list[1][0])
+            0,
+        ))

     def test_refresh_signals_finished(self):
         signal = self.patch(refresh, 'signal')
@@ -238,10 +331,11 @@
         consumer_key = factory.make_name('consumer_key')
         token_key = factory.make_name('token_key')
         token_secret = factory.make_name('token_secret')
+        url = factory.make_url()

-        refresh.refresh(system_id, consumer_key, token_key, token_secret)
-        self.assertItemsEqual([
-            "http://localhost:5240/MAAS/metadata/status/%s/latest" % system_id,
+        refresh.refresh(system_id, consumer_key, token_key, token_secret, url)
+        self.assertThat(signal, MockAnyCall(
+            "%s/metadata/status/%s/latest" % (url, system_id),
             {
                 'consumer_secret': '',
                 'consumer_key': consumer_key,
@@ -249,8 +343,8 @@
                 'token_secret': token_secret,
             },
             'OK',
-            "Finished refreshing %s" % system_id],
-            signal.call_args_list[2][0])
+            "Finished refreshing %s" % system_id
+        ))

     def test_refresh_signals_failure(self):
         signal = self.patch(refresh, 'signal')
@@ -260,10 +354,11 @@
         consumer_key = factory.make_name('consumer_key')
         token_key = factory.make_name('token_key')
         token_secret = factory.make_name('token_secret')
+        url = factory.make_url()

-        refresh.refresh(system_id, consumer_key, token_key, token_secret)
-        self.assertItemsEqual([
-            "http://localhost:5240/MAAS/metadata/status/%s/latest" % system_id,
+        refresh.refresh(system_id, consumer_key, token_key, token_secret, url)
+        self.assertThat(signal, MockAnyCall(
+            "%s/metadata/status/%s/latest" % (url, system_id),
             {
                 'consumer_secret': '',
                 'consumer_key': consumer_key,
@@ -271,8 +366,8 @@
                 'token_secret': token_secret,
             },
             'FAILED',
-            "Failed refreshing %s" % system_id,
-        ))
+            "Failed refreshing %s" % system_id,
+        ))

     def test_refresh_clears_up_temporary_directory(self):

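The tests above all assert the same URL shape now that `refresh()` takes `maas_url` explicitly instead of reading `ClusterConfiguration`. As a minimal standalone sketch (the helper name is illustrative):

```python
def status_url(maas_url, system_id):
    """Build the metadata status endpoint that the refresh scripts
    signal, from an explicitly passed maas_url."""
    return "%s/metadata/status/%s/latest" % (maas_url, system_id)

print(status_url("http://10.0.0.2:5240/MAAS", "node-abc"))
# → http://10.0.0.2:5240/MAAS/metadata/status/node-abc/latest
```

Passing the base URL in, rather than opening the cluster config inside `refresh()`, is what lets the region reuse the same code path with its own URL.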
=== modified file 'src/provisioningserver/rpc/cluster.py'
--- src/provisioningserver/rpc/cluster.py	2016-06-06 22:19:59 +0000
+++ src/provisioningserver/rpc/cluster.py	2016-06-16 01:30:55 +0000
@@ -450,6 +450,7 @@
         (b"token_secret", amp.Unicode()),
     ]
     response = [
+        (b"hostname", amp.Unicode()),
         (b"architecture", amp.Unicode()),
         (b"osystem", amp.Unicode()),
         (b"distro_series", amp.Unicode()),
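With `hostname` added, the refresh result dict is expected to carry five fields. A hypothetical sanity check for such a dict (the helper and its return convention are not part of MAAS, just an illustration of the widened schema):

```python
# Fields the RefreshRackControllerInfo response declares after this change.
EXPECTED_FIELDS = (
    'hostname', 'architecture', 'osystem', 'distro_series', 'interfaces')

def missing_refresh_fields(response):
    """Return the sorted list of declared fields absent from a
    refresh result dict (empty list means the dict is complete)."""
    return sorted(set(EXPECTED_FIELDS) - set(response))

print(missing_refresh_fields({
    'hostname': 'rack1', 'architecture': 'amd64/generic',
    'osystem': 'ubuntu', 'distro_series': 'xenial', 'interfaces': {}}))
# → []
print(missing_refresh_fields({'architecture': 'amd64/generic'}))
```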
=== modified file 'src/provisioningserver/rpc/clusterservice.py'
--- src/provisioningserver/rpc/clusterservice.py	2016-06-09 02:13:37 +0000
+++ src/provisioningserver/rpc/clusterservice.py	2016-06-16 01:30:55 +0000
@@ -36,8 +36,7 @@
 from provisioningserver.power.change import maybe_change_power_state
 from provisioningserver.power.query import get_power_state
 from provisioningserver.refresh import (
-    get_architecture,
-    get_os_release,
+    get_sys_info,
     refresh,
 )
 from provisioningserver.rpc import (
@@ -79,7 +78,6 @@
 from provisioningserver.utils.twisted import (
     callOut,
     DeferredValue,
-    synchronous,
 )
 from twisted import web
 from twisted.application.internet import TimerService
@@ -385,43 +383,16 @@
             # Refresh is already running, don't do anything
             raise exceptions.RefreshAlreadyInProgress()
         else:
+            with ClusterConfiguration.open() as config:
+                maas_url = config.maas_url
             # Start gathering node results(lshw, lsblk, etc) but don't wait.
             d = deferToThread(
-                refresh, system_id, consumer_key, token_key, token_secret)
+                refresh, system_id, consumer_key, token_key, token_secret,
+                maas_url)
             d.addErrback(log.err, 'Failed to refresh the rack controller.')
             d.addBoth(callOut, lock.release)

-        @synchronous
-        def perform_refresh():
-            architecture = get_architecture()
-            os_release = get_os_release()
-            interfaces = get_all_interfaces_definition()
-            return architecture, os_release, interfaces
-
-        def cb_result(result):
-            architecture, os_release, interfaces = result
-            if 'ID' in os_release:
-                osystem = os_release['ID']
-            elif 'NAME' in os_release:
-                osystem = os_release['NAME']
-            else:
-                osystem = ''
-            if 'UBUNTU_CODENAME' in os_release:
-                distro_series = os_release['UBUNTU_CODENAME']
-            elif 'VERSION_ID' in os_release:
-                distro_series = os_release['VERSION_ID']
-            else:
-                distro_series = ''
-            return {
-                'architecture': architecture,
-                'osystem': osystem,
-                'distro_series': distro_series,
-                'interfaces': interfaces
-            }
-
-        d = deferToThread(perform_refresh)
-        d.addCallback(cb_result)
-        return d
+        return deferToThread(get_sys_info)

     @cluster.AddChassis.responder
     def add_chassis(
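The responder keeps its guard: if a refresh is already running it raises `RefreshAlreadyInProgress`, otherwise it takes the lock and releases it when the background work finishes. A synchronous sketch of that pattern with a plain `threading.Lock` (names here are stand-ins for the Twisted originals, and the real release happens asynchronously via `d.addBoth(callOut, lock.release)`):

```python
import threading

class RefreshAlreadyInProgress(Exception):
    """Stand-in for provisioningserver.rpc.exceptions.RefreshAlreadyInProgress."""

_refresh_lock = threading.Lock()

def start_refresh(work):
    """Run `work` while holding the lock; refuse overlapping refreshes."""
    if not _refresh_lock.acquire(blocking=False):
        # A refresh is already running: mirror the responder's behaviour.
        raise RefreshAlreadyInProgress()
    try:
        return work()
    finally:
        # The real code releases the lock when the Deferred fires.
        _refresh_lock.release()

print(start_refresh(lambda: "refreshed"))  # → refreshed
```

A non-blocking `acquire` is the key design choice: a second caller fails fast instead of queueing up a redundant refresh behind the first.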
=== modified file 'src/provisioningserver/rpc/tests/test_clusterservice.py'
--- src/provisioningserver/rpc/tests/test_clusterservice.py	2016-06-09 19:30:17 +0000
+++ src/provisioningserver/rpc/tests/test_clusterservice.py	2016-06-16 01:30:55 +0000
@@ -2029,7 +2029,13 @@
             clusterservice, 'deferToThread')
         mock_deferToThread.side_effect = [
             succeed(None),
-            succeed(('', {}, {})),
+            succeed({
+                'hostname': '',
+                'architecture': '',
+                'osystem': '',
+                'distro_series': '',
+                'interfaces': {}
+            }),
         ]

         system_id = factory.make_name('system_id')
@@ -2038,16 +2044,16 @@
         token_secret = factory.make_name('token_secret')

         yield call_responder(Cluster(), cluster.RefreshRackControllerInfo, {
-            "system_id": system_id,
-            "consumer_key": consumer_key,
-            "token_key": token_key,
-            "token_secret": token_secret,
+            'system_id': system_id,
+            'consumer_key': consumer_key,
+            'token_key': token_key,
+            'token_secret': token_secret,
         })

         self.assertThat(
             mock_deferToThread, MockAnyCall(
                 clusterservice.refresh, system_id, consumer_key, token_key,
-                token_secret))
+                token_secret, ANY))

     @inlineCallbacks
     def test_returns_extra_info(self):
@@ -2057,18 +2063,18 @@
         consumer_key = factory.make_name('consumer_key')
         token_key = factory.make_name('token_key')
         token_secret = factory.make_name('token_secret')
-        architecture = factory.make_name("architecture")
-        osystem = factory.make_name("osystem")
-        distro_series = factory.make_name("distro_series")
-        os_release = {
-            'ID': osystem,
-            'UBUNTU_CODENAME': distro_series,
+        hostname = factory.make_hostname()
+        architecture = factory.make_name('architecture')
+        osystem = factory.make_name('osystem')
+        distro_series = factory.make_name('distro_series')
+        self.patch_autospec(clusterservice, 'get_sys_info').return_value = {
+            'hostname': hostname,
+            'osystem': osystem,
+            'distro_series': distro_series,
+            'architecture': architecture,
+            'interfaces': {},
         }

-        self.patch(clusterservice, 'get_architecture').return_value = (
-            architecture)
-        self.patch(clusterservice, 'get_os_release').return_value = os_release
-
         response = yield call_responder(
             Cluster(), cluster.RefreshRackControllerInfo, {
                 "system_id": system_id,
@@ -2079,10 +2085,11 @@

         self.assertItemsEqual(
             {
+                'hostname': hostname,
                 'osystem': osystem,
                 'distro_series': distro_series,
                 'architecture': architecture,
-                'interfaces': get_all_interfaces_definition(),
+                'interfaces': {},
             }, response)

I accidentally proposed to merge this into one of my branches instead of lp:maas. I've responded to your comments here, but please finish the review at https://code.launchpad.net/~ltrager/maas/region_refresh/+merge/297015