Merge lp:~blake-rouse/maas/fix-1387133-1.7 into lp:~maas-committers/maas/trunk
- fix-1387133-1.7
- Merge into trunk
Status: Superseded
Proposed branch: lp:~blake-rouse/maas/fix-1387133-1.7
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 1487 lines (+1133/-11) (has conflicts), 16 files modified
  INSTALL.txt (+13/-0)
  Makefile (+2/-2)
  docs/changelog.rst (+366/-0)
  src/maasserver/forms_settings.py (+10/-0)
  src/maasserver/models/config.py (+1/-0)
  src/maasserver/models/node.py (+162/-0)
  src/maasserver/models/tests/test_bootsource.py (+4/-0)
  src/maasserver/models/tests/test_node.py (+417/-0)
  src/maasserver/views/nodes.py (+5/-0)
  src/maasserver/views/tests/test_nodes.py (+54/-0)
  src/metadataserver/api.py (+14/-4)
  src/metadataserver/models/commissioningscript.py (+22/-5)
  src/metadataserver/models/tests/test_noderesults.py (+27/-0)
  src/metadataserver/tests/test_api.py (+15/-0)
  src/provisioningserver/rpc/boot_images.py (+14/-0)
  src/provisioningserver/rpc/tests/test_boot_images.py (+7/-0)
Text conflicts in: INSTALL.txt, docs/changelog.rst, src/maasserver/models/node.py, src/maasserver/models/tests/test_node.py, src/maasserver/views/nodes.py, src/maasserver/views/tests/test_nodes.py, src/provisioningserver/rpc/boot_images.py
To merge this branch: bzr merge lp:~blake-rouse/maas/fix-1387133-1.7
Related bugs: (none listed)
Reviewer: Blake Rouse (community)
Status: Approve
Review via email: mp+240153@code.launchpad.net
This proposal has been superseded by a proposal from 2014-10-30.
Commit message
Handle a weird race that would delete synced boot images. Add more logging to the importing of images on the region. Enable simplestreams to log to maas-django.log, without modifying the maas_local_
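The race the commit message describes (synced boot images being deleted) can be illustrated with a minimal sketch. All names here are hypothetical for illustration only, not MAAS's actual code: the defensive idea is that a cluster should not treat an empty region image listing (e.g. one caught mid-import) as authoritative and wipe everything it has synced.

```python
# Hypothetical sketch of a guard against deleting all synced images
# when the region's listing is transiently empty. Function and
# variable names are illustrative, not MAAS internals.

def images_to_remove(local_images, region_images):
    """Return the local images no longer present on the region.

    If the region reports no images at all, treat the listing as
    suspect (possibly caught mid-import) and remove nothing, rather
    than deleting every image the cluster has synced.
    """
    if not region_images:
        return set()
    return set(local_images) - set(region_images)
```

With this guard, `images_to_remove({"trusty/amd64"}, set())` returns an empty set instead of scheduling every local image for deletion.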
Description of the change
Andres Rodriguez (andreserl) wrote:
This is a huge diff and I see merge conflicts. Can you check that this is
the correct branch?
On Oct 30, 2014 2:50 PM, "Blake Rouse" <email address hidden> wrote:
> Review: Approve
>
> Self-approve.
>
> Already approved for 1.7 landing here:
> https:/
> --
> https:/
> You are subscribed to branch lp:maas.
>
Blake Rouse (blake-rouse) wrote:
Yeah, it was incorrect; I fixed the MP.
On Thu, Oct 30, 2014 at 3:42 PM, Andres Rodriguez <email address hidden>
wrote:
> This is a huge diff and I see merge conflicts. Can you check that this is
> the correct branch?
> On Oct 30, 2014 2:50 PM, "Blake Rouse" <email address hidden> wrote:
>
>> Review: Approve
>>
>> Self-approve.
>>
>> Already approved for 1.7 landing here:
>> https:/
>> --
>> https:/
>> You are subscribed to branch lp:maas.
>>
>
Preview Diff
1 | === modified file 'INSTALL.txt' |
2 | --- INSTALL.txt 2014-10-24 05:53:08 +0000 |
3 | +++ INSTALL.txt 2014-10-30 18:48:30 +0000 |
4 | @@ -232,6 +232,7 @@ |
5 | Import the boot images |
6 | ---------------------- |
7 | |
8 | +<<<<<<< TREE |
9 | Since version 1.7, MAAS stores the boot images in the region controller's |
10 | database, from where the cluster controllers will synchronise with the region |
11 | and pull images from the region to the cluster's local disk. This process |
12 | @@ -244,6 +245,18 @@ |
13 | To do it in the web user interface, go to the Images tab, check the boxes to |
14 | say which images you want to import, and click the "Import images" button at |
15 | the bottom of the Ubuntu section. |
16 | +======= |
17 | +MAAS will check for and download new Ubuntu images every hour. On a new |
18 | +installation you'll need to start the import process manually once you have |
19 | +set up your MAAS region. |
20 | + |
21 | +There are two ways to start the import: through the web user interface, or |
22 | +through the remote API. |
23 | + |
24 | +To do it in the web user interface, go to the Images tab, check the boxes to |
25 | +say which images you want to import, and click the "Import images" button at |
26 | +the bottom of the Ubuntu section. |
27 | +>>>>>>> MERGE-SOURCE |
28 | |
29 | .. image:: media/import-images.* |
30 | |
31 | |
32 | === modified file 'Makefile' |
33 | --- Makefile 2014-10-01 07:22:16 +0000 |
34 | +++ Makefile 2014-10-30 18:48:30 +0000 |
35 | @@ -351,8 +351,8 @@ |
36 | # has a bug and always considers apt-source tarballs before the specified |
37 | # branch. So instead, export to a local tarball which is always found. |
38 | # Make sure debhelper and dh-apport packages are installed before using this. |
39 | -PACKAGING := $(CURDIR)/../packaging.trunk |
40 | -PACKAGING_BRANCH := lp:~maas-maintainers/maas/packaging |
41 | +PACKAGING := $(CURDIR)/../packaging.utopic |
42 | +PACKAGING_BRANCH := lp:~maas-maintainers/maas/packaging.utopic |
43 | |
44 | package_branch: |
45 | @echo Downloading/refreshing packaging branch... |
46 | |
47 | === modified file 'docs/changelog.rst' |
48 | --- docs/changelog.rst 2014-10-30 05:53:02 +0000 |
49 | +++ docs/changelog.rst 2014-10-30 18:48:30 +0000 |
50 | @@ -2,6 +2,7 @@ |
51 | Changelog |
52 | ========= |
53 | |
54 | +<<<<<<< TREE |
55 | 1.7.0 |
56 | ===== |
57 | |
58 | @@ -403,6 +404,371 @@ |
59 | #1348364 non-maas managed subnets cannot query maas DNS |
60 | #1381543 Disabling Disk Erasing with node in 'Failed Erasing' state leads to Invalid transition: Failed disk erasing -> Ready. |
61 | |
62 | +======= |
63 | +1.7.0 |
64 | +===== |
65 | + |
66 | +Major new features |
67 | +------------------ |
68 | + |
69 | +Improved image downloading and reporting. |
70 | + MAAS boot images are now downloaded centrally by the region controller |
71 | + and disseminated to all registered cluster controllers. This change includes |
72 | + a new web UI under the `Images` tab that allows the admin to select |
73 | + which images to import and shows the progress of the ongoing download. |
74 | + This completely replaces any file-based configuration that used to take |
75 | + place on cluster controllers. The cluster page now shows whether it has |
76 | + synchronised all the images from the region controller. |
77 | + |
78 | + This process is also completely controllable using the API. |
79 | + |
80 | +Note: |
81 | + Unfortunately due to a format change in the way images are stored, it |
82 | + was not possible to migrate previously downloaded images to the new region |
83 | + storage. The cluster(s) will still be able to use the existing images, |
84 | + however the region controller will be unaware of them until an import |
85 | + is initiated. When the import is finished, the cluster(s) will remove |
86 | + older image resources. |
87 | + |
88 | + This means that the first thing to do after upgrading to 1.7 is go to the |
89 | + `Images` tab and re-import the images. |
90 | + |
91 | +Increased robustness. |
92 | + A large amount of effort has been given to ensuring that MAAS remains |
93 | + robust in the face of adversity. An updated node state model has been |
94 | + implemented that takes into account more of the situations in which a |
95 | + node can be found including any failures at each stage. |
96 | + |
97 | + When a node is getting deployed, it is now monitored to check that each |
98 | + stage is reached in a timely fashion; if it does not then it is marked |
99 | + as failed. |
100 | + |
101 | + The core power driver was updated to check the state of the power on each |
102 | + node and is reported in the web UI and API. The core driver now also |
103 | + handles retries when changing the power state of hardware, removing the |
104 | + requirement that each power template handle it individually. |
105 | + |
106 | +RPC security. |
107 | + As a step towards mutually verified TLS connections between MAAS's |
108 | + components, 1.7 introduces a simple shared-secret mechanism to |
109 | + authenticate the region with the clusters and vice-versa. For those |
110 | + clusters that run on the same machine as the region controller (which |
111 | + will account for most people), everything will continue to work |
112 | + without intervention. However, if you're running a cluster on a |
113 | + separate machine, you must install the secret: |
114 | + |
115 | + 1. After upgrading the region controller, view /var/lib/maas/secret |
116 | + (it's text) and copy it. |
117 | + |
118 | + 2. On each cluster, run: |
119 | + |
120 | + sudo -u maas maas-provision install-shared-secret |
121 | + |
122 | + You'll be prompted for the secret; paste it in and press enter. It |
123 | + is a password prompt, so the secret will not be echoed back to you. |
124 | + |
125 | + That's it; the upgraded cluster controller will find the secret |
126 | + without needing to be told. |
127 | + |
128 | +RPC connections. |
129 | + Each cluster maintains a persistent connection to each region |
130 | + controller process that's running. The ports on which the region is |
131 | + listening are all high-numbered, and they are allocated randomly by |
132 | + the OS. In a future release of MAAS we will narrow this down. For now, |
133 | + each cluster controller needs unfiltered access to each machine in the |
134 | + region on all high-numbered TCP ports. |
135 | + |
136 | +Node event log. |
137 | + For every major event on nodes, it is now logged in a node-specific log. |
138 | + This includes events such as power changes, deployments and any failures. |
139 | + |
140 | +IPv6. |
141 | + It is now possible to deploy Ubuntu nodes that have IPv6 enabled. |
142 | + See :doc:`ipv6` for more details. |
143 | + |
144 | +Removal of Celery and RabbitMQ. |
145 | + While Celery was found to be very reliable it ultimately did not suit |
146 | + the project's requirements as it is a largely fire-and-forget mechanism. |
147 | + Additionally it was another moving part that caused some headaches for |
148 | + users and admins alike, so the decision was taken to remove it and implement |
149 | + a custom communications mechanism between the region controller and cluster |
150 | + controllers. The new mechanism is bidirectional and allowed the complex |
151 | + interactions to take place that are required as part of the robustness |
152 | + improvements. |
153 | + |
154 | + Since a constant connection is maintained, as a side effect the web UI now |
155 | + shows whether each cluster is connected or not. |
156 | + |
157 | +Support for other OSes. |
158 | + Non-Ubuntu OSes are fully supported now. This includes: |
159 | + - Windows |
160 | + - Centos |
161 | + - SuSE |
162 | + |
163 | +Custom Images. |
164 | + MAAS now supports the deployment of Custom Images. Custom images can be |
165 | + uploaded via the API. The usage of custom images allows the deployment of |
166 | + other Ubuntu Flavors, such as Ubuntu Desktop. |
167 | + |
168 | +maas-proxy. |
169 | + MAAS now uses maas-proxy as the default proxy solution instead of |
170 | + squid-deb-proxy. On a fresh install, MAAS will use maas-proxy by default. |
171 | + On upgrades from previous releases, MAAS will install maas-proxy instead of |
172 | + squid-deb-proxy. |
173 | + |
174 | +Minor notable changes |
175 | +--------------------- |
176 | + |
177 | +Better handling of networks. |
178 | + All networks referred to by cluster interfaces are now automatically |
179 | + registered on the Network page. Any node network interfaces are |
180 | + automatically linked to the relevant Network. |
181 | + |
182 | +Improved logging. |
183 | + A total overhaul of where logging is produced was undertaken, and now |
184 | + all the main events in MAAS are selectively reported to syslog with the |
185 | + "maas" prefix from both the region and cluster controllers alike. If MAAS |
186 | + is installed using the standard Ubuntu packaging, its syslog entries are |
187 | + redirected to /var/log/maas/maas.log. |
188 | + |
189 | + On the clusters, pserv.log is now less chatty and contains only errors. |
190 | + On the region controller appservers, maas-django.log contains only appserver |
191 | + errors. |
192 | + |
193 | +Static IP selection. |
194 | + The API was extended so that specific IPs can be pre-allocated for network |
195 | + interfaces on nodes and for user-allocated IPs. |
196 | + |
197 | +Pronounceable random hostnames. |
198 | + The old auto-generated 5-letter names were replaced with a pseudo-random |
199 | + name that is produced from a dictionary giving names of the form |
200 | + 'adjective-noun'. |
201 | + |
202 | +Bugs fixed in this release |
203 | +-------------------------- |
204 | +#1081660 If maas-enlist fails to reach a DNS server, the node will be named ";; connection timed out; no servers could be reached" |
205 | +#1087183 MaaS cloud-init configuration specifies 'manage_etc_hosts: localhost' |
206 | +#1328351 ConstipationError: When the cluster runs the "import boot images" task it blocks other tasks |
207 | +#1340208 DoesNotExist: NodeGroupInterface has no nodegroup |
208 | +#1340896 MAAS upgrade from 1.5.2+bzr2282-0ubuntu0.2 to experiment failed |
209 | +#1342117 CLI command to set up node-group-interface fails with /usr/lib/python2.7/dist-packages/maascli/__main__.py: error: u'name' |
210 | +#1342395 power_on: ipmi failed: name 'power_off_mode' is not defined at line 12 column 18 in file /etc/maas/templates/power/ipmi.template |
211 | +#1347579 Schema migration 0091 is broken (node boot type) |
212 | +#1349254 Duplicate FQDN can be configured on MAAS via CLI or API |
213 | +#1352575 BMC password showing in the apache2 logs |
214 | +#1353598 maas-import-pxe-files logger import error for logger |
215 | +#1355014 Can't run tests without a net connection |
216 | +#1355534 UnknownPowerType traceback in appserver log |
217 | +#1356788 Test failure: “One or more services are registered” etc. |
218 | +#1359029 Power status monitoring does not scale |
219 | +#1359517 Periodic DHCP probe breaks: "Don't log exceptions to maaslog" |
220 | +#1359551 create_Network_from_NodeGroupInterface is missing a catch for IntegrityError |
221 | +#1360004 UI becomes unresponsive (unaccessible) if RPC to cluster fails |
222 | +#1360008 Data migration fails with django.db.utils.InternalError: current transaction is aborted, commands ignored until end of transaction block |
223 | +#1360676 KeyError raised importing boot images |
224 | +#1361799 absolute_reverse returns incorrect url if base_url is missing ending / |
225 | +#1362397 django.core.exceptions.ValidationError: {'power_state': [u'Ensure this value has at most 10 characters (it has 18).']} |
226 | +#1363105 Change in absolute_reverse breaks netbooting on installed MAAS |
227 | +#1363116 DHCP Probe timer service fails |
228 | +#1363138 DHCP Probe TimerService fails with 'NoneType' object has no attribute 'encode' |
229 | +#1363474 exceptions.KeyError: u'subarches' when syncing uploaded image from region to cluster |
230 | +#1363525 preseed path for generated tgz doesn't match actual path |
231 | +#1363722 Boot resource upload failed: error: length too large |
232 | +#1363850 Auto-enlistment not reporting power parameters |
233 | +#1363900 Dev server errors while trying to write to '/var/lib/maas' |
234 | +#1363999 Not assigning static IP addresses |
235 | +#1364062 New download boot resources method doesn't use the configured proxy |
236 | +#1364481 http 500 error doesn't contain a stack trace |
237 | +#1364993 500 error when trying to acquire a commissioned node (AddrFormatError: failed to detect a valid IP address from None) |
238 | +#1365130 django-admin prints spurious messages to stdout, breaking scripts |
239 | +#1365175 bootloader import code goes directly to archive.ubuntu.com rather than the configured archive |
240 | +#1365850 DHCP scan using cluster interface name as network interface? |
241 | +#1366104 [FFe] OperationError when large object greater than 2gb |
242 | +#1366172 NUC does not boot after power off/power on |
243 | +#1366212 Large dhcp leases file leads to tftp timeouts |
244 | +#1366652 Leaking temporary directories |
245 | +#1366726 CI breakage: Deployed nodes don't get a static IP address |
246 | +#1368269 internal server error when deleting a node |
247 | +#1368590 Power actions are not serialized. |
248 | +#1370534 Recurrent update of the power state of nodes crashes if the connection to the BMC fails. |
249 | +#1370958 excessive pserv logging |
250 | +#1371033 A node can get stuck in the 'RELEASING' state if the power change command fails to power down the node. |
251 | +#1371064 Spurious test failure: maasserver.rpc.tests.test_nodes.TestCreateNode.test_creates_node |
252 | +#1371236 power parameters for probe-and-enlist mscm no longer saved for enlisted nodes |
253 | +#1372408 PowerQuery RPC method crashes with exceptions.TypeError: get_power_state() got an unexpected keyword argument 'power_change' |
254 | +#1372732 ImportError running src/metadataserver/tests/test_fields.py |
255 | +#1372735 Deprecation warning breaks Node model tests |
256 | +#1372767 Twisted web client does not support IPv6 address |
257 | +#1372944 Twisted web client fails looking up IPv6 address hostname |
258 | +#1373031 Cannot register cluster |
259 | +#1373103 compose_curtin_network_preseed breaks installation of all other operating systems |
260 | +#1373207 Can't build package |
261 | +#1373237 maas-cluster-controller installation breaks: __main__.py: error: unrecognized arguments: -u maas -g maas |
262 | +#1373265 Where did the “Import boot images” button go? |
263 | +#1373357 register_event_type fails: already exists |
264 | +#1373368 Conflicting power actions being dropped on the floor can result in leaving a node in an inconsistent state |
265 | +#1373477 Circular import between preseed.py and models/node.py |
266 | +#1373658 request_node_info_by_mac_address errors during enlistment: MACAddress matching query does not exist |
267 | +#1373699 Cluster Listing Page lacks feedback about the images each cluster has |
268 | +#1373710 Machines fail to PXE Boot |
269 | +#1374102 No retries for AMT power? |
270 | +#1374388 UI checkbox for Node.disable_ipv4 never unchecks |
271 | +#1374793 Cluster page no longer shows whether the cluster is connected or not. |
272 | +#1375594 After a fresh install, cluster can't connect to region |
273 | +#1375664 Node powering on but not deploying |
274 | +#1375835 Can't create node in the UI with 1.7 beta 4 |
275 | +#1375970 Timeout leads to inconsistency between maas and real world state, can't commission or start nodes |
276 | +#1375980 Nodes failed to transition out of "New" state on bulk commission |
277 | +#1376000 oops: 'NoneType' object has no attribute 'encode' |
278 | +#1376023 After performing bulk action on maas nodes, Internal Server Error |
279 | +#1376028 maasserver Unable to identify boot image for (ubuntu/amd64/generic/trusty/poweroff): cluster 'maas' does not have matching boot image. |
280 | +#1376031 WebUI became unresponsive after disconnecting Remote Cluster Controller (powered node off) |
281 | +#1376303 Can't commission a node: xceptions.AttributeError: 'NoneType' object has no attribute 'addCallback' |
282 | +#1376304 Timeout errors in RPC commands cause 500 errors |
283 | +#1376782 Node stuck with: "another action is already in progress for that node." |
284 | +#1376888 Nodes can't be deleted if DHCP management is off. |
285 | +#1377099 Bulk operation leaves nodes in inconsistent state |
286 | +#1377860 Nodes not configured with IPv6 DNS server address |
287 | +#1379154 "boot-images" link in the "Visit the boot images page to start the import." is a 404 |
288 | +#1379209 When a node has multiple interfaces on a network MAAS manages, MAAS assigns static IP addresses to all of them |
289 | +#1379568 maas-cluster fails to register if the host has an IPv6 address |
290 | +#1379591 nodes with two interfaces fail to deploy in maas 1.7 beta5 |
291 | +#1379641 IPv6 netmasks aren't *always* 64 bits, but we only configure 64-bit ones |
292 | +#1379649 Invalid transition - 'Releasing Failed' to 'Disk Erasing' |
293 | +#1379744 Cluster registration is fragile and insecure |
294 | +#1379924 maas 1.7 flooded with OOPSs |
295 | +#1380927 Default Cluster does not autoconnect after a fresh install |
296 | +#1380932 MAAS does not cope with changes of the dhcp daemons |
297 | +#1381605 Not all the DNS records are being added when deploying multiple nodes |
298 | +#1381714 Nodes release API bypasses disk erase |
299 | +#1012954 If a power script fails, there is no UI feedback |
300 | +#1057250 TestGetLongpollContext.test_get_longpoll_context is causing test failures in metadataserver |
301 | +#1186196 "Starting a node" has different meanings in the UI and in the API. |
302 | +#1237215 maas and curtin do not indicate failure reasonably |
303 | +#1273222 MAAS doesn't check return values of power actions |
304 | +#1288502 archive and proxy settings not honoured for commissioning |
305 | +#1300554 If the rabbit password changes, clusters are not informed |
306 | +#1315161 cannot deploy Windows |
307 | +#1316919 Checks don't exist to confirm a node will actually boot |
308 | +#1321885 IPMI detection and automatic setting fail in ubuntu 14.04 maas |
309 | +#1325610 node marked "Ready" before poweroff complete |
310 | +#1325638 Add hardware enablement for Universal Management Gateway |
311 | +#1333954 global registry of license keys |
312 | +#1334963 Nodegroupinterface.clean_ip_ranges() is very slow with large networks |
313 | +#1337437 [SRU] maas needs utopic support |
314 | +#1338169 Non-Ubuntu preseed templates are not tested |
315 | +#1339868 No way to list supported operating systems via RPC |
316 | +#1339903 No way to validate an OS license key via RPC |
317 | +#1340188 unallocated node started manually, causes AssertionError for purpose poweroff |
318 | +#1340305 No way to get the title for a release from OperatingSystem |
319 | +#1341118 No feedback when IPMI credentials fail |
320 | +#1341121 No feedback to user when cluster is not running |
321 | +#1341581 power state is not represented in api and ui |
322 | +#1341619 NodeGroupInterface is not linked to Network |
323 | +#1341772 No way to get extra preseed data from OperatingSystem via RPC |
324 | +#1341800 MAAS doesn't support soft power off through the API |
325 | +#1343425 deprecate use-fastpath-installer tag and use a property on node instead |
326 | +#1344177 hostnames can't be changed while a node is acquired |
327 | +#1347518 Confusing error message when API key is wrong |
328 | +#1349496 Unable to request a specific static IP on the API |
329 | +#1349736 MAAS logging is too verbose and not very useful |
330 | +#1349917 guess_server_address() can return IPAddress or hostname |
331 | +#1350103 No support for armhf/keystone architecture |
332 | +#1350856 Can't constrain acquisition of nodes by not having a tag |
333 | +#1350948 IPMI power template treats soft as an option rather than a command |
334 | +#1354014 clusters should sync boot images from the region |
335 | +#1356490 Metadataserver api needs tests for _store_installing_results |
336 | +#1356780 maaslog items are logged twice |
337 | +#1356880 MAAS shouldn't allow changing the hostname of a deployed node |
338 | +#1357071 When a power template fails, the content of the event from the node event log is not readable (it contains the whole template) |
339 | +#1357685 docs/bootsources.rst:: WARNING: document isn't included in any toctree |
340 | +#1357714 Virsh power driver does not seem to work at all |
341 | +#1358177 maas-region-admin requires root privileges [docs] |
342 | +#1358337 [docs] MAAS documentation suggests to execute 'juju --sync-tools' |
343 | +#1358829 IPMI power query fails when trying to commit config changes |
344 | +#1358859 Commissioning output xml is hard to understand, would be nice to have yaml as an output option. |
345 | +#1359169 MAAS should handle invalid consumers gracefully |
346 | +#1359822 Gateway is missing in network definition |
347 | +#1361897 exceptions in PeriodicImageDownloadService will cause it to stop running |
348 | +#1361941 erlang upgrade makes maas angry |
349 | +#1361967 NodePowerMonitorService has no tests |
350 | +#1363913 Impossible to remove last MAC from network in UI |
351 | +#1364228 Help text for node hostname is wrong |
352 | +#1364591 MAAS Archive Mirror does not respect non-default port |
353 | +#1364617 ipmipower returns a zero exit status when password invalid |
354 | +#1364713 selenium test will not pass with new Firefox |
355 | +#1365616 Non-admin access to cluster controller config |
356 | +#1365619 DNS should be an optional field in the network definition |
357 | +#1365722 NodeStateViolation when commissioning |
358 | +#1365742 Logged OOPS ... NoSuchEventType: Event type with name=NODE_POWER_ON_FAILED could not be found. |
359 | +#1365776 commissioning results view for a node also shows installation results |
360 | +#1366812 Old boot resources are not being removed on clusters |
361 | +#1367455 MAC address for node's IPMI is reversed looked up to yield IP address using case sensitive comparison |
362 | +#1368398 Can't mark systems that 'Failed commissioning' as 'Broken' |
363 | +#1368916 No resources found in Simplestreams repository |
364 | +#1370860 Node power monitor doesn't cope with power template answers other than "on" or "off" |
365 | +#1370887 No event is registered on a node for when the power monitor sees a problem |
366 | +#1371663 Node page Javascript crashes when there is no lshw output to display yet |
367 | +#1371763 Need to use RPC for validating license key. |
368 | +#1372974 No "installation complete" event |
369 | +#1373272 "No boot images are available.…" message doesn't disappear when images are imported |
370 | +#1373580 [SRU] Glen m700 cartridge list as ARM64/generic after enlist |
371 | +#1373723 Releasing a node without power parameters ends up in not being able to release a node |
372 | +#1373727 PXE node event logs provide too much info |
373 | +#1373900 New install of MAAS can't download boot images |
374 | +#1374153 Stuck in "power controller problem" |
375 | +#1374321 Internal server error when attempting to perform an action when the cluster is down |
376 | +#1375360 Automatic population of managed networks for eth1 and beyond |
377 | +#1375427 Need to remove references to older import images button |
378 | +#1375647 'static-ipaddresses' capability in 1.6 not documented. |
379 | +#1375681 "Importing images . . ." message on the image page never disappears |
380 | +#1375953 bootsourcecache is not refreshed when sources change |
381 | +#1376016 MAAS lacks a setting for the Simple Streams Image repository location |
382 | +#1376481 Wrong error messages in UI |
383 | +#1376620 maas-url config question doesn't make clear that localhost won't do |
384 | +#1376990 Elusive JavaScript lint |
385 | +#1378366 When there are no images, clusters should show that there |
386 | +#1378527 Images UI doesn't handle HWE images |
387 | +#1378643 Periodic test failure for compose_curtin_network_preseed_for |
388 | +#1378837 "Abort operation" action name is vague and misleading |
389 | +#1378910 Call the install log 'install log' rather than 'curtin log' |
390 | +#1379401 Race in EventManager.register_event_and_event_type |
391 | +#1379816 disable_ipv4 has a default setting on the cluster, but it's not visible |
392 | +#1380470 Event log says node was allocated but doesn't say to *whom* |
393 | +#1380805 uprade from 1.5.4 to 1.7 overwrote my cluster name |
394 | +#1381007 "Acquire and start node" button appears on node page for admins who don't own an allocated but unstarted node |
395 | +#1381213 mark_fixed should clear the osystem and distro_series fields |
396 | +#1381747 APIRPCErrorsMiddleware isn't installed |
397 | +#1381796 license_key is not given in the curtin_userdata preseed for Windows |
398 | +#1172773 Web UI has no indication of image download status. |
399 | +#1233158 no way to get power parameters in api |
400 | +#1319854 `maas login` tells you you're logged in successfully when you're not |
401 | +#1351451 Impossible to release a BROKEN node via the API. |
402 | +#1361040 Weird log message: "Power state has changed from unknown to connection timeout." |
403 | +#1366170 Node Event log doesn't currently display anything apart from power on/off |
404 | +#1368480 Need API to gather image metadata across all of MAAS |
405 | +#1370306 commissioning output XML and YAML tabs are not vertical |
406 | +#1371122 WindowsBootMethod request pxeconfig from API for every file |
407 | +#1376030 Unable to get RPC connection for cluster 'maas' <-- 'maas' is the DNS zone name |
408 | +#1378358 Missing images warning should contain a link to images page |
409 | +#1281406 Disk/memory space on Node edit page have no units |
410 | +#1299231 MAAS DHCP/DNS can't manage more than a /16 network |
411 | +#1357381 maas-region-admin createadmin shows error if not params given |
412 | +#1357686 Caching in get_worker_user() looks like premature optimisation |
413 | +#1358852 Tons of Linking <mac address> to <cluster interface> spam in log |
414 | +#1359178 Docs - U1 still listed for uploading data |
415 | +#1359947 Spelling Errors/Inconsistencies with MAAS Documentation |
416 | +#1365396 UI: top link to “<name> MAAS” only appears on some pages |
417 | +#1365591 "Start node" UI button does not allocate node before starting in 1.7 |
418 | +#1365603 No "stop node" button on the page of a node with status "failed deployment" |
419 | +#1371658 Wasted space in the "Discovery data" section of the node page |
420 | +#1376393 powerkvm boot loader installs even when not needed |
421 | +#1376956 commissioning results page with YAML/XML output tabs are not centered on page. |
422 | +#1287224 MAAS random generated hostnames are not pronounceable |
423 | +#1348364 non-maas managed subnets cannot query maas DNS |
424 | +#1381543 Disabling Disk Erasing with node in 'Failed Erasing' state leads to Invalid transition: Failed disk erasing -> Ready. |
425 | + |
426 | +>>>>>>> MERGE-SOURCE |
427 | 1.6.1 |
428 | ===== |
429 | |
430 | |
431 | === modified file 'docs/index.rst' |
432 | === modified file 'src/maasserver/api/nodes.py' |
433 | === modified file 'src/maasserver/forms.py' |
434 | === modified file 'src/maasserver/forms_settings.py' |
435 | --- src/maasserver/forms_settings.py 2014-09-25 21:06:39 +0000 |
436 | +++ src/maasserver/forms_settings.py 2014-10-30 18:48:30 +0000 |
437 | @@ -266,6 +266,16 @@ |
438 | "Erase nodes' disks prior to releasing.") |
439 | } |
440 | }, |
441 | + 'enable_dhcp_discovery_on_unconfigured_interfaces': { |
442 | + 'default': False, |
443 | + 'form': forms.BooleanField, |
444 | + 'form_kwargs': { |
445 | + 'required': False, |
446 | + 'label': ( |
447 | + "Perform DHCP discovery on unconfigured network " |
448 | + "interfaces of commissioning nodes."), |
449 | + } |
450 | + }, |
451 | } |
452 | |
453 | |
454 | |
455 | === modified file 'src/maasserver/middleware.py' |
456 | === modified file 'src/maasserver/models/config.py' |
457 | --- src/maasserver/models/config.py 2014-10-08 20:41:31 +0000 |
458 | +++ src/maasserver/models/config.py 2014-10-30 18:48:30 +0000 |
459 | @@ -60,6 +60,7 @@ |
460 | # Third Party |
461 | 'enable_third_party_drivers': True, |
462 | 'enable_disk_erasing_on_release': False, |
463 | + 'enable_dhcp_discovery_on_unconfigured_interfaces': True, |
464 | ## /settings |
465 | } |
466 | |
467 | |
468 | === modified file 'src/maasserver/models/node.py' |
469 | --- src/maasserver/models/node.py 2014-10-30 00:39:21 +0000 |
470 | +++ src/maasserver/models/node.py 2014-10-30 18:48:30 +0000 |
471 | @@ -327,6 +327,146 @@ |
472 | available_nodes = self.get_nodes(for_user, NODE_PERMISSION.VIEW) |
473 | return available_nodes.filter(status=NODE_STATUS.READY) |
474 | |
475 | +<<<<<<< TREE |
476 | +======= |
477 | + def stop_nodes(self, ids, by_user, stop_mode='hard'): |
478 | + """Request on given user's behalf that the given nodes be shut down. |
479 | + |
480 | + Shutdown is only requested for nodes that the user has ownership |
481 | + privileges for; any other nodes in the request are ignored. |
482 | + |
483 | + :param ids: The `system_id` values for nodes to be shut down. |
484 | + :type ids: Sequence |
485 | + :param by_user: Requesting user. |
486 | + :type by_user: User_ |
487 | + :param stop_mode: Power off mode - usually 'soft' or 'hard'. |
488 | + :type stop_mode: unicode |
489 | + :return: Those Nodes for which shutdown was actually requested. |
490 | + :rtype: list |
491 | + """ |
492 | + # Obtain node model objects for each node specified. |
493 | + nodes = self.get_nodes(by_user, NODE_PERMISSION.EDIT, ids=ids) |
494 | + |
495 | + # Helper function to whittle the list of nodes down to those that we |
496 | + # can actually stop, and keep hold of their power control info. |
497 | + def gen_power_info(nodes): |
498 | + for node in nodes: |
499 | + power_info = node.get_effective_power_info() |
500 | + if power_info.can_be_stopped: |
501 | + # Smuggle in a hint about how to power-off the node. |
502 | + power_info.power_parameters['power_off_mode'] = stop_mode |
503 | + yield node, power_info |
504 | + |
505 | + # Create info that we can pass into the reactor (no model objects). |
506 | + nodes_stop_info = list( |
507 | + (node.system_id, node.hostname, node.nodegroup.uuid, power_info) |
508 | + for node, power_info in gen_power_info(nodes)) |
509 | + powered_systems = [ |
510 | + system_id for system_id, _, _, _ in nodes_stop_info] |
511 | + |
512 | + # Request that these nodes be powered off and wait for the |
513 | + # commands to return or fail. |
514 | + deferreds = power_off_nodes(nodes_stop_info).viewvalues() |
515 | + wait_for_power_commands(deferreds) |
516 | + |
517 | + # Return a list of those nodes that we've sent power commands for. |
518 | + return list( |
519 | + node for node in nodes if node.system_id in powered_systems) |
520 | + |
521 | + def start_nodes(self, ids, by_user, user_data=None): |
522 | + """Request on given user's behalf that the given nodes be started up. |
523 | + |
524 | + Power-on is only requested for nodes that the user has ownership |
525 | + privileges for; any other nodes in the request are ignored. |
526 | + |
527 | + Nodes are also ignored if they don't have a valid power type |
528 | + configured. |
529 | + |
530 | + :param ids: The `system_id` values for nodes to be started. |
531 | + :type ids: Sequence |
532 | + :param by_user: Requesting user. |
533 | + :type by_user: User_ |
534 | + :param user_data: Optional blob of user-data to be made available to |
535 | + the nodes through the metadata service. If not given, any |
536 | + previous user data is used. |
537 | + :type user_data: unicode |
538 | + :return: Those Nodes for which power-on was actually requested. |
539 | + :rtype: list |
540 | + |
541 | + :raises MultipleFailures: When there are failures originating from a |
542 | + remote process. There could be one or more failures -- it's not |
543 | + strictly *multiple* -- but they do all originate from comms with |
544 | + remote processes. |
545 | + :raises: `StaticIPAddressExhaustion` if there are not enough IP |
546 | + addresses left in the static range. |
547 | + """ |
548 | + # Avoid circular imports. |
549 | + from metadataserver.models import NodeUserData |
550 | + |
551 | + # Obtain node model objects for each node specified. |
552 | + nodes = self.get_nodes(by_user, NODE_PERMISSION.EDIT, ids=ids) |
553 | + |
554 | + # Record the same user data for all nodes we've been *requested* to |
555 | + # start, regardless of whether or not we actually can; the user may |
556 | + # choose to manually start them. |
557 | + NodeUserData.objects.bulk_set_user_data(nodes, user_data) |
558 | + |
559 | + # Claim static IP addresses for all nodes we've been *requested* to |
560 | + # start, such that they're recorded in the database. This results in a |
561 | + # mapping of nodegroups to (ips, macs). |
562 | + static_mappings = defaultdict(dict) |
563 | + for node in nodes: |
564 | + if node.status == NODE_STATUS.ALLOCATED: |
565 | + claims = node.claim_static_ip_addresses() |
566 | + # If the PXE mac is on a managed interface then we can ask |
567 | + # the cluster to generate the DHCP host map(s). |
568 | + if node.is_pxe_mac_on_managed_interface(): |
569 | + static_mappings[node.nodegroup].update(claims) |
570 | + node.start_deployment() |
571 | + |
572 | + # XXX 2014-06-17 bigjools bug=1330765 |
573 | + # If the above fails it needs to release the static IPs back to the |
574 | + # pool. An enclosing transaction or savepoint from the caller may take |
575 | + # care of this, given that a serious problem above will result in an |
576 | + # exception. If we're being belt-n-braces though it ought to clear up |
577 | + # before returning too. As part of the robustness work coming up, it |
578 | + # also needs to inform the user. |
579 | + |
580 | + # Update host maps and wait for them so that we can report failures |
581 | + # directly to the caller. |
582 | + update_host_maps_failures = list(update_host_maps(static_mappings)) |
583 | + if len(update_host_maps_failures) != 0: |
584 | + raise MultipleFailures(*update_host_maps_failures) |
585 | + |
586 | + # Update the DNS zone with the new static IP info as necessary. |
587 | + from maasserver.dns.config import change_dns_zones |
588 | + change_dns_zones({node.nodegroup for node in nodes}) |
589 | + |
590 | + # Helper function to whittle the list of nodes down to those that we |
591 | + # can actually start, and keep hold of their power control info. |
592 | + def gen_power_info(nodes): |
593 | + for node in nodes: |
594 | + power_info = node.get_effective_power_info() |
595 | + if power_info.can_be_started: |
596 | + yield node, power_info |
597 | + |
598 | + # Create info that we can pass into the reactor (no model objects). |
599 | + nodes_start_info = list( |
600 | + (node.system_id, node.hostname, node.nodegroup.uuid, power_info) |
601 | + for node, power_info in gen_power_info(nodes)) |
602 | + powered_systems = [ |
603 | + system_id for system_id, _, _, _ in nodes_start_info] |
604 | + |
605 | + # Request that these nodes be powered on and wait for the |
606 | + # commands to return or fail. |
607 | + deferreds = power_on_nodes(nodes_start_info).viewvalues() |
608 | + wait_for_power_commands(deferreds) |
609 | + |
610 | + # Return a list of those nodes that we've sent power commands for. |
611 | + return list( |
612 | + node for node in nodes if node.system_id in powered_systems) |
613 | + |
614 | +>>>>>>> MERGE-SOURCE |
615 | |
616 | def patch_pgarray_types(): |
617 | """Monkey-patch incompatibility with recent versions of `djorm_pgarray`. |
618 | @@ -1238,6 +1378,7 @@ |
619 | self.distro_series = '' |
620 | self.license_key = '' |
621 | self.save() |
622 | +<<<<<<< TREE |
623 | |
624 | # Clear installation results |
625 | NodeResult.objects.filter( |
626 | @@ -1250,6 +1391,16 @@ |
627 | from maasserver.dns.config import change_dns_zones |
628 | change_dns_zones([self.nodegroup]) |
629 | |
630 | +======= |
631 | + |
632 | + # Do these after updating the node to avoid creating deadlocks with |
633 | + # other node editing operations. |
634 | + deallocated_ips = StaticIPAddress.objects.deallocate_by_node(self) |
635 | + self.delete_host_maps(deallocated_ips) |
636 | + from maasserver.dns.config import change_dns_zones |
637 | + change_dns_zones([self.nodegroup]) |
638 | + |
639 | +>>>>>>> MERGE-SOURCE |
640 | # We explicitly commit here because during bulk node actions we |
641 | # want to make sure that each successful state transition is |
642 | # recorded in the DB. |
643 | @@ -1426,6 +1577,7 @@ |
644 | return self.pxe_mac |
645 | |
646 | return self.macaddress_set.first() |
647 | +<<<<<<< TREE |
648 | |
649 | def is_pxe_mac_on_managed_interface(self): |
650 | pxe_mac = self.get_pxe_mac() |
651 | @@ -1553,3 +1705,13 @@ |
652 | deferreds = power_off_nodes([stop_info]).viewvalues() |
653 | wait_for_power_commands(deferreds) |
654 | return True |
655 | +======= |
656 | + |
657 | + def is_pxe_mac_on_managed_interface(self): |
658 | + pxe_mac = self.get_pxe_mac() |
659 | + if pxe_mac is not None: |
660 | + cluster_interface = pxe_mac.cluster_interface |
661 | + if cluster_interface is not None: |
662 | + return cluster_interface.is_managed |
663 | + return False |
664 | +>>>>>>> MERGE-SOURCE |
665 | |
666 | === modified file 'src/maasserver/models/tests/test_bootsource.py' |
667 | --- src/maasserver/models/tests/test_bootsource.py 2014-09-30 19:48:33 +0000 |
668 | +++ src/maasserver/models/tests/test_bootsource.py 2014-10-30 18:48:30 +0000 |
669 | @@ -15,6 +15,7 @@ |
670 | __all__ = [] |
671 | |
672 | import os |
673 | +from unittest import skip |
674 | |
675 | from django.core.exceptions import ValidationError |
676 | from maasserver.bootsources import _cache_boot_sources |
677 | @@ -98,6 +99,9 @@ |
678 | [], |
679 | boot_source_dict['selections']) |
680 | |
681 | + # XXX: GavinPanella 2014-10-28 bug=1376317: This test is fragile, possibly |
682 | + # due to isolation issues. |
683 | + @skip("Possible isolation issues") |
684 | def test_calls_cache_boot_sources_on_create(self): |
685 | mock_callLater = self.patch(reactor, 'callLater') |
686 | BootSource.objects.create( |
687 | |
688 | === modified file 'src/maasserver/models/tests/test_node.py' |
689 | --- src/maasserver/models/tests/test_node.py 2014-10-30 00:39:21 +0000 |
690 | +++ src/maasserver/models/tests/test_node.py 2014-10-30 18:48:30 +0000 |
691 | @@ -19,6 +19,7 @@ |
692 | timedelta, |
693 | ) |
694 | import random |
695 | +from unittest import skip |
696 | |
697 | import crochet |
698 | from django.core.exceptions import ValidationError |
699 | @@ -2057,6 +2058,422 @@ |
700 | self.assertThat(erase_mock, MockNotCalled()) |
701 | |
702 | |
703 | +<<<<<<< TREE |
704 | +======= |
705 | +class NodeManagerTest_StartNodes(MAASServerTestCase): |
706 | + |
707 | + def setUp(self): |
708 | + super(NodeManagerTest_StartNodes, self).setUp() |
709 | + self.useFixture(RegionEventLoopFixture("rpc")) |
710 | + self.useFixture(RunningEventLoopFixture()) |
711 | + self.rpc_fixture = self.useFixture(MockLiveRegionToClusterRPCFixture()) |
712 | + |
713 | + def prepare_rpc_to_cluster(self, nodegroup): |
714 | + protocol = self.rpc_fixture.makeCluster( |
715 | + nodegroup, cluster_module.CreateHostMaps, cluster_module.PowerOn, |
716 | + cluster_module.StartMonitors) |
717 | + protocol.CreateHostMaps.side_effect = always_succeed_with({}) |
718 | + protocol.StartMonitors.side_effect = always_succeed_with({}) |
719 | + protocol.PowerOn.side_effect = always_succeed_with({}) |
720 | + return protocol |
721 | + |
722 | + def make_acquired_nodes_with_macs(self, user, nodegroup=None, count=3): |
723 | + nodes = [] |
724 | + for _ in xrange(count): |
725 | + node = factory.make_node_with_mac_attached_to_nodegroupinterface( |
726 | + nodegroup=nodegroup, status=NODE_STATUS.READY) |
727 | + self.prepare_rpc_to_cluster(node.nodegroup) |
728 | + node.acquire(user) |
729 | + nodes.append(node) |
730 | + return nodes |
731 | + |
732 | + def test__sets_user_data(self): |
733 | + user = factory.make_User() |
734 | + nodegroup = factory.make_NodeGroup() |
735 | + self.prepare_rpc_to_cluster(nodegroup) |
736 | + nodes = self.make_acquired_nodes_with_macs(user, nodegroup) |
737 | + user_data = factory.make_bytes() |
738 | + |
739 | + with TwistedLoggerFixture() as twisted_log: |
740 | + Node.objects.start_nodes( |
741 | + list(node.system_id for node in nodes), |
742 | + user, user_data=user_data) |
743 | + |
744 | + # All three nodes have been given the same user data. |
745 | + nuds = NodeUserData.objects.filter( |
746 | + node_id__in=(node.id for node in nodes)) |
747 | + self.assertEqual({user_data}, {nud.data for nud in nuds}) |
748 | + # No complaints are made to the Twisted log. |
749 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
750 | + |
751 | + def test__resets_user_data(self): |
752 | + user = factory.make_User() |
753 | + nodegroup = factory.make_NodeGroup() |
754 | + self.prepare_rpc_to_cluster(nodegroup) |
755 | + nodes = self.make_acquired_nodes_with_macs(user, nodegroup) |
756 | + |
757 | + with TwistedLoggerFixture() as twisted_log: |
758 | + Node.objects.start_nodes( |
759 | + list(node.system_id for node in nodes), |
760 | + user, user_data=None) |
761 | + |
762 | + # All previously recorded user data has been cleared from the nodes. |
763 | + nuds = NodeUserData.objects.filter( |
764 | + node_id__in=(node.id for node in nodes)) |
765 | + self.assertThat(list(nuds), HasLength(0)) |
766 | + # No complaints are made to the Twisted log. |
767 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
768 | + |
769 | + def test__claims_static_ip_addresses(self): |
770 | + user = factory.make_User() |
771 | + nodegroup = factory.make_NodeGroup() |
772 | + self.prepare_rpc_to_cluster(nodegroup) |
773 | + nodes = self.make_acquired_nodes_with_macs(user, nodegroup) |
774 | + |
775 | + claim_static_ip_addresses = self.patch_autospec( |
776 | + Node, "claim_static_ip_addresses", spec_set=False) |
777 | + claim_static_ip_addresses.return_value = {} |
778 | + |
779 | + with TwistedLoggerFixture() as twisted_log: |
780 | + Node.objects.start_nodes( |
781 | + list(node.system_id for node in nodes), user) |
782 | + |
783 | + for node in nodes: |
784 | + self.expectThat(claim_static_ip_addresses, MockAnyCall(node)) |
785 | + # No complaints are made to the Twisted log. |
786 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
787 | + |
788 | + def test__claims_static_ip_addresses_for_allocated_nodes_only(self): |
789 | + user = factory.make_User() |
790 | + nodegroup = factory.make_NodeGroup() |
791 | + self.prepare_rpc_to_cluster(nodegroup) |
792 | + nodes = self.make_acquired_nodes_with_macs(user, nodegroup, count=2) |
793 | + |
794 | + # Change the status of the first node to something other than |
795 | + # allocated. |
796 | + broken_node, allocated_node = nodes |
797 | + broken_node.status = NODE_STATUS.BROKEN |
798 | + broken_node.save() |
799 | + |
800 | + claim_static_ip_addresses = self.patch_autospec( |
801 | + Node, "claim_static_ip_addresses", spec_set=False) |
802 | + claim_static_ip_addresses.return_value = {} |
803 | + |
804 | + with TwistedLoggerFixture() as twisted_log: |
805 | + Node.objects.start_nodes( |
806 | + list(node.system_id for node in nodes), user) |
807 | + |
808 | + # Only one call is made to claim_static_ip_addresses(), for the |
809 | + # still-allocated node. |
810 | + self.assertThat( |
811 | + claim_static_ip_addresses, |
812 | + MockCalledOnceWith(allocated_node)) |
813 | + # No complaints are made to the Twisted log. |
814 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
815 | + |
816 | + def test__updates_host_maps(self): |
817 | + user = factory.make_User() |
818 | + nodes = self.make_acquired_nodes_with_macs(user) |
819 | + |
820 | + update_host_maps = self.patch(node_module, "update_host_maps") |
821 | + update_host_maps.return_value = [] # No failures. |
822 | + |
823 | + with TwistedLoggerFixture() as twisted_log: |
824 | + Node.objects.start_nodes( |
825 | + list(node.system_id for node in nodes), user) |
826 | + |
827 | + # Host maps are updated. |
828 | + self.assertThat( |
829 | + update_host_maps, MockCalledOnceWith({ |
830 | + node.nodegroup: { |
831 | + ip_address.ip: mac.mac_address |
832 | + for ip_address in mac.ip_addresses.all() |
833 | + } |
834 | + for node in nodes |
835 | + for mac in node.mac_addresses_on_managed_interfaces() |
836 | + })) |
837 | + # No complaints are made to the Twisted log. |
838 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
839 | + |
840 | + def test__propagates_errors_when_updating_host_maps(self): |
841 | + user = factory.make_User() |
842 | + nodes = self.make_acquired_nodes_with_macs(user) |
843 | + |
844 | + update_host_maps = self.patch(node_module, "update_host_maps") |
845 | + update_host_maps.return_value = [ |
846 | + Failure(AssertionError("That is so not true")), |
847 | + Failure(ZeroDivisionError("I cannot defy mathematics")), |
848 | + ] |
849 | + |
850 | + with TwistedLoggerFixture() as twisted_log: |
851 | + error = self.assertRaises( |
852 | + MultipleFailures, Node.objects.start_nodes, |
853 | + list(node.system_id for node in nodes), user) |
854 | + |
855 | + self.assertSequenceEqual( |
856 | + update_host_maps.return_value, error.args) |
857 | + |
858 | + # No complaints are made to the Twisted log. |
859 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
860 | + |
861 | + def test__updates_dns(self): |
862 | + user = factory.make_User() |
863 | + nodes = self.make_acquired_nodes_with_macs(user) |
864 | + |
865 | + change_dns_zones = self.patch(dns_config, "change_dns_zones") |
866 | + |
867 | + with TwistedLoggerFixture() as twisted_log: |
868 | + Node.objects.start_nodes( |
869 | + list(node.system_id for node in nodes), user) |
870 | + |
871 | + self.assertThat( |
872 | + change_dns_zones, MockCalledOnceWith( |
873 | + {node.nodegroup for node in nodes})) |
874 | + |
875 | + # No complaints are made to the Twisted log. |
876 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
877 | + |
878 | + def test__starts_nodes(self): |
879 | + user = factory.make_User() |
880 | + nodes = self.make_acquired_nodes_with_macs(user) |
881 | + power_infos = list( |
882 | + node.get_effective_power_info() |
883 | + for node in nodes) |
884 | + |
885 | + power_on_nodes = self.patch(node_module, "power_on_nodes") |
886 | + power_on_nodes.return_value = {} |
887 | + |
888 | + with TwistedLoggerFixture() as twisted_log: |
889 | + Node.objects.start_nodes( |
890 | + list(node.system_id for node in nodes), user) |
891 | + |
892 | + self.assertThat(power_on_nodes, MockCalledOnceWith(ANY)) |
893 | + |
894 | + nodes_start_info_observed = power_on_nodes.call_args[0][0] |
895 | + nodes_start_info_expected = [ |
896 | + (node.system_id, node.hostname, node.nodegroup.uuid, power_info) |
897 | + for node, power_info in izip(nodes, power_infos) |
898 | + ] |
899 | + |
900 | + # If the following fails the diff is big, but it's useful. |
901 | + self.maxDiff = None |
902 | + |
903 | + self.assertItemsEqual( |
904 | + nodes_start_info_expected, |
905 | + nodes_start_info_observed) |
906 | + |
907 | + # No complaints are made to the Twisted log. |
908 | + self.assertFalse(twisted_log.containsError(), twisted_log.output) |
909 | + |
910 | + def test__raises_failures_for_nodes_that_cannot_be_started(self): |
911 | + power_on_nodes = self.patch(node_module, "power_on_nodes") |
912 | + power_on_nodes.return_value = { |
913 | + factory.make_name("system_id"): defer.fail( |
914 | + ZeroDivisionError("Defiance is futile")), |
915 | + factory.make_name("system_id"): defer.succeed({}), |
916 | + } |
917 | + |
918 | + failures = self.assertRaises( |
919 | + MultipleFailures, Node.objects.start_nodes, [], |
920 | + factory.make_User()) |
921 | + [failure] = failures.args |
922 | + self.assertThat(failure.value, IsInstance(ZeroDivisionError)) |
923 | + |
924 | + def test__marks_allocated_node_as_deploying(self): |
925 | + user = factory.make_User() |
926 | + [node] = self.make_acquired_nodes_with_macs(user, count=1) |
927 | + nodes_started = Node.objects.start_nodes([node.system_id], user) |
928 | + self.assertItemsEqual([node], nodes_started) |
929 | + self.assertEqual( |
930 | + NODE_STATUS.DEPLOYING, reload_object(node).status) |
931 | + |
932 | + def test__does_not_change_state_of_deployed_node(self): |
933 | + user = factory.make_User() |
934 | + node = factory.make_Node( |
935 | + power_type='ether_wake', status=NODE_STATUS.DEPLOYED, |
936 | + owner=user) |
937 | + factory.make_MACAddress(node=node) |
938 | + power_on_nodes = self.patch(node_module, "power_on_nodes") |
939 | + power_on_nodes.return_value = { |
940 | + node.system_id: defer.succeed({}), |
941 | + } |
942 | + nodes_started = Node.objects.start_nodes([node.system_id], user) |
943 | + self.assertItemsEqual([node], nodes_started) |
944 | + self.assertEqual( |
945 | + NODE_STATUS.DEPLOYED, reload_object(node).status) |
946 | + |
947 | + def test__only_returns_nodes_for_which_power_commands_have_been_sent(self): |
948 | + user = factory.make_User() |
949 | + node1, node2 = self.make_acquired_nodes_with_macs(user, count=2) |
950 | + node1.power_type = 'ether_wake' # Can be started. |
951 | + node1.save() |
952 | + node2.power_type = '' # Undefined power type, cannot be started. |
953 | + node2.save() |
954 | + nodes_started = Node.objects.start_nodes( |
955 | + [node1.system_id, node2.system_id], user) |
956 | + self.assertItemsEqual([node1], nodes_started) |
957 | + |
958 | + def test__does_not_try_to_start_nodes_not_allocated_to_user(self): |
959 | + user1 = factory.make_User() |
960 | + [node1] = self.make_acquired_nodes_with_macs(user1, count=1) |
961 | + node1.power_type = 'ether_wake' # can be started. |
962 | + node1.save() |
963 | + user2 = factory.make_User() |
964 | + [node2] = self.make_acquired_nodes_with_macs(user2, count=1) |
965 | + node2.power_type = 'ether_wake' # can be started. |
966 | + node2.save() |
967 | + |
968 | + self.patch(node_module, 'power_on_nodes') |
969 | + self.patch(node_module, 'wait_for_power_commands') |
970 | + nodes_started = Node.objects.start_nodes( |
971 | + [node1.system_id, node2.system_id], user1) |
972 | + |
973 | + # node2 is excluded: user1 cannot edit it, so no power command |
974 | + # was sent for it and it isn't returned by start_nodes(). |
975 | + # test__only_returns_nodes_for_which_power_commands_have_been_sent() |
976 | + # demonstrates this behaviour. |
977 | + self.assertItemsEqual([node1], nodes_started) |
978 | + |
979 | + # XXX: GavinPanella 2014-10-30 bug=1387696: Flaky test, returning wrong IP |
980 | + # addresses. Appears unrelated to the changes that highlighted this. |
981 | + @skip("Flaky; returns incorrect IP addresses") |
982 | + def test__does_not_generate_host_maps_if_not_on_managed_interface(self): |
983 | + cluster = factory.make_NodeGroup() |
984 | + managed_interface = factory.make_NodeGroupInterface( |
985 | + nodegroup=cluster, |
986 | + management=NODEGROUPINTERFACE_MANAGEMENT.DHCP_AND_DNS) |
987 | + unmanaged_interface = factory.make_NodeGroupInterface( |
988 | + nodegroup=cluster, |
989 | + management=NODEGROUPINTERFACE_MANAGEMENT.DEFAULT) |
990 | + user = factory.make_User() |
991 | + [node1, node2] = self.make_acquired_nodes_with_macs( |
992 | + user, nodegroup=cluster, count=2) |
993 | + # Give each node a PXE MAC address on one of the cluster's |
994 | + # interfaces. |
995 | + node1_mac = node1.get_pxe_mac() |
996 | + node1_mac.cluster_interface = managed_interface |
997 | + node1_mac.save() |
998 | + node2_mac = node2.get_pxe_mac() |
999 | + node2_mac.cluster_interface = unmanaged_interface |
1000 | + node2_mac.save() |
1000 | + |
1001 | + node1_ip = factory.make_ipv4_address() |
1002 | + claim_static_ip_addresses = self.patch( |
1003 | + node_module.Node, 'claim_static_ip_addresses') |
1004 | + claim_static_ip_addresses.side_effect = [ |
1005 | + [(node1_ip, node1_mac.mac_address)], |
1006 | + [(factory.make_ipv4_address(), node2_mac.mac_address)], |
1007 | + ] |
1008 | + |
1009 | + update_host_maps = self.patch(node_module, "update_host_maps") |
1010 | + Node.objects.start_nodes([node1.system_id, node2.system_id], user) |
1011 | + self.expectThat(update_host_maps, MockCalledOnceWith(ANY)) |
1012 | + |
1013 | + observed_static_mappings = update_host_maps.call_args[0][0] |
1014 | + |
1015 | + [observed_cluster] = observed_static_mappings.keys() |
1016 | + self.expectThat(observed_cluster.uuid, Equals(cluster.uuid)) |
1017 | + |
1018 | + observed_claims = observed_static_mappings.values() |
1019 | + self.expectThat( |
1020 | + observed_claims, |
1021 | + Equals([{node1_ip: node1_mac.mac_address}])) |
1022 | + |
1023 | + |
1024 | +class NodeManagerTest_StopNodes(MAASServerTestCase): |
1025 | + |
1026 | + def make_nodes_with_macs(self, user, nodegroup=None, count=3): |
1027 | + nodes = [] |
1028 | + for _ in xrange(count): |
1029 | + node = factory.make_node_with_mac_attached_to_nodegroupinterface( |
1030 | + nodegroup=nodegroup, status=NODE_STATUS.READY, |
1031 | + power_type='virsh') |
1032 | + node.acquire(user) |
1033 | + nodes.append(node) |
1034 | + return nodes |
1035 | + |
1036 | + def test_stop_nodes_stops_nodes(self): |
1037 | + wait_for_power_commands = self.patch_autospec( |
1038 | + node_module, 'wait_for_power_commands') |
1039 | + power_off_nodes = self.patch_autospec(node_module, "power_off_nodes") |
1040 | + power_off_nodes.side_effect = lambda nodes: { |
1041 | + system_id: Deferred() for system_id, _, _, _ in nodes} |
1042 | + |
1043 | + user = factory.make_User() |
1044 | + nodes = self.make_nodes_with_macs(user) |
1045 | + power_infos = list(node.get_effective_power_info() for node in nodes) |
1046 | + |
1047 | + stop_mode = factory.make_name('stop-mode') |
1048 | + nodes_stopped = Node.objects.stop_nodes( |
1049 | + list(node.system_id for node in nodes), user, stop_mode) |
1050 | + |
1051 | + self.assertItemsEqual(nodes, nodes_stopped) |
1052 | + self.assertThat(power_off_nodes, MockCalledOnceWith(ANY)) |
1053 | + self.assertThat(wait_for_power_commands, MockCalledOnceWith(ANY)) |
1054 | + |
1055 | + nodes_stop_info_observed = power_off_nodes.call_args[0][0] |
1056 | + nodes_stop_info_expected = [ |
1057 | + (node.system_id, node.hostname, node.nodegroup.uuid, power_info) |
1058 | + for node, power_info in izip(nodes, power_infos) |
1059 | + ] |
1060 | + |
1061 | + # The stop mode is added into the power info that's passed. |
1062 | + for _, _, _, power_info in nodes_stop_info_expected: |
1063 | + power_info.power_parameters['power_off_mode'] = stop_mode |
1064 | + |
1065 | + # If the following fails the diff is big, but it's useful. |
1066 | + self.maxDiff = None |
1067 | + |
1068 | + self.assertItemsEqual( |
1069 | + nodes_stop_info_expected, |
1070 | + nodes_stop_info_observed) |
1071 | + |
1072 | + def test_stop_nodes_ignores_uneditable_nodes(self): |
1073 | + owner = factory.make_User() |
1074 | + nodes = self.make_nodes_with_macs(owner) |
1075 | + |
1076 | + user = factory.make_User() |
1077 | + nodes_stopped = Node.objects.stop_nodes( |
1078 | + list(node.system_id for node in nodes), user) |
1079 | + |
1080 | + self.assertItemsEqual([], nodes_stopped) |
1081 | + |
1082 | + def test_stop_nodes_does_not_attempt_power_off_if_no_power_type(self): |
1083 | + # If the node has a power_type set to UNKNOWN_POWER_TYPE, stop_nodes() |
1084 | + # won't attempt to power it off. |
1085 | + user = factory.make_User() |
1086 | + [node] = self.make_nodes_with_macs(user, count=1) |
1087 | + node.power_type = "" |
1088 | + node.save() |
1089 | + |
1090 | + nodes_stopped = Node.objects.stop_nodes([node.system_id], user) |
1091 | + self.assertItemsEqual([], nodes_stopped) |
1092 | + |
1093 | + def test_stop_nodes_does_not_attempt_power_off_if_cannot_be_stopped(self): |
1094 | + # If the node has a power_type that MAAS knows stopping does not work, |
1095 | + # stop_nodes() won't attempt to power it off. |
1096 | + user = factory.make_User() |
1097 | + [node] = self.make_nodes_with_macs(user, count=1) |
1098 | + node.power_type = "ether_wake" |
1099 | + node.save() |
1100 | + |
1101 | + nodes_stopped = Node.objects.stop_nodes([node.system_id], user) |
1102 | + self.assertItemsEqual([], nodes_stopped) |
1103 | + |
1104 | + def test__raises_failures_for_nodes_that_cannot_be_stopped(self): |
1105 | + power_off_nodes = self.patch(node_module, "power_off_nodes") |
1106 | + power_off_nodes.return_value = { |
1107 | + factory.make_name("system_id"): defer.fail( |
1108 | + ZeroDivisionError("Ee by gum lad, that's a rum 'un.")), |
1109 | + factory.make_name("system_id"): defer.succeed({}), |
1110 | + } |
1111 | + |
1112 | + failures = self.assertRaises( |
1113 | + MultipleFailures, Node.objects.stop_nodes, [], factory.make_User()) |
1114 | + [failure] = failures.args |
1115 | + self.assertThat(failure.value, IsInstance(ZeroDivisionError)) |
1116 | + |
1117 | + |
1118 | +>>>>>>> MERGE-SOURCE |
1119 | class TestNodeTransitionMonitors(MAASServerTestCase): |
1120 | |
1121 | def prepare_rpc(self): |
1122 | |
1123 | === modified file 'src/maasserver/node_action.py' |
1124 | === modified file 'src/maasserver/tests/test_middleware.py' |
1125 | === modified file 'src/maasserver/tests/test_node_action.py' |
1126 | === modified file 'src/maasserver/views/nodes.py' |
1127 | --- src/maasserver/views/nodes.py 2014-10-24 23:45:33 +0000 |
1128 | +++ src/maasserver/views/nodes.py 2014-10-30 18:48:30 +0000 |
1129 | @@ -591,8 +591,13 @@ |
1130 | context['nodecommissionresults'] = commissioning_results |
1131 | |
1132 | installation_results = NodeResult.objects.filter( |
1133 | +<<<<<<< TREE |
1134 | node=node, result_type=RESULT_TYPE.INSTALLATION) |
1135 | if len(installation_results) > 1: |
1136 | +======= |
1137 | + node=node, result_type=RESULT_TYPE.INSTALLING) |
1138 | + if len(installation_results) > 1: |
1139 | +>>>>>>> MERGE-SOURCE |
1140 | for result in installation_results: |
1141 | result.name = re.sub('[_.]', ' ', result.name) |
1142 | context['nodeinstallresults'] = installation_results |
1143 | |
1144 | === modified file 'src/maasserver/views/tests/test_nodes.py' |
1145 | --- src/maasserver/views/tests/test_nodes.py 2014-10-29 04:34:42 +0000 |
1146 | +++ src/maasserver/views/tests/test_nodes.py 2014-10-30 18:48:30 +0000 |
1147 | @@ -1528,6 +1528,7 @@ |
1148 | else: |
1149 | self.fail("Found more than one link: %s" % links) |
1150 | |
1151 | +<<<<<<< TREE |
1152 | def get_installation_results_link(self, display): |
1153 | """Find the results link in `display`. |
1154 | |
1155 | @@ -1545,6 +1546,25 @@ |
1156 | elif len(links) > 1: |
1157 | return links |
1158 | |
1159 | +======= |
1160 | + def get_installing_results_link(self, display): |
1161 | + """Find the results link in `display`. |
1162 | + |
1163 | + :param display: Results display section for a node, as returned by |
1164 | + `request_results_display`. |
1165 | + :return: `lxml.html.HtmlElement` for the link to the node's |
1166 | + installation results, as found in `display`; or `None` if it was |
1167 | + not present. |
1168 | + """ |
1169 | + links = display.cssselect('a') |
1170 | + if len(links) == 0: |
1171 | + return None |
1172 | + elif len(links) == 1: |
1173 | + return links[0] |
1174 | + elif len(links) > 1: |
1175 | + return links |
1176 | + |
1177 | +>>>>>>> MERGE-SOURCE |
1178 | def test_view_node_links_to_commissioning_results_if_appropriate(self): |
1179 | self.client_log_in(as_admin=True) |
1180 | result = factory.make_NodeResult_for_commissioning() |
1181 | @@ -1624,8 +1644,13 @@ |
1182 | self.logged_in_user = user |
1183 | result = factory.make_NodeResult_for_installation(node=node) |
1184 | section = self.request_results_display( |
1185 | +<<<<<<< TREE |
1186 | result.node, RESULT_TYPE.INSTALLATION) |
1187 | link = self.get_installation_results_link(section) |
1188 | +======= |
1189 | + result.node, RESULT_TYPE.INSTALLING) |
1190 | + link = self.get_installing_results_link(section) |
1191 | +>>>>>>> MERGE-SOURCE |
1192 | self.assertNotIn( |
1193 | normalise_whitespace(link.text_content()), |
1194 | ('', None)) |
1195 | @@ -1667,6 +1692,35 @@ |
1196 | ContainsAll( |
1197 | [normalise_whitespace(link.text_content()) for link in links])) |
1198 | |
1199 | + def test_view_node_shows_single_installing_result(self): |
1200 | + self.client_log_in(as_admin=True) |
1201 | + result = factory.make_NodeResult_for_installing() |
1202 | + section = self.request_results_display( |
1203 | + result.node, RESULT_TYPE.INSTALLING) |
1204 | + link = self.get_installing_results_link(section) |
1205 | + self.assertEqual( |
1206 | + "install log", |
1207 | + normalise_whitespace(link.text_content())) |
1208 | + |
1209 | + def test_view_node_shows_multiple_installing_results(self): |
1210 | + self.client_log_in(as_admin=True) |
1211 | + node = factory.make_Node() |
1212 | + num_results = randint(2, 5) |
1213 | + results_names = [] |
1214 | + for _ in range(num_results): |
1215 | + node_result = factory.make_NodeResult_for_installing(node=node) |
1216 | + results_names.append(node_result.name) |
1217 | + section = self.request_results_display( |
1218 | + node, RESULT_TYPE.INSTALLING) |
1219 | + links = self.get_installing_results_link(section) |
1220 | + expected_results_names = list(reversed(results_names)) |
1221 | + observed_results_names = list( |
1222 | + normalise_whitespace(link.text_content()) |
1223 | + for link in links) |
1224 | + self.assertListEqual( |
1225 | + expected_results_names, |
1226 | + observed_results_names) |
1227 | + |
1228 | |
1229 | class NodeListingSelectionJSControls(SeleniumTestCase): |
1230 | |
1231 | |
1232 | === modified file 'src/metadataserver/api.py' |
1233 | --- src/metadataserver/api.py 2014-10-24 23:45:33 +0000 |
1234 | +++ src/metadataserver/api.py 2014-10-30 18:48:30 +0000 |
1235 | @@ -76,7 +76,7 @@ |
1236 | NodeUserData, |
1237 | ) |
1238 | from metadataserver.models.commissioningscript import ( |
1239 | - BUILTIN_COMMISSIONING_SCRIPTS, |
1240 | + get_builtin_commissioning_scripts, |
1241 | ) |
1242 | from piston.utils import rc |
1243 | from provisioningserver.events import ( |
1244 | @@ -229,8 +229,10 @@ |
1245 | script_result = int(request.POST.get('script_result', 0)) |
1246 | for name, uploaded_file in request.FILES.items(): |
1247 | raw_content = uploaded_file.read() |
1248 | - if name in BUILTIN_COMMISSIONING_SCRIPTS: |
1249 | - postprocess_hook = BUILTIN_COMMISSIONING_SCRIPTS[name]['hook'] |
1250 | + builtin_commissioning_scripts = ( |
1251 | + get_builtin_commissioning_scripts()) |
1252 | + if name in builtin_commissioning_scripts: |
1253 | + postprocess_hook = builtin_commissioning_scripts[name]['hook'] |
1254 | postprocess_hook( |
1255 | node=node, output=raw_content, |
1256 | exit_status=script_result) |
1257 | @@ -280,7 +282,15 @@ |
1258 | |
1259 | if node.status == NODE_STATUS.COMMISSIONING: |
1260 | self._store_commissioning_results(node, request) |
1261 | - store_node_power_parameters(node, request) |
1262 | + # XXX 2014-10-21 newell, bug=1382075 |
1263 | + # Auto detection for IPMI tries to save power parameters |
1264 | + # for Moonshot. This causes issues if the node's power type |
1265 | + # is already MSCM as it uses SSH instead of IPMI. This fix |
1266 | + # is temporary as power parameters should not be overwritten |
1267 | + # during commissioning because MAAS already has knowledge to |
1268 | + # boot the node. |
1269 | + if node.power_type != "mscm": |
1270 | + store_node_power_parameters(node, request) |
1271 | target_status = self.signaling_statuses.get(status) |
1272 | elif node.status == NODE_STATUS.DEPLOYING: |
1273 | self._store_installation_results(node, request) |
1274 | |
1275 | === modified file 'src/metadataserver/models/commissioningscript.py' |
1276 | --- src/metadataserver/models/commissioningscript.py 2014-10-29 22:37:02 +0000 |
1277 | +++ src/metadataserver/models/commissioningscript.py 2014-10-30 18:48:30 +0000 |
1278 | @@ -14,8 +14,8 @@ |
1279 | |
1280 | __metaclass__ = type |
1281 | __all__ = [ |
1282 | - 'BUILTIN_COMMISSIONING_SCRIPTS', |
1283 | 'CommissioningScript', |
1284 | + 'get_builtin_commissioning_scripts', |
1285 | 'inject_lldp_result', |
1286 | 'inject_lshw_result', |
1287 | 'inject_result', |
1288 | @@ -43,6 +43,7 @@ |
1289 | ) |
1290 | from lxml import etree |
1291 | from maasserver.fields import MAC |
1292 | +from maasserver.models import Config |
1293 | from maasserver.models.tag import Tag |
1294 | from metadataserver import DefaultMeta |
1295 | from metadataserver.enum import RESULT_TYPE |
1296 | @@ -352,6 +353,7 @@ |
1297 | LIST_MODALIASES_OUTPUT_NAME = '00-maas-04-list-modaliases.out' |
1298 | LIST_MODALIASES_SCRIPT = \ |
1299 | 'find /sys -name modalias -print0 | xargs -0 cat | sort -u' |
1300 | +DHCP_UNCONFIGURED_INTERFACES_NAME = '00-maas-05-dhcp-unconfigured-ifaces' |
1301 | |
1302 | |
1303 | def null_hook(node, output, exit_status): |
1304 | @@ -397,7 +399,7 @@ |
1305 | 'content': LIST_MODALIASES_SCRIPT.encode('ascii'), |
1306 | 'hook': null_hook, |
1307 | }, |
1308 | - '00-maas-05-dhcp-unconfigured-ifaces': { |
1309 | + DHCP_UNCONFIGURED_INTERFACES_NAME: { |
1310 | 'content': make_function_call_script(dhcp_explore), |
1311 | 'hook': null_hook, |
1312 | }, |
1313 | @@ -430,6 +432,20 @@ |
1314 | add_names_to_scripts(BUILTIN_COMMISSIONING_SCRIPTS) |
1315 | |
1316 | |
1317 | +def get_builtin_commissioning_scripts(): |
1318 | + """Get the builtin commissioning scripts. |
1319 | + |
1320 | + The builtin scripts exposed may vary based on config settings. |
1321 | + """ |
1322 | + scripts = BUILTIN_COMMISSIONING_SCRIPTS.copy() |
1323 | + |
1324 | + config_key = 'enable_dhcp_discovery_on_unconfigured_interfaces' |
1325 | + if not Config.objects.get_config(config_key): |
1326 | + del scripts[DHCP_UNCONFIGURED_INTERFACES_NAME] |
1327 | + |
1328 | + return scripts |
1329 | + |
1330 | + |
1331 | def add_script_to_archive(tarball, name, content, mtime): |
1332 | """Add a commissioning script to an archive of commissioning scripts.""" |
1333 | assert isinstance(content, bytes), "Script content must be binary." |
1334 | @@ -447,7 +463,7 @@ |
1335 | """Utility for the collection of `CommissioningScript`s.""" |
1336 | |
1337 | def _iter_builtin_scripts(self): |
1338 | - for script in BUILTIN_COMMISSIONING_SCRIPTS.itervalues(): |
1339 | + for script in get_builtin_commissioning_scripts().itervalues(): |
1340 | yield script['name'], script['content'] |
1341 | |
1342 | def _iter_user_scripts(self): |
1343 | @@ -500,8 +516,9 @@ |
1344 | NodeResult.objects.store_data( |
1345 | node, name, script_result=exit_status, |
1346 | result_type=RESULT_TYPE.COMMISSIONING, data=Bin(output)) |
1347 | - if name in BUILTIN_COMMISSIONING_SCRIPTS: |
1348 | - postprocess_hook = BUILTIN_COMMISSIONING_SCRIPTS[name]['hook'] |
1349 | + builtin_commissioning_scripts = get_builtin_commissioning_scripts() |
1350 | + if name in builtin_commissioning_scripts: |
1351 | + postprocess_hook = builtin_commissioning_scripts[name]['hook'] |
1352 | postprocess_hook(node=node, output=output, exit_status=exit_status) |
1353 | |
1354 | |
1355 | |
1356 | === modified file 'src/metadataserver/models/tests/test_noderesults.py' |
1357 | --- src/metadataserver/models/tests/test_noderesults.py 2014-10-29 22:37:02 +0000 |
1358 | +++ src/metadataserver/models/tests/test_noderesults.py 2014-10-30 18:48:30 +0000 |
1359 | @@ -35,6 +35,7 @@ |
1360 | |
1361 | from fixtures import FakeLogger |
1362 | from maasserver.fields import MAC |
1363 | +from maasserver.models import Config |
1364 | from maasserver.models.tag import Tag |
1365 | from maasserver.testing.factory import factory |
1366 | from maasserver.testing.orm import reload_object |
1367 | @@ -55,7 +56,10 @@ |
1368 | ) |
1369 | from metadataserver.models.commissioningscript import ( |
1370 | ARCHIVE_PREFIX, |
1371 | + BUILTIN_COMMISSIONING_SCRIPTS, |
1372 | + DHCP_UNCONFIGURED_INTERFACES_NAME, |
1373 | extract_router_mac_addresses, |
1374 | + get_builtin_commissioning_scripts, |
1375 | inject_lldp_result, |
1376 | inject_lshw_result, |
1377 | inject_result, |
1378 | @@ -676,3 +680,26 @@ |
1379 | logger = self.useFixture(FakeLogger(name='commissioningscript')) |
1380 | update_hardware_details(factory.make_Node(), b"garbage", exit_status=1) |
1381 | self.assertEqual("", logger.output) |
1382 | + |
1383 | + |
1384 | +class TestGetBuiltinCommissioningScripts(MAASServerTestCase): |
1385 | + |
1386 | + def test__includes_all_builtin_commissioning_scripts_by_default(self): |
1387 | + self.assertItemsEqual( |
1388 | + BUILTIN_COMMISSIONING_SCRIPTS, |
1389 | + get_builtin_commissioning_scripts(), |
1390 | + ) |
1391 | + |
1392 | + def test__excludes_dhcp_discovery_when_disabled(self): |
1393 | + Config.objects.set_config( |
1394 | + 'enable_dhcp_discovery_on_unconfigured_interfaces', False) |
1395 | + self.assertNotIn( |
1396 | + DHCP_UNCONFIGURED_INTERFACES_NAME, |
1397 | + get_builtin_commissioning_scripts()) |
1398 | + |
1399 | + def test__includes_dhcp_discovery_when_enabled(self): |
1400 | + Config.objects.set_config( |
1401 | + 'enable_dhcp_discovery_on_unconfigured_interfaces', True) |
1402 | + self.assertIn( |
1403 | + DHCP_UNCONFIGURED_INTERFACES_NAME, |
1404 | + get_builtin_commissioning_scripts()) |
1405 | |
1406 | === modified file 'src/metadataserver/tests/test_api.py' |
1407 | --- src/metadataserver/tests/test_api.py 2014-10-24 23:45:33 +0000 |
1408 | +++ src/metadataserver/tests/test_api.py 2014-10-30 18:48:30 +0000 |
1409 | @@ -793,6 +793,21 @@ |
1410 | node = reload_object(node) |
1411 | self.assertEqual(0, len(node.tags.all())) |
1412 | |
1413 | + def test_signal_current_power_type_mscm_does_not_store_params(self): |
1414 | + node = factory.make_Node( |
1415 | + power_type="mscm", status=NODE_STATUS.COMMISSIONING) |
1416 | + client = make_node_client(node=node) |
1417 | + params = dict( |
1418 | + power_address=factory.make_string(), |
1419 | + power_user=factory.make_string(), |
1420 | + power_pass=factory.make_string()) |
1421 | + response = call_signal( |
1422 | + client, power_type="moonshot", power_parameters=json.dumps(params)) |
1423 | + self.assertEqual(httplib.OK, response.status_code, response.content) |
1424 | + node = reload_object(node) |
1425 | + self.assertEqual("mscm", node.power_type) |
1426 | + self.assertNotEqual(params, node.power_parameters) |
1427 | + |
1428 | def test_signal_refuses_bad_power_type(self): |
1429 | node = factory.make_Node(status=NODE_STATUS.COMMISSIONING) |
1430 | client = make_node_client(node=node) |
1431 | |
1432 | === modified file 'src/provisioningserver/rpc/boot_images.py' |
1433 | --- src/provisioningserver/rpc/boot_images.py 2014-10-30 13:17:51 +0000 |
1434 | +++ src/provisioningserver/rpc/boot_images.py 2014-10-30 18:48:30 +0000 |
1435 | @@ -18,8 +18,12 @@ |
1436 | "is_import_boot_images_running", |
1437 | ] |
1438 | |
1439 | +<<<<<<< TREE |
1440 | import os |
1441 | from urlparse import urlparse |
1442 | +======= |
1443 | +from urlparse import urlparse |
1444 | +>>>>>>> MERGE-SOURCE |
1445 | |
1446 | from provisioningserver import concurrency |
1447 | from provisioningserver.auth import get_maas_user_gpghome |
1448 | @@ -47,6 +51,16 @@ |
1449 | return hosts |
1450 | |
1451 | |
1452 | +def get_hosts_from_sources(sources): |
1453 | + """Return set of hosts that are contained in the given sources.""" |
1454 | + hosts = set() |
1455 | + for source in sources: |
1456 | + url = urlparse(source['url']) |
1457 | + if url.hostname is not None: |
1458 | + hosts.add(url.hostname) |
1459 | + return hosts |
1460 | + |
1461 | + |
1462 | @synchronous |
1463 | def _run_import(sources, http_proxy=None, https_proxy=None): |
1464 | """Run the import. |
1465 | |
1466 | === modified file 'src/provisioningserver/rpc/clusterservice.py' |
1467 | === modified file 'src/provisioningserver/rpc/power.py' |
1468 | === modified file 'src/provisioningserver/rpc/tests/test_boot_images.py' |
1469 | --- src/provisioningserver/rpc/tests/test_boot_images.py 2014-10-30 13:17:51 +0000 |
1470 | +++ src/provisioningserver/rpc/tests/test_boot_images.py 2014-10-30 18:48:30 +0000 |
1471 | @@ -76,6 +76,13 @@ |
1472 | self.assertItemsEqual(hosts, get_hosts_from_sources(sources)) |
1473 | |
1474 | |
1475 | +class TestGetHostsFromSources(PservTestCase): |
1476 | + |
1477 | + def test__returns_set_of_hosts_from_sources(self): |
1478 | + sources, hosts = make_sources() |
1479 | + self.assertItemsEqual(hosts, get_hosts_from_sources(sources)) |
1480 | + |
1481 | + |
1482 | class TestRunImport(PservTestCase): |
1483 | |
1484 | def make_archive_url(self, name=None): |
1485 | |
1486 | === modified file 'src/provisioningserver/rpc/tests/test_clusterservice.py' |
1487 | === modified file 'src/provisioningserver/rpc/tests/test_power.py' |
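As a side note for reviewers, the new `get_hosts_from_sources` helper added to `src/provisioningserver/rpc/boot_images.py` above can be sketched in isolation. This is a minimal standalone version for illustration only (written against Python 3's `urllib.parse`, whereas the branch itself targets Python 2's `urlparse` module; the example sources are made up):

```python
from urllib.parse import urlparse

def get_hosts_from_sources(sources):
    """Return the set of hostnames contained in the given sources.

    Each source is a dict with a 'url' key. Sources whose URL carries
    no hostname (e.g. file:// paths) are skipped, matching the
    `url.hostname is not None` guard in the branch.
    """
    hosts = set()
    for source in sources:
        url = urlparse(source['url'])
        if url.hostname is not None:
            hosts.add(url.hostname)
    return hosts

# Hypothetical sources: one HTTP simplestreams mirror, one local path.
sources = [
    {'url': 'http://images.example.com/streams/v1/index.json'},
    {'url': 'file:///var/lib/maas/boot-resources'},
]
print(sorted(get_hosts_from_sources(sources)))  # ['images.example.com']
```

The `file://` URL is dropped because `urlparse` yields no hostname for it, which is why the helper returns a set of only the remote mirror hosts.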
Self-approve.
Already approved for 1.7 landing here: https://code.launchpad.net/~blake-rouse/maas/fix-1387133/+merge/240132