Merge lp:~jtv/maas/defer-task-logger into lp:~maas-committers/maas/trunk
Status: Merged
Approved by: Jeroen T. Vermeulen
Approved revision: no longer in the source branch.
Merged at revision: 1353
Proposed branch: lp:~jtv/maas/defer-task-logger
Merge into: lp:~maas-committers/maas/trunk
Diff against target: 352 lines (+63/-40), 6 files modified
  src/provisioningserver/boot_images.py (+4/-4)
  src/provisioningserver/dhcp/leases.py (+4/-4)
  src/provisioningserver/start_cluster_controller.py (+18/-9)
  src/provisioningserver/tags.py (+19/-16)
  src/provisioningserver/tests/test_start_cluster_controller.py (+8/-1)
  src/provisioningserver/tests/test_tags.py (+10/-6)
To merge this branch: bzr merge lp:~jtv/maas/defer-task-logger
Related bugs:
Reviewer: Gavin Panella (community), Approve
Review via email: mp+134679@code.launchpad.net
Commit message
Defer celery imports in pserv that may load config and issue warnings.
Description of the change
Discussed the details with Gavin. This avoids problems with maas-provision importing celery modules that, as a side effect, try to read celeryconfig.py. That module is not always available (e.g. on a dev system, or on a cluster controller that isn't also a region controller), and when celery hasn't been told to load the right config module it may output inappropriate warnings.
Ultimately the start_cluster_controller command will still need the problematic celery imports.
Jeroen
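The deferral the branch applies boils down to moving side-effectful imports from module scope into the functions that need them. A minimal, self-contained sketch of the pattern (using a stdlib module as a stand-in for celery.app, since the point is *when* the import runs, not what it does):

```python
import sys

# Stand-in for a side-effectful import like celery.app (which reads
# celeryconfig.py on import). Clear it first so the demo starts clean.
sys.modules.pop("colorsys", None)


def read_config_value():
    """Deferred import: the module is loaded only when this runs."""
    # Import lazily, as the branch does for celery.app, so that merely
    # importing *this* module triggers no config reading or warnings.
    import colorsys
    return colorsys.__name__


assert "colorsys" not in sys.modules  # nothing loaded at definition time
assert read_config_value() == "colorsys"
assert "colorsys" in sys.modules      # loaded on first call only
```

The cost is a small per-call import lookup (cached in sys.modules after the first call); the benefit is that commands which never touch Celery never trigger its config loading.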
Gavin Panella (allenap):
Julian Edwards (julian-edwards) wrote:
Gavin Panella (allenap) wrote:
On 19 November 2012 01:37, Julian Edwards wrote:
> On 17/11/12 00:59, Jeroen T. Vermeulen wrote:
>> Ultimately the start_cluster_controller command will still need the
>> problematic celery imports.
>
> Why? start_cluster_controller is not a task and has nothing to do
> with celery (other than execing a celery daemon).
In start_cluster_controller we use the logging
configuration given for Celery. This basically involves calling
setup_logging_subsystem. With this change, Celery
won't be pulled in at the moment start_cluster_controller is imported,
and therefore not when other maas-provision commands are used.
As to why setup_logging_subsystem is called at all: we want to
see start-up messages, i.e. while the cluster is polling the region
for acceptance. It seems sensible to send these messages to the worker
logs; I think it's the first place most people will look.
Julian Edwards (julian-edwards) wrote:
On 19/11/12 20:07, Gavin Panella wrote:
> In start_cluster_controller we use the logging
> configuration given for Celery.
This begs the next question, why? :)
As in, why not just the normal logging.
Gavin Panella (allenap) wrote:
On 19 November 2012 12:01, Julian Edwards <email address hidden> wrote:
> On 19/11/12 20:07, Gavin Panella wrote:
>> In start_cluster_controller we use the logging
>> configuration given for Celery.
>
> This begs the next question, why? :)
>
> As in, why not just the normal logging.
Because it goes nowhere, or nowhere obvious.
Julian Edwards (julian-edwards) wrote:
On 19/11/12 22:22, Gavin Panella wrote:
> On 19 November 2012 12:01, Julian Edwards <email address hidden> wrote:
>> On 19/11/12 20:07, Gavin Panella wrote:
>>> In start_cluster_controller we use the logging
>>> configuration given for Celery.
>>
>> This begs the next question, why? :)
>>
>> As in, why not just the normal logging.
>
> Because it goes nowhere, or nowhere obvious.
>
So what is the barrier to fixing that?
/me playing root-cause analysis
Gavin Panella (allenap) wrote:
On 19 November 2012 23:19, Julian Edwards <email address hidden> wrote:
> So what is the barrier to fixing that?
Nothing, it's fixed :)
I don't really know what the problem is now. The "hello region, I'm a
cluster, please recognise me" logs are now sent to the same log file
that celeryd logs to, which seems to be a fairly sensible place. We
/could/ add another log file, or reuse one, just for those messages,
but I can't see that we'd get a lot for the complexity it would bring.
I think we'd actually lose clarity.
Julian Edwards (julian-edwards) wrote:
On 20/11/12 19:37, Gavin Panella wrote:
> On 19 November 2012 23:19, Julian Edwards <email address hidden> wrote:
>> So what is the barrier to fixing that?
>
> Nothing, it's fixed :)
>
> I don't really know what the problem is now. The "hello region, I'm a
> cluster, please recognise me" logs are now sent to the same log file
> that celeryd logs to, which seems to be a fairly sensible place. We
> /could/ add another log file, or reuse one, just for those messages,
> but I can't see that we'd get a lot for the complexity it would bring.
> I think we'd actually lose clarity.
>
Ok fair enough.
I thought that it was still using celery logging somewhere.
Gavin Panella (allenap) wrote:
On 20 November 2012 10:01, Julian Edwards <email address hidden> wrote:
> I thought that it was still using celery logging somewhere.
It kind of is. It calls setup_logging_subsystem from
Celery. However, Celery uses the regular Python logging module, and
setup_logging_subsystem configures it according to
the Celery config module. The change that Jeroen made ensures that
Celery is only imported when you've unambiguously said "start me a
cluster controller", which, though misnamed, equates to starting
celeryd. Put another way, it's not importing any Celery code until
you've asked it to start Celery.
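Gavin's point, that Celery's task loggers and plain getLogger feed the same stdlib logging tree, so configuration installed later still routes earlier loggers' records, can be seen with a small stdlib-only sketch (module name borrowed from the diff; the handler is illustrative):

```python
import logging

records = []


class ListHandler(logging.Handler):
    """Collects messages so we can inspect how records are routed."""
    def emit(self, record):
        records.append(record.getMessage())


# A module-level logger created *before* any configuration exists,
# just like the `logger = getLogger(__name__)` lines in this diff.
logger = logging.getLogger("provisioningserver.tags")

# Configuration installed later (setup_logging_subsystem plays this
# role for Celery) still receives the earlier logger's records,
# because records propagate up the logger hierarchy to the root.
root = logging.getLogger()
root.addHandler(ListHandler())
root.setLevel(logging.DEBUG)

logger.info("hello region, I'm a cluster, please recognise me")
assert records == ["hello region, I'm a cluster, please recognise me"]
```

This is why swapping get_task_logger for getLogger changes nothing about where the messages end up once celeryd has set up logging.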
Preview Diff
=== modified file 'src/provisioningserver/boot_images.py'
--- src/provisioningserver/boot_images.py	2012-11-08 06:34:48 +0000
+++ src/provisioningserver/boot_images.py	2012-11-16 14:58:20 +0000
@@ -19,13 +19,13 @@
     ]
 
 import json
+from logging import getLogger
 
 from apiclient.maas_client import (
     MAASClient,
     MAASDispatcher,
     MAASOAuth,
     )
-from celery.log import get_task_logger
 from provisioningserver.auth import (
     get_recorded_api_credentials,
     get_recorded_maas_url,
@@ -35,7 +35,7 @@
 from provisioningserver.start_cluster_controller import get_cluster_uuid
 
 
-task_logger = get_task_logger(name=__name__)
+logger = getLogger(__name__)
 
 
 def get_cached_knowledge():
@@ -46,10 +46,10 @@
     """
     maas_url = get_recorded_maas_url()
     if maas_url is None:
-        task_logger.debug("Not reporting boot images: don't have API URL yet.")
+        logger.debug("Not reporting boot images: don't have API URL yet.")
     api_credentials = get_recorded_api_credentials()
     if api_credentials is None:
-        task_logger.debug("Not reporting boot images: don't have API key yet.")
+        logger.debug("Not reporting boot images: don't have API key yet.")
     return maas_url, api_credentials
 
=== modified file 'src/provisioningserver/dhcp/leases.py'
--- src/provisioningserver/dhcp/leases.py	2012-10-19 15:08:52 +0000
+++ src/provisioningserver/dhcp/leases.py	2012-11-16 14:58:20 +0000
@@ -34,6 +34,7 @@
 
 import errno
 import json
+from logging import getLogger
 from os import (
     fstat,
     stat,
@@ -45,7 +46,6 @@
     MAASOAuth,
     )
 from celery.app import app_or_default
-from celery.log import get_task_logger
 from provisioningserver import cache
 from provisioningserver.auth import (
     get_recorded_api_credentials,
@@ -55,7 +55,7 @@
 from provisioningserver.dhcp.leases_parser import parse_leases
 
 
-task_logger = get_task_logger(name=__name__)
+logger = getLogger(__name__)
 
 
 # Cache key for the modification time on last-processed leases file.
@@ -157,7 +157,7 @@
     if None in knowledge.values():
         # The MAAS server hasn't sent us enough information for us to do
         # this yet.  Leave it for another time.
-        task_logger.info(
+        logger.info(
             "Not sending DHCP leases to server: not all required knowledge "
             "received from server yet. "
             "Missing: %s"
@@ -189,7 +189,7 @@
         timestamp, leases = parse_result
         process_leases(timestamp, leases)
     else:
-        task_logger.info(
+        logger.info(
             "The DHCP leases file does not exist. This is only a problem if "
             "this cluster controller is managing its DHCP server. If that's "
             "the case then you need to install the 'maas-dhcp' package on "
=== modified file 'src/provisioningserver/start_cluster_controller.py'
--- src/provisioningserver/start_cluster_controller.py	2012-10-23 12:48:31 +0000
+++ src/provisioningserver/start_cluster_controller.py	2012-11-16 14:58:20 +0000
@@ -18,6 +18,7 @@
 from grp import getgrnam
 import httplib
 import json
+from logging import getLogger
 import os
 from pwd import getpwnam
 from time import sleep
@@ -31,15 +32,10 @@
     MAASDispatcher,
     NoAuth,
     )
-from celery.app import app_or_default
-from celery.log import (
-    get_task_logger,
-    setup_logging_subsystem,
-    )
 from provisioningserver.network import discover_networks
 
 
-task_logger = get_task_logger(name=__name__)
+logger = getLogger(__name__)
 
 
 class ClusterControllerRejected(Exception):
@@ -59,7 +55,7 @@
 
 
 def log_error(exception):
-    task_logger.info(
+    logger.info(
         "Could not register with region controller: %s."
         % exception.reason)
 
@@ -71,6 +67,9 @@
 
 def get_cluster_uuid():
     """Read this cluster's UUID from the config."""
+    # Import this lazily.  It reads config as a side effect, which can
+    # produce warnings.
+    from celery.app import app_or_default
     return app_or_default().conf.CLUSTER_UUID
 
 
@@ -145,7 +144,7 @@
     try:
         client.post('api/1.0/nodegroups/', 'refresh_workers')
     except URLError as e:
-        task_logger.warn(
+        logger.warn(
             "Could not request secrets from region controller: %s"
             % e.reason)
 
@@ -164,13 +163,23 @@
     start_celery(connection_details, user=user, group=group)
 
 
+def set_up_logging():
+    """Set up logging."""
+    # This import has side effects (it imports celeryconfig) and may
+    # produce warnings (if there is no celeryconfig).
+    # Postpone the import so that we don't go through that every time
+    # anything imports this module.
+    from celery.log import setup_logging_subsystem
+    setup_logging_subsystem()
+
+
 def run(args):
     """Start the cluster controller.
 
     If this system is still awaiting approval as a cluster controller, this
     command will keep looping until it gets a definite answer.
     """
-    setup_logging_subsystem()
+    set_up_logging()
     connection_details = register(args.server_url)
     while connection_details is None:
         sleep(60)
=== modified file 'src/provisioningserver/tags.py'
--- src/provisioningserver/tags.py	2012-10-26 10:24:52 +0000
+++ src/provisioningserver/tags.py	2012-11-16 14:58:20 +0000
@@ -19,6 +19,7 @@
 
 
 import httplib
+from logging import getLogger
 import urllib2
 
 from apiclient.maas_client import (
@@ -26,7 +27,6 @@
     MAASDispatcher,
     MAASOAuth,
     )
-from celery.log import get_task_logger
 from lxml import etree
 from provisioningserver.auth import (
     get_recorded_api_credentials,
@@ -36,7 +36,7 @@
 import simplejson as json
 
 
-task_logger = get_task_logger(name=__name__)
+logger = getLogger(__name__)
 
 
 class MissingCredentials(Exception):
@@ -53,15 +53,15 @@
     """
     maas_url = get_recorded_maas_url()
     if maas_url is None:
-        task_logger.error("Not updating tags: don't have API URL yet.")
+        logger.error("Not updating tags: don't have API URL yet.")
         return None, None
     api_credentials = get_recorded_api_credentials()
     if api_credentials is None:
-        task_logger.error("Not updating tags: don't have API key yet.")
+        logger.error("Not updating tags: don't have API key yet.")
         return None, None
     nodegroup_uuid = get_recorded_nodegroup_uuid()
     if nodegroup_uuid is None:
-        task_logger.error("Not updating tags: don't have UUID yet.")
+        logger.error("Not updating tags: don't have UUID yet.")
         return None, None
     client = MAASClient(MAASOAuth(*api_credentials), MAASDispatcher(),
                         maas_url)
@@ -119,7 +119,8 @@
     :param removed: Set of nodes to remove
     """
     path = '/api/1.0/tags/%s/' % (tag_name,)
-    task_logger.debug('Updating nodes for %s %s, adding %s removing %s'
+    logger.debug(
+        "Updating nodes for %s %s, adding %s removing %s"
         % (tag_name, uuid, len(added), len(removed)))
     try:
         return process_response(client.post(
@@ -131,7 +132,7 @@
             msg = e.fp.read()
         else:
             msg = e.msg
-        task_logger.info('Got a CONFLICT while updating tag: %s', msg)
+        logger.info("Got a CONFLICT while updating tag: %s", msg)
         return {}
         raise
 
@@ -148,8 +149,8 @@
     try:
         xml = etree.XML(hw_xml)
     except etree.XMLSyntaxError as e:
-        task_logger.debug('Invalid hardware_details for %s: %s'
-            % (system_id, e))
+        logger.debug(
+            "Invalid hardware_details for %s: %s" % (system_id, e))
     else:
         if xpath(xml):
             matched = True
@@ -166,16 +167,17 @@
     batch_size = DEFAULT_BATCH_SIZE
     all_matched = []
     all_unmatched = []
-    task_logger.debug('processing %d system_ids for tag %s nodegroup %s'
-        % (len(system_ids), tag_name, nodegroup_uuid))
+    logger.debug(
+        "processing %d system_ids for tag %s nodegroup %s"
+        % (len(system_ids), tag_name, nodegroup_uuid))
     for i in range(0, len(system_ids), batch_size):
         selected_ids = system_ids[i:i + batch_size]
         details = get_hardware_details_for_nodes(
             client, nodegroup_uuid, selected_ids)
         matched, unmatched = process_batch(xpath, details)
-        task_logger.debug(
-            'processing batch of %d ids received %d details'
-            ' (%d matched, %d unmatched)'
+        logger.debug(
+            "processing batch of %d ids received %d details"
+            " (%d matched, %d unmatched)"
             % (len(selected_ids), len(details), len(matched), len(unmatched)))
         all_matched.extend(matched)
         all_unmatched.extend(unmatched)
@@ -197,8 +199,9 @@
     """
     client, nodegroup_uuid = get_cached_knowledge()
     if not all([client, nodegroup_uuid]):
-        task_logger.error('Unable to update tag: %s for definition %r'
-            ' please refresh secrets, then rebuild this tag'
+        logger.error(
+            "Unable to update tag: %s for definition %r. "
+            "Please refresh secrets, then rebuild this tag."
            % (tag_name, tag_definition))
        raise MissingCredentials()
    # We evaluate this early, so we can fail before sending a bunch of data to
=== modified file 'src/provisioningserver/tests/test_start_cluster_controller.py'
--- src/provisioningserver/tests/test_start_cluster_controller.py	2012-10-23 12:48:31 +0000
+++ src/provisioningserver/tests/test_start_cluster_controller.py	2012-11-16 14:58:20 +0000
@@ -25,7 +25,10 @@
 
 from apiclient.maas_client import MAASDispatcher
 from apiclient.testing.django import parse_headers_and_body_with_django
-from fixtures import EnvironmentVariableFixture
+from fixtures import (
+    EnvironmentVariableFixture,
+    FakeLogger,
+    )
 from maastesting.factory import factory
 from mock import (
     call,
@@ -84,6 +87,10 @@
 
     def setUp(self):
         super(TestStartClusterController, self).setUp()
+
+        self.useFixture(FakeLogger())
+        self.patch(start_cluster_controller, 'set_up_logging')
+
         # Patch out anything that could be remotely harmful if we did it
         # accidentally in the test. Make the really outrageous ones
         # raise exceptions.
=== modified file 'src/provisioningserver/tests/test_tags.py'
--- src/provisioningserver/tests/test_tags.py	2012-10-24 16:29:02 +0000
+++ src/provisioningserver/tests/test_tags.py	2012-11-16 14:58:20 +0000
@@ -12,21 +12,21 @@
 __metaclass__ = type
 __all__ = []
 
+import httplib
+import urllib2
+
 from apiclient.maas_client import MAASClient
-import httplib
+from fixtures import FakeLogger
 from lxml import etree
 from maastesting.factory import factory
 from maastesting.fakemethod import (
     FakeMethod,
     MultiFakeMethod,
     )
-import urllib2
 from mock import MagicMock
-from provisioningserver.auth import (
-    get_recorded_nodegroup_uuid,
-    )
+from provisioningserver import tags
+from provisioningserver.auth import get_recorded_nodegroup_uuid
 from provisioningserver.testing.testcase import PservTestCase
-from provisioningserver import tags
 
 
 class FakeResponse:
@@ -41,6 +41,10 @@
 
 class TestTagUpdating(PservTestCase):
 
+    def setUp(self):
+        super(TestTagUpdating, self).setUp()
+        self.useFixture(FakeLogger())
+
     def test_get_cached_knowledge_knows_nothing(self):
         # If we haven't given it any secrets, we should get back nothing
         self.assertEqual((None, None), tags.get_cached_knowledge())
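The FakeLogger fixture the tests adopt (from the fixtures package) silences log output and lets tests assert on it. Its effect can be approximated with stdlib logging alone, which is roughly what it does under the hood:

```python
import logging
from io import StringIO

# Approximation of what fixtures.FakeLogger does in the new setUp()s:
# point the root logger at a string buffer so tests stay quiet and can
# assert on what was logged.
buffer = StringIO()
root = logging.getLogger()
saved_handlers, root.handlers = root.handlers, [logging.StreamHandler(buffer)]
saved_level = root.level
root.setLevel(logging.INFO)

logging.getLogger("provisioningserver.tags").error(
    "Not updating tags: don't have API URL yet.")

# Restore the previous configuration, as the fixture's cleanup would.
root.handlers, root.level = saved_handlers, saved_level

assert "don't have API URL yet" in buffer.getvalue()
```

This also explains why the tests needed the fixture at all: once the modules log through plain getLogger, their records propagate to the root logger, and without a capturing handler they would spill into test output.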