Merge lp:~anso/nova/newlog2 into lp:~hudson-openstack/nova/trunk
- newlog2
- Merge into trunk
Status: Merged
Approved by: Todd Willey
Approved revision: 524
Merged at revision: 528
Proposed branch: lp:~anso/nova/newlog2
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target:
5259 lines (+1375/-571) 65 files modified
bin/nova-api-paste (+1/-1) bin/nova-dhcpbridge (+10/-7) bin/nova-instancemonitor (+5/-2) bin/nova-logspool (+156/-0) bin/nova-manage (+12/-5) bin/nova-spoolsentry (+97/-0) nova/api/__init__.py (+0/-1) nova/api/ec2/__init__.py (+71/-23) nova/api/ec2/admin.py (+32/-6) nova/api/ec2/apirequest.py (+6/-6) nova/api/ec2/cloud.py (+31/-6) nova/api/ec2/metadatarequesthandler.py (+3/-4) nova/api/openstack/__init__.py (+4/-9) nova/api/openstack/servers.py (+8/-8) nova/auth/dbdriver.py (+0/-1) nova/auth/ldapdriver.py (+5/-3) nova/auth/manager.py (+50/-11) nova/auth/signer.py (+9/-6) nova/cloudpipe/pipelib.py (+2/-2) nova/compute/api.py (+17/-16) nova/compute/disk.py (+8/-7) nova/compute/manager.py (+59/-59) nova/compute/monitor.py (+20/-19) nova/crypto.py (+5/-4) nova/db/sqlalchemy/__init__.py (+5/-4) nova/exception.py (+3/-4) nova/fakerabbit.py (+12/-8) nova/flags.py (+5/-7) nova/image/glance.py (+11/-13) nova/log.py (+260/-0) nova/network/api.py (+3/-3) nova/network/linux_net.py (+10/-9) nova/network/manager.py (+10/-8) nova/objectstore/handler.py (+47/-21) nova/rpc.py (+14/-15) nova/scheduler/manager.py (+3/-2) nova/service.py (+5/-11) nova/tests/api/openstack/test_images.py (+0/-1) nova/tests/objectstore_unittest.py (+0/-2) nova/tests/test_access.py (+0/-1) nova/tests/test_auth.py (+5/-4) nova/tests/test_cloud.py (+9/-9) nova/tests/test_compute.py (+4/-4) nova/tests/test_log.py (+110/-0) nova/tests/test_network.py (+4/-5) nova/tests/test_quota.py (+0/-4) nova/tests/test_rpc.py (+6/-5) nova/tests/test_volume.py (+3/-3) nova/twistd.py (+3/-22) nova/utils.py (+11/-10) nova/virt/connection.py (+3/-2) nova/virt/hyperv.py (+34/-31) nova/virt/images.py (+4/-2) nova/virt/libvirt_conn.py (+32/-29) nova/virt/xenapi/fake.py (+13/-11) nova/virt/xenapi/vm_utils.py (+31/-37) nova/virt/xenapi/vmops.py (+14/-12) nova/virt/xenapi/volume_utils.py (+21/-23) nova/virt/xenapi/volumeops.py (+17/-14) nova/virt/xenapi_conn.py (+10/-6) nova/volume/api.py (+6/-4) nova/volume/driver.py (+6/-5) 
nova/volume/manager.py (+11/-10) nova/wsgi.py (+17/-4) setup.py (+2/-0)
To merge this branch: bzr merge lp:~anso/nova/newlog2
Related bugs:
Related blueprints:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Todd Willey (community) | Approve | | |
| Soren Hansen (community) | Approve | | |
| Vish Ishaya (community) | Approve | | |
Review via email: mp+45100@code.launchpad.net
Commit message
Description of the change
Wrap logs so we can:
* use a "context" kwarg to track requests all the way through the system
* use a custom formatter so we get the data we want (configurable with flags)
* allow additional formatting for debug statements for easier debugging
* add an AUDIT level, useful for noticing changes to system components
* use named logs instead of the general logger where it makes sense
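The first two bullets can be sketched with the standard library alone. This is an illustrative minimal version, not the branch's actual implementation (which lives in nova/log.py in the diff below); the `ContextAdapter` name and the numeric value of AUDIT are assumptions made here for the example:

```python
import logging

# AUDIT sits between INFO (20) and WARNING (30); the exact numeric value
# used by the branch is an assumption here.
AUDIT = 25
logging.addLevelName(AUDIT, 'AUDIT')


class ContextAdapter(logging.LoggerAdapter):
    """Accepts a `context` kwarg on every logging call and prefixes the
    message with the request id it carries, so one request can be traced
    across components."""

    def process(self, msg, kwargs):
        # Pop `context` so it is not passed through to the stdlib logger,
        # which would reject an unknown keyword argument.
        context = kwargs.pop('context', None)
        request_id = getattr(context, 'request_id', '-')
        return '[%s] %s' % (request_id, msg), kwargs

    def audit(self, msg, *args, **kwargs):
        """Log at the custom AUDIT level."""
        self.log(AUDIT, msg, *args, **kwargs)
```

Any call site can then do `LOG.audit("Deleting user: %s", name, context=ctxt)` and the request id travels with the record.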
Todd Willey (xtoddx) wrote:
Soren Hansen (soren) wrote:
What's the purpose of logspool and spoolsentry?
I wonder what the rationale behind the lack of timestamps is?
Also, I find it a tad annoying that the output from eventlet's wsgi container is interspersed with our own. It can be fixed easily by passing a log kwarg to eventlet that just discards it (an object with a write(*args) method that just returns does the trick), but I wonder if there's a reason you're not doing this?
Todd Willey (xtoddx) wrote:
Logspool takes exceptions and puts them in a spool directory named by request id so they can be reported on. Spoolsentry processes the spool directory and adds the exceptions to an instance of django-sentry (perhaps it should live in contrib?).
I've added a class that wraps a logger with write for eventlet.
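Such a wrapper can be as small as a file-like object handed to eventlet's wsgi server via its `log` kwarg. This is a sketch of the idea; the class name and exact shape of what landed in the branch are assumptions here:

```python
import logging


class WritableLogger(object):
    """File-like object for eventlet's wsgi server: forwards each line
    eventlet writes to a normal logger instead of letting it print
    straight into our output stream."""

    def __init__(self, logger, level=logging.DEBUG):
        self.logger = logger
        self.level = level

    def write(self, msg):
        # eventlet writes whole lines; strip the trailing newline so the
        # logger does not emit blank lines.
        self.logger.log(self.level, msg.rstrip())
```

Pointing the wrapped logger at no handlers (or a level above DEBUG) effectively discards eventlet's access lines, which is the behavior Soren described.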
Vish Ishaya (vishvananda) wrote:
Keeping this branch up to date with merges is a huge challenge, so let's get this merged.
Soren Hansen (soren) wrote:
Great, thanks for the clarifications.
I really appreciate the work that's gone into this. This is a massive improvement.
I still do think there ought to be timestamps by default, but I don't think that should block this work. I'll propose a patch that fixes that and approve this one now.
Soren Hansen (soren) wrote:
Whoops, forgot to vote.
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/api/
text conflict in nova/compute/api.py
text conflict in nova/compute/
text conflict in nova/image/
text conflict in nova/tests/
text conflict in nova/virt/
Vish Ishaya (vishvananda) wrote:
Retrying.
OpenStack Infra (hudson-openstack) wrote:
The attempt to merge lp:~anso/nova/newlog2 into lp:nova failed. Below is the output from the failed tests.
bin/nova-
^
JCR: Trailing whitespace is superfluous.
FBM: Except when it occurs as part of a blank line (i.e. the line is
nothing but whitespace). According to Python docs[1] a line with only
whitespace is considered a blank line, and is to be ignored. However,
matching a blank line to its indentation level avoids mistakenly
pasting code into the standard Python interpreter.
[1] http://
The warning returned varies on whether the line itself is blank, for easier
filtering for those who want to indent their blank lines.
Okay: spam(1)
W291: spam(1)\s
W293: class Foo(object):\n \n bang = 12
bin/nova-
class LogReader(object):
^
Separate top-level function and class definitions with two blank lines.
Method definitions inside a class are separated by a single blank line.
Extra blank lines may be used (sparingly) to separate groups of related
functions. Blank lines may be omitted between a bunch of related
one-liners (e.g. a set of dummy implementations).
Use blank lines in functions, sparingly, to indicate logical sections.
Okay: def a():\n pass\n\n\ndef b():\n pass
Okay: def a():\n pass\n\n\n# Foo\n# Bar\n\ndef b():\n pass
E301: class Foo:\n b = 0\n def bar():\n pass
E302: def a():\n pass\n\ndef b(n):\n pass
E303: def a():\n pass\n\n\n\ndef b(n):\n pass
E303: def a():\n\n\n\n pass
E304: @decorator\n\ndef a():\n pass
Todd Willey (xtoddx) wrote:
Retry after pep8 fixing.
OpenStack Infra (hudson-openstack) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
OpenStack Infra (hudson-openstack) wrote:
Attempt to merge into lp:nova failed due to conflicts:
text conflict in nova/db/
Todd Willey (xtoddx) wrote:
Retry after merge.
OpenStack Infra (hudson-openstack) wrote:
There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.
Vish Ishaya (vishvananda) wrote:
You have to wait for Launchpad to recognize the new revision before you hit
approve, so that the approved revision is set properly.
On Jan 7, 2011 11:55 AM, "OpenStack Hudson" <email address hidden> wrote:
> The proposal to merge lp:~anso/nova/newlog2 into lp:nova has been updated.
>
> Status: Approved => Needs review
>
> For more details, see:
> https:/
> --
> https:/
> You are reviewing the proposed merge of lp:~anso/nova/newlog2 into
lp:nova.
Todd Willey (xtoddx) wrote:
I really feel I should be able to merge this.
Preview Diff
1 | === modified file 'bin/nova-api-paste' |
2 | --- bin/nova-api-paste 2010-12-30 06:19:38 +0000 |
3 | +++ bin/nova-api-paste 2011-01-07 19:51:33 +0000 |
4 | @@ -21,7 +21,6 @@ |
5 | """Starter script for Nova API.""" |
6 | |
7 | import gettext |
8 | -import logging |
9 | import os |
10 | import sys |
11 | |
12 | @@ -38,6 +37,7 @@ |
13 | gettext.install('nova', unicode=1) |
14 | |
15 | from nova import flags |
16 | +from nova import log as logging |
17 | from nova import wsgi |
18 | |
19 | LOG = logging.getLogger('nova.api') |
20 | |
21 | === modified file 'bin/nova-dhcpbridge' |
22 | --- bin/nova-dhcpbridge 2010-12-17 01:29:26 +0000 |
23 | +++ bin/nova-dhcpbridge 2011-01-07 19:51:33 +0000 |
24 | @@ -22,7 +22,6 @@ |
25 | """ |
26 | |
27 | import gettext |
28 | -import logging |
29 | import os |
30 | import sys |
31 | |
32 | @@ -39,6 +38,7 @@ |
33 | from nova import context |
34 | from nova import db |
35 | from nova import flags |
36 | +from nova import log as logging |
37 | from nova import rpc |
38 | from nova import utils |
39 | from nova.network import linux_net |
40 | @@ -49,11 +49,13 @@ |
41 | flags.DECLARE('num_networks', 'nova.network.manager') |
42 | flags.DECLARE('update_dhcp_on_disassociate', 'nova.network.manager') |
43 | |
44 | +LOG = logging.getLogger('nova.dhcpbridge') |
45 | + |
46 | |
47 | def add_lease(mac, ip_address, _hostname, _interface): |
48 | """Set the IP that was assigned by the DHCP server.""" |
49 | if FLAGS.fake_rabbit: |
50 | - logging.debug("leasing ip") |
51 | + LOG.debug(_("leasing ip")) |
52 | network_manager = utils.import_object(FLAGS.network_manager) |
53 | network_manager.lease_fixed_ip(context.get_admin_context(), |
54 | mac, |
55 | @@ -68,14 +70,14 @@ |
56 | |
57 | def old_lease(mac, ip_address, hostname, interface): |
58 | """Update just as add lease.""" |
59 | - logging.debug("Adopted old lease or got a change of mac/hostname") |
60 | + LOG.debug(_("Adopted old lease or got a change of mac/hostname")) |
61 | add_lease(mac, ip_address, hostname, interface) |
62 | |
63 | |
64 | def del_lease(mac, ip_address, _hostname, _interface): |
65 | """Called when a lease expires.""" |
66 | if FLAGS.fake_rabbit: |
67 | - logging.debug("releasing ip") |
68 | + LOG.debug(_("releasing ip")) |
69 | network_manager = utils.import_object(FLAGS.network_manager) |
70 | network_manager.release_fixed_ip(context.get_admin_context(), |
71 | mac, |
72 | @@ -100,6 +102,7 @@ |
73 | flagfile = os.environ.get('FLAGFILE', FLAGS.dhcpbridge_flagfile) |
74 | utils.default_flagfile(flagfile) |
75 | argv = FLAGS(sys.argv) |
76 | + logging.basicConfig() |
77 | interface = os.environ.get('DNSMASQ_INTERFACE', 'br0') |
78 | if int(os.environ.get('TESTING', '0')): |
79 | FLAGS.fake_rabbit = True |
80 | @@ -117,9 +120,9 @@ |
81 | mac = argv[2] |
82 | ip = argv[3] |
83 | hostname = argv[4] |
84 | - logging.debug("Called %s for mac %s with ip %s and " |
85 | - "hostname %s on interface %s", |
86 | - action, mac, ip, hostname, interface) |
87 | + LOG.debug(_("Called %s for mac %s with ip %s and " |
88 | + "hostname %s on interface %s"), |
89 | + action, mac, ip, hostname, interface) |
90 | globals()[action + '_lease'](mac, ip, hostname, interface) |
91 | else: |
92 | print init_leases(interface) |
93 | |
94 | === modified file 'bin/nova-instancemonitor' |
95 | --- bin/nova-instancemonitor 2010-12-14 23:22:03 +0000 |
96 | +++ bin/nova-instancemonitor 2011-01-07 19:51:33 +0000 |
97 | @@ -23,7 +23,6 @@ |
98 | |
99 | import gettext |
100 | import os |
101 | -import logging |
102 | import sys |
103 | from twisted.application import service |
104 | |
105 | @@ -37,19 +36,23 @@ |
106 | |
107 | gettext.install('nova', unicode=1) |
108 | |
109 | +from nova import log as logging |
110 | from nova import utils |
111 | from nova import twistd |
112 | from nova.compute import monitor |
113 | |
114 | +# TODO(todd): shouldn't this be done with flags? And what about verbose? |
115 | logging.getLogger('boto').setLevel(logging.WARN) |
116 | |
117 | +LOG = logging.getLogger('nova.instancemonitor') |
118 | + |
119 | |
120 | if __name__ == '__main__': |
121 | utils.default_flagfile() |
122 | twistd.serve(__file__) |
123 | |
124 | if __name__ == '__builtin__': |
125 | - logging.warn('Starting instance monitor') |
126 | + LOG.warn(_('Starting instance monitor')) |
127 | # pylint: disable-msg=C0103 |
128 | monitor = monitor.InstanceMonitor() |
129 | |
130 | |
131 | === added file 'bin/nova-logspool' |
132 | --- bin/nova-logspool 1970-01-01 00:00:00 +0000 |
133 | +++ bin/nova-logspool 2011-01-07 19:51:33 +0000 |
134 | @@ -0,0 +1,156 @@ |
135 | +#!/usr/bin/env python |
136 | + |
137 | +# Copyright 2010 United States Government as represented by the |
138 | +# Administrator of the National Aeronautics and Space Administration. |
139 | +# All Rights Reserved. |
140 | +# |
141 | +# Licensed under the Apache License, Version 2.0 (the "License"); |
142 | +# you may not use this file except in compliance with the License. |
143 | +# You may obtain a copy of the License at |
144 | +# |
145 | +# http://www.apache.org/licenses/LICENSE-2.0 |
146 | +# |
147 | +# Unless required by applicable law or agreed to in writing, software |
148 | +# distributed under the License is distributed on an "AS IS" BASIS, |
149 | +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
150 | +# See the License for the specific language governing permissions and |
151 | +# limitations under the License. |
152 | + |
153 | +""" |
154 | +Tools for working with logs generated by nova components |
155 | +""" |
156 | + |
157 | + |
158 | +import json |
159 | +import os |
160 | +import re |
161 | +import sys |
162 | + |
163 | + |
164 | +class Request(object): |
165 | + def __init__(self): |
166 | + self.time = "" |
167 | + self.host = "" |
168 | + self.logger = "" |
169 | + self.message = "" |
170 | + self.trace = "" |
171 | + self.env = "" |
172 | + self.request_id = "" |
173 | + |
174 | + def add_error_line(self, error_line): |
175 | + self.time = " ".join(error_line.split(" ")[:3]) |
176 | + self.host = error_line.split(" ")[3] |
177 | + self.logger = error_line.split("(")[1].split(" ")[0] |
178 | + self.request_id = error_line.split("[")[1].split(" ")[0] |
179 | + error_lines = error_line.split("#012") |
180 | + self.message = self.clean_log_line(error_lines.pop(0)) |
181 | + self.trace = "\n".join([self.clean_trace(l) for l in error_lines]) |
182 | + |
183 | + def add_environment_line(self, env_line): |
184 | + self.env = self.clean_env_line(env_line) |
185 | + |
186 | + def clean_log_line(self, line): |
187 | + """Remove log format for time, level, etc: split after context""" |
188 | + return line.split('] ')[-1] |
189 | + |
190 | + def clean_env_line(self, line): |
191 | + """Also has an 'Environment: ' string in the message""" |
192 | + return re.sub(r'^Environment: ', '', self.clean_log_line(line)) |
193 | + |
194 | + def clean_trace(self, line): |
195 | + """trace has a different format, so split on TRACE:""" |
196 | + return line.split('TRACE: ')[-1] |
197 | + |
198 | + def to_dict(self): |
199 | + return {'traceback': self.trace, 'message': self.message, |
200 | + 'host': self.host, 'env': self.env, 'logger': self.logger, |
201 | + 'request_id': self.request_id} |
202 | + |
203 | + |
204 | +class LogReader(object): |
205 | + def __init__(self, filename): |
206 | + self.filename = filename |
207 | + self._errors = {} |
208 | + |
209 | + def process(self, spooldir): |
210 | + with open(self.filename) as f: |
211 | + line = f.readline() |
212 | + while len(line) > 0: |
213 | + parts = line.split(" ") |
214 | + level = (len(parts) < 6) or parts[5] |
215 | + if level == 'ERROR': |
216 | + self.handle_logged_error(line) |
217 | + elif level == '[-]' and self.last_error: |
218 | + # twisted stack trace line |
219 | + clean_line = " ".join(line.split(" ")[6:]) |
220 | + self.last_error.trace = self.last_error.trace + clean_line |
221 | + else: |
222 | + self.last_error = None |
223 | + line = f.readline() |
224 | + self.update_spool(spooldir) |
225 | + |
226 | + def handle_logged_error(self, line): |
227 | + request_id = re.search(r' \[([A-Z0-9\-/]+)', line) |
228 | + if not request_id: |
229 | + raise Exception("Unable to parse request id from %s" % line) |
230 | + request_id = request_id.group(1) |
231 | + data = self._errors.get(request_id, Request()) |
232 | + if self.is_env_line(line): |
233 | + data.add_environment_line(line) |
234 | + elif self.is_error_line(line): |
235 | + data.add_error_line(line) |
236 | + else: |
237 | + # possibly error from twsited |
238 | + data.add_error_line(line) |
239 | + self.last_error = data |
240 | + self._errors[request_id] = data |
241 | + |
242 | + def is_env_line(self, line): |
243 | + return re.search('Environment: ', line) |
244 | + |
245 | + def is_error_line(self, line): |
246 | + return re.search('raised', line) |
247 | + |
248 | + def update_spool(self, directory): |
249 | + processed_dir = "%s/processed" % directory |
250 | + self._ensure_dir_exists(processed_dir) |
251 | + for rid, value in self._errors.iteritems(): |
252 | + if not self.has_been_processed(processed_dir, rid): |
253 | + with open("%s/%s" % (directory, rid), "w") as spool: |
254 | + spool.write(json.dumps(value.to_dict())) |
255 | + self.flush_old_processed_spool(processed_dir) |
256 | + |
257 | + def _ensure_dir_exists(self, d): |
258 | + mkdir = False |
259 | + try: |
260 | + os.stat(d) |
261 | + except: |
262 | + mkdir = True |
263 | + if mkdir: |
264 | + os.mkdir(d) |
265 | + |
266 | + def has_been_processed(self, processed_dir, rid): |
267 | + rv = False |
268 | + try: |
269 | + os.stat("%s/%s" % (processed_dir, rid)) |
270 | + rv = True |
271 | + except: |
272 | + pass |
273 | + return rv |
274 | + |
275 | + def flush_old_processed_spool(self, processed_dir): |
276 | + keys = self._errors.keys() |
277 | + procs = os.listdir(processed_dir) |
278 | + for p in procs: |
279 | + if p not in keys: |
280 | + # log has rotated and the old error won't be seen again |
281 | + os.unlink("%s/%s" % (processed_dir, p)) |
282 | + |
283 | +if __name__ == '__main__': |
284 | + filename = '/var/log/nova.log' |
285 | + spooldir = '/var/spool/nova' |
286 | + if len(sys.argv) > 1: |
287 | + filename = sys.argv[1] |
288 | + if len(sys.argv) > 2: |
289 | + spooldir = sys.argv[2] |
290 | + LogReader(filename).process(spooldir) |
291 | |
292 | === modified file 'bin/nova-manage' |
293 | --- bin/nova-manage 2010-12-28 20:11:41 +0000 |
294 | +++ bin/nova-manage 2011-01-07 19:51:33 +0000 |
295 | @@ -55,8 +55,8 @@ |
296 | |
297 | import datetime |
298 | import gettext |
299 | -import logging |
300 | import os |
301 | +import re |
302 | import sys |
303 | import time |
304 | |
305 | @@ -499,6 +499,15 @@ |
306 | db.service_update(ctxt, svc['id'], {'disabled': True}) |
307 | |
308 | |
309 | +class LogCommands(object): |
310 | + def request(self, request_id, logfile='/var/log/nova.log'): |
311 | + """Show all fields in the log for the given request. Assumes you |
312 | + haven't changed the log format too much. |
313 | + ARGS: request_id [logfile]""" |
314 | + lines = utils.execute("cat %s | grep '\[%s '" % (logfile, request_id)) |
315 | + print re.sub('#012', "\n", "\n".join(lines)) |
316 | + |
317 | + |
318 | CATEGORIES = [ |
319 | ('user', UserCommands), |
320 | ('project', ProjectCommands), |
321 | @@ -507,7 +516,8 @@ |
322 | ('vpn', VpnCommands), |
323 | ('floating', FloatingIpCommands), |
324 | ('network', NetworkCommands), |
325 | - ('service', ServiceCommands)] |
326 | + ('service', ServiceCommands), |
327 | + ('log', LogCommands)] |
328 | |
329 | |
330 | def lazy_match(name, key_value_tuples): |
331 | @@ -546,9 +556,6 @@ |
332 | utils.default_flagfile() |
333 | argv = FLAGS(sys.argv) |
334 | |
335 | - if FLAGS.verbose: |
336 | - logging.getLogger().setLevel(logging.DEBUG) |
337 | - |
338 | script_name = argv.pop(0) |
339 | if len(argv) < 1: |
340 | print script_name + " category action [<args>]" |
341 | |
342 | === added file 'bin/nova-spoolsentry' |
343 | --- bin/nova-spoolsentry 1970-01-01 00:00:00 +0000 |
344 | +++ bin/nova-spoolsentry 2011-01-07 19:51:33 +0000 |
345 | @@ -0,0 +1,97 @@ |
346 | +#!/usr/bin/env python |
347 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
348 | + |
349 | +# Copyright 2010 United States Government as represented by the |
350 | +# Administrator of the National Aeronautics and Space Administration. |
351 | +# All Rights Reserved. |
352 | +# |
353 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
354 | +# not use this file except in compliance with the License. You may obtain |
355 | +# a copy of the License at |
356 | +# |
357 | +# http://www.apache.org/licenses/LICENSE-2.0 |
358 | +# |
359 | +# Unless required by applicable law or agreed to in writing, software |
360 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
361 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
362 | +# License for the specific language governing permissions and limitations |
363 | +# under the License. |
364 | + |
365 | + |
366 | +import base64 |
367 | +import json |
368 | +import logging |
369 | +import os |
370 | +import shutil |
371 | +import sys |
372 | +import urllib |
373 | +import urllib2 |
374 | +try: |
375 | + import cPickle as pickle |
376 | +except: |
377 | + import pickle |
378 | + |
379 | + |
380 | +class SpoolSentry(object): |
381 | + def __init__(self, spool_dir, sentry_url, key=None): |
382 | + self.spool_dir = spool_dir |
383 | + self.sentry_url = sentry_url |
384 | + self.key = key |
385 | + |
386 | + def process(self): |
387 | + for fname in os.listdir(self.spool_dir): |
388 | + if fname == "processed": |
389 | + continue |
390 | + try: |
391 | + sourcefile = "%s/%s" % (self.spool_dir, fname) |
392 | + with open(sourcefile) as f: |
393 | + fdata = f.read() |
394 | + data_from_json = json.loads(fdata) |
395 | + data = self.build_data(data_from_json) |
396 | + self.send_data(data) |
397 | + destfile = "%s/processed/%s" % (self.spool_dir, fname) |
398 | + shutil.move(sourcefile, destfile) |
399 | + except: |
400 | + logging.exception("Unable to upload record %s", fname) |
401 | + raise |
402 | + |
403 | + def build_data(self, filejson): |
404 | + env = {'SERVER_NAME': 'unknown', 'SERVER_PORT': '0000', |
405 | + 'SCRIPT_NAME': '/unknown/', 'PATH_INFO': 'unknown'} |
406 | + if filejson['env']: |
407 | + env = json.loads(filejson['env']) |
408 | + url = "http://%s:%s%s%s" % (env['SERVER_NAME'], env['SERVER_PORT'], |
409 | + env['SCRIPT_NAME'], env['PATH_INFO']) |
410 | + rv = {'logger': filejson['logger'], 'level': logging.ERROR, |
411 | + 'server_name': filejson['host'], 'url': url, |
412 | + 'message': filejson['message'], |
413 | + 'traceback': filejson['traceback']} |
414 | + rv['data'] = {} |
415 | + if filejson['env']: |
416 | + rv['data']['META'] = env |
417 | + if filejson['request_id']: |
418 | + rv['data']['request_id'] = filejson['request_id'] |
419 | + return rv |
420 | + |
421 | + def send_data(self, data): |
422 | + data = { |
423 | + 'data': base64.b64encode(pickle.dumps(data).encode('zlib')), |
424 | + 'key': self.key |
425 | + } |
426 | + req = urllib2.Request(self.sentry_url) |
427 | + res = urllib2.urlopen(req, urllib.urlencode(data)) |
428 | + if res.getcode() != 200: |
429 | + raise Exception("Bad HTTP code: %s" % res.getcode()) |
430 | + txt = res.read() |
431 | + |
432 | +if __name__ == '__main__': |
433 | + sentryurl = 'http://127.0.0.1/sentry/store/' |
434 | + key = '' |
435 | + spooldir = '/var/spool/nova' |
436 | + if len(sys.argv) > 1: |
437 | + sentryurl = sys.argv[1] |
438 | + if len(sys.argv) > 2: |
439 | + key = sys.argv[2] |
440 | + if len(sys.argv) > 3: |
441 | + spooldir = sys.argv[3] |
442 | + SpoolSentry(spooldir, sentryurl, key).process() |
443 | |
444 | === modified file 'nova/api/__init__.py' |
445 | --- nova/api/__init__.py 2010-12-28 00:10:26 +0000 |
446 | +++ nova/api/__init__.py 2011-01-07 19:51:33 +0000 |
447 | @@ -24,7 +24,6 @@ |
448 | :ec2api_subdomain: subdomain running the EC2 API (default: ec2) |
449 | |
450 | """ |
451 | -import logging |
452 | |
453 | import routes |
454 | import webob.dec |
455 | |
456 | === modified file 'nova/api/ec2/__init__.py' |
457 | --- nova/api/ec2/__init__.py 2010-12-30 07:20:31 +0000 |
458 | +++ nova/api/ec2/__init__.py 2011-01-07 19:51:33 +0000 |
459 | @@ -20,7 +20,7 @@ |
460 | |
461 | """ |
462 | |
463 | -import logging |
464 | +import datetime |
465 | import routes |
466 | import webob |
467 | import webob.dec |
468 | @@ -29,6 +29,7 @@ |
469 | from nova import context |
470 | from nova import exception |
471 | from nova import flags |
472 | +from nova import log as logging |
473 | from nova import wsgi |
474 | from nova.api.ec2 import apirequest |
475 | from nova.api.ec2 import admin |
476 | @@ -37,6 +38,7 @@ |
477 | |
478 | |
479 | FLAGS = flags.FLAGS |
480 | +LOG = logging.getLogger("nova.api") |
481 | flags.DEFINE_boolean('use_forwarded_for', False, |
482 | 'Treat X-Forwarded-For as the canonical remote address. ' |
483 | 'Only enable this if you have a sanitizing proxy.') |
484 | @@ -52,10 +54,6 @@ |
485 | 'Memcached servers or None for in process cache.') |
486 | |
487 | |
488 | -_log = logging.getLogger("api") |
489 | -_log.setLevel(logging.DEBUG) |
490 | - |
491 | - |
492 | class API(wsgi.Middleware): |
493 | """Routing for all EC2 API requests.""" |
494 | |
495 | @@ -64,6 +62,40 @@ |
496 | if FLAGS.use_lockout: |
497 | self.application = Lockout(self.application) |
498 | |
499 | + @webob.dec.wsgify |
500 | + def __call__(self, req): |
501 | + rv = req.get_response(self.application) |
502 | + self.log_request_completion(rv, req) |
503 | + return rv |
504 | + |
505 | + def log_request_completion(self, response, request): |
506 | + controller = request.environ.get('ec2.controller', None) |
507 | + if controller: |
508 | + controller = controller.__class__.__name__ |
509 | + action = request.environ.get('ec2.action', None) |
510 | + ctxt = request.environ.get('ec2.context', None) |
511 | + seconds = 'X' |
512 | + microseconds = 'X' |
513 | + if ctxt: |
514 | + delta = datetime.datetime.utcnow() - \ |
515 | + ctxt.timestamp |
516 | + seconds = delta.seconds |
517 | + microseconds = delta.microseconds |
518 | + LOG.info( |
519 | + "%s.%ss %s %s %s %s:%s %s [%s] %s %s", |
520 | + seconds, |
521 | + microseconds, |
522 | + request.remote_addr, |
523 | + request.method, |
524 | + request.path_info, |
525 | + controller, |
526 | + action, |
527 | + response.status_int, |
528 | + request.user_agent, |
529 | + request.content_type, |
530 | + response.content_type, |
531 | + context=ctxt) |
532 | + |
533 | |
534 | class Lockout(wsgi.Middleware): |
535 | """Lockout for x minutes on y failed auths in a z minute period. |
536 | @@ -98,7 +130,7 @@ |
537 | failures_key = "authfailures-%s" % access_key |
538 | failures = int(self.mc.get(failures_key) or 0) |
539 | if failures >= FLAGS.lockout_attempts: |
540 | - detail = "Too many failed authentications." |
541 | + detail = _("Too many failed authentications.") |
542 | raise webob.exc.HTTPForbidden(detail=detail) |
543 | res = req.get_response(self.application) |
544 | if res.status_int == 403: |
545 | @@ -107,9 +139,9 @@ |
546 | # NOTE(vish): To use incr, failures has to be a string. |
547 | self.mc.set(failures_key, '1', time=FLAGS.lockout_window * 60) |
548 | elif failures >= FLAGS.lockout_attempts: |
549 | - _log.warn('Access key %s has had %d failed authentications' |
550 | - ' and will be locked out for %d minutes.' % |
551 | - (access_key, failures, FLAGS.lockout_minutes)) |
552 | + LOG.warn(_('Access key %s has had %d failed authentications' |
553 | + ' and will be locked out for %d minutes.'), |
554 | + access_key, failures, FLAGS.lockout_minutes) |
555 | self.mc.set(failures_key, str(failures), |
556 | time=FLAGS.lockout_minutes * 60) |
557 | return res |
558 | @@ -142,8 +174,9 @@ |
559 | req.method, |
560 | req.host, |
561 | req.path) |
562 | - except exception.Error, ex: |
563 | - logging.debug(_("Authentication Failure: %s") % ex) |
564 | + # Be explicit for what exceptions are 403, the rest bubble as 500 |
565 | + except (exception.NotFound, exception.NotAuthorized) as ex: |
566 | + LOG.audit(_("Authentication Failure: %s"), str(ex)) |
567 | raise webob.exc.HTTPForbidden() |
568 | |
569 | # Authenticated! |
570 | @@ -154,6 +187,8 @@ |
571 | project=project, |
572 | remote_address=remote_address) |
573 | req.environ['ec2.context'] = ctxt |
574 | + LOG.audit(_('Authenticated Request For %s:%s)'), user.name, |
575 | + project.name, context=req.environ['ec2.context']) |
576 | return self.application |
577 | |
578 | |
579 | @@ -189,9 +224,9 @@ |
580 | except: |
581 | raise webob.exc.HTTPBadRequest() |
582 | |
583 | - _log.debug(_('action: %s') % action) |
584 | + LOG.debug(_('action: %s'), action) |
585 | for key, value in args.items(): |
586 | - _log.debug(_('arg: %s\t\tval: %s') % (key, value)) |
587 | + LOG.debug(_('arg: %s\t\tval: %s'), key, value) |
588 | |
589 | # Success! |
590 | req.environ['ec2.controller'] = controller |
591 | @@ -263,6 +298,9 @@ |
592 | if self._matches_any_role(context, allowed_roles): |
593 | return self.application |
594 | else: |
595 | + LOG.audit(_("Unauthorized request for controller=%s " |
596 | + "and action=%s"), controller_name, action, |
597 | + context=context) |
598 | raise webob.exc.HTTPUnauthorized() |
599 | |
600 | def _matches_any_role(self, context, roles): |
601 | @@ -297,15 +335,24 @@ |
602 | result = None |
603 | try: |
604 | result = api_request.send(context, **args) |
605 | + except exception.NotFound as ex: |
606 | + LOG.info(_('NotFound raised: %s'), str(ex), context=context) |
607 | + return self._error(req, context, type(ex).__name__, str(ex)) |
608 | except exception.ApiError as ex: |
609 | - |
610 | + LOG.exception(_('ApiError raised: %s'), str(ex), context=context) |
611 | if ex.code: |
612 | - return self._error(req, ex.code, ex.message) |
613 | + return self._error(req, context, ex.code, str(ex)) |
614 | else: |
615 | - return self._error(req, type(ex).__name__, ex.message) |
616 | - # TODO(vish): do something more useful with unknown exceptions |
617 | + return self._error(req, context, type(ex).__name__, str(ex)) |
618 | except Exception as ex: |
619 | - return self._error(req, type(ex).__name__, str(ex)) |
620 | + extra = {'environment': req.environ} |
621 | + LOG.exception(_('Unexpected error raised: %s'), str(ex), |
622 | + extra=extra, context=context) |
623 | + return self._error(req, |
624 | + context, |
625 | + 'UnknownError', |
626 | + _('An unknown error has occurred. ' |
627 | + 'Please try your request again.')) |
628 | else: |
629 | resp = webob.Response() |
630 | resp.status = 200 |
631 | @@ -313,15 +360,16 @@ |
632 | resp.body = str(result) |
633 | return resp |
634 | |
635 | - def _error(self, req, code, message): |
636 | - logging.error("%s: %s", code, message) |
637 | + def _error(self, req, context, code, message): |
638 | + LOG.error("%s: %s", code, message, context=context) |
639 | resp = webob.Response() |
640 | resp.status = 400 |
641 | resp.headers['Content-Type'] = 'text/xml' |
642 | resp.body = str('<?xml version="1.0"?>\n' |
643 | - '<Response><Errors><Error><Code>%s</Code>' |
644 | - '<Message>%s</Message></Error></Errors>' |
645 | - '<RequestID>?</RequestID></Response>' % (code, message)) |
646 | + '<Response><Errors><Error><Code>%s</Code>' |
647 | + '<Message>%s</Message></Error></Errors>' |
648 | + '<RequestID>%s</RequestID></Response>' % |
649 | + (code, message, context.request_id)) |
650 | return resp |
651 | |
652 | |
653 | |
654 | === modified file 'nova/api/ec2/admin.py' |
655 | --- nova/api/ec2/admin.py 2010-12-22 20:59:53 +0000 |
656 | +++ nova/api/ec2/admin.py 2011-01-07 19:51:33 +0000 |
657 | @@ -24,9 +24,13 @@ |
658 | |
659 | from nova import db |
660 | from nova import exception |
661 | +from nova import log as logging |
662 | from nova.auth import manager |
663 | |
664 | |
665 | +LOG = logging.getLogger('nova.api.ec2.admin') |
666 | + |
667 | + |
668 | def user_dict(user, base64_file=None): |
669 | """Convert the user object to a result dict""" |
670 | if user: |
671 | @@ -75,17 +79,18 @@ |
672 | return {'userSet': |
673 | [user_dict(u) for u in manager.AuthManager().get_users()]} |
674 | |
675 | - def register_user(self, _context, name, **_kwargs): |
676 | + def register_user(self, context, name, **_kwargs): |
677 | """Creates a new user, and returns generated credentials.""" |
678 | + LOG.audit(_("Creating new user: %s"), name, context=context) |
679 | return user_dict(manager.AuthManager().create_user(name)) |
680 | |
681 | - def deregister_user(self, _context, name, **_kwargs): |
682 | + def deregister_user(self, context, name, **_kwargs): |
683 | """Deletes a single user (NOT undoable.) |
684 | Should throw an exception if the user has instances, |
685 | volumes, or buckets remaining. |
686 | """ |
687 | + LOG.audit(_("Deleting user: %s"), name, context=context) |
688 | manager.AuthManager().delete_user(name) |
689 | - |
690 | return True |
691 | |
692 | def describe_roles(self, context, project_roles=True, **kwargs): |
693 | @@ -105,15 +110,27 @@ |
694 | operation='add', **kwargs): |
695 | """Add or remove a role for a user and project.""" |
696 | if operation == 'add': |
697 | + if project: |
698 | + LOG.audit(_("Adding role %s to user %s for project %s"), role, |
699 | + user, project, context=context) |
700 | + else: |
701 | + LOG.audit(_("Adding sitewide role %s to user %s"), role, user, |
702 | + context=context) |
703 | manager.AuthManager().add_role(user, role, project) |
704 | elif operation == 'remove': |
705 | + if project: |
706 | + LOG.audit(_("Removing role %s from user %s for project %s"), |
707 | + role, user, project, context=context) |
708 | + else: |
709 | + LOG.audit(_("Removing sitewide role %s from user %s"), role, |
710 | + user, context=context) |
711 | manager.AuthManager().remove_role(user, role, project) |
712 | else: |
713 | - raise exception.ApiError('operation must be add or remove') |
714 | + raise exception.ApiError(_('operation must be add or remove')) |
715 | |
716 | return True |
717 | |
718 | - def generate_x509_for_user(self, _context, name, project=None, **kwargs): |
719 | + def generate_x509_for_user(self, context, name, project=None, **kwargs): |
720 | """Generates and returns an x509 certificate for a single user. |
721 | Is usually called from a client that will wrap this with |
722 | access and secret key info, and return a zip file. |
723 | @@ -122,6 +139,8 @@ |
724 | project = name |
725 | project = manager.AuthManager().get_project(project) |
726 | user = manager.AuthManager().get_user(name) |
727 | + LOG.audit(_("Getting x509 for user: %s on project: %s"), name, |
728 | + project, context=context) |
729 | return user_dict(user, base64.b64encode(project.get_credentials(user))) |
730 | |
731 | def describe_project(self, context, name, **kwargs): |
732 | @@ -137,6 +156,8 @@ |
733 | def register_project(self, context, name, manager_user, description=None, |
734 | member_users=None, **kwargs): |
735 | """Creates a new project""" |
736 | + LOG.audit(_("Create project %s managed by %s"), name, manager_user, |
737 | + context=context) |
738 | return project_dict( |
739 | manager.AuthManager().create_project( |
740 | name, |
741 | @@ -146,6 +167,7 @@ |
742 | |
743 | def deregister_project(self, context, name): |
744 | """Permanently deletes a project.""" |
745 | + LOG.audit(_("Delete project: %s"), name, context=context) |
746 | manager.AuthManager().delete_project(name) |
747 | return True |
748 | |
749 | @@ -159,11 +181,15 @@ |
750 | **kwargs): |
751 | """Add or remove a user from a project.""" |
752 | if operation == 'add': |
753 | + LOG.audit(_("Adding user %s to project %s"), user, project, |
754 | + context=context) |
755 | manager.AuthManager().add_to_project(user, project) |
756 | elif operation == 'remove': |
757 | + LOG.audit(_("Removing user %s from project %s"), user, project, |
758 | + context=context) |
759 | manager.AuthManager().remove_from_project(user, project) |
760 | else: |
761 | - raise exception.ApiError('operation must be add or remove') |
762 | + raise exception.ApiError(_('operation must be add or remove')) |
763 | return True |
764 | |
765 | # FIXME(vish): these host commands don't work yet, perhaps some of the |
766 | |
767 | === modified file 'nova/api/ec2/apirequest.py' |
768 | --- nova/api/ec2/apirequest.py 2010-12-22 20:59:53 +0000 |
769 | +++ nova/api/ec2/apirequest.py 2011-01-07 19:51:33 +0000 |
770 | @@ -20,13 +20,13 @@ |
771 | APIRequest class |
772 | """ |
773 | |
774 | -import logging |
775 | import re |
776 | # TODO(termie): replace minidom with etree |
777 | from xml.dom import minidom |
778 | |
779 | -_log = logging.getLogger("api") |
780 | -_log.setLevel(logging.DEBUG) |
781 | +from nova import log as logging |
782 | + |
783 | +LOG = logging.getLogger("nova.api.request") |
784 | |
785 | |
786 | _c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))') |
787 | @@ -94,7 +94,7 @@ |
788 | except AttributeError: |
789 | _error = _('Unsupported API request: controller = %s,' |
790 | 'action = %s') % (self.controller, self.action) |
791 | - _log.warning(_error) |
792 | + LOG.exception(_error) |
793 | # TODO: Raise custom exception, trap in apiserver, |
794 | # and reraise as 400 error. |
795 | raise Exception(_error) |
796 | @@ -142,7 +142,7 @@ |
797 | |
798 | response = xml.toxml() |
799 | xml.unlink() |
800 | - _log.debug(response) |
801 | + LOG.debug(response) |
802 | return response |
803 | |
804 | def _render_dict(self, xml, el, data): |
805 | @@ -151,7 +151,7 @@ |
806 | val = data[key] |
807 | el.appendChild(self._render_data(xml, key, val)) |
808 | except: |
809 | - _log.debug(data) |
810 | + LOG.debug(data) |
811 | raise |
812 | |
813 | def _render_data(self, xml, el_name, data): |
814 | |
815 | === modified file 'nova/api/ec2/cloud.py' |
816 | --- nova/api/ec2/cloud.py 2011-01-06 18:35:18 +0000 |
817 | +++ nova/api/ec2/cloud.py 2011-01-07 19:51:33 +0000 |
818 | @@ -24,18 +24,18 @@ |
819 | |
820 | import base64 |
821 | import datetime |
822 | -import logging |
823 | +import IPy |
824 | import re |
825 | import os |
826 | |
827 | +from nova import compute |
828 | from nova import context |
829 | -import IPy |
830 | - |
831 | -from nova import compute |
832 | from nova import crypto |
833 | from nova import db |
834 | from nova import exception |
835 | from nova import flags |
836 | +from nova import log as logging |
837 | +from nova import quota |
838 | from nova import network |
839 | from nova import rpc |
840 | from nova import utils |
841 | @@ -45,6 +45,8 @@ |
842 | |
843 | FLAGS = flags.FLAGS |
844 | |
845 | +LOG = logging.getLogger("nova.api.cloud") |
846 | + |
847 | InvalidInputException = exception.InvalidInputException |
848 | |
849 | |
850 | @@ -282,6 +284,7 @@ |
851 | return {'keypairsSet': result} |
852 | |
853 | def create_key_pair(self, context, key_name, **kwargs): |
854 | + LOG.audit(_("Create key pair %s"), key_name, context=context) |
855 | data = _gen_key(context, context.user.id, key_name) |
856 | return {'keyName': key_name, |
857 | 'keyFingerprint': data['fingerprint'], |
858 | @@ -289,6 +292,7 @@ |
859 | # TODO(vish): when context is no longer an object, pass it here |
860 | |
861 | def delete_key_pair(self, context, key_name, **kwargs): |
862 | + LOG.audit(_("Delete key pair %s"), key_name, context=context) |
863 | try: |
864 | db.key_pair_destroy(context, context.user.id, key_name) |
865 | except exception.NotFound: |
866 | @@ -395,6 +399,8 @@ |
867 | return False |
868 | |
869 | def revoke_security_group_ingress(self, context, group_name, **kwargs): |
870 | + LOG.audit(_("Revoke security group ingress %s"), group_name, |
871 | + context=context) |
872 | self.compute_api.ensure_default_security_group(context) |
873 | security_group = db.security_group_get_by_name(context, |
874 | context.project_id, |
875 | @@ -421,6 +427,8 @@ |
876 | # for these operations, so support for newer API versions |
877 | # is sketchy. |
878 | def authorize_security_group_ingress(self, context, group_name, **kwargs): |
879 | + LOG.audit(_("Authorize security group ingress %s"), group_name, |
880 | + context=context) |
881 | self.compute_api.ensure_default_security_group(context) |
882 | security_group = db.security_group_get_by_name(context, |
883 | context.project_id, |
884 | @@ -457,6 +465,7 @@ |
885 | return source_project_id |
886 | |
887 | def create_security_group(self, context, group_name, group_description): |
888 | + LOG.audit(_("Create Security Group %s"), group_name, context=context) |
889 | self.compute_api.ensure_default_security_group(context) |
890 | if db.security_group_exists(context, context.project_id, group_name): |
891 | raise exception.ApiError(_('group %s already exists') % group_name) |
892 | @@ -471,6 +480,7 @@ |
893 | group_ref)]} |
894 | |
895 | def delete_security_group(self, context, group_name, **kwargs): |
896 | + LOG.audit(_("Delete security group %s"), group_name, context=context) |
897 | security_group = db.security_group_get_by_name(context, |
898 | context.project_id, |
899 | group_name) |
900 | @@ -478,6 +488,8 @@ |
901 | return True |
902 | |
903 | def get_console_output(self, context, instance_id, **kwargs): |
904 | + LOG.audit(_("Get console output for instance %s"), instance_id, |
905 | + context=context) |
906 | # instance_id is passed in as a list of instances |
907 | ec2_id = instance_id[0] |
908 | instance_id = ec2_id_to_id(ec2_id) |
909 | @@ -536,6 +548,7 @@ |
910 | return v |
911 | |
912 | def create_volume(self, context, size, **kwargs): |
913 | + LOG.audit(_("Create volume of %s GB"), size, context=context) |
914 | volume = self.volume_api.create(context, size, |
915 | kwargs.get('display_name'), |
916 | kwargs.get('display_description')) |
917 | @@ -559,6 +572,8 @@ |
918 | return True |
919 | |
920 | def attach_volume(self, context, volume_id, instance_id, device, **kwargs): |
921 | + LOG.audit(_("Attach volume %s to instance %s at %s"), volume_id, |
922 | + instance_id, device, context=context) |
923 | self.compute_api.attach_volume(context, instance_id, volume_id, device) |
924 | volume = self.volume_api.get(context, volume_id) |
925 | return {'attachTime': volume['attach_time'], |
926 | @@ -569,6 +584,7 @@ |
927 | 'volumeId': volume_id} |
928 | |
929 | def detach_volume(self, context, volume_id, **kwargs): |
930 | + LOG.audit(_("Detach volume %s"), volume_id, context=context) |
931 | volume = self.volume_api.get(context, volume_id) |
932 | instance = self.compute_api.detach_volume(context, volume_id) |
933 | return {'attachTime': volume['attach_time'], |
934 | @@ -670,19 +686,24 @@ |
935 | return {'addressesSet': addresses} |
936 | |
937 | def allocate_address(self, context, **kwargs): |
938 | + LOG.audit(_("Allocate address"), context=context) |
939 | public_ip = self.network_api.allocate_floating_ip(context) |
940 | return {'addressSet': [{'publicIp': public_ip}]} |
941 | |
942 | def release_address(self, context, public_ip, **kwargs): |
943 | + LOG.audit(_("Release address %s"), public_ip, context=context) |
944 | self.network_api.release_floating_ip(context, public_ip) |
945 | return {'releaseResponse': ["Address released."]} |
946 | |
947 | def associate_address(self, context, instance_id, public_ip, **kwargs): |
948 | + LOG.audit(_("Associate address %s to instance %s"), public_ip, |
949 | + instance_id, context=context) |
950 | instance_id = ec2_id_to_id(instance_id) |
951 | self.compute_api.associate_floating_ip(context, instance_id, public_ip) |
952 | return {'associateResponse': ["Address associated."]} |
953 | |
954 | def disassociate_address(self, context, public_ip, **kwargs): |
955 | + LOG.audit(_("Disassociate address %s"), public_ip, context=context) |
956 | self.network_api.disassociate_floating_ip(context, public_ip) |
957 | return {'disassociateResponse': ["Address disassociated."]} |
958 | |
959 | @@ -709,7 +730,7 @@ |
960 | def terminate_instances(self, context, instance_id, **kwargs): |
961 | """Terminate each instance in instance_id, which is a list of ec2 ids. |
962 | instance_id is a kwarg so its name cannot be modified.""" |
963 | - logging.debug("Going to start terminating instances") |
964 | + LOG.debug(_("Going to start terminating instances")) |
965 | for ec2_id in instance_id: |
966 | instance_id = ec2_id_to_id(ec2_id) |
967 | self.compute_api.delete(context, instance_id) |
968 | @@ -717,6 +738,7 @@ |
969 | |
970 | def reboot_instances(self, context, instance_id, **kwargs): |
971 | """instance_id is a list of instance ids""" |
972 | + LOG.audit(_("Reboot instance %r"), instance_id, context=context) |
973 | for ec2_id in instance_id: |
974 | instance_id = ec2_id_to_id(ec2_id) |
975 | self.compute_api.reboot(context, instance_id) |
976 | @@ -753,6 +775,7 @@ |
977 | return {'imagesSet': images} |
978 | |
979 | def deregister_image(self, context, image_id, **kwargs): |
980 | + LOG.audit(_("De-registering image %s"), image_id, context=context) |
981 | self.image_service.deregister(context, image_id) |
982 | return {'imageId': image_id} |
983 | |
984 | @@ -760,7 +783,8 @@ |
985 | if image_location is None and 'name' in kwargs: |
986 | image_location = kwargs['name'] |
987 | image_id = self.image_service.register(context, image_location) |
988 | - logging.debug("Registered %s as %s" % (image_location, image_id)) |
989 | + LOG.audit(_("Registered image %s with id %s"), image_location, |
990 | + image_id, context=context) |
991 | return {'imageId': image_id} |
992 | |
993 | def describe_image_attribute(self, context, image_id, attribute, **kwargs): |
994 | @@ -788,6 +812,7 @@ |
995 | raise exception.ApiError(_('only group "all" is supported')) |
996 | if not operation_type in ['add', 'remove']: |
997 | raise exception.ApiError(_('operation_type must be add or remove')) |
998 | + LOG.audit(_("Updating image %s publicity"), image_id, context=context) |
999 | return self.image_service.modify(context, image_id, operation_type) |
1000 | |
1001 | def update_image(self, context, image_id, **kwargs): |
1002 | |
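Several cloud.py hunks above call `ec2_id_to_id()` to map an EC2-style identifier to the integer database id before handing off to `compute_api`. An illustrative stand-in for that helper (the hex radix and prefix handling here are assumptions of the sketch, not necessarily what nova implements):

```python
def ec2_id_to_id(ec2_id):
    """Strip the 'i-'/'vol-' style prefix from an EC2 identifier and
    parse the remaining suffix as a hex number (radix assumed)."""
    return int(ec2_id.rsplit('-', 1)[-1], 16)
```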
1003 | === modified file 'nova/api/ec2/metadatarequesthandler.py' |
1004 | --- nova/api/ec2/metadatarequesthandler.py 2010-12-30 07:20:31 +0000 |
1005 | +++ nova/api/ec2/metadatarequesthandler.py 2011-01-07 19:51:33 +0000 |
1006 | @@ -18,15 +18,15 @@ |
1007 | |
1008 | """Metadata request handler.""" |
1009 | |
1010 | -import logging |
1011 | - |
1012 | import webob.dec |
1013 | import webob.exc |
1014 | |
1015 | +from nova import log as logging |
1016 | from nova import flags |
1017 | from nova.api.ec2 import cloud |
1018 | |
1019 | |
1020 | +LOG = logging.getLogger('nova.api.ec2.metadata') |
1021 | FLAGS = flags.FLAGS |
1022 | |
1023 | |
1024 | @@ -72,8 +72,7 @@ |
1025 | remote_address = req.headers.get('X-Forwarded-For', remote_address) |
1026 | meta_data = cc.get_metadata(remote_address) |
1027 | if meta_data is None: |
1028 | - logging.error(_('Failed to get metadata for ip: %s') % |
1029 | - remote_address) |
1030 | + LOG.error(_('Failed to get metadata for ip: %s'), remote_address) |
1031 | raise webob.exc.HTTPNotFound() |
1032 | data = self.lookup(req.path_info, meta_data) |
1033 | if data is None: |
1034 | |
1035 | === modified file 'nova/api/openstack/__init__.py' |
1036 | --- nova/api/openstack/__init__.py 2011-01-06 22:35:48 +0000 |
1037 | +++ nova/api/openstack/__init__.py 2011-01-07 19:51:33 +0000 |
1038 | @@ -20,28 +20,24 @@ |
1039 | WSGI middleware for OpenStack API controllers. |
1040 | """ |
1041 | |
1042 | -import time |
1043 | - |
1044 | -import logging |
1045 | import routes |
1046 | -import traceback |
1047 | import webob.dec |
1048 | import webob.exc |
1049 | import webob |
1050 | |
1051 | -from nova import context |
1052 | from nova import flags |
1053 | +from nova import log as logging |
1054 | from nova import utils |
1055 | from nova import wsgi |
1056 | from nova.api.openstack import faults |
1057 | from nova.api.openstack import backup_schedules |
1058 | from nova.api.openstack import flavors |
1059 | from nova.api.openstack import images |
1060 | -from nova.api.openstack import ratelimiting |
1061 | from nova.api.openstack import servers |
1062 | from nova.api.openstack import sharedipgroups |
1063 | |
1064 | |
1065 | +LOG = logging.getLogger('nova.api.openstack') |
1066 | FLAGS = flags.FLAGS |
1067 | flags.DEFINE_string('os_api_auth', |
1068 | 'nova.api.openstack.auth.AuthMiddleware', |
1069 | @@ -71,8 +67,7 @@ |
1070 | try: |
1071 | return req.get_response(self.application) |
1072 | except Exception as ex: |
1073 | - logging.warn(_("Caught error: %s") % str(ex)) |
1074 | - logging.error(traceback.format_exc()) |
1075 | + LOG.exception(_("Caught error: %s"), str(ex)) |
1076 | exc = webob.exc.HTTPInternalServerError(explanation=str(ex)) |
1077 | return faults.Fault(exc) |
1078 | |
1079 | @@ -88,7 +83,7 @@ |
1080 | |
1081 | server_members = {'action': 'POST'} |
1082 | if FLAGS.allow_admin_api: |
1083 | - logging.debug("Including admin operations in API.") |
1084 | + LOG.debug(_("Including admin operations in API.")) |
1085 | server_members['pause'] = 'POST' |
1086 | server_members['unpause'] = 'POST' |
1087 | server_members["diagnostics"] = "GET" |
1088 | |
1089 | === modified file 'nova/api/openstack/servers.py' |
1090 | --- nova/api/openstack/servers.py 2011-01-07 01:19:22 +0000 |
1091 | +++ nova/api/openstack/servers.py 2011-01-07 19:51:33 +0000 |
1092 | @@ -15,13 +15,13 @@ |
1093 | # License for the specific language governing permissions and limitations |
1094 | # under the License. |
1095 | |
1096 | -import logging |
1097 | import traceback |
1098 | |
1099 | from webob import exc |
1100 | |
1101 | from nova import compute |
1102 | from nova import exception |
1103 | +from nova import log as logging |
1104 | from nova import wsgi |
1105 | from nova.api.openstack import common |
1106 | from nova.api.openstack import faults |
1107 | @@ -181,7 +181,7 @@ |
1108 | self.compute_api.lock(context, id) |
1109 | except: |
1110 | readable = traceback.format_exc() |
1111 | - logging.error(_("Compute.api::lock %s"), readable) |
1112 | + LOG.exception(_("Compute.api::lock %s"), readable) |
1113 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1114 | return exc.HTTPAccepted() |
1115 | |
1116 | @@ -196,7 +196,7 @@ |
1117 | self.compute_api.unlock(context, id) |
1118 | except: |
1119 | readable = traceback.format_exc() |
1120 | - logging.error(_("Compute.api::unlock %s"), readable) |
1121 | + LOG.exception(_("Compute.api::unlock %s"), readable) |
1122 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1123 | return exc.HTTPAccepted() |
1124 | |
1125 | @@ -210,7 +210,7 @@ |
1126 | self.compute_api.get_lock(context, id) |
1127 | except: |
1128 | readable = traceback.format_exc() |
1129 | - logging.error(_("Compute.api::get_lock %s"), readable) |
1130 | + LOG.exception(_("Compute.api::get_lock %s"), readable) |
1131 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1132 | return exc.HTTPAccepted() |
1133 | |
1134 | @@ -221,7 +221,7 @@ |
1135 | self.compute_api.pause(ctxt, id) |
1136 | except: |
1137 | readable = traceback.format_exc() |
1138 | - logging.error(_("Compute.api::pause %s"), readable) |
1139 | + LOG.exception(_("Compute.api::pause %s"), readable) |
1140 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1141 | return exc.HTTPAccepted() |
1142 | |
1143 | @@ -232,7 +232,7 @@ |
1144 | self.compute_api.unpause(ctxt, id) |
1145 | except: |
1146 | readable = traceback.format_exc() |
1147 | - logging.error(_("Compute.api::unpause %s"), readable) |
1148 | + LOG.exception(_("Compute.api::unpause %s"), readable) |
1149 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1150 | return exc.HTTPAccepted() |
1151 | |
1152 | @@ -243,7 +243,7 @@ |
1153 | self.compute_api.suspend(context, id) |
1154 | except: |
1155 | readable = traceback.format_exc() |
1156 | - logging.error(_("compute.api::suspend %s"), readable) |
1157 | + LOG.exception(_("compute.api::suspend %s"), readable) |
1158 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1159 | return exc.HTTPAccepted() |
1160 | |
1161 | @@ -254,7 +254,7 @@ |
1162 | self.compute_api.resume(context, id) |
1163 | except: |
1164 | readable = traceback.format_exc() |
1165 | - logging.error(_("compute.api::resume %s"), readable) |
1166 | + LOG.exception(_("compute.api::resume %s"), readable) |
1167 | return faults.Fault(exc.HTTPUnprocessableEntity()) |
1168 | return exc.HTTPAccepted() |
1169 | |
1170 | |
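The servers.py hunks replace `logging.error(...)` on a pre-formatted `traceback.format_exc()` string with `LOG.exception(...)`, which records the active traceback itself. A toy handler (names hypothetical) showing the pattern:

```python
import logging

logging.basicConfig(level=logging.ERROR)
LOG = logging.getLogger('nova.api.openstack.servers.sketch')


def pause_server(server_id):
    """Catch a backend failure and log it. Logger.exception() logs at
    ERROR level and appends the current traceback on its own, so no
    manual traceback.format_exc() call is needed."""
    try:
        raise RuntimeError('compute backend unavailable')
    except Exception:
        LOG.exception('Compute.api::pause %s', server_id)
        return 'fault'
```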
1171 | === modified file 'nova/auth/dbdriver.py' |
1172 | --- nova/auth/dbdriver.py 2010-12-22 20:59:53 +0000 |
1173 | +++ nova/auth/dbdriver.py 2011-01-07 19:51:33 +0000 |
1174 | @@ -20,7 +20,6 @@ |
1175 | Auth driver using the DB as its backend. |
1176 | """ |
1177 | |
1178 | -import logging |
1179 | import sys |
1180 | |
1181 | from nova import context |
1182 | |
1183 | === modified file 'nova/auth/ldapdriver.py' |
1184 | --- nova/auth/ldapdriver.py 2010-12-22 23:47:31 +0000 |
1185 | +++ nova/auth/ldapdriver.py 2011-01-07 19:51:33 +0000 |
1186 | @@ -24,11 +24,11 @@ |
1187 | public methods. |
1188 | """ |
1189 | |
1190 | -import logging |
1191 | import sys |
1192 | |
1193 | from nova import exception |
1194 | from nova import flags |
1195 | +from nova import log as logging |
1196 | |
1197 | |
1198 | FLAGS = flags.FLAGS |
1199 | @@ -65,6 +65,8 @@ |
1200 | flags.DEFINE_string('ldap_developer', |
1201 | 'cn=developers,ou=Groups,dc=example,dc=com', 'cn for Developers') |
1202 | |
1203 | +LOG = logging.getLogger("nova.ldapdriver") |
1204 | + |
1205 | |
1206 | # TODO(vish): make an abstract base class with the same public methods |
1207 | # to define a set interface for AuthDrivers. I'm delaying |
1208 | @@ -502,8 +504,8 @@ |
1209 | try: |
1210 | self.conn.modify_s(group_dn, attr) |
1211 | except self.ldap.OBJECT_CLASS_VIOLATION: |
1212 | - logging.debug(_("Attempted to remove the last member of a group. " |
1213 | - "Deleting the group at %s instead."), group_dn) |
1214 | + LOG.debug(_("Attempted to remove the last member of a group. " |
1215 | + "Deleting the group at %s instead."), group_dn) |
1216 | self.__delete_group(group_dn) |
1217 | |
1218 | def __remove_from_all(self, uid): |
1219 | |
1220 | === modified file 'nova/auth/manager.py' |
1221 | --- nova/auth/manager.py 2010-12-22 21:17:30 +0000 |
1222 | +++ nova/auth/manager.py 2011-01-07 19:51:33 +0000 |
1223 | @@ -20,7 +20,6 @@ |
1224 | Nova authentication management |
1225 | """ |
1226 | |
1227 | -import logging |
1228 | import os |
1229 | import shutil |
1230 | import string # pylint: disable-msg=W0402 |
1231 | @@ -33,6 +32,7 @@ |
1232 | from nova import db |
1233 | from nova import exception |
1234 | from nova import flags |
1235 | +from nova import log as logging |
1236 | from nova import utils |
1237 | from nova.auth import signer |
1238 | |
1239 | @@ -70,6 +70,8 @@ |
1240 | flags.DEFINE_string('auth_driver', 'nova.auth.dbdriver.DbDriver', |
1241 | 'Driver that auth manager uses') |
1242 | |
1243 | +LOG = logging.getLogger('nova.auth.manager') |
1244 | + |
1245 | |
1246 | class AuthBase(object): |
1247 | """Base class for objects relating to auth |
1248 | @@ -254,43 +256,51 @@ |
1249 | # TODO(vish): check for valid timestamp |
1250 | (access_key, _sep, project_id) = access.partition(':') |
1251 | |
1252 | - logging.info(_('Looking up user: %r'), access_key) |
1253 | + LOG.debug(_('Looking up user: %r'), access_key) |
1254 | user = self.get_user_from_access_key(access_key) |
1255 | - logging.info('user: %r', user) |
1256 | + LOG.debug('user: %r', user) |
1257 | if user == None: |
1258 | + LOG.audit(_("Failed authorization for access key %s"), access_key) |
1259 | raise exception.NotFound(_('No user found for access key %s') |
1260 | % access_key) |
1261 | |
1262 | # NOTE(vish): if we stop using project name as id we need better |
1263 | # logic to find a default project for user |
1264 | if project_id == '': |
1265 | + LOG.debug(_("Using project name = user name (%s)"), user.name) |
1266 | project_id = user.name |
1267 | |
1268 | project = self.get_project(project_id) |
1269 | if project == None: |
1270 | + LOG.audit(_("failed authorization: no project named %s (user=%s)"), |
1271 | + project_id, user.name) |
1272 | raise exception.NotFound(_('No project called %s could be found') |
1273 | % project_id) |
1274 | if not self.is_admin(user) and not self.is_project_member(user, |
1275 | project): |
1276 | + LOG.audit(_("Failed authorization: user %s not admin and not " |
1277 | + "member of project %s"), user.name, project.name) |
1278 | raise exception.NotFound(_('User %s is not a member of project %s') |
1279 | % (user.id, project.id)) |
1280 | if check_type == 's3': |
1281 | sign = signer.Signer(user.secret.encode()) |
1282 | expected_signature = sign.s3_authorization(headers, verb, path) |
1283 | - logging.debug('user.secret: %s', user.secret) |
1284 | - logging.debug('expected_signature: %s', expected_signature) |
1285 | - logging.debug('signature: %s', signature) |
1286 | + LOG.debug('user.secret: %s', user.secret) |
1287 | + LOG.debug('expected_signature: %s', expected_signature) |
1288 | + LOG.debug('signature: %s', signature) |
1289 | if signature != expected_signature: |
1290 | + LOG.audit(_("Invalid signature for user %s"), user.name) |
1291 | raise exception.NotAuthorized(_('Signature does not match')) |
1292 | elif check_type == 'ec2': |
1293 | # NOTE(vish): hmac can't handle unicode, so encode ensures that |
1294 | # secret isn't unicode |
1295 | expected_signature = signer.Signer(user.secret.encode()).generate( |
1296 | params, verb, server_string, path) |
1297 | - logging.debug('user.secret: %s', user.secret) |
1298 | - logging.debug('expected_signature: %s', expected_signature) |
1299 | - logging.debug('signature: %s', signature) |
1300 | + LOG.debug('user.secret: %s', user.secret) |
1301 | + LOG.debug('expected_signature: %s', expected_signature) |
1302 | + LOG.debug('signature: %s', signature) |
1303 | if signature != expected_signature: |
1304 | + LOG.audit(_("Invalid signature for user %s"), user.name) |
1305 | raise exception.NotAuthorized(_('Signature does not match')) |
1306 | return (user, project) |
1307 | |
1308 | @@ -398,6 +408,12 @@ |
1309 | raise exception.NotFound(_("The %s role can not be found") % role) |
1310 | if project is not None and role in FLAGS.global_roles: |
1311 | raise exception.NotFound(_("The %s role is global only") % role) |
1312 | + if project: |
1313 | + LOG.audit(_("Adding role %s to user %s in project %s"), role, |
1314 | + User.safe_id(user), Project.safe_id(project)) |
1315 | + else: |
1316 | + LOG.audit(_("Adding sitewide role %s to user %s"), role, |
1317 | + User.safe_id(user)) |
1318 | with self.driver() as drv: |
1319 | drv.add_role(User.safe_id(user), role, Project.safe_id(project)) |
1320 | |
1321 | @@ -418,6 +434,12 @@ |
1322 | @type project: Project or project_id |
1323 | @param project: Project in which to remove local role. |
1324 | """ |
1325 | + if project: |
1326 | + LOG.audit(_("Removing role %s from user %s on project %s"), |
1327 | + role, User.safe_id(user), Project.safe_id(project)) |
1328 | + else: |
1329 | + LOG.audit(_("Removing sitewide role %s from user %s"), role, |
1330 | + User.safe_id(user)) |
1331 | with self.driver() as drv: |
1332 | drv.remove_role(User.safe_id(user), role, Project.safe_id(project)) |
1333 | |
1334 | @@ -480,6 +502,8 @@ |
1335 | description, |
1336 | member_users) |
1337 | if project_dict: |
1338 | + LOG.audit(_("Created project %s with manager %s"), name, |
1339 | + manager_user) |
1340 | project = Project(**project_dict) |
1341 | return project |
1342 | |
1343 | @@ -496,6 +520,7 @@ |
1344 | @param project: This will be the new description of the project. |
1345 | |
1346 | """ |
1347 | + LOG.audit(_("modifying project %s"), Project.safe_id(project)) |
1348 | if manager_user: |
1349 | manager_user = User.safe_id(manager_user) |
1350 | with self.driver() as drv: |
1351 | @@ -505,6 +530,8 @@ |
1352 | |
1353 | def add_to_project(self, user, project): |
1354 | """Add user to project""" |
1355 | + LOG.audit(_("Adding user %s to project %s"), User.safe_id(user), |
1356 | + Project.safe_id(project)) |
1357 | with self.driver() as drv: |
1358 | return drv.add_to_project(User.safe_id(user), |
1359 | Project.safe_id(project)) |
1360 | @@ -523,6 +550,8 @@ |
1361 | |
1362 | def remove_from_project(self, user, project): |
1363 | """Removes a user from a project""" |
1364 | + LOG.audit(_("Remove user %s from project %s"), User.safe_id(user), |
1365 | + Project.safe_id(project)) |
1366 | with self.driver() as drv: |
1367 | return drv.remove_from_project(User.safe_id(user), |
1368 | Project.safe_id(project)) |
1369 | @@ -549,6 +578,7 @@ |
1370 | |
1371 | def delete_project(self, project): |
1372 | """Deletes a project""" |
1373 | + LOG.audit(_("Deleting project %s"), Project.safe_id(project)) |
1374 | with self.driver() as drv: |
1375 | drv.delete_project(Project.safe_id(project)) |
1376 | |
1377 | @@ -603,13 +633,16 @@ |
1378 | with self.driver() as drv: |
1379 | user_dict = drv.create_user(name, access, secret, admin) |
1380 | if user_dict: |
1381 | - return User(**user_dict) |
1382 | + rv = User(**user_dict) |
1383 | + LOG.audit(_("Created user %s (admin: %r)"), rv.name, rv.admin) |
1384 | + return rv |
1385 | |
1386 | def delete_user(self, user): |
1387 | """Deletes a user |
1388 | |
1389 | Additionally deletes all users key_pairs""" |
1390 | uid = User.safe_id(user) |
1391 | + LOG.audit(_("Deleting user %s"), uid) |
1392 | db.key_pair_destroy_all_by_user(context.get_admin_context(), |
1393 | uid) |
1394 | with self.driver() as drv: |
1395 | @@ -618,6 +651,12 @@ |
1396 | def modify_user(self, user, access_key=None, secret_key=None, admin=None): |
1397 | """Modify credentials for a user""" |
1398 | uid = User.safe_id(user) |
1399 | + if access_key: |
1400 | + LOG.audit(_("Access Key change for user %s"), uid) |
1401 | + if secret_key: |
1402 | + LOG.audit(_("Secret Key change for user %s"), uid) |
1403 | + if admin is not None: |
1404 | + LOG.audit(_("Admin status set to %r for user %s"), admin, uid) |
1405 | with self.driver() as drv: |
1406 | drv.modify_user(uid, access_key, secret_key, admin) |
1407 | |
1408 | @@ -666,7 +705,7 @@ |
1409 | port=vpn_port) |
1410 | zippy.writestr(FLAGS.credential_vpn_file, config) |
1411 | else: |
1412 | - logging.warn(_("No vpn data for project %s"), pid) |
1413 | + LOG.warn(_("No vpn data for project %s"), pid) |
1414 | |
1415 | zippy.writestr(FLAGS.ca_file, crypto.fetch_ca(pid)) |
1416 | zippy.close() |
1417 | |
1418 | === modified file 'nova/auth/signer.py' |
1419 | --- nova/auth/signer.py 2010-08-19 05:14:34 +0000 |
1420 | +++ nova/auth/signer.py 2011-01-07 19:51:33 +0000 |
1421 | @@ -46,7 +46,6 @@ |
1422 | import base64 |
1423 | import hashlib |
1424 | import hmac |
1425 | -import logging |
1426 | import urllib |
1427 | |
1428 | # NOTE(vish): for new boto |
1429 | @@ -54,9 +53,13 @@ |
1430 | # NOTE(vish): for old boto |
1431 | import boto.utils |
1432 | |
1433 | +from nova import log as logging |
1434 | from nova.exception import Error |
1435 | |
1436 | |
1437 | +LOG = logging.getLogger('nova.signer') |
1438 | + |
1439 | + |
1440 | class Signer(object): |
1441 | """Hacked up code from boto/connection.py""" |
1442 | |
1443 | @@ -120,7 +123,7 @@ |
1444 | |
1445 | def _calc_signature_2(self, params, verb, server_string, path): |
1446 | """Generate AWS signature version 2 string.""" |
1447 | - logging.debug('using _calc_signature_2') |
1448 | + LOG.debug('using _calc_signature_2') |
1449 | string_to_sign = '%s\n%s\n%s\n' % (verb, server_string, path) |
1450 | if self.hmac_256: |
1451 | current_hmac = self.hmac_256 |
1452 | @@ -136,13 +139,13 @@ |
1453 | val = urllib.quote(val, safe='-_~') |
1454 | pairs.append(urllib.quote(key, safe='') + '=' + val) |
1455 | qs = '&'.join(pairs) |
1456 | - logging.debug('query string: %s', qs) |
1457 | + LOG.debug('query string: %s', qs) |
1458 | string_to_sign += qs |
1459 | - logging.debug('string_to_sign: %s', string_to_sign) |
1460 | + LOG.debug('string_to_sign: %s', string_to_sign) |
1461 | current_hmac.update(string_to_sign) |
1462 | b64 = base64.b64encode(current_hmac.digest()) |
1463 | - logging.debug('len(b64)=%d', len(b64)) |
1464 | - logging.debug('base64 encoded digest: %s', b64) |
1465 | + LOG.debug('len(b64)=%d', len(b64)) |
1466 | + LOG.debug('base64 encoded digest: %s', b64) |
1467 | return b64 |
1468 | |
1469 | |
1470 | |
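The signer.py hunk above shows the conversion this branch applies throughout: drop the stdlib `import logging`, import `nova.log` under the same name, and bind a module-level `LOG` named after the dotted module path. A minimal sketch of the idiom, written against the stdlib directly since `nova.log` wraps it; the `calc_signature` helper here is hypothetical, for illustration only:

```python
import logging

# Module-level logger named after the dotted module path, as in
# LOG = logging.getLogger('nova.signer') in the diff above.
LOG = logging.getLogger('nova.signer')


def calc_signature(query_string):
    # Arguments are passed separately instead of pre-formatted with %,
    # so the string is only built when DEBUG is actually enabled.
    LOG.debug('query string: %s', query_string)
    return query_string
```

Passing the value as a lazy argument (rather than `LOG.debug('... %s' % qs)`) is the style the diff converges on in every hunk.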
1471 | === modified file 'nova/cloudpipe/pipelib.py' |
1472 | --- nova/cloudpipe/pipelib.py 2010-12-22 21:17:30 +0000 |
1473 | +++ nova/cloudpipe/pipelib.py 2011-01-07 19:51:33 +0000 |
1474 | @@ -22,7 +22,6 @@ |
1475 | |
1476 | """ |
1477 | |
1478 | -import logging |
1479 | import os |
1480 | import string |
1481 | import tempfile |
1482 | @@ -33,6 +32,7 @@ |
1483 | from nova import db |
1484 | from nova import exception |
1485 | from nova import flags |
1486 | +from nova import log as logging |
1487 | from nova import utils |
1488 | from nova.auth import manager |
1489 | # TODO(eday): Eventually changes these to something not ec2-specific |
1490 | @@ -51,7 +51,7 @@ |
1491 | _('Netmask to push into openvpn config')) |
1492 | |
1493 | |
1494 | -LOG = logging.getLogger('nova-cloudpipe') |
1495 | +LOG = logging.getLogger('nova.cloudpipe') |
1496 | |
1497 | |
1498 | class CloudPipe(object): |
1499 | |
1500 | === modified file 'nova/compute/api.py' |
1501 | --- nova/compute/api.py 2011-01-07 01:19:22 +0000 |
1502 | +++ nova/compute/api.py 2011-01-07 19:51:33 +0000 |
1503 | @@ -21,12 +21,12 @@ |
1504 | """ |
1505 | |
1506 | import datetime |
1507 | -import logging |
1508 | import time |
1509 | |
1510 | from nova import db |
1511 | from nova import exception |
1512 | from nova import flags |
1513 | +from nova import log as logging |
1514 | from nova import network |
1515 | from nova import quota |
1516 | from nova import rpc |
1517 | @@ -36,6 +36,7 @@ |
1518 | from nova.db import base |
1519 | |
1520 | FLAGS = flags.FLAGS |
1521 | +LOG = logging.getLogger('nova.compute.api') |
1522 | |
1523 | |
1524 | def generate_default_hostname(instance_id): |
1525 | @@ -63,13 +64,13 @@ |
1526 | try: |
1527 | instance = self.get(context, instance_id) |
1528 | except exception.NotFound as e: |
1529 | - logging.warning("Instance %d was not found in get_network_topic", |
1530 | - instance_id) |
1531 | + LOG.warning(_("Instance %d was not found in get_network_topic"), |
1532 | + instance_id) |
1533 | raise e |
1534 | |
1535 | host = instance['host'] |
1536 | if not host: |
1537 | - raise exception.Error("Instance %d has no host" % instance_id) |
1538 | + raise exception.Error(_("Instance %d has no host") % instance_id) |
1539 | topic = self.db.queue_get_for(context, FLAGS.compute_topic, host) |
1540 | return rpc.call(context, |
1541 | topic, |
1542 | @@ -88,10 +89,10 @@ |
1543 | type_data = instance_types.INSTANCE_TYPES[instance_type] |
1544 | num_instances = quota.allowed_instances(context, max_count, type_data) |
1545 | if num_instances < min_count: |
1546 | - logging.warn("Quota exceeeded for %s, tried to run %s instances", |
1547 | - context.project_id, min_count) |
1548 | - raise quota.QuotaError("Instance quota exceeded. You can only " |
1549 | - "run %s more instances of this type." % |
1550 | + LOG.warn(_("Quota exceeeded for %s, tried to run %s instances"), |
1551 | + context.project_id, min_count) |
1552 | + raise quota.QuotaError(_("Instance quota exceeded. You can only " |
1553 | + "run %s more instances of this type.") % |
1554 | num_instances, "InstanceLimitExceeded") |
1555 | |
1556 | is_vpn = image_id == FLAGS.vpn_image_id |
1557 | @@ -105,7 +106,7 @@ |
1558 | if kernel_id == str(FLAGS.null_kernel): |
1559 | kernel_id = None |
1560 | ramdisk_id = None |
1561 | - logging.debug("Creating a raw instance") |
1562 | + LOG.debug(_("Creating a raw instance")) |
1563 | # Make sure we have access to kernel and ramdisk (if not raw) |
1564 | if kernel_id: |
1565 | self.image_service.show(context, kernel_id) |
1566 | @@ -152,7 +153,7 @@ |
1567 | |
1568 | elevated = context.elevated() |
1569 | instances = [] |
1570 | - logging.debug(_("Going to run %s instances..."), num_instances) |
1571 | + LOG.debug(_("Going to run %s instances..."), num_instances) |
1572 | for num in range(num_instances): |
1573 | instance = dict(mac_address=utils.generate_mac(), |
1574 | launch_index=num, |
1575 | @@ -176,7 +177,7 @@ |
1576 | instance = self.update(context, instance_id, **updates) |
1577 | instances.append(instance) |
1578 | |
1579 | - logging.debug(_("Casting to scheduler for %s/%s's instance %s"), |
1580 | + LOG.debug(_("Casting to scheduler for %s/%s's instance %s"), |
1581 | context.project_id, context.user_id, instance_id) |
1582 | rpc.cast(context, |
1583 | FLAGS.scheduler_topic, |
1584 | @@ -218,17 +219,17 @@ |
1585 | return self.db.instance_update(context, instance_id, kwargs) |
1586 | |
1587 | def delete(self, context, instance_id): |
1588 | - logging.debug("Going to try and terminate %s" % instance_id) |
1589 | + LOG.debug(_("Going to try and terminate %s"), instance_id) |
1590 | try: |
1591 | instance = self.get(context, instance_id) |
1592 | except exception.NotFound as e: |
1593 | - logging.warning(_("Instance %s was not found during terminate"), |
1594 | - instance_id) |
1595 | + LOG.warning(_("Instance %d was not found during terminate"), |
1596 | + instance_id) |
1597 | raise e |
1598 | |
1599 | if (instance['state_description'] == 'terminating'): |
1600 | - logging.warning(_("Instance %s is already being terminated"), |
1601 | - instance_id) |
1602 | + LOG.warning(_("Instance %d is already being terminated"), |
1603 | + instance_id) |
1604 | return |
1605 | |
1606 | self.update(context, |
1607 | |
1608 | === modified file 'nova/compute/disk.py' |
1609 | --- nova/compute/disk.py 2010-12-23 21:41:54 +0000 |
1610 | +++ nova/compute/disk.py 2011-01-07 19:51:33 +0000 |
1611 | @@ -22,14 +22,15 @@ |
1612 | |
1613 | """ |
1614 | |
1615 | -import logging |
1616 | import os |
1617 | import tempfile |
1618 | |
1619 | from nova import exception |
1620 | from nova import flags |
1621 | - |
1622 | - |
1623 | +from nova import log as logging |
1624 | + |
1625 | + |
1626 | +LOG = logging.getLogger('nova.compute.disk') |
1627 | FLAGS = flags.FLAGS |
1628 | flags.DEFINE_integer('minimum_root_size', 1024 * 1024 * 1024 * 10, |
1629 | 'minimum size in bytes of root partition') |
1630 | @@ -67,12 +68,12 @@ |
1631 | execute('resize2fs %s' % infile) |
1632 | file_size = FLAGS.minimum_root_size |
1633 | elif file_size % sector_size != 0: |
1634 | - logging.warn(_("Input partition size not evenly divisible by" |
1635 | - " sector size: %d / %d"), file_size, sector_size) |
1636 | + LOG.warn(_("Input partition size not evenly divisible by" |
1637 | + " sector size: %d / %d"), file_size, sector_size) |
1638 | primary_sectors = file_size / sector_size |
1639 | if local_bytes % sector_size != 0: |
1640 | - logging.warn(_("Bytes for local storage not evenly divisible" |
1641 | - " by sector size: %d / %d"), local_bytes, sector_size) |
1642 | + LOG.warn(_("Bytes for local storage not evenly divisible" |
1643 | + " by sector size: %d / %d"), local_bytes, sector_size) |
1644 | local_sectors = local_bytes / sector_size |
1645 | |
1646 | mbr_last = 62 # a |
1647 | |
1648 | === modified file 'nova/compute/manager.py' |
1649 | --- nova/compute/manager.py 2011-01-06 23:27:57 +0000 |
1650 | +++ nova/compute/manager.py 2011-01-07 19:51:33 +0000 |
1651 | @@ -35,11 +35,11 @@ |
1652 | """ |
1653 | |
1654 | import datetime |
1655 | -import logging |
1656 | import functools |
1657 | |
1658 | from nova import exception |
1659 | from nova import flags |
1660 | +from nova import log as logging |
1661 | from nova import manager |
1662 | from nova import rpc |
1663 | from nova import utils |
1664 | @@ -53,6 +53,8 @@ |
1665 | flags.DEFINE_string('stub_network', False, |
1666 | 'Stub network related code') |
1667 | |
1668 | +LOG = logging.getLogger('nova.compute.manager') |
1669 | + |
1670 | |
1671 | def checks_instance_lock(function): |
1672 | """ |
1673 | @@ -64,23 +66,25 @@ |
1674 | @functools.wraps(function) |
1675 | def decorated_function(self, context, instance_id, *args, **kwargs): |
1676 | |
1677 | - logging.info(_("check_instance_lock: decorating: |%s|"), function) |
1678 | - logging.info(_("check_instance_lock: arguments: |%s| |%s| |%s|"), |
1679 | - self, |
1680 | - context, |
1681 | - instance_id) |
1682 | + LOG.info(_("check_instance_lock: decorating: |%s|"), function, |
1683 | + context=context) |
1684 | + LOG.info(_("check_instance_lock: arguments: |%s| |%s| |%s|"), |
1685 | + self, context, instance_id, context=context) |
1686 | locked = self.get_lock(context, instance_id) |
1687 | admin = context.is_admin |
1688 | - logging.info(_("check_instance_lock: locked: |%s|"), locked) |
1689 | - logging.info(_("check_instance_lock: admin: |%s|"), admin) |
1690 | + LOG.info(_("check_instance_lock: locked: |%s|"), locked, |
1691 | + context=context) |
1692 | + LOG.info(_("check_instance_lock: admin: |%s|"), admin, |
1693 | + context=context) |
1694 | |
1695 | # if admin or unlocked call function otherwise log error |
1696 | if admin or not locked: |
1697 | - logging.info(_("check_instance_lock: executing: |%s|"), function) |
1698 | + LOG.info(_("check_instance_lock: executing: |%s|"), function, |
1699 | + context=context) |
1700 | function(self, context, instance_id, *args, **kwargs) |
1701 | else: |
1702 | - logging.error(_("check_instance_lock: not executing |%s|"), |
1703 | - function) |
1704 | + LOG.error(_("check_instance_lock: not executing |%s|"), |
1705 | + function, context=context) |
1706 | return False |
1707 | |
1708 | return decorated_function |
1709 | @@ -144,7 +148,8 @@ |
1710 | instance_ref = self.db.instance_get(context, instance_id) |
1711 | if instance_ref['name'] in self.driver.list_instances(): |
1712 | raise exception.Error(_("Instance has already been created")) |
1713 | - logging.debug(_("instance %s: starting..."), instance_id) |
1714 | + LOG.audit(_("instance %s: starting..."), instance_id, |
1715 | + context=context) |
1716 | self.db.instance_update(context, |
1717 | instance_id, |
1718 | {'host': self.host}) |
1719 | @@ -182,8 +187,8 @@ |
1720 | instance_id, |
1721 | {'launched_at': now}) |
1722 | except Exception: # pylint: disable-msg=W0702 |
1723 | - logging.exception(_("instance %s: Failed to spawn"), |
1724 | - instance_ref['name']) |
1725 | + LOG.exception(_("instance %s: Failed to spawn"), instance_id, |
1726 | + context=context) |
1727 | self.db.instance_set_state(context, |
1728 | instance_id, |
1729 | power_state.SHUTDOWN) |
1730 | @@ -195,14 +200,15 @@ |
1731 | def terminate_instance(self, context, instance_id): |
1732 | """Terminate an instance on this machine.""" |
1733 | context = context.elevated() |
1734 | - |
1735 | instance_ref = self.db.instance_get(context, instance_id) |
1736 | + LOG.audit(_("Terminating instance %s"), instance_id, context=context) |
1737 | |
1738 | if not FLAGS.stub_network: |
1739 | address = self.db.instance_get_floating_address(context, |
1740 | instance_ref['id']) |
1741 | if address: |
1742 | - logging.debug(_("Disassociating address %s") % address) |
1743 | + LOG.debug(_("Disassociating address %s"), address, |
1744 | + context=context) |
1745 | # NOTE(vish): Right now we don't really care if the ip is |
1746 | # disassociated. We may need to worry about |
1747 | # checking this later. |
1748 | @@ -214,15 +220,14 @@ |
1749 | address = self.db.instance_get_fixed_address(context, |
1750 | instance_ref['id']) |
1751 | if address: |
1752 | - logging.debug(_("Deallocating address %s") % address) |
1753 | + LOG.debug(_("Deallocating address %s"), address, |
1754 | + context=context) |
1755 | # NOTE(vish): Currently, nothing needs to be done on the |
1756 | # network node until release. If this changes, |
1757 | # we will need to cast here. |
1758 | self.network_manager.deallocate_fixed_ip(context.elevated(), |
1759 | address) |
1760 | |
1761 | - logging.debug(_("instance %s: terminating"), instance_id) |
1762 | - |
1763 | volumes = instance_ref.get('volumes', []) or [] |
1764 | for volume in volumes: |
1765 | self.detach_volume(context, instance_id, volume['id']) |
1766 | @@ -242,15 +247,16 @@ |
1767 | context = context.elevated() |
1768 | self._update_state(context, instance_id) |
1769 | instance_ref = self.db.instance_get(context, instance_id) |
1770 | + LOG.audit(_("Rebooting instance %s"), instance_id, context=context) |
1771 | |
1772 | if instance_ref['state'] != power_state.RUNNING: |
1773 | - logging.warn(_('trying to reboot a non-running ' |
1774 | - 'instance: %s (state: %s excepted: %s)'), |
1775 | - instance_id, |
1776 | - instance_ref['state'], |
1777 | - power_state.RUNNING) |
1778 | + LOG.warn(_('trying to reboot a non-running ' |
1779 | + 'instance: %s (state: %s excepted: %s)'), |
1780 | + instance_id, |
1781 | + instance_ref['state'], |
1782 | + power_state.RUNNING, |
1783 | + context=context) |
1784 | |
1785 | - logging.debug(_('instance %s: rebooting'), instance_ref['name']) |
1786 | self.db.instance_set_state(context, |
1787 | instance_id, |
1788 | power_state.NOSTATE, |
1789 | @@ -270,13 +276,12 @@ |
1790 | # potentially? |
1791 | self._update_state(context, instance_id) |
1792 | |
1793 | - logging.debug(_('instance %s: snapshotting'), instance_ref['name']) |
1794 | + LOG.audit(_('instance %s: snapshotting'), instance_id, |
1795 | + context=context) |
1796 | if instance_ref['state'] != power_state.RUNNING: |
1797 | - logging.warn(_('trying to snapshot a non-running ' |
1798 | - 'instance: %s (state: %s excepted: %s)'), |
1799 | - instance_id, |
1800 | - instance_ref['state'], |
1801 | - power_state.RUNNING) |
1802 | + LOG.warn(_('trying to snapshot a non-running ' |
1803 | + 'instance: %s (state: %s excepted: %s)'), |
1804 | + instance_id, instance_ref['state'], power_state.RUNNING) |
1805 | |
1806 | self.driver.snapshot(instance_ref, name) |
1807 | |
1808 | @@ -286,8 +291,7 @@ |
1809 | """Rescue an instance on this server.""" |
1810 | context = context.elevated() |
1811 | instance_ref = self.db.instance_get(context, instance_id) |
1812 | - |
1813 | - logging.debug(_('instance %s: rescuing'), instance_id) |
1814 | + LOG.audit(_('instance %s: rescuing'), instance_id, context=context) |
1815 | self.db.instance_set_state(context, |
1816 | instance_id, |
1817 | power_state.NOSTATE, |
1818 | @@ -302,8 +306,7 @@ |
1819 | """Rescue an instance on this server.""" |
1820 | context = context.elevated() |
1821 | instance_ref = self.db.instance_get(context, instance_id) |
1822 | - |
1823 | - logging.debug(_('instance %s: unrescuing'), instance_id) |
1824 | + LOG.audit(_('instance %s: unrescuing'), instance_id, context=context) |
1825 | self.db.instance_set_state(context, |
1826 | instance_id, |
1827 | power_state.NOSTATE, |
1828 | @@ -322,8 +325,7 @@ |
1829 | """Pause an instance on this server.""" |
1830 | context = context.elevated() |
1831 | instance_ref = self.db.instance_get(context, instance_id) |
1832 | - |
1833 | - logging.debug('instance %s: pausing', instance_id) |
1834 | + LOG.audit(_('instance %s: pausing'), instance_id, context=context) |
1835 | self.db.instance_set_state(context, |
1836 | instance_id, |
1837 | power_state.NOSTATE, |
1838 | @@ -340,8 +342,7 @@ |
1839 | """Unpause a paused instance on this server.""" |
1840 | context = context.elevated() |
1841 | instance_ref = self.db.instance_get(context, instance_id) |
1842 | - |
1843 | - logging.debug('instance %s: unpausing', instance_id) |
1844 | + LOG.audit(_('instance %s: unpausing'), instance_id, context=context) |
1845 | self.db.instance_set_state(context, |
1846 | instance_id, |
1847 | power_state.NOSTATE, |
1848 | @@ -358,8 +359,8 @@ |
1849 | instance_ref = self.db.instance_get(context, instance_id) |
1850 | |
1851 | if instance_ref["state"] == power_state.RUNNING: |
1852 | - logging.debug(_("instance %s: retrieving diagnostics"), |
1853 | - instance_id) |
1854 | + LOG.audit(_("instance %s: retrieving diagnostics"), instance_id, |
1855 | + context=context) |
1856 | return self.driver.get_diagnostics(instance_ref) |
1857 | |
1858 | @exception.wrap_exception |
1859 | @@ -371,8 +372,7 @@ |
1860 | """ |
1861 | context = context.elevated() |
1862 | instance_ref = self.db.instance_get(context, instance_id) |
1863 | - |
1864 | - logging.debug(_('instance %s: suspending'), instance_id) |
1865 | + LOG.audit(_('instance %s: suspending'), instance_id, context=context) |
1866 | self.db.instance_set_state(context, instance_id, |
1867 | power_state.NOSTATE, |
1868 | 'suspending') |
1869 | @@ -391,8 +391,7 @@ |
1870 | """ |
1871 | context = context.elevated() |
1872 | instance_ref = self.db.instance_get(context, instance_id) |
1873 | - |
1874 | - logging.debug(_('instance %s: resuming'), instance_id) |
1875 | + LOG.audit(_('instance %s: resuming'), instance_id, context=context) |
1876 | self.db.instance_set_state(context, instance_id, |
1877 | power_state.NOSTATE, |
1878 | 'resuming') |
1879 | @@ -411,7 +410,7 @@ |
1880 | context = context.elevated() |
1881 | instance_ref = self.db.instance_get(context, instance_id) |
1882 | |
1883 | - logging.debug(_('instance %s: locking'), instance_id) |
1884 | + LOG.debug(_('instance %s: locking'), instance_id, context=context) |
1885 | self.db.instance_update(context, instance_id, {'locked': True}) |
1886 | |
1887 | @exception.wrap_exception |
1888 | @@ -423,7 +422,7 @@ |
1889 | context = context.elevated() |
1890 | instance_ref = self.db.instance_get(context, instance_id) |
1891 | |
1892 | - logging.debug(_('instance %s: unlocking'), instance_id) |
1893 | + LOG.debug(_('instance %s: unlocking'), instance_id, context=context) |
1894 | self.db.instance_update(context, instance_id, {'locked': False}) |
1895 | |
1896 | @exception.wrap_exception |
1897 | @@ -433,7 +432,8 @@ |
1898 | |
1899 | """ |
1900 | context = context.elevated() |
1901 | - logging.debug(_('instance %s: getting locked state'), instance_id) |
1902 | + LOG.debug(_('instance %s: getting locked state'), instance_id, |
1903 | + context=context) |
1904 | instance_ref = self.db.instance_get(context, instance_id) |
1905 | return instance_ref['locked'] |
1906 | |
1907 | @@ -441,9 +441,9 @@ |
1908 | def get_console_output(self, context, instance_id): |
1909 | """Send the console output for an instance.""" |
1910 | context = context.elevated() |
1911 | - logging.debug(_("instance %s: getting console output"), instance_id) |
1912 | instance_ref = self.db.instance_get(context, instance_id) |
1913 | - |
1914 | + LOG.audit(_("Get console output for instance %s"), instance_id, |
1915 | + context=context) |
1916 | return self.driver.get_console_output(instance_ref) |
1917 | |
1918 | @exception.wrap_exception |
1919 | @@ -451,9 +451,9 @@ |
1920 | def attach_volume(self, context, instance_id, volume_id, mountpoint): |
1921 | """Attach a volume to an instance.""" |
1922 | context = context.elevated() |
1923 | - logging.debug(_("instance %s: attaching volume %s to %s"), instance_id, |
1924 | - volume_id, mountpoint) |
1925 | instance_ref = self.db.instance_get(context, instance_id) |
1926 | + LOG.audit(_("instance %s: attaching volume %s to %s"), instance_id, |
1927 | + volume_id, mountpoint, context=context) |
1928 | dev_path = self.volume_manager.setup_compute_volume(context, |
1929 | volume_id) |
1930 | try: |
1931 | @@ -468,8 +468,8 @@ |
1932 | # NOTE(vish): The inline callback eats the exception info so we |
1933 | # log the traceback here and reraise the same |
1934 | # ecxception below. |
1935 | - logging.exception(_("instance %s: attach failed %s, removing"), |
1936 | - instance_id, mountpoint) |
1937 | + LOG.exception(_("instance %s: attach failed %s, removing"), |
1938 | + instance_id, mountpoint, context=context) |
1939 | self.volume_manager.remove_compute_volume(context, |
1940 | volume_id) |
1941 | raise exc |
1942 | @@ -481,14 +481,14 @@ |
1943 | def detach_volume(self, context, instance_id, volume_id): |
1944 | """Detach a volume from an instance.""" |
1945 | context = context.elevated() |
1946 | - logging.debug(_("instance %s: detaching volume %s"), |
1947 | - instance_id, |
1948 | - volume_id) |
1949 | instance_ref = self.db.instance_get(context, instance_id) |
1950 | volume_ref = self.db.volume_get(context, volume_id) |
1951 | + LOG.audit(_("Detach volume %s from mountpoint %s on instance %s"), |
1952 | + volume_id, volume_ref['mountpoint'], instance_id, |
1953 | + context=context) |
1954 | if instance_ref['name'] not in self.driver.list_instances(): |
1955 | - logging.warn(_("Detaching volume from unknown instance %s"), |
1956 | - instance_ref['name']) |
1957 | + LOG.warn(_("Detaching volume from unknown instance %s"), |
1958 | + instance_id, context=context) |
1959 | else: |
1960 | self.driver.detach_volume(instance_ref['name'], |
1961 | volume_ref['mountpoint']) |
1962 | |
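The reworked `checks_instance_lock` decorator above gates every instance operation on the lock state and admin flag. Stripped of the logging calls, its control flow reduces to the sketch below; the fake manager and context classes are stand-ins, not Nova classes, and this version returns the wrapped function's result for testability where the original discards it:

```python
import functools


def checks_instance_lock(function):
    """Run the wrapped method only if the caller is admin or the
    instance is unlocked; otherwise return False."""
    @functools.wraps(function)
    def decorated(self, context, instance_id, *args, **kwargs):
        locked = self.get_lock(context, instance_id)
        if context.is_admin or not locked:
            return function(self, context, instance_id, *args, **kwargs)
        return False
    return decorated


class FakeContext(object):
    is_admin = False


class FakeManager(object):
    def __init__(self, locked):
        self._locked = locked

    def get_lock(self, context, instance_id):
        return self._locked

    @checks_instance_lock
    def reboot(self, context, instance_id):
        return 'rebooted %s' % instance_id
```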
1963 | === modified file 'nova/compute/monitor.py' |
1964 | --- nova/compute/monitor.py 2010-12-22 20:59:53 +0000 |
1965 | +++ nova/compute/monitor.py 2011-01-07 19:51:33 +0000 |
1966 | @@ -25,19 +25,17 @@ |
1967 | """ |
1968 | |
1969 | import datetime |
1970 | -import logging |
1971 | import os |
1972 | -import sys |
1973 | import time |
1974 | |
1975 | import boto |
1976 | import boto.s3 |
1977 | import rrdtool |
1978 | -from twisted.internet import defer |
1979 | from twisted.internet import task |
1980 | from twisted.application import service |
1981 | |
1982 | from nova import flags |
1983 | +from nova import log as logging |
1984 | from nova.virt import connection as virt_connection |
1985 | |
1986 | |
1987 | @@ -91,6 +89,9 @@ |
1988 | utcnow = datetime.datetime.utcnow |
1989 | |
1990 | |
1991 | +LOG = logging.getLogger('nova.compute.monitor') |
1992 | + |
1993 | + |
1994 | def update_rrd(instance, name, data): |
1995 | """ |
1996 | Updates the specified RRD file. |
1997 | @@ -255,20 +256,20 @@ |
1998 | Updates the instances statistics and stores the resulting graphs |
1999 | in the internal object store on the cloud controller. |
2000 | """ |
2001 | - logging.debug(_('updating %s...'), self.instance_id) |
2002 | + LOG.debug(_('updating %s...'), self.instance_id) |
2003 | |
2004 | try: |
2005 | data = self.fetch_cpu_stats() |
2006 | if data != None: |
2007 | - logging.debug('CPU: %s', data) |
2008 | + LOG.debug('CPU: %s', data) |
2009 | update_rrd(self, 'cpu', data) |
2010 | |
2011 | data = self.fetch_net_stats() |
2012 | - logging.debug('NET: %s', data) |
2013 | + LOG.debug('NET: %s', data) |
2014 | update_rrd(self, 'net', data) |
2015 | |
2016 | data = self.fetch_disk_stats() |
2017 | - logging.debug('DISK: %s', data) |
2018 | + LOG.debug('DISK: %s', data) |
2019 | update_rrd(self, 'disk', data) |
2020 | |
2021 | # TODO(devcamcar): Turn these into pool.ProcessPool.execute() calls |
2022 | @@ -285,7 +286,7 @@ |
2023 | graph_disk(self, '1w') |
2024 | graph_disk(self, '1m') |
2025 | except Exception: |
2026 | - logging.exception(_('unexpected error during update')) |
2027 | + LOG.exception(_('unexpected error during update')) |
2028 | |
2029 | self.last_updated = utcnow() |
2030 | |
2031 | @@ -309,7 +310,7 @@ |
2032 | self.cputime = float(info['cpu_time']) |
2033 | self.cputime_last_updated = utcnow() |
2034 | |
2035 | - logging.debug('CPU: %d', self.cputime) |
2036 | + LOG.debug('CPU: %d', self.cputime) |
2037 | |
2038 | # Skip calculation on first pass. Need delta to get a meaningful value. |
2039 | if cputime_last_updated == None: |
2040 | @@ -319,17 +320,17 @@ |
2041 | d = self.cputime_last_updated - cputime_last_updated |
2042 | t = d.days * 86400 + d.seconds |
2043 | |
2044 | - logging.debug('t = %d', t) |
2045 | + LOG.debug('t = %d', t) |
2046 | |
2047 | # Calculate change over time in number of nanoseconds of CPU time used. |
2048 | cputime_delta = self.cputime - cputime_last |
2049 | |
2050 | - logging.debug('cputime_delta = %s', cputime_delta) |
2051 | + LOG.debug('cputime_delta = %s', cputime_delta) |
2052 | |
2053 | # Get the number of virtual cpus in this domain. |
2054 | vcpus = int(info['num_cpu']) |
2055 | |
2056 | - logging.debug('vcpus = %d', vcpus) |
2057 | + LOG.debug('vcpus = %d', vcpus) |
2058 | |
2059 | # Calculate CPU % used and cap at 100. |
2060 | return min(cputime_delta / (t * vcpus * 1.0e9) * 100, 100) |
2061 | @@ -351,8 +352,8 @@ |
2062 | rd += rd_bytes |
2063 | wr += wr_bytes |
2064 | except TypeError: |
2065 | - logging.error(_('Cannot get blockstats for "%s" on "%s"'), |
2066 | - disk, self.instance_id) |
2067 | + LOG.error(_('Cannot get blockstats for "%s" on "%s"'), |
2068 | + disk, self.instance_id) |
2069 | raise |
2070 | |
2071 | return '%d:%d' % (rd, wr) |
2072 | @@ -373,8 +374,8 @@ |
2073 | rx += stats[0] |
2074 | tx += stats[4] |
2075 | except TypeError: |
2076 | - logging.error(_('Cannot get ifstats for "%s" on "%s"'), |
2077 | - interface, self.instance_id) |
2078 | + LOG.error(_('Cannot get ifstats for "%s" on "%s"'), |
2079 | + interface, self.instance_id) |
2080 | raise |
2081 | |
2082 | return '%d:%d' % (rx, tx) |
2083 | @@ -408,7 +409,7 @@ |
2084 | try: |
2085 | conn = virt_connection.get_connection(read_only=True) |
2086 | except Exception, exn: |
2087 | - logging.exception(_('unexpected exception getting connection')) |
2088 | + LOG.exception(_('unexpected exception getting connection')) |
2089 | time.sleep(FLAGS.monitoring_instances_delay) |
2090 | return |
2091 | |
2092 | @@ -416,14 +417,14 @@ |
2093 | try: |
2094 | self.updateInstances_(conn, domain_ids) |
2095 | except Exception, exn: |
2096 | - logging.exception('updateInstances_') |
2097 | + LOG.exception('updateInstances_') |
2098 | |
2099 | def updateInstances_(self, conn, domain_ids): |
2100 | for domain_id in domain_ids: |
2101 | if not domain_id in self._instances: |
2102 | instance = Instance(conn, domain_id) |
2103 | self._instances[domain_id] = instance |
2104 | - logging.debug(_('Found instance: %s'), domain_id) |
2105 | + LOG.debug(_('Found instance: %s'), domain_id) |
2106 | |
2107 | for key in self._instances.keys(): |
2108 | instance = self._instances[key] |
2109 | |
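The monitor.py changes are logger renames only; the surrounding `fetch_cpu_stats` math is unchanged: percent CPU used is `cputime_delta / (t * vcpus * 1.0e9) * 100`, capped at 100, where the delta is in nanoseconds and `t` is the elapsed wall time in seconds. As a standalone check, with a hypothetical helper name:

```python
def cpu_percent(cputime_delta_ns, seconds, vcpus):
    """CPU percentage over an interval, capped at 100, mirroring the
    return expression in Instance.fetch_cpu_stats."""
    return min(cputime_delta_ns / (seconds * vcpus * 1.0e9) * 100, 100)
```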
2110 | === modified file 'nova/crypto.py' |
2111 | --- nova/crypto.py 2010-12-23 18:31:46 +0000 |
2112 | +++ nova/crypto.py 2011-01-07 19:51:33 +0000 |
2113 | @@ -24,7 +24,6 @@ |
2114 | import base64 |
2115 | import gettext |
2116 | import hashlib |
2117 | -import logging |
2118 | import os |
2119 | import shutil |
2120 | import struct |
2121 | @@ -39,8 +38,10 @@ |
2122 | from nova import context |
2123 | from nova import db |
2124 | from nova import flags |
2125 | - |
2126 | - |
2127 | +from nova import log as logging |
2128 | + |
2129 | + |
2130 | +LOG = logging.getLogger("nova.crypto") |
2131 | FLAGS = flags.FLAGS |
2132 | flags.DEFINE_string('ca_file', 'cacert.pem', _('Filename of root CA')) |
2133 | flags.DEFINE_string('key_file', |
2134 | @@ -254,7 +255,7 @@ |
2135 | csrfile = open(inbound, "w") |
2136 | csrfile.write(csr_text) |
2137 | csrfile.close() |
2138 | - logging.debug(_("Flags path: %s") % ca_folder) |
2139 | + LOG.debug(_("Flags path: %s"), ca_folder) |
2140 | start = os.getcwd() |
2141 | # Change working dir to CA |
2142 | os.chdir(ca_folder) |
2143 | |
2144 | === modified file 'nova/db/sqlalchemy/__init__.py' |
2145 | --- nova/db/sqlalchemy/__init__.py 2011-01-07 18:11:04 +0000 |
2146 | +++ nova/db/sqlalchemy/__init__.py 2011-01-07 19:51:33 +0000 |
2147 | @@ -19,16 +19,17 @@ |
2148 | """ |
2149 | SQLAlchemy database backend |
2150 | """ |
2151 | -import logging |
2152 | import time |
2153 | |
2154 | from sqlalchemy.exc import OperationalError |
2155 | |
2156 | from nova import flags |
2157 | +from nova import log as logging |
2158 | from nova.db.sqlalchemy import models |
2159 | |
2160 | |
2161 | FLAGS = flags.FLAGS |
2162 | +LOG = logging.getLogger('nova.db.sqlalchemy') |
2163 | |
2164 | |
2165 | for i in xrange(FLAGS.sql_max_retries): |
2166 | @@ -39,6 +40,6 @@ |
2167 | models.register_models() |
2168 | break |
2169 | except OperationalError: |
2170 | - logging.exception(_("Data store %s is unreachable." |
2171 | - " Trying again in %d seconds.") % |
2172 | - (FLAGS.sql_connection, FLAGS.sql_retry_interval)) |
2173 | + LOG.exception(_("Data store %s is unreachable." |
2174 | + " Trying again in %d seconds."), |
2175 | + FLAGS.sql_connection, FLAGS.sql_retry_interval) |
2176 | |
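The sqlalchemy hunk keeps the existing connect-with-retry loop and only swaps its logger. The loop's shape, sketched with hypothetical names (`connect_with_retry` is not a Nova function; the real loop catches SQLAlchemy's `OperationalError` and reads its bounds from flags):

```python
import time


def connect_with_retry(connect, retries=3, interval=0):
    """Call a flaky connect callable up to `retries` times, sleeping
    `interval` seconds before each retry, as the register_models loop
    in nova/db/sqlalchemy/__init__.py does."""
    for i in range(retries):
        if i > 0:
            time.sleep(interval)
        try:
            return connect()
        except OSError:  # stand-in for sqlalchemy.exc.OperationalError
            continue
    raise RuntimeError('data store unreachable after %d attempts' % retries)
```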
2177 | === modified file 'nova/exception.py' |
2178 | --- nova/exception.py 2010-12-22 20:59:53 +0000 |
2179 | +++ nova/exception.py 2011-01-07 19:51:33 +0000 |
2180 | @@ -21,9 +21,8 @@ |
2181 | Nova-type exceptions. SHOULD include dedicated exception logging. |
2182 | """ |
2183 | |
2184 | -import logging |
2185 | -import sys |
2186 | -import traceback |
2187 | +from nova import log as logging |
2188 | +LOG = logging.getLogger('nova.exception') |
2189 | |
2190 | |
2191 | class ProcessExecutionError(IOError): |
2192 | @@ -84,7 +83,7 @@ |
2193 | except Exception, e: |
2194 | if not isinstance(e, Error): |
2195 | #exc_type, exc_value, exc_traceback = sys.exc_info() |
2196 | - logging.exception(_('Uncaught exception')) |
2197 | + LOG.exception(_('Uncaught exception')) |
2198 | #logging.error(traceback.extract_stack(exc_traceback)) |
2199 | raise Error(str(e)) |
2200 | raise |
2201 | |
2202 | === modified file 'nova/fakerabbit.py' |
2203 | --- nova/fakerabbit.py 2010-12-22 21:55:11 +0000 |
2204 | +++ nova/fakerabbit.py 2011-01-07 19:51:33 +0000 |
2205 | @@ -18,12 +18,16 @@ |
2206 | |
2207 | """Based a bit on the carrot.backeds.queue backend... but a lot better.""" |
2208 | |
2209 | -import logging |
2210 | import Queue as queue |
2211 | |
2212 | from carrot.backends import base |
2213 | from eventlet import greenthread |
2214 | |
2215 | +from nova import log as logging |
2216 | + |
2217 | + |
2218 | +LOG = logging.getLogger("nova.fakerabbit") |
2219 | + |
2220 | |
2221 | EXCHANGES = {} |
2222 | QUEUES = {} |
2223 | @@ -41,12 +45,12 @@ |
2224 | self._routes = {} |
2225 | |
2226 | def publish(self, message, routing_key=None): |
2227 | - logging.debug(_('(%s) publish (key: %s) %s'), |
2228 | - self.name, routing_key, message) |
2229 | + LOG.debug(_('(%s) publish (key: %s) %s'), |
2230 | + self.name, routing_key, message) |
2231 | routing_key = routing_key.split('.')[0] |
2232 | if routing_key in self._routes: |
2233 | for f in self._routes[routing_key]: |
2234 | - logging.debug(_('Publishing to route %s'), f) |
2235 | + LOG.debug(_('Publishing to route %s'), f) |
2236 | f(message, routing_key=routing_key) |
2237 | |
2238 | def bind(self, callback, routing_key): |
2239 | @@ -76,19 +80,19 @@ |
2240 | def queue_declare(self, queue, **kwargs): |
2241 | global QUEUES |
2242 | if queue not in QUEUES: |
2243 | - logging.debug(_('Declaring queue %s'), queue) |
2244 | + LOG.debug(_('Declaring queue %s'), queue) |
2245 | QUEUES[queue] = Queue(queue) |
2246 | |
2247 | def exchange_declare(self, exchange, type, *args, **kwargs): |
2248 | global EXCHANGES |
2249 | if exchange not in EXCHANGES: |
2250 | - logging.debug(_('Declaring exchange %s'), exchange) |
2251 | + LOG.debug(_('Declaring exchange %s'), exchange) |
2252 | EXCHANGES[exchange] = Exchange(exchange, type) |
2253 | |
2254 | def queue_bind(self, queue, exchange, routing_key, **kwargs): |
2255 | global EXCHANGES |
2256 | global QUEUES |
2257 | - logging.debug(_('Binding %s to %s with key %s'), |
2258 | + LOG.debug(_('Binding %s to %s with key %s'), |
2259 | queue, exchange, routing_key) |
2260 | EXCHANGES[exchange].bind(QUEUES[queue].push, routing_key) |
2261 | |
2262 | @@ -113,7 +117,7 @@ |
2263 | content_type=content_type, |
2264 | content_encoding=content_encoding) |
2265 | message.result = True |
2266 | - logging.debug(_('Getting from %s: %s'), queue, message) |
2267 | + LOG.debug(_('Getting from %s: %s'), queue, message) |
2268 | return message |
2269 | |
2270 | def prepare_message(self, message_data, delivery_mode, |
2271 | |
2272 | === modified file 'nova/flags.py' |
2273 | --- nova/flags.py 2011-01-03 22:08:52 +0000 |
2274 | +++ nova/flags.py 2011-01-07 19:51:33 +0000 |
2275 | @@ -29,8 +29,6 @@ |
2276 | |
2277 | import gflags |
2278 | |
2279 | -from nova import utils |
2280 | - |
2281 | |
2282 | class FlagValues(gflags.FlagValues): |
2283 | """Extension of gflags.FlagValues that allows undefined and runtime flags. |
2284 | @@ -213,10 +211,10 @@ |
2285 | DEFINE_string('aws_access_key_id', 'admin', 'AWS Access ID') |
2286 | DEFINE_string('aws_secret_access_key', 'admin', 'AWS Access Key') |
2287 | DEFINE_integer('glance_port', 9292, 'glance port') |
2288 | -DEFINE_string('glance_host', utils.get_my_ip(), 'glance host') |
2289 | +DEFINE_string('glance_host', '127.0.0.1', 'glance host') |
2290 | DEFINE_integer('s3_port', 3333, 's3 port') |
2291 | -DEFINE_string('s3_host', utils.get_my_ip(), 's3 host (for infrastructure)') |
2292 | -DEFINE_string('s3_dmz', utils.get_my_ip(), 's3 dmz ip (for instances)') |
2293 | +DEFINE_string('s3_host', '127.0.0.1', 's3 host (for infrastructure)') |
2294 | +DEFINE_string('s3_dmz', '127.0.0.1', 's3 dmz ip (for instances)') |
2295 | DEFINE_string('compute_topic', 'compute', 'the topic compute nodes listen on') |
2296 | DEFINE_string('scheduler_topic', 'scheduler', |
2297 | 'the topic scheduler nodes listen on') |
2298 | @@ -236,8 +234,8 @@ |
2299 | DEFINE_integer('rabbit_max_retries', 12, 'rabbit connection attempts') |
2300 | DEFINE_string('control_exchange', 'nova', 'the main exchange to connect to') |
2301 | DEFINE_string('ec2_prefix', 'http', 'prefix for ec2') |
2302 | -DEFINE_string('cc_host', utils.get_my_ip(), 'ip of api server') |
2303 | -DEFINE_string('cc_dmz', utils.get_my_ip(), 'internal ip of api server') |
2304 | +DEFINE_string('cc_host', '127.0.0.1', 'ip of api server') |
2305 | +DEFINE_string('cc_dmz', '127.0.0.1', 'internal ip of api server') |
2306 | DEFINE_integer('cc_port', 8773, 'cloud controller port') |
2307 | DEFINE_string('ec2_suffix', '/services/Cloud', 'suffix for ec2') |
2308 | |
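The flag changes above replace `utils.get_my_ip()` defaults with `'127.0.0.1'`, which avoids doing network lookups at module import time (flag defaults are evaluated when `nova/flags.py` loads). A minimal sketch of the alternative pattern, deferring an expensive default until first read; the names here are illustrative, not Nova's actual flag machinery:

```python
# Sketch: defer an expensive default (e.g. detecting the host IP) until
# the value is actually read, instead of computing it at import time.
class LazyDefault(object):
    def __init__(self, factory):
        self._factory = factory   # called at most once, on first access
        self._value = None
        self._resolved = False

    def get(self):
        if not self._resolved:
            self._value = self._factory()
            self._resolved = True
        return self._value

calls = []

def fake_get_my_ip():
    calls.append(1)               # record that the expensive lookup ran
    return '10.0.0.1'

s3_host = LazyDefault(fake_get_my_ip)
assert calls == []                # nothing looked up at "import" time
first = s3_host.get()             # first read triggers the lookup
second = s3_host.get()            # later reads reuse the cached value
```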
2309 | |
2310 | === modified file 'nova/image/glance.py' |
2311 | --- nova/image/glance.py 2010-12-31 03:55:00 +0000 |
2312 | +++ nova/image/glance.py 2011-01-07 19:51:33 +0000 |
2313 | @@ -19,19 +19,17 @@ |
2314 | |
2315 | import httplib |
2316 | import json |
2317 | -import logging |
2318 | import urlparse |
2319 | |
2320 | -import webob.exc |
2321 | - |
2322 | -from nova import utils |
2323 | +from nova import exception |
2324 | from nova import flags |
2325 | -from nova import exception |
2326 | -import nova.image.service |
2327 | +from nova import log as logging |
2328 | +from nova.image import service |
2329 | + |
2330 | + |
2331 | +LOG = logging.getLogger('nova.image.glance') |
2332 | |
2333 | FLAGS = flags.FLAGS |
2334 | - |
2335 | - |
2336 | flags.DEFINE_string('glance_teller_address', 'http://127.0.0.1', |
2337 | 'IP address or URL where Glance\'s Teller service resides') |
2338 | flags.DEFINE_string('glance_teller_port', '9191', |
2339 | @@ -77,8 +75,8 @@ |
2340 | data = json.loads(res.read())['images'] |
2341 | return data |
2342 | else: |
2343 | - logging.warn(_("Parallax returned HTTP error %d from " |
2344 | - "request for /images"), res.status_int) |
2345 | + LOG.warn(_("Parallax returned HTTP error %d from " |
2346 | + "request for /images"), res.status_int) |
2347 | return [] |
2348 | finally: |
2349 | c.close() |
2350 | @@ -96,8 +94,8 @@ |
2351 | data = json.loads(res.read())['images'] |
2352 | return data |
2353 | else: |
2354 | - logging.warn(_("Parallax returned HTTP error %d from " |
2355 | - "request for /images/detail"), res.status_int) |
2356 | + LOG.warn(_("Parallax returned HTTP error %d from " |
2357 | + "request for /images/detail"), res.status_int) |
2358 | return [] |
2359 | finally: |
2360 | c.close() |
2361 | @@ -165,7 +163,7 @@ |
2362 | c.close() |
2363 | |
2364 | |
2365 | -class GlanceImageService(nova.image.service.BaseImageService): |
2366 | +class GlanceImageService(service.BaseImageService): |
2367 | """Provides storage and retrieval of disk image objects within Glance.""" |
2368 | |
2369 | def __init__(self): |
2370 | |
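The `LOG = logging.getLogger('nova.image.glance')` pattern used in glance.py (and throughout this branch) names each module logger after its package path, so a level or handler set on a parent logger applies to everything beneath it. The inheritance behavior with the stdlib alone:

```python
import logging

# Dotted logger names form a hierarchy: configuration on 'nova'
# propagates down to 'nova.image.glance' unless overridden.
parent = logging.getLogger('nova')
child = logging.getLogger('nova.image.glance')

parent.setLevel(logging.WARNING)
# The child has no level of its own, so it inherits the effective level.
effective = child.getEffectiveLevel()
```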
2371 | === added file 'nova/log.py' |
2372 | --- nova/log.py 1970-01-01 00:00:00 +0000 |
2373 | +++ nova/log.py 2011-01-07 19:51:33 +0000 |
2374 | @@ -0,0 +1,260 @@ |
2375 | +# vim: tabstop=4 shiftwidth=4 softtabstop=4 |
2376 | + |
2377 | +# Copyright 2010 United States Government as represented by the |
2378 | +# Administrator of the National Aeronautics and Space Administration. |
2379 | +# All Rights Reserved. |
2380 | +# |
2381 | +# Licensed under the Apache License, Version 2.0 (the "License"); you may |
2382 | +# not use this file except in compliance with the License. You may obtain |
2383 | +# a copy of the License at |
2384 | +# |
2385 | +# http://www.apache.org/licenses/LICENSE-2.0 |
2386 | +# |
2387 | +# Unless required by applicable law or agreed to in writing, software |
2388 | +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT |
2389 | +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the |
2390 | +# License for the specific language governing permissions and limitations |
2391 | +# under the License. |
2392 | + |
2393 | +""" |
2394 | +Nova logging handler. |
2395 | + |
2396 | +This module adds to logging functionality by adding the option to specify |
2397 | +a context object when calling the various log methods. If the context object |
2398 | +is not specified, default formatting is used. |
2399 | + |
2400 | +It also allows setting of formatting information through flags. |
2401 | +""" |
2402 | + |
2403 | + |
2404 | +import cStringIO |
2405 | +import json |
2406 | +import logging |
2407 | +import logging.handlers |
2408 | +import traceback |
2409 | + |
2410 | +from nova import flags |
2411 | +# TODO(todd): fix after version.py merge |
2412 | +# from nova import version |
2413 | + |
2414 | + |
2415 | +FLAGS = flags.FLAGS |
2416 | + |
2417 | +# TODO(todd): fix after version.py merge |
2418 | +# '(%(name)s %(nova_version)s): %(levelname)s ' |
2419 | +flags.DEFINE_string('logging_context_format_string', |
2420 | + '(%(name)s): %(levelname)s ' |
2421 | + '[%(request_id)s %(user)s ' |
2422 | + '%(project)s] %(message)s', |
2423 | + 'format string to use for log messages') |
2424 | + |
2425 | +# TODO(todd): fix after version.py merge |
2426 | +# '(%(name)s %(nova_version)s): %(levelname)s [N/A] ' |
2427 | +flags.DEFINE_string('logging_default_format_string', |
2428 | + '(%(name)s): %(levelname)s [N/A] ' |
2429 | + '%(message)s', |
2430 | + 'format string to use for log messages') |
2431 | + |
2432 | +flags.DEFINE_string('logging_debug_format_suffix', |
2433 | + 'from %(processName)s (pid=%(process)d) %(funcName)s' |
2434 | + ' %(pathname)s:%(lineno)d', |
2435 | + 'data to append to log format when level is DEBUG') |
2436 | + |
2437 | +flags.DEFINE_string('logging_exception_prefix', |
2438 | + '(%(name)s): TRACE: ', |
2439 | + 'prefix each line of exception output with this format') |
2440 | + |
2441 | +flags.DEFINE_list('default_log_levels', |
2442 | + ['amqplib=WARN', |
2443 | + 'sqlalchemy=WARN', |
2444 | + 'eventlet.wsgi.server=WARN'], |
2445 | + 'list of logger=LEVEL pairs') |
2446 | + |
2447 | +flags.DEFINE_bool('use_syslog', False, 'output to syslog') |
2448 | +flags.DEFINE_string('logfile', None, 'output to named file') |
2449 | + |
2450 | + |
2451 | +# A list of things we want to replicate from logging. |
2452 | +# levels |
2453 | +CRITICAL = logging.CRITICAL |
2454 | +FATAL = logging.FATAL |
2455 | +ERROR = logging.ERROR |
2456 | +WARNING = logging.WARNING |
2457 | +WARN = logging.WARN |
2458 | +INFO = logging.INFO |
2459 | +DEBUG = logging.DEBUG |
2460 | +NOTSET = logging.NOTSET |
2461 | +# methods |
2462 | +getLogger = logging.getLogger |
2463 | +debug = logging.debug |
2464 | +info = logging.info |
2465 | +warning = logging.warning |
2466 | +warn = logging.warn |
2467 | +error = logging.error |
2468 | +exception = logging.exception |
2469 | +critical = logging.critical |
2470 | +log = logging.log |
2471 | +# handlers |
2472 | +StreamHandler = logging.StreamHandler |
2473 | +FileHandler = logging.FileHandler |
2474 | +# logging.SysLogHandler is nicer than logging.logging.handler.SysLogHandler. |
2475 | +SysLogHandler = logging.handlers.SysLogHandler |
2476 | + |
2477 | + |
2478 | +# our new audit level |
2479 | +AUDIT = logging.INFO + 1 |
2480 | +logging.addLevelName(AUDIT, 'AUDIT') |
2481 | + |
2482 | + |
2483 | +def _dictify_context(context): |
2484 | + if context == None: |
2485 | + return None |
2486 | + if not isinstance(context, dict) \ |
2487 | + and getattr(context, 'to_dict', None): |
2488 | + context = context.to_dict() |
2489 | + return context |
2490 | + |
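`_dictify_context` duck-types on a `to_dict` method so that both plain dicts and `RequestContext` objects can feed the formatter's `extra` fields. The same logic in isolation, with a hypothetical stand-in for the real context class:

```python
# Sketch of _dictify_context: accept a dict as-is, convert anything
# exposing to_dict(), and pass None through unchanged.
def dictify_context(context):
    if context is None:
        return None
    if not isinstance(context, dict) and getattr(context, 'to_dict', None):
        context = context.to_dict()
    return context

class FakeContext(object):
    """Stand-in for nova.context.RequestContext (illustrative only)."""
    def to_dict(self):
        return {'request_id': 'req-1', 'user': 'alice', 'project': 'p1'}

as_dict = dictify_context(FakeContext())
passthrough = dictify_context({'user': 'bob'})
```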
2491 | + |
2492 | +def basicConfig(): |
2493 | + logging.basicConfig() |
2494 | + for handler in logging.root.handlers: |
2495 | + handler.setFormatter(_formatter) |
2496 | + if FLAGS.verbose: |
2497 | + logging.root.setLevel(logging.DEBUG) |
2498 | + if FLAGS.use_syslog: |
2499 | + syslog = SysLogHandler(address='/dev/log') |
2500 | + syslog.setFormatter(_formatter) |
2501 | + logging.root.addHandler(syslog) |
2502 | + if FLAGS.logfile: |
2503 | + logfile = FileHandler(FLAGS.logfile) |
2504 | + logfile.setFormatter(_formatter) |
2505 | + logging.root.addHandler(logfile) |
2506 | + |
2507 | + |
2508 | +class NovaLogger(logging.Logger): |
2509 | + """ |
2510 | + NovaLogger manages request context and formatting. |
2511 | + |
2512 | + This becomes the class that is instantiated by logging.getLogger. |
2513 | + """ |
2514 | + def __init__(self, name, level=NOTSET): |
2515 | + level_name = self._get_level_from_flags(name, FLAGS) |
2516 | + level = globals()[level_name] |
2517 | + logging.Logger.__init__(self, name, level) |
2518 | + |
2519 | + def _get_level_from_flags(self, name, FLAGS): |
2520 | + # if exactly "nova", or a child logger, honor the verbose flag |
2521 | + if (name == "nova" or name.startswith("nova.")) and FLAGS.verbose: |
2522 | + return 'DEBUG' |
2523 | + for pair in FLAGS.default_log_levels: |
2524 | + logger, _sep, level = pair.partition('=') |
2525 | + # NOTE(todd): if we set a.b, we want a.b.c to have the same level |
2526 | + # (but not a.bc, so we check the dot) |
2527 | + if name == logger: |
2528 | + return level |
2529 | + if name.startswith(logger) and name[len(logger)] == '.': |
2530 | + return level |
2531 | + return 'INFO' |
2532 | + |
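The dot check in `_get_level_from_flags` is what keeps a `logger=LEVEL` entry for `a.b` from accidentally matching `a.bc`. The matching rule on its own, against the flag's default pairs:

```python
# Sketch of the prefix matching behind the default_log_levels flag:
# 'sqlalchemy=WARN' should apply to 'sqlalchemy' and 'sqlalchemy.engine',
# but not to a logger merely named with the same leading characters.
def level_for(name, pairs, fallback='INFO'):
    for pair in pairs:
        logger, _sep, level = pair.partition('=')
        if name == logger:
            return level
        # name is longer than logger here, so the index is safe
        if name.startswith(logger) and name[len(logger)] == '.':
            return level
    return fallback

pairs = ['amqplib=WARN', 'sqlalchemy=WARN', 'eventlet.wsgi.server=WARN']
```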
2533 | + def _log(self, level, msg, args, exc_info=None, extra=None, context=None): |
2534 | + """Extract context from any log call""" |
2535 | + if not extra: |
2536 | + extra = {} |
2537 | + if context: |
2538 | + extra.update(_dictify_context(context)) |
2539 | + # TODO(todd): fix after version.py merge |
2540 | + #extra.update({"nova_version": version.string_with_vcs()}) |
2541 | + logging.Logger._log(self, level, msg, args, exc_info, extra) |
2542 | + |
2543 | + def addHandler(self, handler): |
2544 | + """Each handler gets our custom formatter""" |
2545 | + handler.setFormatter(_formatter) |
2546 | + logging.Logger.addHandler(self, handler) |
2547 | + |
2548 | + def audit(self, msg, *args, **kwargs): |
2549 | + """Shortcut for our AUDIT level""" |
2550 | + if self.isEnabledFor(AUDIT): |
2551 | + self._log(AUDIT, msg, args, **kwargs) |
2552 | + |
2553 | + def exception(self, msg, *args, **kwargs): |
2554 | + """Logging.exception doesn't handle kwargs, so breaks context""" |
2555 | + if not kwargs.get('exc_info'): |
2556 | + kwargs['exc_info'] = 1 |
2557 | + self.error(msg, *args, **kwargs) |
2558 | + # NOTE(todd): does this really go here, or in _log ? |
2559 | + extra = kwargs.get('extra') |
2560 | + if not extra: |
2561 | + return |
2562 | + env = extra.get('environment') |
2563 | + if env: |
2564 | + env = env.copy() |
2565 | + for k in env.keys(): |
2566 | + if not isinstance(env[k], str): |
2567 | + env.pop(k) |
2568 | + message = "Environment: %s" % json.dumps(env) |
2569 | + kwargs.pop('exc_info') |
2570 | + self.error(message, **kwargs) |
2571 | + |
2572 | +logging.setLoggerClass(NovaLogger) |
2573 | + |
2574 | + |
2575 | +class NovaRootLogger(NovaLogger): |
2576 | + pass |
2577 | + |
2578 | +if not isinstance(logging.root, NovaRootLogger): |
2579 | + logging.root = NovaRootLogger("nova.root", WARNING) |
2580 | + NovaLogger.root = logging.root |
2581 | + NovaLogger.manager.root = logging.root |
2582 | + |
2583 | + |
2584 | +class NovaFormatter(logging.Formatter): |
2585 | + """ |
2586 | + A nova.context.RequestContext aware formatter configured through flags. |
2587 | + |
2588 | + The flags used to set format strings are: logging_context_format_string |
2589 | + and logging_default_format_string. You can also specify |
2590 | + logging_debug_format_suffix to append extra formatting if the log level is |
2591 | + debug. |
2592 | + |
2593 | + For information about what variables are available for the formatter see: |
2594 | + http://docs.python.org/library/logging.html#formatter |
2595 | + """ |
2596 | + |
2597 | + def format(self, record): |
2598 | + """Uses contextstring if request_id is set, otherwise default""" |
2599 | + if record.__dict__.get('request_id', None): |
2600 | + self._fmt = FLAGS.logging_context_format_string |
2601 | + else: |
2602 | + self._fmt = FLAGS.logging_default_format_string |
2603 | + if record.levelno == logging.DEBUG \ |
2604 | + and FLAGS.logging_debug_format_suffix: |
2605 | + self._fmt += " " + FLAGS.logging_debug_format_suffix |
2606 | + # Cache this on the record, Logger will respect our formatted copy |
2607 | + if record.exc_info: |
2608 | + record.exc_text = self.formatException(record.exc_info, record) |
2609 | + return logging.Formatter.format(self, record) |
2610 | + |
2611 | + def formatException(self, exc_info, record=None): |
2612 | + """Format exception output with FLAGS.logging_exception_prefix""" |
2613 | + if not record: |
2614 | + return logging.Formatter.formatException(self, exc_info) |
2615 | + stringbuffer = cStringIO.StringIO() |
2616 | + traceback.print_exception(exc_info[0], exc_info[1], exc_info[2], |
2617 | + None, stringbuffer) |
2618 | + lines = stringbuffer.getvalue().split("\n") |
2619 | + stringbuffer.close() |
2620 | + formatted_lines = [] |
2621 | + for line in lines: |
2622 | + pl = FLAGS.logging_exception_prefix % record.__dict__ |
2623 | + fl = "%s%s" % (pl, line) |
2624 | + formatted_lines.append(fl) |
2625 | + return "\n".join(formatted_lines) |
2626 | + |
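`formatException` expands `logging_exception_prefix` against the record and prepends it to every traceback line, so multi-line traces stay attributable to their logger when logs are interleaved. The line-prefixing step alone, using `io.StringIO` here for a self-contained modern sketch (the module above uses `cStringIO` on Python 2):

```python
import io
import sys
import traceback

# Sketch: render an exception and prefix every line, the way
# NovaFormatter.formatException tags traces with '(%(name)s): TRACE: '.
def prefix_traceback(exc_info, prefix):
    buf = io.StringIO()
    traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
                              None, buf)
    lines = buf.getvalue().split('\n')
    buf.close()
    return '\n'.join('%s%s' % (prefix, line) for line in lines)

try:
    raise ValueError('boom')
except ValueError:
    trace = prefix_traceback(sys.exc_info(), '(nova.test): TRACE: ')
```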
2627 | +_formatter = NovaFormatter() |
2628 | + |
2629 | + |
2630 | +def audit(msg, *args, **kwargs): |
2631 | + """Shortcut for logging to root log with sevrity 'AUDIT'.""" |
2632 | + if len(logging.root.handlers) == 0: |
2633 | + basicConfig() |
2634 | + logging.root.log(AUDIT, msg, *args, **kwargs) |
2635 | |
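The new AUDIT level sits one notch above INFO, so audit records survive an INFO threshold while remaining below WARNING. A self-contained sketch of registering and using such a level (the handler and logger names are illustrative):

```python
import logging

# One notch above INFO: passes an INFO threshold, sorts below WARNING.
AUDIT = logging.INFO + 1
logging.addLevelName(AUDIT, 'AUDIT')

records = []

class Capture(logging.Handler):
    """Collects emitted records so we can inspect them."""
    def emit(self, record):
        records.append((record.levelname, record.getMessage()))

log = logging.getLogger('audit-demo')
log.addHandler(Capture())
log.setLevel(logging.INFO)
log.log(AUDIT, 'user %s deleted image %s', 'alice', 'ami-1')
```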
2636 | === modified file 'nova/network/api.py' |
2637 | --- nova/network/api.py 2011-01-04 21:56:36 +0000 |
2638 | +++ nova/network/api.py 2011-01-07 19:51:33 +0000 |
2639 | @@ -20,15 +20,15 @@ |
2640 | Handles all requests relating to instances (guest vms). |
2641 | """ |
2642 | |
2643 | -import logging |
2644 | - |
2645 | from nova import db |
2646 | from nova import flags |
2647 | +from nova import log as logging |
2648 | from nova import quota |
2649 | from nova import rpc |
2650 | from nova.db import base |
2651 | |
2652 | FLAGS = flags.FLAGS |
2653 | +LOG = logging.getLogger('nova.network') |
2654 | |
2655 | |
2656 | class API(base.Base): |
2657 | @@ -36,7 +36,7 @@ |
2658 | |
2659 | def allocate_floating_ip(self, context): |
2660 | if quota.allowed_floating_ips(context, 1) < 1: |
2661 | - logging.warn(_("Quota exceeeded for %s, tried to allocate " |
2662 | + LOG.warn(_("Quota exceeded for %s, tried to allocate " |
2663 | "address"), |
2664 | context.project_id) |
2665 | raise quota.QuotaError(_("Address quota exceeded. You cannot " |
2666 | |
2667 | === modified file 'nova/network/linux_net.py' |
2668 | --- nova/network/linux_net.py 2010-12-22 21:38:44 +0000 |
2669 | +++ nova/network/linux_net.py 2011-01-07 19:51:33 +0000 |
2670 | @@ -17,16 +17,17 @@ |
2671 | Implements vlans, bridges, and iptables rules using linux utilities. |
2672 | """ |
2673 | |
2674 | -import logging |
2675 | import os |
2676 | |
2677 | -# TODO(ja): does the definition of network_path belong here? |
2678 | - |
2679 | from nova import db |
2680 | from nova import flags |
2681 | +from nova import log as logging |
2682 | from nova import utils |
2683 | |
2684 | |
2685 | +LOG = logging.getLogger("nova.linux_net") |
2686 | + |
2687 | + |
2688 | def _bin_file(script): |
2689 | """Return the absolute path to scipt in the bin directory""" |
2690 | return os.path.abspath(os.path.join(__file__, "../../../bin", script)) |
2691 | @@ -172,7 +173,7 @@ |
2692 | """Create a vlan unless it already exists""" |
2693 | interface = "vlan%s" % vlan_num |
2694 | if not _device_exists(interface): |
2695 | - logging.debug(_("Starting VLAN inteface %s"), interface) |
2696 | + LOG.debug(_("Starting VLAN interface %s"), interface) |
2697 | _execute("sudo vconfig set_name_type VLAN_PLUS_VID_NO_PAD") |
2698 | _execute("sudo vconfig add %s %s" % (FLAGS.vlan_interface, vlan_num)) |
2699 | _execute("sudo ifconfig %s up" % interface) |
2700 | @@ -182,7 +183,7 @@ |
2701 | def ensure_bridge(bridge, interface, net_attrs=None): |
2702 | """Create a bridge unless it already exists""" |
2703 | if not _device_exists(bridge): |
2704 | - logging.debug(_("Starting Bridge interface for %s"), interface) |
2705 | + LOG.debug(_("Starting Bridge interface for %s"), interface) |
2706 | _execute("sudo brctl addbr %s" % bridge) |
2707 | _execute("sudo brctl setfd %s 0" % bridge) |
2708 | # _execute("sudo brctl setageing %s 10" % bridge) |
2709 | @@ -248,9 +249,9 @@ |
2710 | _execute('sudo kill -HUP %d' % pid) |
2711 | return |
2712 | except Exception as exc: # pylint: disable-msg=W0703 |
2713 | - logging.debug(_("Hupping dnsmasq threw %s"), exc) |
2714 | + LOG.debug(_("Hupping dnsmasq threw %s"), exc) |
2715 | else: |
2716 | - logging.debug(_("Pid %d is stale, relaunching dnsmasq"), pid) |
2717 | + LOG.debug(_("Pid %d is stale, relaunching dnsmasq"), pid) |
2718 | |
2719 | # FLAGFILE and DNSMASQ_INTERFACE in env |
2720 | env = {'FLAGFILE': FLAGS.dhcpbridge_flagfile, |
2721 | @@ -270,7 +271,7 @@ |
2722 | def _execute(cmd, *args, **kwargs): |
2723 | """Wrapper around utils._execute for fake_network""" |
2724 | if FLAGS.fake_network: |
2725 | - logging.debug("FAKE NET: %s", cmd) |
2726 | + LOG.debug("FAKE NET: %s", cmd) |
2727 | return "fake", 0 |
2728 | else: |
2729 | return utils.execute(cmd, *args, **kwargs) |
2730 | @@ -328,7 +329,7 @@ |
2731 | try: |
2732 | _execute('sudo kill -TERM %d' % pid) |
2733 | except Exception as exc: # pylint: disable-msg=W0703 |
2734 | - logging.debug(_("Killing dnsmasq threw %s"), exc) |
2735 | + LOG.debug(_("Killing dnsmasq threw %s"), exc) |
2736 | |
2737 | |
2738 | def _dhcp_file(bridge, kind): |
2739 | |
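linux_net funnels every shell command through its `_execute` wrapper, so the `--fake_network` flag can log commands instead of running them. A standalone sketch of that dry-run switch; `run_real` stands in for `utils.execute` and the flag is modeled as a plain argument:

```python
# Sketch of linux_net's _execute wrapper: when the fake/dry-run flag is
# set, commands are recorded (Nova logs them via LOG.debug("FAKE NET: %s"))
# and a canned result is returned instead of shelling out.
logged = []

def execute(cmd, fake=False, run_real=None):
    if fake:
        logged.append(cmd)
        return 'fake', 0
    return run_real(cmd)

out, code = execute('sudo brctl addbr br100', fake=True)
```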
2740 | === modified file 'nova/network/manager.py' |
2741 | --- nova/network/manager.py 2010-12-22 21:38:44 +0000 |
2742 | +++ nova/network/manager.py 2011-01-07 19:51:33 +0000 |
2743 | @@ -45,7 +45,6 @@ |
2744 | """ |
2745 | |
2746 | import datetime |
2747 | -import logging |
2748 | import math |
2749 | import socket |
2750 | |
2751 | @@ -55,11 +54,13 @@ |
2752 | from nova import db |
2753 | from nova import exception |
2754 | from nova import flags |
2755 | +from nova import log as logging |
2756 | from nova import manager |
2757 | from nova import utils |
2758 | from nova import rpc |
2759 | |
2760 | |
2761 | +LOG = logging.getLogger("nova.network.manager") |
2762 | FLAGS = flags.FLAGS |
2763 | flags.DEFINE_string('flat_network_bridge', 'br100', |
2764 | 'Bridge for simple network instances') |
2765 | @@ -131,7 +132,7 @@ |
2766 | |
2767 | def set_network_host(self, context, network_id): |
2768 | """Safely sets the host of the network.""" |
2769 | - logging.debug(_("setting network host")) |
2770 | + LOG.debug(_("setting network host"), context=context) |
2771 | host = self.db.network_set_host(context, |
2772 | network_id, |
2773 | self.host) |
2774 | @@ -186,7 +187,7 @@ |
2775 | |
2776 | def lease_fixed_ip(self, context, mac, address): |
2777 | """Called by dhcp-bridge when ip is leased.""" |
2778 | - logging.debug("Leasing IP %s", address) |
2779 | + LOG.debug(_("Leasing IP %s"), address, context=context) |
2780 | fixed_ip_ref = self.db.fixed_ip_get_by_address(context, address) |
2781 | instance_ref = fixed_ip_ref['instance'] |
2782 | if not instance_ref: |
2783 | @@ -201,12 +202,12 @@ |
2784 | {'leased': True, |
2785 | 'updated_at': now}) |
2786 | if not fixed_ip_ref['allocated']: |
2787 | - logging.warn(_("IP %s leased that was already deallocated"), |
2788 | - address) |
2789 | + LOG.warn(_("IP %s leased that was already deallocated"), address, |
2790 | + context=context) |
2791 | |
2792 | def release_fixed_ip(self, context, mac, address): |
2793 | """Called by dhcp-bridge when ip is released.""" |
2794 | - logging.debug("Releasing IP %s", address) |
2795 | + LOG.debug("Releasing IP %s", address, context=context) |
2796 | fixed_ip_ref = self.db.fixed_ip_get_by_address(context, address) |
2797 | instance_ref = fixed_ip_ref['instance'] |
2798 | if not instance_ref: |
2799 | @@ -216,7 +217,8 @@ |
2800 | raise exception.Error(_("IP %s released from bad mac %s vs %s") % |
2801 | (address, instance_ref['mac_address'], mac)) |
2802 | if not fixed_ip_ref['leased']: |
2803 | - logging.warn(_("IP %s released that was not leased"), address) |
2804 | + LOG.warn(_("IP %s released that was not leased"), address, |
2805 | + context=context) |
2806 | self.db.fixed_ip_update(context, |
2807 | fixed_ip_ref['address'], |
2808 | {'leased': False}) |
2809 | @@ -437,7 +439,7 @@ |
2810 | self.host, |
2811 | time) |
2812 | if num: |
2813 | - logging.debug(_("Dissassociated %s stale fixed ip(s)"), num) |
2814 | + LOG.debug(_("Disassociated %s stale fixed ip(s)"), num) |
2815 | |
2816 | def init_host(self): |
2817 | """Do any initialization that needs to be run if this is a |
2818 | |
2819 | === modified file 'nova/objectstore/handler.py' |
2820 | --- nova/objectstore/handler.py 2010-12-22 20:59:53 +0000 |
2821 | +++ nova/objectstore/handler.py 2011-01-07 19:51:33 +0000 |
2822 | @@ -39,7 +39,6 @@ |
2823 | |
2824 | import datetime |
2825 | import json |
2826 | -import logging |
2827 | import multiprocessing |
2828 | import os |
2829 | import urllib |
2830 | @@ -54,12 +53,14 @@ |
2831 | from nova import context |
2832 | from nova import exception |
2833 | from nova import flags |
2834 | +from nova import log as logging |
2835 | from nova import utils |
2836 | from nova.auth import manager |
2837 | from nova.objectstore import bucket |
2838 | from nova.objectstore import image |
2839 | |
2840 | |
2841 | +LOG = logging.getLogger('nova.objectstore.handler') |
2842 | FLAGS = flags.FLAGS |
2843 | flags.DEFINE_string('s3_listen_host', '', 'Host to listen on.') |
2844 | |
2845 | @@ -132,9 +133,11 @@ |
2846 | request.uri, |
2847 | headers=request.getAllHeaders(), |
2848 | check_type='s3') |
2849 | - return context.RequestContext(user, project) |
2850 | + rv = context.RequestContext(user, project) |
2851 | + LOG.audit(_("Authenticated request"), context=rv) |
2852 | + return rv |
2853 | except exception.Error as ex: |
2854 | - logging.debug(_("Authentication Failure: %s"), ex) |
2855 | + LOG.debug(_("Authentication Failure: %s"), ex) |
2856 | raise exception.NotAuthorized() |
2857 | |
2858 | |
2859 | @@ -176,7 +179,7 @@ |
2860 | |
2861 | def render_GET(self, request): # pylint: disable-msg=R0201 |
2862 | """Renders the GET request for a list of buckets as XML""" |
2863 | - logging.debug('List of buckets requested') |
2864 | + LOG.debug(_('List of buckets requested'), context=request.context) |
2865 | buckets = [b for b in bucket.Bucket.all() \ |
2866 | if b.is_authorized(request.context)] |
2867 | |
2868 | @@ -203,7 +206,7 @@ |
2869 | |
2870 | def render_GET(self, request): |
2871 | "Returns the keys for the bucket resource""" |
2872 | - logging.debug("List keys for bucket %s", self.name) |
2873 | + LOG.debug(_("List keys for bucket %s"), self.name) |
2874 | |
2875 | try: |
2876 | bucket_object = bucket.Bucket(self.name) |
2877 | @@ -211,6 +214,8 @@ |
2878 | return error.NoResource(message="No such bucket").render(request) |
2879 | |
2880 | if not bucket_object.is_authorized(request.context): |
2881 | + LOG.audit(_("Unauthorized attempt to access bucket %s"), |
2882 | + self.name, context=request.context) |
2883 | raise exception.NotAuthorized() |
2884 | |
2885 | prefix = get_argument(request, "prefix", u"") |
2886 | @@ -227,8 +232,8 @@ |
2887 | |
2888 | def render_PUT(self, request): |
2889 | "Creates the bucket resource""" |
2890 | - logging.debug(_("Creating bucket %s"), self.name) |
2891 | - logging.debug("calling bucket.Bucket.create(%r, %r)", |
2892 | + LOG.debug(_("Creating bucket %s"), self.name) |
2893 | + LOG.debug("calling bucket.Bucket.create(%r, %r)", |
2894 | self.name, |
2895 | request.context) |
2896 | bucket.Bucket.create(self.name, request.context) |
2897 | @@ -237,10 +242,12 @@ |
2898 | |
2899 | def render_DELETE(self, request): |
2900 | """Deletes the bucket resource""" |
2901 | - logging.debug(_("Deleting bucket %s"), self.name) |
2902 | + LOG.debug(_("Deleting bucket %s"), self.name) |
2903 | bucket_object = bucket.Bucket(self.name) |
2904 | |
2905 | if not bucket_object.is_authorized(request.context): |
2906 | + LOG.audit(_("Unauthorized attempt to delete bucket %s"), |
2907 | + self.name, context=request.context) |
2908 | raise exception.NotAuthorized() |
2909 | |
2910 | bucket_object.delete() |
2911 | @@ -261,11 +268,12 @@ |
2912 | Raises NotAuthorized if user in request context is not |
2913 | authorized to delete the object. |
2914 | """ |
2915 | - logging.debug(_("Getting object: %s / %s"), |
2916 | - self.bucket.name, |
2917 | - self.name) |
2918 | + LOG.debug(_("Getting object: %s / %s"), self.bucket.name, self.name) |
2919 | |
2920 | if not self.bucket.is_authorized(request.context): |
2921 | + LOG.audit(_("Unauthorized attempt to get object %s from bucket " |
2922 | + "%s"), self.name, self.bucket.name, |
2923 | + context=request.context) |
2924 | raise exception.NotAuthorized() |
2925 | |
2926 | obj = self.bucket[urllib.unquote(self.name)] |
2927 | @@ -281,11 +289,12 @@ |
2928 | Raises NotAuthorized if user in request context is not |
2929 | authorized to delete the object. |
2930 | """ |
2931 | - logging.debug(_("Putting object: %s / %s"), |
2932 | - self.bucket.name, |
2933 | - self.name) |
2934 | + LOG.debug(_("Putting object: %s / %s"), self.bucket.name, self.name) |
2935 | |
2936 | if not self.bucket.is_authorized(request.context): |
2937 | + LOG.audit(_("Unauthorized attempt to upload object %s to bucket " |
2938 | + "%s"), |
2939 | + self.name, self.bucket.name, context=request.context) |
2940 | raise exception.NotAuthorized() |
2941 | |
2942 | key = urllib.unquote(self.name) |
2943 | @@ -302,11 +311,13 @@ |
2944 | authorized to delete the object. |
2945 | """ |
2946 | |
2947 | - logging.debug(_("Deleting object: %s / %s"), |
2948 | - self.bucket.name, |
2949 | - self.name) |
2950 | + LOG.debug(_("Deleting object: %s / %s"), self.bucket.name, self.name, |
2951 | + context=request.context) |
2952 | |
2953 | if not self.bucket.is_authorized(request.context): |
2954 | + LOG.audit("Unauthorized attempt to delete object %s from " |
2955 | + "bucket %s", self.name, self.bucket.name, |
2956 | + context=request.context) |
2957 | raise exception.NotAuthorized() |
2958 | |
2959 | del self.bucket[urllib.unquote(self.name)] |
2960 | @@ -379,13 +390,21 @@ |
2961 | image_path = os.path.join(FLAGS.images_path, image_id) |
2962 | if not image_path.startswith(FLAGS.images_path) or \ |
2963 | os.path.exists(image_path): |
2964 | + LOG.audit(_("Not authorized to upload image: invalid directory " |
2965 | + "%s"), |
2966 | + image_path, context=request.context) |
2967 | raise exception.NotAuthorized() |
2968 | |
2969 | bucket_object = bucket.Bucket(image_location.split("/")[0]) |
2970 | |
2971 | if not bucket_object.is_authorized(request.context): |
2972 | + LOG.audit(_("Not authorized to upload image: unauthorized " |
2973 | + "bucket %s"), bucket_object.name, |
2974 | + context=request.context) |
2975 | raise exception.NotAuthorized() |
2976 | |
2977 | + LOG.audit(_("Starting image upload: %s"), image_id, |
2978 | + context=request.context) |
2979 | p = multiprocessing.Process(target=image.Image.register_aws_image, |
2980 | args=(image_id, image_location, request.context)) |
2981 | p.start() |
2982 | @@ -398,17 +417,21 @@ |
2983 | image_id = get_argument(request, 'image_id', u'') |
2984 | image_object = image.Image(image_id) |
2985 | if not image_object.is_authorized(request.context): |
2986 | - logging.debug(_("not authorized for render_POST in images")) |
2987 | + LOG.audit(_("Not authorized to update attributes of image %s"), |
2988 | + image_id, context=request.context) |
2989 | raise exception.NotAuthorized() |
2990 | |
2991 | operation = get_argument(request, 'operation', u'') |
2992 | if operation: |
2993 | # operation implies publicity toggle |
2994 | - logging.debug(_("handling publicity toggle")) |
2995 | - image_object.set_public(operation == 'add') |
2996 | + newstatus = (operation == 'add') |
2997 | + LOG.audit(_("Toggling publicity flag of image %s %r"), image_id, |
2998 | + newstatus, context=request.context) |
2999 | + image_object.set_public(newstatus) |
3000 | else: |
3001 | # other attributes imply update |
3002 | - logging.debug(_("update user fields")) |
3003 | + LOG.audit(_("Updating user fields on image %s"), image_id, |
3004 | + context=request.context) |
3005 | clean_args = {} |
3006 | for arg in request.args.keys(): |
3007 | clean_args[arg] = request.args[arg][0] |
3008 | @@ -421,9 +444,12 @@ |
3009 | image_object = image.Image(image_id) |
3010 | |
3011 | if not image_object.is_authorized(request.context): |
3012 | + LOG.audit(_("Unauthorized attempt to delete image %s"), |
3013 | + image_id, context=request.context) |
3014 | raise exception.NotAuthorized() |
3015 | |
3016 | image_object.delete() |
3017 | + LOG.audit(_("Deleted image: %s"), image_id, context=request.context) |
3018 | |
3019 | request.setResponseCode(204) |
3020 | return '' |
3021 | |
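The objectstore changes apply one pattern consistently: record an audit entry describing who was refused and for what before raising `NotAuthorized`. A minimal standalone sketch of that audit-then-deny flow; the exception class and audit sink are stand-ins for Nova's real ones:

```python
# Sketch of the audit-then-deny pattern used throughout the objectstore
# handler: log the refusal with its context, then raise.
class NotAuthorized(Exception):
    pass

audit_log = []

def delete_object(bucket, name, context, authorized):
    if not authorized:
        # would be LOG.audit(..., context=request.context) in Nova
        audit_log.append(('deny-delete', bucket, name, context))
        raise NotAuthorized()
    return 'deleted'

denied = False
try:
    delete_object('images', 'obj1', 'user-alice', authorized=False)
except NotAuthorized:
    denied = True
```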
3022 | === modified file 'nova/rpc.py' |
3023 | --- nova/rpc.py 2011-01-07 18:15:29 +0000 |
3024 | +++ nova/rpc.py 2011-01-07 19:51:33 +0000 |
3025 | @@ -22,7 +22,6 @@ |
3026 | """ |
3027 | |
3028 | import json |
3029 | -import logging |
3030 | import sys |
3031 | import time |
3032 | import traceback |
3033 | @@ -36,13 +35,12 @@ |
3034 | from nova import exception |
3035 | from nova import fakerabbit |
3036 | from nova import flags |
3037 | +from nova import log as logging |
3038 | from nova import utils |
3039 | |
3040 | |
3041 | FLAGS = flags.FLAGS |
3042 | - |
3043 | -LOG = logging.getLogger('amqplib') |
3044 | -LOG.setLevel(logging.DEBUG) |
3045 | +LOG = logging.getLogger('nova.rpc') |
3046 | |
3047 | |
3048 | class Connection(carrot_connection.BrokerConnection): |
3049 | @@ -91,15 +89,16 @@ |
3050 | self.failed_connection = False |
3051 | break |
3052 | except: # Catching all because carrot sucks |
3053 | - logging.exception(_("AMQP server on %s:%d is unreachable." |
3054 | - " Trying again in %d seconds.") % ( |
3055 | - FLAGS.rabbit_host, |
3056 | - FLAGS.rabbit_port, |
3057 | - FLAGS.rabbit_retry_interval)) |
3058 | + LOG.exception(_("AMQP server on %s:%d is unreachable." |
3059 | + " Trying again in %d seconds.") % ( |
3060 | + FLAGS.rabbit_host, |
3061 | + FLAGS.rabbit_port, |
3062 | + FLAGS.rabbit_retry_interval)) |
3063 | self.failed_connection = True |
3064 | if self.failed_connection: |
3065 | - logging.exception(_("Unable to connect to AMQP server" |
3066 | - " after %d tries. Shutting down.") % FLAGS.rabbit_max_retries) |
3067 | + LOG.exception(_("Unable to connect to AMQP server " |
3068 | + "after %d tries. Shutting down."), |
3069 | + FLAGS.rabbit_max_retries) |
3070 | sys.exit(1) |
3071 | |
3072 | def fetch(self, no_ack=None, auto_ack=None, enable_callbacks=False): |
3073 | @@ -116,14 +115,14 @@ |
3074 | self.declare() |
3075 | super(Consumer, self).fetch(no_ack, auto_ack, enable_callbacks) |
3076 | if self.failed_connection: |
3077 | - logging.error(_("Reconnected to queue")) |
3078 | + LOG.error(_("Reconnected to queue")) |
3079 | self.failed_connection = False |
3080 | # NOTE(vish): This is catching all errors because we really don't |
3081 | # want exceptions to be logged 10 times a second if some |
3082 | # persistent failure occurs. |
3083 | except Exception: # pylint: disable-msg=W0703 |
3084 | if not self.failed_connection: |
3085 | - logging.exception(_("Failed to fetch message from queue")) |
3086 | + LOG.exception(_("Failed to fetch message from queue")) |
3087 | self.failed_connection = True |
3088 | |
3089 | def attach_to_eventlet(self): |
3090 | @@ -243,8 +242,8 @@ |
3091 | if failure: |
3092 | message = str(failure[1]) |
3093 | tb = traceback.format_exception(*failure) |
3094 | - logging.error(_("Returning exception %s to caller"), message) |
3095 | - logging.error(tb) |
3096 | + LOG.error(_("Returning exception %s to caller"), message) |
3097 | + LOG.error(tb) |
3098 | failure = (failure[0].__name__, str(failure[1]), tb) |
3099 | conn = Connection.instance(True) |
3100 | publisher = DirectPublisher(connection=conn, msg_id=msg_id) |
3101 | |
3102 | === modified file 'nova/scheduler/manager.py' |
3103 | --- nova/scheduler/manager.py 2010-12-22 20:59:53 +0000 |
3104 | +++ nova/scheduler/manager.py 2011-01-07 19:51:33 +0000 |
3105 | @@ -21,15 +21,16 @@ |
3106 | Scheduler Service |
3107 | """ |
3108 | |
3109 | -import logging |
3110 | import functools |
3111 | |
3112 | from nova import db |
3113 | from nova import flags |
3114 | +from nova import log as logging |
3115 | from nova import manager |
3116 | from nova import rpc |
3117 | from nova import utils |
3118 | |
3119 | +LOG = logging.getLogger('nova.scheduler.manager') |
3120 | FLAGS = flags.FLAGS |
3121 | flags.DEFINE_string('scheduler_driver', |
3122 | 'nova.scheduler.chance.ChanceScheduler', |
3123 | @@ -65,4 +66,4 @@ |
3124 | db.queue_get_for(context, topic, host), |
3125 | {"method": method, |
3126 | "args": kwargs}) |
3127 | - logging.debug(_("Casting to %s %s for %s"), topic, host, method) |
3128 | + LOG.debug(_("Casting to %s %s for %s"), topic, host, method) |
3129 | |
3130 | === modified file 'nova/service.py' |
3131 | --- nova/service.py 2011-01-07 18:11:04 +0000 |
3132 | +++ nova/service.py 2011-01-07 19:51:33 +0000 |
3133 | @@ -21,7 +21,6 @@ |
3134 | """ |
3135 | |
3136 | import inspect |
3137 | -import logging |
3138 | import os |
3139 | import sys |
3140 | import time |
3141 | @@ -35,6 +34,7 @@ |
3142 | from nova import context |
3143 | from nova import db |
3144 | from nova import exception |
3145 | +from nova import log as logging |
3146 | from nova import flags |
3147 | from nova import rpc |
3148 | from nova import utils |
3149 | @@ -155,7 +155,7 @@ |
3150 | report_interval = FLAGS.report_interval |
3151 | if not periodic_interval: |
3152 | periodic_interval = FLAGS.periodic_interval |
3153 | - logging.warn(_("Starting %s node"), topic) |
3154 | + logging.audit(_("Starting %s node"), topic) |
3155 | service_obj = cls(host, binary, topic, manager, |
3156 | report_interval, periodic_interval) |
3157 | |
3158 | @@ -219,20 +219,14 @@ |
3159 | |
3160 | |
3161 | def serve(*services): |
3162 | - argv = FLAGS(sys.argv) |
3163 | + FLAGS(sys.argv) |
3164 | + logging.basicConfig() |
3165 | |
3166 | if not services: |
3167 | services = [Service.create()] |
3168 | |
3169 | name = '_'.join(x.binary for x in services) |
3170 | - logging.debug("Serving %s" % name) |
3171 | - |
3172 | - logging.getLogger('amqplib').setLevel(logging.WARN) |
3173 | - |
3174 | - if FLAGS.verbose: |
3175 | - logging.getLogger().setLevel(logging.DEBUG) |
3176 | - else: |
3177 | - logging.getLogger().setLevel(logging.WARNING) |
3178 | + logging.debug(_("Serving %s"), name) |
3179 | |
3180 | logging.debug(_("Full set of FLAGS:")) |
3181 | for flag in FLAGS: |
3182 | |
3183 | === modified file 'nova/tests/api/openstack/test_images.py' |
3184 | --- nova/tests/api/openstack/test_images.py 2010-12-28 19:01:54 +0000 |
3185 | +++ nova/tests/api/openstack/test_images.py 2011-01-07 19:51:33 +0000 |
3186 | @@ -22,7 +22,6 @@ |
3187 | |
3188 | import json |
3189 | import datetime |
3190 | -import logging |
3191 | import unittest |
3192 | |
3193 | import stubout |
3194 | |
3195 | === modified file 'nova/tests/objectstore_unittest.py' |
3196 | --- nova/tests/objectstore_unittest.py 2010-12-10 00:05:13 +0000 |
3197 | +++ nova/tests/objectstore_unittest.py 2011-01-07 19:51:33 +0000 |
3198 | @@ -23,7 +23,6 @@ |
3199 | import boto |
3200 | import glob |
3201 | import hashlib |
3202 | -import logging |
3203 | import os |
3204 | import shutil |
3205 | import tempfile |
3206 | @@ -63,7 +62,6 @@ |
3207 | self.flags(buckets_path=os.path.join(OSS_TEMPDIR, 'buckets'), |
3208 | images_path=os.path.join(OSS_TEMPDIR, 'images'), |
3209 | ca_path=os.path.join(os.path.dirname(__file__), 'CA')) |
3210 | - logging.getLogger().setLevel(logging.DEBUG) |
3211 | |
3212 | self.auth_manager = manager.AuthManager() |
3213 | self.auth_manager.create_user('user1') |
3214 | |
3215 | === modified file 'nova/tests/test_access.py' |
3216 | --- nova/tests/test_access.py 2010-12-17 01:05:54 +0000 |
3217 | +++ nova/tests/test_access.py 2011-01-07 19:51:33 +0000 |
3218 | @@ -17,7 +17,6 @@ |
3219 | # under the License. |
3220 | |
3221 | import unittest |
3222 | -import logging |
3223 | import webob |
3224 | |
3225 | from nova import context |
3226 | |
3227 | === modified file 'nova/tests/test_auth.py' |
3228 | --- nova/tests/test_auth.py 2010-12-23 18:31:46 +0000 |
3229 | +++ nova/tests/test_auth.py 2011-01-07 19:51:33 +0000 |
3230 | @@ -16,17 +16,18 @@ |
3231 | # License for the specific language governing permissions and limitations |
3232 | # under the License. |
3233 | |
3234 | -import logging |
3235 | from M2Crypto import X509 |
3236 | import unittest |
3237 | |
3238 | from nova import crypto |
3239 | from nova import flags |
3240 | +from nova import log as logging |
3241 | from nova import test |
3242 | from nova.auth import manager |
3243 | from nova.api.ec2 import cloud |
3244 | |
3245 | FLAGS = flags.FLAGS |
3246 | +LOG = logging.getLogger('nova.tests.auth_unittest') |
3247 | |
3248 | |
3249 | class user_generator(object): |
3250 | @@ -211,12 +212,12 @@ |
3251 | # NOTE(vish): Setup runs genroot.sh if it hasn't been run |
3252 | cloud.CloudController().setup() |
3253 | _key, cert_str = crypto.generate_x509_cert(user.id, project.id) |
3254 | - logging.debug(cert_str) |
3255 | + LOG.debug(cert_str) |
3256 | |
3257 | full_chain = crypto.fetch_ca(project_id=project.id, chain=True) |
3258 | int_cert = crypto.fetch_ca(project_id=project.id, chain=False) |
3259 | cloud_cert = crypto.fetch_ca() |
3260 | - logging.debug("CA chain:\n\n =====\n%s\n\n=====" % full_chain) |
3261 | + LOG.debug("CA chain:\n\n =====\n%s\n\n=====", full_chain) |
3262 | signed_cert = X509.load_cert_string(cert_str) |
3263 | chain_cert = X509.load_cert_string(full_chain) |
3264 | int_cert = X509.load_cert_string(int_cert) |
3265 | @@ -331,7 +332,7 @@ |
3266 | test.TestCase.__init__(self, *args, **kwargs) |
3267 | import nova.auth.fakeldap as fakeldap |
3268 | if FLAGS.flush_db: |
3269 | - logging.info("Flushing datastore") |
3270 | + LOG.info("Flushing datastore") |
3271 | r = fakeldap.Store.instance() |
3272 | r.flushdb() |
3273 | |
3274 | |
3275 | === modified file 'nova/tests/test_cloud.py' |
3276 | --- nova/tests/test_cloud.py 2011-01-06 18:46:28 +0000 |
3277 | +++ nova/tests/test_cloud.py 2011-01-07 19:51:33 +0000 |
3278 | @@ -18,7 +18,6 @@ |
3279 | |
3280 | from base64 import b64decode |
3281 | import json |
3282 | -import logging |
3283 | from M2Crypto import BIO |
3284 | from M2Crypto import RSA |
3285 | import os |
3286 | @@ -31,6 +30,7 @@ |
3287 | from nova import crypto |
3288 | from nova import db |
3289 | from nova import flags |
3290 | +from nova import log as logging |
3291 | from nova import rpc |
3292 | from nova import service |
3293 | from nova import test |
3294 | @@ -41,6 +41,7 @@ |
3295 | |
3296 | |
3297 | FLAGS = flags.FLAGS |
3298 | +LOG = logging.getLogger('nova.tests.cloud') |
3299 | |
3300 | # Temp dirs for working with image attributes through the cloud controller |
3301 | # (stole this from objectstore_unittest.py) |
3302 | @@ -56,7 +57,6 @@ |
3303 | images_path=IMAGES_PATH) |
3304 | |
3305 | self.conn = rpc.Connection.instance() |
3306 | - logging.getLogger().setLevel(logging.DEBUG) |
3307 | |
3308 | # set up our cloud |
3309 | self.cloud = cloud.CloudController() |
3310 | @@ -179,7 +179,7 @@ |
3311 | |
3312 | def test_run_instances(self): |
3313 | if FLAGS.connection_type == 'fake': |
3314 | - logging.debug("Can't test instances without a real virtual env.") |
3315 | + LOG.debug(_("Can't test instances without a real virtual env.")) |
3316 | return |
3317 | image_id = FLAGS.default_image |
3318 | instance_type = FLAGS.default_instance_type |
3319 | @@ -191,25 +191,25 @@ |
3320 | # TODO: check for proper response |
3321 | instance_id = rv['reservationSet'][0].keys()[0] |
3322 | instance = rv['reservationSet'][0][instance_id][0] |
3323 | - logging.debug("Need to watch instance %s until it's running..." % |
3324 | - instance['instance_id']) |
3325 | + LOG.debug(_("Need to watch instance %s until it's running..."), |
3326 | + instance['instance_id']) |
3327 | while True: |
3328 | greenthread.sleep(1) |
3329 | info = self.cloud._get_instance(instance['instance_id']) |
3330 | - logging.debug(info['state']) |
3331 | + LOG.debug(info['state']) |
3332 | if info['state'] == power_state.RUNNING: |
3333 | break |
3334 | self.assert_(rv) |
3335 | |
3336 | - if connection_type != 'fake': |
3337 | + if FLAGS.connection_type != 'fake': |
3338 | time.sleep(45) # Should use boto for polling here |
3339 | for reservations in rv['reservationSet']: |
3340 | # for res_id in reservations.keys(): |
3341 | - # logging.debug(reservations[res_id]) |
3342 | + # LOG.debug(reservations[res_id]) |
3343 | # for instance in reservations[res_id]: |
3344 | for instance in reservations[reservations.keys()[0]]: |
3345 | instance_id = instance['instance_id'] |
3346 | - logging.debug("Terminating instance %s" % instance_id) |
3347 | + LOG.debug(_("Terminating instance %s"), instance_id) |
3348 | rv = self.compute.terminate_instance(instance_id) |
3349 | |
3350 | def test_instance_update_state(self): |
3351 | |
3352 | === modified file 'nova/tests/test_compute.py' |
3353 | --- nova/tests/test_compute.py 2011-01-06 22:35:48 +0000 |
3354 | +++ nova/tests/test_compute.py 2011-01-07 19:51:33 +0000 |
3355 | @@ -20,25 +20,25 @@ |
3356 | """ |
3357 | |
3358 | import datetime |
3359 | -import logging |
3360 | |
3361 | from nova import compute |
3362 | from nova import context |
3363 | from nova import db |
3364 | from nova import exception |
3365 | from nova import flags |
3366 | +from nova import log as logging |
3367 | from nova import test |
3368 | from nova import utils |
3369 | from nova.auth import manager |
3370 | |
3371 | |
3372 | FLAGS = flags.FLAGS |
3373 | +LOG = logging.getLogger('nova.tests.compute') |
3374 | |
3375 | |
3376 | class ComputeTestCase(test.TestCase): |
3377 | """Test case for compute""" |
3378 | def setUp(self): |
3379 | - logging.getLogger().setLevel(logging.DEBUG) |
3380 | super(ComputeTestCase, self).setUp() |
3381 | self.flags(connection_type='fake', |
3382 | stub_network=True, |
3383 | @@ -101,13 +101,13 @@ |
3384 | self.compute.run_instance(self.context, instance_id) |
3385 | |
3386 | instances = db.instance_get_all(context.get_admin_context()) |
3387 | - logging.info(_("Running instances: %s"), instances) |
3388 | + LOG.info(_("Running instances: %s"), instances) |
3389 | self.assertEqual(len(instances), 1) |
3390 | |
3391 | self.compute.terminate_instance(self.context, instance_id) |
3392 | |
3393 | instances = db.instance_get_all(context.get_admin_context()) |
3394 | - logging.info(_("After terminating instances: %s"), instances) |
3395 | + LOG.info(_("After terminating instances: %s"), instances) |
3396 | self.assertEqual(len(instances), 0) |
3397 | |
3398 | def test_run_terminate_timestamps(self): |
3399 | |
3400 | === added file 'nova/tests/test_log.py' |
3401 | --- nova/tests/test_log.py 1970-01-01 00:00:00 +0000 |
3402 | +++ nova/tests/test_log.py 2011-01-07 19:51:33 +0000 |
3403 | @@ -0,0 +1,110 @@ |
3404 | +import cStringIO |
3405 | + |
3406 | +from nova import context |
3407 | +from nova import log |
3408 | +from nova import test |
3409 | + |
3410 | + |
3411 | +def _fake_context(): |
3412 | + return context.RequestContext(1, 1) |
3413 | + |
3414 | + |
3415 | +class RootLoggerTestCase(test.TrialTestCase): |
3416 | + def setUp(self): |
3417 | + super(RootLoggerTestCase, self).setUp() |
3418 | + self.log = log.logging.root |
3419 | + |
3420 | + def tearDown(self): |
3421 | + super(RootLoggerTestCase, self).tearDown() |
3422 | + log.NovaLogger.manager.loggerDict = {} |
3423 | + |
3424 | + def test_is_nova_instance(self): |
3425 | + self.assert_(isinstance(self.log, log.NovaLogger)) |
3426 | + |
3427 | + def test_name_is_nova_root(self): |
3428 | + self.assertEqual("nova.root", self.log.name) |
3429 | + |
3430 | + def test_handlers_have_nova_formatter(self): |
3431 | + formatters = [] |
3432 | + for h in self.log.handlers: |
3433 | + f = h.formatter |
3434 | + if isinstance(f, log.NovaFormatter): |
3435 | + formatters.append(f) |
3436 | + self.assert_(formatters) |
3437 | + self.assertEqual(len(formatters), len(self.log.handlers)) |
3438 | + |
3439 | + def test_handles_context_kwarg(self): |
3440 | + self.log.info("foo", context=_fake_context()) |
3441 | + self.assert_(True) # didn't raise exception |
3442 | + |
3443 | + def test_module_level_methods_handle_context_arg(self): |
3444 | + log.info("foo", context=_fake_context()) |
3445 | + self.assert_(True) # didn't raise exception |
3446 | + |
3447 | + def test_module_level_audit_handles_context_arg(self): |
3448 | + log.audit("foo", context=_fake_context()) |
3449 | + self.assert_(True) # didn't raise exception |
3450 | + |
3451 | + |
3452 | +class NovaFormatterTestCase(test.TrialTestCase): |
3453 | + def setUp(self): |
3454 | + super(NovaFormatterTestCase, self).setUp() |
3455 | + self.flags(logging_context_format_string="HAS CONTEXT "\ |
3456 | + "[%(request_id)s]: %(message)s", |
3457 | + logging_default_format_string="NOCTXT: %(message)s", |
3458 | + logging_debug_format_suffix="--DBG") |
3459 | + self.log = log.logging.root |
3460 | + self.stream = cStringIO.StringIO() |
3461 | + handler = log.StreamHandler(self.stream) |
3462 | + self.log.addHandler(handler) |
3463 | + self.log.setLevel(log.DEBUG) |
3464 | + |
3465 | + def tearDown(self): |
3466 | + super(NovaFormatterTestCase, self).tearDown() |
3467 | + log.NovaLogger.manager.loggerDict = {} |
3468 | + |
3469 | + def test_uncontextualized_log(self): |
3470 | + self.log.info("foo") |
3471 | + self.assertEqual("NOCTXT: foo\n", self.stream.getvalue()) |
3472 | + |
3473 | + def test_contextualized_log(self): |
3474 | + ctxt = _fake_context() |
3475 | + self.log.info("bar", context=ctxt) |
3476 | + expected = "HAS CONTEXT [%s]: bar\n" % ctxt.request_id |
3477 | + self.assertEqual(expected, self.stream.getvalue()) |
3478 | + |
3479 | + def test_debugging_log(self): |
3480 | + self.log.debug("baz") |
3481 | + self.assertEqual("NOCTXT: baz --DBG\n", self.stream.getvalue()) |
3482 | + |
3483 | + |
3484 | +class NovaLoggerTestCase(test.TrialTestCase): |
3485 | + def setUp(self): |
3486 | + super(NovaLoggerTestCase, self).setUp() |
3487 | + self.flags(default_log_levels=["nova-test=AUDIT"], verbose=False) |
3488 | + self.log = log.getLogger('nova-test') |
3489 | + |
3490 | + def tearDown(self): |
3491 | + super(NovaLoggerTestCase, self).tearDown() |
3492 | + log.NovaLogger.manager.loggerDict = {} |
3493 | + |
3494 | + def test_has_level_from_flags(self): |
3495 | + self.assertEqual(log.AUDIT, self.log.level) |
3496 | + |
3497 | + def test_child_log_has_level_of_parent_flag(self): |
3498 | + l = log.getLogger('nova-test.foo') |
3499 | + self.assertEqual(log.AUDIT, l.level) |
3500 | + |
3501 | + |
3502 | +class VerboseLoggerTestCase(test.TrialTestCase): |
3503 | + def setUp(self): |
3504 | + super(VerboseLoggerTestCase, self).setUp() |
3505 | + self.flags(default_log_levels=["nova.test=AUDIT"], verbose=True) |
3506 | + self.log = log.getLogger('nova.test') |
3507 | + |
3508 | + def tearDown(self): |
3509 | + super(VerboseLoggerTestCase, self).tearDown() |
3510 | + log.NovaLogger.manager.loggerDict = {} |
3511 | + |
3512 | + def test_will_be_verbose_if_named_nova_and_verbose_flag_set(self): |
3513 | + self.assertEqual(log.DEBUG, self.log.level) |
3514 | |
3515 | === modified file 'nova/tests/test_network.py' |
3516 | --- nova/tests/test_network.py 2010-12-23 18:31:46 +0000 |
3517 | +++ nova/tests/test_network.py 2011-01-07 19:51:33 +0000 |
3518 | @@ -20,18 +20,18 @@ |
3519 | """ |
3520 | import IPy |
3521 | import os |
3522 | -import logging |
3523 | |
3524 | from nova import context |
3525 | from nova import db |
3526 | from nova import exception |
3527 | from nova import flags |
3528 | -from nova import service |
3529 | +from nova import log as logging |
3530 | from nova import test |
3531 | from nova import utils |
3532 | from nova.auth import manager |
3533 | |
3534 | FLAGS = flags.FLAGS |
3535 | +LOG = logging.getLogger('nova.tests.network') |
3536 | |
3537 | |
3538 | class NetworkTestCase(test.TestCase): |
3539 | @@ -45,7 +45,6 @@ |
3540 | fake_network=True, |
3541 | network_size=16, |
3542 | num_networks=5) |
3543 | - logging.getLogger().setLevel(logging.DEBUG) |
3544 | self.manager = manager.AuthManager() |
3545 | self.user = self.manager.create_user('netuser', 'netuser', 'netuser') |
3546 | self.projects = [] |
3547 | @@ -328,7 +327,7 @@ |
3548 | 'TESTING': '1', |
3549 | 'FLAGFILE': FLAGS.dhcpbridge_flagfile} |
3550 | (out, err) = utils.execute(cmd, addl_env=env) |
3551 | - logging.debug("ISSUE_IP: %s, %s ", out, err) |
3552 | + LOG.debug("ISSUE_IP: %s, %s ", out, err) |
3553 | |
3554 | |
3555 | def release_ip(private_ip): |
3556 | @@ -344,4 +343,4 @@ |
3557 | 'TESTING': '1', |
3558 | 'FLAGFILE': FLAGS.dhcpbridge_flagfile} |
3559 | (out, err) = utils.execute(cmd, addl_env=env) |
3560 | - logging.debug("RELEASE_IP: %s, %s ", out, err) |
3561 | + LOG.debug("RELEASE_IP: %s, %s ", out, err) |
3562 | |
3563 | === modified file 'nova/tests/test_quota.py' |
3564 | --- nova/tests/test_quota.py 2011-01-04 21:56:36 +0000 |
3565 | +++ nova/tests/test_quota.py 2011-01-07 19:51:33 +0000 |
3566 | @@ -16,11 +16,8 @@ |
3567 | # License for the specific language governing permissions and limitations |
3568 | # under the License. |
3569 | |
3570 | -import logging |
3571 | - |
3572 | from nova import context |
3573 | from nova import db |
3574 | -from nova import exception |
3575 | from nova import flags |
3576 | from nova import quota |
3577 | from nova import test |
3578 | @@ -35,7 +32,6 @@ |
3579 | |
3580 | class QuotaTestCase(test.TestCase): |
3581 | def setUp(self): |
3582 | - logging.getLogger().setLevel(logging.DEBUG) |
3583 | super(QuotaTestCase, self).setUp() |
3584 | self.flags(connection_type='fake', |
3585 | quota_instances=2, |
3586 | |
3587 | === modified file 'nova/tests/test_rpc.py' |
3588 | --- nova/tests/test_rpc.py 2010-12-23 18:31:46 +0000 |
3589 | +++ nova/tests/test_rpc.py 2011-01-07 19:51:33 +0000 |
3590 | @@ -18,15 +18,16 @@ |
3591 | """ |
3592 | Unit Tests for remote procedure calls using queue |
3593 | """ |
3594 | -import logging |
3595 | |
3596 | from nova import context |
3597 | from nova import flags |
3598 | +from nova import log as logging |
3599 | from nova import rpc |
3600 | from nova import test |
3601 | |
3602 | |
3603 | FLAGS = flags.FLAGS |
3604 | +LOG = logging.getLogger('nova.tests.rpc') |
3605 | |
3606 | |
3607 | class RpcTestCase(test.TestCase): |
3608 | @@ -85,12 +86,12 @@ |
3609 | @staticmethod |
3610 | def echo(context, queue, value): |
3611 | """Calls echo in the passed queue""" |
3612 | - logging.debug("Nested received %s, %s", queue, value) |
3613 | + LOG.debug(_("Nested received %s, %s"), queue, value) |
3614 | ret = rpc.call(context, |
3615 | queue, |
3616 | {"method": "echo", |
3617 | "args": {"value": value}}) |
3618 | - logging.debug("Nested return %s", ret) |
3619 | + LOG.debug(_("Nested return %s"), ret) |
3620 | return value |
3621 | |
3622 | nested = Nested() |
3623 | @@ -115,13 +116,13 @@ |
3624 | @staticmethod |
3625 | def echo(context, value): |
3626 | """Simply returns whatever value is sent in""" |
3627 | - logging.debug("Received %s", value) |
3628 | + LOG.debug(_("Received %s"), value) |
3629 | return value |
3630 | |
3631 | @staticmethod |
3632 | def context(context, value): |
3633 | """Returns dictionary version of context""" |
3634 | - logging.debug("Received %s", context) |
3635 | + LOG.debug(_("Received %s"), context) |
3636 | return context.to_dict() |
3637 | |
3638 | @staticmethod |
3639 | |
3640 | === modified file 'nova/tests/test_volume.py' |
3641 | --- nova/tests/test_volume.py 2010-12-17 01:05:54 +0000 |
3642 | +++ nova/tests/test_volume.py 2011-01-07 19:51:33 +0000 |
3643 | @@ -19,23 +19,23 @@ |
3644 | Tests for Volume Code. |
3645 | |
3646 | """ |
3647 | -import logging |
3648 | |
3649 | from nova import context |
3650 | from nova import exception |
3651 | from nova import db |
3652 | from nova import flags |
3653 | +from nova import log as logging |
3654 | from nova import test |
3655 | from nova import utils |
3656 | |
3657 | FLAGS = flags.FLAGS |
3658 | +LOG = logging.getLogger('nova.tests.volume') |
3659 | |
3660 | |
3661 | class VolumeTestCase(test.TestCase): |
3662 | """Test Case for volumes.""" |
3663 | |
3664 | def setUp(self): |
3665 | - logging.getLogger().setLevel(logging.DEBUG) |
3666 | super(VolumeTestCase, self).setUp() |
3667 | self.compute = utils.import_object(FLAGS.compute_manager) |
3668 | self.flags(connection_type='fake') |
3669 | @@ -159,7 +159,7 @@ |
3670 | volume_id) |
3671 | self.assert_(iscsi_target not in targets) |
3672 | targets.append(iscsi_target) |
3673 | - logging.debug("Target %s allocated", iscsi_target) |
3674 | + LOG.debug(_("Target %s allocated"), iscsi_target) |
3675 | total_slots = FLAGS.iscsi_num_targets |
3676 | for _index in xrange(total_slots): |
3677 | volume_id = self._create_volume() |
3678 | |
3679 | === modified file 'nova/twistd.py' |
3680 | --- nova/twistd.py 2011-01-05 00:06:03 +0000 |
3681 | +++ nova/twistd.py 2011-01-07 19:51:33 +0000 |
3682 | @@ -22,7 +22,6 @@ |
3683 | """ |
3684 | |
3685 | import gflags |
3686 | -import logging |
3687 | import os |
3688 | import signal |
3689 | import sys |
3690 | @@ -34,6 +33,7 @@ |
3691 | from twisted.python import usage |
3692 | |
3693 | from nova import flags |
3694 | +from nova import log as logging |
3695 | |
3696 | |
3697 | if runtime.platformType == "win32": |
3698 | @@ -234,22 +234,12 @@ |
3699 | OptionsClass = WrapTwistedOptions(TwistdServerOptions) |
3700 | options = OptionsClass() |
3701 | argv = options.parseOptions() |
3702 | - logging.getLogger('amqplib').setLevel(logging.WARN) |
3703 | FLAGS.python = filename |
3704 | FLAGS.no_save = True |
3705 | if not FLAGS.pidfile: |
3706 | FLAGS.pidfile = '%s.pid' % name |
3707 | elif FLAGS.pidfile.endswith('twistd.pid'): |
3708 | FLAGS.pidfile = FLAGS.pidfile.replace('twistd.pid', '%s.pid' % name) |
3709 | - # NOTE(vish): if we're running nodaemon, redirect the log to stdout |
3710 | - if FLAGS.nodaemon and not FLAGS.logfile: |
3711 | - FLAGS.logfile = "-" |
3712 | - if not FLAGS.logfile: |
3713 | - FLAGS.logfile = '%s.log' % name |
3714 | - elif FLAGS.logfile.endswith('twistd.log'): |
3715 | - FLAGS.logfile = FLAGS.logfile.replace('twistd.log', '%s.log' % name) |
3716 | - if FLAGS.logdir: |
3717 | - FLAGS.logfile = os.path.join(FLAGS.logdir, FLAGS.logfile) |
3718 | if not FLAGS.prefix: |
3719 | FLAGS.prefix = name |
3720 | elif FLAGS.prefix.endswith('twisted'): |
3721 | @@ -270,19 +260,10 @@ |
3722 | print 'usage: %s [options] [start|stop|restart]' % argv[0] |
3723 | sys.exit(1) |
3724 | |
3725 | - formatter = logging.Formatter( |
3726 | - '(%(name)s): %(levelname)s %(message)s') |
3727 | - handler = logging.StreamHandler(log.StdioOnnaStick()) |
3728 | - handler.setFormatter(formatter) |
3729 | - logging.getLogger().addHandler(handler) |
3730 | - |
3731 | - if FLAGS.verbose: |
3732 | - logging.getLogger().setLevel(logging.DEBUG) |
3733 | - else: |
3734 | - logging.getLogger().setLevel(logging.WARNING) |
3735 | - |
3736 | + logging.basicConfig() |
3737 | logging.debug(_("Full set of FLAGS:")) |
3738 | for flag in FLAGS: |
3739 | logging.debug("%s : %s" % (flag, FLAGS.get(flag, None))) |
3740 | |
3741 | + logging.audit(_("Starting %s"), name) |
3742 | twistd.runApp(options) |
3743 | |
3744 | === modified file 'nova/utils.py' |
3745 | --- nova/utils.py 2010-12-26 14:08:38 +0000 |
3746 | +++ nova/utils.py 2011-01-07 19:51:33 +0000 |
3747 | @@ -22,7 +22,6 @@ |
3748 | |
3749 | import datetime |
3750 | import inspect |
3751 | -import logging |
3752 | import os |
3753 | import random |
3754 | import subprocess |
3755 | @@ -37,8 +36,10 @@ |
3756 | |
3757 | from nova import exception |
3758 | from nova.exception import ProcessExecutionError |
3759 | - |
3760 | - |
3761 | +from nova import log as logging |
3762 | + |
3763 | + |
3764 | +LOG = logging.getLogger("nova.utils") |
3765 | TIME_FORMAT = "%Y-%m-%dT%H:%M:%SZ" |
3766 | |
3767 | |
3768 | @@ -109,7 +110,7 @@ |
3769 | |
3770 | |
3771 | def fetchfile(url, target): |
3772 | - logging.debug(_("Fetching %s") % url) |
3773 | + LOG.debug(_("Fetching %s") % url) |
3774 | # c = pycurl.Curl() |
3775 | # fp = open(target, "wb") |
3776 | # c.setopt(c.URL, url) |
3777 | @@ -121,7 +122,7 @@ |
3778 | |
3779 | |
3780 | def execute(cmd, process_input=None, addl_env=None, check_exit_code=True): |
3781 | - logging.debug(_("Running cmd (subprocess): %s"), cmd) |
3782 | + LOG.debug(_("Running cmd (subprocess): %s"), cmd) |
3783 | env = os.environ.copy() |
3784 | if addl_env: |
3785 | env.update(addl_env) |
3786 | @@ -134,7 +135,7 @@ |
3787 | result = obj.communicate() |
3788 | obj.stdin.close() |
3789 | if obj.returncode: |
3790 | - logging.debug(_("Result was %s") % (obj.returncode)) |
3791 | + LOG.debug(_("Result was %s") % (obj.returncode)) |
3792 | if check_exit_code and obj.returncode != 0: |
3793 | (stdout, stderr) = result |
3794 | raise ProcessExecutionError(exit_code=obj.returncode, |
3795 | @@ -167,12 +168,12 @@ |
3796 | |
3797 | |
3798 | def debug(arg): |
3799 | - logging.debug('debug in callback: %s', arg) |
3800 | + LOG.debug(_('debug in callback: %s'), arg) |
3801 | return arg |
3802 | |
3803 | |
3804 | def runthis(prompt, cmd, check_exit_code=True): |
3805 | - logging.debug(_("Running %s") % (cmd)) |
3806 | + LOG.debug(_("Running %s"), (cmd)) |
3807 | rv, err = execute(cmd, check_exit_code=check_exit_code) |
3808 | |
3809 | |
3810 | @@ -203,7 +204,7 @@ |
3811 | csock.close() |
3812 | return addr |
3813 | except socket.gaierror as ex: |
3814 | - logging.warn(_("Couldn't get IP, using 127.0.0.1 %s"), ex) |
3815 | + LOG.warn(_("Couldn't get IP, using 127.0.0.1 %s"), ex) |
3816 | return "127.0.0.1" |
3817 | |
3818 | |
3819 | @@ -296,7 +297,7 @@ |
3820 | fromlist = backend |
3821 | |
3822 | self.__backend = __import__(name, None, None, fromlist) |
3823 | - logging.info('backend %s', self.__backend) |
3824 | + LOG.debug(_('backend %s'), self.__backend) |
3825 | return self.__backend |
3826 | |
3827 | def __getattr__(self, key): |
3828 | |
3829 | === modified file 'nova/virt/connection.py' |
3830 | --- nova/virt/connection.py 2011-01-03 19:39:50 +0000 |
3831 | +++ nova/virt/connection.py 2011-01-07 19:51:33 +0000 |
3832 | @@ -19,16 +19,17 @@ |
3833 | |
3834 | """Abstraction of the underlying virtualization API.""" |
3835 | |
3836 | -import logging |
3837 | import sys |
3838 | |
3839 | from nova import flags |
3840 | +from nova import log as logging |
3841 | from nova.virt import fake |
3842 | from nova.virt import libvirt_conn |
3843 | from nova.virt import xenapi_conn |
3844 | from nova.virt import hyperv |
3845 | |
3846 | |
3847 | +LOG = logging.getLogger("nova.virt.connection") |
3848 | FLAGS = flags.FLAGS |
3849 | |
3850 | |
3851 | @@ -69,6 +70,6 @@ |
3852 | raise Exception('Unknown connection type "%s"' % t) |
3853 | |
3854 | if conn is None: |
3855 | - logging.error(_('Failed to open connection to the hypervisor')) |
3856 | + LOG.error(_('Failed to open connection to the hypervisor')) |
3857 | sys.exit(1) |
3858 | return conn |
3859 | |
3860 | === modified file 'nova/virt/hyperv.py' |
3861 | --- nova/virt/hyperv.py 2011-01-04 23:42:29 +0000 |
3862 | +++ nova/virt/hyperv.py 2011-01-07 19:51:33 +0000 |
3863 | @@ -61,11 +61,11 @@ |
3864 | """ |
3865 | |
3866 | import os |
3867 | -import logging |
3868 | import time |
3869 | |
3870 | from nova import exception |
3871 | from nova import flags |
3872 | +from nova import log as logging |
3873 | from nova.auth import manager |
3874 | from nova.compute import power_state |
3875 | from nova.virt import images |
3876 | @@ -76,6 +76,9 @@ |
3877 | FLAGS = flags.FLAGS |
3878 | |
3879 | |
3880 | +LOG = logging.getLogger('nova.virt.hyperv') |
3881 | + |
3882 | + |
3883 | HYPERV_POWER_STATE = { |
3884 | 3: power_state.SHUTDOWN, |
3885 | 2: power_state.RUNNING, |
3886 | @@ -112,7 +115,7 @@ |
3887 | |
3888 | def init_host(self): |
3889 | #FIXME(chiradeep): implement this |
3890 | - logging.debug(_('In init host')) |
3891 | + LOG.debug(_('In init host')) |
3892 | pass |
3893 | |
3894 | def list_instances(self): |
3895 | @@ -142,11 +145,11 @@ |
3896 | self._create_disk(instance['name'], vhdfile) |
3897 | self._create_nic(instance['name'], instance['mac_address']) |
3898 | |
3899 | - logging.debug(_('Starting VM %s '), instance.name) |
3900 | + LOG.debug(_('Starting VM %s '), instance.name) |
3901 | self._set_vm_state(instance['name'], 'Enabled') |
3902 | - logging.info(_('Started VM %s '), instance.name) |
3903 | + LOG.info(_('Started VM %s '), instance.name) |
3904 | except Exception as exn: |
3905 | - logging.error(_('spawn vm failed: %s'), exn) |
3906 | + LOG.exception(_('spawn vm failed: %s'), exn) |
3907 | self.destroy(instance) |
3908 | |
3909 | def _create_vm(self, instance): |
3910 | @@ -165,7 +168,7 @@ |
3911 | if not success: |
3912 | raise Exception(_('Failed to create VM %s'), instance.name) |
3913 | |
3914 | - logging.debug(_('Created VM %s...'), instance.name) |
3915 | + LOG.debug(_('Created VM %s...'), instance.name) |
3916 | vm = self._conn.Msvm_ComputerSystem(ElementName=instance.name)[0] |
3917 | |
3918 | vmsettings = vm.associators( |
3919 | @@ -182,7 +185,7 @@ |
3920 | |
3921 | (job, ret_val) = vs_man_svc.ModifyVirtualSystemResources( |
3922 | vm.path_(), [memsetting.GetText_(1)]) |
3923 | - logging.debug(_('Set memory for vm %s...'), instance.name) |
3924 | + LOG.debug(_('Set memory for vm %s...'), instance.name) |
3925 | procsetting = vmsetting.associators( |
3926 | wmi_result_class='Msvm_ProcessorSettingData')[0] |
3927 | vcpus = long(instance['vcpus']) |
3928 | @@ -192,12 +195,12 @@ |
3929 | |
3930 | (job, ret_val) = vs_man_svc.ModifyVirtualSystemResources( |
3931 | vm.path_(), [procsetting.GetText_(1)]) |
3932 | - logging.debug(_('Set vcpus for vm %s...'), instance.name) |
3933 | + LOG.debug(_('Set vcpus for vm %s...'), instance.name) |
3934 | |
3935 | def _create_disk(self, vm_name, vhdfile): |
3936 | """Create a disk and attach it to the vm""" |
3937 | - logging.debug(_('Creating disk for %s by attaching disk file %s'), |
3938 | - vm_name, vhdfile) |
3939 | + LOG.debug(_('Creating disk for %s by attaching disk file %s'), |
3940 | + vm_name, vhdfile) |
3941 | #Find the IDE controller for the vm. |
3942 | vms = self._conn.MSVM_ComputerSystem(ElementName=vm_name) |
3943 | vm = vms[0] |
3944 | @@ -224,7 +227,7 @@ |
3945 | raise Exception(_('Failed to add diskdrive to VM %s'), |
3946 | vm_name) |
3947 | diskdrive_path = new_resources[0] |
3948 | - logging.debug(_('New disk drive path is %s'), diskdrive_path) |
3949 | + LOG.debug(_('New disk drive path is %s'), diskdrive_path) |
3950 | #Find the default VHD disk object. |
3951 | vhddefault = self._conn.query( |
3952 | "SELECT * FROM Msvm_ResourceAllocationSettingData \ |
3953 | @@ -243,11 +246,11 @@ |
3954 | if new_resources is None: |
3955 | raise Exception(_('Failed to add vhd file to VM %s'), |
3956 | vm_name) |
3957 | - logging.info(_('Created disk for %s'), vm_name) |
3958 | + LOG.info(_('Created disk for %s'), vm_name) |
3959 | |
3960 | def _create_nic(self, vm_name, mac): |
3961 | """Create a (emulated) nic and attach it to the vm""" |
3962 | - logging.debug(_('Creating nic for %s '), vm_name) |
3963 | + LOG.debug(_('Creating nic for %s '), vm_name) |
3964 | #Find the vswitch that is connected to the physical nic. |
3965 | vms = self._conn.Msvm_ComputerSystem(ElementName=vm_name) |
3966 | extswitch = self._find_external_network() |
3967 | @@ -266,11 +269,11 @@ |
3968 | (new_port, ret_val) = switch_svc.CreateSwitchPort(vm_name, vm_name, |
3969 | "", extswitch.path_()) |
3970 | if ret_val != 0: |
3971 | - logging.error(_('Failed creating a port on the external vswitch')) |
3972 | + LOG.error(_('Failed creating a port on the external vswitch')) |
3973 | raise Exception(_('Failed creating port for %s'), |
3974 | vm_name) |
3975 | - logging.debug(_("Created switch port %s on switch %s"), |
3976 | - vm_name, extswitch.path_()) |
3977 | + LOG.debug(_("Created switch port %s on switch %s"), |
3978 | + vm_name, extswitch.path_()) |
3979 | #Connect the new nic to the new port. |
3980 | new_nic_data.Connection = [new_port] |
3981 | new_nic_data.ElementName = vm_name + ' nic' |
3982 | @@ -281,7 +284,7 @@ |
3983 | if new_resources is None: |
3984 | raise Exception(_('Failed to add nic to VM %s'), |
3985 | vm_name) |
3986 | - logging.info(_("Created nic for %s "), vm_name) |
3987 | + LOG.info(_("Created nic for %s "), vm_name) |
3988 | |
3989 | def _add_virt_resource(self, res_setting_data, target_vm): |
3990 | """Add a new resource (disk/nic) to the VM""" |
3991 | @@ -314,10 +317,10 @@ |
3992 | time.sleep(0.1) |
3993 | job = self._conn.Msvm_ConcreteJob(InstanceID=inst_id)[0] |
3994 | if job.JobState != WMI_JOB_STATE_COMPLETED: |
3995 | - logging.debug(_("WMI job failed: %s"), job.ErrorSummaryDescription) |
3996 | + LOG.debug(_("WMI job failed: %s"), job.ErrorSummaryDescription) |
3997 | return False |
3998 | - logging.debug(_("WMI job succeeded: %s, Elapsed=%s "), job.Description, |
3999 | - job.ElapsedTime) |
4000 | + LOG.debug(_("WMI job succeeded: %s, Elapsed=%s "), job.Description, |
4001 | + job.ElapsedTime) |
4002 | return True |
4003 | |
4004 | def _find_external_network(self): |
4005 | @@ -352,7 +355,7 @@ |
4006 | |
4007 | def destroy(self, instance): |
4008 | """Destroy the VM. Also destroy the associated VHD disk files""" |
4009 | - logging.debug(_("Got request to destroy vm %s"), instance.name) |
4010 | + LOG.debug(_("Got request to destroy vm %s"), instance.name) |
4011 | vm = self._lookup(instance.name) |
4012 | if vm is None: |
4013 | return |
4014 | @@ -383,7 +386,7 @@ |
4015 | vhdfile = self._cim_conn.CIM_DataFile(Name=disk) |
4016 | for vf in vhdfile: |
4017 | vf.Delete() |
4018 | - logging.debug(_("Del: disk %s vm %s"), vhdfile, instance.name) |
4019 | + LOG.debug(_("Del: disk %s vm %s"), vhdfile, instance.name) |
4020 | |
4021 | def get_info(self, instance_id): |
4022 | """Get information about the VM""" |
4023 | @@ -399,12 +402,12 @@ |
4024 | summary_info = vs_man_svc.GetSummaryInformation( |
4025 | [4, 100, 103, 105], settings_paths)[1] |
4026 | info = summary_info[0] |
4027 | - logging.debug(_("Got Info for vm %s: state=%s, mem=%s, num_cpu=%s, \ |
4028 | + LOG.debug(_("Got Info for vm %s: state=%s, mem=%s, num_cpu=%s, \ |
4029 | cpu_time=%s"), instance_id, |
4030 | - str(HYPERV_POWER_STATE[info.EnabledState]), |
4031 | - str(info.MemoryUsage), |
4032 | - str(info.NumberOfProcessors), |
4033 | - str(info.UpTime)) |
4034 | + str(HYPERV_POWER_STATE[info.EnabledState]), |
4035 | + str(info.MemoryUsage), |
4036 | + str(info.NumberOfProcessors), |
4037 | + str(info.UpTime)) |
4038 | |
4039 | return {'state': HYPERV_POWER_STATE[info.EnabledState], |
4040 | 'max_mem': info.MemoryUsage, |
4041 | @@ -438,11 +441,11 @@ |
4042 | #already in the state requested |
4043 | success = True |
4044 | if success: |
4045 | - logging.info(_("Successfully changed vm state of %s to %s"), |
4046 | - vm_name, req_state) |
4047 | + LOG.info(_("Successfully changed vm state of %s to %s"), vm_name, |
4048 | + req_state) |
4049 | else: |
4050 | - logging.error(_("Failed to change vm state of %s to %s"), |
4051 | - vm_name, req_state) |
4052 | + LOG.error(_("Failed to change vm state of %s to %s"), vm_name, |
4053 | + req_state) |
4054 | raise Exception(_("Failed to change vm state of %s to %s"), |
4055 | vm_name, req_state) |
4056 | |
4057 | |
4058 | === modified file 'nova/virt/images.py' |
4059 | --- nova/virt/images.py 2011-01-05 00:06:03 +0000 |
4060 | +++ nova/virt/images.py 2011-01-07 19:51:33 +0000 |
4061 | @@ -21,7 +21,6 @@ |
4062 | Handling of VM disk images. |
4063 | """ |
4064 | |
4065 | -import logging |
4066 | import os.path |
4067 | import shutil |
4068 | import sys |
4069 | @@ -30,6 +29,7 @@ |
4070 | import urlparse |
4071 | |
4072 | from nova import flags |
4073 | +from nova import log as logging |
4074 | from nova import utils |
4075 | from nova.auth import manager |
4076 | from nova.auth import signer |
4077 | @@ -40,6 +40,8 @@ |
4078 | flags.DEFINE_bool('use_s3', True, |
4079 | 'whether to get images from s3 or use local copy') |
4080 | |
4081 | +LOG = logging.getLogger('nova.virt.images') |
4082 | + |
4083 | |
4084 | def fetch(image, path, user, project): |
4085 | if FLAGS.use_s3: |
4086 | @@ -65,7 +67,7 @@ |
4087 | |
4088 | urlopened = urllib2.urlopen(request) |
4089 | urlretrieve(urlopened, path) |
4090 | - logging.debug(_("Finished retreving %s -- placed in %s"), url, path) |
4091 | + LOG.debug(_("Finished retrieving %s -- placed in %s"), url, path) |
4092 | |
4093 | |
4094 | def _fetch_s3_image(image, path, user, project): |
4095 | |
4096 | === modified file 'nova/virt/libvirt_conn.py' |
4097 | --- nova/virt/libvirt_conn.py 2010-12-30 21:23:14 +0000 |
4098 | +++ nova/virt/libvirt_conn.py 2011-01-07 19:51:33 +0000 |
4099 | @@ -36,7 +36,6 @@ |
4100 | |
4101 | """ |
4102 | |
4103 | -import logging |
4104 | import os |
4105 | import shutil |
4106 | |
4107 | @@ -50,6 +49,7 @@ |
4108 | from nova import db |
4109 | from nova import exception |
4110 | from nova import flags |
4111 | +from nova import log as logging |
4112 | from nova import utils |
4113 | #from nova.api import context |
4114 | from nova.auth import manager |
4115 | @@ -62,6 +62,7 @@ |
4116 | libxml2 = None |
4117 | Template = None |
4118 | |
4119 | +LOG = logging.getLogger('nova.virt.libvirt_conn') |
4120 | |
4121 | FLAGS = flags.FLAGS |
4122 | # TODO(vish): These flags should probably go into a shared location |
4123 | @@ -130,7 +131,7 @@ |
4124 | @property |
4125 | def _conn(self): |
4126 | if not self._wrapped_conn or not self._test_connection(): |
4127 | - logging.debug(_('Connecting to libvirt: %s') % self.libvirt_uri) |
4128 | + LOG.debug(_('Connecting to libvirt: %s'), self.libvirt_uri) |
4129 | self._wrapped_conn = self._connect(self.libvirt_uri, |
4130 | self.read_only) |
4131 | return self._wrapped_conn |
4132 | @@ -142,7 +143,7 @@ |
4133 | except libvirt.libvirtError as e: |
4134 | if e.get_error_code() == libvirt.VIR_ERR_SYSTEM_ERROR and \ |
4135 | e.get_error_domain() == libvirt.VIR_FROM_REMOTE: |
4136 | - logging.debug(_('Connection to libvirt broke')) |
4137 | + LOG.debug(_('Connection to libvirt broke')) |
4138 | return False |
4139 | raise |
4140 | |
4141 | @@ -214,8 +215,8 @@ |
4142 | |
4143 | def _cleanup(self, instance): |
4144 | target = os.path.join(FLAGS.instances_path, instance['name']) |
4145 | - logging.info(_('instance %s: deleting instance files %s'), |
4146 | - instance['name'], target) |
4147 | + LOG.info(_('instance %s: deleting instance files %s'), |
4148 | + instance['name'], target) |
4149 | if os.path.exists(target): |
4150 | shutil.rmtree(target) |
4151 | |
4152 | @@ -279,10 +280,10 @@ |
4153 | db.instance_set_state(context.get_admin_context(), |
4154 | instance['id'], state) |
4155 | if state == power_state.RUNNING: |
4156 | - logging.debug(_('instance %s: rebooted'), instance['name']) |
4157 | + LOG.debug(_('instance %s: rebooted'), instance['name']) |
4158 | timer.stop() |
4159 | except Exception, exn: |
4160 | - logging.error(_('_wait_for_reboot failed: %s'), exn) |
4161 | + LOG.exception(_('_wait_for_reboot failed: %s'), exn) |
4162 | db.instance_set_state(context.get_admin_context(), |
4163 | instance['id'], |
4164 | power_state.SHUTDOWN) |
4165 | @@ -325,10 +326,10 @@ |
4166 | state = self.get_info(instance['name'])['state'] |
4167 | db.instance_set_state(None, instance['id'], state) |
4168 | if state == power_state.RUNNING: |
4169 | - logging.debug(_('instance %s: rescued'), instance['name']) |
4170 | + LOG.debug(_('instance %s: rescued'), instance['name']) |
4171 | timer.stop() |
4172 | except Exception, exn: |
4173 | - logging.error(_('_wait_for_rescue failed: %s'), exn) |
4174 | + LOG.exception(_('_wait_for_rescue failed: %s'), exn) |
4175 | db.instance_set_state(None, |
4176 | instance['id'], |
4177 | power_state.SHUTDOWN) |
4178 | @@ -353,7 +354,7 @@ |
4179 | NWFilterFirewall(self._conn).setup_nwfilters_for_instance(instance) |
4180 | self._create_image(instance, xml) |
4181 | self._conn.createXML(xml, 0) |
4182 | - logging.debug(_("instance %s: is running"), instance['name']) |
4183 | + LOG.debug(_("instance %s: is running"), instance['name']) |
4184 | |
4185 | timer = utils.LoopingCall(f=None) |
4186 | |
4187 | @@ -363,11 +364,11 @@ |
4188 | db.instance_set_state(context.get_admin_context(), |
4189 | instance['id'], state) |
4190 | if state == power_state.RUNNING: |
4191 | - logging.debug(_('instance %s: booted'), instance['name']) |
4192 | + LOG.debug(_('instance %s: booted'), instance['name']) |
4193 | timer.stop() |
4194 | except: |
4195 | - logging.exception(_('instance %s: failed to boot'), |
4196 | - instance['name']) |
4197 | + LOG.exception(_('instance %s: failed to boot'), |
4198 | + instance['name']) |
4199 | db.instance_set_state(context.get_admin_context(), |
4200 | instance['id'], |
4201 | power_state.SHUTDOWN) |
4202 | @@ -377,11 +378,11 @@ |
4203 | return timer.start(interval=0.5, now=True) |
4204 | |
4205 | def _flush_xen_console(self, virsh_output): |
4206 | - logging.info('virsh said: %r' % (virsh_output,)) |
4207 | + LOG.info(_('virsh said: %r'), virsh_output) |
4208 | virsh_output = virsh_output[0].strip() |
4209 | |
4210 | if virsh_output.startswith('/dev/'): |
4211 | - logging.info(_('cool, it\'s a device')) |
4212 | + LOG.info(_('cool, it\'s a device')) |
4213 | out, err = utils.execute("sudo dd if=%s iflag=nonblock" % |
4214 | virsh_output, check_exit_code=False) |
4215 | return out |
4216 | @@ -389,7 +390,7 @@ |
4217 | return '' |
4218 | |
4219 | def _append_to_file(self, data, fpath): |
4220 | - logging.info(_('data: %r, fpath: %r') % (data, fpath)) |
4221 | + LOG.info(_('data: %r, fpath: %r'), data, fpath) |
4222 | fp = open(fpath, 'a+') |
4223 | fp.write(data) |
4224 | return fpath |
4225 | @@ -397,7 +398,7 @@ |
4226 | def _dump_file(self, fpath): |
4227 | fp = open(fpath, 'r+') |
4228 | contents = fp.read() |
4229 | - logging.info('Contents: %r' % (contents,)) |
4230 | + LOG.info(_('Contents of file %s: %r'), fpath, contents) |
4231 | return contents |
4232 | |
4233 | @exception.wrap_exception |
4234 | @@ -431,7 +432,7 @@ |
4235 | |
4236 | # TODO(termie): these are blocking calls, it would be great |
4237 | # if they weren't. |
4238 | - logging.info(_('instance %s: Creating image'), inst['name']) |
4239 | + LOG.info(_('instance %s: Creating image'), inst['name']) |
4240 | f = open(basepath('libvirt.xml'), 'w') |
4241 | f.write(libvirt_xml) |
4242 | f.close() |
4243 | @@ -487,10 +488,10 @@ |
4244 | 'dns': network_ref['dns']} |
4245 | if key or net: |
4246 | if key: |
4247 | - logging.info(_('instance %s: injecting key into image %s'), |
4248 | + LOG.info(_('instance %s: injecting key into image %s'), |
4249 | inst['name'], inst.image_id) |
4250 | if net: |
4251 | - logging.info(_('instance %s: injecting net into image %s'), |
4252 | + LOG.info(_('instance %s: injecting net into image %s'), |
4253 | inst['name'], inst.image_id) |
4254 | try: |
4255 | disk.inject_data(basepath('disk-raw'), key, net, |
4256 | @@ -498,9 +499,9 @@ |
4257 | execute=execute) |
4258 | except Exception as e: |
4259 | # This could be a windows image, or a vmdk format disk |
4260 | - logging.warn(_('instance %s: ignoring error injecting data' |
4261 | - ' into image %s (%s)'), |
4262 | - inst['name'], inst.image_id, e) |
4263 | + LOG.warn(_('instance %s: ignoring error injecting data' |
4264 | + ' into image %s (%s)'), |
4265 | + inst['name'], inst.image_id, e) |
4266 | |
4267 | if inst['kernel_id']: |
4268 | if os.path.exists(basepath('disk')): |
4269 | @@ -526,8 +527,10 @@ |
4270 | |
4271 | def to_xml(self, instance, rescue=False): |
4272 | # TODO(termie): cache? |
4273 | - logging.debug(_('instance %s: starting toXML method'), |
4274 | - instance['name']) |
4275 | + LOG.debug(_('instance %s: starting toXML method'), instance['name']) |
4279 | network = db.network_get_by_instance(context.get_admin_context(), |
4280 | instance['id']) |
4281 | # FIXME(vish): stick this in db |
4282 | @@ -569,7 +572,7 @@ |
4283 | xml_info['disk'] = xml_info['basepath'] + "/disk" |
4284 | |
4285 | xml = str(Template(self.libvirt_xml, searchList=[xml_info])) |
4286 | - logging.debug(_('instance %s: finished toXML method'), |
4287 | + LOG.debug(_('instance %s: finished toXML method'), |
4288 | instance['name']) |
4289 | |
4290 | return xml |
4291 | @@ -870,9 +873,9 @@ |
4292 | rule_xml += "dstportstart='%s' dstportend='%s' " % \ |
4293 | (rule.from_port, rule.to_port) |
4294 | elif rule.protocol == 'icmp': |
4295 | - logging.info('rule.protocol: %r, rule.from_port: %r, ' |
4296 | - 'rule.to_port: %r' % |
4297 | - (rule.protocol, rule.from_port, rule.to_port)) |
4298 | + LOG.info('rule.protocol: %r, rule.from_port: %r, ' |
4299 | + 'rule.to_port: %r', rule.protocol, |
4300 | + rule.from_port, rule.to_port) |
4301 | if rule.from_port != -1: |
4302 | rule_xml += "type='%s' " % rule.from_port |
4303 | if rule.to_port != -1: |
4304 | |
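Several failure paths above change `logging.error(...)` to `LOG.exception(...)` inside `except` blocks. The difference is that `exception()` logs at ERROR severity and appends the active traceback to the record. A minimal sketch (Python 3 syntax; the function name and error are hypothetical stand-ins for a libvirt failure):

```python
import io
import logging

LOG = logging.getLogger('nova.virt.libvirt_conn')
LOG.setLevel(logging.DEBUG)
stream = io.StringIO()
LOG.addHandler(logging.StreamHandler(stream))

def wait_for_reboot():
    # Hypothetical failure standing in for a libvirt error.
    raise RuntimeError('domain not found')

try:
    wait_for_reboot()
except Exception as exn:
    # Unlike LOG.error, LOG.exception records the current traceback
    # alongside the message, at ERROR severity.
    LOG.exception('_wait_for_reboot failed: %s', exn)
```

Inspecting `stream.getvalue()` afterwards shows the message followed by the full `Traceback (most recent call last):` block, which plain `LOG.error` would omit.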
4305 | === modified file 'nova/virt/xenapi/fake.py' |
4306 | --- nova/virt/xenapi/fake.py 2010-12-28 18:53:32 +0000 |
4307 | +++ nova/virt/xenapi/fake.py 2011-01-07 19:51:33 +0000 |
4308 | @@ -52,12 +52,12 @@ |
4309 | |
4310 | |
4311 | import datetime |
4312 | -import logging |
4313 | import uuid |
4314 | |
4315 | from pprint import pformat |
4316 | |
4317 | from nova import exception |
4318 | +from nova import log as logging |
4319 | |
4320 | |
4321 | _CLASSES = ['host', 'network', 'session', 'SR', 'VBD',\ |
4322 | @@ -65,9 +65,11 @@ |
4323 | |
4324 | _db_content = {} |
4325 | |
4326 | +LOG = logging.getLogger("nova.virt.xenapi.fake") |
4327 | + |
4328 | |
4329 | def log_db_contents(msg=None): |
4330 | - logging.debug(_("%s: _db_content => %s"), msg or "", pformat(_db_content)) |
4331 | + LOG.debug(_("%s: _db_content => %s"), msg or "", pformat(_db_content)) |
4332 | |
4333 | |
4334 | def reset(): |
4335 | @@ -242,9 +244,9 @@ |
4336 | full_params = (self._session,) + params |
4337 | meth = getattr(self, methodname, None) |
4338 | if meth is None: |
4339 | - logging.warn('Raising NotImplemented') |
4340 | + LOG.debug(_('Raising NotImplemented')) |
4341 | raise NotImplementedError( |
4342 | - 'xenapi.fake does not have an implementation for %s' % |
4343 | + _('xenapi.fake does not have an implementation for %s') % |
4344 | methodname) |
4345 | return meth(*full_params) |
4346 | |
4347 | @@ -278,12 +280,12 @@ |
4348 | if impl is not None: |
4349 | |
4350 | def callit(*params): |
4351 | - logging.warn('Calling %s %s', name, impl) |
4352 | + LOG.debug(_('Calling %s %s'), name, impl) |
4353 | self._check_session(params) |
4354 | return impl(*params) |
4355 | return callit |
4356 | if self._is_gettersetter(name, True): |
4357 | - logging.warn('Calling getter %s', name) |
4358 | + LOG.debug(_('Calling getter %s'), name) |
4359 | return lambda *params: self._getter(name, params) |
4360 | elif self._is_create(name): |
4361 | return lambda *params: self._create(name, params) |
4362 | @@ -333,10 +335,10 @@ |
4363 | field in _db_content[cls][ref]): |
4364 | return _db_content[cls][ref][field] |
4365 | |
4366 | - logging.error('Raising NotImplemented') |
4367 | + LOG.debug(_('Raising NotImplemented')) |
4368 | raise NotImplementedError( |
4369 | - 'xenapi.fake does not have an implementation for %s or it has ' |
4370 | - 'been called with the wrong number of arguments' % name) |
4371 | + _('xenapi.fake does not have an implementation for %s or it has ' |
4372 | + 'been called with the wrong number of arguments') % name) |
4373 | |
4374 | def _setter(self, name, params): |
4375 | self._check_session(params) |
4376 | @@ -351,7 +353,7 @@ |
4377 | field in _db_content[cls][ref]): |
4378 | _db_content[cls][ref][field] = val |
4379 | |
4380 | - logging.warn('Raising NotImplemented') |
4381 | + LOG.debug(_('Raising NotImplemented')) |
4382 | raise NotImplementedError( |
4383 | 'xenapi.fake does not have an implementation for %s or it has ' |
4384 | 'been called with the wrong number of arguments or the database ' |
4385 | @@ -399,7 +401,7 @@ |
4386 | self._session not in _db_content['session']): |
4387 | raise Failure(['HANDLE_INVALID', 'session', self._session]) |
4388 | if len(params) == 0 or params[0] != self._session: |
4389 | - logging.warn('Raising NotImplemented') |
4390 | + LOG.debug(_('Raising NotImplemented')) |
4391 | raise NotImplementedError('Call to XenAPI without using .xenapi') |
4392 | |
4393 | def _check_arg_count(self, params, expected): |
4394 | |
4395 | === modified file 'nova/virt/xenapi/vm_utils.py' |
4396 | --- nova/virt/xenapi/vm_utils.py 2010-12-30 21:23:14 +0000 |
4397 | +++ nova/virt/xenapi/vm_utils.py 2011-01-07 19:51:33 +0000 |
4398 | @@ -19,7 +19,6 @@ |
4399 | their attributes like VDIs, VIFs, as well as their lookup functions. |
4400 | """ |
4401 | |
4402 | -import logging |
4403 | import pickle |
4404 | import urllib |
4405 | from xml.dom import minidom |
4406 | @@ -27,6 +26,7 @@ |
4407 | from eventlet import event |
4408 | from nova import exception |
4409 | from nova import flags |
4410 | +from nova import log as logging |
4411 | from nova import utils |
4412 | from nova.auth.manager import AuthManager |
4413 | from nova.compute import instance_types |
4414 | @@ -37,6 +37,7 @@ |
4415 | |
4416 | |
4417 | FLAGS = flags.FLAGS |
4418 | +LOG = logging.getLogger("nova.virt.xenapi.vm_utils") |
4419 | |
4420 | XENAPI_POWER_STATE = { |
4421 | 'Halted': power_state.SHUTDOWN, |
4422 | @@ -121,9 +122,9 @@ |
4423 | rec['HVM_boot_params'] = {'order': 'dc'} |
4424 | rec['platform'] = {'acpi': 'true', 'apic': 'true', |
4425 | 'pae': 'true', 'viridian': 'true'} |
4426 | - logging.debug('Created VM %s...', instance.name) |
4427 | + LOG.debug(_('Created VM %s...'), instance.name) |
4428 | vm_ref = session.call_xenapi('VM.create', rec) |
4429 | - logging.debug(_('Created VM %s as %s.'), instance.name, vm_ref) |
4430 | + LOG.debug(_('Created VM %s as %s.'), instance.name, vm_ref) |
4431 | return vm_ref |
4432 | |
4433 | @classmethod |
4434 | @@ -143,10 +144,9 @@ |
4435 | vbd_rec['qos_algorithm_type'] = '' |
4436 | vbd_rec['qos_algorithm_params'] = {} |
4437 | vbd_rec['qos_supported_algorithms'] = [] |
4438 | - logging.debug(_('Creating VBD for VM %s, VDI %s ... '), |
4439 | - vm_ref, vdi_ref) |
4440 | + LOG.debug(_('Creating VBD for VM %s, VDI %s ... '), vm_ref, vdi_ref) |
4441 | vbd_ref = session.call_xenapi('VBD.create', vbd_rec) |
4442 | - logging.debug(_('Created VBD %s for VM %s, VDI %s.'), vbd_ref, vm_ref, |
4443 | + LOG.debug(_('Created VBD %s for VM %s, VDI %s.'), vbd_ref, vm_ref, |
4444 | vdi_ref) |
4445 | return vbd_ref |
4446 | |
4447 | @@ -161,7 +161,7 @@ |
4448 | if vbd_rec['userdevice'] == str(number): |
4449 | return vbd |
4450 | except cls.XenAPI.Failure, exc: |
4451 | - logging.warn(exc) |
4452 | + LOG.exception(exc) |
4453 | raise StorageError(_('VBD not found in instance %s') % vm_ref) |
4454 | |
4455 | @classmethod |
4456 | @@ -170,7 +170,7 @@ |
4457 | try: |
4458 | vbd_ref = session.call_xenapi('VBD.unplug', vbd_ref) |
4459 | except cls.XenAPI.Failure, exc: |
4460 | - logging.warn(exc) |
4461 | + LOG.exception(exc) |
4462 | if exc.details[0] != 'DEVICE_ALREADY_DETACHED': |
4463 | raise StorageError(_('Unable to unplug VBD %s') % vbd_ref) |
4464 | |
4465 | @@ -183,7 +183,7 @@ |
4466 | #with Josh Kearney |
4467 | session.wait_for_task(0, task) |
4468 | except cls.XenAPI.Failure, exc: |
4469 | - logging.warn(exc) |
4470 | + LOG.exception(exc) |
4471 | raise StorageError(_('Unable to destroy VBD %s') % vbd_ref) |
4472 | |
4473 | @classmethod |
4474 | @@ -199,11 +199,11 @@ |
4475 | vif_rec['other_config'] = {} |
4476 | vif_rec['qos_algorithm_type'] = '' |
4477 | vif_rec['qos_algorithm_params'] = {} |
4478 | - logging.debug(_('Creating VIF for VM %s, network %s.'), vm_ref, |
4479 | - network_ref) |
4480 | + LOG.debug(_('Creating VIF for VM %s, network %s.'), vm_ref, |
4481 | + network_ref) |
4482 | vif_ref = session.call_xenapi('VIF.create', vif_rec) |
4483 | - logging.debug(_('Created VIF %s for VM %s, network %s.'), vif_ref, |
4484 | - vm_ref, network_ref) |
4485 | + LOG.debug(_('Created VIF %s for VM %s, network %s.'), vif_ref, |
4486 | + vm_ref, network_ref) |
4487 | return vif_ref |
4488 | |
4489 | @classmethod |
4490 | @@ -213,8 +213,7 @@ |
4491 | """ |
4492 | #TODO(sirp): Add quiesce and VSS locking support when Windows support |
4493 | # is added |
4494 | - logging.debug(_("Snapshotting VM %s with label '%s'..."), |
4495 | - vm_ref, label) |
4496 | + LOG.debug(_("Snapshotting VM %s with label '%s'..."), vm_ref, label) |
4497 | |
4498 | vm_vdi_ref, vm_vdi_rec = get_vdi_for_vm_safely(session, vm_ref) |
4499 | vm_vdi_uuid = vm_vdi_rec["uuid"] |
4500 | @@ -227,8 +226,8 @@ |
4501 | template_vdi_rec = get_vdi_for_vm_safely(session, template_vm_ref)[1] |
4502 | template_vdi_uuid = template_vdi_rec["uuid"] |
4503 | |
4504 | - logging.debug(_('Created snapshot %s from VM %s.'), template_vm_ref, |
4505 | - vm_ref) |
4506 | + LOG.debug(_('Created snapshot %s from VM %s.'), template_vm_ref, |
4507 | + vm_ref) |
4508 | |
4509 | parent_uuid = wait_for_vhd_coalesce( |
4510 | session, instance_id, sr_ref, vm_vdi_ref, original_parent_uuid) |
4511 | @@ -241,8 +240,7 @@ |
4512 | """ Requests that the Glance plugin bundle the specified VDIs and |
4513 | push them into Glance using the specified human-friendly name. |
4514 | """ |
4515 | - logging.debug(_("Asking xapi to upload %s as '%s'"), |
4516 | - vdi_uuids, image_name) |
4517 | + LOG.debug(_("Asking xapi to upload %s as '%s'"), vdi_uuids, image_name) |
4518 | |
4519 | params = {'vdi_uuids': vdi_uuids, |
4520 | 'image_name': image_name, |
4521 | @@ -260,7 +258,7 @@ |
4522 | """ |
4523 | url = images.image_url(image) |
4524 | access = AuthManager().get_access_key(user, project) |
4525 | - logging.debug("Asking xapi to fetch %s as %s", url, access) |
4526 | + LOG.debug(_("Asking xapi to fetch %s as %s"), url, access) |
4527 | fn = (type != ImageType.KERNEL_RAMDISK) and 'get_vdi' or 'get_kernel' |
4528 | args = {} |
4529 | args['src_url'] = url |
4530 | @@ -278,7 +276,7 @@ |
4531 | |
4532 | @classmethod |
4533 | def lookup_image(cls, session, vdi_ref): |
4534 | - logging.debug("Looking up vdi %s for PV kernel", vdi_ref) |
4535 | + LOG.debug(_("Looking up vdi %s for PV kernel"), vdi_ref) |
4536 | fn = "is_vdi_pv" |
4537 | args = {} |
4538 | args['vdi-ref'] = vdi_ref |
4539 | @@ -289,7 +287,7 @@ |
4540 | pv = True |
4541 | elif pv_str.lower() == 'false': |
4542 | pv = False |
4543 | - logging.debug("PV Kernel in VDI:%d", pv) |
4544 | + LOG.debug(_("PV Kernel in VDI:%d"), pv) |
4545 | return pv |
4546 | |
4547 | @classmethod |
4548 | @@ -317,10 +315,9 @@ |
4549 | vdi = session.get_xenapi().VBD.get_VDI(vbd) |
4550 | # Test valid VDI |
4551 | record = session.get_xenapi().VDI.get_record(vdi) |
4552 | - logging.debug(_('VDI %s is still available'), |
4553 | - record['uuid']) |
4554 | + LOG.debug(_('VDI %s is still available'), record['uuid']) |
4555 | except cls.XenAPI.Failure, exc: |
4556 | - logging.warn(exc) |
4557 | + LOG.exception(exc) |
4558 | else: |
4559 | vdis.append(vdi) |
4560 | if len(vdis) > 0: |
4561 | @@ -331,10 +328,10 @@ |
4562 | @classmethod |
4563 | def compile_info(cls, record): |
4564 | """Fill record with VM status information""" |
4565 | - logging.info(_("(VM_UTILS) xenserver vm state -> |%s|"), |
4566 | - record['power_state']) |
4567 | - logging.info(_("(VM_UTILS) xenapi power_state -> |%s|"), |
4568 | - XENAPI_POWER_STATE[record['power_state']]) |
4569 | + LOG.info(_("(VM_UTILS) xenserver vm state -> |%s|"), |
4570 | + record['power_state']) |
4571 | + LOG.info(_("(VM_UTILS) xenapi power_state -> |%s|"), |
4572 | + XENAPI_POWER_STATE[record['power_state']]) |
4573 | return {'state': XENAPI_POWER_STATE[record['power_state']], |
4574 | 'max_mem': long(record['memory_static_max']) >> 10, |
4575 | 'mem': long(record['memory_dynamic_max']) >> 10, |
4576 | @@ -388,11 +385,9 @@ |
4577 | """ |
4578 | if 'vhd-parent' in vdi_rec['sm_config']: |
4579 | parent_uuid = vdi_rec['sm_config']['vhd-parent'] |
4580 | - #NOTE(sirp): changed xenapi -> get_xenapi() |
4581 | parent_ref = session.get_xenapi().VDI.get_by_uuid(parent_uuid) |
4582 | parent_rec = session.get_xenapi().VDI.get_record(parent_ref) |
4583 | - #NOTE(sirp): changed log -> logging |
4584 | - logging.debug(_("VHD %s has parent %s"), vdi_rec['uuid'], parent_ref) |
4585 | + LOG.debug(_("VHD %s has parent %s"), vdi_rec['uuid'], parent_ref) |
4586 | return parent_ref, parent_rec |
4587 | else: |
4588 | return None |
4589 | @@ -409,7 +404,7 @@ |
4590 | |
4591 | |
4592 | def scan_sr(session, instance_id, sr_ref): |
4593 | - logging.debug(_("Re-scanning SR %s"), sr_ref) |
4594 | + LOG.debug(_("Re-scanning SR %s"), sr_ref) |
4595 | task = session.call_xenapi('Async.SR.scan', sr_ref) |
4596 | session.wait_for_task(instance_id, task) |
4597 | |
4598 | @@ -433,10 +428,9 @@ |
4599 | scan_sr(session, instance_id, sr_ref) |
4600 | parent_uuid = get_vhd_parent_uuid(session, vdi_ref) |
4601 | if original_parent_uuid and (parent_uuid != original_parent_uuid): |
4602 | - logging.debug( |
4603 | - _("Parent %s doesn't match original parent %s, " |
4604 | - "waiting for coalesce..."), |
4605 | - parent_uuid, original_parent_uuid) |
4606 | + LOG.debug(_("Parent %s doesn't match original parent %s, " |
4607 | + "waiting for coalesce..."), parent_uuid, |
4608 | + original_parent_uuid) |
4609 | else: |
4610 | done.send(parent_uuid) |
4611 | |
4612 | |
4613 | === modified file 'nova/virt/xenapi/vmops.py' |
4614 | --- nova/virt/xenapi/vmops.py 2011-01-04 21:20:10 +0000 |
4615 | +++ nova/virt/xenapi/vmops.py 2011-01-07 19:51:33 +0000 |
4616 | @@ -20,10 +20,10 @@ |
4617 | """ |
4618 | |
4619 | import json |
4620 | -import logging |
4621 | |
4622 | from nova import db |
4623 | from nova import context |
4624 | +from nova import log as logging |
4625 | from nova import exception |
4626 | from nova import utils |
4627 | |
4628 | @@ -33,6 +33,9 @@ |
4629 | from nova.virt.xenapi.vm_utils import VMHelper |
4630 | from nova.virt.xenapi.vm_utils import ImageType |
4631 | |
4632 | +XenAPI = None |
4633 | +LOG = logging.getLogger("nova.virt.xenapi.vmops") |
4634 | + |
4635 | |
4636 | class VMOps(object): |
4637 | """ |
4638 | @@ -93,10 +96,9 @@ |
4639 | if network_ref: |
4640 | VMHelper.create_vif(self._session, vm_ref, |
4641 | network_ref, instance.mac_address) |
4642 | - logging.debug(_('Starting VM %s...'), vm_ref) |
4643 | + LOG.debug(_('Starting VM %s...'), vm_ref) |
4644 | self._session.call_xenapi('VM.start', vm_ref, False, False) |
4645 | - logging.info(_('Spawning VM %s created %s.'), instance.name, |
4646 | - vm_ref) |
4647 | + LOG.info(_('Spawning VM %s created %s.'), instance.name, vm_ref) |
4648 | |
4649 | # NOTE(armando): Do we really need to do this in virt? |
4650 | timer = utils.LoopingCall(f=None) |
4651 | @@ -107,12 +109,12 @@ |
4652 | db.instance_set_state(context.get_admin_context(), |
4653 | instance['id'], state) |
4654 | if state == power_state.RUNNING: |
4655 | - logging.debug(_('Instance %s: booted'), instance['name']) |
4656 | + LOG.debug(_('Instance %s: booted'), instance['name']) |
4657 | timer.stop() |
4658 | except Exception, exc: |
4659 | - logging.warn(exc) |
4660 | - logging.exception(_('instance %s: failed to boot'), |
4661 | - instance['name']) |
4662 | + LOG.warn(exc) |
4663 | + LOG.exception(_('instance %s: failed to boot'), |
4664 | + instance['name']) |
4665 | db.instance_set_state(context.get_admin_context(), |
4666 | instance['id'], |
4667 | power_state.SHUTDOWN) |
4668 | @@ -205,7 +207,7 @@ |
4669 | task = self._session.call_xenapi('Async.VM.hard_shutdown', vm) |
4670 | self._session.wait_for_task(instance.id, task) |
4671 | except self.XenAPI.Failure, exc: |
4672 | - logging.warn(exc) |
4673 | + LOG.exception(exc) |
4674 | |
4675 | # Disk clean-up |
4676 | if vdis: |
4677 | @@ -214,20 +216,20 @@ |
4678 | task = self._session.call_xenapi('Async.VDI.destroy', vdi) |
4679 | self._session.wait_for_task(instance.id, task) |
4680 | except self.XenAPI.Failure, exc: |
4681 | - logging.warn(exc) |
4682 | + LOG.exception(exc) |
4683 | # VM Destroy |
4684 | try: |
4685 | task = self._session.call_xenapi('Async.VM.destroy', vm) |
4686 | self._session.wait_for_task(instance.id, task) |
4687 | except self.XenAPI.Failure, exc: |
4688 | - logging.warn(exc) |
4689 | + LOG.exception(exc) |
4690 | |
4691 | def _wait_with_callback(self, instance_id, task, callback): |
4692 | ret = None |
4693 | try: |
4694 | ret = self._session.wait_for_task(instance_id, task) |
4695 | except self.XenAPI.Failure, exc: |
4696 | - logging.warn(exc) |
4697 | + LOG.exception(exc) |
4698 | callback(ret) |
4699 | |
4700 | def pause(self, instance, callback): |
4701 | |
4702 | === modified file 'nova/virt/xenapi/volume_utils.py' |
4703 | --- nova/virt/xenapi/volume_utils.py 2011-01-04 20:27:50 +0000 |
4704 | +++ nova/virt/xenapi/volume_utils.py 2011-01-07 19:51:33 +0000 |
4705 | @@ -21,16 +21,17 @@ |
4706 | |
4707 | import re |
4708 | import string |
4709 | -import logging |
4710 | |
4711 | from nova import db |
4712 | from nova import context |
4713 | from nova import exception |
4714 | from nova import flags |
4715 | +from nova import log as logging |
4716 | from nova import utils |
4717 | from nova.virt.xenapi import HelperBase |
4718 | |
4719 | FLAGS = flags.FLAGS |
4720 | +LOG = logging.getLogger("nova.virt.xenapi.volume_utils") |
4721 | |
4722 | |
4723 | class StorageError(Exception): |
4724 | @@ -53,7 +54,7 @@ |
4725 | """ |
4726 | sr_ref = session.get_xenapi().SR.get_by_name_label(label) |
4727 | if len(sr_ref) == 0: |
4728 | - logging.debug('Introducing %s...', label) |
4729 | + LOG.debug(_('Introducing %s...'), label) |
4730 | record = {} |
4731 | if 'chapuser' in info and 'chappassword' in info: |
4732 | record = {'target': info['targetHost'], |
4733 | @@ -70,10 +71,10 @@ |
4734 | session.get_xenapi_host(), |
4735 | record, |
4736 | '0', label, description, 'iscsi', '', False, {}) |
4737 | - logging.debug('Introduced %s as %s.', label, sr_ref) |
4738 | + LOG.debug(_('Introduced %s as %s.'), label, sr_ref) |
4739 | return sr_ref |
4740 | except cls.XenAPI.Failure, exc: |
4741 | - logging.warn(exc) |
4742 | + LOG.exception(exc) |
4743 | raise StorageError(_('Unable to create Storage Repository')) |
4744 | else: |
4745 | return sr_ref[0] |
4746 | @@ -85,32 +86,32 @@ |
4747 | vdi_ref = session.get_xenapi().VBD.get_VDI(vbd_ref) |
4748 | sr_ref = session.get_xenapi().VDI.get_SR(vdi_ref) |
4749 | except cls.XenAPI.Failure, exc: |
4750 | - logging.warn(exc) |
4751 | + LOG.exception(exc) |
4752 | raise StorageError(_('Unable to find SR from VBD %s') % vbd_ref) |
4753 | return sr_ref |
4754 | |
4755 | @classmethod |
4756 | def destroy_iscsi_storage(cls, session, sr_ref): |
4757 | """Forget the SR whilst preserving the state of the disk""" |
4758 | - logging.debug("Forgetting SR %s ... ", sr_ref) |
4759 | + LOG.debug(_("Forgetting SR %s ... "), sr_ref) |
4760 | pbds = [] |
4761 | try: |
4762 | pbds = session.get_xenapi().SR.get_PBDs(sr_ref) |
4763 | except cls.XenAPI.Failure, exc: |
4764 | - logging.warn('Ignoring exception %s when getting PBDs for %s', |
4765 | - exc, sr_ref) |
4766 | + LOG.warn(_('Ignoring exception %s when getting PBDs for %s'), |
4767 | + exc, sr_ref) |
4768 | for pbd in pbds: |
4769 | try: |
4770 | session.get_xenapi().PBD.unplug(pbd) |
4771 | except cls.XenAPI.Failure, exc: |
4772 | - logging.warn('Ignoring exception %s when unplugging PBD %s', |
4773 | - exc, pbd) |
4774 | + LOG.warn(_('Ignoring exception %s when unplugging PBD %s'), |
4775 | + exc, pbd) |
4776 | try: |
4777 | session.get_xenapi().SR.forget(sr_ref) |
4778 | - logging.debug("Forgetting SR %s done.", sr_ref) |
4779 | + LOG.debug(_("Forgetting SR %s done."), sr_ref) |
4780 | except cls.XenAPI.Failure, exc: |
4781 | - logging.warn('Ignoring exception %s when forgetting SR %s', |
4782 | - exc, sr_ref) |
4783 | + LOG.warn(_('Ignoring exception %s when forgetting SR %s'), exc, |
4784 | + sr_ref) |
4785 | |
4786 | @classmethod |
4787 | def introduce_vdi(cls, session, sr_ref): |
4788 | @@ -118,12 +119,12 @@ |
4789 | try: |
4790 | vdis = session.get_xenapi().SR.get_VDIs(sr_ref) |
4791 | except cls.XenAPI.Failure, exc: |
4792 | - logging.warn(exc) |
4793 | + LOG.exception(exc) |
4794 | raise StorageError(_('Unable to introduce VDI on SR %s') % sr_ref) |
4795 | try: |
4796 | vdi_rec = session.get_xenapi().VDI.get_record(vdis[0]) |
4797 | except cls.XenAPI.Failure, exc: |
4798 | - logging.warn(exc) |
4799 | + LOG.exception(exc) |
4800 | raise StorageError(_('Unable to get record' |
4801 | ' of VDI %s on') % vdis[0]) |
4802 | else: |
4803 | @@ -141,7 +142,7 @@ |
4804 | vdi_rec['xenstore_data'], |
4805 | vdi_rec['sm_config']) |
4806 | except cls.XenAPI.Failure, exc: |
4807 | - logging.warn(exc) |
4808 | + LOG.exception(exc) |
4809 | raise StorageError(_('Unable to introduce VDI for SR %s') |
4810 | % sr_ref) |
4811 | |
4812 | @@ -165,11 +166,8 @@ |
4813 | target_host = _get_target_host(iscsi_portal) |
4814 | target_port = _get_target_port(iscsi_portal) |
4815 | target_iqn = _get_iqn(iscsi_name, volume_id) |
4816 | - logging.debug('(vol_id,number,host,port,iqn): (%s,%s,%s,%s)', |
4817 | - volume_id, |
4818 | - target_host, |
4819 | - target_port, |
4820 | - target_iqn) |
4821 | + LOG.debug('(vol_id,number,host,port,iqn): (%s,%s,%s,%s)', |
4822 | + volume_id, target_host, target_port, target_iqn) |
4823 | if (device_number < 0) or \ |
4824 | (volume_id is None) or \ |
4825 | (target_host is None) or \ |
4826 | @@ -196,7 +194,7 @@ |
4827 | elif re.match('^[0-9]+$', mountpoint): |
4828 | return string.atoi(mountpoint, 10) |
4829 | else: |
4830 | - logging.warn('Mountpoint cannot be translated: %s', mountpoint) |
4831 | + LOG.warn(_('Mountpoint cannot be translated: %s'), mountpoint) |
4832 | return -1 |
4833 | |
4834 | |
4835 | @@ -257,7 +255,7 @@ |
4836 | "sendtargets -p %s" % |
4837 | volume_ref['host']) |
4838 | except exception.ProcessExecutionError, exc: |
4839 | - logging.warn(exc) |
4840 | + LOG.exception(exc) |
4841 | else: |
4842 | targets = r.splitlines() |
4843 | if len(_e) == 0 and len(targets) == 1: |
4844 | |
4845 | === modified file 'nova/virt/xenapi/volumeops.py' |
4846 | --- nova/virt/xenapi/volumeops.py 2010-12-23 03:35:41 +0000 |
4847 | +++ nova/virt/xenapi/volumeops.py 2011-01-07 19:51:33 +0000 |
4848 | @@ -17,14 +17,17 @@ |
4849 | """ |
4850 | Management class for Storage-related functions (attach, detach, etc). |
4851 | """ |
4852 | -import logging |
4853 | |
4854 | from nova import exception |
4855 | +from nova import log as logging |
4856 | from nova.virt.xenapi.vm_utils import VMHelper |
4857 | from nova.virt.xenapi.volume_utils import VolumeHelper |
4858 | from nova.virt.xenapi.volume_utils import StorageError |
4859 | |
4860 | |
4861 | +LOG = logging.getLogger("nova.virt.xenapi.volumeops") |
4862 | + |
4863 | + |
4864 | class VolumeOps(object): |
4865 | """ |
4866 | Management class for Volume-related tasks |
4867 | @@ -45,8 +48,8 @@ |
4868 | raise exception.NotFound(_('Instance %s not found') |
4869 | % instance_name) |
4870 | # NOTE: No Resource Pool concept so far |
4871 | - logging.debug(_("Attach_volume: %s, %s, %s"), |
4872 | - instance_name, device_path, mountpoint) |
4873 | + LOG.debug(_("Attach_volume: %s, %s, %s"), |
4874 | + instance_name, device_path, mountpoint) |
4875 | # Create the iSCSI SR, and the PDB through which hosts access SRs. |
4876 | # But first, retrieve target info, like Host, IQN, LUN and SCSIID |
4877 | vol_rec = VolumeHelper.parse_volume_info(device_path, mountpoint) |
4878 | @@ -61,7 +64,7 @@ |
4879 | try: |
4880 | vdi_ref = VolumeHelper.introduce_vdi(self._session, sr_ref) |
4881 | except StorageError, exc: |
4882 | - logging.warn(exc) |
4883 | + LOG.exception(exc) |
4884 | VolumeHelper.destroy_iscsi_storage(self._session, sr_ref) |
4885 | raise Exception(_('Unable to create VDI on SR %s for instance %s') |
4886 | % (sr_ref, |
4887 | @@ -73,7 +76,7 @@ |
4888 | vol_rec['deviceNumber'], |
4889 | False) |
4890 | except self.XenAPI.Failure, exc: |
4891 | - logging.warn(exc) |
4892 | + LOG.exception(exc) |
4893 | VolumeHelper.destroy_iscsi_storage(self._session, sr_ref) |
4894 | raise Exception(_('Unable to use SR %s for instance %s') |
4895 | % (sr_ref, |
4896 | @@ -84,13 +87,13 @@ |
4897 | vbd_ref) |
4898 | self._session.wait_for_task(vol_rec['deviceNumber'], task) |
4899 | except self.XenAPI.Failure, exc: |
4900 | - logging.warn(exc) |
4901 | + LOG.exception(exc) |
4902 | VolumeHelper.destroy_iscsi_storage(self._session, |
4903 | sr_ref) |
4904 | raise Exception(_('Unable to attach volume to instance %s') |
4905 | % instance_name) |
4906 | - logging.info(_('Mountpoint %s attached to instance %s'), |
4907 | - mountpoint, instance_name) |
4908 | + LOG.info(_('Mountpoint %s attached to instance %s'), |
4909 | + mountpoint, instance_name) |
4910 | |
4911 | def detach_volume(self, instance_name, mountpoint): |
4912 | """Detach volume storage to VM instance""" |
4913 | @@ -100,13 +103,13 @@ |
4914 | raise exception.NotFound(_('Instance %s not found') |
4915 | % instance_name) |
4916 | # Detach VBD from VM |
4917 | - logging.debug(_("Detach_volume: %s, %s"), instance_name, mountpoint) |
4918 | + LOG.debug(_("Detach_volume: %s, %s"), instance_name, mountpoint) |
4919 | device_number = VolumeHelper.mountpoint_to_number(mountpoint) |
4920 | try: |
4921 | vbd_ref = VMHelper.find_vbd_by_number(self._session, |
4922 | vm_ref, device_number) |
4923 | except StorageError, exc: |
4924 | - logging.warn(exc) |
4925 | + LOG.exception(exc) |
4926 | raise Exception(_('Unable to locate volume %s') % mountpoint) |
4927 | else: |
4928 | try: |
4929 | @@ -114,13 +117,13 @@ |
4930 | vbd_ref) |
4931 | VMHelper.unplug_vbd(self._session, vbd_ref) |
4932 | except StorageError, exc: |
4933 | - logging.warn(exc) |
4934 | + LOG.exception(exc) |
4935 | raise Exception(_('Unable to detach volume %s') % mountpoint) |
4936 | try: |
4937 | VMHelper.destroy_vbd(self._session, vbd_ref) |
4938 | except StorageError, exc: |
4939 | - logging.warn(exc) |
4940 | + LOG.exception(exc) |
4941 | # Forget SR |
4942 | VolumeHelper.destroy_iscsi_storage(self._session, sr_ref) |
4943 | - logging.info(_('Mountpoint %s detached from instance %s'), |
4944 | - mountpoint, instance_name) |
4945 | + LOG.info(_('Mountpoint %s detached from instance %s'), |
4946 | + mountpoint, instance_name) |
4947 | |
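Most hunks in these files replace `logging.warn(exc)` inside `except` blocks with `LOG.exception(exc)`. The practical difference: `exception()` logs at ERROR severity and appends the active traceback, so the failure's origin survives in the log instead of just the exception's message. A minimal illustration (the logger name and helper are illustrative only):

```python
import io
import logging

LOG = logging.getLogger("nova.demo")
_stream = io.StringIO()
LOG.addHandler(logging.StreamHandler(_stream))
LOG.setLevel(logging.DEBUG)

def log_failure():
    try:
        raise ValueError("boom")
    except ValueError as exc:
        LOG.warning(exc)    # message only: WARNING level, no traceback
        LOG.exception(exc)  # ERROR level plus the current traceback
    return _stream.getvalue()
```

Note that `exception()` only captures a traceback when called from inside an exception handler, which is exactly where these calls sit in the diff.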
4948 | === modified file 'nova/virt/xenapi_conn.py' |
4949 | --- nova/virt/xenapi_conn.py 2011-01-03 22:43:24 +0000 |
4950 | +++ nova/virt/xenapi_conn.py 2011-01-07 19:51:33 +0000 |
4951 | @@ -51,7 +51,6 @@ |
4952 | :iqn_prefix: IQN Prefix, e.g. 'iqn.2010-10.org.openstack' |
4953 | """ |
4954 | |
4955 | -import logging |
4956 | import sys |
4957 | import xmlrpclib |
4958 | |
4959 | @@ -62,9 +61,14 @@ |
4960 | from nova import db |
4961 | from nova import utils |
4962 | from nova import flags |
4963 | +from nova import log as logging |
4964 | from nova.virt.xenapi.vmops import VMOps |
4965 | from nova.virt.xenapi.volumeops import VolumeOps |
4966 | |
4967 | + |
4968 | +LOG = logging.getLogger("nova.virt.xenapi") |
4969 | + |
4970 | + |
4971 | FLAGS = flags.FLAGS |
4972 | |
4973 | flags.DEFINE_string('xenapi_connection_url', |
4974 | @@ -256,7 +260,7 @@ |
4975 | return |
4976 | elif status == "success": |
4977 | result = self._session.xenapi.task.get_result(task) |
4978 | - logging.info(_("Task [%s] %s status: success %s") % ( |
4979 | + LOG.info(_("Task [%s] %s status: success %s") % ( |
4980 | name, |
4981 | task, |
4982 | result)) |
4983 | @@ -264,7 +268,7 @@ |
4984 | else: |
4985 | error_info = self._session.xenapi.task.get_error_info(task) |
4986 | action["error"] = str(error_info) |
4987 | - logging.warn(_("Task [%s] %s status: %s %s") % ( |
4988 | + LOG.warn(_("Task [%s] %s status: %s %s") % ( |
4989 | name, |
4990 | task, |
4991 | status, |
4992 | @@ -272,7 +276,7 @@ |
4993 | done.send_exception(self.XenAPI.Failure(error_info)) |
4994 | db.instance_action_create(context.get_admin_context(), action) |
4995 | except self.XenAPI.Failure, exc: |
4996 | - logging.warn(exc) |
4997 | + LOG.warn(exc) |
4998 | done.send_exception(*sys.exc_info()) |
4999 | |
5000 | def _unwrap_plugin_exceptions(self, func, *args, **kwargs): |
Previous discussion was here: https://code.launchpad.net/~anso/nova/newlog2/+merge/45100
This rebranch fixes the cloudpipe directory issues in bzr that Ewan mentioned.