Merge lp:~jaypipes/nova/i18n-strings into lp:~hudson-openstack/nova/trunk

Proposed by Jay Pipes
Status: Merged
Approved by: Rick Clark
Approved revision: 463
Merged at revision: 474
Proposed branch: lp:~jaypipes/nova/i18n-strings
Merge into: lp:~hudson-openstack/nova/trunk
Diff against target: 2187 lines (+313/-288)
39 files modified
nova/api/cloudpipe/__init__.py (+2/-2)
nova/api/ec2/__init__.py (+3/-3)
nova/api/ec2/apirequest.py (+2/-2)
nova/api/ec2/cloud.py (+28/-24)
nova/api/ec2/metadatarequesthandler.py (+1/-1)
nova/api/openstack/__init__.py (+2/-2)
nova/auth/dbdriver.py (+10/-10)
nova/auth/fakeldap.py (+1/-1)
nova/auth/ldapdriver.py (+38/-31)
nova/auth/manager.py (+15/-15)
nova/cloudpipe/pipelib.py (+1/-1)
nova/compute/api.py (+6/-6)
nova/compute/disk.py (+9/-7)
nova/compute/instance_types.py (+2/-1)
nova/compute/manager.py (+16/-16)
nova/compute/monitor.py (+6/-6)
nova/crypto.py (+9/-9)
nova/db/sqlalchemy/api.py (+24/-24)
nova/exception.py (+4/-4)
nova/fakerabbit.py (+6/-6)
nova/image/glance.py (+4/-4)
nova/image/s3.py (+2/-1)
nova/network/linux_net.py (+5/-5)
nova/network/manager.py (+9/-8)
nova/objectstore/handler.py (+14/-10)
nova/rpc.py (+17/-17)
nova/scheduler/chance.py (+1/-1)
nova/scheduler/driver.py (+1/-1)
nova/scheduler/manager.py (+1/-1)
nova/scheduler/simple.py (+7/-6)
nova/service.py (+7/-7)
nova/twistd.py (+3/-3)
nova/utils.py (+7/-7)
nova/virt/connection.py (+1/-1)
nova/virt/fake.py (+2/-1)
nova/virt/libvirt_conn.py (+24/-21)
nova/virt/xenapi_conn.py (+8/-8)
nova/volume/driver.py (+5/-5)
nova/volume/manager.py (+10/-10)
To merge this branch: bzr merge lp:~jaypipes/nova/i18n-strings
Reviewer Review Type Date Requested Status
Rick Clark (community) Approve
David Pravec (community) Approve
Thierry Carrez (community) Approve
Soren Hansen (community) Approve
Review via email: mp+44128@code.launchpad.net

Description of the change

All merged with trunk; let's see if a new merge proposal (with no pre-req) works...

Revision history for this message
Soren Hansen (soren) wrote :

Good luck :)

review: Approve
Revision history for this message
Thierry Carrez (ttx) wrote :

Some issues around nova/service.py:
 - "nova/service.py.THIS" is probably superfluous
 - Strings in nova/service.py were not _marked

(Apparently service.py was removed and re-added in the eventlet branch, which might have caused this weird merge/conflict situation)

review: Needs Fixing
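The fix Thierry asked for is the marking convention this whole branch applies: wrapping each user-visible string literal in `_()` so gettext tooling can extract it. A minimal sketch of the difference (domain name and strings here are illustrative, not taken from the real Nova catalogs):

```python
import gettext

# Install _() into builtins, much as nova does at startup. With no
# compiled message catalog on disk, _() falls back to returning its
# argument unchanged, so marking strings is safe before any
# translation exists.
gettext.install("nova")

# Unmarked: invisible to extraction tools, can never be translated.
plain = "Volume status must be available"

# Marked: the literal becomes an extractable msgid. Note that the %
# interpolation happens after the _() call, on the translated string.
device = "/dev/xvda1"
marked = _("Invalid device specified: %s. "
           "Example device: /dev/vdb") % device

print(marked)
```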
Revision history for this message
Jay Pipes (jaypipes) wrote :

This branch is becoming my worst nightmare :)

Revision history for this message
Jay Pipes (jaypipes) wrote :

OK, should be ready to go again...

Revision history for this message
Thierry Carrez (ttx) wrote :

Looks good!

review: Approve
Revision history for this message
Rick Clark (dendrobates) wrote :

looks good and tedious. :)

review: Approve
Revision history for this message
David Pravec (alekibango) wrote :

gj.

review: Approve
Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

The attempt to merge lp:~jaypipes/nova/i18n-strings into lp:nova failed. Below is the output from the failed tests.

nova.tests.access_unittest
  AccessTestCase
    test_001_allow_all ... [OK]
    test_002_allow_none ... [OK]
    test_003_allow_project_manager ... [OK]
    test_004_allow_sys_and_net ... [OK]
nova.tests.api_unittest
  ApiEc2TestCase
    test_authorize_revoke_security_group_cidr ... [ERROR]
    test_authorize_revoke_security_group_foreign_group ... [ERROR]
    test_create_delete_security_group ... [ERROR]
    test_describe_instances ... [ERROR]
    test_get_all_key_pairs ... [ERROR]
    test_get_all_security_groups ... [ERROR]
  XmlConversionTestCase
    test_number_conversion ... [OK]
nova.tests.auth_unittest
  AuthManagerDbTestCase
    test_004_signature_is_valid ... [OK]
    test_005_can_get_credentials ... [OK]
    test_add_user_role_doesnt_infect_project_roles ... [OK]
    test_adding_role_to_project_is_ignored_unless_added_to_user ... [OK]
    test_can_add_and_remove_user_role ... [OK]
    test_can_add_remove_user_with_role ... [OK]
    test_can_add_user_to_project ... [OK]
    test_can_create_and_get_project ... [OK]
    test_can_create_and_get_project_with_attributes ... [OK]
    test_can_create_project_with_manager ... [OK]
    test_can_delete_project ... [OK]
    test_can_delete_user ... [OK]
    test_can_generate_x509 ... [ERROR]
    test_can_list_project_roles ... [OK]
    test_can_list_projects ... [OK]
    test_can_list_user_roles ... [OK]
    test_can_list_users ... [OK]
    test_can_modify_project ... [OK]
    test_can_modify_users ... [OK]
    test_can_remove_project_role_but_keep_user_role ... [OK]
    test_can_remove_user_from_project ... [OK]
    test_can_remove_user_roles ... [OK]
    test_can_retrieve_project_by_user ... [OK]
    test_create_and_find_user ... [OK]
    test_create_and_find_with_properties ... [OK]
    test_create_project_assigns_manager_to_me...

Revision history for this message
Jay Pipes (jaypipes) wrote :

Merged trunk yet again and resolved conflicts...

Revision history for this message
OpenStack Infra (hudson-openstack) wrote :

There are additional revisions which have not been approved in review. Please seek review and approval of these new revisions.

Revision history for this message
Rick Clark (dendrobates) wrote :

approved

review: Approve
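Throughout the diff below, translation is applied to the format string before `%` interpolation (e.g. `_("... %s") % device`, never `_("... %s" % device)`). A minimal sketch of why that order matters, using a toy catalog in place of a compiled .mo file (the Spanish string is illustrative):

```python
# Toy catalog standing in for a compiled gettext .mo file.
CATALOG = {"Invalid device specified: %s": "Dispositivo no válido: %s"}

def _(message):
    # Minimal gettext-style lookup: return the translation if the
    # exact msgid is in the catalog, else the original string.
    return CATALOG.get(message, message)

device = "/dev/xvda1"

# The pattern used in this branch: translate the format string first,
# then interpolate. The msgid stays stable across all devices.
right = _("Invalid device specified: %s") % device

# Interpolating first produces a different msgid per device, which can
# never match a catalog entry, so it falls through untranslated.
wrong = _("Invalid device specified: %s" % device)

print(right)  # Dispositivo no válido: /dev/xvda1
print(wrong)  # Invalid device specified: /dev/xvda1
```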

Preview Diff

1=== modified file 'nova/api/cloudpipe/__init__.py'
2--- nova/api/cloudpipe/__init__.py 2010-09-23 18:21:14 +0000
3+++ nova/api/cloudpipe/__init__.py 2010-12-22 15:43:09 +0000
4@@ -45,7 +45,7 @@
5 def __call__(self, req):
6 if req.method == 'POST':
7 return self.sign_csr(req)
8- _log.debug("Cloudpipe path is %s" % req.path_info)
9+ _log.debug(_("Cloudpipe path is %s") % req.path_info)
10 if req.path_info.endswith("/getca/"):
11 return self.send_root_ca(req)
12 return webob.exc.HTTPNotFound()
13@@ -56,7 +56,7 @@
14 return instance['project_id']
15
16 def send_root_ca(self, req):
17- _log.debug("Getting root ca")
18+ _log.debug(_("Getting root ca"))
19 project_id = self.get_project_id_from_ip(req.remote_addr)
20 res = webob.Response()
21 res.headers["Content-Type"] = "text/plain"
22
23=== modified file 'nova/api/ec2/__init__.py'
24--- nova/api/ec2/__init__.py 2010-11-18 18:52:54 +0000
25+++ nova/api/ec2/__init__.py 2010-12-22 15:43:09 +0000
26@@ -77,7 +77,7 @@
27 req.host,
28 req.path)
29 except exception.Error, ex:
30- logging.debug("Authentication Failure: %s" % ex)
31+ logging.debug(_("Authentication Failure: %s") % ex)
32 raise webob.exc.HTTPForbidden()
33
34 # Authenticated!
35@@ -120,9 +120,9 @@
36 except:
37 raise webob.exc.HTTPBadRequest()
38
39- _log.debug('action: %s' % action)
40+ _log.debug(_('action: %s') % action)
41 for key, value in args.items():
42- _log.debug('arg: %s\t\tval: %s' % (key, value))
43+ _log.debug(_('arg: %s\t\tval: %s') % (key, value))
44
45 # Success!
46 req.environ['ec2.controller'] = controller
47
48=== modified file 'nova/api/ec2/apirequest.py'
49--- nova/api/ec2/apirequest.py 2010-10-21 22:26:06 +0000
50+++ nova/api/ec2/apirequest.py 2010-12-22 15:43:09 +0000
51@@ -92,8 +92,8 @@
52 method = getattr(self.controller,
53 _camelcase_to_underscore(self.action))
54 except AttributeError:
55- _error = ('Unsupported API request: controller = %s,'
56- 'action = %s') % (self.controller, self.action)
57+ _error = _('Unsupported API request: controller = %s,'
58+ 'action = %s') % (self.controller, self.action)
59 _log.warning(_error)
60 # TODO: Raise custom exception, trap in apiserver,
61 # and reraise as 400 error.
62
63=== modified file 'nova/api/ec2/cloud.py'
64--- nova/api/ec2/cloud.py 2010-12-14 17:11:30 +0000
65+++ nova/api/ec2/cloud.py 2010-12-22 15:43:09 +0000
66@@ -114,7 +114,7 @@
67 start = os.getcwd()
68 os.chdir(FLAGS.ca_path)
69 # TODO(vish): Do this with M2Crypto instead
70- utils.runthis("Generating root CA: %s", "sh genrootca.sh")
71+ utils.runthis(_("Generating root CA: %s"), "sh genrootca.sh")
72 os.chdir(start)
73
74 def _get_mpi_data(self, context, project_id):
75@@ -318,11 +318,11 @@
76 ip_protocol = str(ip_protocol)
77
78 if ip_protocol.upper() not in ['TCP', 'UDP', 'ICMP']:
79- raise InvalidInputException('%s is not a valid ipProtocol' %
80+ raise InvalidInputException(_('%s is not a valid ipProtocol') %
81 (ip_protocol,))
82 if ((min(from_port, to_port) < -1) or
83 (max(from_port, to_port) > 65535)):
84- raise InvalidInputException('Invalid port range')
85+ raise InvalidInputException(_('Invalid port range'))
86
87 values['protocol'] = ip_protocol
88 values['from_port'] = from_port
89@@ -360,7 +360,8 @@
90
91 criteria = self._revoke_rule_args_to_dict(context, **kwargs)
92 if criteria == None:
93- raise exception.ApiError("No rule for the specified parameters.")
94+ raise exception.ApiError(_("No rule for the specified "
95+ "parameters."))
96
97 for rule in security_group.rules:
98 match = True
99@@ -371,7 +372,7 @@
100 db.security_group_rule_destroy(context, rule['id'])
101 self._trigger_refresh_security_group(context, security_group)
102 return True
103- raise exception.ApiError("No rule for the specified parameters.")
104+ raise exception.ApiError(_("No rule for the specified parameters."))
105
106 # TODO(soren): This has only been tested with Boto as the client.
107 # Unfortunately, it seems Boto is using an old API
108@@ -387,8 +388,8 @@
109 values['parent_group_id'] = security_group.id
110
111 if self._security_group_rule_exists(security_group, values):
112- raise exception.ApiError('This rule already exists in group %s' %
113- group_name)
114+ raise exception.ApiError(_('This rule already exists in group %s')
115+ % group_name)
116
117 security_group_rule = db.security_group_rule_create(context, values)
118
119@@ -416,7 +417,7 @@
120 def create_security_group(self, context, group_name, group_description):
121 self.compute_api.ensure_default_security_group(context)
122 if db.security_group_exists(context, context.project_id, group_name):
123- raise exception.ApiError('group %s already exists' % group_name)
124+ raise exception.ApiError(_('group %s already exists') % group_name)
125
126 group = {'user_id': context.user.id,
127 'project_id': context.project_id,
128@@ -529,13 +530,13 @@
129 def attach_volume(self, context, volume_id, instance_id, device, **kwargs):
130 volume_ref = db.volume_get_by_ec2_id(context, volume_id)
131 if not re.match("^/dev/[a-z]d[a-z]+$", device):
132- raise exception.ApiError("Invalid device specified: %s. "
133- "Example device: /dev/vdb" % device)
134+ raise exception.ApiError(_("Invalid device specified: %s. "
135+ "Example device: /dev/vdb") % device)
136 # TODO(vish): abstract status checking?
137 if volume_ref['status'] != "available":
138- raise exception.ApiError("Volume status must be available")
139+ raise exception.ApiError(_("Volume status must be available"))
140 if volume_ref['attach_status'] == "attached":
141- raise exception.ApiError("Volume is already attached")
142+ raise exception.ApiError(_("Volume is already attached"))
143 internal_id = ec2_id_to_internal_id(instance_id)
144 instance_ref = self.compute_api.get_instance(context, internal_id)
145 host = instance_ref['host']
146@@ -557,10 +558,10 @@
147 instance_ref = db.volume_get_instance(context.elevated(),
148 volume_ref['id'])
149 if not instance_ref:
150- raise exception.ApiError("Volume isn't attached to anything!")
151+ raise exception.ApiError(_("Volume isn't attached to anything!"))
152 # TODO(vish): abstract status checking?
153 if volume_ref['status'] == "available":
154- raise exception.ApiError("Volume is already detached")
155+ raise exception.ApiError(_("Volume is already detached"))
156 try:
157 host = instance_ref['host']
158 rpc.cast(context,
159@@ -689,10 +690,11 @@
160 def allocate_address(self, context, **kwargs):
161 # check quota
162 if quota.allowed_floating_ips(context, 1) < 1:
163- logging.warn("Quota exceeeded for %s, tried to allocate address",
164+ logging.warn(_("Quota exceeeded for %s, tried to allocate "
165+ "address"),
166 context.project_id)
167- raise quota.QuotaError("Address quota exceeded. You cannot "
168- "allocate any more addresses")
169+ raise quota.QuotaError(_("Address quota exceeded. You cannot "
170+ "allocate any more addresses"))
171 network_topic = self._get_network_topic(context)
172 public_ip = rpc.call(context,
173 network_topic,
174@@ -805,7 +807,7 @@
175 # TODO: return error if not authorized
176 volume_ref = db.volume_get_by_ec2_id(context, volume_id)
177 if volume_ref['status'] != "available":
178- raise exception.ApiError("Volume status must be available")
179+ raise exception.ApiError(_("Volume status must be available"))
180 now = datetime.datetime.utcnow()
181 db.volume_update(context, volume_ref['id'], {'status': 'deleting',
182 'terminated_at': now})
183@@ -836,11 +838,12 @@
184
185 def describe_image_attribute(self, context, image_id, attribute, **kwargs):
186 if attribute != 'launchPermission':
187- raise exception.ApiError('attribute not supported: %s' % attribute)
188+ raise exception.ApiError(_('attribute not supported: %s')
189+ % attribute)
190 try:
191 image = self.image_service.show(context, image_id)
192 except IndexError:
193- raise exception.ApiError('invalid id: %s' % image_id)
194+ raise exception.ApiError(_('invalid id: %s') % image_id)
195 result = {'image_id': image_id, 'launchPermission': []}
196 if image['isPublic']:
197 result['launchPermission'].append({'group': 'all'})
198@@ -850,13 +853,14 @@
199 operation_type, **kwargs):
200 # TODO(devcamcar): Support users and groups other than 'all'.
201 if attribute != 'launchPermission':
202- raise exception.ApiError('attribute not supported: %s' % attribute)
203+ raise exception.ApiError(_('attribute not supported: %s')
204+ % attribute)
205 if not 'user_group' in kwargs:
206- raise exception.ApiError('user or group not specified')
207+ raise exception.ApiError(_('user or group not specified'))
208 if len(kwargs['user_group']) != 1 and kwargs['user_group'][0] != 'all':
209- raise exception.ApiError('only group "all" is supported')
210+ raise exception.ApiError(_('only group "all" is supported'))
211 if not operation_type in ['add', 'remove']:
212- raise exception.ApiError('operation_type must be add or remove')
213+ raise exception.ApiError(_('operation_type must be add or remove'))
214 return self.image_service.modify(context, image_id, operation_type)
215
216 def update_image(self, context, image_id, **kwargs):
217
218=== modified file 'nova/api/ec2/metadatarequesthandler.py'
219--- nova/api/ec2/metadatarequesthandler.py 2010-10-21 22:26:06 +0000
220+++ nova/api/ec2/metadatarequesthandler.py 2010-12-22 15:43:09 +0000
221@@ -65,7 +65,7 @@
222 cc = cloud.CloudController()
223 meta_data = cc.get_metadata(req.remote_addr)
224 if meta_data is None:
225- logging.error('Failed to get metadata for ip: %s' %
226+ logging.error(_('Failed to get metadata for ip: %s') %
227 req.remote_addr)
228 raise webob.exc.HTTPNotFound()
229 data = self.lookup(req.path_info, meta_data)
230
231=== modified file 'nova/api/openstack/__init__.py'
232--- nova/api/openstack/__init__.py 2010-12-14 08:25:39 +0000
233+++ nova/api/openstack/__init__.py 2010-12-22 15:43:09 +0000
234@@ -66,7 +66,7 @@
235 try:
236 return req.get_response(self.application)
237 except Exception as ex:
238- logging.warn("Caught error: %s" % str(ex))
239+ logging.warn(_("Caught error: %s") % str(ex))
240 logging.debug(traceback.format_exc())
241 exc = webob.exc.HTTPInternalServerError(explanation=str(ex))
242 return faults.Fault(exc)
243@@ -133,7 +133,7 @@
244 if delay:
245 # TODO(gundlach): Get the retry-after format correct.
246 exc = webob.exc.HTTPRequestEntityTooLarge(
247- explanation='Too many requests.',
248+ explanation=_('Too many requests.'),
249 headers={'Retry-After': time.time() + delay})
250 raise faults.Fault(exc)
251 return self.application
252
253=== modified file 'nova/auth/dbdriver.py'
254--- nova/auth/dbdriver.py 2010-10-22 00:15:21 +0000
255+++ nova/auth/dbdriver.py 2010-12-22 15:43:09 +0000
256@@ -37,7 +37,6 @@
257 def __init__(self):
258 """Imports the LDAP module"""
259 pass
260- db
261
262 def __enter__(self):
263 return self
264@@ -83,7 +82,7 @@
265 user_ref = db.user_create(context.get_admin_context(), values)
266 return self._db_user_to_auth_user(user_ref)
267 except exception.Duplicate, e:
268- raise exception.Duplicate('User %s already exists' % name)
269+ raise exception.Duplicate(_('User %s already exists') % name)
270
271 def _db_user_to_auth_user(self, user_ref):
272 return {'id': user_ref['id'],
273@@ -105,8 +104,9 @@
274 """Create a project"""
275 manager = db.user_get(context.get_admin_context(), manager_uid)
276 if not manager:
277- raise exception.NotFound("Project can't be created because "
278- "manager %s doesn't exist" % manager_uid)
279+ raise exception.NotFound(_("Project can't be created because "
280+ "manager %s doesn't exist")
281+ % manager_uid)
282
283 # description is a required attribute
284 if description is None:
285@@ -133,8 +133,8 @@
286 try:
287 project = db.project_create(context.get_admin_context(), values)
288 except exception.Duplicate:
289- raise exception.Duplicate("Project can't be created because "
290- "project %s already exists" % name)
291+ raise exception.Duplicate(_("Project can't be created because "
292+ "project %s already exists") % name)
293
294 for member in members:
295 db.project_add_member(context.get_admin_context(),
296@@ -155,8 +155,8 @@
297 if manager_uid:
298 manager = db.user_get(context.get_admin_context(), manager_uid)
299 if not manager:
300- raise exception.NotFound("Project can't be modified because "
301- "manager %s doesn't exist" %
302+ raise exception.NotFound(_("Project can't be modified because "
303+ "manager %s doesn't exist") %
304 manager_uid)
305 values['project_manager'] = manager['id']
306 if description:
307@@ -243,8 +243,8 @@
308 def _validate_user_and_project(self, user_id, project_id):
309 user = db.user_get(context.get_admin_context(), user_id)
310 if not user:
311- raise exception.NotFound('User "%s" not found' % user_id)
312+ raise exception.NotFound(_('User "%s" not found') % user_id)
313 project = db.project_get(context.get_admin_context(), project_id)
314 if not project:
315- raise exception.NotFound('Project "%s" not found' % project_id)
316+ raise exception.NotFound(_('Project "%s" not found') % project_id)
317 return user, project
318
319=== modified file 'nova/auth/fakeldap.py'
320--- nova/auth/fakeldap.py 2010-12-17 17:24:06 +0000
321+++ nova/auth/fakeldap.py 2010-12-22 15:43:09 +0000
322@@ -30,7 +30,7 @@
323 class Store(object):
324 def __init__(self):
325 if hasattr(self.__class__, '_instance'):
326- raise Exception('Attempted to instantiate singleton')
327+ raise Exception(_('Attempted to instantiate singleton'))
328
329 @classmethod
330 def instance(cls):
331
332=== modified file 'nova/auth/ldapdriver.py'
333--- nova/auth/ldapdriver.py 2010-12-08 00:49:20 +0000
334+++ nova/auth/ldapdriver.py 2010-12-22 15:43:09 +0000
335@@ -159,7 +159,7 @@
336 self.conn.modify_s(self.__uid_to_dn(name), attr)
337 return self.get_user(name)
338 else:
339- raise exception.NotFound("LDAP object for %s doesn't exist"
340+ raise exception.NotFound(_("LDAP object for %s doesn't exist")
341 % name)
342 else:
343 attr = [
344@@ -182,11 +182,12 @@
345 description=None, member_uids=None):
346 """Create a project"""
347 if self.__project_exists(name):
348- raise exception.Duplicate("Project can't be created because "
349- "project %s already exists" % name)
350+ raise exception.Duplicate(_("Project can't be created because "
351+ "project %s already exists") % name)
352 if not self.__user_exists(manager_uid):
353- raise exception.NotFound("Project can't be created because "
354- "manager %s doesn't exist" % manager_uid)
355+ raise exception.NotFound(_("Project can't be created because "
356+ "manager %s doesn't exist")
357+ % manager_uid)
358 manager_dn = self.__uid_to_dn(manager_uid)
359 # description is a required attribute
360 if description is None:
361@@ -195,8 +196,8 @@
362 if member_uids is not None:
363 for member_uid in member_uids:
364 if not self.__user_exists(member_uid):
365- raise exception.NotFound("Project can't be created "
366- "because user %s doesn't exist"
367+ raise exception.NotFound(_("Project can't be created "
368+ "because user %s doesn't exist")
369 % member_uid)
370 members.append(self.__uid_to_dn(member_uid))
371 # always add the manager as a member because members is required
372@@ -218,9 +219,9 @@
373 attr = []
374 if manager_uid:
375 if not self.__user_exists(manager_uid):
376- raise exception.NotFound("Project can't be modified because "
377- "manager %s doesn't exist" %
378- manager_uid)
379+ raise exception.NotFound(_("Project can't be modified because "
380+ "manager %s doesn't exist")
381+ % manager_uid)
382 manager_dn = self.__uid_to_dn(manager_uid)
383 attr.append((self.ldap.MOD_REPLACE, 'projectManager', manager_dn))
384 if description:
385@@ -416,8 +417,9 @@
386 if member_uids is not None:
387 for member_uid in member_uids:
388 if not self.__user_exists(member_uid):
389- raise exception.NotFound("Group can't be created "
390- "because user %s doesn't exist" % member_uid)
391+ raise exception.NotFound(_("Group can't be created "
392+ "because user %s doesn't exist")
393+ % member_uid)
394 members.append(self.__uid_to_dn(member_uid))
395 dn = self.__uid_to_dn(uid)
396 if not dn in members:
397@@ -432,8 +434,9 @@
398 def __is_in_group(self, uid, group_dn):
399 """Check if user is in group"""
400 if not self.__user_exists(uid):
401- raise exception.NotFound("User %s can't be searched in group "
402- "becuase the user doesn't exist" % (uid,))
403+ raise exception.NotFound(_("User %s can't be searched in group "
404+ "because the user doesn't exist")
405+ % uid)
406 if not self.__group_exists(group_dn):
407 return False
408 res = self.__find_object(group_dn,
409@@ -444,28 +447,30 @@
410 def __add_to_group(self, uid, group_dn):
411 """Add user to group"""
412 if not self.__user_exists(uid):
413- raise exception.NotFound("User %s can't be added to the group "
414- "becuase the user doesn't exist" % (uid,))
415+ raise exception.NotFound(_("User %s can't be added to the group "
416+ "because the user doesn't exist")
417+ % uid)
418 if not self.__group_exists(group_dn):
419- raise exception.NotFound("The group at dn %s doesn't exist" %
420- (group_dn,))
421+ raise exception.NotFound(_("The group at dn %s doesn't exist")
422+ % group_dn)
423 if self.__is_in_group(uid, group_dn):
424- raise exception.Duplicate("User %s is already a member of "
425- "the group %s" % (uid, group_dn))
426+ raise exception.Duplicate(_("User %s is already a member of "
427+ "the group %s") % (uid, group_dn))
428 attr = [(self.ldap.MOD_ADD, 'member', self.__uid_to_dn(uid))]
429 self.conn.modify_s(group_dn, attr)
430
431 def __remove_from_group(self, uid, group_dn):
432 """Remove user from group"""
433 if not self.__group_exists(group_dn):
434- raise exception.NotFound("The group at dn %s doesn't exist" %
435- (group_dn,))
436+ raise exception.NotFound(_("The group at dn %s doesn't exist")
437+ % group_dn)
438 if not self.__user_exists(uid):
439- raise exception.NotFound("User %s can't be removed from the "
440- "group because the user doesn't exist" % (uid,))
441+ raise exception.NotFound(_("User %s can't be removed from the "
442+ "group because the user doesn't exist")
443+ % uid)
444 if not self.__is_in_group(uid, group_dn):
445- raise exception.NotFound("User %s is not a member of the group" %
446- (uid,))
447+ raise exception.NotFound(_("User %s is not a member of the group")
448+ % uid)
449 # NOTE(vish): remove user from group and any sub_groups
450 sub_dns = self.__find_group_dns_with_member(
451 group_dn, uid)
452@@ -479,15 +484,16 @@
453 try:
454 self.conn.modify_s(group_dn, attr)
455 except self.ldap.OBJECT_CLASS_VIOLATION:
456- logging.debug("Attempted to remove the last member of a group. "
457- "Deleting the group at %s instead.", group_dn)
458+ logging.debug(_("Attempted to remove the last member of a group. "
459+ "Deleting the group at %s instead."), group_dn)
460 self.__delete_group(group_dn)
461
462 def __remove_from_all(self, uid):
463 """Remove user from all roles and projects"""
464 if not self.__user_exists(uid):
465- raise exception.NotFound("User %s can't be removed from all "
466- "because the user doesn't exist" % (uid,))
467+ raise exception.NotFound(_("User %s can't be removed from all "
468+ "because the user doesn't exist")
469+ % uid)
470 role_dns = self.__find_group_dns_with_member(
471 FLAGS.role_project_subtree, uid)
472 for role_dn in role_dns:
473@@ -500,7 +506,8 @@
474 def __delete_group(self, group_dn):
475 """Delete Group"""
476 if not self.__group_exists(group_dn):
477- raise exception.NotFound("Group at dn %s doesn't exist" % group_dn)
478+ raise exception.NotFound(_("Group at dn %s doesn't exist")
479+ % group_dn)
480 self.conn.delete_s(group_dn)
481
482 def __delete_roles(self, project_dn):
483
484=== modified file 'nova/auth/manager.py'
485--- nova/auth/manager.py 2010-12-17 17:14:32 +0000
486+++ nova/auth/manager.py 2010-12-22 15:43:09 +0000
487@@ -257,12 +257,12 @@
488 # TODO(vish): check for valid timestamp
489 (access_key, _sep, project_id) = access.partition(':')
490
491- logging.info('Looking up user: %r', access_key)
492+ logging.info(_('Looking up user: %r'), access_key)
493 user = self.get_user_from_access_key(access_key)
494 logging.info('user: %r', user)
495 if user == None:
496- raise exception.NotFound('No user found for access key %s' %
497- access_key)
498+ raise exception.NotFound(_('No user found for access key %s')
499+ % access_key)
500
501 # NOTE(vish): if we stop using project name as id we need better
502 # logic to find a default project for user
503@@ -271,12 +271,12 @@
504
505 project = self.get_project(project_id)
506 if project == None:
507- raise exception.NotFound('No project called %s could be found' %
508- project_id)
509+ raise exception.NotFound(_('No project called %s could be found')
510+ % project_id)
511 if not self.is_admin(user) and not self.is_project_member(user,
512 project):
513- raise exception.NotFound('User %s is not a member of project %s' %
514- (user.id, project.id))
515+ raise exception.NotFound(_('User %s is not a member of project %s')
516+ % (user.id, project.id))
517 if check_type == 's3':
518 sign = signer.Signer(user.secret.encode())
519 expected_signature = sign.s3_authorization(headers, verb, path)
520@@ -284,7 +284,7 @@
521 logging.debug('expected_signature: %s', expected_signature)
522 logging.debug('signature: %s', signature)
523 if signature != expected_signature:
524- raise exception.NotAuthorized('Signature does not match')
525+ raise exception.NotAuthorized(_('Signature does not match'))
526 elif check_type == 'ec2':
527 # NOTE(vish): hmac can't handle unicode, so encode ensures that
528 # secret isn't unicode
529@@ -294,7 +294,7 @@
530 logging.debug('expected_signature: %s', expected_signature)
531 logging.debug('signature: %s', signature)
532 if signature != expected_signature:
533- raise exception.NotAuthorized('Signature does not match')
534+ raise exception.NotAuthorized(_('Signature does not match'))
535 return (user, project)
536
537 def get_access_key(self, user, project):
538@@ -364,7 +364,7 @@
539 with self.driver() as drv:
540 if role == 'projectmanager':
541 if not project:
542- raise exception.Error("Must specify project")
543+ raise exception.Error(_("Must specify project"))
544 return self.is_project_manager(user, project)
545
546 global_role = drv.has_role(User.safe_id(user),
547@@ -398,9 +398,9 @@
548 @param project: Project in which to add local role.
549 """
550 if role not in FLAGS.allowed_roles:
551- raise exception.NotFound("The %s role can not be found" % role)
552+ raise exception.NotFound(_("The %s role can not be found") % role)
553 if project is not None and role in FLAGS.global_roles:
554- raise exception.NotFound("The %s role is global only" % role)
555+ raise exception.NotFound(_("The %s role is global only") % role)
556 with self.driver() as drv:
557 drv.add_role(User.safe_id(user), role, Project.safe_id(project))
558
559@@ -546,7 +546,8 @@
560 Project.safe_id(project))
561
562 if not network_ref['vpn_public_port']:
563- raise exception.NotFound('project network data has not been set')
564+ raise exception.NotFound(_('project network data has not '
565+ 'been set'))
566 return (network_ref['vpn_public_address'],
567 network_ref['vpn_public_port'])
568
569@@ -659,8 +660,7 @@
570 port=vpn_port)
571 zippy.writestr(FLAGS.credential_vpn_file, config)
572 else:
573- logging.warn("No vpn data for project %s" %
574- pid)
575+ logging.warn(_("No vpn data for project %s"), pid)
576
577 zippy.writestr(FLAGS.ca_file, crypto.fetch_ca(user.id))
578 zippy.close()
579
580=== modified file 'nova/cloudpipe/pipelib.py'
581--- nova/cloudpipe/pipelib.py 2010-10-22 00:15:21 +0000
582+++ nova/cloudpipe/pipelib.py 2010-12-22 15:43:09 +0000
583@@ -49,7 +49,7 @@
584 self.manager = manager.AuthManager()
585
586 def launch_vpn_instance(self, project_id):
587- logging.debug("Launching VPN for %s" % (project_id))
588+ logging.debug(_("Launching VPN for %s") % (project_id))
589 project = self.manager.get_project(project_id)
590 # Make a payload.zip
591 tmpfolder = tempfile.mkdtemp()
592
593=== modified file 'nova/compute/api.py'
594--- nova/compute/api.py 2010-12-20 22:04:12 +0000
595+++ nova/compute/api.py 2010-12-22 15:43:09 +0000
596@@ -125,7 +125,7 @@
597
598 elevated = context.elevated()
599 instances = []
600- logging.debug("Going to run %s instances...", num_instances)
601+ logging.debug(_("Going to run %s instances..."), num_instances)
602 for num in range(num_instances):
603 instance = dict(mac_address=utils.generate_mac(),
604 launch_index=num,
605@@ -162,7 +162,7 @@
606 {"method": "setup_fixed_ip",
607 "args": {"address": address}})
608
609- logging.debug("Casting to scheduler for %s/%s's instance %s",
610+ logging.debug(_("Casting to scheduler for %s/%s's instance %s"),
611 context.project_id, context.user_id, instance_id)
612 rpc.cast(context,
613 FLAGS.scheduler_topic,
614@@ -209,12 +209,12 @@
615 instance = self.db.instance_get_by_internal_id(context,
616 instance_id)
617 except exception.NotFound as e:
618- logging.warning("Instance %d was not found during terminate",
619+ logging.warning(_("Instance %d was not found during terminate"),
620 instance_id)
621 raise e
622
623 if (instance['state_description'] == 'terminating'):
624- logging.warning("Instance %d is already being terminated",
625+ logging.warning(_("Instance %d is already being terminated"),
626 instance_id)
627 return
628
629@@ -228,7 +228,7 @@
630 address = self.db.instance_get_floating_address(context,
631 instance['id'])
632 if address:
633- logging.debug("Disassociating address %s" % address)
634+ logging.debug(_("Disassociating address %s") % address)
635 # NOTE(vish): Right now we don't really care if the ip is
636 # disassociated. We may need to worry about
637 # checking this later. Perhaps in the scheduler?
638@@ -239,7 +239,7 @@
639
640 address = self.db.instance_get_fixed_address(context, instance['id'])
641 if address:
642- logging.debug("Deallocating address %s" % address)
643+ logging.debug(_("Deallocating address %s") % address)
644 # NOTE(vish): Currently, nothing needs to be done on the
645 # network node until release. If this changes,
646 # we will need to cast here.
647
648=== modified file 'nova/compute/disk.py'
649--- nova/compute/disk.py 2010-12-20 19:37:56 +0000
650+++ nova/compute/disk.py 2010-12-22 15:43:09 +0000
651@@ -67,12 +67,12 @@
652 execute('resize2fs %s' % infile)
653 file_size = FLAGS.minimum_root_size
654 elif file_size % sector_size != 0:
655- logging.warn("Input partition size not evenly divisible by"
656- " sector size: %d / %d", file_size, sector_size)
657+ logging.warn(_("Input partition size not evenly divisible by"
658+ " sector size: %d / %d"), file_size, sector_size)
659 primary_sectors = file_size / sector_size
660 if local_bytes % sector_size != 0:
661- logging.warn("Bytes for local storage not evenly divisible"
662- " by sector size: %d / %d", local_bytes, sector_size)
663+ logging.warn(_("Bytes for local storage not evenly divisible"
664+ " by sector size: %d / %d"), local_bytes, sector_size)
665 local_sectors = local_bytes / sector_size
666
667 mbr_last = 62 # a
668@@ -124,14 +124,15 @@
669 """
670 out, err = execute('sudo losetup --find --show %s' % image)
671 if err:
672- raise exception.Error('Could not attach image to loopback: %s' % err)
673+ raise exception.Error(_('Could not attach image to loopback: %s')
674+ % err)
675 device = out.strip()
676 try:
677 if not partition is None:
678 # create partition
679 out, err = execute('sudo kpartx -a %s' % device)
680 if err:
681- raise exception.Error('Failed to load partition: %s' % err)
682+ raise exception.Error(_('Failed to load partition: %s') % err)
683 mapped_device = '/dev/mapper/%sp%s' % (device.split('/')[-1],
684 partition)
685 else:
686@@ -153,7 +154,8 @@
687 out, err = execute(
688 'sudo mount %s %s' % (mapped_device, tmpdir))
689 if err:
690- raise exception.Error('Failed to mount filesystem: %s' % err)
691+ raise exception.Error(_('Failed to mount filesystem: %s')
692+ % err)
693
694 try:
695 if key:
696
697=== modified file 'nova/compute/instance_types.py'
698--- nova/compute/instance_types.py 2010-12-09 20:18:06 +0000
699+++ nova/compute/instance_types.py 2010-12-22 15:43:09 +0000
700@@ -38,7 +38,8 @@
701 if instance_type is None:
702 return FLAGS.default_instance_type
703 if instance_type not in INSTANCE_TYPES:
704- raise exception.ApiError("Unknown instance type: %s" % instance_type)
 705+        raise exception.ApiError(_("Unknown instance type: %s")
 706+                                 % instance_type)
707 return instance_type
708
709
710
711=== modified file 'nova/compute/manager.py'
712--- nova/compute/manager.py 2010-12-16 00:31:32 +0000
713+++ nova/compute/manager.py 2010-12-22 15:43:09 +0000
714@@ -87,8 +87,8 @@
715 context = context.elevated()
716 instance_ref = self.db.instance_get(context, instance_id)
717 if instance_ref['name'] in self.driver.list_instances():
718- raise exception.Error("Instance has already been created")
719- logging.debug("instance %s: starting...", instance_id)
720+ raise exception.Error(_("Instance has already been created"))
721+ logging.debug(_("instance %s: starting..."), instance_id)
722 self.network_manager.setup_compute_network(context, instance_id)
723 self.db.instance_update(context,
724 instance_id,
725@@ -107,7 +107,7 @@
726 instance_id,
727 {'launched_at': now})
728 except Exception: # pylint: disable-msg=W0702
729- logging.exception("instance %s: Failed to spawn",
730+ logging.exception(_("instance %s: Failed to spawn"),
731 instance_ref['name'])
732 self.db.instance_set_state(context,
733 instance_id,
734@@ -119,7 +119,7 @@
735 def terminate_instance(self, context, instance_id):
736 """Terminate an instance on this machine."""
737 context = context.elevated()
738- logging.debug("instance %s: terminating", instance_id)
739+ logging.debug(_("instance %s: terminating"), instance_id)
740
741 instance_ref = self.db.instance_get(context, instance_id)
742 volumes = instance_ref.get('volumes', []) or []
743@@ -127,8 +127,8 @@
744 self.detach_volume(context, instance_id, volume['id'])
745 if instance_ref['state'] == power_state.SHUTOFF:
746 self.db.instance_destroy(context, instance_id)
747- raise exception.Error('trying to destroy already destroyed'
748- ' instance: %s' % instance_id)
749+ raise exception.Error(_('trying to destroy already destroyed'
750+ ' instance: %s') % instance_id)
751 self.driver.destroy(instance_ref)
752
753 # TODO(ja): should we keep it in a terminated state for a bit?
754@@ -142,13 +142,13 @@
755 self._update_state(context, instance_id)
756
757 if instance_ref['state'] != power_state.RUNNING:
758- logging.warn('trying to reboot a non-running '
759- 'instance: %s (state: %s excepted: %s)',
760+ logging.warn(_('trying to reboot a non-running '
 761+                     'instance: %s (state: %s expected: %s)'),
762 instance_ref['internal_id'],
763 instance_ref['state'],
764 power_state.RUNNING)
765
766- logging.debug('instance %s: rebooting', instance_ref['name'])
767+ logging.debug(_('instance %s: rebooting'), instance_ref['name'])
768 self.db.instance_set_state(context,
769 instance_id,
770 power_state.NOSTATE,
771@@ -162,7 +162,7 @@
772 context = context.elevated()
773 instance_ref = self.db.instance_get(context, instance_id)
774
775- logging.debug('instance %s: rescuing',
776+ logging.debug(_('instance %s: rescuing'),
777 instance_ref['internal_id'])
778 self.db.instance_set_state(context,
779 instance_id,
780@@ -177,7 +177,7 @@
781 context = context.elevated()
782 instance_ref = self.db.instance_get(context, instance_id)
783
784- logging.debug('instance %s: unrescuing',
785+ logging.debug(_('instance %s: unrescuing'),
786 instance_ref['internal_id'])
787 self.db.instance_set_state(context,
788 instance_id,
789@@ -231,7 +231,7 @@
790 def get_console_output(self, context, instance_id):
791 """Send the console output for an instance."""
792 context = context.elevated()
793- logging.debug("instance %s: getting console output", instance_id)
794+ logging.debug(_("instance %s: getting console output"), instance_id)
795 instance_ref = self.db.instance_get(context, instance_id)
796
797 return self.driver.get_console_output(instance_ref)
798@@ -240,7 +240,7 @@
799 def attach_volume(self, context, instance_id, volume_id, mountpoint):
800 """Attach a volume to an instance."""
801 context = context.elevated()
802- logging.debug("instance %s: attaching volume %s to %s", instance_id,
803+ logging.debug(_("instance %s: attaching volume %s to %s"), instance_id,
804 volume_id, mountpoint)
805 instance_ref = self.db.instance_get(context, instance_id)
806 dev_path = self.volume_manager.setup_compute_volume(context,
807@@ -257,7 +257,7 @@
808 # NOTE(vish): The inline callback eats the exception info so we
809 # log the traceback here and reraise the same
810 # ecxception below.
811- logging.exception("instance %s: attach failed %s, removing",
812+ logging.exception(_("instance %s: attach failed %s, removing"),
813 instance_id, mountpoint)
814 self.volume_manager.remove_compute_volume(context,
815 volume_id)
816@@ -269,13 +269,13 @@
817 def detach_volume(self, context, instance_id, volume_id):
818 """Detach a volume from an instance."""
819 context = context.elevated()
820- logging.debug("instance %s: detaching volume %s",
821+ logging.debug(_("instance %s: detaching volume %s"),
822 instance_id,
823 volume_id)
824 instance_ref = self.db.instance_get(context, instance_id)
825 volume_ref = self.db.volume_get(context, volume_id)
826 if instance_ref['name'] not in self.driver.list_instances():
827- logging.warn("Detaching volume from unknown instance %s",
828+ logging.warn(_("Detaching volume from unknown instance %s"),
829 instance_ref['name'])
830 else:
831 self.driver.detach_volume(instance_ref['name'],
832
833=== modified file 'nova/compute/monitor.py'
834--- nova/compute/monitor.py 2010-11-29 12:14:26 +0000
835+++ nova/compute/monitor.py 2010-12-22 15:43:09 +0000
836@@ -255,7 +255,7 @@
837 Updates the instances statistics and stores the resulting graphs
838 in the internal object store on the cloud controller.
839 """
840- logging.debug('updating %s...', self.instance_id)
841+ logging.debug(_('updating %s...'), self.instance_id)
842
843 try:
844 data = self.fetch_cpu_stats()
845@@ -285,7 +285,7 @@
846 graph_disk(self, '1w')
847 graph_disk(self, '1m')
848 except Exception:
849- logging.exception('unexpected error during update')
850+ logging.exception(_('unexpected error during update'))
851
852 self.last_updated = utcnow()
853
854@@ -351,7 +351,7 @@
855 rd += rd_bytes
856 wr += wr_bytes
857 except TypeError:
858- logging.error('Cannot get blockstats for "%s" on "%s"',
859+ logging.error(_('Cannot get blockstats for "%s" on "%s"'),
860 disk, self.instance_id)
861 raise
862
863@@ -373,7 +373,7 @@
864 rx += stats[0]
865 tx += stats[4]
866 except TypeError:
867- logging.error('Cannot get ifstats for "%s" on "%s"',
868+ logging.error(_('Cannot get ifstats for "%s" on "%s"'),
869 interface, self.instance_id)
870 raise
871
872@@ -408,7 +408,7 @@
873 try:
874 conn = virt_connection.get_connection(read_only=True)
875 except Exception, exn:
876- logging.exception('unexpected exception getting connection')
877+ logging.exception(_('unexpected exception getting connection'))
878 time.sleep(FLAGS.monitoring_instances_delay)
879 return
880
881@@ -423,7 +423,7 @@
882 if not domain_id in self._instances:
883 instance = Instance(conn, domain_id)
884 self._instances[domain_id] = instance
885- logging.debug('Found instance: %s', domain_id)
886+ logging.debug(_('Found instance: %s'), domain_id)
887
888 for key in self._instances.keys():
889 instance = self._instances[key]
890
891=== modified file 'nova/crypto.py'
892--- nova/crypto.py 2010-11-29 12:14:26 +0000
893+++ nova/crypto.py 2010-12-22 15:43:09 +0000
894@@ -39,13 +39,13 @@
895
896
897 FLAGS = flags.FLAGS
898-flags.DEFINE_string('ca_file', 'cacert.pem', 'Filename of root CA')
899+flags.DEFINE_string('ca_file', 'cacert.pem', _('Filename of root CA'))
900 flags.DEFINE_string('keys_path', '$state_path/keys',
901- 'Where we keep our keys')
902+ _('Where we keep our keys'))
903 flags.DEFINE_string('ca_path', '$state_path/CA',
904- 'Where we keep our root CA')
905+ _('Where we keep our root CA'))
906 flags.DEFINE_boolean('use_intermediate_ca', False,
907- 'Should we use intermediate CAs for each project?')
908+ _('Should we use intermediate CAs for each project?'))
909
910
911 def ca_path(project_id):
912@@ -111,9 +111,9 @@
913 keyfile = os.path.abspath(os.path.join(tmpdir, 'temp.key'))
914 csrfile = os.path.join(tmpdir, 'temp.csr')
915 logging.debug("openssl genrsa -out %s %s" % (keyfile, bits))
916- utils.runthis("Generating private key: %s",
917+ utils.runthis(_("Generating private key: %s"),
918 "openssl genrsa -out %s %s" % (keyfile, bits))
919- utils.runthis("Generating CSR: %s",
920+ utils.runthis(_("Generating CSR: %s"),
921 "openssl req -new -key %s -out %s -batch -subj %s" %
922 (keyfile, csrfile, subject))
923 private_key = open(keyfile).read()
924@@ -131,7 +131,7 @@
925 if not os.path.exists(user_ca):
926 start = os.getcwd()
927 os.chdir(FLAGS.ca_path)
928- utils.runthis("Generating intermediate CA: %s",
929+ utils.runthis(_("Generating intermediate CA: %s"),
930 "sh geninter.sh %s" % (intermediate))
931 os.chdir(start)
932 return _sign_csr(csr_text, user_ca)
933@@ -142,11 +142,11 @@
934 csrfile = open("%s/inbound.csr" % (tmpfolder), "w")
935 csrfile.write(csr_text)
936 csrfile.close()
937- logging.debug("Flags path: %s" % ca_folder)
938+ logging.debug(_("Flags path: %s") % ca_folder)
939 start = os.getcwd()
940 # Change working dir to CA
941 os.chdir(ca_folder)
942- utils.runthis("Signing cert: %s",
943+ utils.runthis(_("Signing cert: %s"),
944 "openssl ca -batch -out %s/outbound.crt "
945 "-config ./openssl.cnf -infiles %s/inbound.csr" %
946 (tmpfolder, tmpfolder))
947
948=== modified file 'nova/db/sqlalchemy/api.py'
949--- nova/db/sqlalchemy/api.py 2010-12-14 21:56:42 +0000
950+++ nova/db/sqlalchemy/api.py 2010-12-22 15:43:09 +0000
951@@ -41,7 +41,7 @@
952 def is_admin_context(context):
953 """Indicates if the request context is an administrator."""
954 if not context:
955- warnings.warn('Use of empty request context is deprecated',
956+ warnings.warn(_('Use of empty request context is deprecated'),
957 DeprecationWarning)
958 raise Exception('die')
959 return context.is_admin
960@@ -130,7 +130,7 @@
961 first()
962
963 if not result:
964- raise exception.NotFound('No service for id %s' % service_id)
965+ raise exception.NotFound(_('No service for id %s') % service_id)
966
967 return result
968
969@@ -227,7 +227,7 @@
970 filter_by(deleted=can_read_deleted(context)).\
971 first()
972 if not result:
973- raise exception.NotFound('No service for %s, %s' % (host, binary))
974+ raise exception.NotFound(_('No service for %s, %s') % (host, binary))
975
976 return result
977
978@@ -491,7 +491,7 @@
979 options(joinedload('instance')).\
980 first()
981 if not result:
982- raise exception.NotFound('No floating ip for address %s' % address)
983+ raise exception.NotFound(_('No floating ip for address %s') % address)
984
985 if is_user_context(context):
986 authorize_project_context(context, result.instance.project_id)
987@@ -593,7 +593,7 @@
988 filter_by(deleted=False).\
989 first()
990 if not result:
991- raise exception.NotFound('No instance for id %s' % instance_id)
992+ raise exception.NotFound(_('No instance for id %s') % instance_id)
993
994 return result
995
996@@ -671,7 +671,7 @@
997 filter_by(deleted=False).\
998 first()
999 if not result:
1000- raise exception.NotFound('Instance %s not found' % (internal_id))
1001+ raise exception.NotFound(_('Instance %s not found') % (internal_id))
1002
1003 return result
1004
1005@@ -792,7 +792,7 @@
1006 filter_by(deleted=can_read_deleted(context)).\
1007 first()
1008 if not result:
1009- raise exception.NotFound('no keypair for user %s, name %s' %
1010+ raise exception.NotFound(_('no keypair for user %s, name %s') %
1011 (user_id, name))
1012 return result
1013
1014@@ -907,7 +907,7 @@
1015 filter_by(deleted=False).\
1016 first()
1017 if not result:
1018- raise exception.NotFound('No network for id %s' % network_id)
1019+ raise exception.NotFound(_('No network for id %s') % network_id)
1020
1021 return result
1022
1023@@ -937,7 +937,7 @@
1024 first()
1025
1026 if not result:
1027- raise exception.NotFound('No network for bridge %s' % bridge)
1028+ raise exception.NotFound(_('No network for bridge %s') % bridge)
1029 return result
1030
1031
1032@@ -951,7 +951,7 @@
1033 filter_by(deleted=False).\
1034 first()
1035 if not rv:
1036- raise exception.NotFound('No network for instance %s' % instance_id)
1037+ raise exception.NotFound(_('No network for instance %s') % instance_id)
1038 return rv
1039
1040
1041@@ -965,7 +965,7 @@
1042 with_lockmode('update').\
1043 first()
1044 if not network_ref:
1045- raise exception.NotFound('No network for id %s' % network_id)
1046+ raise exception.NotFound(_('No network for id %s') % network_id)
1047
1048 # NOTE(vish): if with_lockmode isn't supported, as in sqlite,
1049 # then this has concurrency issues
1050@@ -1077,7 +1077,7 @@
1051 filter_by(token_hash=token_hash).\
1052 first()
1053 if not tk:
1054- raise exception.NotFound('Token %s does not exist' % token_hash)
1055+ raise exception.NotFound(_('Token %s does not exist') % token_hash)
1056 return tk
1057
1058
1059@@ -1101,7 +1101,7 @@
1060 filter_by(deleted=can_read_deleted(context)).\
1061 first()
1062 if not result:
1063- raise exception.NotFound('No quota for project_id %s' % project_id)
1064+ raise exception.NotFound(_('No quota for project_id %s') % project_id)
1065
1066 return result
1067
1068@@ -1256,7 +1256,7 @@
1069 filter_by(deleted=False).\
1070 first()
1071 if not result:
1072- raise exception.NotFound('No volume for id %s' % volume_id)
1073+ raise exception.NotFound(_('No volume for id %s') % volume_id)
1074
1075 return result
1076
1077@@ -1312,7 +1312,7 @@
1078 raise exception.NotAuthorized()
1079
1080 if not result:
1081- raise exception.NotFound('Volume %s not found' % ec2_id)
1082+ raise exception.NotFound(_('Volume %s not found') % ec2_id)
1083
1084 return result
1085
1086@@ -1336,7 +1336,7 @@
1087 options(joinedload('instance')).\
1088 first()
1089 if not result:
1090- raise exception.NotFound('Volume %s not found' % ec2_id)
1091+ raise exception.NotFound(_('Volume %s not found') % ec2_id)
1092
1093 return result.instance
1094
1095@@ -1348,7 +1348,7 @@
1096 filter_by(volume_id=volume_id).\
1097 first()
1098 if not result:
1099- raise exception.NotFound('No export device found for volume %s' %
1100+ raise exception.NotFound(_('No export device found for volume %s') %
1101 volume_id)
1102
1103 return (result.shelf_id, result.blade_id)
1104@@ -1361,7 +1361,7 @@
1105 filter_by(volume_id=volume_id).\
1106 first()
1107 if not result:
1108- raise exception.NotFound('No target id found for volume %s' %
1109+ raise exception.NotFound(_('No target id found for volume %s') %
1110 volume_id)
1111
1112 return result.target_num
1113@@ -1406,7 +1406,7 @@
1114 options(joinedload_all('rules')).\
1115 first()
1116 if not result:
1117- raise exception.NotFound("No secuity group with id %s" %
1118+ raise exception.NotFound(_("No security group with id %s") %
1119 security_group_id)
1120 return result
1121
1122@@ -1423,7 +1423,7 @@
1123 first()
1124 if not result:
1125 raise exception.NotFound(
1126- 'No security group named %s for project: %s' \
1127+ _('No security group named %s for project: %s')
1128 % (group_name, project_id))
1129 return result
1130
1131@@ -1511,7 +1511,7 @@
1132 filter_by(id=security_group_rule_id).\
1133 first()
1134 if not result:
1135- raise exception.NotFound("No secuity group rule with id %s" %
 1136+        raise exception.NotFound(_("No security group rule with id %s") %
1137 security_group_rule_id)
1138 return result
1139
1140@@ -1547,7 +1547,7 @@
1141 first()
1142
1143 if not result:
1144- raise exception.NotFound('No user for id %s' % id)
1145+ raise exception.NotFound(_('No user for id %s') % id)
1146
1147 return result
1148
1149@@ -1563,7 +1563,7 @@
1150 first()
1151
1152 if not result:
1153- raise exception.NotFound('No user for access key %s' % access_key)
1154+ raise exception.NotFound(_('No user for access key %s') % access_key)
1155
1156 return result
1157
1158@@ -1625,7 +1625,7 @@
1159 first()
1160
1161 if not result:
1162- raise exception.NotFound("No project with id %s" % id)
1163+ raise exception.NotFound(_("No project with id %s") % id)
1164
1165 return result
1166
1167
1168=== modified file 'nova/exception.py'
1169--- nova/exception.py 2010-12-14 21:56:42 +0000
1170+++ nova/exception.py 2010-12-22 15:43:09 +0000
1171@@ -31,11 +31,11 @@
1172 def __init__(self, stdout=None, stderr=None, exit_code=None, cmd=None,
1173 description=None):
1174 if description is None:
1175- description = "Unexpected error while running command."
1176+ description = _("Unexpected error while running command.")
1177 if exit_code is None:
1178 exit_code = '-'
1179- message = "%s\nCommand: %s\nExit code: %s\nStdout: %r\nStderr: %r" % (
1180- description, cmd, exit_code, stdout, stderr)
1181+ message = _("%s\nCommand: %s\nExit code: %s\nStdout: %r\nStderr: %r")\
1182+ % (description, cmd, exit_code, stdout, stderr)
1183 IOError.__init__(self, message)
1184
1185
1186@@ -84,7 +84,7 @@
1187 except Exception, e:
1188 if not isinstance(e, Error):
1189 #exc_type, exc_value, exc_traceback = sys.exc_info()
1190- logging.exception('Uncaught exception')
1191+ logging.exception(_('Uncaught exception'))
1192 #logging.error(traceback.extract_stack(exc_traceback))
1193 raise Error(str(e))
1194 raise
1195
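A pattern worth noting in the exception hunks above: the `%` interpolation is consistently applied *outside* the `_()` call. That keeps the msgid extracted by xgettext equal to the constant format string; interpolating first would produce a different msgid for every runtime value, and no catalog entry could ever match. A small sketch with an identity stand-in for `_`:

```python
_ = lambda s: s  # stand-in for the installed gettext function

volume_id = 'vol-0001'

# Correct: the literal format string is the translation key.
msg_good = _('Volume %s not found') % volume_id

# Wrong: the msgid embeds the runtime value, so a real catalog
# lookup would always miss. (Identical output here only because
# the stand-in _ is the identity.)
msg_bad = _('Volume %s not found' % volume_id)
```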
1196=== modified file 'nova/fakerabbit.py'
1197--- nova/fakerabbit.py 2010-10-21 18:49:51 +0000
1198+++ nova/fakerabbit.py 2010-12-22 15:43:09 +0000
1199@@ -37,12 +37,12 @@
1200 self._routes = {}
1201
1202 def publish(self, message, routing_key=None):
1203- logging.debug('(%s) publish (key: %s) %s',
1204+ logging.debug(_('(%s) publish (key: %s) %s'),
1205 self.name, routing_key, message)
1206 routing_key = routing_key.split('.')[0]
1207 if routing_key in self._routes:
1208 for f in self._routes[routing_key]:
1209- logging.debug('Publishing to route %s', f)
1210+ logging.debug(_('Publishing to route %s'), f)
1211 f(message, routing_key=routing_key)
1212
1213 def bind(self, callback, routing_key):
1214@@ -82,16 +82,16 @@
1215
1216 def queue_declare(self, queue, **kwargs):
1217 if queue not in self._queues:
1218- logging.debug('Declaring queue %s', queue)
1219+ logging.debug(_('Declaring queue %s'), queue)
1220 self._queues[queue] = Queue(queue)
1221
1222 def exchange_declare(self, exchange, type, *args, **kwargs):
1223 if exchange not in self._exchanges:
1224- logging.debug('Declaring exchange %s', exchange)
1225+ logging.debug(_('Declaring exchange %s'), exchange)
1226 self._exchanges[exchange] = Exchange(exchange, type)
1227
1228 def queue_bind(self, queue, exchange, routing_key, **kwargs):
1229- logging.debug('Binding %s to %s with key %s',
1230+ logging.debug(_('Binding %s to %s with key %s'),
1231 queue, exchange, routing_key)
1232 self._exchanges[exchange].bind(self._queues[queue].push,
1233 routing_key)
1234@@ -117,7 +117,7 @@
1235 content_type=content_type,
1236 content_encoding=content_encoding)
1237 message.result = True
1238- logging.debug('Getting from %s: %s', queue, message)
1239+ logging.debug(_('Getting from %s: %s'), queue, message)
1240 return message
1241
1242 def prepare_message(self, message_data, delivery_mode,
1243
1244=== modified file 'nova/image/glance.py'
1245--- nova/image/glance.py 2010-11-19 06:09:40 +0000
1246+++ nova/image/glance.py 2010-12-22 15:43:09 +0000
1247@@ -77,8 +77,8 @@
1248 data = json.loads(res.read())['images']
1249 return data
1250 else:
1251- logging.warn("Parallax returned HTTP error %d from "
1252- "request for /images", res.status_int)
1253+ logging.warn(_("Parallax returned HTTP error %d from "
1254+ "request for /images"), res.status_int)
1255 return []
1256 finally:
1257 c.close()
1258@@ -96,8 +96,8 @@
1259 data = json.loads(res.read())['images']
1260 return data
1261 else:
1262- logging.warn("Parallax returned HTTP error %d from "
1263- "request for /images/detail", res.status_int)
1264+ logging.warn(_("Parallax returned HTTP error %d from "
1265+ "request for /images/detail"), res.status_int)
1266 return []
1267 finally:
1268 c.close()
1269
1270=== modified file 'nova/image/s3.py'
1271--- nova/image/s3.py 2010-11-19 05:27:00 +0000
1272+++ nova/image/s3.py 2010-12-22 15:43:09 +0000
1273@@ -79,7 +79,8 @@
1274 result = self.index(context)
1275 result = [i for i in result if i['imageId'] == image_id]
1276 if not result:
1277- raise exception.NotFound('Image %s could not be found' % image_id)
1278+ raise exception.NotFound(_('Image %s could not be found')
1279+ % image_id)
1280 image = result[0]
1281 return image
1282
1283
1284=== modified file 'nova/network/linux_net.py'
1285--- nova/network/linux_net.py 2010-12-01 10:50:25 +0000
1286+++ nova/network/linux_net.py 2010-12-22 15:43:09 +0000
1287@@ -135,7 +135,7 @@
1288 """Create a vlan unless it already exists"""
1289 interface = "vlan%s" % vlan_num
1290 if not _device_exists(interface):
1291- logging.debug("Starting VLAN inteface %s", interface)
 1291+        logging.debug(_("Starting VLAN interface %s"), interface)
1293 _execute("sudo vconfig set_name_type VLAN_PLUS_VID_NO_PAD")
1294 _execute("sudo vconfig add %s %s" % (FLAGS.vlan_interface, vlan_num))
1295 _execute("sudo ifconfig %s up" % interface)
1296@@ -145,7 +145,7 @@
1297 def ensure_bridge(bridge, interface, net_attrs=None):
1298 """Create a bridge unless it already exists"""
1299 if not _device_exists(bridge):
1300- logging.debug("Starting Bridge interface for %s", interface)
1301+ logging.debug(_("Starting Bridge interface for %s"), interface)
1302 _execute("sudo brctl addbr %s" % bridge)
1303 _execute("sudo brctl setfd %s 0" % bridge)
1304 # _execute("sudo brctl setageing %s 10" % bridge)
1305@@ -202,9 +202,9 @@
1306 _execute('sudo kill -HUP %d' % pid)
1307 return
1308 except Exception as exc: # pylint: disable-msg=W0703
1309- logging.debug("Hupping dnsmasq threw %s", exc)
1310+ logging.debug(_("Hupping dnsmasq threw %s"), exc)
1311 else:
1312- logging.debug("Pid %d is stale, relaunching dnsmasq", pid)
1313+ logging.debug(_("Pid %d is stale, relaunching dnsmasq"), pid)
1314
1315 # FLAGFILE and DNSMASQ_INTERFACE in env
1316 env = {'FLAGFILE': FLAGS.dhcpbridge_flagfile,
1317@@ -276,7 +276,7 @@
1318 try:
1319 _execute('sudo kill -TERM %d' % pid)
1320 except Exception as exc: # pylint: disable-msg=W0703
1321- logging.debug("Killing dnsmasq threw %s", exc)
1322+ logging.debug(_("Killing dnsmasq threw %s"), exc)
1323
1324
1325 def _dhcp_file(bridge, kind):
1326
1327=== modified file 'nova/network/manager.py'
1328--- nova/network/manager.py 2010-12-20 20:04:24 +0000
1329+++ nova/network/manager.py 2010-12-22 15:43:09 +0000
1330@@ -115,7 +115,7 @@
1331
1332 def set_network_host(self, context, network_id):
1333 """Safely sets the host of the network."""
1334- logging.debug("setting network host")
1335+ logging.debug(_("setting network host"))
1336 host = self.db.network_set_host(context,
1337 network_id,
1338 self.host)
1339@@ -174,10 +174,10 @@
1340 fixed_ip_ref = self.db.fixed_ip_get_by_address(context, address)
1341 instance_ref = fixed_ip_ref['instance']
1342 if not instance_ref:
1343- raise exception.Error("IP %s leased that isn't associated" %
1344+ raise exception.Error(_("IP %s leased that isn't associated") %
1345 address)
1346 if instance_ref['mac_address'] != mac:
1347- raise exception.Error("IP %s leased to bad mac %s vs %s" %
1348+ raise exception.Error(_("IP %s leased to bad mac %s vs %s") %
1349 (address, instance_ref['mac_address'], mac))
1350 now = datetime.datetime.utcnow()
1351 self.db.fixed_ip_update(context,
1352@@ -185,7 +185,8 @@
1353 {'leased': True,
1354 'updated_at': now})
1355 if not fixed_ip_ref['allocated']:
1356- logging.warn("IP %s leased that was already deallocated", address)
1357+ logging.warn(_("IP %s leased that was already deallocated"),
1358+ address)
1359
1360 def release_fixed_ip(self, context, mac, address):
1361 """Called by dhcp-bridge when ip is released."""
1362@@ -193,13 +194,13 @@
1363 fixed_ip_ref = self.db.fixed_ip_get_by_address(context, address)
1364 instance_ref = fixed_ip_ref['instance']
1365 if not instance_ref:
1366- raise exception.Error("IP %s released that isn't associated" %
1367+ raise exception.Error(_("IP %s released that isn't associated") %
1368 address)
1369 if instance_ref['mac_address'] != mac:
1370- raise exception.Error("IP %s released from bad mac %s vs %s" %
1371+ raise exception.Error(_("IP %s released from bad mac %s vs %s") %
1372 (address, instance_ref['mac_address'], mac))
1373 if not fixed_ip_ref['leased']:
1374- logging.warn("IP %s released that was not leased", address)
1375+ logging.warn(_("IP %s released that was not leased"), address)
1376 self.db.fixed_ip_update(context,
1377 fixed_ip_ref['address'],
1378 {'leased': False})
1379@@ -407,7 +408,7 @@
1380 self.host,
1381 time)
1382 if num:
1383- logging.debug("Dissassociated %s stale fixed ip(s)", num)
 1383+            logging.debug(_("Disassociated %s stale fixed ip(s)"), num)
1385
1386 def init_host(self):
1387 """Do any initialization that needs to be run if this is a
1388
1389=== modified file 'nova/objectstore/handler.py'
1390--- nova/objectstore/handler.py 2010-11-03 20:13:59 +0000
1391+++ nova/objectstore/handler.py 2010-12-22 15:43:09 +0000
1392@@ -102,7 +102,7 @@
1393 _render_parts(subsubvalue, write_cb)
1394 write_cb('</' + utils.utf8(name) + '>')
1395 else:
1396- raise Exception("Unknown S3 value type %r", value)
1397+ raise Exception(_("Unknown S3 value type %r"), value)
1398
1399
1400 def get_argument(request, key, default_value):
1401@@ -134,7 +134,7 @@
1402 check_type='s3')
1403 return context.RequestContext(user, project)
1404 except exception.Error as ex:
1405- logging.debug("Authentication Failure: %s", ex)
1406+ logging.debug(_("Authentication Failure: %s"), ex)
1407 raise exception.NotAuthorized()
1408
1409
1410@@ -227,7 +227,7 @@
1411
1412 def render_PUT(self, request):
1413 "Creates the bucket resource"""
1414- logging.debug("Creating bucket %s", self.name)
1415+ logging.debug(_("Creating bucket %s"), self.name)
1416 logging.debug("calling bucket.Bucket.create(%r, %r)",
1417 self.name,
1418 request.context)
1419@@ -237,7 +237,7 @@
1420
1421 def render_DELETE(self, request):
1422 """Deletes the bucket resource"""
1423- logging.debug("Deleting bucket %s", self.name)
1424+ logging.debug(_("Deleting bucket %s"), self.name)
1425 bucket_object = bucket.Bucket(self.name)
1426
1427 if not bucket_object.is_authorized(request.context):
1428@@ -261,7 +261,9 @@
1429 Raises NotAuthorized if user in request context is not
1430 authorized to delete the object.
1431 """
1432- logging.debug("Getting object: %s / %s", self.bucket.name, self.name)
1433+ logging.debug(_("Getting object: %s / %s"),
1434+ self.bucket.name,
1435+ self.name)
1436
1437 if not self.bucket.is_authorized(request.context):
1438 raise exception.NotAuthorized()
1439@@ -279,7 +281,9 @@
1440 Raises NotAuthorized if user in request context is not
1441 authorized to delete the object.
1442 """
1443- logging.debug("Putting object: %s / %s", self.bucket.name, self.name)
1444+ logging.debug(_("Putting object: %s / %s"),
1445+ self.bucket.name,
1446+ self.name)
1447
1448 if not self.bucket.is_authorized(request.context):
1449 raise exception.NotAuthorized()
1450@@ -298,7 +302,7 @@
1451 authorized to delete the object.
1452 """
1453
1454- logging.debug("Deleting object: %s / %s",
1455+ logging.debug(_("Deleting object: %s / %s"),
1456 self.bucket.name,
1457 self.name)
1458
1459@@ -394,17 +398,17 @@
1460 image_id = get_argument(request, 'image_id', u'')
1461 image_object = image.Image(image_id)
1462 if not image_object.is_authorized(request.context):
1463- logging.debug("not authorized for render_POST in images")
1464+ logging.debug(_("not authorized for render_POST in images"))
1465 raise exception.NotAuthorized()
1466
1467 operation = get_argument(request, 'operation', u'')
1468 if operation:
1469 # operation implies publicity toggle
1470- logging.debug("handling publicity toggle")
1471+ logging.debug(_("handling publicity toggle"))
1472 image_object.set_public(operation == 'add')
1473 else:
1474 # other attributes imply update
1475- logging.debug("update user fields")
1476+ logging.debug(_("update user fields"))
1477 clean_args = {}
1478 for arg in request.args.keys():
1479 clean_args[arg] = request.args[arg][0]
1480
1481=== modified file 'nova/rpc.py'
1482--- nova/rpc.py 2010-12-15 00:05:39 +0000
1483+++ nova/rpc.py 2010-12-22 15:43:09 +0000
1484@@ -91,15 +91,15 @@
1485 self.failed_connection = False
1486 break
1487 except: # Catching all because carrot sucks
1488- logging.exception("AMQP server on %s:%d is unreachable." \
1489- " Trying again in %d seconds." % (
1490+ logging.exception(_("AMQP server on %s:%d is unreachable."
1491+ " Trying again in %d seconds.") % (
1492 FLAGS.rabbit_host,
1493 FLAGS.rabbit_port,
1494 FLAGS.rabbit_retry_interval))
1495 self.failed_connection = True
1496 if self.failed_connection:
1497- logging.exception("Unable to connect to AMQP server" \
1498- " after %d tries. Shutting down." % FLAGS.rabbit_max_retries)
1499+ logging.exception(_("Unable to connect to AMQP server"
1500+ " after %d tries. Shutting down.") % FLAGS.rabbit_max_retries)
1501 sys.exit(1)
1502
1503 def fetch(self, no_ack=None, auto_ack=None, enable_callbacks=False):
1504@@ -116,14 +116,14 @@
1505 self.declare()
1506 super(Consumer, self).fetch(no_ack, auto_ack, enable_callbacks)
1507 if self.failed_connection:
1508- logging.error("Reconnected to queue")
1509+ logging.error(_("Reconnected to queue"))
1510 self.failed_connection = False
1511 # NOTE(vish): This is catching all errors because we really don't
1512 # exceptions to be logged 10 times a second if some
1513 # persistent failure occurs.
1514 except Exception: # pylint: disable-msg=W0703
1515 if not self.failed_connection:
1516- logging.exception("Failed to fetch message from queue")
1517+ logging.exception(_("Failed to fetch message from queue"))
1518 self.failed_connection = True
1519
1520 def attach_to_eventlet(self):
1521@@ -153,7 +153,7 @@
1522 class AdapterConsumer(TopicConsumer):
1523 """Calls methods on a proxy object based on method and args"""
1524 def __init__(self, connection=None, topic="broadcast", proxy=None):
1525- LOG.debug('Initing the Adapter Consumer for %s' % (topic))
 1525+        LOG.debug(_('Initializing the Adapter Consumer for %s') % (topic))
1527 self.proxy = proxy
1528 super(AdapterConsumer, self).__init__(connection=connection,
1529 topic=topic)
1530@@ -168,7 +168,7 @@
1531
1532 Example: {'method': 'echo', 'args': {'value': 42}}
1533 """
1534- LOG.debug('received %s' % (message_data))
1535+ LOG.debug(_('received %s') % (message_data))
1536 msg_id = message_data.pop('_msg_id', None)
1537
1538 ctxt = _unpack_context(message_data)
1539@@ -181,8 +181,8 @@
1540 # messages stay in the queue indefinitely, so for now
1541 # we just log the message and send an error string
1542 # back to the caller
1543- LOG.warn('no method for message: %s' % (message_data))
1544- msg_reply(msg_id, 'No method for message: %s' % message_data)
1545+ LOG.warn(_('no method for message: %s') % (message_data))
1546+ msg_reply(msg_id, _('No method for message: %s') % message_data)
1547 return
1548
1549 node_func = getattr(self.proxy, str(method))
1550@@ -242,7 +242,7 @@
1551 if failure:
1552 message = str(failure[1])
1553 tb = traceback.format_exception(*failure)
1554- logging.error("Returning exception %s to caller", message)
1555+ logging.error(_("Returning exception %s to caller"), message)
1556 logging.error(tb)
1557 failure = (failure[0].__name__, str(failure[1]), tb)
1558 conn = Connection.instance()
1559@@ -283,7 +283,7 @@
1560 if key.startswith('_context_'):
1561 value = msg.pop(key)
1562 context_dict[key[9:]] = value
1563- LOG.debug('unpacked context: %s', context_dict)
1564+ LOG.debug(_('unpacked context: %s'), context_dict)
1565 return context.RequestContext.from_dict(context_dict)
1566
1567
1568@@ -302,10 +302,10 @@
1569
1570 def call(context, topic, msg):
1571 """Sends a message on a topic and wait for a response"""
1572- LOG.debug("Making asynchronous call...")
1573+ LOG.debug(_("Making asynchronous call..."))
1574 msg_id = uuid.uuid4().hex
1575 msg.update({'_msg_id': msg_id})
1576- LOG.debug("MSG_ID is %s" % (msg_id))
1577+ LOG.debug(_("MSG_ID is %s") % (msg_id))
1578 _pack_context(msg, context)
1579
1580 class WaitMessage(object):
1581@@ -353,7 +353,7 @@
1582
1583 def generic_response(message_data, message):
1584 """Logs a result and exits"""
1585- LOG.debug('response %s', message_data)
1586+ LOG.debug(_('response %s'), message_data)
1587 message.ack()
1588 sys.exit(0)
1589
1590@@ -362,8 +362,8 @@
1591 """Sends a message for testing"""
1592 msg_id = uuid.uuid4().hex
1593 message.update({'_msg_id': msg_id})
1594- LOG.debug('topic is %s', topic)
1595- LOG.debug('message %s', message)
1596+ LOG.debug(_('topic is %s'), topic)
1597+ LOG.debug(_('message %s'), message)
1598
1599 if wait:
1600 consumer = messaging.Consumer(connection=Connection.instance(),
1601
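The rpc.py hunks above all follow the same shape: the whole format string is passed through gettext's `_()` and only then interpolated with `%`. A minimal runnable sketch of that ordering (the host/port/interval values are illustrative, not Nova's):

```python
import gettext

# Without a compiled catalog this installs a no-op _() into builtins,
# which is presumably what a default nova deployment sees as well.
gettext.NullTranslations().install()

host, port, interval = "localhost", 5672, 10

# Translate the complete template first, then interpolate: the
# translator works with one whole sentence containing %s/%d holes.
msg = _("AMQP server on %s:%d is unreachable."
        " Trying again in %d seconds.") % (host, port, interval)
```

Interpolating before translating (`_("template" % values)`) would instead make every distinct host/port pair its own untranslatable message id.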
1602=== modified file 'nova/scheduler/chance.py'
1603--- nova/scheduler/chance.py 2010-09-07 20:01:21 +0000
1604+++ nova/scheduler/chance.py 2010-12-22 15:43:09 +0000
1605@@ -34,5 +34,5 @@
1606
1607 hosts = self.hosts_up(context, topic)
1608 if not hosts:
1609- raise driver.NoValidHost("No hosts found")
1610+ raise driver.NoValidHost(_("No hosts found"))
1611 return hosts[int(random.random() * len(hosts))]
1612
1613=== modified file 'nova/scheduler/driver.py'
1614--- nova/scheduler/driver.py 2010-10-22 00:15:21 +0000
1615+++ nova/scheduler/driver.py 2010-12-22 15:43:09 +0000
1616@@ -58,4 +58,4 @@
1617
1618 def schedule(self, context, topic, *_args, **_kwargs):
1619 """Must override at least this method for scheduler to work."""
1620- raise NotImplementedError("Must implement a fallback schedule")
1621+ raise NotImplementedError(_("Must implement a fallback schedule"))
1622
1623=== modified file 'nova/scheduler/manager.py'
1624--- nova/scheduler/manager.py 2010-10-22 00:15:21 +0000
1625+++ nova/scheduler/manager.py 2010-12-22 15:43:09 +0000
1626@@ -65,4 +65,4 @@
1627 db.queue_get_for(context, topic, host),
1628 {"method": method,
1629 "args": kwargs})
1630- logging.debug("Casting to %s %s for %s", topic, host, method)
1631+ logging.debug(_("Casting to %s %s for %s"), topic, host, method)
1632
1633=== modified file 'nova/scheduler/simple.py'
1634--- nova/scheduler/simple.py 2010-10-22 00:15:21 +0000
1635+++ nova/scheduler/simple.py 2010-12-22 15:43:09 +0000
1636@@ -47,7 +47,7 @@
1637 for result in results:
1638 (service, instance_cores) = result
1639 if instance_cores + instance_ref['vcpus'] > FLAGS.max_cores:
1640- raise driver.NoValidHost("All hosts have too many cores")
1641+ raise driver.NoValidHost(_("All hosts have too many cores"))
1642 if self.service_is_up(service):
1643 # NOTE(vish): this probably belongs in the manager, if we
1644 # can generalize this somehow
1645@@ -57,7 +57,7 @@
1646 {'host': service['host'],
1647 'scheduled_at': now})
1648 return service['host']
1649- raise driver.NoValidHost("No hosts found")
1650+ raise driver.NoValidHost(_("No hosts found"))
1651
1652 def schedule_create_volume(self, context, volume_id, *_args, **_kwargs):
1653 """Picks a host that is up and has the fewest volumes."""
1654@@ -66,7 +66,8 @@
1655 for result in results:
1656 (service, volume_gigabytes) = result
1657 if volume_gigabytes + volume_ref['size'] > FLAGS.max_gigabytes:
1658- raise driver.NoValidHost("All hosts have too many gigabytes")
1659+ raise driver.NoValidHost(_("All hosts have too many "
1660+ "gigabytes"))
1661 if self.service_is_up(service):
1662 # NOTE(vish): this probably belongs in the manager, if we
1663 # can generalize this somehow
1664@@ -76,7 +77,7 @@
1665 {'host': service['host'],
1666 'scheduled_at': now})
1667 return service['host']
1668- raise driver.NoValidHost("No hosts found")
1669+ raise driver.NoValidHost(_("No hosts found"))
1670
1671 def schedule_set_network_host(self, context, *_args, **_kwargs):
1672 """Picks a host that is up and has the fewest networks."""
1673@@ -85,7 +86,7 @@
1674 for result in results:
1675 (service, instance_count) = result
1676 if instance_count >= FLAGS.max_networks:
1677- raise driver.NoValidHost("All hosts have too many networks")
1678+ raise driver.NoValidHost(_("All hosts have too many networks"))
1679 if self.service_is_up(service):
1680 return service['host']
1681- raise driver.NoValidHost("No hosts found")
1682+ raise driver.NoValidHost(_("No hosts found"))
1683
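The simple.py hunk splits one message across two literals ("All hosts have too many " "gigabytes") purely for line length; Python joins adjacent string literals at compile time, so `_()` still receives a single message id. A small sketch:

```python
import gettext

gettext.NullTranslations().install()

# Adjacent string literals are concatenated before the call, so the
# extraction tools and the translator see one complete sentence.
msg = _("All hosts have too many "
        "gigabytes")
```

Wrapping each fragment in its own `_()` instead would produce two partial message ids that cannot be translated sensibly.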
1684=== modified file 'nova/service.py'
1685--- nova/service.py 2010-12-16 18:52:30 +0000
1686+++ nova/service.py 2010-12-22 15:43:09 +0000
1687@@ -151,7 +151,7 @@
1688 report_interval = FLAGS.report_interval
1689 if not periodic_interval:
1690 periodic_interval = FLAGS.periodic_interval
1691- logging.warn("Starting %s node", topic)
1692+ logging.warn(_("Starting %s node"), topic)
1693 service_obj = cls(host, binary, topic, manager,
1694 report_interval, periodic_interval)
1695
1696@@ -163,7 +163,7 @@
1697 try:
1698 db.service_destroy(context.get_admin_context(), self.service_id)
1699 except exception.NotFound:
1700- logging.warn("Service killed that has no database entry")
1701+ logging.warn(_("Service killed that has no database entry"))
1702
1703 def stop(self):
1704 for x in self.timers:
1705@@ -184,8 +184,8 @@
1706 try:
1707 service_ref = db.service_get(ctxt, self.service_id)
1708 except exception.NotFound:
1709- logging.debug("The service database object disappeared, "
1710- "Recreating it.")
1711+ logging.debug(_("The service database object disappeared, "
1712+ "Recreating it."))
1713 self._create_service_ref(ctxt)
1714 service_ref = db.service_get(ctxt, self.service_id)
1715
1716@@ -196,13 +196,13 @@
1717 # TODO(termie): make this pattern be more elegant.
1718 if getattr(self, "model_disconnected", False):
1719 self.model_disconnected = False
1720- logging.error("Recovered model server connection!")
1721+ logging.error(_("Recovered model server connection!"))
1722
1723 # TODO(vish): this should probably only catch connection errors
1724 except Exception: # pylint: disable-msg=W0702
1725 if not getattr(self, "model_disconnected", False):
1726 self.model_disconnected = True
1727- logging.exception("model server went away")
1728+ logging.exception(_("model server went away"))
1729
1730
1731 def serve(*services):
1732@@ -221,7 +221,7 @@
1733 else:
1734 logging.getLogger().setLevel(logging.WARNING)
1735
1736- logging.debug("Full set of FLAGS:")
1737+ logging.debug(_("Full set of FLAGS:"))
1738 for flag in FLAGS:
1739 logging.debug("%s : %s" % (flag, FLAGS.get(flag, None)))
1740
1741
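Notably, none of the modified modules gains an import for `_`; the branch presumably relies on `gettext.install()` having placed `_()` into builtins once at startup, roughly like this (the "nova" domain name is an assumption for illustration):

```python
import builtins
import gettext

# install() binds the domain's gettext function to builtins._, so every
# module can call _() without importing anything; with no .mo catalog
# found it falls back to the identity translation.
gettext.install("nova")

msg = _("Starting %s node") % "compute"
```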
1742=== modified file 'nova/twistd.py'
1743--- nova/twistd.py 2010-12-14 21:56:42 +0000
1744+++ nova/twistd.py 2010-12-22 15:43:09 +0000
1745@@ -208,7 +208,7 @@
1746 pid = None
1747
1748 if not pid:
1749- message = "pidfile %s does not exist. Daemon not running?\n"
1750+ message = _("pidfile %s does not exist. Daemon not running?\n")
1751 sys.stderr.write(message % pidfile)
1752 # Not an error in a restart
1753 return
1754@@ -229,7 +229,7 @@
1755
1756
1757 def serve(filename):
1758- logging.debug("Serving %s" % filename)
1759+ logging.debug(_("Serving %s") % filename)
1760 name = os.path.basename(filename)
1761 OptionsClass = WrapTwistedOptions(TwistdServerOptions)
1762 options = OptionsClass()
1763@@ -281,7 +281,7 @@
1764 else:
1765 logging.getLogger().setLevel(logging.WARNING)
1766
1767- logging.debug("Full set of FLAGS:")
1768+ logging.debug(_("Full set of FLAGS:"))
1769 for flag in FLAGS:
1770 logging.debug("%s : %s" % (flag, FLAGS.get(flag, None)))
1771
1772
1773=== modified file 'nova/utils.py'
1774--- nova/utils.py 2010-12-16 18:52:30 +0000
1775+++ nova/utils.py 2010-12-22 15:43:09 +0000
1776@@ -50,7 +50,7 @@
1777 __import__(mod_str)
1778 return getattr(sys.modules[mod_str], class_str)
1779 except (ImportError, ValueError, AttributeError):
1780- raise exception.NotFound('Class %s cannot be found' % class_str)
1781+ raise exception.NotFound(_('Class %s cannot be found') % class_str)
1782
1783
1784 def import_object(import_str):
1785@@ -64,7 +64,7 @@
1786
1787
1788 def fetchfile(url, target):
1789- logging.debug("Fetching %s" % url)
1790+ logging.debug(_("Fetching %s") % url)
1791 # c = pycurl.Curl()
1792 # fp = open(target, "wb")
1793 # c.setopt(c.URL, url)
1794@@ -76,7 +76,7 @@
1795
1796
1797 def execute(cmd, process_input=None, addl_env=None, check_exit_code=True):
1798- logging.debug("Running cmd (subprocess): %s", cmd)
1799+ logging.debug(_("Running cmd (subprocess): %s"), cmd)
1800 env = os.environ.copy()
1801 if addl_env:
1802 env.update(addl_env)
1803@@ -89,7 +89,7 @@
1804 result = obj.communicate()
1805 obj.stdin.close()
1806 if obj.returncode:
1807- logging.debug("Result was %s" % (obj.returncode))
1808+ logging.debug(_("Result was %s") % (obj.returncode))
1809 if check_exit_code and obj.returncode != 0:
1810 (stdout, stderr) = result
1811 raise ProcessExecutionError(exit_code=obj.returncode,
1812@@ -127,7 +127,7 @@
1813
1814
1815 def runthis(prompt, cmd, check_exit_code=True):
1816- logging.debug("Running %s" % (cmd))
1817+ logging.debug(_("Running %s") % (cmd))
1818 rv, err = execute(cmd, check_exit_code=check_exit_code)
1819
1820
1821@@ -160,7 +160,7 @@
1822 csock.close()
1823 return addr
1824 except socket.gaierror as ex:
1825- logging.warn("Couldn't get IP, using 127.0.0.1 %s", ex)
1826+ logging.warn(_("Couldn't get IP, using 127.0.0.1 %s"), ex)
1827 return "127.0.0.1"
1828
1829
1830@@ -204,7 +204,7 @@
1831 if not self.__backend:
1832 backend_name = self.__pivot.value
1833 if backend_name not in self.__backends:
1834- raise exception.Error('Invalid backend: %s' % backend_name)
1835+ raise exception.Error(_('Invalid backend: %s') % backend_name)
1836
1837 backend = self.__backends[backend_name]
1838 if type(backend) == type(tuple()):
1839
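The utils.py hunks mix two interpolation styles: `logging.debug(_(...), arg)` hands the argument to the logging call, while `logging.debug(_(...) % arg)` formats eagerly. Both translate the template first; the comma form additionally defers the `%` work until the record is actually emitted. A sketch of the difference:

```python
import gettext
import logging

gettext.NullTranslations().install()
logging.basicConfig(level=logging.INFO)  # DEBUG records are filtered out
log = logging.getLogger("demo")

cmd = "ls -l /tmp"

# Lazy: the template is translated, but %-formatting is skipped
# entirely because this DEBUG record never passes the level check.
log.debug(_("Running cmd (subprocess): %s"), cmd)

# Eager: translation and formatting both happen immediately.
eager = _("Running cmd (subprocess): %s") % cmd
```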
1840=== modified file 'nova/virt/connection.py'
1841--- nova/virt/connection.py 2010-11-29 16:31:31 +0000
1842+++ nova/virt/connection.py 2010-12-22 15:43:09 +0000
1843@@ -66,6 +66,6 @@
1844 raise Exception('Unknown connection type "%s"' % t)
1845
1846 if conn is None:
1847- logging.error('Failed to open connection to the hypervisor')
1848+ logging.error(_('Failed to open connection to the hypervisor'))
1849 sys.exit(1)
1850 return conn
1851
1852=== modified file 'nova/virt/fake.py'
1853--- nova/virt/fake.py 2010-12-16 00:31:32 +0000
1854+++ nova/virt/fake.py 2010-12-22 15:43:09 +0000
1855@@ -175,7 +175,8 @@
1856 knowledge of the instance
1857 """
1858 if instance_name not in self.instances:
1859- raise exception.NotFound("Instance %s Not Found" % instance_name)
1860+ raise exception.NotFound(_("Instance %s Not Found")
1861+ % instance_name)
1862 i = self.instances[instance_name]
1863 return {'state': i._state,
1864 'max_mem': 0,
1865
1866=== modified file 'nova/virt/libvirt_conn.py'
1867--- nova/virt/libvirt_conn.py 2010-12-20 22:04:12 +0000
1868+++ nova/virt/libvirt_conn.py 2010-12-22 15:43:09 +0000
1869@@ -108,7 +108,7 @@
1870 @property
1871 def _conn(self):
1872 if not self._wrapped_conn or not self._test_connection():
1873- logging.debug('Connecting to libvirt: %s' % self.libvirt_uri)
1874+ logging.debug(_('Connecting to libvirt: %s') % self.libvirt_uri)
1875 self._wrapped_conn = self._connect(self.libvirt_uri,
1876 self.read_only)
1877 return self._wrapped_conn
1878@@ -120,7 +120,7 @@
1879 except libvirt.libvirtError as e:
1880 if e.get_error_code() == libvirt.VIR_ERR_SYSTEM_ERROR and \
1881 e.get_error_domain() == libvirt.VIR_FROM_REMOTE:
1882- logging.debug('Connection to libvirt broke')
1883+ logging.debug(_('Connection to libvirt broke'))
1884 return False
1885 raise
1886
1887@@ -191,7 +191,7 @@
1888
1889 def _cleanup(self, instance):
1890 target = os.path.join(FLAGS.instances_path, instance['name'])
1891- logging.info('instance %s: deleting instance files %s',
1892+ logging.info(_('instance %s: deleting instance files %s'),
1893 instance['name'], target)
1894 if os.path.exists(target):
1895 shutil.rmtree(target)
1896@@ -233,7 +233,7 @@
1897 mount_device = mountpoint.rpartition("/")[2]
1898 xml = self._get_disk_xml(virt_dom.XMLDesc(0), mount_device)
1899 if not xml:
1900- raise exception.NotFound("No disk at %s" % mount_device)
1901+ raise exception.NotFound(_("No disk at %s") % mount_device)
1902 virt_dom.detachDevice(xml)
1903
1904 @exception.wrap_exception
1905@@ -249,10 +249,10 @@
1906 db.instance_set_state(context.get_admin_context(),
1907 instance['id'], state)
1908 if state == power_state.RUNNING:
1909- logging.debug('instance %s: rebooted', instance['name'])
1910+ logging.debug(_('instance %s: rebooted'), instance['name'])
1911 timer.stop()
1912 except Exception, exn:
1913- logging.error('_wait_for_reboot failed: %s', exn)
1914+ logging.error(_('_wait_for_reboot failed: %s'), exn)
1915 db.instance_set_state(context.get_admin_context(),
1916 instance['id'],
1917 power_state.SHUTDOWN)
1918@@ -287,10 +287,10 @@
1919 state = self.get_info(instance['name'])['state']
1920 db.instance_set_state(None, instance['id'], state)
1921 if state == power_state.RUNNING:
1922- logging.debug('instance %s: rescued', instance['name'])
1923+ logging.debug(_('instance %s: rescued'), instance['name'])
1924 timer.stop()
1925 except Exception, exn:
1926- logging.error('_wait_for_rescue failed: %s', exn)
1927+ logging.error(_('_wait_for_rescue failed: %s'), exn)
1928 db.instance_set_state(None,
1929 instance['id'],
1930 power_state.SHUTDOWN)
1931@@ -315,7 +315,7 @@
1932 NWFilterFirewall(self._conn).setup_nwfilters_for_instance(instance)
1933 self._create_image(instance, xml)
1934 self._conn.createXML(xml, 0)
1935- logging.debug("instance %s: is running", instance['name'])
1936+ logging.debug(_("instance %s: is running"), instance['name'])
1937
1938 timer = utils.LoopingCall(f=None)
1939
1940@@ -325,10 +325,10 @@
1941 db.instance_set_state(context.get_admin_context(),
1942 instance['id'], state)
1943 if state == power_state.RUNNING:
1944- logging.debug('instance %s: booted', instance['name'])
1945+ logging.debug(_('instance %s: booted'), instance['name'])
1946 timer.stop()
1947 except:
1948- logging.exception('instance %s: failed to boot',
1949+ logging.exception(_('instance %s: failed to boot'),
1950 instance['name'])
1951 db.instance_set_state(context.get_admin_context(),
1952 instance['id'],
1953@@ -343,7 +343,7 @@
1954 virsh_output = virsh_output[0].strip()
1955
1956 if virsh_output.startswith('/dev/'):
1957- logging.info('cool, it\'s a device')
1958+ logging.info(_('cool, it\'s a device'))
1959 out, err = utils.execute("sudo dd if=%s iflag=nonblock" %
1960 virsh_output, check_exit_code=False)
1961 return out
1962@@ -351,7 +351,7 @@
1963 return ''
1964
1965 def _append_to_file(self, data, fpath):
1966- logging.info('data: %r, fpath: %r' % (data, fpath))
1967+ logging.info(_('data: %r, fpath: %r') % (data, fpath))
1968 fp = open(fpath, 'a+')
1969 fp.write(data)
1970 return fpath
1971@@ -393,7 +393,7 @@
1972
1973 # TODO(termie): these are blocking calls, it would be great
1974 # if they weren't.
1975- logging.info('instance %s: Creating image', inst['name'])
1976+ logging.info(_('instance %s: Creating image'), inst['name'])
1977 f = open(basepath('libvirt.xml'), 'w')
1978 f.write(libvirt_xml)
1979 f.close()
1980@@ -449,10 +449,10 @@
1981 'dns': network_ref['dns']}
1982 if key or net:
1983 if key:
1984- logging.info('instance %s: injecting key into image %s',
1985+ logging.info(_('instance %s: injecting key into image %s'),
1986 inst['name'], inst.image_id)
1987 if net:
1988- logging.info('instance %s: injecting net into image %s',
1989+ logging.info(_('instance %s: injecting net into image %s'),
1990 inst['name'], inst.image_id)
1991 try:
1992 disk.inject_data(basepath('disk-raw'), key, net,
1993@@ -460,8 +460,8 @@
1994 execute=execute)
1995 except Exception as e:
1996 # This could be a windows image, or a vmdk format disk
1997- logging.warn('instance %s: ignoring error injecting data'
1998- ' into image %s (%s)',
1999+ logging.warn(_('instance %s: ignoring error injecting data'
2000+ ' into image %s (%s)'),
2001 inst['name'], inst.image_id, e)
2002
2003 if inst['kernel_id']:
2004@@ -488,7 +488,8 @@
2005
2006 def to_xml(self, instance, rescue=False):
2007 # TODO(termie): cache?
2008- logging.debug('instance %s: starting toXML method', instance['name'])
2009+ logging.debug(_('instance %s: starting toXML method'),
2010+ instance['name'])
2011 network = db.project_get_network(context.get_admin_context(),
2012 instance['project_id'])
2013 # FIXME(vish): stick this in db
2014@@ -519,7 +520,8 @@
2015 xml_info['disk'] = xml_info['basepath'] + "/disk"
2016
2017 xml = str(Template(self.libvirt_xml, searchList=[xml_info]))
2018- logging.debug('instance %s: finished toXML method', instance['name'])
2019+ logging.debug(_('instance %s: finished toXML method'),
2020+ instance['name'])
2021
2022 return xml
2023
2024@@ -527,7 +529,8 @@
2025 try:
2026 virt_dom = self._conn.lookupByName(instance_name)
2027 except:
2028- raise exception.NotFound("Instance %s not found" % instance_name)
2029+ raise exception.NotFound(_("Instance %s not found")
2030+ % instance_name)
2031 (state, max_mem, mem, num_cpu, cpu_time) = virt_dom.info()
2032 return {'state': state,
2033 'max_mem': max_mem,
2034
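The libvirt hunks show why the untranslated template must be the catalog key: `_()` swaps the format string, and interpolation then fills the holes in whatever language came back. A toy in-memory catalog (the Spanish entry is invented for illustration, not from any real nova catalog):

```python
import gettext

class FakeSpanish(gettext.NullTranslations):
    """Stand-in for a compiled .mo catalog."""
    _catalog = {"No disk at %s": "No hay disco en %s"}

    def gettext(self, message):
        return self._catalog.get(message, message)

FakeSpanish().install()

# The English template is looked up; the device path is interpolated
# into the translated string afterwards, untouched by translation.
msg = _("No disk at %s") % "/dev/vdb"
```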
2035=== modified file 'nova/virt/xenapi_conn.py'
2036--- nova/virt/xenapi_conn.py 2010-12-16 00:31:32 +0000
2037+++ nova/virt/xenapi_conn.py 2010-12-22 15:43:09 +0000
2038@@ -93,10 +93,10 @@
2039 username = FLAGS.xenapi_connection_username
2040 password = FLAGS.xenapi_connection_password
2041 if not url or password is None:
2042- raise Exception('Must specify xenapi_connection_url, '
2043- 'xenapi_connection_username (optionally), and '
2044- 'xenapi_connection_password to use '
2045- 'connection_type=xenapi')
2046+ raise Exception(_('Must specify xenapi_connection_url, '
2047+ 'xenapi_connection_username (optionally), and '
2048+ 'xenapi_connection_password to use '
2049+ 'connection_type=xenapi'))
2050 return XenAPIConnection(url, username, password)
2051
2052
2053@@ -204,11 +204,11 @@
2054 return
2055 elif status == 'success':
2056 result = self._session.xenapi.task.get_result(task)
2057- logging.info('Task %s status: success. %s', task, result)
2058+ logging.info(_('Task %s status: success. %s'), task, result)
2059 done.send(_parse_xmlrpc_value(result))
2060 else:
2061 error_info = self._session.xenapi.task.get_error_info(task)
2062- logging.warn('Task %s status: %s. %s', task, status,
2063+ logging.warn(_('Task %s status: %s. %s'), task, status,
2064 error_info)
2065 done.send_exception(XenAPI.Failure(error_info))
2066 #logging.debug('Polling task %s done.', task)
2067@@ -222,7 +222,7 @@
2068 try:
2069 return func(*args, **kwargs)
2070 except XenAPI.Failure, exc:
2071- logging.debug("Got exception: %s", exc)
2072+ logging.debug(_("Got exception: %s"), exc)
2073 if (len(exc.details) == 4 and
2074 exc.details[0] == 'XENAPI_PLUGIN_EXCEPTION' and
2075 exc.details[2] == 'Failure'):
2076@@ -235,7 +235,7 @@
2077 else:
2078 raise
2079 except xmlrpclib.ProtocolError, exc:
2080- logging.debug("Got exception: %s", exc)
2081+ logging.debug(_("Got exception: %s"), exc)
2082 raise
2083
2084
2085
2086=== modified file 'nova/volume/driver.py'
2087--- nova/volume/driver.py 2010-12-15 00:05:39 +0000
2088+++ nova/volume/driver.py 2010-12-22 15:43:09 +0000
2089@@ -73,14 +73,14 @@
2090 tries = tries + 1
2091 if tries >= FLAGS.num_shell_tries:
2092 raise
2093- logging.exception("Recovering from a failed execute."
2094- "Try number %s", tries)
2095+ logging.exception(_("Recovering from a failed execute."
2096+ "Try number %s"), tries)
2097 time.sleep(tries ** 2)
2098
2099 def check_for_setup_error(self):
2100 """Returns an error if prerequisites aren't met"""
2101 if not os.path.isdir("/dev/%s" % FLAGS.volume_group):
2102- raise exception.Error("volume group %s doesn't exist"
2103+ raise exception.Error(_("volume group %s doesn't exist")
2104 % FLAGS.volume_group)
2105
2106 def create_volume(self, volume):
2107@@ -205,7 +205,7 @@
2108 @staticmethod
2109 def fake_execute(cmd, *_args, **_kwargs):
2110 """Execute that simply logs the command."""
2111- logging.debug("FAKE AOE: %s", cmd)
2112+ logging.debug(_("FAKE AOE: %s"), cmd)
2113 return (None, None)
2114
2115
2116@@ -310,5 +310,5 @@
2117 @staticmethod
2118 def fake_execute(cmd, *_args, **_kwargs):
2119 """Execute that simply logs the command."""
2120- logging.debug("FAKE ISCSI: %s", cmd)
2121+ logging.debug(_("FAKE ISCSI: %s"), cmd)
2122 return (None, None)
2123
2124=== modified file 'nova/volume/manager.py'
2125--- nova/volume/manager.py 2010-12-09 01:21:43 +0000
2126+++ nova/volume/manager.py 2010-12-22 15:43:09 +0000
2127@@ -81,7 +81,7 @@
2128 self.driver.check_for_setup_error()
2129 ctxt = context.get_admin_context()
2130 volumes = self.db.volume_get_all_by_host(ctxt, self.host)
2131- logging.debug("Re-exporting %s volumes", len(volumes))
2132+ logging.debug(_("Re-exporting %s volumes"), len(volumes))
2133 for volume in volumes:
2134 self.driver.ensure_export(ctxt, volume)
2135
2136@@ -89,7 +89,7 @@
2137 """Creates and exports the volume."""
2138 context = context.elevated()
2139 volume_ref = self.db.volume_get(context, volume_id)
2140- logging.info("volume %s: creating", volume_ref['name'])
2141+ logging.info(_("volume %s: creating"), volume_ref['name'])
2142
2143 self.db.volume_update(context,
2144 volume_id,
2145@@ -98,18 +98,18 @@
2146 # before passing it to the driver.
2147 volume_ref['host'] = self.host
2148
2149- logging.debug("volume %s: creating lv of size %sG",
2150+ logging.debug(_("volume %s: creating lv of size %sG"),
2151 volume_ref['name'], volume_ref['size'])
2152 self.driver.create_volume(volume_ref)
2153
2154- logging.debug("volume %s: creating export", volume_ref['name'])
2155+ logging.debug(_("volume %s: creating export"), volume_ref['name'])
2156 self.driver.create_export(context, volume_ref)
2157
2158 now = datetime.datetime.utcnow()
2159 self.db.volume_update(context,
2160 volume_ref['id'], {'status': 'available',
2161 'launched_at': now})
2162- logging.debug("volume %s: created successfully", volume_ref['name'])
2163+ logging.debug(_("volume %s: created successfully"), volume_ref['name'])
2164 return volume_id
2165
2166 def delete_volume(self, context, volume_id):
2167@@ -117,15 +117,15 @@
2168 context = context.elevated()
2169 volume_ref = self.db.volume_get(context, volume_id)
2170 if volume_ref['attach_status'] == "attached":
2171- raise exception.Error("Volume is still attached")
2172+ raise exception.Error(_("Volume is still attached"))
2173 if volume_ref['host'] != self.host:
2174- raise exception.Error("Volume is not local to this node")
2175- logging.debug("volume %s: removing export", volume_ref['name'])
2176+ raise exception.Error(_("Volume is not local to this node"))
2177+ logging.debug(_("volume %s: removing export"), volume_ref['name'])
2178 self.driver.remove_export(context, volume_ref)
2179- logging.debug("volume %s: deleting", volume_ref['name'])
2180+ logging.debug(_("volume %s: deleting"), volume_ref['name'])
2181 self.driver.delete_volume(volume_ref)
2182 self.db.volume_destroy(context, volume_id)
2183- logging.debug("volume %s: deleted successfully", volume_ref['name'])
2184+ logging.debug(_("volume %s: deleted successfully"), volume_ref['name'])
2185 return True
2186
2187 def setup_compute_volume(self, context, volume_id):
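Once strings are marked like this, a message catalog can be extracted (e.g. with xgettext) and compiled per locale; at runtime the domain is bound roughly as below. The "nova" domain and the locale directory are assumptions for illustration, and `fallback=True` keeps the original English strings when no compiled catalog exists:

```python
import gettext

# Bind the (hypothetical) "nova" domain; with no .mo file present,
# fallback=True degrades gracefully to NullTranslations instead of
# raising FileNotFoundError.
t = gettext.translation("nova", localedir="/nonexistent/locale",
                        languages=["es"], fallback=True)
t.install()

msg = _("volume %s: created successfully") % "vol-0001"
```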