Merge lp:~javier.collado/utah/bug1095669-2 into lp:utah

Proposed by Javier Collado
Status: Merged
Approved by: Max Brustkern
Approved revision: 814
Merged at revision: 803
Proposed branch: lp:~javier.collado/utah/bug1095669-2
Merge into: lp:utah
Diff against target: 600 lines (+161/-79) (has conflicts)
12 files modified
examples/run_install_test.py (+3/-0)
examples/run_test_bamboo_feeder.py (+5/-1)
examples/run_test_cobbler.py (+5/-1)
examples/run_test_vm.py (+3/-1)
examples/run_utah_tests.py (+3/-0)
utah/config.py (+5/-0)
utah/provisioning/baremetal/inventory.py (+4/-5)
utah/provisioning/inventory/sqlite.py (+32/-0)
utah/provisioning/provisioning.py (+1/-29)
utah/provisioning/ssh.py (+27/-26)
utah/provisioning/vm/vm.py (+5/-5)
utah/run.py (+68/-11)
Text conflict in utah/provisioning/inventory/sqlite.py
To merge this branch: bzr merge lp:~javier.collado/utah/bug1095669-2
Reviewer Review Type Date Requested Status
Max Brustkern (community) Approve
Review via email: mp+144365@code.launchpad.net

Description of the change

This branch adds a wrapper method to log all the SQL statements executed during
the provisioning step.

In addition to this, since the inventory is created before the machine object,
there are a lot of changes regarding logging:
- root logger object is configured in all run_* scripts
- machine logger propagates messages and doesn't contain any handler by itself
- logging messages outside the machine object now use the root logger
- the log format has been updated to include the python module name
- paramiko logs are only written to debug log file regardless of their level

The big change is that, since the root logger object is configured using pretty
much the same code that was used for the machine logger, it's now possible to
use logging anywhere, not only in the machine object. This should be useful for
adding logging messages that previously weren't even printed because the root
logger had no handlers.
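The setup described above can be sketched as follows (a minimal illustration of the propagation-based approach; `CONSOLE_LOG_LEVEL` and the helper name are stand-ins, not the actual utah.config values):

```python
import logging

# Stand-in for the console level normally read from the configuration
CONSOLE_LOG_LEVEL = logging.WARNING


def configure_root_logger(debug=False):
    """Configure the root logger once; every other logger just propagates."""
    root = logging.getLogger()
    if root.handlers:
        # Already configured by another run_* script; don't add handlers twice
        return
    root.setLevel(logging.NOTSET)
    formatter = logging.Formatter('%(asctime)s %(name)s '
                                  '%(levelname)s: %(message)s')
    console = logging.StreamHandler()
    # The -d/--debug flag lowers only the console handler level
    console.setLevel(logging.DEBUG if debug else CONSOLE_LOG_LEVEL)
    console.setFormatter(formatter)
    root.addHandler(console)


configure_root_logger()

# A machine logger no longer needs handlers of its own
machine_logger = logging.getLogger('machine-01')
machine_logger.info('this propagates up to the root handlers')
```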

lp:~javier.collado/utah/bug1095669-2 updated
810. By Javier Collado

Added separate file to log ssh related messages

The new file receives the whole set of SSH logs, including the ones from
paramiko. Note that while the paramiko logger doesn't propagate, the ssh one
does, so the SSH-related messages that were written to the log files before
this change will still be written.
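The propagation behaviour described here can be illustrated like this (using an in-memory stream in place of the real ssh log file):

```python
import io
import logging

ssh_log = io.StringIO()  # stand-in for the per-host ssh log file
ssh_handler = logging.StreamHandler(ssh_log)
ssh_handler.setLevel(logging.DEBUG)

# paramiko messages go only to the ssh log
paramiko_logger = logging.getLogger('paramiko')
paramiko_logger.setLevel(logging.DEBUG)
paramiko_logger.propagate = False
paramiko_logger.addHandler(ssh_handler)

# ssh messages go to the ssh log AND still propagate to the root handlers
ssh_logger = logging.getLogger('ssh')
ssh_logger.setLevel(logging.DEBUG)
ssh_logger.addHandler(ssh_handler)

ssh_logger.debug('Connecting SSH')
paramiko_logger.debug('transport: negotiating')
```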

811. By Javier Collado

Removed module name from logs

812. By Javier Collado

Fixed: activecheck shouldn't use the ssh logger

813. By Javier Collado

Set console log level back to WARNING

814. By Javier Collado

Set console log level to debug if -d/--debug option is passed

Revision history for this message
Javier Collado (javier.collado) wrote :

I've added a few changes to the logging part that don't really change anything
with regard to SQL.

The biggest change is that ssh log is now written to a different file in case
detailed information is needed to troubleshoot a problem. In particular, the
logs from the paramiko library are also included in that file.

Aside from this, I think that the -d/--debug flag in the command line is kind
of confusing. In this branch, the console and log file handlers use the levels
set in the configuration file (the handler for the debug file is always set
to debug). The -d/--debug flag is used just to set the console logging to
debug, but not the file log. I'm not sure if this is the intended behavior,
but I don't think setting the file log to debug is of great benefit when
there's already a debug file. Maybe the -d/--debug flag should be removed,
since it's not very likely that we'd troubleshoot something looking just at
the console instead of checking the debug file. Please let me know what you
think.

Revision history for this message
Max Brustkern (nuclearbob) wrote :

When -d was originally implemented, we didn't have a separate debug log file. I think enabling or disabling debug affects some code paths for cleanup routines as well. I think removing that option in favor of newer ones would be good. We can have explicit options for cleaning things up or not, and leave the log level stuff to the configuration files. I think that would be worth doing as a separate merge, since it'll affect some cleanup routines and need more testing.

Revision history for this message
Max Brustkern (nuclearbob) wrote :

Lines 214, 219, and 247 of the merge look like leftover bzr merge conflict markers. Those are all in utah/provisioning/inventory/sqlite.py. I wonder if lines 215-218 are still needed as well, since they would seem to be superseded by the lines after that.

Other than that, it looks like it should improve things a lot. If we do decide to start using rotating file handlers instead of external log rotation, this will make that a lot easier as well.
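For what it's worth, a resolved version of that hunk might look like the following sketch, keeping the connection timeout from trunk and the logging wrapper from this branch (`SQLITE_TIMEOUT` is a stand-in for `config.sqlite3_db_connection_timeout`, and the base-class plumbing is omitted):

```python
import logging
import sqlite3

# Stand-in for config.sqlite3_db_connection_timeout
SQLITE_TIMEOUT = 5.0


class SQLiteInventory(object):
    def __init__(self, db=':memory:'):
        # Trunk side: keep the connection timeout.
        # Branch side: private attribute plus the execute() wrapper.
        self._connection = sqlite3.connect(db, SQLITE_TIMEOUT)
        self._connection.isolation_level = None
        self._connection.row_factory = sqlite3.Row

    def execute(self, sql, parameters=None):
        """Log every SQL statement before handing it to the connection."""
        if parameters is None:
            parameters = []
        logging.debug('Executing SQL statement: {}, {}'
                      .format(sql, parameters))
        return self._connection.execute(sql, parameters)
```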

review: Approve
Revision history for this message
Max Brustkern (nuclearbob) wrote :

I'm not sure if I should have marked approve or not. I feel like the actual changes are good, and I'm confident you'll fix the typos, but technically I wouldn't merge it in its current state. I think anyone who would merge it will be reading closely enough to know to fix those things and retest.

Revision history for this message
Javier Collado (javier.collado) wrote :

I think it's ok to mark the merge request as approved for now because we don't
have any auto-merging mechanism in place.

However, once tarmac is finally set up, an attempt to merge the code will be
made in such a situation, so we should avoid approving until the changes are
completely fine. Anyway, in this particular case, the merge would fail because
of the conflicts that you saw in the diff.

Revision history for this message
Javier Collado (javier.collado) wrote :

The merge conflict is because both lp:bug1095669 and lp:bug1095669-2 were
branched from the same dev branch revision and the latter is already merged.
Anyway, that's not a problem; I'll fix the conflict in the merge itself.

Preview Diff

1=== modified file 'examples/run_install_test.py'
2--- examples/run_install_test.py 2012-12-19 15:41:25 +0000
3+++ examples/run_install_test.py 2013-01-23 12:29:24 +0000
4@@ -27,6 +27,7 @@
5 file_arguments,
6 run_tests,
7 virtual_arguments,
8+ configure_logging,
9 )
10
11
12@@ -60,6 +61,8 @@
13 exitstatus = 0
14 machine = None
15
16+ configure_logging(args.debug)
17+
18 try:
19 inventory = TinySQLiteInventory()
20 machine = inventory.request(
21
22=== modified file 'examples/run_test_bamboo_feeder.py'
23--- examples/run_test_bamboo_feeder.py 2012-12-19 15:41:25 +0000
24+++ examples/run_test_bamboo_feeder.py 2013-01-23 12:29:24 +0000
25@@ -18,6 +18,7 @@
26 import argparse
27 import os
28 import sys
29+import logging
30
31 from utah.exceptions import UTAHException
32 from utah.group import check_user_group, print_group_error_message
33@@ -29,6 +30,7 @@
34 custom_arguments,
35 name_argument,
36 run_tests,
37+ configure_logging,
38 )
39
40
41@@ -61,6 +63,8 @@
42 exitstatus = 0
43 machine = None
44
45+ configure_logging(args.debug)
46+
47 try:
48 inventory = ManualBaremetalSQLiteInventory(
49 db=os.path.join('~', '.utah-bamboofeeder-inventory'),
50@@ -79,7 +83,7 @@
51 except UTAHException as error:
52 mesg = 'Exception: ' + str(error)
53 try:
54- machine.logger.error(mesg)
55+ logging.error(mesg)
56 except (AttributeError, NameError):
57 sys.stderr.write(mesg)
58 exitstatus = 2
59
60=== modified file 'examples/run_test_cobbler.py'
61--- examples/run_test_cobbler.py 2012-12-19 15:41:25 +0000
62+++ examples/run_test_cobbler.py 2013-01-23 12:29:24 +0000
63@@ -17,6 +17,7 @@
64
65 import argparse
66 import sys
67+import logging
68
69 from utah.exceptions import UTAHException
70 from utah.group import check_user_group, print_group_error_message
71@@ -28,6 +29,7 @@
72 custom_arguments,
73 name_argument,
74 run_tests,
75+ configure_logging,
76 )
77
78
79@@ -60,6 +62,8 @@
80 exitstatus = 0
81 machine = None
82
83+ configure_logging(args.debug)
84+
85 try:
86 inventory = ManualBaremetalSQLiteInventory()
87 kw = {}
88@@ -84,7 +88,7 @@
89 except UTAHException as error:
90 mesg = 'Exception: ' + str(error)
91 try:
92- machine.logger.error(mesg)
93+ logging.error(mesg)
94 except (AttributeError, NameError):
95 sys.stderr.write(mesg)
96 exitstatus = 2
97
98=== modified file 'examples/run_test_vm.py'
99--- examples/run_test_vm.py 2012-12-19 15:41:25 +0000
100+++ examples/run_test_vm.py 2013-01-23 12:29:24 +0000
101@@ -21,7 +21,7 @@
102 from utah.exceptions import UTAHException
103 from utah.group import check_user_group, print_group_error_message
104 from utah.provisioning.vm.vm import TinySQLiteInventory
105-from utah.run import common_arguments, run_tests
106+from utah.run import common_arguments, run_tests, configure_logging
107
108
109 def get_parser():
110@@ -50,6 +50,8 @@
111 exitstatus = 0
112 machine = None
113
114+ configure_logging(args.debug)
115+
116 try:
117 inventory = TinySQLiteInventory()
118 kw = {}
119
120=== modified file 'examples/run_utah_tests.py'
121--- examples/run_utah_tests.py 2012-12-19 15:41:25 +0000
122+++ examples/run_utah_tests.py 2013-01-23 12:29:24 +0000
123@@ -26,6 +26,7 @@
124 file_arguments,
125 name_argument,
126 virtual_arguments,
127+ configure_logging
128 )
129 from utah.timeout import timeout, UTAHTimeout
130 from run_install_test import run_install_test
131@@ -70,6 +71,8 @@
132 else:
133 machinetype = args.machinetype
134
135+ configure_logging(args.debug)
136+
137 # Default is now CustomVM
138 function = run_install_test
139
140
141=== modified file 'utah/config.py'
142--- utah/config.py 2013-01-22 10:35:02 +0000
143+++ utah/config.py 2013-01-23 12:29:24 +0000
144@@ -171,6 +171,11 @@
145 DEFAULTS['logpath'],
146 LOCALDEFAULTS['hostname'] + '-debug.log')
147
148+# Log file used to log SSH related information
149+LOCALDEFAULTS['ssh_logfile'] = \
150+ os.path.join(DEFAULTS['logpath'],
151+ '{}-ssh.log'.format(LOCALDEFAULTS['hostname']))
152+
153 # This depends on a binary that may not be installed.
154 # Also, the null thing may seem obvious, but I found it here:
155 # http://stackoverflow.com/questions/699325/
156
157=== modified file 'utah/provisioning/baremetal/inventory.py'
158--- utah/provisioning/baremetal/inventory.py 2012-12-13 09:07:58 +0000
159+++ utah/provisioning/baremetal/inventory.py 2013-01-23 12:29:24 +0000
160@@ -41,8 +41,7 @@
161 super(ManualBaremetalSQLiteInventory, self).__init__(*args, db=db,
162 lockfile=lockfile,
163 **kw)
164- machines_count = (self.connection
165- .execute('SELECT COUNT(*) FROM machines')
166+ machines_count = (self.execute('SELECT COUNT(*) FROM machines')
167 .fetchall()[0][0])
168 if machines_count == 0:
169 raise UTAHProvisioningInventoryException('No machines in database')
170@@ -54,7 +53,7 @@
171 if name is not None:
172 query += ' WHERE name=?'
173 queryvars.append(name)
174- result = self.connection.execute(query, queryvars).fetchall()
175+ result = self.execute(query, queryvars).fetchall()
176 if result is None:
177 raise UTAHProvisioningInventoryException(
178 'No machines meet criteria')
179@@ -84,7 +83,7 @@
180 name = machineinfo.pop('name')
181 state = machineinfo.pop('state')
182 machineinfo.pop('pid')
183- update = self.connection.execute(
184+ update = self.execute(
185 "UPDATE machines SET pid=?, state='provisioned' WHERE machineid=?"
186 "AND state=?",
187 [os.getpid(), machineid, state]).rowcount
188@@ -115,7 +114,7 @@
189 'name required to release a machine')
190 query = "UPDATE machines SET state='available' WHERE name=?"
191 queryvars = [name]
192- update = self.connection.execute(query, queryvars).rowcount
193+ update = self.execute(query, queryvars).rowcount
194 if update == 1:
195 if machine is not None:
196 if machine in self.machines:
197
198=== modified file 'utah/provisioning/inventory/sqlite.py'
199--- utah/provisioning/inventory/sqlite.py 2013-01-22 10:35:02 +0000
200+++ utah/provisioning/inventory/sqlite.py 2013-01-23 12:29:24 +0000
201@@ -17,6 +17,8 @@
202
203 import sqlite3
204 import os
205+import logging
206+
207 from utah.provisioning.inventory.inventory import Inventory
208 from utah import config
209
210@@ -30,10 +32,40 @@
211 db = os.path.expanduser(db)
212 super(SQLiteInventory, self).__init__(*args, uniqueid=db, **kw)
213 self.db = db
214+<<<<<<< TREE
215 self.connection = sqlite3.connect(self.db,
216 config.sqlite3_db_connection_timeout)
217 self.connection.isolation_level = None
218 self.connection.row_factory = sqlite3.Row
219+=======
220+ self._connection = sqlite3.connect(self.db)
221+ self._connection.isolation_level = None
222+ self._connection.row_factory = sqlite3.Row
223+
224+ def execute(self, sql, parameters=None):
225+ """Execute sql statement and return cursor
226+
227+ This method is expected to be used as a wrapper around all
228+ `connection.execute` calls so that all the SQL statements are logged in
229+ case this information is needed to troubleshoot problems.
230+
231+ :param sql: A sql statement to be executed
232+ :type sql: str
233+ :param parameters:
234+ Parameters to use to replace the placeholder
235+ values in the sql statement.
236+ :type parameters:
237+ list (question mark placeholders) |
238+ dict (colon named placeholders)
239+ """
240+ if parameters is None:
241+ parameters = []
242+
243+ logging.debug('Executing SQL statement: {}, {}'
244+ .format(sql, parameters))
245+ cursor = self._connection.execute(sql, parameters)
246+ return cursor
247+>>>>>>> MERGE-SOURCE
248
249 def delete(self):
250 os.unlink(self.db)
251
252=== modified file 'utah/provisioning/provisioning.py'
253--- utah/provisioning/provisioning.py 2013-01-16 14:08:17 +0000
254+++ utah/provisioning/provisioning.py 2013-01-23 12:29:24 +0000
255@@ -265,33 +265,7 @@
256 Initialize the logging for the machine.
257 Subclasses can override or supplement to customize logging.
258 """
259- self.logger = logging.getLogger('utah.provisioning.' + str(self.name))
260- self.logger.propagate = False
261- self.logger.setLevel(logging.INFO)
262- for handler in list(self.logger.handlers):
263- self.logger.removeHandler(handler)
264- del handler
265- self.consolehandler = logging.StreamHandler(stream=sys.stderr)
266- log_format = logging.Formatter('%(asctime)s ' + self.name
267- + ' %(levelname)s: %(message)s')
268- console_formatter = log_format
269- self.consolehandler.setFormatter(console_formatter)
270- self.consolehandler.setLevel(config.consoleloglevel)
271- self.logger.addHandler(self.consolehandler)
272- self.filehandler = logging.handlers.WatchedFileHandler(config.logfile)
273- file_formatter = log_format
274- self.filehandler.setFormatter(file_formatter)
275- self.filehandler.setLevel(config.fileloglevel)
276- self.logger.addHandler(self.filehandler)
277- if config.debuglog is not None:
278- self.logger.setLevel(logging.DEBUG)
279- self.debughandler = (
280- logging.handlers.WatchedFileHandler(config.debuglog))
281- self.debughandler.setFormatter(file_formatter)
282- self.debughandler.setLevel(logging.DEBUG)
283- self.logger.addHandler(self.debughandler)
284- if self.debug:
285- self.logger.setLevel(logging.DEBUG)
286+ self.logger = logging.getLogger(self.name)
287
288 def _loggerunsetup(self):
289 """
290@@ -299,8 +273,6 @@
291 Principally used when a machine changes name after init.
292 """
293 del self.logger
294- del self.consolehandler
295- del self.filehandler
296
297 def provisioncheck(self):
298 """
299
300=== modified file 'utah/provisioning/ssh.py'
301--- utah/provisioning/ssh.py 2012-12-12 18:56:29 +0000
302+++ utah/provisioning/ssh.py 2013-01-23 12:29:24 +0000
303@@ -43,6 +43,7 @@
304 ssh_client = paramiko.SSHClient()
305 ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
306 self.ssh_client = ssh_client
307+ self.ssh_logger = logging.getLogger('ssh')
308
309 def run(self, command, _quiet=None, root=False, timeout=None):
310 """
311@@ -64,15 +65,15 @@
312 # With additions from here:
313 # http://od-eon.com/blogs/
314 # stefan/automating-remote-commands-over-ssh-paramiko/
315- self.logger.debug('Connecting SSH')
316+ self.ssh_logger.debug('Connecting SSH')
317 self.ssh_client.connect(self.name,
318 username=user,
319 key_filename=config.sshprivatekey)
320
321- self.logger.debug('Opening SSH session')
322+ self.ssh_logger.debug('Opening SSH session')
323 channel = self.ssh_client.get_transport().open_session()
324
325- self.logger.info('Running command through SSH: ' + commandstring)
326+ self.ssh_logger.info('Running command through SSH: ' + commandstring)
327 stdout = channel.makefile('rb')
328 stderr = channel.makefile_stderr('rb')
329 if timeout is None:
330@@ -81,22 +82,22 @@
331 utah.timeout.timeout(timeout, channel.exec_command, commandstring)
332 retval = channel.recv_exit_status()
333
334- self.logger.debug('Closing SSH connection')
335+ self.ssh_logger.debug('Closing SSH connection')
336 self.ssh_client.close()
337
338 log_level = logging.DEBUG if retval == 0 else logging.WARNING
339 log_message = 'Return code: {}'.format(retval)
340- self.logger.log(log_level, log_message)
341+ self.ssh_logger.log(log_level, log_message)
342
343- self.logger.debug('Standard output follows:')
344+ self.ssh_logger.debug('Standard output follows:')
345 stdout_lines = stdout.readlines()
346 for line in stdout_lines:
347- self.logger.debug(line.strip())
348+ self.ssh_logger.debug(line.strip())
349
350- self.logger.debug('Standard error follows:')
351+ self.ssh_logger.debug('Standard error follows:')
352 stderr_lines = stderr.readlines()
353 for line in stderr_lines:
354- self.logger.debug(line.strip())
355+ self.ssh_logger.debug(line.strip())
356
357 return retval, ''.join(stdout_lines), ''.join(stderr_lines)
358
359@@ -116,9 +117,9 @@
360 try:
361 for localpath in files:
362 if os.path.isfile(localpath):
363- self.logger.info('Uploading ' + localpath
364- + ' from the host to ' + target
365- + ' on the machine')
366+ self.ssh_logger.info('Uploading {} from the host '
367+ 'to {} on the machine'
368+ .format(localpath, target))
369 remotepath = os.path.join(target,
370 os.path.basename(localpath))
371 sftp_client.put(localpath, remotepath)
372@@ -155,9 +156,9 @@
373 try:
374 for remotepath in files:
375 localpath = get_localpath(remotepath)
376- self.logger.info('Downloading ' + remotepath
377- + ' from the machine to ' + target
378- + ' on the host')
379+ self.ssh_logger.info('Downloading {} from the machine '
380+ 'to {} on the host'
381+ .format(remotepath, target))
382 sftp_client.get(remotepath, localpath)
383 finally:
384 sftp_client.close()
385@@ -179,9 +180,9 @@
386 for myfile in files:
387 newtarget = os.path.join(target, os.path.basename(myfile))
388 if S_ISDIR(sftp_client.stat(myfile).st_mode):
389- self.logger.debug(myfile + ' is a directory, recursing')
390+ self.ssh_logger.debug(myfile + ' is a directory, recursing')
391 if not os.path.isdir(newtarget):
392- self.logger.debug('Attempting to create ' + newtarget)
393+ self.ssh_logger.debug('Attempting to create ' + newtarget)
394 os.makedirs(newtarget)
395 myfiles = [os.path.join(myfile, x)
396 for x in sftp_client.listdir(myfile)]
397@@ -194,16 +195,16 @@
398 Clean up known hosts for machine.
399 """
400 # TODO: evaluate value of known_hosts with paramiko
401- self.logger.info('Removing machine addresses '
402- 'from ssh known_hosts file')
403+ self.ssh_logger.info('Removing machine addresses '
404+ 'from ssh known_hosts file')
405 addresses = [self.name]
406 try:
407 addresses.append(socket.gethostbyname(self.name))
408 except socket.gaierror as err:
409 if err.errno in [-2, -5]:
410- self.logger.debug(self.name
411- + ' is not resolvable, '
412- + 'so not removing from known_hosts')
413+ self.ssh_logger.debug('{} is not resolvable, '
414+ 'so not removing from known_hosts'
415+ .format(self.name))
416 else:
417 raise err
418
419@@ -227,10 +228,10 @@
420 Return a retryable exception if it is not.
421 Intended for use with retry.
422 """
423- self.logger.info('Sleeping {timeout} seconds'
424- .format(timeout=timeout))
425+ self.ssh_logger.info('Sleeping {timeout} seconds'
426+ .format(timeout=timeout))
427 time.sleep(timeout)
428- self.logger.info('Checking for ssh availability')
429+ self.ssh_logger.info('Checking for ssh availability')
430 try:
431 self.ssh_client.connect(self.name,
432 username=config.user,
433@@ -246,7 +247,7 @@
434 if timeout is None:
435 timeout = self.boottimeout
436 if logmethod is None:
437- logmethod = self.logger.debug
438+ logmethod = self.ssh_logger.debug
439 utah.timeout.timeout(timeout, retry, self.sshcheck, checktimeout,
440 logmethod=logmethod)
441
442
443=== modified file 'utah/provisioning/vm/vm.py'
444--- utah/provisioning/vm/vm.py 2012-12-17 19:55:15 +0000
445+++ utah/provisioning/vm/vm.py 2013-01-23 12:29:24 +0000
446@@ -589,7 +589,7 @@
447 Initialize simple database.
448 """
449 super(TinySQLiteInventory, self).__init__(*args, **kw)
450- self.connection.execute(
451+ self.execute(
452 'CREATE TABLE IF NOT EXISTS '
453 'machines(machineid INTEGER PRIMARY KEY, state TEXT)')
454
455@@ -599,8 +599,8 @@
456 machineid along with all other arguments to that class's constructor,
457 returning the resulting object.
458 """
459- cursor = self.connection.cursor()
460- cursor.execute("INSERT INTO machines (state) VALUES ('provisioned')")
461+ cursor = self.execute("INSERT INTO machines (state) "
462+ "VALUES ('provisioned')")
463 machineid = cursor.lastrowid
464 return machinetype(machineid=machineid, *args, **kw)
465
466@@ -608,7 +608,7 @@
467 """
468 Updates the database to indicate the machine is available.
469 """
470- if self.connection.execute(
471+ if self.execute(
472 "UPDATE machines SET state='available' WHERE machineid=?",
473 [machineid]):
474 return True
475@@ -620,7 +620,7 @@
476 Updates the database to indicate the machine is destroyed, but does not
477 destroy the machine.
478 """
479- if self.connection.execute(
480+ if self.execute(
481 "UPDATE machines SET state='destroyed' ""WHERE machineid=?",
482 [machineid]):
483 return True
484
485=== modified file 'utah/run.py'
486--- utah/run.py 2013-01-16 11:05:00 +0000
487+++ utah/run.py 2013-01-23 12:29:24 +0000
488@@ -134,7 +134,7 @@
489 utah_command = 'utah {} {}'.format(extraopts, options)
490 exitstatus, _stdout, _stderr = machine.run(utah_command, root=True)
491 except UTAHException as error:
492- machine.logger.error('Failed to run test: ' + str(error))
493+ logging.error('Failed to run test: ' + str(error))
494 exitstatus = 1
495 else:
496 _write(runlist, report_remote_path)
497@@ -164,17 +164,12 @@
498 report_local_path = os.path.join(config.logpath, log_filename)
499 report_local_path = os.path.normpath(report_local_path)
500 machine.downloadfiles(report_remote_path, report_local_path)
501- machine.logger.info('Test log copied to ' + report_local_path)
502+ logging.info('Test log copied to ' + report_local_path)
503 locallogs.append(report_local_path)
504 if args.dumplogs:
505 with open(report_local_path, 'r') as f:
506 print(f.read())
507
508- # Since we're running from a script, output INFO
509- for logger in [machine.logger] + machine.logger.handlers:
510- if logger.level > logging.INFO:
511- logger.setLevel(logging.INFO)
512-
513 # Write the machine name to standard out for log gathering
514 print('Running on machine: ' + machine.name)
515
516@@ -196,14 +191,13 @@
517 try:
518 locallogs += getfiles(args, machine)
519 except Exception as err:
520- machine.logger.warning('Failed to download files: ' + str(err))
521+ logging.warning('Failed to download files: ' + str(err))
522
523 if args.outputpreseed or config.outputpreseed:
524 if args.outputpreseed:
525- machine.logger.debug('Capturing preseed due to '
526- 'command line option')
527+ logging.debug('Capturing preseed due to command line option')
528 elif config.outputpreseed:
529- machine.logger.debug('Capturing preseed due to config option')
530+ logging.debug('Capturing preseed due to config option')
531 preseedfile = os.path.join(config.logpath,
532 machine.name + '-preseed.cfg')
533 shutil.copyfile(machine.finalpreseed, preseedfile)
534@@ -230,3 +224,66 @@
535 localfiles.append(myfile)
536
537 return localfiles
538+
539+
540+def configure_logging(debug=False):
541+ """Configure logging.
542+
543+ Configure root logger using three different handlers with a different log
544+ level:
545+ - Console: Set according to `consoleloglevel` from configuration
546+ - Log file: Set according to `fileloglevel` from configuration
547+ - Debug log file: Always set to debug
548+
549+ Other loggers are expected to propagate their messages up to the logging
550+ hierarchy.
551+
552+ """
553+ logger = logging.getLogger()
554+ # configure_logging is being called in different scripts that might call
555+ # other scripts. This check avoids adding multiple handlers when logging as
556+ # already been configured.
557+ if logger.handlers:
558+ logger.debug('Logging already configured. '
559+ 'Skipping logging configuration.')
560+ return
561+
562+ logger.setLevel(logging.NOTSET)
563+
564+ formatter = logging.Formatter('%(asctime)s %(name)s '
565+ '%(levelname)s: %(message)s')
566+
567+ # Console logging
568+ console_handler = logging.StreamHandler()
569+ console_handler.setFormatter(formatter)
570+ log_level = logging.DEBUG if debug else config.consoleloglevel
571+ console_handler.setLevel(log_level)
572+ logger.addHandler(console_handler)
573+
574+ # File logging
575+ file_handler = logging.handlers.WatchedFileHandler(config.logfile)
576+ file_handler.setFormatter(formatter)
577+ file_handler.setLevel(config.fileloglevel)
578+ logger.addHandler(file_handler)
579+
580+ # Debug file logging
581+ if config.debuglog is not None:
582+ debug_file_handler = \
583+ logging.handlers.WatchedFileHandler(config.debuglog)
584+ debug_file_handler.setFormatter(formatter)
585+ debug_file_handler.setLevel(logging.DEBUG)
586+ logger.addHandler(debug_file_handler)
587+
588+ # Configure ssh logging in a separate file
589+ ssh_file_handler = logging.handlers.WatchedFileHandler(config.ssh_logfile)
590+ ssh_file_handler.setFormatter(formatter)
591+ ssh_file_handler.setLevel(logging.DEBUG)
592+
593+ # ssh logs available in both ssh log file and other files
594+ logger = logging.getLogger('ssh')
595+ logger.addHandler(ssh_file_handler)
596+
597+ # paramiko logs only available in ssh log file
598+ logger = logging.getLogger('paramiko')
599+ logger.propagate = False
600+ logger.addHandler(ssh_file_handler)
