Merge lp:~javier.collado/utah/bug1095669-2 into lp:utah
Status: Merged
Approved by: Max Brustkern
Approved revision: 814
Merged at revision: 803
Proposed branch: lp:~javier.collado/utah/bug1095669-2
Merge into: lp:utah
Diff against target: 600 lines (+161/-79) (has conflicts), 12 files modified:
- examples/run_install_test.py (+3/-0)
- examples/run_test_bamboo_feeder.py (+5/-1)
- examples/run_test_cobbler.py (+5/-1)
- examples/run_test_vm.py (+3/-1)
- examples/run_utah_tests.py (+3/-0)
- utah/config.py (+5/-0)
- utah/provisioning/baremetal/inventory.py (+4/-5)
- utah/provisioning/inventory/sqlite.py (+32/-0)
- utah/provisioning/provisioning.py (+1/-29)
- utah/provisioning/ssh.py (+27/-26)
- utah/provisioning/vm/vm.py (+5/-5)
- utah/run.py (+68/-11)
Text conflict in utah/provisioning/inventory/sqlite.py
To merge this branch: bzr merge lp:~javier.collado/utah/bug1095669-2
Related bugs:
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Max Brustkern (community) | Approve | | |

Review via email:
Commit message
Description of the change
This branch adds a wrapper method to log all the SQL commands executed during
the provisioning step.
In addition to this, since the inventory is created before the machine object,
there are a lot of changes regarding logging:
- the root logger object is configured in all run_* scripts
- the machine logger propagates messages and doesn't contain any handlers itself
- logging messages outside the machine object now use the root logger
- the log format has been updated to include the Python module name
- paramiko logs are only written to the debug log file regardless of their level
The big change is that, since the root logger object is configured using pretty
much the same code that was used for the machine logger, it's now possible to
use logging anywhere, not only in the machine object. This should be useful for
adding logging messages that previously were not even printed because of the
lack of handlers for the root logger.
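The SQL logging wrapper described above can be sketched as follows (a minimal, self-contained illustration; the class name and setup here are assumptions for the sketch, not the branch's exact code):

```python
import logging
import sqlite3


class LoggingSQLiteInventory(object):
    """Minimal sketch of an inventory whose SQL statements are all logged."""

    def __init__(self, db=':memory:'):
        self._connection = sqlite3.connect(db)
        self._connection.isolation_level = None
        self._connection.row_factory = sqlite3.Row

    def execute(self, sql, parameters=None):
        """Wrapper around connection.execute that logs every statement.

        Using the root logger means the messages end up in whatever
        handlers the run_* scripts configured, so the statements are
        available when troubleshooting provisioning problems.
        """
        if parameters is None:
            parameters = []
        logging.debug('Executing SQL statement: {}, {}'.format(sql, parameters))
        return self._connection.execute(sql, parameters)


inventory = LoggingSQLiteInventory()
inventory.execute('CREATE TABLE machines(machineid INTEGER PRIMARY KEY, state TEXT)')
cursor = inventory.execute("INSERT INTO machines (state) VALUES ('provisioned')")
```

Callers then go through `execute` instead of touching the connection directly, so every statement is captured in the debug log.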
- 810. By Javier Collado

  Added separate file to log SSH-related messages.

  The new file receives the whole set of SSH logs, including the ones from
  paramiko. Note that while the paramiko logger doesn't propagate, the ssh
  one does, so the SSH-related messages that were written to the log files
  before this change will still be written.

- 811. By Javier Collado

  Removed module name from logs.

- 812. By Javier Collado

  Fixed: activecheck shouldn't use the ssh logger.

- 813. By Javier Collado

  Set console log level back to WARNING.

- 814. By Javier Collado

  Set console log level to DEBUG if the -d/--debug option is passed.
Javier Collado (javier.collado) wrote:
Max Brustkern (nuclearbob) wrote:
When -d was originally implemented, we didn't have a separate debug log file. I think enabling or disabling debug affects some code paths for cleanup routines as well. I think removing that option in favor of newer ones would be good. We can have explicit options for cleaning things up or not, and leave the log level stuff to the configuration files. I think that would be worth doing as a separate merge, since it'll affect some cleanup routines and need more testing.
Max Brustkern (nuclearbob) wrote:
Lines 214, 219, and 247 of the merge look like leftover bzr merge lines. Those are all in utah/provisioni
Other than that, it looks like it should improve things a lot. If we do decide to start using rotating file handlers instead of external log rotation, this will make that a lot easier as well.
Max Brustkern (nuclearbob) wrote:
I'm not sure if I should have marked approve or not. I feel like the actual changes are good, and I'm confident you'll fix the typos, but I guess technically I wouldn't merge it in the current state. I think anyone who would merge it will be reading enough to know to fix those things and retest.
Javier Collado (javier.collado) wrote:
I think it's ok to mark the merge request as approved for now because we don't
have any auto-merging mechanism in place.
However, once tarmac is finally set up, an attempt to merge the code will be
made in such a situation, so we should avoid approving until the changes are
completely fine. Anyway, in this particular case, the merge would fail because
of the conflicts that you saw in the diff.
Javier Collado (javier.collado) wrote:
The merge conflict is because both lp:bug1095669 and lp:bug1095669-2 were
branched from the same dev branch revision and the latter is already merged.
Anyway, that's not a problem; I'll fix the conflict in the merge itself.
Preview Diff
1 | === modified file 'examples/run_install_test.py' | |||
2 | --- examples/run_install_test.py 2012-12-19 15:41:25 +0000 | |||
3 | +++ examples/run_install_test.py 2013-01-23 12:29:24 +0000 | |||
4 | @@ -27,6 +27,7 @@ | |||
5 | 27 | file_arguments, | 27 | file_arguments, |
6 | 28 | run_tests, | 28 | run_tests, |
7 | 29 | virtual_arguments, | 29 | virtual_arguments, |
8 | 30 | configure_logging, | ||
9 | 30 | ) | 31 | ) |
10 | 31 | 32 | ||
11 | 32 | 33 | ||
12 | @@ -60,6 +61,8 @@ | |||
13 | 60 | exitstatus = 0 | 61 | exitstatus = 0 |
14 | 61 | machine = None | 62 | machine = None |
15 | 62 | 63 | ||
16 | 64 | configure_logging(args.debug) | ||
17 | 65 | |||
18 | 63 | try: | 66 | try: |
19 | 64 | inventory = TinySQLiteInventory() | 67 | inventory = TinySQLiteInventory() |
20 | 65 | machine = inventory.request( | 68 | machine = inventory.request( |
21 | 66 | 69 | ||
22 | === modified file 'examples/run_test_bamboo_feeder.py' | |||
23 | --- examples/run_test_bamboo_feeder.py 2012-12-19 15:41:25 +0000 | |||
24 | +++ examples/run_test_bamboo_feeder.py 2013-01-23 12:29:24 +0000 | |||
25 | @@ -18,6 +18,7 @@ | |||
26 | 18 | import argparse | 18 | import argparse |
27 | 19 | import os | 19 | import os |
28 | 20 | import sys | 20 | import sys |
29 | 21 | import logging | ||
30 | 21 | 22 | ||
31 | 22 | from utah.exceptions import UTAHException | 23 | from utah.exceptions import UTAHException |
32 | 23 | from utah.group import check_user_group, print_group_error_message | 24 | from utah.group import check_user_group, print_group_error_message |
33 | @@ -29,6 +30,7 @@ | |||
34 | 29 | custom_arguments, | 30 | custom_arguments, |
35 | 30 | name_argument, | 31 | name_argument, |
36 | 31 | run_tests, | 32 | run_tests, |
37 | 33 | configure_logging, | ||
38 | 32 | ) | 34 | ) |
39 | 33 | 35 | ||
40 | 34 | 36 | ||
41 | @@ -61,6 +63,8 @@ | |||
42 | 61 | exitstatus = 0 | 63 | exitstatus = 0 |
43 | 62 | machine = None | 64 | machine = None |
44 | 63 | 65 | ||
45 | 66 | configure_logging(args.debug) | ||
46 | 67 | |||
47 | 64 | try: | 68 | try: |
48 | 65 | inventory = ManualBaremetalSQLiteInventory( | 69 | inventory = ManualBaremetalSQLiteInventory( |
49 | 66 | db=os.path.join('~', '.utah-bamboofeeder-inventory'), | 70 | db=os.path.join('~', '.utah-bamboofeeder-inventory'), |
50 | @@ -79,7 +83,7 @@ | |||
51 | 79 | except UTAHException as error: | 83 | except UTAHException as error: |
52 | 80 | mesg = 'Exception: ' + str(error) | 84 | mesg = 'Exception: ' + str(error) |
53 | 81 | try: | 85 | try: |
55 | 82 | machine.logger.error(mesg) | 86 | logging.error(mesg) |
56 | 83 | except (AttributeError, NameError): | 87 | except (AttributeError, NameError): |
57 | 84 | sys.stderr.write(mesg) | 88 | sys.stderr.write(mesg) |
58 | 85 | exitstatus = 2 | 89 | exitstatus = 2 |
59 | 86 | 90 | ||
60 | === modified file 'examples/run_test_cobbler.py' | |||
61 | --- examples/run_test_cobbler.py 2012-12-19 15:41:25 +0000 | |||
62 | +++ examples/run_test_cobbler.py 2013-01-23 12:29:24 +0000 | |||
63 | @@ -17,6 +17,7 @@ | |||
64 | 17 | 17 | ||
65 | 18 | import argparse | 18 | import argparse |
66 | 19 | import sys | 19 | import sys |
67 | 20 | import logging | ||
68 | 20 | 21 | ||
69 | 21 | from utah.exceptions import UTAHException | 22 | from utah.exceptions import UTAHException |
70 | 22 | from utah.group import check_user_group, print_group_error_message | 23 | from utah.group import check_user_group, print_group_error_message |
71 | @@ -28,6 +29,7 @@ | |||
72 | 28 | custom_arguments, | 29 | custom_arguments, |
73 | 29 | name_argument, | 30 | name_argument, |
74 | 30 | run_tests, | 31 | run_tests, |
75 | 32 | configure_logging, | ||
76 | 31 | ) | 33 | ) |
77 | 32 | 34 | ||
78 | 33 | 35 | ||
79 | @@ -60,6 +62,8 @@ | |||
80 | 60 | exitstatus = 0 | 62 | exitstatus = 0 |
81 | 61 | machine = None | 63 | machine = None |
82 | 62 | 64 | ||
83 | 65 | configure_logging(args.debug) | ||
84 | 66 | |||
85 | 63 | try: | 67 | try: |
86 | 64 | inventory = ManualBaremetalSQLiteInventory() | 68 | inventory = ManualBaremetalSQLiteInventory() |
87 | 65 | kw = {} | 69 | kw = {} |
88 | @@ -84,7 +88,7 @@ | |||
89 | 84 | except UTAHException as error: | 88 | except UTAHException as error: |
90 | 85 | mesg = 'Exception: ' + str(error) | 89 | mesg = 'Exception: ' + str(error) |
91 | 86 | try: | 90 | try: |
93 | 87 | machine.logger.error(mesg) | 91 | logging.error(mesg) |
94 | 88 | except (AttributeError, NameError): | 92 | except (AttributeError, NameError): |
95 | 89 | sys.stderr.write(mesg) | 93 | sys.stderr.write(mesg) |
96 | 90 | exitstatus = 2 | 94 | exitstatus = 2 |
97 | 91 | 95 | ||
98 | === modified file 'examples/run_test_vm.py' | |||
99 | --- examples/run_test_vm.py 2012-12-19 15:41:25 +0000 | |||
100 | +++ examples/run_test_vm.py 2013-01-23 12:29:24 +0000 | |||
101 | @@ -21,7 +21,7 @@ | |||
102 | 21 | from utah.exceptions import UTAHException | 21 | from utah.exceptions import UTAHException |
103 | 22 | from utah.group import check_user_group, print_group_error_message | 22 | from utah.group import check_user_group, print_group_error_message |
104 | 23 | from utah.provisioning.vm.vm import TinySQLiteInventory | 23 | from utah.provisioning.vm.vm import TinySQLiteInventory |
106 | 24 | from utah.run import common_arguments, run_tests | 24 | from utah.run import common_arguments, run_tests, configure_logging |
107 | 25 | 25 | ||
108 | 26 | 26 | ||
109 | 27 | def get_parser(): | 27 | def get_parser(): |
110 | @@ -50,6 +50,8 @@ | |||
111 | 50 | exitstatus = 0 | 50 | exitstatus = 0 |
112 | 51 | machine = None | 51 | machine = None |
113 | 52 | 52 | ||
114 | 53 | configure_logging(args.debug) | ||
115 | 54 | |||
116 | 53 | try: | 55 | try: |
117 | 54 | inventory = TinySQLiteInventory() | 56 | inventory = TinySQLiteInventory() |
118 | 55 | kw = {} | 57 | kw = {} |
119 | 56 | 58 | ||
120 | === modified file 'examples/run_utah_tests.py' | |||
121 | --- examples/run_utah_tests.py 2012-12-19 15:41:25 +0000 | |||
122 | +++ examples/run_utah_tests.py 2013-01-23 12:29:24 +0000 | |||
123 | @@ -26,6 +26,7 @@ | |||
124 | 26 | file_arguments, | 26 | file_arguments, |
125 | 27 | name_argument, | 27 | name_argument, |
126 | 28 | virtual_arguments, | 28 | virtual_arguments, |
127 | 29 | configure_logging | ||
128 | 29 | ) | 30 | ) |
129 | 30 | from utah.timeout import timeout, UTAHTimeout | 31 | from utah.timeout import timeout, UTAHTimeout |
130 | 31 | from run_install_test import run_install_test | 32 | from run_install_test import run_install_test |
131 | @@ -70,6 +71,8 @@ | |||
132 | 70 | else: | 71 | else: |
133 | 71 | machinetype = args.machinetype | 72 | machinetype = args.machinetype |
134 | 72 | 73 | ||
135 | 74 | configure_logging(args.debug) | ||
136 | 75 | |||
137 | 73 | # Default is now CustomVM | 76 | # Default is now CustomVM |
138 | 74 | function = run_install_test | 77 | function = run_install_test |
139 | 75 | 78 | ||
140 | 76 | 79 | ||
141 | === modified file 'utah/config.py' | |||
142 | --- utah/config.py 2013-01-22 10:35:02 +0000 | |||
143 | +++ utah/config.py 2013-01-23 12:29:24 +0000 | |||
144 | @@ -171,6 +171,11 @@ | |||
145 | 171 | DEFAULTS['logpath'], | 171 | DEFAULTS['logpath'], |
146 | 172 | LOCALDEFAULTS['hostname'] + '-debug.log') | 172 | LOCALDEFAULTS['hostname'] + '-debug.log') |
147 | 173 | 173 | ||
148 | 174 | # Log file used to log SSH related information | ||
149 | 175 | LOCALDEFAULTS['ssh_logfile'] = \ | ||
150 | 176 | os.path.join(DEFAULTS['logpath'], | ||
151 | 177 | '{}-ssh.log'.format(LOCALDEFAULTS['hostname'])) | ||
152 | 178 | |||
153 | 174 | # This depends on a binary that may not be installed. | 179 | # This depends on a binary that may not be installed. |
154 | 175 | # Also, the null thing may seem obvious, but I found it here: | 180 | # Also, the null thing may seem obvious, but I found it here: |
155 | 176 | # http://stackoverflow.com/questions/699325/ | 181 | # http://stackoverflow.com/questions/699325/ |
156 | 177 | 182 | ||
157 | === modified file 'utah/provisioning/baremetal/inventory.py' | |||
158 | --- utah/provisioning/baremetal/inventory.py 2012-12-13 09:07:58 +0000 | |||
159 | +++ utah/provisioning/baremetal/inventory.py 2013-01-23 12:29:24 +0000 | |||
160 | @@ -41,8 +41,7 @@ | |||
161 | 41 | super(ManualBaremetalSQLiteInventory, self).__init__(*args, db=db, | 41 | super(ManualBaremetalSQLiteInventory, self).__init__(*args, db=db, |
162 | 42 | lockfile=lockfile, | 42 | lockfile=lockfile, |
163 | 43 | **kw) | 43 | **kw) |
166 | 44 | machines_count = (self.connection | 44 | machines_count = (self.execute('SELECT COUNT(*) FROM machines') |
165 | 45 | .execute('SELECT COUNT(*) FROM machines') | ||
167 | 46 | .fetchall()[0][0]) | 45 | .fetchall()[0][0]) |
168 | 47 | if machines_count == 0: | 46 | if machines_count == 0: |
169 | 48 | raise UTAHProvisioningInventoryException('No machines in database') | 47 | raise UTAHProvisioningInventoryException('No machines in database') |
170 | @@ -54,7 +53,7 @@ | |||
171 | 54 | if name is not None: | 53 | if name is not None: |
172 | 55 | query += ' WHERE name=?' | 54 | query += ' WHERE name=?' |
173 | 56 | queryvars.append(name) | 55 | queryvars.append(name) |
175 | 57 | result = self.connection.execute(query, queryvars).fetchall() | 56 | result = self.execute(query, queryvars).fetchall() |
176 | 58 | if result is None: | 57 | if result is None: |
177 | 59 | raise UTAHProvisioningInventoryException( | 58 | raise UTAHProvisioningInventoryException( |
178 | 60 | 'No machines meet criteria') | 59 | 'No machines meet criteria') |
179 | @@ -84,7 +83,7 @@ | |||
180 | 84 | name = machineinfo.pop('name') | 83 | name = machineinfo.pop('name') |
181 | 85 | state = machineinfo.pop('state') | 84 | state = machineinfo.pop('state') |
182 | 86 | machineinfo.pop('pid') | 85 | machineinfo.pop('pid') |
184 | 87 | update = self.connection.execute( | 86 | update = self.execute( |
185 | 88 | "UPDATE machines SET pid=?, state='provisioned' WHERE machineid=?" | 87 | "UPDATE machines SET pid=?, state='provisioned' WHERE machineid=?" |
186 | 89 | "AND state=?", | 88 | "AND state=?", |
187 | 90 | [os.getpid(), machineid, state]).rowcount | 89 | [os.getpid(), machineid, state]).rowcount |
188 | @@ -115,7 +114,7 @@ | |||
189 | 115 | 'name required to release a machine') | 114 | 'name required to release a machine') |
190 | 116 | query = "UPDATE machines SET state='available' WHERE name=?" | 115 | query = "UPDATE machines SET state='available' WHERE name=?" |
191 | 117 | queryvars = [name] | 116 | queryvars = [name] |
193 | 118 | update = self.connection.execute(query, queryvars).rowcount | 117 | update = self.execute(query, queryvars).rowcount |
194 | 119 | if update == 1: | 118 | if update == 1: |
195 | 120 | if machine is not None: | 119 | if machine is not None: |
196 | 121 | if machine in self.machines: | 120 | if machine in self.machines: |
197 | 122 | 121 | ||
198 | === modified file 'utah/provisioning/inventory/sqlite.py' | |||
199 | --- utah/provisioning/inventory/sqlite.py 2013-01-22 10:35:02 +0000 | |||
200 | +++ utah/provisioning/inventory/sqlite.py 2013-01-23 12:29:24 +0000 | |||
201 | @@ -17,6 +17,8 @@ | |||
202 | 17 | 17 | ||
203 | 18 | import sqlite3 | 18 | import sqlite3 |
204 | 19 | import os | 19 | import os |
205 | 20 | import logging | ||
206 | 21 | |||
207 | 20 | from utah.provisioning.inventory.inventory import Inventory | 22 | from utah.provisioning.inventory.inventory import Inventory |
208 | 21 | from utah import config | 23 | from utah import config |
209 | 22 | 24 | ||
210 | @@ -30,10 +32,40 @@ | |||
211 | 30 | db = os.path.expanduser(db) | 32 | db = os.path.expanduser(db) |
212 | 31 | super(SQLiteInventory, self).__init__(*args, uniqueid=db, **kw) | 33 | super(SQLiteInventory, self).__init__(*args, uniqueid=db, **kw) |
213 | 32 | self.db = db | 34 | self.db = db |
214 | 35 | <<<<<<< TREE | ||
215 | 33 | self.connection = sqlite3.connect(self.db, | 36 | self.connection = sqlite3.connect(self.db, |
216 | 34 | config.sqlite3_db_connection_timeout) | 37 | config.sqlite3_db_connection_timeout) |
217 | 35 | self.connection.isolation_level = None | 38 | self.connection.isolation_level = None |
218 | 36 | self.connection.row_factory = sqlite3.Row | 39 | self.connection.row_factory = sqlite3.Row |
219 | 40 | ======= | ||
220 | 41 | self._connection = sqlite3.connect(self.db) | ||
221 | 42 | self._connection.isolation_level = None | ||
222 | 43 | self._connection.row_factory = sqlite3.Row | ||
223 | 44 | |||
224 | 45 | def execute(self, sql, parameters=None): | ||
225 | 46 | """Execute sql statement and return cursor | ||
226 | 47 | |||
227 | 48 | This method is expected to be used as a wrapper around all | ||
228 | 49 | `connection.execute` calls so that all the SQL statements are logged in | ||
229 | 50 | case this information is needed to troubleshoot problems. | ||
230 | 51 | |||
231 | 52 | :param sql: A sql statement to be executed | ||
232 | 53 | :type sql: str | ||
233 | 54 | :param parameters: | ||
234 | 55 | Parameters to use to replace the placeholder | ||
235 | 56 | values in the sql statement. | ||
236 | 57 | :type parameters: | ||
237 | 58 | list (question mark placeholders) | | ||
238 | 59 | dict (colon named placeholders) | ||
239 | 60 | """ | ||
240 | 61 | if parameters is None: | ||
241 | 62 | parameters = [] | ||
242 | 63 | |||
243 | 64 | logging.debug('Executing SQL statement: {}, {}' | ||
244 | 65 | .format(sql, parameters)) | ||
245 | 66 | cursor = self._connection.execute(sql, parameters) | ||
246 | 67 | return cursor | ||
247 | 68 | >>>>>>> MERGE-SOURCE | ||
248 | 37 | 69 | ||
249 | 38 | def delete(self): | 70 | def delete(self): |
250 | 39 | os.unlink(self.db) | 71 | os.unlink(self.db) |
251 | 40 | 72 | ||
252 | === modified file 'utah/provisioning/provisioning.py' | |||
253 | --- utah/provisioning/provisioning.py 2013-01-16 14:08:17 +0000 | |||
254 | +++ utah/provisioning/provisioning.py 2013-01-23 12:29:24 +0000 | |||
255 | @@ -265,33 +265,7 @@ | |||
256 | 265 | Initialize the logging for the machine. | 265 | Initialize the logging for the machine. |
257 | 266 | Subclasses can override or supplement to customize logging. | 266 | Subclasses can override or supplement to customize logging. |
258 | 267 | """ | 267 | """ |
286 | 268 | self.logger = logging.getLogger('utah.provisioning.' + str(self.name)) | 268 | self.logger = logging.getLogger(self.name) |
260 | 269 | self.logger.propagate = False | ||
261 | 270 | self.logger.setLevel(logging.INFO) | ||
262 | 271 | for handler in list(self.logger.handlers): | ||
263 | 272 | self.logger.removeHandler(handler) | ||
264 | 273 | del handler | ||
265 | 274 | self.consolehandler = logging.StreamHandler(stream=sys.stderr) | ||
266 | 275 | log_format = logging.Formatter('%(asctime)s ' + self.name | ||
267 | 276 | + ' %(levelname)s: %(message)s') | ||
268 | 277 | console_formatter = log_format | ||
269 | 278 | self.consolehandler.setFormatter(console_formatter) | ||
270 | 279 | self.consolehandler.setLevel(config.consoleloglevel) | ||
271 | 280 | self.logger.addHandler(self.consolehandler) | ||
272 | 281 | self.filehandler = logging.handlers.WatchedFileHandler(config.logfile) | ||
273 | 282 | file_formatter = log_format | ||
274 | 283 | self.filehandler.setFormatter(file_formatter) | ||
275 | 284 | self.filehandler.setLevel(config.fileloglevel) | ||
276 | 285 | self.logger.addHandler(self.filehandler) | ||
277 | 286 | if config.debuglog is not None: | ||
278 | 287 | self.logger.setLevel(logging.DEBUG) | ||
279 | 288 | self.debughandler = ( | ||
280 | 289 | logging.handlers.WatchedFileHandler(config.debuglog)) | ||
281 | 290 | self.debughandler.setFormatter(file_formatter) | ||
282 | 291 | self.debughandler.setLevel(logging.DEBUG) | ||
283 | 292 | self.logger.addHandler(self.debughandler) | ||
284 | 293 | if self.debug: | ||
285 | 294 | self.logger.setLevel(logging.DEBUG) | ||
287 | 295 | 269 | ||
288 | 296 | def _loggerunsetup(self): | 270 | def _loggerunsetup(self): |
289 | 297 | """ | 271 | """ |
290 | @@ -299,8 +273,6 @@ | |||
291 | 299 | Principally used when a machine changes name after init. | 273 | Principally used when a machine changes name after init. |
292 | 300 | """ | 274 | """ |
293 | 301 | del self.logger | 275 | del self.logger |
294 | 302 | del self.consolehandler | ||
295 | 303 | del self.filehandler | ||
296 | 304 | 276 | ||
297 | 305 | def provisioncheck(self): | 277 | def provisioncheck(self): |
298 | 306 | """ | 278 | """ |
299 | 307 | 279 | ||
300 | === modified file 'utah/provisioning/ssh.py' | |||
301 | --- utah/provisioning/ssh.py 2012-12-12 18:56:29 +0000 | |||
302 | +++ utah/provisioning/ssh.py 2013-01-23 12:29:24 +0000 | |||
303 | @@ -43,6 +43,7 @@ | |||
304 | 43 | ssh_client = paramiko.SSHClient() | 43 | ssh_client = paramiko.SSHClient() |
305 | 44 | ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) | 44 | ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) |
306 | 45 | self.ssh_client = ssh_client | 45 | self.ssh_client = ssh_client |
307 | 46 | self.ssh_logger = logging.getLogger('ssh') | ||
308 | 46 | 47 | ||
309 | 47 | def run(self, command, _quiet=None, root=False, timeout=None): | 48 | def run(self, command, _quiet=None, root=False, timeout=None): |
310 | 48 | """ | 49 | """ |
311 | @@ -64,15 +65,15 @@ | |||
312 | 64 | # With additions from here: | 65 | # With additions from here: |
313 | 65 | # http://od-eon.com/blogs/ | 66 | # http://od-eon.com/blogs/ |
314 | 66 | # stefan/automating-remote-commands-over-ssh-paramiko/ | 67 | # stefan/automating-remote-commands-over-ssh-paramiko/ |
316 | 67 | self.logger.debug('Connecting SSH') | 68 | self.ssh_logger.debug('Connecting SSH') |
317 | 68 | self.ssh_client.connect(self.name, | 69 | self.ssh_client.connect(self.name, |
318 | 69 | username=user, | 70 | username=user, |
319 | 70 | key_filename=config.sshprivatekey) | 71 | key_filename=config.sshprivatekey) |
320 | 71 | 72 | ||
322 | 72 | self.logger.debug('Opening SSH session') | 73 | self.ssh_logger.debug('Opening SSH session') |
323 | 73 | channel = self.ssh_client.get_transport().open_session() | 74 | channel = self.ssh_client.get_transport().open_session() |
324 | 74 | 75 | ||
326 | 75 | self.logger.info('Running command through SSH: ' + commandstring) | 76 | self.ssh_logger.info('Running command through SSH: ' + commandstring) |
327 | 76 | stdout = channel.makefile('rb') | 77 | stdout = channel.makefile('rb') |
328 | 77 | stderr = channel.makefile_stderr('rb') | 78 | stderr = channel.makefile_stderr('rb') |
329 | 78 | if timeout is None: | 79 | if timeout is None: |
330 | @@ -81,22 +82,22 @@ | |||
331 | 81 | utah.timeout.timeout(timeout, channel.exec_command, commandstring) | 82 | utah.timeout.timeout(timeout, channel.exec_command, commandstring) |
332 | 82 | retval = channel.recv_exit_status() | 83 | retval = channel.recv_exit_status() |
333 | 83 | 84 | ||
335 | 84 | self.logger.debug('Closing SSH connection') | 85 | self.ssh_logger.debug('Closing SSH connection') |
336 | 85 | self.ssh_client.close() | 86 | self.ssh_client.close() |
337 | 86 | 87 | ||
338 | 87 | log_level = logging.DEBUG if retval == 0 else logging.WARNING | 88 | log_level = logging.DEBUG if retval == 0 else logging.WARNING |
339 | 88 | log_message = 'Return code: {}'.format(retval) | 89 | log_message = 'Return code: {}'.format(retval) |
341 | 89 | self.logger.log(log_level, log_message) | 90 | self.ssh_logger.log(log_level, log_message) |
342 | 90 | 91 | ||
344 | 91 | self.logger.debug('Standard output follows:') | 92 | self.ssh_logger.debug('Standard output follows:') |
345 | 92 | stdout_lines = stdout.readlines() | 93 | stdout_lines = stdout.readlines() |
346 | 93 | for line in stdout_lines: | 94 | for line in stdout_lines: |
348 | 94 | self.logger.debug(line.strip()) | 95 | self.ssh_logger.debug(line.strip()) |
349 | 95 | 96 | ||
351 | 96 | self.logger.debug('Standard error follows:') | 97 | self.ssh_logger.debug('Standard error follows:') |
352 | 97 | stderr_lines = stderr.readlines() | 98 | stderr_lines = stderr.readlines() |
353 | 98 | for line in stderr_lines: | 99 | for line in stderr_lines: |
355 | 99 | self.logger.debug(line.strip()) | 100 | self.ssh_logger.debug(line.strip()) |
356 | 100 | 101 | ||
357 | 101 | return retval, ''.join(stdout_lines), ''.join(stderr_lines) | 102 | return retval, ''.join(stdout_lines), ''.join(stderr_lines) |
358 | 102 | 103 | ||
359 | @@ -116,9 +117,9 @@ | |||
360 | 116 | try: | 117 | try: |
361 | 117 | for localpath in files: | 118 | for localpath in files: |
362 | 118 | if os.path.isfile(localpath): | 119 | if os.path.isfile(localpath): |
366 | 119 | self.logger.info('Uploading ' + localpath | 120 | self.ssh_logger.info('Uploading {} from the host ' |
367 | 120 | + ' from the host to ' + target | 121 | 'to {} on the machine' |
368 | 121 | + ' on the machine') | 122 | .format(localpath, target)) |
369 | 122 | remotepath = os.path.join(target, | 123 | remotepath = os.path.join(target, |
370 | 123 | os.path.basename(localpath)) | 124 | os.path.basename(localpath)) |
371 | 124 | sftp_client.put(localpath, remotepath) | 125 | sftp_client.put(localpath, remotepath) |
372 | @@ -155,9 +156,9 @@ | |||
373 | 155 | try: | 156 | try: |
374 | 156 | for remotepath in files: | 157 | for remotepath in files: |
375 | 157 | localpath = get_localpath(remotepath) | 158 | localpath = get_localpath(remotepath) |
379 | 158 | self.logger.info('Downloading ' + remotepath | 159 | self.ssh_logger.info('Downloading {} from the machine ' |
380 | 159 | + ' from the machine to ' + target | 160 | 'to {} on the host' |
381 | 160 | + ' on the host') | 161 | .format(remotepath, target)) |
382 | 161 | sftp_client.get(remotepath, localpath) | 162 | sftp_client.get(remotepath, localpath) |
383 | 162 | finally: | 163 | finally: |
384 | 163 | sftp_client.close() | 164 | sftp_client.close() |
385 | @@ -179,9 +180,9 @@ | |||
386 | 179 | for myfile in files: | 180 | for myfile in files: |
387 | 180 | newtarget = os.path.join(target, os.path.basename(myfile)) | 181 | newtarget = os.path.join(target, os.path.basename(myfile)) |
388 | 181 | if S_ISDIR(sftp_client.stat(myfile).st_mode): | 182 | if S_ISDIR(sftp_client.stat(myfile).st_mode): |
390 | 182 | self.logger.debug(myfile + ' is a directory, recursing') | 183 | self.ssh_logger.debug(myfile + ' is a directory, recursing') |
391 | 183 | if not os.path.isdir(newtarget): | 184 | if not os.path.isdir(newtarget): |
393 | 184 | self.logger.debug('Attempting to create ' + newtarget) | 185 | self.ssh_logger.debug('Attempting to create ' + newtarget) |
394 | 185 | os.makedirs(newtarget) | 186 | os.makedirs(newtarget) |
395 | 186 | myfiles = [os.path.join(myfile, x) | 187 | myfiles = [os.path.join(myfile, x) |
396 | 187 | for x in sftp_client.listdir(myfile)] | 188 | for x in sftp_client.listdir(myfile)] |
397 | @@ -194,16 +195,16 @@ | |||
398 | 194 | Clean up known hosts for machine. | 195 | Clean up known hosts for machine. |
399 | 195 | """ | 196 | """ |
400 | 196 | # TODO: evaluate value of known_hosts with paramiko | 197 | # TODO: evaluate value of known_hosts with paramiko |
403 | 197 | self.logger.info('Removing machine addresses ' | 198 | self.ssh_logger.info('Removing machine addresses ' |
404 | 198 | 'from ssh known_hosts file') | 199 | 'from ssh known_hosts file') |
405 | 199 | addresses = [self.name] | 200 | addresses = [self.name] |
406 | 200 | try: | 201 | try: |
407 | 201 | addresses.append(socket.gethostbyname(self.name)) | 202 | addresses.append(socket.gethostbyname(self.name)) |
408 | 202 | except socket.gaierror as err: | 203 | except socket.gaierror as err: |
409 | 203 | if err.errno in [-2, -5]: | 204 | if err.errno in [-2, -5]: |
413 | 204 | self.logger.debug(self.name | 205 | self.ssh_logger.debug('{} is not resolvable, ' |
414 | 205 | + ' is not resolvable, ' | 206 | 'so not removing from known_hosts' |
415 | 206 | + 'so not removing from known_hosts') | 207 | .format(self.name)) |
416 | 207 | else: | 208 | else: |
417 | 208 | raise err | 209 | raise err |
418 | 209 | 210 | ||
419 | @@ -227,10 +228,10 @@ | |||
420 | 227 | Return a retryable exception if it is not. | 228 | Return a retryable exception if it is not. |
421 | 228 | Intended for use with retry. | 229 | Intended for use with retry. |
422 | 229 | """ | 230 | """ |
425 | 230 | self.logger.info('Sleeping {timeout} seconds' | 231 | self.ssh_logger.info('Sleeping {timeout} seconds' |
426 | 231 | .format(timeout=timeout)) | 232 | .format(timeout=timeout)) |
427 | 232 | time.sleep(timeout) | 233 | time.sleep(timeout) |
429 | 233 | self.logger.info('Checking for ssh availability') | 234 | self.ssh_logger.info('Checking for ssh availability') |
430 | 234 | try: | 235 | try: |
431 | 235 | self.ssh_client.connect(self.name, | 236 | self.ssh_client.connect(self.name, |
432 | 236 | username=config.user, | 237 | username=config.user, |
433 | @@ -246,7 +247,7 @@ | |||
434 | 246 | if timeout is None: | 247 | if timeout is None: |
435 | 247 | timeout = self.boottimeout | 248 | timeout = self.boottimeout |
436 | 248 | if logmethod is None: | 249 | if logmethod is None: |
438 | 249 | logmethod = self.logger.debug | 250 | logmethod = self.ssh_logger.debug |
439 | 250 | utah.timeout.timeout(timeout, retry, self.sshcheck, checktimeout, | 251 | utah.timeout.timeout(timeout, retry, self.sshcheck, checktimeout, |
440 | 251 | logmethod=logmethod) | 252 | logmethod=logmethod) |
441 | 252 | 253 | ||
442 | 253 | 254 | ||
=== modified file 'utah/provisioning/vm/vm.py'
--- utah/provisioning/vm/vm.py	2012-12-17 19:55:15 +0000
+++ utah/provisioning/vm/vm.py	2013-01-23 12:29:24 +0000
@@ -589,7 +589,7 @@
         Initialize simple database.
         """
         super(TinySQLiteInventory, self).__init__(*args, **kw)
-        self.connection.execute(
+        self.execute(
             'CREATE TABLE IF NOT EXISTS '
             'machines(machineid INTEGER PRIMARY KEY, state TEXT)')
 
@@ -599,8 +599,8 @@
         machineid along with all other arguments to that class's constructor,
         returning the resulting object.
         """
-        cursor = self.connection.cursor()
-        cursor.execute("INSERT INTO machines (state) VALUES ('provisioned')")
+        cursor = self.execute("INSERT INTO machines (state) "
+                              "VALUES ('provisioned')")
         machineid = cursor.lastrowid
         return machinetype(machineid=machineid, *args, **kw)
 
@@ -608,7 +608,7 @@
         """
         Updates the database to indicate the machine is available.
         """
-        if self.connection.execute(
+        if self.execute(
                 "UPDATE machines SET state='available' WHERE machineid=?",
                 [machineid]):
             return True
@@ -620,7 +620,7 @@
         Updates the database to indicate the machine is destroyed, but does not
         destroy the machine.
         """
-        if self.connection.execute(
+        if self.execute(
                 "UPDATE machines SET state='destroyed' ""WHERE machineid=?",
                 [machineid]):
             return True

=== modified file 'utah/run.py'
--- utah/run.py	2013-01-16 11:05:00 +0000
+++ utah/run.py	2013-01-23 12:29:24 +0000
@@ -134,7 +134,7 @@
         utah_command = 'utah {} {}'.format(extraopts, options)
         exitstatus, _stdout, _stderr = machine.run(utah_command, root=True)
     except UTAHException as error:
-        machine.logger.error('Failed to run test: ' + str(error))
+        logging.error('Failed to run test: ' + str(error))
         exitstatus = 1
     else:
         _write(runlist, report_remote_path)
@@ -164,17 +164,12 @@
     report_local_path = os.path.join(config.logpath, log_filename)
     report_local_path = os.path.normpath(report_local_path)
     machine.downloadfiles(report_remote_path, report_local_path)
-    machine.logger.info('Test log copied to ' + report_local_path)
+    logging.info('Test log copied to ' + report_local_path)
     locallogs.append(report_local_path)
     if args.dumplogs:
         with open(report_local_path, 'r') as f:
             print(f.read())
 
-    # Since we're running from a script, output INFO
-    for logger in [machine.logger] + machine.logger.handlers:
-        if logger.level > logging.INFO:
-            logger.setLevel(logging.INFO)
-
     # Write the machine name to standard out for log gathering
     print('Running on machine: ' + machine.name)
 
@@ -196,14 +191,13 @@
     try:
         locallogs += getfiles(args, machine)
     except Exception as err:
-        machine.logger.warning('Failed to download files: ' + str(err))
+        logging.warning('Failed to download files: ' + str(err))
 
     if args.outputpreseed or config.outputpreseed:
         if args.outputpreseed:
-            machine.logger.debug('Capturing preseed due to '
-                                 'command line option')
+            logging.debug('Capturing preseed due to command line option')
         elif config.outputpreseed:
-            machine.logger.debug('Capturing preseed due to config option')
+            logging.debug('Capturing preseed due to config option')
         preseedfile = os.path.join(config.logpath,
                                    machine.name + '-preseed.cfg')
         shutil.copyfile(machine.finalpreseed, preseedfile)
@@ -230,3 +224,66 @@
         localfiles.append(myfile)
 
     return localfiles
+
+
+def configure_logging(debug=False):
+    """Configure logging.
+
+    Configure root logger using three different handlers with a different log
+    level:
+    - Console: Set according to `consoleloglevel` from configuration
+    - Log file: Set according to `fileloglevel` from configuration
+    - Debug log file: Always set to debug
+
+    Other loggers are expected to propagate their messages up to the logging
+    hierarchy.
+
+    """
+    logger = logging.getLogger()
+    # configure_logging is being called in different scripts that might call
+    # other scripts. This check avoids adding multiple handlers when logging as
+    # already been configured.
+    if logger.handlers:
+        logger.debug('Logging already configured. '
+                     'Skipping logging configuration.')
+        return
+
+    logger.setLevel(logging.NOTSET)
+
+    formatter = logging.Formatter('%(asctime)s %(name)s '
+                                  '%(levelname)s: %(message)s')
+
+    # Console logging
+    console_handler = logging.StreamHandler()
+    console_handler.setFormatter(formatter)
+    log_level = logging.DEBUG if debug else config.consoleloglevel
+    console_handler.setLevel(log_level)
+    logger.addHandler(console_handler)
+
+    # File logging
+    file_handler = logging.handlers.WatchedFileHandler(config.logfile)
+    file_handler.setFormatter(formatter)
+    file_handler.setLevel(config.fileloglevel)
+    logger.addHandler(file_handler)
+
+    # Debug file logging
+    if config.debuglog is not None:
+        debug_file_handler = \
+            logging.handlers.WatchedFileHandler(config.debuglog)
+        debug_file_handler.setFormatter(formatter)
+        debug_file_handler.setLevel(logging.DEBUG)
+        logger.addHandler(debug_file_handler)
+
+    # Configure ssh logging in a separate file
+    ssh_file_handler = logging.handlers.WatchedFileHandler(config.ssh_logfile)
+    ssh_file_handler.setFormatter(formatter)
+    ssh_file_handler.setLevel(logging.DEBUG)
+
+    # ssh logs available in both ssh log file and other files
+    logger = logging.getLogger('ssh')
+    logger.addHandler(ssh_file_handler)
+
+    # paramiko logs only available in ssh log file
+    logger = logging.getLogger('paramiko')
+    logger.propagate = False
+    logger.addHandler(ssh_file_handler)
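The `self.execute` calls in the diff replace direct uses of `self.connection.execute` and `cursor.execute`; the wrapper method itself lives in utah/provisioning/inventory/sqlite.py, which appears in this proposal only as a text conflict. A minimal sketch of what such a logging wrapper could look like (class and logger names are assumptions, not the actual UTAH code):

```python
import logging
import sqlite3


class SQLiteInventory(object):
    """Hypothetical inventory whose execute() wrapper logs every SQL
    statement before delegating to the underlying sqlite3 connection."""

    def __init__(self, db_path=':memory:'):
        self.connection = sqlite3.connect(db_path)

    def execute(self, sql, parameters=()):
        # Log the statement and its parameters at debug level, then run it
        # exactly as self.connection.execute would have; the returned
        # cursor still exposes lastrowid, rowcount, etc.
        logging.getLogger(__name__).debug('SQL: %s %r', sql, parameters)
        return self.connection.execute(sql, parameters)


inventory = SQLiteInventory()
inventory.execute('CREATE TABLE IF NOT EXISTS '
                  'machines(machineid INTEGER PRIMARY KEY, state TEXT)')
cursor = inventory.execute("INSERT INTO machines (state) "
                           "VALUES ('provisioned')")
machineid = cursor.lastrowid  # 1 for the first inserted row
```

Because `sqlite3.Connection.execute` already returns a cursor, the wrapper can be a drop-in replacement for both the `connection.execute(...)` and `cursor = connection.cursor(); cursor.execute(...)` call patterns seen in the diff.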
I've added a few changes to the logging part that don't really change anything
with regard to SQL.
The biggest change is that the ssh log is now written to a separate file in
case detailed information is needed to troubleshoot a problem. In particular,
the logs from the paramiko library are also included in that file.
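The routing in `configure_logging` relies on `propagate = False`: the `paramiko` logger gets its own handler, so its records reach only the ssh log file and never the root logger's handlers. A toy demonstration of that effect (the list-based handler is invented for the example, standing in for the `WatchedFileHandler` instances):

```python
import logging


class ListHandler(logging.Handler):
    """Collects messages in a list instead of writing to a file."""

    def __init__(self):
        super(ListHandler, self).__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())


root_handler = ListHandler()   # stands in for the console/file handlers
ssh_handler = ListHandler()    # stands in for the ssh log file handler

root = logging.getLogger()
root.setLevel(logging.NOTSET)
root.addHandler(root_handler)

paramiko_logger = logging.getLogger('paramiko')
paramiko_logger.propagate = False      # keep paramiko out of root handlers
paramiko_logger.addHandler(ssh_handler)

paramiko_logger.warning('transport opened')
# The record lands in the ssh handler only; nothing propagates to the root.
```

The `ssh` logger, by contrast, keeps the default `propagate = True`, so its messages appear both in the ssh log file and in every root handler.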
Aside from this, I think that the -d/--debug flag in the command line is kind
of confusing. In this branch, the console and log file handlers use the levels
set in the configuration file (the handler for the debug file is always set
to debug). The -d/--debug flag just sets the console logging to debug, but not
the file log. I'm not sure if this is the intended behavior, but I don't think
setting the file log to debug is a great benefit if there's already a debug
file. Maybe the -d/--debug flag should be removed, since it's not very likely
that we'd troubleshoot something looking just at the console instead of
checking the debug file. Please let me know what you think.
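For reference, the scheme works because the root logger filters nothing (`NOTSET`), so each handler's own level decides what it records; a -d/--debug style flag can then make the console verbose without touching the file handlers. A minimal sketch (names invented for the example; `configured_level` stands in for `config.consoleloglevel`):

```python
import logging


def console_level(debug, configured_level=logging.INFO):
    # Mirrors the branch's "logging.DEBUG if debug else
    # config.consoleloglevel" selection for the console handler only.
    return logging.DEBUG if debug else configured_level


root = logging.getLogger()
root.setLevel(logging.NOTSET)  # let the handlers do the filtering

console = logging.StreamHandler()
console.setLevel(console_level(debug=True))   # -d/--debug given

file_like = logging.StreamHandler()           # stands in for a file handler
file_like.setLevel(logging.INFO)              # unaffected by the flag
```

With this split, a DEBUG record emitted anywhere in the hierarchy would be written by the console handler but filtered out by the file handler.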