Merge lp:~lool/launchpad-work-items-tracker/jira-support into lp:~linaro-automation/launchpad-work-items-tracker/linaro
Status: Merged
Merged at revision: 339
Proposed branch: lp:~lool/launchpad-work-items-tracker/jira-support
Merge into: lp:~linaro-automation/launchpad-work-items-tracker/linaro
Diff against target: 555 lines (+321/-47), 6 files modified: all-projects (+45/-20), collect (+38/-21), collect_jira (+229/-0), collect_roadmap (+5/-3), jira.py (+4/-2), utils.py (+0/-1)
To merge this branch: bzr merge lp:~lool/launchpad-work-items-tracker/jira-support
Related bugs: (none)
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Milo Casagrande (community) | Approve | |
Review via email: mp+112136@code.launchpad.net
Commit message
Description of the change
This adds support for fetching card data from JIRA instead of kanbantool.com + papyrs.com. It is only active if you set cards_source = 'jira' in the project's config, so the branch can be merged while remaining inactive. It is deployed in staging and works there, but with some limitations:
* the description is taken as plain text, but in cards it is a mixture of HTML and JIRA wiki syntax; in the current version of status.linaro.org the description is empty/broken, so this isn't a regression, but it looks ugly
* the "Roadmap card id" currently refers to e.g. CARD-12 instead of some human-readable text like TCWG2012-
* some features are still missing, such as setting the sponsor; these are relatively minor
Fixing the latter probably requires some DB changes; I'm not sure what rules we operate under for these, though, e.g. how does one deploy a schema update?
In general, I consider this ready for merging, but we can keep this branch open until the above items are also fixed.
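To make the toggle concrete, here is a minimal sketch of the cards_source dispatch described above, assuming the project config is loaded into a dict-like object (as report_tools.load_config appears to provide) with 'kanban' as the default so existing projects stay unaffected. The function name pick_collector is hypothetical; the real dispatch lives inline in all-projects (see the diff below). Written in Python 3 for illustration, though the tree targets Python 2.

```python
def pick_collector(cfg):
    """Return the collector script name for a project's cards_source.

    Hypothetical helper illustrating the dispatch; 'kanban' is the
    default, matching cfg.get('cards_source', 'kanban') in all-projects.
    """
    cards_source = cfg.get('cards_source', 'kanban')
    if cards_source == 'jira':
        return 'collect_jira'
    elif cards_source == 'kanban':
        return 'collect_roadmap'
    # all-projects logs unknown sources to stderr and skips the project
    raise ValueError('Unknown cards source %s' % cards_source)
```

With this shape, a project opts in by adding `cards_source = 'jira'` to its config and opts out by omitting the key entirely.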
- 340. By Loïc Minier: No need to remap priority names and component names.
- 341. By Loïc Minier: Split and sort system imports.
- 342. By Loïc Minier: Misc PEP8 fixes.
- 343. By Loïc Minier: Split and sort system imports.
- 344. By Loïc Minier: Fix a bunch of PEP8 issues.
- 345. By Loïc Minier: Drop useless imports.
Loïc Minier (lool) wrote:
I've now fixed the PEP8 issues you mention, and others; thanks! Note that collect_jira was copied from collect_roadmap (the kanbantool.com + papyrs.com code) and inherited its syntax problems; there isn't much repeated code left now, and I had already removed the useless imports, but the original syntax layout remained.
One problem with fixing these issues is that it makes the delta with the upstream project larger, but then I don't think we will ever merge back, so...
(I've not fixed all of the E501 "line too long" issues because there were too many of them.)
- 346. By Loïc Minier: Update sponsor information from JIRA.
- 347. By Loïc Minier: Set card Size from JIRA timetracking information if available.
- 348. By Loïc Minier: Wrap card sponsor in unicode().
- 349. By Loïc Minier: PEP8 line too long fixes.
- 350. By Loïc Minier: Acceptance criteria is in the description by design.
- 351. By Loïc Minier: Merge with Linaro branch of workitems tracker, which has some independent PEP8 fixes.
Loïc Minier (lool) wrote:
I've pushed some further changes to complete support for other fields. What's still missing is support for fetching the description, but I need to get BeautifulSoup deployed on status.linaro.org to get that done; the other changes can wait, and require DB changes.
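Since the description field mixes HTML and JIRA wiki syntax, the cleanup would boil down to stripping markup before storing the text. A hedged sketch of that idea using the stdlib HTMLParser instead of BeautifulSoup (which was not yet deployed); class and function names are illustrative, not from this branch, and it is written in Python 3 although the tree targets Python 2.

```python
from html.parser import HTMLParser


class TagStripper(HTMLParser):
    """Collect only text content, dropping any HTML tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


def strip_html(description):
    """Return the description with HTML tags removed (illustrative only;
    JIRA wiki markup such as *bold* would need separate handling)."""
    stripper = TagStripper()
    stripper.feed(description)
    stripper.close()
    return ''.join(stripper.chunks)


print(strip_html('<p>Parse <b>JIRA</b> cards</p>'))
```

BeautifulSoup would make this more robust against malformed markup, which is presumably why it was the deployment target.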
Preview Diff
1 | === modified file 'all-projects' |
2 | --- all-projects 2012-01-30 10:03:24 +0000 |
3 | +++ all-projects 2012-07-10 16:26:24 +0000 |
4 | @@ -13,12 +13,19 @@ |
5 | import subprocess |
6 | import sys |
7 | |
8 | +import report_tools |
9 | + |
10 | |
11 | def collect(source_dir, db_file, config_file, extra_args): |
12 | return run_collect_script(source_dir, db_file, config_file, extra_args, |
13 | "collect") |
14 | |
15 | |
16 | +def collect_jira(source_dir, db_file, config_file, extra_args): |
17 | + return run_collect_script(source_dir, db_file, config_file, extra_args, |
18 | + "collect_jira") |
19 | + |
20 | + |
21 | def collect_roadmap(source_dir, db_file, config_file, extra_args): |
22 | return run_collect_script(source_dir, db_file, config_file, extra_args, |
23 | "collect_roadmap") |
24 | @@ -67,7 +74,8 @@ |
25 | |
26 | |
27 | def main(): |
28 | - parser = optparse.OptionParser(usage="%prog <database dir> <www root dir> [www root url]") |
29 | + parser = optparse.OptionParser( |
30 | + usage="%prog <database dir> <www root dir> [www root url]") |
31 | parser.add_option("--config-dir", dest="config_dir", default="config") |
32 | parser.add_option("--roadmap-config-file", dest="roadmap_config_file", |
33 | default="roadmap-config") |
34 | @@ -103,6 +111,11 @@ |
35 | os.path.join(source_dir, opts.config_dir, "*%s" % valid_config_suffix)) |
36 | |
37 | for config_file in filenames: |
38 | + # read roadmap config to find where to get cards from |
39 | + cfg = report_tools.load_config(config_file) |
40 | + # default to kanbantool |
41 | + cards_source = cfg.get('cards_source', 'kanban') |
42 | + |
43 | project_name = os.path.basename(config_file)[:-len(valid_config_suffix)] |
44 | project_output_dir = os.path.join(output_dir, project_name) |
45 | db_file = os.path.join(db_dir, "%s.db" % project_name) |
46 | @@ -127,26 +140,38 @@ |
47 | sys.stderr.write("collect failed for %s" % project_name) |
48 | continue |
49 | |
50 | - extra_collect_roadmap_args = [] |
51 | - extra_collect_roadmap_args.extend(["--board", '10721']) |
52 | - if opts.kanban_token_file is not None: |
53 | - with open(opts.kanban_token_file) as token_file: |
54 | - token = token_file.read() |
55 | - extra_collect_roadmap_args.extend(["--kanbantoken", token]) |
56 | - else: |
57 | - sys.stderr.write("No Kanbantool API token given to " |
58 | - "collect_roadmap for %s" % project_name) |
59 | - if opts.papyrs_token_file is not None: |
60 | - with open(opts.papyrs_token_file) as token_file: |
61 | - token = token_file.read() |
62 | - extra_collect_roadmap_args.extend(["--papyrstoken", token]) |
63 | - else: |
64 | - sys.stderr.write("No Papyrs API token given to " |
65 | - "collect_roadmap for %s" % project_name) |
66 | + if cards_source == 'jira': |
67 | + extra_collect_jira_args = [] |
68 | + if not collect_jira(source_dir, db_file, opts.roadmap_config_file, |
69 | + extra_collect_jira_args): |
70 | + sys.stderr.write("collect_jira failed for %s" % project_name) |
71 | + continue |
72 | + elif cards_source == 'kanban': |
73 | + extra_collect_roadmap_args = [] |
74 | + extra_collect_roadmap_args.extend(["--board", '10721']) |
75 | + if opts.kanban_token_file is not None: |
76 | + with open(opts.kanban_token_file) as token_file: |
77 | + token = token_file.read() |
78 | + extra_collect_roadmap_args.extend(["--kanbantoken", token]) |
79 | + else: |
80 | + sys.stderr.write("No Kanbantool API token given to " |
81 | + "collect_roadmap for %s" % project_name) |
82 | + if opts.papyrs_token_file is not None: |
83 | + with open(opts.papyrs_token_file) as token_file: |
84 | + token = token_file.read() |
85 | + extra_collect_roadmap_args.extend(["--papyrstoken", token]) |
86 | + else: |
87 | + sys.stderr.write("No Papyrs API token given to " |
88 | + "collect_roadmap for %s" % project_name) |
89 | |
90 | - if not collect_roadmap(source_dir, db_file, opts.roadmap_config_file, |
91 | - extra_collect_roadmap_args): |
92 | - sys.stderr.write("collect_roadmap failed for %s" % project_name) |
93 | + if not collect_roadmap(source_dir, db_file, |
94 | + opts.roadmap_config_file, |
95 | + extra_collect_roadmap_args): |
96 | + sys.stderr.write("collect_roadmap failed for %s" % |
97 | + project_name) |
98 | + else: |
99 | + sys.stderr.write("Unknown cards source %s" % cards_source) |
100 | + continue |
101 | |
102 | publish_new_db(project_name, project_output_dir, db_file) |
103 | generate_reports(project_output_dir, config_file, db_file, |
104 | |
105 | === modified file 'collect' |
106 | --- collect 2012-06-29 14:04:58 +0000 |
107 | +++ collect 2012-07-10 16:26:24 +0000 |
108 | @@ -6,15 +6,16 @@ |
109 | # Copyright (C) 2010, 2011 Canonical Ltd. |
110 | # License: GPL-3 |
111 | |
112 | -import urllib |
113 | +import logging |
114 | +import optparse |
115 | +import os |
116 | +import pwd |
117 | import re |
118 | +import smtplib |
119 | import sys |
120 | -import optparse |
121 | -import smtplib |
122 | -import pwd |
123 | -import os |
124 | +import urllib |
125 | import urlparse |
126 | -import logging |
127 | + |
128 | from email.mime.text import MIMEText |
129 | |
130 | from launchpadlib.launchpad import Launchpad, EDGE_SERVICE_ROOT |
131 | @@ -23,8 +24,8 @@ |
132 | CollectorStore, |
133 | PersonCache, |
134 | WorkitemParser, |
135 | - bug_wi_states |
136 | -) |
137 | + bug_wi_states, |
138 | + ) |
139 | from lpworkitems.database import get_store |
140 | from lpworkitems.error_collector import ( |
141 | BlueprintURLError, |
142 | @@ -71,11 +72,13 @@ |
143 | """Get a link to the Launchpad API object on the website.""" |
144 | api_link = item.self_link |
145 | parts = urlparse.urlparse(api_link) |
146 | - link = parts.scheme + "://" + parts.netloc.replace("api.", "") + "/" + parts.path.split("/", 2)[2] |
147 | + link = parts.scheme + "://" + parts.netloc.replace("api.", "") + \ |
148 | + "/" + parts.path.split("/", 2)[2] |
149 | return link.decode("utf-8") |
150 | |
151 | |
152 | import simplejson |
153 | + |
154 | _orig_loads = simplejson.loads |
155 | |
156 | |
157 | @@ -103,7 +106,10 @@ |
158 | ''' |
159 | model_bp = Blueprint.from_launchpad(bp) |
160 | if model_bp.milestone_name not in collector.valid_milestone_names(): |
161 | - data_error(web_link(bp), 'milestone "%s" is unknown/invalid' % model_bp.milestone_name, True) |
162 | + data_error( |
163 | + web_link(bp), |
164 | + 'milestone "%s" is unknown/invalid' % model_bp.milestone_name, |
165 | + True) |
166 | model_bp = collector.store_blueprint(model_bp) |
167 | if model_bp: |
168 | dbg('lp_import_blueprint: added blueprint: %s' % bp.name) |
169 | @@ -123,7 +129,10 @@ |
170 | model_group = BlueprintGroup.from_launchpad(bp) |
171 | model_group.area = area |
172 | if model_group.milestone_name not in collector.valid_milestone_names(): |
173 | - data_error(web_link(bp), 'milestone "%s" is unknown/invalid' % model_group.milestone, True) |
174 | + data_error( |
175 | + web_link(bp), |
176 | + 'milestone "%s" is unknown/invalid' % model_group.milestone, |
177 | + True) |
178 | |
179 | model_group = collector.store_blueprint_group(model_group) |
180 | if model_group is None: |
181 | @@ -154,7 +163,8 @@ |
182 | collector.store_meta(key, value, bp_name) |
183 | |
184 | |
185 | -def parse_complexity_item(collector, line, bp_name, bp_url, def_milestone, def_assignee): |
186 | +def parse_complexity_item(collector, line, bp_name, bp_url, def_milestone, |
187 | + def_assignee): |
188 | line = line.strip() |
189 | # remove special characters people tend to type |
190 | line = re.sub('[^\w -.]', '', line) |
191 | @@ -183,7 +193,9 @@ |
192 | dbg('\tComplexity: %s MS: %s Who: %s' % (num, milestone, assignee)) |
193 | collector.store_complexity(assignee, num, milestone, bp_name) |
194 | except ValueError: |
195 | - data_error(bp_url, "\tComplexity line '%s' could not be parsed %s" % (line, ValueError)) |
196 | + data_error(bp_url, |
197 | + "\tComplexity line '%s' could not be parsed %s" % |
198 | + (line, ValueError)) |
199 | |
200 | |
201 | def milestone_extract(text, valid_milestones): |
202 | @@ -196,7 +208,7 @@ |
203 | return None |
204 | |
205 | |
206 | -def lp_import_blueprint_workitems(collector, bp, distro_release, people_cache=None, projects=None): |
207 | +def lp_import_blueprint_workitems(collector, bp, distro_release, |
208 | '''Collect work items from a Launchpad blueprint. |
209 | |
210 | This includes work items from the whiteboard as well as linked bugs. |
211 | @@ -212,17 +224,19 @@ |
212 | |
213 | model_bp = collector.store.find( |
214 | Blueprint, Blueprint.name == bp.name).one() |
215 | - assert model_bp is not None, "Asked to process workitems of %s when it is not in the db" % bp.name |
216 | + assert model_bp is not None, \ |
217 | + "Asked to process workitems of %s when it is not in the db" % bp.name |
218 | |
219 | - dbg('lp_import_blueprint_workitems(): processing %s (spec milestone: %s, spec assignee: %s, spec implementation: %s)' % ( |
220 | + dbg('lp_import_blueprint_workitems(): processing %s (spec milestone: %s,' \ |
221 | + ' spec assignee: %s, spec implementation: %s)' % ( |
222 | bp.name, model_bp.milestone_name, model_bp.assignee_name, |
223 | model_bp.implementation)) |
224 | |
225 | valid_milestones = collector.valid_milestone_names() |
226 | global error_collector |
227 | parser = WorkitemParser( |
228 | - model_bp, model_bp.milestone_name, collector.lp, people_cache=people_cache, |
229 | - error_collector=error_collector) |
230 | + model_bp, model_bp.milestone_name, collector.lp, |
231 | + people_cache=people_cache, error_collector=error_collector) |
232 | |
233 | # Get work items from both the whiteboard and the new workitems_text |
234 | # property. Once the migration is completed and nobody's using the |
235 | @@ -239,16 +253,19 @@ |
236 | m = work_items_re.search(l) |
237 | if m: |
238 | in_workitems_block = True |
239 | - dbg('lp_import_blueprint_workitems(): starting work items block at ' + l) |
240 | + dbg('lp_import_blueprint_workitems():' |
241 | + ' starting work items block at ' + l) |
242 | milestone = milestone_extract(m.group(1), valid_milestones) |
243 | dbg(' ... setting milestone to ' + str(milestone)) |
244 | - parser.milestone_name = milestone or parser.blueprint.milestone_name |
245 | + parser.milestone_name = \ |
246 | + milestone or parser.blueprint.milestone_name |
247 | continue |
248 | |
249 | if in_workitems_block: |
250 | dbg("\tworkitem (raw): '%s'" % (l.strip())) |
251 | if not l.strip(): |
252 | - dbg('lp_import_blueprint_workitems(): closing work items block with line: ' + l) |
253 | + dbg('lp_import_blueprint_workitems():' |
254 | + ' closing work items block with line: ' + l) |
255 | in_workitems_block = False |
256 | parser.milestone_name = parser.blueprint.milestone_name |
257 | workitem = parser.parse_blueprint_workitem(l) |
258 | |
259 | === added file 'collect_jira' |
260 | --- collect_jira 1970-01-01 00:00:00 +0000 |
261 | +++ collect_jira 2012-07-10 16:26:24 +0000 |
262 | @@ -0,0 +1,229 @@ |
263 | +#!/usr/bin/python |
264 | +# |
265 | +# Pull items from cards.linaro.org and put them into a database. |
266 | + |
267 | +import logging |
268 | +import optparse |
269 | +import os |
270 | +import simplejson |
271 | +import sys |
272 | +import urllib2 |
273 | + |
274 | +import jira |
275 | +from lpworkitems.collect_roadmap import ( |
276 | + CollectorStore, |
277 | + ) |
278 | +from lpworkitems.database import get_store |
279 | +from lpworkitems.error_collector import ( |
280 | + ErrorCollector, |
281 | + StderrErrorCollector, |
282 | + ) |
283 | +from lpworkitems.models_roadmap import ( |
284 | + Lane, |
285 | + Card, |
286 | + ) |
287 | +import report_tools |
288 | + |
289 | + |
290 | +# An ErrorCollector to collect the data errors for later reporting |
291 | +error_collector = None |
292 | + |
293 | + |
294 | +logger = logging.getLogger("linarojira") |
295 | + |
296 | +JIRA_API_URL = 'http://cards.linaro.org/rest/api/2' |
297 | +JIRA_PROJECT_KEY = 'CARD' |
298 | +JIRA_ISSUE_BY_KEY_URL = 'http://cards.linaro.org/browse/%s' |
299 | + |
300 | + |
301 | +def dbg(msg): |
302 | + '''Print out debugging message if debugging is enabled.''' |
303 | + logger.debug(msg) |
304 | + |
305 | + |
306 | +def get_json_data(url): |
307 | + data = None |
308 | + try: |
309 | + data = simplejson.load(urllib2.urlopen(url)) |
310 | + except urllib2.HTTPError, e: |
311 | + print "HTTP error for url '%s': %d" % (url, e.code) |
312 | + except urllib2.URLError, e: |
313 | + print "Network error for url '%s': %s" % (url, e.reason.args[1]) |
314 | + except ValueError, e: |
315 | + print "Data error for url '%s': %s" % (url, e.args[0]) |
316 | + |
317 | + return data |
318 | + |
319 | + |
320 | +def jira_import(collector, cfg, opts): |
321 | + '''Collect roadmap items from JIRA into DB.''' |
322 | + |
323 | + # import JIRA versions as database Lanes |
324 | + result = jira.do_request(opts, 'project/%s/versions' % JIRA_PROJECT_KEY) |
325 | + for version in result: |
326 | + dbg('Adding lane (name = %s, id = %s)' % |
327 | + (version['name'], version['id'])) |
328 | + model_lane = Lane(unicode(version['name']), int(version['id'])) |
329 | + if model_lane.name == cfg['current_lane']: |
330 | + model_lane.is_current = True |
331 | + else: |
332 | + model_lane.is_current = False |
333 | + collector.store_lane(model_lane) |
334 | + |
335 | + # find id of "Sponsor" custom field in JIRA |
336 | + result = jira.do_request(opts, 'field') |
337 | + sponsor_fields = [field for field in result if field['name'] == 'Sponsor'] |
338 | + assert len(sponsor_fields) == 1, 'Not a single Sponsor field' |
339 | + sponsor_field_id = sponsor_fields[0]['id'] |
340 | + |
341 | + # import JIRA issues as database Cards |
342 | + result = jira.do_request( |
343 | + opts, 'search', jql='project = %s' % JIRA_PROJECT_KEY, |
344 | + fields=['summary', 'fixVersions', 'status', 'components', |
345 | + 'priority', 'description', 'timetracking', sponsor_field_id]) |
346 | + for issue in result['issues']: |
347 | + fields = issue['fields'] |
348 | + name = unicode(fields['summary']) |
349 | + card_id = int(issue['id']) |
350 | + key = unicode(issue['key']) |
351 | + fixVersions = fields['fixVersions'] |
352 | + if len(fixVersions) == 0: |
353 | + dbg('Skipping card without lane (name = %s, key = %s)' % |
354 | + (name, key)) |
355 | + continue |
356 | + # JIRA allows listing multiple versions in fixVersions |
357 | + assert len(fixVersions) == 1 |
358 | + lane_id = int(fixVersions[0]['id']) |
359 | + |
360 | + dbg('Adding card (name = %s, id = %s, lane_id = %s, key = %s)' % |
361 | + (name, card_id, lane_id, key)) |
362 | + model_card = Card(name, card_id, lane_id, key) |
363 | + model_card.status = unicode(fields['status']['name']) |
364 | + components = fields['components'] |
365 | + if len(components) == 0: |
366 | + dbg('Skipping card without component (name = %s, key = %s)' % |
367 | + (name, key)) |
368 | + # JIRA allows listing multiple components |
369 | + assert len(components) == 1 |
370 | + model_card.team = unicode(components[0]['name']) |
371 | + model_card.priority = unicode(fields['priority']['name']) |
372 | + size_fields = [] |
373 | + timetracking = fields['timetracking'] |
374 | + if 'originalEstimate' in timetracking: |
375 | + size_fields += [ |
376 | + 'original estimate: %s' % timetracking['originalEstimate']] |
377 | + if 'remainingEstimate' in timetracking: |
378 | + size_fields += [ |
379 | + 'remaining estimate: %s' % timetracking['remainingEstimate']] |
380 | + model_card.size = unicode(', '.join(size_fields)) |
381 | + model_card.sponsor = u'' |
382 | + # None if no sponsor is selected |
383 | + if fields[sponsor_field_id] is not None: |
384 | + sponsors = [s['value'] for s in fields[sponsor_field_id]] |
385 | + model_card.sponsor = unicode(', '.join(sorted(sponsors))) |
386 | + model_card.url = JIRA_ISSUE_BY_KEY_URL % key |
387 | + # XXX need to either download the HTML version or convert this to HTML |
388 | + model_card.description = unicode(fields['description']) |
389 | + # acceptance criteria is in the description |
390 | + model_card.acceptance_criteria = u'' |
391 | + collector.store_card(model_card) |
392 | + return |
393 | + |
394 | +######################################################################## |
395 | +# |
396 | +# Program operations and main |
397 | +# |
398 | +######################################################################## |
399 | + |
400 | + |
401 | +def parse_argv(): |
402 | + '''Parse CLI arguments. |
403 | + |
404 | + Return (options, args) tuple. |
405 | + ''' |
406 | + optparser = optparse.OptionParser() |
407 | + optparser.add_option('-d', '--database', |
408 | + help='Path to database', dest='database', metavar='PATH') |
409 | + optparser.add_option('-c', '--config', |
410 | + help='Path to configuration file', dest='config', metavar='PATH') |
411 | + optparser.add_option('--debug', action='store_true', default=False, |
412 | + help='Enable debugging output in parsing routines') |
413 | + optparser.add_option('--mail', action='store_true', default=False, |
414 | + help='Send data errors as email (according to "error_config" map in ' |
415 | + 'config file) instead of printing to stderr', dest='mail') |
416 | + optparser.add_option('--jira-username', default='robot', |
417 | + help='JIRA username for authentication', dest='jira_username') |
418 | + optparser.add_option('--jira-password', default='cuf4moh2', |
419 | + help='JIRA password for authentication', dest='jira_password') |
420 | + |
421 | + (opts, args) = optparser.parse_args() |
422 | + |
423 | + if not opts.database: |
424 | + optparser.error('No database given') |
425 | + if not opts.config: |
426 | + optparser.error('No config given') |
427 | + |
428 | + return opts, args |
429 | + |
430 | + |
431 | +def setup_logging(debug): |
432 | + ch = logging.StreamHandler() |
433 | + ch.setLevel(logging.INFO) |
434 | + formatter = logging.Formatter("%(message)s") |
435 | + ch.setFormatter(formatter) |
436 | + logger.setLevel(logging.INFO) |
437 | + logger.addHandler(ch) |
438 | + if debug: |
439 | + ch.setLevel(logging.DEBUG) |
440 | + formatter = logging.Formatter( |
441 | + "%(asctime)s - %(name)s - %(levelname)s - %(message)s") |
442 | + ch.setFormatter(formatter) |
443 | + logger.setLevel(logging.DEBUG) |
444 | + |
445 | + |
446 | +def update_todays_blueprint_daily_count_per_state(collector): |
447 | + """Clear today's entries and create them again to reflect the current |
448 | + state of blueprints.""" |
449 | + collector.clear_todays_blueprint_daily_count_per_state() |
450 | + collector.store_roadmap_bp_count_per_state() |
451 | + |
452 | + |
453 | +def main(): |
454 | + report_tools.fix_stdouterr() |
455 | + |
456 | + (opts, args) = parse_argv() |
457 | + opts.jira_api_url = JIRA_API_URL |
458 | + |
459 | + setup_logging(opts.debug) |
460 | + |
461 | + global error_collector |
462 | + if opts.mail: |
463 | + error_collector = ErrorCollector() |
464 | + else: |
465 | + error_collector = StderrErrorCollector() |
466 | + |
467 | + cfg = report_tools.load_config(opts.config) |
468 | + |
469 | + lock_path = opts.database + ".lock" |
470 | + lock_f = open(lock_path, "wb") |
471 | + if report_tools.lock_file(lock_f) is None: |
472 | + print "Another instance is already running" |
473 | + sys.exit(0) |
474 | + |
475 | + store = get_store(opts.database) |
476 | + collector = CollectorStore(store, '', error_collector) |
477 | + |
478 | + collector.clear_lanes() |
479 | + collector.clear_cards() |
480 | + |
481 | + jira_import(collector, cfg, opts) |
482 | + |
483 | + update_todays_blueprint_daily_count_per_state(collector) |
484 | + |
485 | + store.commit() |
486 | + |
487 | + os.unlink(lock_path) |
488 | + |
489 | + |
490 | +if __name__ == '__main__': |
491 | + main() |
492 | |
493 | === modified file 'collect_roadmap' |
494 | --- collect_roadmap 2012-01-30 12:08:06 +0000 |
495 | +++ collect_roadmap 2012-07-10 16:26:24 +0000 |
496 | @@ -2,10 +2,12 @@ |
497 | # |
498 | # Pull items from the Linaro roadmap in Kanbantool and put them into a database. |
499 | |
500 | -import urllib2, re, sys, optparse, smtplib, pwd, os |
501 | +import logging |
502 | +import optparse |
503 | +import os |
504 | import simplejson |
505 | -import logging |
506 | -from email.mime.text import MIMEText |
507 | +import sys |
508 | +import urllib2 |
509 | |
510 | from lpworkitems.collect_roadmap import ( |
511 | CollectorStore, |
512 | |
513 | === modified file 'jira.py' |
514 | --- jira.py 2012-05-14 11:39:09 +0000 |
515 | +++ jira.py 2012-07-10 16:26:24 +0000 |
516 | @@ -6,6 +6,7 @@ |
517 | import simplejson |
518 | import urllib2 |
519 | |
520 | + |
521 | def do_request(opts, relpathname, **kwargs): |
522 | request = urllib2.Request('%s/%s' % (opts.jira_api_url, relpathname)) |
523 | if opts.jira_username and opts.jira_password: |
524 | @@ -20,6 +21,7 @@ |
525 | response_data = urllib2.urlopen(request, request_data) |
526 | return simplejson.load(response_data) |
527 | |
528 | + |
529 | def main(): |
530 | parser = optparse.OptionParser(usage="%prog") |
531 | parser.add_option("--jira-api-url", dest="jira_api_url", |
532 | @@ -31,7 +33,8 @@ |
533 | opts, args = parser.parse_args() |
534 | |
535 | # simple search |
536 | - print do_request(opts, 'search', maxResults=1, jql='project = CARD', fields=['summary', 'status']) |
537 | + print do_request(opts, 'search', maxResults=1, jql='project = CARD', |
538 | + fields=['summary', 'status']) |
539 | |
540 | # information about a project |
541 | #print do_request(opts, 'project/CARD') |
542 | @@ -50,4 +53,3 @@ |
543 | |
544 | if __name__ == "__main__": |
545 | main() |
546 | - |
547 | |
548 | === modified file 'utils.py' |
549 | --- utils.py 2011-09-12 14:41:07 +0000 |
550 | +++ utils.py 2012-07-10 16:26:24 +0000 |
551 | @@ -4,4 +4,3 @@ |
552 | if isinstance(attr, unicode): |
553 | return attr |
554 | return attr.decode("utf-8") |
555 | - |
Hello Loïc,
thanks for working on this.
On Tue, Jun 26, 2012 at 5:14 PM, Loïc Minier <email address hidden> wrote:
>
> Probably fixing the latter requires some DB changes; I'm not sure what rules we operate on for these though, e.g. how does one deploy the schema update?
WRT this point, I have to collect info too: our knowledge base does not report anything for this situation, and I will have to ask Danilo and maybe IS too.
From a quick PEP8 run, I get these two errors:
collect_jira:5:15: E401 multiple imports on one line
all-projects:77:80: E501 line too long (94 characters)
Apart from those, everything looks good. If you can, fix them during the merge.
Thanks.
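For completeness, the E401 error above flags several modules imported on one line; the fix is purely mechanical, matching the "Split and sort system imports" revisions in this branch. The example below is illustrative, not taken from the diff.

```python
# Flagged by pep8 as E401 (multiple imports on one line):
#     import urllib2, re, sys
# Fixed form: one module per line, alphabetically sorted.
import re
import sys

# Behaviour is identical; only the layout changes. As a quick sanity
# check, match a pep8 error code with the freshly imported re module.
pattern = re.compile(r'^E\d{3}$')
print(bool(pattern.match('E401')))
```

E501 ("line too long") fixes, by contrast, require judgement about where to wrap, which is why not all of them were addressed here.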