Merge lp:~lool/launchpad-work-items-tracker/jira-support into lp:~linaro-automation/launchpad-work-items-tracker/linaro
Status: Merged
Merged at revision: 339
Proposed branch: lp:~lool/launchpad-work-items-tracker/jira-support
Merge into: lp:~linaro-automation/launchpad-work-items-tracker/linaro
Diff against target: 555 lines (+321/-47), 6 files modified:
  all-projects (+45/-20), collect (+38/-21), collect_jira (+229/-0),
  collect_roadmap (+5/-3), jira.py (+4/-2), utils.py (+0/-1)
To merge this branch: bzr merge lp:~lool/launchpad-work-items-tracker/jira-support
Related bugs:
Reviewer: Milo Casagrande (community): Approve
Review via email: mp+112136@code.launchpad.net
Commit message
Description of the change
This adds support for getting card data out of JIRA instead of kanbantool.com + papyrs.com; it is only active if you set cards_source = 'jira' in the project's config, so the branch can be merged and remain inactive. This is deployed in staging and works there, but with some limitations:
* the description is taken as plain text, but in the cards it is a mixture of HTML and JIRA wiki syntax; in the current version of status.linaro.org the description is empty/broken, so this isn't a regression, but it looks ugly
* "Roadmap card id" currently refers to e.g. CARD-12 instead of some human-readable text like TCWG2012-
* some features are still missing, such as setting the sponsor; these are relatively minor
Fixing the latter probably requires some DB changes; I'm not sure what rules we operate under for these, though: e.g. how does one deploy the schema update?
In general, I consider this ready for merging, but we can keep this branch open until the above bits are also fixed.
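The per-project dispatch described above can be sketched roughly as follows. This is an illustrative stub, not the branch's code: the real collectors shell out to the collect_jira and collect_roadmap scripts, and the config is loaded via report_tools.load_config rather than being a plain dict.

```python
# Sketch of the cards_source dispatch added to all-projects.
# Collector functions are stubbed out for illustration.

def collect_jira(project_name):
    return "collect_jira ran for %s" % project_name

def collect_roadmap(project_name):
    return "collect_roadmap ran for %s" % project_name

def dispatch(cfg, project_name):
    # default to kanbantool when the project config says nothing
    cards_source = cfg.get('cards_source', 'kanban')
    if cards_source == 'jira':
        return collect_jira(project_name)
    elif cards_source == 'kanban':
        return collect_roadmap(project_name)
    raise ValueError("Unknown cards source %s" % cards_source)
```

Because the default is 'kanban', projects whose config never mentions cards_source keep the old kanbantool.com + papyrs.com behaviour unchanged.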
- 340. By Loïc Minier: No need to remap priority names and component names.
- 341. By Loïc Minier: Split and sort system imports.
- 342. By Loïc Minier: Misc PEP8 fixes.
- 343. By Loïc Minier: Split and sort system imports.
- 344. By Loïc Minier: Fix a bunch of PEP8 issues.
- 345. By Loïc Minier: Drop useless imports.
Loïc Minier (lool) wrote:
I've now fixed the PEP8 issues you mention and others, thanks! Note that collect_jira was copied from collect_roadmap (the kanbantool.com + papyrs.com code) and inherited its syntax problems; the code is no longer duplicated much, and I had also removed the useless imports, but the original syntax layout remained.
One problem with fixing these issues is that it makes the delta with the upstream project larger, but then I don't think we will ever merge back, so...
(I haven't fixed all of the E501 "line too long" issues because there were too many of them.)
- 346. By Loïc Minier: Update sponsor information from JIRA.
- 347. By Loïc Minier: Set card Size from JIRA timetracking information if available.
- 348. By Loïc Minier: Wrap card sponsor in unicode().
- 349. By Loïc Minier: PEP8 line-too-long fixes.
- 350. By Loïc Minier: Acceptance criteria is in the description by design.
- 351. By Loïc Minier: Merge with Linaro branch of workitems tracker, which has some independent PEP8 fixes.
Loïc Minier (lool) wrote:
I've pushed some further changes to complete support for other fields; what's still missing is support for fetching the description, but I need to get beautifulsoup deployed on status.linaro.org to get this done. The other remaining changes require DB changes and can wait.
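The missing piece for descriptions is stripping the HTML that JIRA renders; BeautifulSoup is the planned dependency. As an illustration only (not the deployed code, and using the Python 3 stdlib parser in place of BeautifulSoup), tag-stripping looks like this:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Accumulate the text content of an HTML fragment, dropping tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # called for each run of text between tags
        self.chunks.append(data)

def strip_html(fragment):
    # Feed the fragment through the parser and join the text nodes;
    # BeautifulSoup's get_text() would play the same role here.
    parser = TextExtractor()
    parser.feed(fragment)
    return ''.join(parser.chunks)
```

This only drops tags; the JIRA wiki-syntax portions of card descriptions mentioned earlier would pass through unchanged.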
Preview Diff
=== modified file 'all-projects'
--- all-projects 2012-01-30 10:03:24 +0000
+++ all-projects 2012-07-10 16:26:24 +0000
@@ -13,12 +13,19 @@
 import subprocess
 import sys
 
+import report_tools
+
 
 def collect(source_dir, db_file, config_file, extra_args):
     return run_collect_script(source_dir, db_file, config_file, extra_args,
                               "collect")
 
 
+def collect_jira(source_dir, db_file, config_file, extra_args):
+    return run_collect_script(source_dir, db_file, config_file, extra_args,
+                              "collect_jira")
+
+
 def collect_roadmap(source_dir, db_file, config_file, extra_args):
     return run_collect_script(source_dir, db_file, config_file, extra_args,
                               "collect_roadmap")
@@ -67,7 +74,8 @@
 
 
 def main():
-    parser = optparse.OptionParser(usage="%prog <database dir> <www root dir> [www root url]")
+    parser = optparse.OptionParser(
+        usage="%prog <database dir> <www root dir> [www root url]")
     parser.add_option("--config-dir", dest="config_dir", default="config")
     parser.add_option("--roadmap-config-file", dest="roadmap_config_file",
                       default="roadmap-config")
@@ -103,6 +111,11 @@
         os.path.join(source_dir, opts.config_dir, "*%s" % valid_config_suffix))
 
     for config_file in filenames:
+        # read roadmap config to find where to get cards from
+        cfg = report_tools.load_config(config_file)
+        # default to kanbantool
+        cards_source = cfg.get('cards_source', 'kanban')
+
         project_name = os.path.basename(config_file)[:-len(valid_config_suffix)]
         project_output_dir = os.path.join(output_dir, project_name)
         db_file = os.path.join(db_dir, "%s.db" % project_name)
@@ -127,26 +140,38 @@
             sys.stderr.write("collect failed for %s" % project_name)
             continue
 
-        extra_collect_roadmap_args = []
-        extra_collect_roadmap_args.extend(["--board", '10721'])
-        if opts.kanban_token_file is not None:
-            with open(opts.kanban_token_file) as token_file:
-                token = token_file.read()
-            extra_collect_roadmap_args.extend(["--kanbantoken", token])
-        else:
-            sys.stderr.write("No Kanbantool API token given to "
-                             "collect_roadmap for %s" % project_name)
-        if opts.papyrs_token_file is not None:
-            with open(opts.papyrs_token_file) as token_file:
-                token = token_file.read()
-            extra_collect_roadmap_args.extend(["--papyrstoken", token])
-        else:
-            sys.stderr.write("No Papyrs API token given to "
-                             "collect_roadmap for %s" % project_name)
+        if cards_source == 'jira':
+            extra_collect_jira_args = []
+            if not collect_jira(source_dir, db_file, opts.roadmap_config_file,
+                                extra_collect_jira_args):
+                sys.stderr.write("collect_jira failed for %s" % project_name)
+                continue
+        elif cards_source == 'kanban':
+            extra_collect_roadmap_args = []
+            extra_collect_roadmap_args.extend(["--board", '10721'])
+            if opts.kanban_token_file is not None:
+                with open(opts.kanban_token_file) as token_file:
+                    token = token_file.read()
+                extra_collect_roadmap_args.extend(["--kanbantoken", token])
+            else:
+                sys.stderr.write("No Kanbantool API token given to "
+                                 "collect_roadmap for %s" % project_name)
+            if opts.papyrs_token_file is not None:
+                with open(opts.papyrs_token_file) as token_file:
+                    token = token_file.read()
+                extra_collect_roadmap_args.extend(["--papyrstoken", token])
+            else:
+                sys.stderr.write("No Papyrs API token given to "
+                                 "collect_roadmap for %s" % project_name)
 
-        if not collect_roadmap(source_dir, db_file, opts.roadmap_config_file,
-                               extra_collect_roadmap_args):
-            sys.stderr.write("collect_roadmap failed for %s" % project_name)
+            if not collect_roadmap(source_dir, db_file,
+                                   opts.roadmap_config_file,
+                                   extra_collect_roadmap_args):
+                sys.stderr.write("collect_roadmap failed for %s" %
+                                 project_name)
+        else:
+            sys.stderr.write("Unknown cards source %s" % cards_source)
+            continue
 
         publish_new_db(project_name, project_output_dir, db_file)
         generate_reports(project_output_dir, config_file, db_file,

=== modified file 'collect'
--- collect 2012-06-29 14:04:58 +0000
+++ collect 2012-07-10 16:26:24 +0000
@@ -6,15 +6,16 @@
 # Copyright (C) 2010, 2011 Canonical Ltd.
 # License: GPL-3
 
-import urllib
+import logging
+import optparse
+import os
+import pwd
 import re
+import smtplib
 import sys
-import optparse
-import smtplib
-import pwd
-import os
+import urllib
 import urlparse
-import logging
+
 from email.mime.text import MIMEText
 
 from launchpadlib.launchpad import Launchpad, EDGE_SERVICE_ROOT
@@ -23,8 +24,8 @@
     CollectorStore,
     PersonCache,
     WorkitemParser,
-    bug_wi_states
+    bug_wi_states,
 )
 from lpworkitems.database import get_store
 from lpworkitems.error_collector import (
     BlueprintURLError,
@@ -71,11 +72,13 @@
     """Get a link to the Launchpad API object on the website."""
     api_link = item.self_link
     parts = urlparse.urlparse(api_link)
-    link = parts.scheme + "://" + parts.netloc.replace("api.", "") + "/" + parts.path.split("/", 2)[2]
+    link = parts.scheme + "://" + parts.netloc.replace("api.", "") + \
+        "/" + parts.path.split("/", 2)[2]
     return link.decode("utf-8")
 
 
 import simplejson
+
 _orig_loads = simplejson.loads
 
 
@@ -103,7 +106,10 @@
     '''
     model_bp = Blueprint.from_launchpad(bp)
     if model_bp.milestone_name not in collector.valid_milestone_names():
-        data_error(web_link(bp), 'milestone "%s" is unknown/invalid' % model_bp.milestone_name, True)
+        data_error(
+            web_link(bp),
+            'milestone "%s" is unknown/invalid' % model_bp.milestone_name,
+            True)
     model_bp = collector.store_blueprint(model_bp)
     if model_bp:
         dbg('lp_import_blueprint: added blueprint: %s' % bp.name)
@@ -123,7 +129,10 @@
     model_group = BlueprintGroup.from_launchpad(bp)
     model_group.area = area
     if model_group.milestone_name not in collector.valid_milestone_names():
-        data_error(web_link(bp), 'milestone "%s" is unknown/invalid' % model_group.milestone, True)
+        data_error(
+            web_link(bp),
+            'milestone "%s" is unknown/invalid' % model_group.milestone,
+            True)
 
     model_group = collector.store_blueprint_group(model_group)
     if model_group is None:
@@ -154,7 +163,8 @@
         collector.store_meta(key, value, bp_name)
 
 
-def parse_complexity_item(collector, line, bp_name, bp_url, def_milestone, def_assignee):
+def parse_complexity_item(collector, line, bp_name, bp_url, def_milestone,
+                          def_assignee):
     line = line.strip()
     # remove special characters people tend to type
     line = re.sub('[^\w -.]', '', line)
@@ -183,7 +193,9 @@
         dbg('\tComplexity: %s MS: %s Who: %s' % (num, milestone, assignee))
         collector.store_complexity(assignee, num, milestone, bp_name)
     except ValueError:
-        data_error(bp_url, "\tComplexity line '%s' could not be parsed %s" % (line, ValueError))
+        data_error(bp_url,
+                   "\tComplexity line '%s' could not be parsed %s" %
+                   (line, ValueError))
 
 
 def milestone_extract(text, valid_milestones):
@@ -196,7 +208,7 @@
     return None
 
 
-def lp_import_blueprint_workitems(collector, bp, distro_release, people_cache=None, projects=None):
+def lp_import_blueprint_workitems(collector, bp, distro_release,
     '''Collect work items from a Launchpad blueprint.
 
     This includes work items from the whiteboard as well as linked bugs.
@@ -212,17 +224,19 @@
 
     model_bp = collector.store.find(
         Blueprint, Blueprint.name == bp.name).one()
-    assert model_bp is not None, "Asked to process workitems of %s when it is not in the db" % bp.name
+    assert model_bp is not None, \
+        "Asked to process workitems of %s when it is not in the db" % bp.name
 
-    dbg('lp_import_blueprint_workitems(): processing %s (spec milestone: %s, spec assignee: %s, spec implementation: %s)' % (
+    dbg('lp_import_blueprint_workitems(): processing %s (spec milestone: %s,' \
+        ' spec assignee: %s, spec implementation: %s)' % (
         bp.name, model_bp.milestone_name, model_bp.assignee_name,
         model_bp.implementation))
 
     valid_milestones = collector.valid_milestone_names()
     global error_collector
     parser = WorkitemParser(
-        model_bp, model_bp.milestone_name, collector.lp, people_cache=people_cache,
-        error_collector=error_collector)
+        model_bp, model_bp.milestone_name, collector.lp,
+        people_cache=people_cache, error_collector=error_collector)
 
     # Get work items from both the whiteboard and the new workitems_text
     # property. Once the migration is completed and nobody's using the
@@ -239,16 +253,19 @@
             m = work_items_re.search(l)
             if m:
                 in_workitems_block = True
-                dbg('lp_import_blueprint_workitems(): starting work items block at ' + l)
+                dbg('lp_import_blueprint_workitems():'
+                    ' starting work items block at ' + l)
                 milestone = milestone_extract(m.group(1), valid_milestones)
                 dbg(' ... setting milestone to ' + str(milestone))
-                parser.milestone_name = milestone or parser.blueprint.milestone_name
+                parser.milestone_name = \
+                    milestone or parser.blueprint.milestone_name
                 continue
 
             if in_workitems_block:
                 dbg("\tworkitem (raw): '%s'" % (l.strip()))
                 if not l.strip():
-                    dbg('lp_import_blueprint_workitems(): closing work items block with line: ' + l)
+                    dbg('lp_import_blueprint_workitems():'
+                        ' closing work items block with line: ' + l)
                     in_workitems_block = False
                     parser.milestone_name = parser.blueprint.milestone_name
                 workitem = parser.parse_blueprint_workitem(l)

=== added file 'collect_jira'
--- collect_jira 1970-01-01 00:00:00 +0000
+++ collect_jira 2012-07-10 16:26:24 +0000
@@ -0,0 +1,229 @@
+#!/usr/bin/python
+#
+# Pull items from cards.linaro.org and put them into a database.
+
+import logging
+import optparse
+import os
+import simplejson
+import sys
+import urllib2
+
+import jira
+from lpworkitems.collect_roadmap import (
+    CollectorStore,
+)
+from lpworkitems.database import get_store
+from lpworkitems.error_collector import (
+    ErrorCollector,
+    StderrErrorCollector,
+)
+from lpworkitems.models_roadmap import (
+    Lane,
+    Card,
+)
+import report_tools
+
+
+# An ErrorCollector to collect the data errors for later reporting
+error_collector = None
+
+
+logger = logging.getLogger("linarojira")
+
+JIRA_API_URL = 'http://cards.linaro.org/rest/api/2'
+JIRA_PROJECT_KEY = 'CARD'
+JIRA_ISSUE_BY_KEY_URL = 'http://cards.linaro.org/browse/%s'
+
+
+def dbg(msg):
+    '''Print out debugging message if debugging is enabled.'''
+    logger.debug(msg)
+
+
+def get_json_data(url):
+    data = None
+    try:
+        data = simplejson.load(urllib2.urlopen(url))
+    except urllib2.HTTPError, e:
+        print "HTTP error for url '%s': %d" % (url, e.code)
+    except urllib2.URLError, e:
+        print "Network error for url '%s': %s" % (url, e.reason.args[1])
+    except ValueError, e:
+        print "Data error for url '%s': %s" % (url, e.args[0])
+
+    return data
+
+
+def jira_import(collector, cfg, opts):
+    '''Collect roadmap items from JIRA into DB.'''
+
+    # import JIRA versions as database Lanes
+    result = jira.do_request(opts, 'project/%s/versions' % JIRA_PROJECT_KEY)
+    for version in result:
+        dbg('Adding lane (name = %s, id = %s)' %
+            (version['name'], version['id']))
+        model_lane = Lane(unicode(version['name']), int(version['id']))
+        if model_lane.name == cfg['current_lane']:
+            model_lane.is_current = True
+        else:
+            model_lane.is_current = False
+        collector.store_lane(model_lane)
+
+    # find id of "Sponsor" custom field in JIRA
+    result = jira.do_request(opts, 'field')
+    sponsor_fields = [field for field in result if field['name'] == 'Sponsor']
+    assert len(sponsor_fields) == 1, 'Not a single Sponsor field'
+    sponsor_field_id = sponsor_fields[0]['id']
+
+    # import JIRA issues as database Cards
+    result = jira.do_request(
+        opts, 'search', jql='project = %s' % JIRA_PROJECT_KEY,
+        fields=['summary', 'fixVersions', 'status', 'components',
+                'priority', 'description', 'timetracking', sponsor_field_id])
+    for issue in result['issues']:
+        fields = issue['fields']
+        name = unicode(fields['summary'])
+        card_id = int(issue['id'])
+        key = unicode(issue['key'])
+        fixVersions = fields['fixVersions']
+        if len(fixVersions) == 0:
+            dbg('Skipping card without lane (name = %s, key = %s)' %
+                (name, key))
+            continue
+        # JIRA allows listing multiple versions in fixVersions
+        assert len(fixVersions) == 1
+        lane_id = int(fixVersions[0]['id'])
+
+        dbg('Adding card (name = %s, id = %s, lane_id = %s, key = %s)' %
+            (name, card_id, lane_id, key))
+        model_card = Card(name, card_id, lane_id, key)
+        model_card.status = unicode(fields['status']['name'])
+        components = fields['components']
+        if len(components) == 0:
+            dbg('Skipping card without component (name = %s, key = %s)' %
+                (name, key))
+        # JIRA allows listing multiple components
+        assert len(components) == 1
+        model_card.team = unicode(components[0]['name'])
+        model_card.priority = unicode(fields['priority']['name'])
+        size_fields = []
+        timetracking = fields['timetracking']
+        if 'originalEstimate' in timetracking:
+            size_fields += [
+                'original estimate: %s' % timetracking['originalEstimate']]
+        if 'remainingEstimate' in timetracking:
+            size_fields += [
+                'remaining estimate: %s' % timetracking['remainingEstimate']]
+        model_card.size = unicode(', '.join(size_fields))
+        model_card.sponsor = u''
+        # None if no sponsor is selected
+        if fields[sponsor_field_id] is not None:
+            sponsors = [s['value'] for s in fields[sponsor_field_id]]
+            model_card.sponsor = unicode(', '.join(sorted(sponsors)))
+        model_card.url = JIRA_ISSUE_BY_KEY_URL % key
+        # XXX need to either download the HTML version or convert this to HTML
+        model_card.description = unicode(fields['description'])
+        # acceptance criteria is in the description
+        model_card.acceptance_criteria = u''
+        collector.store_card(model_card)
+    return
+
+########################################################################
+#
+# Program operations and main
+#
+########################################################################
+
+
+def parse_argv():
+    '''Parse CLI arguments.
+
+    Return (options, args) tuple.
+    '''
+    optparser = optparse.OptionParser()
+    optparser.add_option('-d', '--database',
+        help='Path to database', dest='database', metavar='PATH')
+    optparser.add_option('-c', '--config',
+        help='Path to configuration file', dest='config', metavar='PATH')
+    optparser.add_option('--debug', action='store_true', default=False,
+        help='Enable debugging output in parsing routines')
+    optparser.add_option('--mail', action='store_true', default=False,
+        help='Send data errors as email (according to "error_config" map in '
+             'config file) instead of printing to stderr', dest='mail')
+    optparser.add_option('--jira-username', default='robot',
+        help='JIRA username for authentication', dest='jira_username')
+    optparser.add_option('--jira-password', default='cuf4moh2',
+        help='JIRA password for authentication', dest='jira_password')
+
+    (opts, args) = optparser.parse_args()
+
+    if not opts.database:
+        optparser.error('No database given')
+    if not opts.config:
+        optparser.error('No config given')
+
+    return opts, args
+
+
+def setup_logging(debug):
+    ch = logging.StreamHandler()
+    ch.setLevel(logging.INFO)
+    formatter = logging.Formatter("%(message)s")
+    ch.setFormatter(formatter)
+    logger.setLevel(logging.INFO)
+    logger.addHandler(ch)
+    if debug:
+        ch.setLevel(logging.DEBUG)
+        formatter = logging.Formatter(
+            "%(asctime)s - %(name)s - %(levelname)s - %(message)s")
+        ch.setFormatter(formatter)
+        logger.setLevel(logging.DEBUG)
+
+
+def update_todays_blueprint_daily_count_per_state(collector):
+    """Clear today's entries and create them again to reflect the current
+    state of blueprints."""
+    collector.clear_todays_blueprint_daily_count_per_state()
+    collector.store_roadmap_bp_count_per_state()
+
+
+def main():
+    report_tools.fix_stdouterr()
+
+    (opts, args) = parse_argv()
+    opts.jira_api_url = JIRA_API_URL
+
+    setup_logging(opts.debug)
+
+    global error_collector
+    if opts.mail:
+        error_collector = ErrorCollector()
+    else:
+        error_collector = StderrErrorCollector()
+
+    cfg = report_tools.load_config(opts.config)
+
+    lock_path = opts.database + ".lock"
+    lock_f = open(lock_path, "wb")
+    if report_tools.lock_file(lock_f) is None:
+        print "Another instance is already running"
+        sys.exit(0)
+
+    store = get_store(opts.database)
+    collector = CollectorStore(store, '', error_collector)
+
+    collector.clear_lanes()
+    collector.clear_cards()
+
+    jira_import(collector, cfg, opts)
+
+    update_todays_blueprint_daily_count_per_state(collector)
+
+    store.commit()
+
+    os.unlink(lock_path)
+
+
+if __name__ == '__main__':
+    main()

=== modified file 'collect_roadmap'
--- collect_roadmap 2012-01-30 12:08:06 +0000
+++ collect_roadmap 2012-07-10 16:26:24 +0000
@@ -2,10 +2,12 @@
 #
 # Pull items from the Linaro roadmap in Kanbantool and put them into a database.
 
-import urllib2, re, sys, optparse, smtplib, pwd, os
+import logging
+import optparse
+import os
 import simplejson
-import logging
-from email.mime.text import MIMEText
+import sys
+import urllib2
 
 from lpworkitems.collect_roadmap import (
     CollectorStore,

=== modified file 'jira.py'
--- jira.py 2012-05-14 11:39:09 +0000
+++ jira.py 2012-07-10 16:26:24 +0000
@@ -6,6 +6,7 @@
 import simplejson
 import urllib2
 
+
 def do_request(opts, relpathname, **kwargs):
     request = urllib2.Request('%s/%s' % (opts.jira_api_url, relpathname))
     if opts.jira_username and opts.jira_password:
@@ -20,6 +21,7 @@
         response_data = urllib2.urlopen(request, request_data)
     return simplejson.load(response_data)
 
+
 def main():
     parser = optparse.OptionParser(usage="%prog")
     parser.add_option("--jira-api-url", dest="jira_api_url",
@@ -31,7 +33,8 @@
     opts, args = parser.parse_args()
 
     # simple search
-    print do_request(opts, 'search', maxResults=1, jql='project = CARD', fields=['summary', 'status'])
+    print do_request(opts, 'search', maxResults=1, jql='project = CARD',
+                     fields=['summary', 'status'])
 
     # information about a project
     #print do_request(opts, 'project/CARD')
@@ -50,4 +53,3 @@
 
 if __name__ == "__main__":
     main()
-

=== modified file 'utils.py'
--- utils.py 2011-09-12 14:41:07 +0000
+++ utils.py 2012-07-10 16:26:24 +0000
@@ -4,4 +4,3 @@
     if isinstance(attr, unicode):
         return attr
     return attr.decode("utf-8")
-
Hello Loïc,
thanks for working on this.
On Tue, Jun 26, 2012 at 5:14 PM, Loïc Minier <email address hidden> wrote:
>
> Probably fixing the latter requires some DB changes; I'm not sure what rules we operate on for these though, e.g. how does one deploy the schema update?
Regarding this point, I have to collect info too: our knowledge base does not say anything about this situation, and I will have to ask Danilo and maybe IS too.
From a quick PEP8 run, I get these two errors:
collect_jira:5:15: E401 multiple imports on one line
all-projects:77:80: E501 line too long (94 characters)
Apart from those, everything looks good. If needed, fix them during the merge.
Thanks.