Merge lp:~jml/udd/symbol-import into lp:udd

Proposed by Jonathan Lange
Status: Merged
Merged at revision: 519
Proposed branch: lp:~jml/udd/symbol-import
Merge into: lp:udd
Diff against target: 407 lines (+123/-79)
7 files modified
add_import_jobs.py (+0/-2)
list_packages.py (+26/-33)
mass_import.py (+10/-4)
pkgimport.conf (+9/-0)
udd/icommon.py (+68/-38)
udd/iconfig.py (+6/-0)
udd/lpapi.py (+4/-2)
To merge this branch: bzr merge lp:~jml/udd/symbol-import
Reviewer: James Westby (Approve)
Review via email: mp+77312@code.launchpad.net

Description of the change

I'm trying to do something across all Ubuntu packages. To do this, I need mostly what udd does, but I need to exclude Debian packages and sometimes I need to retry all failed jobs (particularly when debugging). This branch adds these options.

add_import_jobs.py
 * Call create_import_jobs, passing on the new include_debian config option
list_packages.py
 * Respect the new include_debian config option
mass_import.py
 * Use the new method that re-adds all failed jobs if the appropriate configuration option is set
pkgimport.conf:
 * Add new configuration options, with documentation (see the usage sketch after this list)
udd/icommon.py:
 * Import datetime and timedelta directly
 * Use timedelta(days=N) rather than manually multiplying
 * Refactor create_import_jobs to re-use some other code
 * Add a wrapper for the new include_debian config option
 * Make create_import_jobs take an option controlling whether to include Debian packages
 * Add a new method to the controller that will re-add all failed jobs
udd/lpapi.py:
 * General pyflakes cleanups
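
For illustration, a minimal sketch of how the new options are read, matching the pkgimport.conf, udd/iconfig.py and mass_import.py hunks in the preview diff (note that after r525 the include_debian boolean became the pkgimport.distributions list; the values shown are examples):

    from udd import iconfig

    # Example pkgimport.conf entries (see the preview diff):
    #   pkgimport.distributions = ubuntu,debian
    #   pkgimport.retry_all_failed_jobs = false

    config = iconfig.Iconfig()
    # get_list() splits a comma-separated option and strips whitespace;
    # it returns [] when the option is unset.
    distributions = config.get_list('pkgimport.distributions')
    # Anything other than the exact string 'true' counts as false.
    retry_all = config.get('pkgimport.retry_all_failed_jobs') == 'true'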

Revision history for this message
James Westby (james-w) wrote :

Hi,

Thanks for the branch; there are some nice cleanups in here to sweeten
the deal.

56 + if include_debian:
57 + for s in debian.series:
58 + collection = lpapi.lp_call(lpapi.call_with_limited_size,
59 + d_archive.getPublishedSources,
60 + status="Pending",
61 + distro_series=s, created_since_date=last_update)
62 + for publication in lpapi.iterate_collection(collection):
63 + packages = {}
64 + if publication.source_package_name in packages:
65 + continue
66 + packages[publication.source_package_name] = 0
67 + db.update_packages(packages)

Would this be better as a list of distribution names, rather than a boolean?

Likely someone is going to want to use this code with derived distros
at some point, and that change would seem to be more in line with that.

I'm not saying that you have to support derived distros in every way though :-)

I'm interested in why you changed the import in one file from "from datetime import datetime"
to "import datetime" and changed another file in the opposite direction?

I'm sure this comment is going to annoy you, but...

367 - request = operation.wadl_method.request

I think that line should be there, as that code is taken from lazr.restfulclient,
and someone in the future looking to see if we can drop the code will have
to wonder if removing that dead code was significant.

It even annoys me :-)

Thanks,

James

review: Approve
Revision history for this message
Jonathan Lange (jml) wrote :

On Wed, Oct 5, 2011 at 4:57 PM, James Westby <email address hidden> wrote:
> Review: Approve
>
> Hi,
>
> Thanks for the branch; there are some nice cleanups in here to sweeten
> the deal.
>
> 56      + if include_debian:
> 57      + for s in debian.series:
> 58      + collection = lpapi.lp_call(lpapi.call_with_limited_size,
> 59      + d_archive.getPublishedSources,
> 60      + status="Pending",
> 61      + distro_series=s, created_since_date=last_update)
> 62      + for publication in lpapi.iterate_collection(collection):
> 63      + packages = {}
> 64      + if publication.source_package_name in packages:
> 65      + continue
> 66      + packages[publication.source_package_name] = 0
> 67      + db.update_packages(packages)
>
> Would this be better as a list of distribution names, rather than a boolean?
>

Yeah, that's how I started off, and then I saw just how much 'debian'
and 'ubuntu' were spread throughout the code (e.g., the manually
maintained distro → series maps).

> Likely someone is going to want to use this code with derived distros
> at some point, and that change would seem to be more in line with that.
>
> I'm not saying that you have to support derived distros in every way though :-)
>

OK. I'll do what I can and see what turns up.

>
> I'm interested in why you changed the import in one file from "from datetime import datetime"
> to "import datetime" and changed another file in the opposite direction?
>

Umm, I didn't?

>
> I'm sure this comment is going to annoy you, but...
>
> 367     - request = operation.wadl_method.request
>
> I think that line should be there, as that code is taken from lazr.restfulclient,
> and someone in the future looking to see if we can drop the code will have
> to wonder if removing that dead code was significant.
>
> It even annoys me :-)
>

OK. I didn't see a comment there saying it was taken from
lazr.restfulclient. I'll add a comment along those lines when I
restore the removed line.

Thanks!
jml

Revision history for this message
James Westby (james-w) wrote :

On Wed, 05 Oct 2011 16:07:25 -0000, Jonathan Lange <email address hidden> wrote:
> Yeah, that's how I started off, and then I saw just how much 'debian'
> and 'ubuntu' were spread throughout the code (e.g., the manually
> maintained distro → series maps).

Ah, that's true. They are a pain, but I don't know of a way to get rid
of them.

Perhaps they aren't needed for anything not in ('debian', 'ubuntu')
though.

> > Likely someone is going to want to use this code with derived distros
> > at some point, and that change would seem to be more in line with that.
> >
> > I'm not saying that you have to support derived distros in every way though :-)
> >
>
> OK. I'll do what I can and see what turns up.

We can always deprecate the debian option later, so if it's too much
work we'll just do that.

> >
> > I'm interested in why you changed the import in one file from "from datetime import datetime"
> > to "import datetime" and changed another file in the opposite direction?
> >
>
> Umm, I didn't?

Sorry, I have no idea where I saw that, my mistake entirely.

> > I'm sure this comment is going to annoy you, but...
> >
> > 367     - request = operation.wadl_method.request
> >
> > I think that line should be there, as that code is taken from lazr.restfulclient,
> > and someone in the future looking to see if we can drop the code will have
> > to wonder if removing that dead code was significant.
> >
> > It even annoys me :-)
> >
>
> OK. I didn't see a comment there saying it was taken from
> lazr.restfulclient. I'll add a comment along those lines when I
> restore the removed line.

Good idea, thanks.

James

Revision history for this message
Vincent Ladeuil (vila) wrote :

Thanks for working on this (the added comments are *especially* appreciated).

> sometimes I need to retry all failed jobs (particularly when debugging)

and

239 +
240 + def add_failed_jobs(self):
241 + """Add all failed jobs to the queue."""
242 + c = self.conn.cursor()
243 + try:
244 + rows = c.execute('select * from %s' % self.FAILURES_TABLE).fetchall()
245 + for row in rows:
246 + c.execute(self.FAILURES_TABLE_DELETE, (row[0],))
247 + self._add_job(c, row[0], self.JOB_TYPE_PRIORITY)
248 + self.conn.commit()
249 + except:
250 + self.conn.rollback()
251 + raise
252 + finally:
253 + c.close()
254 +

makes me think that you want to add an option to requeue_package.py instead.
This script can be run without stopping mass_import, so if we can share the
code between them it will be even nicer (i.e. I think adding the capability
is a good thing even if the package importer won't use it for now).

You're duplicating part of retry/_retry here without fully implementing what
they do though (recording the failures in OLD_FAILURES_TABLE is helpful in
the long term).

196 + for spph in iter_published_sources(lp, lp.distributions['ubuntu'],

Damn, I'm quite good at deciphering acronyms but spph resists me... sp ==
status pending?

Can I have more hints?

;)

Revision history for this message
Vincent Ladeuil (vila) wrote :

Oh! I've had this page open since yesterday evening and didn't see James's review or your reply.

Revision history for this message
James Westby (james-w) wrote :

On Thu, 06 Oct 2011 11:36:46 -0000, Vincent Ladeuil <email address hidden> wrote:
> 196 + for spph in iter_published_sources(lp, lp.distributions['ubuntu'],
>
> Damn, I'm quite good at deciphering acronyms but spph resists me... sp ==
> status pending?
>
> Can I have more hints?

"SourcePackagePublishingHistory" a Launchpad term for a record of
publishing a source package in an archive.

Thanks,

James
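
For illustration, a minimal standalone launchpadlib sketch of listing SPPH records, following the getPublishedSources calls in the preview diff below (the anonymous login and the example date are assumptions of this sketch; udd itself obtains its Launchpad object via lpapi.get_lp() and wraps the calls in retrying helpers):

    from launchpadlib.launchpad import Launchpad

    # Anonymous access is enough to read publishing history.
    lp = Launchpad.login_anonymously('spph-example', 'production')
    archive = lp.distributions['ubuntu'].main_archive
    # Each entry is a SourcePackagePublishingHistory ("SPPH") record:
    # one row per publication of a source package in the archive.
    for spph in archive.getPublishedSources(status='Published',
                                            created_since_date='2011-10-01'):
        print spph.source_package_name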

Revision history for this message
Jonathan Lange (jml) wrote :

On Thu, Oct 6, 2011 at 12:36 PM, Vincent Ladeuil <email address hidden> wrote:
> Thanks for working on this (added comments *especially* appreciated).
>
>> sometimes I need to retry all failed jobs (particularly when debugging)
>
> and
>
> 239     +
> 240     + def add_failed_jobs(self):
> 241     + """Add all failed jobs to the queue."""
> 242     + c = self.conn.cursor()
> 243     + try:
> 244     + rows = c.execute('select * from %s' % self.FAILURES_TABLE).fetchall()
> 245     + for row in rows:
> 246     + c.execute(self.FAILURES_TABLE_DELETE, (row[0],))
> 247     + self._add_job(c, row[0], self.JOB_TYPE_PRIORITY)
> 248     + self.conn.commit()
> 249     + except:
> 250     + self.conn.rollback()
> 251     + raise
> 252     + finally:
> 253     + c.close()
> 254     +
>
> makes me think that you want to add an option to requeue_package.py instead.
> This script can be run without stopping mass_import, so if we can share the
> code between them it will be even nicer (i.e. I think adding the capability
> is a good thing even if the package importer won't use it for now).
>
> You're duplicating part of retry/_retry here without fully implementing what
> they do though (recording the failures in OLD_FAILURES_TABLE is helpful in
> the long term).
>

To be honest, I don't see it. retry() is a big function of unclear
purpose. What seems to be clear is that it's meant to work on one
package at a time. Same with requeue_package.py.

add_failed_jobs is duplicating add_jobs_for_interrupted. If my new one
is missing functionality, then so is the existing one.

jml

lp:~jml/udd/symbol-import updated
524. By Jonathan Lange

Restore the duplication.

525. By Jonathan Lange

Change include_debian to distribution list.

Revision history for this message
Jonathan Lange (jml) wrote :

r525 changes include_debian to a distribution list. Worth reviewing before landing.
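
For reference, the core of the r525 change as it lands in create_import_jobs (udd/icommon.py, preview diff below): the hard-coded ubuntu/debian pair becomes a loop over the configured distribution names.

    # Lightly trimmed excerpt from create_import_jobs in the diff.
    publications = set()
    config = iconfig.Iconfig()
    for distro in config.get_list('pkgimport.distributions'):
        for spph in iter_published_sources(
                lp, lp.distributions[distro], status='Published',
                since=last_known_published.isoformat()):
            publications.add(spph)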

Revision history for this message
James Westby (james-w) wrote :

38 # XXX: This used to do "Published" for Ubuntu and "Pending" for
39 # Debian (which was really Ubuntu). Now we do both.

Hi,

I think we can just do Published now:

  https://bugs.launchpad.net/udd/+bug/865483

but if you don't want to make the two changes at once then that's ok.

49 # XXX: Don't know why there's a special check here for
50 # main packages in the current series. Used to be for
51 # Ubuntu only, now we check for all distros.

That's so that we record "important" packages in the db, so
we can start with those. Changing to all distros is fine.

That change looks good though. I'll let vila respond on the
comments that he made.

Thanks,

James
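
For illustration, the check James describes, as it appears in the list_packages.py hunk of the preview diff: packages in the 'main' component of a distribution's current series are recorded with priority 1, everything else with 0.

    # Excerpt from list_packages.py (see the preview diff): mark
    # "important" packages so the importer can start with those.
    if (s.name == current_series_name and
            publication.component_name == "main"):
        packages[publication.source_package_name] = 1
    else:
        packages[publication.source_package_name] = 0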

Revision history for this message
Vincent Ladeuil (vila) wrote :

> add_failed_jobs is duplicating add_jobs_for_interrupted. If my new one
> is missing functionality, then so is the existing one.

No, retry indeed works for a single package, but it takes care of keeping track of previous failures, which is the part that is missing. add_jobs_for_interrupted is catching up for... interrupted jobs, which did not fail but need to be spawned again.

Revision history for this message
Jonathan Lange (jml) wrote :

On Thu, Oct 6, 2011 at 3:51 PM, Vincent Ladeuil <email address hidden> wrote:
>> add_failed_jobs is duplicating add_jobs_for_interrupted. If my new one
>> is missing functionality, then so is the existing one.
>
> No, retry indeed works for a single package, but it takes care of keeping track of previous failures, which is the part that is missing. add_jobs_for_interrupted is catching up for... interrupted jobs, which did not fail but need to be spawned again.

They are recorded as failures using a sentinel. I fail to see why they
are different, or, to be honest, what would be required to make retry
usable for my needs.

jml

Revision history for this message
Vincent Ladeuil (vila) wrote :

> On Thu, Oct 6, 2011 at 3:51 PM, Vincent Ladeuil <email address hidden> wrote:
> >> add_failed_jobs is duplicating add_jobs_for_interrupted. If my new one
> >> is missing functionality, then so is the existing one.
> >
> > No, retry indeed works for a single package, but it takes care of keeping
> track of previous failures, which is the part that is missing.
> add_jobs_for_interrupted is catching up for... interrupted jobs, which did not
> fail but need to be spawned again.
>
> They are recorded as failures using a sentinel. I fail to see why they
> are different, or, to be honest, what would be required to make retry
> usable for my needs.

The sentinel is a hack, not a true failure tracked like the others.

I won't try to argue and delay the landing of this feature. I think you haven't used requeue_package and may not need it in your current workflow (if your imports never fail spuriously or permanently, for example).

The package importer will never retry all failures blindly either.

As long as the use cases don't overlap I'm fine with the small code duplication, especially since the current design makes it hard to resolve (one package/several packages, --all-of-type magic, etc.).

Keep in mind, though, that the mass_import script relies on transient failures created with requeue_package --auto to avoid failure spikes during lp downtimes.

Revision history for this message
Jonathan Lange (jml) wrote :

On Thu, Oct 6, 2011 at 6:03 PM, Vincent Ladeuil <email address hidden> wrote:
>> On Thu, Oct 6, 2011 at 3:51 PM, Vincent Ladeuil <email address hidden> wrote:
>> >> add_failed_jobs is duplicating add_jobs_for_interrupted. If my new one
>> >> is missing functionality, then so is the existing one.
>> >
>> > No, retry indeed works for a single package, but it takes care of keeping
>> track of previous failures, which is the part that is missing.
>> add_jobs_for_interrupted is catching up for... interrupted jobs, which did not
>> fail but need to be spawned again.
>>
>> They are recorded as failures using a sentinel. I fail to see why they
>> are different, or, to be honest, what would be required to make retry
>> usable for my needs.
>
> The sentinel is a hack, not a true failure tracked like the others.
>
> I won't try to argue and delay the landing of this feature. I think you haven't used requeue_package and may not need it in your current workflow (if your imports never fail spuriously or permanently, for example).
>
> The package importer will never retry all failures blindly either.
>
> As long as the use cases don't overlap I'm fine with the small code duplication, especially since the current design makes it hard to resolve (one package/several packages, --all-of-type magic, etc.).
>
> Keep in mind, though, that the mass_import script relies on transient failures created with requeue_package --auto to avoid failure spikes during lp downtimes.

OK, thanks. I think I need more confidence in this codebase before
attempting a change like that.

Would one of you please merge my branch for me?

Thanks,
jml

lp:~jml/udd/symbol-import updated
526. By Jonathan Lange

Don't bother with Pending status.
Explain why we are treating main/current specially (sort of)

Preview Diff

1=== modified file 'add_import_jobs.py'
2--- add_import_jobs.py 2011-07-16 09:41:49 +0000
3+++ add_import_jobs.py 2011-10-06 19:09:54 +0000
4@@ -6,7 +6,6 @@
5 from udd import lpapi
6 from udd import paths
7
8-
9 lock = icommon.lock_add_import_jobs()
10 if lock is None:
11 print "Another instance of add_import_jobs is already running."
12@@ -14,7 +13,6 @@
13 try:
14 lp = lpapi.get_lp()
15 status_db = icommon.StatusDatabase(paths.sqlite_file)
16-
17 icommon.create_import_jobs(lp, status_db)
18 finally:
19 lock.close()
20
21=== modified file 'list_packages.py'
22--- list_packages.py 2011-07-19 09:36:38 +0000
23+++ list_packages.py 2011-10-06 19:09:54 +0000
24@@ -8,6 +8,7 @@
25 from launchpadlib.errors import HTTPError
26
27 from udd import icommon
28+from udd import iconfig
29 from udd import lpapi
30 from udd import paths
31
32@@ -18,46 +19,38 @@
33 sys.exit(0)
34 try:
35 lp = lpapi.get_lp()
36- debian = lp.distributions['ubuntu']
37- ubuntu = lp.distributions['ubuntu']
38- d_archive = debian.main_archive
39- u_archive = ubuntu.main_archive
40- current_u_series_name = ubuntu.current_series.name
41-
42
43 db = icommon.PackageDatabase(paths.sqlite_package_file)
44
45 last_update = db.last_update()
46 if last_update is not None:
47- last_update = last_update - datetime.timedelta(0, 86400)
48+ last_update = last_update - datetime.timedelta(days=1)
49 last_update = last_update.isoformat()
50
51- for s in ubuntu.series:
52- collection = lpapi.lp_call(lpapi.call_with_limited_size,
53- u_archive.getPublishedSources,
54- status="Published",
55- distro_series=s, created_since_date=last_update)
56- for publication in lpapi.iterate_collection(collection):
57- packages = {}
58- if publication.source_package_name in packages:
59- continue
60- if (s.name == current_u_series_name and
61- publication.component_name == "main"):
62- packages[publication.source_package_name] = 1
63- else:
64- packages[publication.source_package_name] = 0
65- db.update_packages(packages)
66- for s in debian.series:
67- collection = lpapi.lp_call(lpapi.call_with_limited_size,
68- d_archive.getPublishedSources,
69- status="Pending",
70- distro_series=s, created_since_date=last_update)
71- for publication in lpapi.iterate_collection(collection):
72- packages = {}
73- if publication.source_package_name in packages:
74- continue
75- packages[publication.source_package_name] = 0
76- db.update_packages(packages)
77+ config = iconfig.Iconfig()
78+ distributions = config.get_list('pkgimport.distributions')
79+
80+ for distro_name in distributions:
81+ distro = lp.distributions[distro_name]
82+ archive = distro.main_archive
83+ current_series_name = distro.current_series.name
84+ for s in distro.series:
85+ collection = lpapi.lp_call(lpapi.call_with_limited_size,
86+ archive.getPublishedSources,
87+ status='Published',
88+ distro_series=s, created_since_date=last_update)
89+ for publication in lpapi.iterate_collection(collection):
90+ packages = {}
91+ if publication.source_package_name in packages:
92+ continue
93+ # Packages in the current series in the 'main' component are
94+ # considered to be more important than the rest.
95+ if (s.name == current_series_name and
96+ publication.component_name == "main"):
97+ packages[publication.source_package_name] = 1
98+ else:
99+ packages[publication.source_package_name] = 0
100+ db.update_packages(packages)
101
102 except HTTPError, e:
103 sys.stderr.write(e.content + "\n")
104
105=== modified file 'mass_import.py'
106--- mass_import.py 2011-09-29 11:41:05 +0000
107+++ mass_import.py 2011-10-06 19:09:54 +0000
108@@ -388,14 +388,18 @@
109 # Start with an 8-way Driver
110 self.driver = ImportDriver(8)
111
112- def run(self):
113+ def run(self, retry_all=False):
114 lock = icommon.lock_main()
115 if lock is None:
116 logger.info("Another main process is running")
117 raise Stop
118 try:
119- # First, re-queue previously interrupted jobs.
120- self.status_db.add_jobs_for_interrupted()
121+ if retry_all:
122+ # We have been configured to retry _all_ failed jobs.
123+ self.status_db.add_failed_jobs()
124+ else:
125+ # Re-queue previously interrupted jobs.
126+ self.status_db.add_jobs_for_interrupted()
127 self.driver.start()
128 self.driver.started.wait()
129 while True:
130@@ -470,9 +474,11 @@
131 signal.signal(signal.SIGTERM, die)
132
133
134+config = iconfig.Iconfig()
135+retry_all = config.get('pkgimport.retry_all_failed_jobs') == 'true'
136 # Run the controller
137 try:
138- controller.run()
139+ controller.run(retry_all)
140 except HTTPError, e:
141 logger.critical(e.content)
142 raise
143
144=== modified file 'pkgimport.conf'
145--- pkgimport.conf 2011-09-27 11:41:40 +0000
146+++ pkgimport.conf 2011-10-06 19:09:54 +0000
147@@ -20,3 +20,12 @@
148 # Script that imports the package. Must be executable and take a single
149 # argument: the name of the package to import.
150 pkgimport.import_script = /srv/package-import.canonical.com/new/scripts/import_package.py
151+
152+# Which distributions to operate on. Normally "ubuntu, debian". No default.
153+# Affects behaviour of add_import_jobs.py and list_packages.py.
154+pkgimport.distributions = ubuntu,debian
155+
156+# Whether or not to retry all failed jobs. 'true' means to retry all failed
157+# jobs. Anything else means retry only jobs that were currently running when
158+# the supervisor died. Defaults to 'false' if unset.
159+pkgimport.retry_all_failed_jobs = false
160
161=== modified file 'udd/icommon.py'
162--- udd/icommon.py 2011-09-19 15:48:31 +0000
163+++ udd/icommon.py 2011-10-06 19:09:54 +0000
164@@ -1,4 +1,4 @@
165-import datetime
166+from datetime import datetime, timedelta
167 import errno
168 import fcntl
169 import operator
170@@ -308,7 +308,7 @@
171 checked = set()
172 newest_published = last_known_published
173 for publication in publications:
174- published_time = datetime.datetime.strptime(
175+ published_time = datetime.strptime(
176 str(publication.date_created)[:19], "%Y-%m-%d %H:%M:%S")
177 name = publication.source_package_name
178 checked.add(name)
179@@ -317,34 +317,49 @@
180 status_db.add_import_jobs(checked, newest_published)
181
182
183+def get_last_import_time(status_db, earliest_date=None):
184+ """Get the last import time from status db.
185+
186+ Adjusts for safety, so that we don't miss any packages on the new
187+ synchronization. If ``earliest_date`` provided, then never return a date
188+ earlier than that, even if we've never synced before.
189+ """
190+ last_known_published = status_db.last_import_time()
191+ if last_known_published:
192+ # Safety buffer so we don't accidentally miss other syncs.
193+ return last_known_published - timedelta(minutes=10)
194+ return earliest_date
195+
196+
197 def create_import_jobs(lp, status_db):
198- last_known_published = status_db.last_import_time()
199- last_published_call = None
200- if last_known_published is not None:
201- last_known_published = (last_known_published
202- - datetime.timedelta(0, 600))
203- else:
204- # When starting from scratch, limit the query to the last ten days or
205- # lp timeout (https://bugs.launchpad.net/udd/+bug/717109)
206- last_known_published = (datetime.datetime.now()
207- - datetime.timedelta(0, 10 * 24 * 3600))
208- last_published_call = last_known_published.isoformat()
209+ # When starting from scratch, limit the query to the last ten days or
210+ # lp timeout (https://bugs.launchpad.net/udd/+bug/717109)
211+ last_known_published = get_last_import_time(
212+ status_db, earliest_date=datetime.now() - timedelta(days=10))
213 # We want all the new Published records since the last run
214 publications = set()
215+ config = iconfig.Iconfig()
216+ distributions = config.get_list('pkgimport.distributions')
217+ for distro in distributions:
218+ for spph in iter_published_sources(lp, lp.distributions[distro],
219+ status='Published',
220+ since=last_known_published.isoformat()):
221+ publications.add(spph)
222+ import_from_publications(status_db, publications, last_known_published)
223+
224+
225+def iter_published_sources(lp, distribution, status='Published', since=None,
226+ series=None):
227+ filters = {'status': status}
228+ if since:
229+ filters['created_since_date'] = since
230+ if series:
231+ series = distribution.getSeries(name_or_version=series)
232+ filters['distro_series'] = series
233+ archive = distribution.main_archive
234 collection = lpapi.lp_call(
235- lpapi.call_with_limited_size,
236- lp.distributions['ubuntu'].main_archive.getPublishedSources,
237- status="Published",
238- created_since_date=last_published_call)
239- for p in lpapi.iterate_collection(collection):
240- publications.add(p)
241- collection = lpapi.lp_call(lpapi.call_with_limited_size,
242- lp.distributions['debian'].main_archive.getPublishedSources,
243- status="Pending",
244- created_since_date=last_published_call)
245- for p in lpapi.iterate_collection(collection):
246- publications.add(p)
247- import_from_publications(status_db, publications, last_known_published)
248+ lpapi.call_with_limited_size, archive.getPublishedSources, **filters)
249+ return lpapi.iterate_collection(collection)
250
251
252 class PackageStatus(object):
253@@ -450,6 +465,22 @@
254 c.execute('insert into %s values (?, ?, ?, 0)'
255 % self.FAILURES_TABLE, (package, reason, now))
256
257+
258+ def add_failed_jobs(self):
259+ """Add all failed jobs to the queue."""
260+ c = self.conn.cursor()
261+ try:
262+ rows = c.execute('select * from %s' % self.FAILURES_TABLE).fetchall()
263+ for row in rows:
264+ c.execute(self.FAILURES_TABLE_DELETE, (row[0],))
265+ self._add_job(c, row[0], self.JOB_TYPE_PRIORITY)
266+ self.conn.commit()
267+ except:
268+ self.conn.rollback()
269+ raise
270+ finally:
271+ c.close()
272+
273 def add_jobs_for_interrupted(self):
274 c = self.conn.cursor()
275 try:
276@@ -470,7 +501,7 @@
277 # Search the active jobs for the given package ordered by job type
278 # (priority, new, retry)
279 rows = c.execute(self.JOBS_TABLE_FIND, (package, 1)).fetchall()
280- now = datetime.datetime.utcnow()
281+ now = datetime.utcnow()
282 first = now
283 job_id = 0
284 for row in rows:
285@@ -505,7 +536,7 @@
286 def finish_job(self, package, job_id, success, output):
287 c = self.conn.cursor()
288 try:
289- now = datetime.datetime.utcnow()
290+ now = datetime.utcnow()
291 if job_id > 0:
292 row = c.execute(self.JOBS_TABLE_SELECT_JOB,
293 (job_id,)).fetchone()
294@@ -530,7 +561,7 @@
295
296 def _add_job(self, c, package, job_type):
297 c.execute(self.JOBS_TABLE_INSERT, (None, package, 1, job_type,
298- datetime.datetime.utcnow(), None, None))
299+ datetime.utcnow(), None, None))
300
301 def add_jobs(self, packages, job_type):
302 c = self.conn.cursor()
303@@ -739,8 +770,8 @@
304 return False
305 info.auto_retry = True
306 info.auto_retry_time = (
307- info.timestamp + datetime.timedelta(0, self.AUTO_RETRY_SECONDS))
308- if info.auto_retry_time > datetime.datetime.utcnow():
309+ info.timestamp + timedelta(seconds=self.AUTO_RETRY_SECONDS))
310+ if info.auto_retry_time > datetime.utcnow():
311 # Too early to retry
312 return False
313 row = c.execute(self.OLD_FAILURES_TABLE_FIND, (info.name,)).fetchone()
314@@ -930,7 +961,7 @@
315 else:
316 c.execute(self.PACKAGE_TABLE_INSERT, (package, main))
317 c.execute('insert into %s values (?)' % self.UPDATE_TABLE,
318- (datetime.datetime.utcnow(),))
319+ (datetime.utcnow(),))
320 self.conn.commit()
321 except:
322 self.conn.rollback()
323@@ -1206,9 +1237,8 @@
324 def _set_counts(self, table, queued, failed):
325 c = self.conn.cursor()
326 try:
327- rows = c.execute("insert into %s values (?, ?, ?)"
328- % table,
329- (datetime.datetime.utcnow(), queued, failed))
330+ c.execute("insert into %s values (?, ?, ?)" % table,
331+ (datetime.utcnow(), queued, failed))
332 self.conn.commit()
333 except:
334 self.conn.rollback()
335@@ -1345,7 +1375,7 @@
336
337 def side_branch_location(self, distro, release, suite):
338 distro, suite = self._translate_suite(distro, suite)
339- now = datetime.datetime.utcnow()
340+ now = datetime.utcnow()
341 now_str = now.strftime("%Y%m%d%H%M")
342 return ("bzr+ssh://%s/~%s/%s/%s/%s/%s-%s" % (
343 self._get_host_name(),
344@@ -1497,7 +1527,7 @@
345
346 def side_branch_location(self, distro, release, suite):
347 distro, suite = self._translate_suite(distro, suite)
348- now = datetime.datetime.utcnow()
349+ now = datetime.utcnow()
350 now_str = now.strftime("%Y%m%d%H%M")
351 return os.path.join(self.branches_dir, "%s-%s-%s"
352 % (distro, suite, now_str))
353@@ -1544,7 +1574,7 @@
354
355 def side_branch_location(self, distro, release, suite):
356 distro, suite = self._translate_suite(distro, suite)
357- now = datetime.datetime.utcnow()
358+ now = datetime.utcnow()
359 now_str = now.strftime("%Y%m%d%H%M")
360 return os.path.join(paths.updates_dir, self.package,
361 suite + "-%s" % now_str)
362
363=== modified file 'udd/iconfig.py'
364--- udd/iconfig.py 2011-06-15 00:13:56 +0000
365+++ udd/iconfig.py 2011-10-06 19:09:54 +0000
366@@ -62,6 +62,12 @@
367 break
368 return value
369
370+ def get_list(self, name):
371+ items = self.get(name)
372+ if not items:
373+ return []
374+ return [item.strip() for item in items.split(',')]
375+
376
377 class Iconfig(ConfigStack):
378 """An import packager configuration taking ``locations.conf`` into account.
379
380=== modified file 'udd/lpapi.py'
381--- udd/lpapi.py 2011-08-11 06:28:31 +0000
382+++ udd/lpapi.py 2011-10-06 19:09:54 +0000
383@@ -1,5 +1,4 @@
384 import cgi
385-import operator
386 import os
387 import random
388 import simplejson
389@@ -13,7 +12,7 @@
390
391 from launchpadlib.credentials import Credentials
392 from launchpadlib.errors import HTTPError
393-from launchpadlib.launchpad import Launchpad, EDGE_SERVICE_ROOT
394+from launchpadlib.launchpad import Launchpad
395
396 from udd import paths
397
398@@ -160,6 +159,9 @@
399
400 def call_with_limited_size(operation, size=20, *args, **kwargs):
401 """Invoke the method and process the result."""
402+ # XXX: Copied from lazr.restfulclient.resource.NamedOperation.__call__ and
403+ # changed to be a function that takes a NamedOperation, rather than a
404+ # method on it, also to set the ws.size in the call.
405 if len(args) > 0:
406 raise TypeError('Method must be called with keyword args.')
407 http_method = operation.wadl_method.name
