Merge lp:~jcsackett/charmworld/better-stats-window into lp:~juju-jitsu/charmworld/trunk
Status: Merged
Approved by: j.c.sackett
Approved revision: 328
Merged at revision: 394
Proposed branch: lp:~jcsackett/charmworld/better-stats-window
Merge into: lp:~juju-jitsu/charmworld/trunk
Diff against target: 657 lines (+423/-77), 7 files modified
  charmworld/jobs/review.py (+103/-60)
  charmworld/jobs/tests/test_review.py (+110/-0)
  charmworld/static/jspark.js (+107/-0)
  charmworld/templates/review.pt (+15/-2)
  charmworld/utils.py (+16/-13)
  charmworld/views/tests/test_tools.py (+34/-1)
  charmworld/views/tools.py (+38/-1)
To merge this branch: bzr merge lp:~jcsackett/charmworld/better-stats-window
Related bugs: none listed
Reviewer: Aaron Bentley (community), review status: Approve
Review via email: mp+186044@code.launchpad.net
Commit message
Gets and displays review latency statistics in the review queue
Description of the change
This branch updates the review job and the charm-review view to calculate and
display latency information for review items.
charmworld/
-------
The review job has been altered in several ways.
* The global log has been removed; the log created in `main` is now passed
into the other functions.
* A new function, `get_tasks_`…, retrieves items from
Launchpad. Rather than pulling only those items to be displayed in the queue,
it pulls all items so that latency can be properly calculated.
* The loops that create entries out of tasks and proposals first calculate the
latency of any item created in the last six months, as well as of any item that
is still open even if it is older than six months. They then determine whether
an item is still in an "open" status and, if so, add it to the review queue.
These new rules supersede the "not in progress" rule used for juju doc
proposals.
* Average, min, and max latency across all items is stored in a new collection,
`review_`…; this is used to display the current
latency as well as the trend in the charm-review view.
`get_tasks_`… pulls every
task. This is a tradeoff so that we don't have to make two fetches for each
sort of item: one for anything created in the last six months, and another for
anything that is still open. In theory our greatest delay is the connection
time between charmworld and Launchpad; I'm open to arguments refuting that and
to altering this to do both fetches and build a set of all unique items found
across the pair of requests from LP.
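The filtering and stats rules described above can be sketched roughly as
follows. This is a hypothetical illustration, not the actual code in
charmworld/jobs/review.py; the status names, field names, and the
six-month window size are assumptions.

```python
from datetime import datetime, timedelta

# Assumed set of "open" Launchpad statuses; the real job's list may differ.
OPEN_STATUSES = {'New', 'Needs Review', 'In Progress'}
WINDOW = timedelta(days=180)  # "last six months"

def latency_and_queue(items, now=None):
    """Compute latency for recent-or-open items; queue only open ones.

    Latency is measured from date_created to date_closed (or to now for
    items that are still open). Items older than the window are included
    only if they are still open.
    """
    now = now or datetime.utcnow()
    latencies = []
    queue = []
    for item in items:
        is_open = item['status'] in OPEN_STATUSES
        recent = now - item['date_created'] <= WINDOW
        if recent or is_open:
            end = item.get('date_closed') or now
            latencies.append((end - item['date_created']).total_seconds())
        if is_open:
            queue.append(item)
    stats = {}
    if latencies:
        stats = {
            'average': sum(latencies) / len(latencies),
            'min': min(latencies),
            'max': max(latencies),
        }
    return stats, queue
```

A stats dict like the one returned here is what would be written to the new
collection on each job run, giving the view a time series to chart.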
charmworld/
-------
The charm-review view is still pretty simple, though it grabs a bit more data.
* The latency stats for the last six months are pulled from the new collection
created in the review job. The most recent item is used to display the current
min, max, and average; the full data set is used to display sparklines showing
the six-month history.
* A new function, `get_sparklines`, uses the stat data to create data strings
that can be used by a new sparkline library.
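The data-string step might look something like the sketch below. This is a
hypothetical single-series helper, not the actual `get_sparklines` in
charmworld/views/tools.py; the comma-separated format and the rounding are
assumptions about what the bundled jspark.js expects.

```python
def get_sparkline(values):
    """Join numeric samples into a comma-separated data string for a
    sparkline span. Returns None for fewer than two points, mirroring
    jspark's behavior of dropping spans with insufficient data."""
    if len(values) < 2:
        return None
    return ','.join('%d' % round(v) for v in values)
```

The view would emit one such string per stat (min, max, average) into the
template, where jspark turns the spans into inline graphics.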
charmworld/
-------
The template has been updated to show the stats and the sparklines, and to
correct an odd oversight in showing the age of items.
* A new JavaScript file has been added; this is from Resig's set of jQuery
tools. While D3 provides richer visualizations, it is very heavyweight and
likely unnecessary unless/until we're doing a lot more reporting on this site.
* The review queue loads the stats at the top of the listing, along with the
currently displayed item count. The sparkline spans are automatically turned
into graphics by jspark. If there is no data or insufficient data (i.e. only
one data point), jspark removes the span.
* Age is now taken from date_created, not date_modified, for the entry. As an
example of why this matters, a 22-month-old item was being shown as only a few
weeks old.
Other
-----
* test_tools.py has been moved into the test directory.
* The portion of `pretty_timedelta` that actually creates a human-readable
display has been broken out as `prettify_`…;
`pretty_`… is used for the
mean latencies.
* Tests have been added.
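The broken-out formatting helper might look like the sketch below. The name
`prettify_delta`, the unit thresholds, and the output strings are all
hypothetical; the real helper in charmworld/utils.py may differ.

```python
from datetime import timedelta

def prettify_delta(delta):
    """Render a timedelta as a short human-readable string, e.g. '2 days'.

    This is the formatting half of a pretty_timedelta-style function:
    it takes an already-computed timedelta (such as a mean latency)
    rather than a pair of datetimes.
    """
    seconds = int(delta.total_seconds())
    for unit, size in (('day', 86400), ('hour', 3600), ('minute', 60)):
        if seconds >= size:
            count = seconds // size
            return '%d %s%s' % (count, unit, 's' if count != 1 else '')
    return 'less than a minute'
```

Splitting the formatting out this way lets the view prettify a mean latency
directly, since an averaged latency is a duration with no single start and
end date to compare.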
Thanks. This looks good to me. In theory, I think we do want to do two fetches (open and <6 months old), so that the size of the response is proportional to the size of the data we want to evaluate. But I'm happy to wait until we need that.