Merge lp:~josharenson/qa-dashboard/mir_performance into lp:qa-dashboard

Proposed by Josh Arenson
Status: Rejected
Rejected by: Joe Talbott
Proposed branch: lp:~josharenson/qa-dashboard/mir_performance
Merge into: lp:qa-dashboard
Diff against target: 574 lines (+514/-0)
10 files modified
mir_performance/dashboard.py (+48/-0)
mir_performance/management/commands/README (+15/-0)
mir_performance/management/commands/_PerformanceBenchmarkFactory.py (+94/-0)
mir_performance/management/commands/jenkins_pull_mir_performance.py (+122/-0)
mir_performance/models.py (+33/-0)
mir_performance/templates/mir_performance/mako_results.html (+82/-0)
mir_performance/templates/mir_performance/mir_performance_layout.html (+13/-0)
mir_performance/tests.py (+16/-0)
mir_performance/urls.py (+21/-0)
mir_performance/views.py (+70/-0)
To merge this branch: bzr merge lp:~josharenson/qa-dashboard/mir_performance
Reviewer Review Type Date Requested Status
PS Jenkins bot continuous-integration Approve
Canonical CI Engineering Pending
Review via email: mp+224212@code.launchpad.net

Commit message

Adds mir_performance application plugin

Description of the change

Adds mir_performance application plugin

765. By Josh Arenson

Removed benchmarks.js

766. By Josh Arenson

Removed stray views.py

767. By Josh Arenson

Removed pyc files

768. By Josh Arenson

Fixed a forgotten import statement

769. By Josh Arenson

Added __get_master_job method, as the one in jenkinsapi doesn't work quite right

770. By Josh Arenson

 * Added correct copyright notices
 * Disabled debug mode

771. By Josh Arenson

Fix style violations

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

FAILED: Continuous integration, rev:767
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/329/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/329/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

FAILED: Continuous integration, rev:771
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/332/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/332/rebuild

review: Needs Fixing (continuous-integration)
Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

PASSED: Continuous integration, rev:771
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/336/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/336/rebuild

review: Approve (continuous-integration)
Revision history for this message
Chris Johnston (cjohnston) wrote :

If you add the following line to your local_settings.py, you should be able to use the local .js files and remove the inline JS.

STATIC_URL = '/static/'
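
For context, the suggestion amounts to a small developer override. A minimal sketch, assuming qa-dashboard imports local_settings.py when present (the DEBUG line is an assumption, not something stated in this thread):

# local_settings.py -- hypothetical developer override, not part of this branch
DEBUG = True
STATIC_URL = '/static/'

With DEBUG enabled, the Django development server serves anything under each app's static/ tree (e.g. mir_performance/static/mir_performance/js/) at /static/, so the layout template could load bundled copies of d3 and nv.d3 instead of the CDN URLs.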

Revision history for this message
Josh Arenson (josharenson) wrote :

> If you add the following line to your local_settings.py, you should be able to
> use the local .js files and remove the inline JS.
>
> STATIC_URL = '/static/'

Is there any way you can make an exception? I am unable to test the site when I do this, as it introduces external dependencies that do not work in my test environment.

772. By Josh Arenson

Merged upstream

773. By Josh Arenson

Scale the graphs to show less detail, making sharp trends easier to see.

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

PASSED: Continuous integration, rev:773
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/358/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/358/rebuild

review: Approve (continuous-integration)
774. By Josh Arenson

Fix homepage link for Mir Performance

Revision history for this message
PS Jenkins bot (ps-jenkins) wrote :

PASSED: Continuous integration, rev:774
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/375/
Executed test runs:

Click here to trigger a rebuild:
http://s-jenkins.ubuntu-ci:8080/job/dashboard-ci/375/rebuild

review: Approve (continuous-integration)

Unmerged revisions

774. By Josh Arenson

Fix homepage link for Mir Performance

773. By Josh Arenson

Scale the graphs to show less detail, making sharp trends easier to see.

772. By Josh Arenson

Merged upstream

771. By Josh Arenson

Fix style violations

770. By Josh Arenson

 * Added correct copyright notices
 * Disabled debug mode

769. By Josh Arenson

Added __get_master_job method, as the one in jenkinsapi doesn't work quite right

768. By Josh Arenson

Fixed a forgotten import statement

767. By Josh Arenson

Removed pyc files

766. By Josh Arenson

Removed stray views.py

765. By Josh Arenson

Removed benchmarks.js

Preview Diff

=== added directory 'mir_performance'
=== added file 'mir_performance/__init__.py'
=== added file 'mir_performance/dashboard.py'
--- mir_performance/dashboard.py 1970-01-01 00:00:00 +0000
+++ mir_performance/dashboard.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,48 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from django.core.management import call_command
+from django.conf.urls import include, patterns, url
+from django.core.urlresolvers import reverse_lazy
+
+from common.plugin_helper import (
+    Extension,
+)
+
+
+class MirPerformanceExtension(Extension):
+    URL_PATTERNS = patterns('',
+        url(r'^mir_performance/', include('mir_performance.urls')),
+    )
+
+    LINKS = [
+        {
+            'name': 'mir_performance',
+            'url': reverse_lazy('glmark2'),
+            'label': 'Mir Performance',
+            'index': 500,
+        },
+    ]
+
+    def pull_data(self):
+        call_command('jenkins_pull_mir_performance', '--scrape')
+
+    @classmethod
+    def name(cls):
+        return cls.__name__
+
+
+extension = MirPerformanceExtension()
+extension.pull_data()

=== added directory 'mir_performance/management'
=== added file 'mir_performance/management/__init__.py'
=== added directory 'mir_performance/management/commands'
=== added file 'mir_performance/management/commands/README'
--- mir_performance/management/commands/README 1970-01-01 00:00:00 +0000
+++ mir_performance/management/commands/README 2014-10-21 13:09:04 +0000
@@ -0,0 +1,15 @@
+The flow of this is as follows:
+
+// --scrape example given, as --last-successful is a subset of that
+
+1. Connect to jenkins <build_host>
+
+2. Determine the last successful build number
+
+3. Get metadata for all artifacts from the last {n} builds (default is 20)
+
+4. If the artifact filename indicates that it is a benchmark, as determined by <benchmark_artifact_suffix>, the artifact is saved for processing.
+
+5. PerformanceBenchmarkFactory returns an object that is capable of processing that specific artifact's data. Extending this class is how new benchmarks can be added.
+
+6. Whatever the factory returns is parsed, and if the result is unique, a Benchmark object (Django model) is saved to the database.

=== added file 'mir_performance/management/commands/_PerformanceBenchmarkFactory.py'
--- mir_performance/management/commands/_PerformanceBenchmarkFactory.py 1970-01-01 00:00:00 +0000
+++ mir_performance/management/commands/_PerformanceBenchmarkFactory.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,94 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import json
+import re
+
+from mir_performance.models import (
+    GLMARK2,
+    Benchmark
+)
+
+"""
+A boilerplate, polymorphic factory class to create benchmark objects
+based on the benchmark type.
+"""
+class PerformanceBenchmarkFactory:
+    factories = {}
+
+    def addFactory(id, performanceBenchmarkFactory):
+        PerformanceBenchmarkFactory.factories[id] = performanceBenchmarkFactory
+    addFactory = staticmethod(addFactory)
+
+    def createPerformanceBenchmark(job_name, build_num, artifact, parent_benchmark_name):
+        pbf = PerformanceBenchmarkFactory
+        if parent_benchmark_name not in pbf.factories:
+            pbf.factories[parent_benchmark_name] = eval(parent_benchmark_name + '.Factory()')
+        return pbf.factories[parent_benchmark_name].create(job_name, build_num, artifact, parent_benchmark_name)
+    createPerformanceBenchmark = staticmethod(createPerformanceBenchmark)
+
+class PerformanceBenchmark(object):
+    def __init__(self, job_name, build_num, artifact, parent_benchmark_name):
+        self.job_name = job_name
+        self.build_num = build_num
+        self.artifact = artifact
+        self.parent_benchmark_name = parent_benchmark_name
+
+    def insert_artifact_data(self):
+        raise NotImplementedError
+
+class glmark2_fullscreen_default(PerformanceBenchmark):
+
+    def insert_artifact_data(self):
+        benchmarks = self.__get_benchmark_objects(self.artifact)
+        for benchmark in benchmarks:
+            # Do we already have a benchmark with this name and build number?
+            done_this_build = Benchmark.objects.filter(jenkins_build=self.build_num).filter(name=benchmark.name)
+            if not done_this_build:
+                benchmark.save()
+            else:
+                # Already had GLMark2 data for this build.
+                continue
+
+    def __get_benchmark_objects(self, artifact):
+        result = []
+        re_benchmark_name = re.compile("(?P<name>^.*?)\:\s+FPS\:\s+(?P<score>.*)\sFrameTime")
+        re_overall_score = "(?P<name>glmark2\s+Score)\:\s+(?P<score>\d+)"
+
+        stream = artifact.get_data()
+        for line in stream.splitlines():
+            b = None
+            match = re.search(re_benchmark_name, line)
+            if match:
+                b = Benchmark()  # our django model
+                b.parent = GLMARK2
+                b.name = match.group('name')
+                b.score = match.group('score')
+                b.jenkins_build = int(self.build_num)
+            else:
+                match = re.search(re_overall_score, line)
+                if match:
+                    b = Benchmark()
+                    b.parent = GLMARK2
+                    b.name = match.group('name')
+                    b.score = match.group('score')
+                    b.jenkins_build = int(self.build_num)
+            if b is not None:
+                result.append(b)
+        return result
+
+    class Factory:
+        def create(self, job_name, build_num, artifact, parent_benchmark_name):
+            return glmark2_fullscreen_default(job_name, build_num, artifact, parent_benchmark_name)
+

=== added file 'mir_performance/management/commands/__init__.py'
=== added file 'mir_performance/management/commands/jenkins_pull_mir_performance.py'
--- mir_performance/management/commands/jenkins_pull_mir_performance.py 1970-01-01 00:00:00 +0000
+++ mir_performance/management/commands/jenkins_pull_mir_performance.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,122 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import jenkinsapi
+from jenkinsapi.build import Build
+from jenkinsapi.jenkins import Jenkins
+from jenkinsapi.jenkins import Job
+from optparse import make_option
+import traceback
+
+from django.core.management.base import BaseCommand, CommandError
+
+import _PerformanceBenchmarkFactory as Pbf
+from mir_performance.models import Benchmark
+
+DBG = False
+
+build_host = "https://jenkins.qa.ubuntu.com"
+
+benchmark_artifact_suffix = ".results"
+job_names = ['mir-mediumtests-runner-mako']
+
+
+class Command(BaseCommand):
+    help = "Pull mir performance information from jenkins into the database"
+    option_list = BaseCommand.option_list + (
+        make_option('--scrape',
+                    action='store_true',
+                    dest='scrape',
+                    default=False,
+                    help='Scrape the results from the last 20 builds.'
+                    ),
+        make_option('--last-successful',
+                    action='store_true',
+                    dest='last_successful',
+                    default=False,
+                    help='Get results from the last successful build'),
+    )
+
+    def handle(self, *args, **options):
+        try:
+            self.J = Jenkins(build_host)
+            if options['scrape']:
+                self.scrape()
+            if options['last_successful']:
+                self.get_last_successful()
+        except:
+            if DBG:
+                traceback.print_exc()
+
+    def get_last_successful(self):
+        self.scrape(1)
+
+    def scrape(self, build_nums_back=20):
+        for job_name in job_names:
+            most_recent = self.__last_successful_job_num(job_name)
+
+            # Get a single build's artifacts
+            for build_num in range(most_recent - build_nums_back,
+                                   most_recent):
+                artifacts = self.__get_benchmark_artifacts(build_num, job_name)
+                # Create appropriate object based on artifact filename
+                for artifact in artifacts:
+                    self.__insert_artifact(job_name, build_num, artifact)
+
+
+    """
+    Returns: list of Artifact objects whose filenames
+    end with <benchmark_artifact_suffix>
+    """
+    def __get_benchmark_artifacts(self, build_num, job_name):
+        results = []
+        job = self.J.get_job(job_name)
+        build = job.get_build(build_num)
+        artifacts = build.get_artifacts()
+        for artifact in artifacts:
+            if artifact.filename.endswith(benchmark_artifact_suffix):
+                print "Found benchmark artifact %s" % artifact.filename
+                results.append(artifact)
+        return results
+
+
+    def __insert_artifact(self, job_name, build_num, artifact):
+        try:
+            # By convention, the parent benchmark name is the part of the
+            # filename before the suffix
+            fname = artifact.filename
+            parent_benchmark = fname[:fname.index(benchmark_artifact_suffix)]
+            benchmark = Pbf.PerformanceBenchmarkFactory.createPerformanceBenchmark(
+                job_name, build_num, artifact, parent_benchmark)
+            benchmark.insert_artifact_data()
+        except:
+            raise
+
+    def __get_master_job(self, child_job_name):
+        print "entry child: %s" % child_job_name
+        if child_job_name in self.J:
+            build_num = self.J[child_job_name].get_last_good_buildnumber()
+            parent = self.J[child_job_name].get_build(build_num).get_upstream_job()
+            if parent is not None:
+                return self.__get_master_job(str(parent))
+            else:
+                return child_job_name
+
+
+    def __last_successful_job_num(self, job_name="mir-mediumtests-runner-mako"):
+        last_good = self.J[job_name].get_last_good_buildnumber()
+        print "Branch %s" % self.J[job_name].get_build(last_good).get_revision_branch()
+        return last_good

=== added file 'mir_performance/models.py'
--- mir_performance/models.py 1970-01-01 00:00:00 +0000
+++ mir_performance/models.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,33 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+
+from django.db import models
+
+from common.models import DashboardBaseModel
+
+UNKNOWN = 0
+GLMARK2 = 1
+PARENT_BENCHMARK_CHOICES = (
+    (UNKNOWN, 'Unknown'),
+    (GLMARK2, 'GLMark2'),
+)
+
+class Benchmark(DashboardBaseModel):
+    parent = models.SmallIntegerField(choices=PARENT_BENCHMARK_CHOICES)
+    name = models.CharField(max_length=1024)
+    score = models.IntegerField()
+    jenkins_build = models.IntegerField()

=== added directory 'mir_performance/static'
=== added directory 'mir_performance/static/mir_performance'
=== added directory 'mir_performance/static/mir_performance/js'
=== added directory 'mir_performance/templates'
=== added directory 'mir_performance/templates/mir_performance'
=== added file 'mir_performance/templates/mir_performance/mako_results.html'
--- mir_performance/templates/mir_performance/mako_results.html 1970-01-01 00:00:00 +0000
+++ mir_performance/templates/mir_performance/mako_results.html 2014-10-21 13:09:04 +0000
@@ -0,0 +1,82 @@
+{% extends "mir_performance/mir_performance_layout.html" %}
+{% load staticfiles %}
+
+{% block page_name %}Mir performance{% endblock %}
+
+{% block content %}
+<style>
+  {% for name, atts in benchmark_list.items %}
+  {# since the benchmark name is garbage, we use a simple index #}
+  #chart-{{ forloop.counter }} svg {
+    height: 600px;
+  }
+  {% endfor %}
+</style>
+
+<script>
+
+  function draw_benchmark_score_over_time_chart(name, atts, chart_id){
+    nv.addGraph(function() {
+      var chart = nv.models.lineChart()
+        .margin({left: 100})
+        .showYAxis(true)
+        .showXAxis(true)
+        ;
+
+      chart.xAxis
+        .axisLabel("Build Number")
+
+      chart.yAxis
+        .axisLabel('Score')
+        .scale().domain([0,100])
+
+      var myData = get_chart_data(atts);
+
+      d3.select('#chart-' + chart_id + ' svg')
+        .datum(myData)
+        .call(chart)
+        ;
+
+      d3.select('#chart-' + chart_id + ' div')
+        .text(name)
+        ;
+
+
+      nv.utils.windowResize(function() { chart.update() });
+      return chart;
+    });
+  }
+
+  function get_chart_data(atts) {
+    snb = [];
+    for(var i = 0; i < atts.length; i++){
+      snb.push({x: atts[i].build_num, y: atts[i].score})
+    }
+
+    //Line chart data should be sent as an array of series objects.
+    return [
+      {
+        values: snb,
+        key: 'Score Trend',
+        color: '#dd4814'
+      },
+    ];
+  }
+
+  {% autoescape off %}
+  {% for name, atts in benchmark_list.items %}
+  draw_benchmark_score_over_time_chart("{{ name|escape }}", {{ atts }}, {{ forloop.counter }})
+  {% endfor %}
+  {% endautoescape %}
+</script>

+<div>
+  {% for name, atts in benchmark_list.items %}
+  <div id="chart-{{ forloop.counter }}">
+    <div style='text-align:center;padding-top:10px'></div>
+    <svg></svg>
+  </div>
+  {% endfor %}
+</div>
+
+{% endblock %}

=== added file 'mir_performance/templates/mir_performance/mir_performance_layout.html'
--- mir_performance/templates/mir_performance/mir_performance_layout.html 1970-01-01 00:00:00 +0000
+++ mir_performance/templates/mir_performance/mir_performance_layout.html 2014-10-21 13:09:04 +0000
@@ -0,0 +1,13 @@
+{% extends "layout.html" %}
+
+{% load staticfiles %}
+
+{% block extra_headers %}
+  <link href='http://6df403e3d98e2ac67ac2-180150c581869d2c4c18db9c9e3179c4.r40.cf1.rackcdn.com/nv.d3.css' rel='stylesheet' type='text/css' />
+  <script type='text/javascript' src='//cdnjs.cloudflare.com/ajax/libs/d3/3.1.6/d3.min.js'></script>
+  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/nvd3/1.0.0-beta/nv.d3.min.js"></script>
+{% endblock %}
+
+{% block sub_nav_links %}
+<li {% ifequal url.1 'glmark2' %}class="active"{% endifequal %}><a class="sub-nav-item" href='{% url "glmark2" %}'>GLMark2</a></li>
+{% endblock %}

=== added file 'mir_performance/tests.py'
--- mir_performance/tests.py 1970-01-01 00:00:00 +0000
+++ mir_performance/tests.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,16 @@
+"""
+This file demonstrates writing tests using the unittest module. These will pass
+when you run "manage.py test".
+
+Replace this with more appropriate tests for your application.
+"""
+
+from django.test import TestCase
+
+
+class SimpleTest(TestCase):
+    def test_basic_addition(self):
+        """
+        Tests that 1 + 1 always equals 2.
+        """
+        self.assertEqual(1 + 1, 2)

=== added file 'mir_performance/urls.py'
--- mir_performance/urls.py 1970-01-01 00:00:00 +0000
+++ mir_performance/urls.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,21 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+from django.conf.urls import patterns, url
+
+urlpatterns = patterns(
+    'mir_performance.views',
+    url('^glmark2/$', 'glmark2', name='glmark2'),
+)

=== added file 'mir_performance/views.py'
--- mir_performance/views.py 1970-01-01 00:00:00 +0000
+++ mir_performance/views.py 2014-10-21 13:09:04 +0000
@@ -0,0 +1,70 @@
+# QA Dashboard
+# Copyright 2014 Canonical Ltd.
+
+# This program is free software: you can redistribute it and/or modify it
+# under the terms of the GNU Affero General Public License version 3, as
+# published by the Free Software Foundation.
+
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranties of
+# MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR
+# PURPOSE. See the GNU Affero General Public License for more details.
+
+# You should have received a copy of the GNU Affero General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+import json
+import os
+
+from django.http import HttpResponse
+from django.template import RequestContext
+from django.shortcuts import render_to_response
+
+from common.views import index
+
+from mir_performance.models import (
+    GLMARK2,
+    Benchmark
+)
+
+def sample_data(request):
+    return
+
+def glmark2(request):
+    # Rather than just limit the results returned by the query, we have to
+    # do some work.
+    # TODO: rearchitect this to be better
+    data = {}
+    data = organize_benchmarks(GLMARK2)
+
+    page = {'benchmark_list': data}
+    return render_to_response('mir_performance/mako_results.html', page,
+                              RequestContext(request))
+
+
+"""
+TRICKY: This method returns a dict keyed by child benchmark name, where each
+value is a list of up to <data_points> {score, build_num} entries. This lets
+a template for each parent benchmark display data from all child benchmarks
+looking back over <data_points> builds.
+"""
+def organize_benchmarks(benchmark_parent_id, data_points=20):
+    # First, get the most recent build.
+    children_list = {}
+    top_res = Benchmark.objects.filter(
+        parent=benchmark_parent_id).order_by('-jenkins_build')[0]
+    newest = top_res.jenkins_build
+    oldest = newest - data_points
+
+    for build_num in range(oldest, newest):
+        children = Benchmark.objects.filter(
+            jenkins_build=build_num).filter(parent=benchmark_parent_id)
+
+        for child in children:
+            if child.name not in children_list:
+                children_list[child.name] = []
+            t = {}
+            t['score'] = child.score
+            t['build_num'] = child.jenkins_build
+            children_list[child.name].append(t)
+    return children_list


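For orientation, the pull flow described in the README above condenses to roughly the following. This is a sketch assembled from the code in this diff, using only names and jenkinsapi calls that appear in it; it is not a drop-in replacement for the management command:

from jenkinsapi.jenkins import Jenkins

import _PerformanceBenchmarkFactory as Pbf

build_host = "https://jenkins.qa.ubuntu.com"
benchmark_artifact_suffix = ".results"

def scrape(job_name, build_nums_back=20):
    # 1-2. Connect to jenkins and find the last successful build number.
    jenkins = Jenkins(build_host)
    job = jenkins.get_job(job_name)
    most_recent = job.get_last_good_buildnumber()
    # 3. Walk the last <build_nums_back> builds.
    for build_num in range(most_recent - build_nums_back, most_recent):
        build = job.get_build(build_num)
        for artifact in build.get_artifacts():
            # 4. Keep only benchmark artifacts.
            if not artifact.filename.endswith(benchmark_artifact_suffix):
                continue
            # 5. The filename before the suffix names the parent benchmark;
            #    the factory picks a parser class for it.
            parent = artifact.filename[:-len(benchmark_artifact_suffix)]
            benchmark = Pbf.PerformanceBenchmarkFactory.createPerformanceBenchmark(
                job_name, build_num, artifact, parent)
            # 6. Parse the artifact and save unique Benchmark rows.
            benchmark.insert_artifact_data()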