Merge lp:~jtv/launchpad/buildfail-recipebuilder into lp:~launchpad/launchpad/recife
Proposed by Jeroen T. Vermeulen on 2010-11-19
| Status: | Rejected |
|---|---|
| Rejected by: | Jeroen T. Vermeulen on 2010-11-19 |
| Proposed branch: | lp:~jtv/launchpad/buildfail-recipebuilder |
| Merge into: | lp:~launchpad/launchpad/recife |
| Diff against target: | 2870 lines (+1574/-314), 41 files modified |

Files modified:

- Makefile (+38/-29)
- cronscripts/publishing/cron.publish-copy-archives (+4/-21)
- lib/canonical/buildd/binarypackage.py (+0/-2)
- lib/canonical/buildd/buildrecipe (+2/-0)
- lib/canonical/launchpad/doc/db-policy.txt (+26/-0)
- lib/canonical/launchpad/doc/emailaddress.txt (+9/-7)
- lib/canonical/launchpad/interfaces/__init__.py (+0/-6)
- lib/canonical/launchpad/tests/readonly.py (+18/-1)
- lib/canonical/launchpad/webapp/adapter.py (+33/-13)
- lib/canonical/launchpad/webapp/dbpolicy.py (+5/-0)
- lib/lp/app/javascript/tests/test_lp_collapsibles.html (+6/-6)
- lib/lp/app/javascript/tests/test_lp_collapsibles.js (+17/-9)
- lib/lp/app/windmill/testing.py (+21/-0)
- lib/lp/app/windmill/tests/test_yuitests.py (+24/-0)
- lib/lp/archivepublisher/domination.py (+59/-47)
- lib/lp/bugs/model/bugtask.py (+167/-118)
- lib/lp/bugs/tests/test_bugtask_search.py (+67/-4)
- lib/lp/buildmaster/manager.py (+5/-3)
- lib/lp/buildmaster/model/builder.py (+39/-3)
- lib/lp/buildmaster/tests/test_builder.py (+54/-0)
- lib/lp/code/browser/branchlisting.py (+4/-2)
- lib/lp/code/browser/branchmergequeuelisting.py (+105/-0)
- lib/lp/code/browser/configure.zcml (+18/-0)
- lib/lp/code/browser/tests/test_branchmergequeuelisting.py (+227/-0)
- lib/lp/code/configure.zcml (+11/-0)
- lib/lp/code/interfaces/branchmergequeue.py (+14/-0)
- lib/lp/code/interfaces/branchmergequeuecollection.py (+64/-0)
- lib/lp/code/model/branchmergequeue.py (+4/-2)
- lib/lp/code/model/branchmergequeuecollection.py (+174/-0)
- lib/lp/code/model/recipebuilder.py (+3/-1)
- lib/lp/code/model/tests/test_branchmergequeuecollection.py (+201/-0)
- lib/lp/code/templates/branchmergequeue-listing.pt (+68/-0)
- lib/lp/code/templates/branchmergequeue-macros.pt (+20/-0)
- lib/lp/code/templates/person-codesummary.pt (+8/-1)
- lib/lp/registry/javascript/tests/test_milestone_table.html (+1/-1)
- lib/lp/services/apachelogparser/base.py (+13/-7)
- lib/lp/services/apachelogparser/tests/test_apachelogparser.py (+30/-0)
- lib/lp/services/mailman/doc/postings.txt (+0/-19)
- lib/lp/testing/factory.py (+3/-2)
- lib/lp/translations/scripts/tests/test_message_sharing_migration.py (+9/-9)
- lib/lp/translations/windmill/tests/test_languages.py (+3/-1)
| To merge this branch: | bzr merge lp:~jtv/launchpad/buildfail-recipebuilder |
| Related bugs: |
| Reviewer | Review Type | Date Requested | Status |
|---|---|---|---|
| Launchpad code reviewers | code | 2010-11-19 | Pending |
Commit Message
Fix spurious test failure in test_message_
Description of the Change
This fixes spurious failures in one test, test_messagesCa
Preview Diff
| 1 | === modified file 'Makefile' |
| 2 | --- Makefile 2010-11-02 01:34:05 +0000 |
| 3 | +++ Makefile 2010-11-19 10:27:10 +0000 |
| 4 | @@ -45,6 +45,8 @@ |
| 5 | bin/start_librarian bin/stxdocs bin/tags bin/test bin/tracereport \ |
| 6 | bin/twistd bin/update-download-cache bin/windmill |
| 7 | |
| 8 | +BUILDOUT_TEMPLATES = buildout-templates/_pythonpath.py.in |
| 9 | + |
| 10 | # DO NOT ALTER : this should just build by default |
| 11 | default: inplace |
| 12 | |
| 13 | @@ -55,10 +57,10 @@ |
| 14 | newsampledata: |
| 15 | $(MAKE) -C database/schema newsampledata |
| 16 | |
| 17 | -hosted_branches: $(PY) |
| 18 | +hosted_branches: buildout_bin |
| 19 | $(PY) ./utilities/make-dummy-hosted-branches |
| 20 | |
| 21 | -$(API_INDEX): $(BZR_VERSION_INFO) |
| 22 | +$(API_INDEX): $(BZR_VERSION_INFO) buildout_bin |
| 23 | mkdir -p $(APIDOC_DIR).tmp |
| 24 | LPCONFIG=$(LPCONFIG) $(PY) ./utilities/create-lp-wadl-and-apidoc.py --force "$(WADL_TEMPLATE)" |
| 25 | mv $(APIDOC_DIR).tmp $(APIDOC_DIR) |
| 26 | @@ -66,12 +68,12 @@ |
| 27 | apidoc: compile $(API_INDEX) |
| 28 | |
| 29 | # Run by PQM. |
| 30 | -check_merge: $(PY) |
| 31 | +check_merge: buildout_bin |
| 32 | [ `PYTHONPATH= bzr status -S database/schema/ | \ |
| 33 | grep -v "\(^P\|pending\|security.cfg\|Makefile\|unautovacuumable\|_pythonpath.py\)" | wc -l` -eq 0 ] |
| 34 | ${PY} lib/canonical/tests/test_no_conflict_marker.py |
| 35 | |
| 36 | -check_db_merge: $(PY) |
| 37 | +check_db_merge: buildout_bin |
| 38 | ${PY} lib/canonical/tests/test_no_conflict_marker.py |
| 39 | |
| 40 | check_config: build |
| 41 | @@ -109,16 +111,16 @@ |
| 42 | ${PY} -t ./test_on_merge.py $(VERBOSITY) $(TESTOPTS) \ |
| 43 | --layer=MailmanLayer |
| 44 | |
| 45 | -lint: ${PY} |
| 46 | +lint: buildout_bin |
| 47 | @bash ./bin/lint.sh |
| 48 | |
| 49 | -lint-verbose: ${PY} |
| 50 | +lint-verbose: buildout_bin |
| 51 | @bash ./bin/lint.sh -v |
| 52 | |
| 53 | -xxxreport: $(PY) |
| 54 | +xxxreport: buildout_bin |
| 55 | ${PY} -t ./utilities/xxxreport.py -f csv -o xxx-report.csv ./ |
| 56 | |
| 57 | -check-configs: $(PY) |
| 58 | +check-configs: buildout_bin |
| 59 | ${PY} utilities/check-configs.py |
| 60 | |
| 61 | pagetests: build |
| 62 | @@ -140,12 +142,14 @@ |
| 63 | ${SHHH} bin/sprite-util create-image |
| 64 | |
| 65 | jsbuild_lazr: bin/jsbuild |
| 66 | - # We absolutely do not want to include the lazr.testing module and its |
| 67 | - # jsTestDriver test harness modifications in the lazr.js and launchpad.js |
| 68 | - # roll-up files. They fiddle with built-in functions! See Bug 482340. |
| 69 | - ${SHHH} bin/jsbuild $(JSFLAGS) -b $(LAZR_BUILT_JS_ROOT) -x testing/ -c $(LAZR_BUILT_JS_ROOT)/yui |
| 70 | + # We absolutely do not want to include the lazr.testing module and |
| 71 | + # its jsTestDriver test harness modifications in the lazr.js and |
| 72 | + # launchpad.js roll-up files. They fiddle with built-in functions! |
| 73 | + # See Bug 482340. |
| 74 | + ${SHHH} bin/jsbuild $(JSFLAGS) -b $(LAZR_BUILT_JS_ROOT) -x testing/ \ |
| 75 | + -c $(LAZR_BUILT_JS_ROOT)/yui |
| 76 | |
| 77 | -jsbuild: jsbuild_lazr bin/jsbuild bin/jssize |
| 78 | +jsbuild: jsbuild_lazr bin/jsbuild bin/jssize buildout_bin |
| 79 | ${SHHH} bin/jsbuild \ |
| 80 | $(JSFLAGS) \ |
| 81 | -n launchpad \ |
| 82 | @@ -173,12 +177,12 @@ |
| 83 | @exit 1 |
| 84 | endif |
| 85 | |
| 86 | -buildonce_eggs: $(PY) |
| 87 | +buildonce_eggs: buildout_bin |
| 88 | find eggs -name '*.pyc' -exec rm {} \; |
| 89 | |
| 90 | # The download-cache dependency comes *before* eggs so that developers get the |
| 91 | -# warning before the eggs directory is made. The target for the eggs directory |
| 92 | -# is only there for deployment convenience. |
| 93 | +# warning before the eggs directory is made. The target for the eggs |
| 94 | +# directory is only there for deployment convenience. |
| 95 | # Note that the buildout version must be maintained here and in versions.cfg |
| 96 | # to make sure that the build does not go over the network. |
| 97 | bin/buildout: download-cache eggs |
| 98 | @@ -192,19 +196,22 @@ |
| 99 | # and the other bits might run into problems like bug 575037. This |
| 100 | # target runs buildout, and then removes everything created except for |
| 101 | # the eggs. |
| 102 | -build_eggs: $(BUILDOUT_BIN) clean_buildout |
| 103 | +build_eggs: buildout_bin clean_buildout |
| 104 | + |
| 105 | +$(BUILDOUT_BIN): buildout_bin |
| 106 | |
| 107 | # This builds bin/py and all the other bin files except bin/buildout. |
| 108 | # Remove the target before calling buildout to ensure that buildout |
| 109 | # updates the timestamp. |
| 110 | -$(BUILDOUT_BIN): bin/buildout versions.cfg $(BUILDOUT_CFG) setup.py |
| 111 | +buildout_bin: bin/buildout versions.cfg $(BUILDOUT_CFG) setup.py \ |
| 112 | + $(BUILDOUT_TEMPLATES) |
| 113 | $(RM) $@ |
| 114 | $(SHHH) PYTHONPATH= ./bin/buildout \ |
| 115 | configuration:instance_name=${LPCONFIG} -c $(BUILDOUT_CFG) |
| 116 | |
| 117 | # bin/compile_templates is responsible for building all chameleon templates, |
| 118 | # of which there is currently one, but of which many more are coming. |
| 119 | -compile: $(PY) $(BZR_VERSION_INFO) |
| 120 | +compile: buildout_bin $(BZR_VERSION_INFO) |
| 121 | mkdir -p /var/tmp/vostok-archive |
| 122 | ${SHHH} $(MAKE) -C sourcecode build PYTHON=${PYTHON} \ |
| 123 | LPCONFIG=${LPCONFIG} |
| 124 | @@ -405,7 +412,8 @@ |
| 125 | # We insert the absolute path to the branch-rewrite script |
| 126 | # into the Apache config as we copy the file into position. |
| 127 | sed -e 's,%BRANCH_REWRITE%,$(shell pwd)/scripts/branch-rewrite.py,' configs/development/local-launchpad-apache > /etc/apache2/sites-available/local-launchpad |
| 128 | - cp configs/development/local-vostok-apache /etc/apache2/sites-available/local-vostok |
| 129 | + cp configs/development/local-vostok-apache \ |
| 130 | + /etc/apache2/sites-available/local-vostok |
| 131 | touch /var/tmp/bazaar.launchpad.dev/rewrite.log |
| 132 | chown $(SUDO_UID):$(SUDO_GID) /var/tmp/bazaar.launchpad.dev/rewrite.log |
| 133 | |
| 134 | @@ -430,8 +438,9 @@ |
| 135 | |
| 136 | lp.sfood: |
| 137 | # Generate import dependency graph |
| 138 | - sfood -i -u -I lib/sqlobject -I lib/schoolbell -I lib/devscripts -I lib/contrib \ |
| 139 | - -I lib/canonical/not-used lib/canonical lib/lp 2>/dev/null | grep -v contrib/ \ |
| 140 | + sfood -i -u -I lib/sqlobject -I lib/schoolbell -I lib/devscripts \ |
| 141 | + -I lib/contrib -I lib/canonical/not-used lib/canonical \ |
| 142 | + lib/lp 2>/dev/null | grep -v contrib/ \ |
| 143 | | grep -v sqlobject | grep -v BeautifulSoup | grep -v psycopg \ |
| 144 | | grep -v schoolbell > lp.sfood.tmp |
| 145 | mv lp.sfood.tmp lp.sfood |
| 146 | @@ -463,10 +472,10 @@ |
| 147 | --docformat restructuredtext --verbose-about epytext-summary \ |
| 148 | $(PYDOCTOR_OPTIONS) |
| 149 | |
| 150 | -.PHONY: apidoc check tags TAGS zcmldocs realclean clean debug stop\ |
| 151 | - start run ftest_build ftest_inplace test_build test_inplace pagetests\ |
| 152 | - check check_merge \ |
| 153 | - schema default launchpad.pot check_merge_ui pull scan sync_branches\ |
| 154 | - reload-apache hosted_branches check_db_merge check_mailman check_config\ |
| 155 | - jsbuild jsbuild_lazr clean_js clean_buildout buildonce_eggs build_eggs\ |
| 156 | - sprite_css sprite_image css_combine compile check_schema pydoctor |
| 157 | +.PHONY: apidoc buildout_bin check tags TAGS zcmldocs realclean clean debug \ |
| 158 | + stop start run ftest_build ftest_inplace test_build test_inplace \ |
| 159 | + pagetests check check_merge schema default launchpad.pot \ |
| 160 | + check_merge_ui pull scan sync_branches reload-apache hosted_branches \ |
| 161 | + check_db_merge check_mailman check_config jsbuild jsbuild_lazr \ |
| 162 | + clean_js clean_buildout buildonce_eggs build_eggs sprite_css \ |
| 163 | + sprite_image css_combine compile check_schema pydoctor |
| 164 | |
| 165 | === modified file 'cronscripts/publishing/cron.publish-copy-archives' |
| 166 | --- cronscripts/publishing/cron.publish-copy-archives 2010-06-25 14:36:11 +0000 |
| 167 | +++ cronscripts/publishing/cron.publish-copy-archives 2010-11-19 10:27:10 +0000 |
| 168 | @@ -10,7 +10,6 @@ |
| 169 | exit 1 |
| 170 | fi |
| 171 | |
| 172 | -set -x |
| 173 | set -e |
| 174 | set -u |
| 175 | |
| 176 | @@ -20,24 +19,23 @@ |
| 177 | |
| 178 | |
| 179 | # Informational -- this *MUST* match the database. |
| 180 | -ARCHIVEROOT=/srv/launchpad.net/ubuntu-archive/ubuntu |
| 181 | +ARCHIVEROOT=/srv/launchpad.net/rebuild-test/ubuntu |
| 182 | DISTSROOT=$ARCHIVEROOT/dists |
| 183 | OVERRIDEROOT=$ARCHIVEROOT/../ubuntu-overrides |
| 184 | INDICES=$ARCHIVEROOT/indices |
| 185 | PRODUCTION_CONFIG=ftpmaster-publish |
| 186 | |
| 187 | if [ "$LPCONFIG" = "$PRODUCTION_CONFIG" ]; then |
| 188 | - GNUPGHOME=/srv/launchpad.net/ubuntu-archive/gnupg-home |
| 189 | + GNUPGHOME=/srv/launchpad.net/rebuild-test/gnupg-home |
| 190 | else |
| 191 | echo GPG keys will come from ~/.gnupg |
| 192 | # GNUPGHOME does not need to be set, keys can come from ~/.gnupg. |
| 193 | fi |
| 194 | |
| 195 | # Configuration options. |
| 196 | -LAUNCHPADROOT=/srv/launchpad.net/codelines/current |
| 197 | -LOCKFILE=/srv/launchpad.net/ubuntu-archive/cron.daily.lock |
| 198 | +LAUNCHPADROOT=/srv/launchpad.net/production/launchpad |
| 199 | +LOCKFILE=/srv/launchpad.net/rebuild-test/cron.daily.lock |
| 200 | DISTRONAME=ubuntu |
| 201 | -TRACEFILE=$ARCHIVEROOT/project/trace/$(hostname --fqdn) |
| 202 | |
| 203 | # Manipulate the environment. |
| 204 | export GNUPGHOME |
| 205 | @@ -64,20 +62,5 @@ |
| 206 | # Publish the packages to disk. |
| 207 | publish-distro.py -v -v --copy-archive -d $DISTRONAME |
| 208 | |
| 209 | -set +x |
| 210 | - |
| 211 | echo Removing uncompressed Packages and Sources files |
| 212 | find ${DISTSROOT} \( -name "Packages" -o -name "Sources" \) -exec rm "{}" \; |
| 213 | - |
| 214 | -# Copy in the indices. |
| 215 | -if [ "$LPCONFIG" = "$PRODUCTION_CONFIG" ]; then |
| 216 | - echo Copying the indices into place. |
| 217 | - rm -f $INDICES/override.* |
| 218 | - cp $OVERRIDEROOT/override.* $INDICES |
| 219 | -fi |
| 220 | - |
| 221 | -# Timestamp our trace file to track when the last archive publisher run took |
| 222 | -# place. |
| 223 | -if [ "$LPCONFIG" = "$PRODUCTION_CONFIG" ]; then |
| 224 | - date -u > "$TRACEFILE" |
| 225 | -fi |
| 226 | |
| 227 | === modified file 'lib/canonical/buildd/binarypackage.py' |
| 228 | --- lib/canonical/buildd/binarypackage.py 2010-07-13 09:13:41 +0000 |
| 229 | +++ lib/canonical/buildd/binarypackage.py 2010-11-19 10:27:10 +0000 |
| 230 | @@ -19,9 +19,7 @@ |
| 231 | class BuildLogRegexes: |
| 232 | """Build log regexes for performing actions based on regexes, and extracting dependencies for auto dep-waits""" |
| 233 | GIVENBACK = [ |
| 234 | - (" terminated by signal 4"), |
| 235 | ("^E: There are problems and -y was used without --force-yes"), |
| 236 | - ("^make.* Illegal instruction"), |
| 237 | ] |
| 238 | DEPFAIL = [ |
| 239 | ("(?P<pk>[\-+.\w]+)\(inst [^ ]+ ! >> wanted (?P<v>[\-.+\w:~]+)\)","\g<pk> (>> \g<v>)"), |
| 240 | |
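The hunk above removes the "terminated by signal 4" and "Illegal instruction" patterns from the GIVENBACK list, so SIGILL failures now fail the build rather than being handed back to the queue. As a rough sketch (not code from the branch), this is how such patterns behave when applied to build-log lines with `re.search`:

```python
import re

# Sketch only: the single give-back pattern left after this change.
# The signal-4 / "Illegal instruction" patterns were removed, so those
# log lines no longer trigger a give-back.
GIVENBACK = [
    "^E: There are problems and -y was used without --force-yes",
]

def should_give_back(log_line):
    # The buildd code matches these as regexes against build log lines.
    return any(re.search(pattern, log_line) for pattern in GIVENBACK)

print(should_give_back(
    "E: There are problems and -y was used without --force-yes"))  # True
print(should_give_back("make: *** Illegal instruction"))  # False now
```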
| 241 | === modified file 'lib/canonical/buildd/buildrecipe' |
| 242 | --- lib/canonical/buildd/buildrecipe 2010-09-30 20:22:15 +0000 |
| 243 | +++ lib/canonical/buildd/buildrecipe 2010-11-19 10:27:10 +0000 |
| 244 | @@ -11,6 +11,7 @@ |
| 245 | import os |
| 246 | import pwd |
| 247 | import re |
| 248 | +from resource import RLIMIT_AS, setrlimit |
| 249 | import socket |
| 250 | from subprocess import call, Popen, PIPE |
| 251 | import sys |
| 252 | @@ -206,6 +207,7 @@ |
| 253 | |
| 254 | |
| 255 | if __name__ == '__main__': |
| 256 | + setrlimit(RLIMIT_AS, (1000000000, -1)) |
| 257 | builder = RecipeBuilder(*sys.argv[1:]) |
| 258 | if builder.buildTree() != 0: |
| 259 | sys.exit(RETCODE_FAILURE_BUILD_TREE) |
| 260 | |
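The buildrecipe change above caps the build's address space at roughly 1 GB with `setrlimit(RLIMIT_AS, (1000000000, -1))` before the builder starts. A minimal sketch of the same idea (illustrative, not from the branch): apply the limit in a child process so the parent is unaffected. Note the branch passes a hard limit of -1 (`RLIM_INFINITY`); this sketch keeps the inherited hard limit, which is the safe choice for an unprivileged process.

```python
import resource
import subprocess
import sys

ONE_GB = 1000000000  # the same figure the buildrecipe change passes

def cap_address_space():
    # Runs in the child just before exec. Keep the inherited hard limit
    # and only lower the soft limit, so no privilege is needed.
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (ONE_GB, hard))

# The child reports the soft RLIMIT_AS it actually sees.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_AS)[0])"],
    stdout=subprocess.PIPE, preexec_fn=cap_address_space)
out, _ = proc.communicate()
print(out.decode().strip())
```

(`preexec_fn` is POSIX-only, which matches the builder environment.)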
| 261 | === modified file 'lib/canonical/launchpad/doc/db-policy.txt' |
| 262 | --- lib/canonical/launchpad/doc/db-policy.txt 2010-02-22 12:16:02 +0000 |
| 263 | +++ lib/canonical/launchpad/doc/db-policy.txt 2010-11-19 10:27:10 +0000 |
| 264 | @@ -124,3 +124,29 @@ |
| 265 | >>> IMasterObject(ro_janitor) is writable_janitor |
| 266 | True |
| 267 | |
| 268 | +Read-Only Mode |
| 269 | +-------------- |
| 270 | + |
| 271 | +During database outages, we run in read-only mode. In this mode, no |
| 272 | +matter what database policy is currently installed, explicit requests |
| 273 | +for a master store fail and the default store is always the slave. |
| 274 | + |
| 275 | + >>> from canonical.launchpad.tests.readonly import read_only_mode |
| 276 | + >>> from canonical.launchpad.webapp.dbpolicy import MasterDatabasePolicy |
| 277 | + >>> from contextlib import nested |
| 278 | + |
| 279 | + >>> with nested(read_only_mode(), MasterDatabasePolicy()): |
| 280 | + ... default_store = IStore(Person) |
| 281 | + ... IMasterStore.providedBy(default_store) |
| 282 | + False |
| 283 | + |
| 284 | + >>> with nested(read_only_mode(), MasterDatabasePolicy()): |
| 285 | + ... slave_store = ISlaveStore(Person) |
| 286 | + ... IMasterStore.providedBy(slave_store) |
| 287 | + False |
| 288 | + |
| 289 | + >>> with nested(read_only_mode(), MasterDatabasePolicy()): |
| 290 | + ... master_store = IMasterStore(Person) |
| 291 | + Traceback (most recent call last): |
| 292 | + ... |
| 293 | + ReadOnlyModeDisallowedStore: ('main', 'master') |
| 294 | |
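The doctest above describes the read-only-mode contract: the default store silently becomes the slave, and an explicit master request raises `ReadOnlyModeDisallowedStore`. A standalone sketch of that decision logic (hypothetical names, not the Launchpad implementation):

```python
class ReadOnlyModeDisallowedStore(Exception):
    """Raised when a master store is requested in read-only mode."""

DEFAULT_FLAVOR, SLAVE_FLAVOR, MASTER_FLAVOR = "default", "slave", "master"

def choose_flavor(requested, read_only):
    # Illustrative only: in read-only mode the default falls back to
    # the slave, and an explicit master request is refused outright.
    if not read_only:
        return requested
    if requested == DEFAULT_FLAVOR:
        return SLAVE_FLAVOR
    if requested == MASTER_FLAVOR:
        raise ReadOnlyModeDisallowedStore(("main", requested))
    return requested

assert choose_flavor(DEFAULT_FLAVOR, read_only=True) == SLAVE_FLAVOR
assert choose_flavor(MASTER_FLAVOR, read_only=False) == MASTER_FLAVOR
```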
| 295 | === renamed file 'lib/canonical/launchpad/doc/emailaddress.txt.disabled' => 'lib/canonical/launchpad/doc/emailaddress.txt' |
| 296 | --- lib/canonical/launchpad/doc/emailaddress.txt.disabled 2009-08-13 19:03:36 +0000 |
| 297 | +++ lib/canonical/launchpad/doc/emailaddress.txt 2010-11-19 10:27:10 +0000 |
| 298 | @@ -1,4 +1,5 @@ |
| 299 | -= Email Addresses = |
| 300 | +Email Addresses |
| 301 | +=============== |
| 302 | |
| 303 | In Launchpad we use email addresses to uniquely identify a person. This is why |
| 304 | email addresses must be unique. |
| 305 | @@ -22,7 +23,7 @@ |
| 306 | |
| 307 | Email addresses provide both IEmailAddress and IHasOwner. |
| 308 | |
| 309 | - >>> from canonical.launchpad.interfaces.launchpad import IHasOwner |
| 310 | + >>> from lp.registry.interfaces.role import IHasOwner |
| 311 | >>> verifyObject(IEmailAddress, email) |
| 312 | True |
| 313 | >>> verifyObject(IHasOwner, email) |
| 314 | @@ -66,11 +67,12 @@ |
| 315 | [u'celso.providelo@canonical.com', u'colin.watson@ubuntulinux.com', |
| 316 | u'daniel.silverstone@canonical.com', u'edgar@monteparadiso.hr', |
| 317 | u'foo.bar@canonical.com', u'jeff.waugh@ubuntulinux.com', |
| 318 | - u'limi@plone.org', u'mark@example.com', u'steve.alexander@ubuntulinux.com', |
| 319 | - u'support@ubuntu.com'] |
| 320 | - |
| 321 | - |
| 322 | -== Deleting email addresses == |
| 323 | + u'limi@plone.org', u'mark@example.com', |
| 324 | + u'steve.alexander@ubuntulinux.com', u'support@ubuntu.com'] |
| 325 | + |
| 326 | + |
| 327 | +Deleting email addresses |
| 328 | +------------------------ |
| 329 | |
| 330 | Email addresses may be deleted if they're not a person's preferred one |
| 331 | or the address of a team's mailing list. |
| 332 | |
| 333 | === modified file 'lib/canonical/launchpad/interfaces/__init__.py' |
| 334 | --- lib/canonical/launchpad/interfaces/__init__.py 2010-11-12 20:58:49 +0000 |
| 335 | +++ lib/canonical/launchpad/interfaces/__init__.py 2010-11-19 10:27:10 +0000 |
| 336 | @@ -11,9 +11,3 @@ |
| 337 | locations under the 'lp' package. See the `lp` docstring for more details. |
| 338 | """ |
| 339 | |
| 340 | -# XXX henninge 2010-11-12: This is needed by the file |
| 341 | -# +inbound-email-config.zcml which resides outside of the LP tree and can |
| 342 | -# only be safely updated at roll-out time. The import can be removed again |
| 343 | -# after the 10.11 roll-out. |
| 344 | -from canonical.launchpad.interfaces.mail import IMailHandler |
| 345 | - |
| 346 | |
| 347 | === modified file 'lib/canonical/launchpad/tests/readonly.py' |
| 348 | --- lib/canonical/launchpad/tests/readonly.py 2010-08-20 20:31:18 +0000 |
| 349 | +++ lib/canonical/launchpad/tests/readonly.py 2010-11-19 10:27:10 +0000 |
| 350 | @@ -7,15 +7,20 @@ |
| 351 | __metaclass__ = type |
| 352 | __all__ = [ |
| 353 | 'touch_read_only_file', |
| 354 | + 'read_only_mode', |
| 355 | 'remove_read_only_file', |
| 356 | ] |
| 357 | |
| 358 | +from contextlib import contextmanager |
| 359 | import os |
| 360 | |
| 361 | +from lazr.restful.utils import get_current_browser_request |
| 362 | + |
| 363 | from canonical.launchpad.readonly import ( |
| 364 | is_read_only, |
| 365 | read_only_file_exists, |
| 366 | read_only_file_path, |
| 367 | + READ_ONLY_MODE_ANNOTATIONS_KEY, |
| 368 | ) |
| 369 | |
| 370 | |
| 371 | @@ -37,7 +42,7 @@ |
| 372 | def remove_read_only_file(assert_mode_switch=True): |
| 373 | """Remove the file named read-only.txt from the root of the tree. |
| 374 | |
| 375 | - May also assert that the mode switch actually happened (i.e. not |
| 376 | + May also assert that the mode switch actually happened (i.e. not |
| 377 | is_read_only()). This assertion has to be conditional because some tests |
| 378 | will use this during the processing of a request, when a mode change can't |
| 379 | happen (i.e. is_read_only() will still return True during that request's |
| 380 | @@ -48,3 +53,15 @@ |
| 381 | # Assert that the switch succeeded and make sure the mode change is |
| 382 | # logged. |
| 383 | assert not is_read_only(), "Switching to read-write failed." |
| 384 | + |
| 385 | + |
| 386 | +@contextmanager |
| 387 | +def read_only_mode(flag=True): |
| 388 | + request = get_current_browser_request() |
| 389 | + current = request.annotations[READ_ONLY_MODE_ANNOTATIONS_KEY] |
| 390 | + request.annotations[READ_ONLY_MODE_ANNOTATIONS_KEY] = flag |
| 391 | + try: |
| 392 | + assert is_read_only() == flag, 'Failed to set read-only mode' |
| 393 | + yield |
| 394 | + finally: |
| 395 | + request.annotations[READ_ONLY_MODE_ANNOTATIONS_KEY] = current |
| 396 | |
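The new `read_only_mode` helper above is a save/override/restore context manager: it stashes the current annotation value, forces the flag, and always puts the old value back in a `finally` block. The shape can be sketched in isolation like this (the dict stands in for the request annotations the real helper mutates):

```python
from contextlib import contextmanager

# Stand-in for the request annotations keyed by
# READ_ONLY_MODE_ANNOTATIONS_KEY in the real code.
annotations = {"read_only": False}

@contextmanager
def read_only_mode(flag=True):
    # Save, override, and restore in finally -- the previous value
    # always comes back, even if the with-body raises.
    previous = annotations["read_only"]
    annotations["read_only"] = flag
    try:
        yield
    finally:
        annotations["read_only"] = previous

with read_only_mode():
    assert annotations["read_only"] is True
assert annotations["read_only"] is False
```

The `finally` restore is what makes the helper safe to nest inside tests that fail midway.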
| 397 | === modified file 'lib/canonical/launchpad/webapp/adapter.py' |
| 398 | --- lib/canonical/launchpad/webapp/adapter.py 2010-11-08 12:52:43 +0000 |
| 399 | +++ lib/canonical/launchpad/webapp/adapter.py 2010-11-19 10:27:10 +0000 |
| 400 | @@ -60,6 +60,7 @@ |
| 401 | IStoreSelector, |
| 402 | MAIN_STORE, |
| 403 | MASTER_FLAVOR, |
| 404 | + ReadOnlyModeDisallowedStore, |
| 405 | ReadOnlyModeViolation, |
| 406 | SLAVE_FLAVOR, |
| 407 | ) |
| 408 | @@ -129,6 +130,7 @@ |
| 409 | |
| 410 | |
| 411 | class CommitLogger: |
| 412 | + |
| 413 | def __init__(self, txn): |
| 414 | self.txn = txn |
| 415 | |
| 416 | @@ -261,15 +263,16 @@ |
| 417 | def set_permit_timeout_from_features(enabled): |
| 418 | """Control request timeouts being obtained from the 'hard_timeout' flag. |
| 419 | |
| 420 | - Until we've fully setup a page to render - routed the request to the right |
| 421 | - object, setup a participation etc, feature flags cannot be completely used; |
| 422 | - and because doing feature flag lookups will trigger DB access, attempting |
| 423 | - to do a DB lookup will cause a nested DB lookup (the one being done, and |
| 424 | - the flags lookup). To resolve all of this, timeouts start as a config file |
| 425 | - only setting, and are then overridden once the request is ready to execute. |
| 426 | + Until we've fully setup a page to render - routed the request to the |
| 427 | + right object, setup a participation etc, feature flags cannot be |
| 428 | + completely used; and because doing feature flag lookups will trigger |
| 429 | + DB access, attempting to do a DB lookup will cause a nested DB |
| 430 | + lookup (the one being done, and the flags lookup). To resolve all of |
| 431 | + this, timeouts start as a config file only setting, and are then |
| 432 | + overridden once the request is ready to execute. |
| 433 | |
| 434 | - :param enabled: If True permit looking up request timeouts in feature |
| 435 | - flags. |
| 436 | + :param enabled: If True permit looking up request timeouts in |
| 437 | + feature flags. |
| 438 | """ |
| 439 | _local._permit_feature_timeout = enabled |
| 440 | |
| 441 | @@ -350,6 +353,7 @@ |
| 442 | |
| 443 | _main_thread_id = None |
| 444 | |
| 445 | + |
| 446 | def break_main_thread_db_access(*ignored): |
| 447 | """Ensure that Storm connections are not made in the main thread. |
| 448 | |
| 449 | @@ -390,6 +394,7 @@ |
| 450 | |
| 451 | class ReadOnlyModeConnection(PostgresConnection): |
| 452 | """storm.database.Connection for read-only mode Launchpad.""" |
| 453 | + |
| 454 | def execute(self, statement, params=None, noresult=False): |
| 455 | """See storm.database.Connection.""" |
| 456 | try: |
| 457 | @@ -550,13 +555,14 @@ |
| 458 | # XXX: This code does not belong here - see bug=636804. |
| 459 | # Robert Collins 20100913. |
| 460 | OpStats.stats['timeouts'] += 1 |
| 461 | - # XXX bug=636801 Robert Colins 20100914 This is duplicated from the |
| 462 | - # statement tracer, because the tracers are not arranged in a stack |
| 463 | - # rather a queue: the done-code in the statement tracer never runs. |
| 464 | + # XXX bug=636801 Robert Colins 20100914 This is duplicated |
| 465 | + # from the statement tracer, because the tracers are not |
| 466 | + # arranged in a stack rather a queue: the done-code in the |
| 467 | + # statement tracer never runs. |
| 468 | action = getattr(connection, '_lp_statement_action', None) |
| 469 | if action is not None: |
| 470 | - # action may be None if the tracer was installed after the |
| 471 | - # statement was submitted. |
| 472 | + # action may be None if the tracer was installed after |
| 473 | + # the statement was submitted. |
| 474 | action.finish() |
| 475 | info = sys.exc_info() |
| 476 | transaction.doom() |
| 477 | @@ -666,6 +672,20 @@ |
| 478 | @staticmethod |
| 479 | def get(name, flavor): |
| 480 | """See `IStoreSelector`.""" |
| 481 | + if is_read_only(): |
| 482 | + # If we are in read-only mode, override the default to the |
| 483 | + # slave no matter what the existing policy says (it might |
| 484 | + # work), and raise an exception if the master was explicitly |
| 485 | + # requested. Most of the time, this doesn't matter as when |
| 486 | + # we are in read-only mode we have a suitable database |
| 487 | + # policy installed. However, code can override the policy so |
| 488 | + # we still need to catch disallowed requests here. |
| 489 | + if flavor == DEFAULT_FLAVOR: |
| 490 | + flavor = SLAVE_FLAVOR |
| 491 | + elif flavor == MASTER_FLAVOR: |
| 492 | + raise ReadOnlyModeDisallowedStore(name, flavor) |
| 493 | + else: |
| 494 | + pass |
| 495 | db_policy = StoreSelector.get_current() |
| 496 | if db_policy is None: |
| 497 | db_policy = MasterDatabasePolicy(None) |
| 498 | |
| 499 | === modified file 'lib/canonical/launchpad/webapp/dbpolicy.py' |
| 500 | --- lib/canonical/launchpad/webapp/dbpolicy.py 2010-11-08 12:52:43 +0000 |
| 501 | +++ lib/canonical/launchpad/webapp/dbpolicy.py 2010-11-19 10:27:10 +0000 |
| 502 | @@ -149,6 +149,7 @@ |
| 503 | |
| 504 | class DatabaseBlockedPolicy(BaseDatabasePolicy): |
| 505 | """`IDatabasePolicy` that blocks all access to the database.""" |
| 506 | + |
| 507 | def getStore(self, name, flavor): |
| 508 | """Raises `DisallowedStore`. No Database access is allowed.""" |
| 509 | raise DisallowedStore(name, flavor) |
| 510 | @@ -180,6 +181,7 @@ |
| 511 | This policy is used for Feeds requests and other always-read only request. |
| 512 | """ |
| 513 | default_flavor = SLAVE_FLAVOR |
| 514 | + |
| 515 | def getStore(self, name, flavor): |
| 516 | """See `IDatabasePolicy`.""" |
| 517 | if flavor == MASTER_FLAVOR: |
| 518 | @@ -210,6 +212,7 @@ |
| 519 | |
| 520 | Selects the DEFAULT_FLAVOR based on the request. |
| 521 | """ |
| 522 | + |
| 523 | def __init__(self, request): |
| 524 | # The super constructor is a no-op. |
| 525 | # pylint: disable-msg=W0231 |
| 526 | @@ -364,6 +367,7 @@ |
| 527 | |
| 528 | Access to all master Stores is blocked. |
| 529 | """ |
| 530 | + |
| 531 | def getStore(self, name, flavor): |
| 532 | """See `IDatabasePolicy`. |
| 533 | |
| 534 | @@ -383,6 +387,7 @@ |
| 535 | |
| 536 | class WhichDbView(LaunchpadView): |
| 537 | "A page that reports which database is being used by default." |
| 538 | + |
| 539 | def render(self): |
| 540 | store = getUtility(IStoreSelector).get(MAIN_STORE, DEFAULT_FLAVOR) |
| 541 | dbname = store.execute("SELECT current_database()").get_one()[0] |
| 542 | |
| 543 | === modified file 'lib/lp/app/javascript/tests/test_lp_collapsibles.html' |
| 544 | --- lib/lp/app/javascript/tests/test_lp_collapsibles.html 2010-07-26 13:42:32 +0000 |
| 545 | +++ lib/lp/app/javascript/tests/test_lp_collapsibles.html 2010-11-19 10:27:10 +0000 |
| 546 | @@ -4,14 +4,14 @@ |
| 547 | <title>Launchpad Collapsibles</title> |
| 548 | |
| 549 | <!-- YUI 3.0 Setup --> |
| 550 | - <script type="text/javascript" src="../../../icing/yui/yui/yui.js"></script> |
| 551 | - <link rel="stylesheet" href="../../../icing/yui/cssreset/reset.css"/> |
| 552 | - <link rel="stylesheet" href="../../../icing/yui/cssfonts/fonts.css"/> |
| 553 | - <link rel="stylesheet" href="../../../icing/yui/cssbase/base.css"/> |
| 554 | - <link rel="stylesheet" href="../../test.css" /> |
| 555 | + <script type="text/javascript" src="../../../../canonical/launchpad/icing/yui/yui/yui.js"></script> |
| 556 | + <script type="text/javascript" src="../../../../canonical/launchpad/icing/lazr/build/lazr.js"></script> |
| 557 | + <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssreset/reset.css"/> |
| 558 | + <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssfonts/fonts.css"/> |
| 559 | + <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssbase/base.css"/> |
| 560 | + <link rel="stylesheet" href="../../../../canonical/launchpad/javascript/test.css" /> |
| 561 | |
| 562 | <!-- The module under test --> |
| 563 | - <script type="text/javascript" src="../../../icing/lazr/build/effects/effects.js"></script> |
| 564 | <script type="text/javascript" src="../lp.js"></script> |
| 565 | |
| 566 | <!-- The test suite --> |
| 567 | |
| 568 | === modified file 'lib/lp/app/javascript/tests/test_lp_collapsibles.js' |
| 569 | --- lib/lp/app/javascript/tests/test_lp_collapsibles.js 2010-07-26 13:42:32 +0000 |
| 570 | +++ lib/lp/app/javascript/tests/test_lp_collapsibles.js 2010-11-19 10:27:10 +0000 |
| 571 | @@ -1,21 +1,21 @@ |
| 572 | /* Copyright (c) 2009, Canonical Ltd. All rights reserved. */ |
| 573 | |
| 574 | YUI({ |
| 575 | - base: '../../../icing/yui/', |
| 576 | + base: '../../../../canonical/launchpad/icing/yui/', |
| 577 | filter: 'raw', |
| 578 | combine: false |
| 579 | }).use('test', 'console', 'lp', function(Y) { |
| 580 | |
| 581 | var Assert = Y.Assert; // For easy access to isTrue(), etc. |
| 582 | |
| 583 | -Y.Test.Runner.add(new Y.Test.Case({ |
| 584 | +var suite = new Y.Test.Suite("Collapsibles Tests"); |
| 585 | +suite.add(new Y.Test.Case({ |
| 586 | name: "activate_collapsibles", |
| 587 | |
| 588 | _should: { |
| 589 | - error: { |
| 590 | + fail: { |
| 591 | test_toggle_collapsible_fails_on_wrapperless_collapsible: true, |
| 592 | test_toggle_collapsible_fails_on_iconless_collapsible: true, |
| 593 | - test_activate_collapsibles_handles_no_collapsibles: false |
| 594 | } |
| 595 | }, |
| 596 | |
| 597 | @@ -149,17 +149,16 @@ |
| 598 | test_toggle_collapsible_opens_collapsed_collapsible: function() { |
| 599 | // Calling toggle_collapsible() on a collapsed collapsible will |
| 600 | // toggle its state to open. |
| 601 | + Y.lp.activate_collapsibles(); |
| 602 | var collapsible = this.container.one('.collapsible'); |
| 603 | - collapsible.addClass('collapsed'); |
| 604 | + var wrapper_div = collapsible.one('.collapseWrapper'); |
| 605 | + wrapper_div.addClass('lazr-closed'); |
| 606 | |
| 607 | - Y.lp.activate_collapsibles(); |
| 608 | Y.lp.toggle_collapsible(collapsible); |
| 609 | this.wait(function() { |
| 610 | - |
| 611 | // The collapsible's wrapper div will now be open. |
| 612 | var icon = collapsible.one('img'); |
| 613 | - var wrapper_div = collapsible.one('.collapseWrapper'); |
| 614 | - Assert.isTrue(wrapper_div.hasClass('lazr-open')); |
| 615 | + Assert.isFalse(wrapper_div.hasClass('lazr-closed')); |
| 616 | Assert.areNotEqual( |
| 617 | -1, icon.get('src').indexOf('/@@/treeExpanded')); |
| 618 | }, 500); |
| 619 | @@ -321,6 +320,15 @@ |
| 620 | } |
| 621 | })); |
| 622 | |
| 623 | +// Lock, stock, and two smoking barrels. |
| 624 | +var handle_complete = function(data) { |
| 625 | + status_node = Y.Node.create( |
| 626 | + '<p id="complete">Test status: complete</p>'); |
| 627 | + Y.get('body').appendChild(status_node); |
| 628 | + }; |
| 629 | +Y.Test.Runner.on('complete', handle_complete); |
| 630 | +Y.Test.Runner.add(suite); |
| 631 | + |
| 632 | var yui_console = new Y.Console({ |
| 633 | newestOnTop: false |
| 634 | }); |
| 635 | |
| 636 | === added directory 'lib/lp/app/windmill' |
| 637 | === added file 'lib/lp/app/windmill/__init__.py' |
| 638 | === added file 'lib/lp/app/windmill/testing.py' |
| 639 | --- lib/lp/app/windmill/testing.py 1970-01-01 00:00:00 +0000 |
| 640 | +++ lib/lp/app/windmill/testing.py 2010-11-19 10:27:10 +0000 |
| 641 | @@ -0,0 +1,21 @@ |
| 642 | +# Copyright 2009-2010 Canonical Ltd. This software is licensed under the |
| 643 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 644 | + |
| 645 | +"""Launchpad app specific testing infrastructure for Windmill.""" |
| 646 | + |
| 647 | +__metaclass__ = type |
| 648 | +__all__ = [ |
| 649 | + 'AppWindmillLayer', |
| 650 | + ] |
| 651 | + |
| 652 | + |
| 653 | +from canonical.testing.layers import BaseWindmillLayer |
| 654 | + |
| 655 | + |
| 656 | +class AppWindmillLayer(BaseWindmillLayer): |
| 657 | + """Layer for App Windmill tests.""" |
| 658 | + |
| 659 | + @classmethod |
| 660 | + def setUp(cls): |
| 661 | + cls.base_url = cls.appserver_root_url() |
| 662 | + super(AppWindmillLayer, cls).setUp() |
| 663 | |
| 664 | === added directory 'lib/lp/app/windmill/tests' |
| 665 | === added file 'lib/lp/app/windmill/tests/__init__.py' |
| 666 | === added file 'lib/lp/app/windmill/tests/test_yuitests.py' |
| 667 | --- lib/lp/app/windmill/tests/test_yuitests.py 1970-01-01 00:00:00 +0000 |
| 668 | +++ lib/lp/app/windmill/tests/test_yuitests.py 2010-11-19 10:27:10 +0000 |
| 669 | @@ -0,0 +1,24 @@ |
| 670 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 671 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 672 | + |
| 673 | +"""Run YUI.test tests.""" |
| 674 | + |
| 675 | +__metaclass__ = type |
| 676 | +__all__ = [] |
| 677 | + |
| 678 | +from lp.app.windmill.testing import AppWindmillLayer |
| 679 | +from lp.testing import ( |
| 680 | + build_yui_unittest_suite, |
| 681 | + YUIUnitTestCase, |
| 682 | + ) |
| 683 | + |
| 684 | + |
| 685 | +class AppYUIUnitTestCase(YUIUnitTestCase): |
| 686 | + |
| 687 | + layer = AppWindmillLayer |
| 688 | + suite_name = 'AppYUIUnitTests' |
| 689 | + |
| 690 | + |
| 691 | +def test_suite(): |
| 692 | + app_testing_path = 'lp/app/javascript/tests' |
| 693 | + return build_yui_unittest_suite(app_testing_path, AppYUIUnitTestCase) |
| 694 | |
| 695 | === modified file 'lib/lp/archivepublisher/domination.py' |
| 696 | --- lib/lp/archivepublisher/domination.py 2010-10-03 15:30:06 +0000 |
| 697 | +++ lib/lp/archivepublisher/domination.py 2010-11-19 10:27:10 +0000 |
| 698 | @@ -58,19 +58,24 @@ |
| 699 | import operator |
| 700 | |
| 701 | import apt_pkg |
| 702 | +from storm.expr import And, Count, Select |
| 703 | |
| 704 | from canonical.database.constants import UTC_NOW |
| 705 | from canonical.database.sqlbase import ( |
| 706 | clear_current_connection_cache, |
| 707 | - cursor, |
| 708 | flush_database_updates, |
| 709 | sqlvalues, |
| 710 | ) |
| 711 | +from canonical.launchpad.interfaces.lpstorm import IMasterStore |
| 712 | from lp.archivepublisher import ELIGIBLE_DOMINATION_STATES |
| 713 | +from lp.registry.model.sourcepackagename import SourcePackageName |
| 714 | from lp.soyuz.enums import ( |
| 715 | BinaryPackageFormat, |
| 716 | PackagePublishingStatus, |
| 717 | ) |
| 718 | +from lp.soyuz.model.binarypackagename import BinaryPackageName |
| 719 | +from lp.soyuz.model.binarypackagerelease import BinaryPackageRelease |
| 720 | +from lp.soyuz.model.sourcepackagerelease import SourcePackageRelease |
| 721 | |
| 722 | |
| 723 | def clear_cache(): |
| 724 | @@ -287,60 +292,67 @@ |
| 725 | self.debug("Performing domination across %s/%s (%s)" % ( |
| 726 | dr.name, pocket.title, distroarchseries.architecturetag)) |
| 727 | |
| 728 | - # Here we go behind SQLObject's back to generate an assistance |
| 729 | - # table which will seriously improve the performance of this |
| 730 | - # part of the publisher. |
| 731 | - # XXX: dsilvers 2006-02-04: It would be nice to not have to do |
| 732 | - # this. Most of this methodology is stolen from person.py |
| 733 | - # XXX: malcc 2006-08-03: This should go away when we shift to |
| 734 | - # doing this one package at a time. |
| 735 | - flush_database_updates() |
| 736 | - cur = cursor() |
| 737 | - cur.execute("""SELECT bpn.id AS name, count(bpn.id) AS count INTO |
| 738 | - temporary table PubDomHelper FROM BinaryPackageRelease bpr, |
| 739 | - BinaryPackageName bpn, BinaryPackagePublishingHistory |
| 740 | - sbpph WHERE bpr.binarypackagename = bpn.id AND |
| 741 | - sbpph.binarypackagerelease = bpr.id AND |
| 742 | - sbpph.distroarchseries = %s AND sbpph.archive = %s AND |
| 743 | - sbpph.status = %s AND sbpph.pocket = %s |
| 744 | - GROUP BY bpn.id""" % sqlvalues( |
| 745 | - distroarchseries, self.archive, |
| 746 | - PackagePublishingStatus.PUBLISHED, pocket)) |
| 747 | - |
| 748 | - binaries = BinaryPackagePublishingHistory.select( |
| 749 | - """ |
| 750 | - binarypackagepublishinghistory.distroarchseries = %s |
| 751 | - AND binarypackagepublishinghistory.archive = %s |
| 752 | - AND binarypackagepublishinghistory.pocket = %s |
| 753 | - AND binarypackagepublishinghistory.status = %s AND |
| 754 | - binarypackagepublishinghistory.binarypackagerelease = |
| 755 | - binarypackagerelease.id |
| 756 | - AND binarypackagerelease.binpackageformat != %s |
| 757 | - AND binarypackagerelease.binarypackagename IN ( |
| 758 | - SELECT name FROM PubDomHelper WHERE count > 1)""" |
| 759 | - % sqlvalues(distroarchseries, self.archive, |
| 760 | - pocket, PackagePublishingStatus.PUBLISHED, |
| 761 | - BinaryPackageFormat.DDEB), |
| 762 | - clauseTables=['BinaryPackageRelease']) |
| 763 | - |
| 764 | + bpph_location_clauses = And( |
| 765 | + BinaryPackagePublishingHistory.status == |
| 766 | + PackagePublishingStatus.PUBLISHED, |
| 767 | + BinaryPackagePublishingHistory.distroarchseries == |
| 768 | + distroarchseries, |
| 769 | + BinaryPackagePublishingHistory.archive == self.archive, |
| 770 | + BinaryPackagePublishingHistory.pocket == pocket, |
| 771 | + ) |
| 772 | + candidate_binary_names = Select( |
| 773 | + BinaryPackageName.id, |
| 774 | + And( |
| 775 | + BinaryPackageRelease.binarypackagenameID == |
| 776 | + BinaryPackageName.id, |
| 777 | + BinaryPackagePublishingHistory.binarypackagereleaseID == |
| 778 | + BinaryPackageRelease.id, |
| 779 | + bpph_location_clauses, |
| 780 | + ), |
| 781 | + group_by=BinaryPackageName.id, |
| 782 | + having=Count(BinaryPackagePublishingHistory.id) > 1) |
| 783 | + binaries = IMasterStore(BinaryPackagePublishingHistory).find( |
| 784 | + BinaryPackagePublishingHistory, |
| 785 | + BinaryPackageRelease.id == |
| 786 | + BinaryPackagePublishingHistory.binarypackagereleaseID, |
| 787 | + BinaryPackageRelease.binarypackagenameID.is_in( |
| 788 | + candidate_binary_names), |
| 789 | + BinaryPackageRelease.binpackageformat != |
| 790 | + BinaryPackageFormat.DDEB, |
| 791 | + bpph_location_clauses) |
| 792 | self.debug("Dominating binaries...") |
| 793 | self._dominatePublications(self._sortPackages(binaries, False)) |
| 794 | if do_clear_cache: |
| 795 | self.debug("Flushing SQLObject cache.") |
| 796 | clear_cache() |
| 797 | |
| 798 | - flush_database_updates() |
| 799 | - cur.execute("DROP TABLE PubDomHelper") |
| 800 | - |
| 801 | - if do_clear_cache: |
| 802 | - self.debug("Flushing SQLObject cache.") |
| 803 | - clear_cache() |
| 804 | - |
| 805 | self.debug("Performing domination across %s/%s (Source)" % |
| 806 | (dr.name, pocket.title)) |
| 807 | - sources = SourcePackagePublishingHistory.selectBy( |
| 808 | - distroseries=dr, archive=self.archive, pocket=pocket, |
| 809 | - status=PackagePublishingStatus.PUBLISHED) |
| 810 | + spph_location_clauses = And( |
| 811 | + SourcePackagePublishingHistory.status == |
| 812 | + PackagePublishingStatus.PUBLISHED, |
| 813 | + SourcePackagePublishingHistory.distroseries == dr, |
| 814 | + SourcePackagePublishingHistory.archive == self.archive, |
| 815 | + SourcePackagePublishingHistory.pocket == pocket, |
| 816 | + ) |
| 817 | + candidate_source_names = Select( |
| 818 | + SourcePackageName.id, |
| 819 | + And( |
| 820 | + SourcePackageRelease.sourcepackagenameID == |
| 821 | + SourcePackageName.id, |
| 822 | + SourcePackagePublishingHistory.sourcepackagereleaseID == |
| 823 | + SourcePackageRelease.id, |
| 824 | + spph_location_clauses, |
| 825 | + ), |
| 826 | + group_by=SourcePackageName.id, |
| 827 | + having=Count(SourcePackagePublishingHistory.id) > 1) |
| 828 | + sources = IMasterStore(SourcePackagePublishingHistory).find( |
| 829 | + SourcePackagePublishingHistory, |
| 830 | + SourcePackageRelease.id == |
| 831 | + SourcePackagePublishingHistory.sourcepackagereleaseID, |
| 832 | + SourcePackageRelease.sourcepackagenameID.is_in( |
| 833 | + candidate_source_names), |
| 834 | + spph_location_clauses) |
| 835 | self.debug("Dominating sources...") |
| 836 | self._dominatePublications(self._sortPackages(sources)) |
| 837 | flush_database_updates() |
| 838 | |
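The domination.py change above replaces the temporary `PubDomHelper` assistance table with a single Storm `Select` using `group_by`/`having` to find names that are published more than once. The SQL that the Storm expression compiles to follows the standard "HAVING COUNT(...) > 1" duplicate-detection pattern, which can be sketched against a toy sqlite schema (the table and column names here are illustrative stand-ins, not Launchpad's real schema):

```python
import sqlite3

# Toy schema standing in for BinaryPackageName / the publishing history
# table; names and columns are illustrative, not Launchpad's real ones.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE name (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE publication (id INTEGER PRIMARY KEY, name_id INTEGER);
    INSERT INTO name VALUES (1, 'single'), (2, 'dupe');
    INSERT INTO publication (name_id) VALUES (1), (2), (2);
""")

# The same shape as Select(..., group_by=BinaryPackageName.id,
# having=Count(BinaryPackagePublishingHistory.id) > 1): keep only the
# names that have more than one live publication.
candidates = [row[0] for row in conn.execute("""
    SELECT name.id
    FROM name JOIN publication ON publication.name_id = name.id
    GROUP BY name.id
    HAVING COUNT(publication.id) > 1
""")]
print(candidates)  # -> [2]
```

Expressing the filter symbolically lets the database run domination candidate selection as one statement instead of materialising and later dropping an assistance table.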
| 839 | === modified file 'lib/lp/bugs/model/bugtask.py' |
| 840 | --- lib/lp/bugs/model/bugtask.py 2010-11-15 16:25:05 +0000 |
| 841 | +++ lib/lp/bugs/model/bugtask.py 2010-11-19 10:27:10 +0000 |
| 842 | @@ -35,11 +35,12 @@ |
| 843 | from storm.expr import ( |
| 844 | Alias, |
| 845 | And, |
| 846 | - AutoTables, |
| 847 | Desc, |
| 848 | + In, |
| 849 | Join, |
| 850 | LeftJoin, |
| 851 | Or, |
| 852 | + Select, |
| 853 | SQL, |
| 854 | ) |
| 855 | from storm.store import ( |
| 856 | @@ -160,6 +161,7 @@ |
| 857 | ) |
| 858 | from lp.registry.model.pillar import pillar_sort_key |
| 859 | from lp.registry.model.sourcepackagename import SourcePackageName |
| 860 | +from lp.registry.model.structuralsubscription import StructuralSubscription |
| 861 | from lp.services.propertycache import get_property_cache |
| 862 | from lp.soyuz.enums import PackagePublishingStatus |
| 863 | from lp.soyuz.model.publishing import SourcePackagePublishingHistory |
| 864 | @@ -1606,7 +1608,9 @@ |
| 865 | from lp.bugs.model.bug import Bug |
| 866 | extra_clauses = ['Bug.id = BugTask.bug'] |
| 867 | clauseTables = [BugTask, Bug] |
| 868 | + join_tables = [] |
| 869 | decorators = [] |
| 870 | + has_duplicate_results = False |
| 871 | |
| 872 | # These arguments can be processed in a loop without any other |
| 873 | # special handling. |
| 874 | @@ -1662,7 +1666,7 @@ |
| 875 | extra_clauses.append("BugTask.milestone %s" % where_cond) |
| 876 | |
| 877 | if params.project: |
| 878 | - # Circular. |
| 879 | + # Prevent circular import problems. |
| 880 | from lp.registry.model.product import Product |
| 881 | clauseTables.append(Product) |
| 882 | extra_clauses.append("BugTask.product = Product.id") |
| 883 | @@ -1713,47 +1717,54 @@ |
| 884 | sqlvalues(personid=params.subscriber.id)) |
| 885 | |
| 886 | if params.structural_subscriber is not None: |
| 887 | - structural_subscriber_clause = ("""BugTask.id IN ( |
| 888 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 889 | - WHERE BugTask.product = StructuralSubscription.product |
| 890 | - AND StructuralSubscription.subscriber = %(personid)s |
| 891 | - UNION ALL |
| 892 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 893 | - WHERE |
| 894 | - BugTask.distribution = StructuralSubscription.distribution |
| 895 | - AND BugTask.sourcepackagename = |
| 896 | - StructuralSubscription.sourcepackagename |
| 897 | - AND StructuralSubscription.subscriber = %(personid)s |
| 898 | - UNION ALL |
| 899 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 900 | - WHERE |
| 901 | - BugTask.distroseries = StructuralSubscription.distroseries |
| 902 | - AND StructuralSubscription.subscriber = %(personid)s |
| 903 | - UNION ALL |
| 904 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 905 | - WHERE |
| 906 | - BugTask.milestone = StructuralSubscription.milestone |
| 907 | - AND StructuralSubscription.subscriber = %(personid)s |
| 908 | - UNION ALL |
| 909 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 910 | - WHERE |
| 911 | - BugTask.productseries = StructuralSubscription.productseries |
| 912 | - AND StructuralSubscription.subscriber = %(personid)s |
| 913 | - UNION ALL |
| 914 | - SELECT BugTask.id |
| 915 | - FROM BugTask, StructuralSubscription, Product |
| 916 | - WHERE |
| 917 | - BugTask.product = Product.id |
| 918 | - AND Product.project = StructuralSubscription.project |
| 919 | - AND StructuralSubscription.subscriber = %(personid)s |
| 920 | - UNION ALL |
| 921 | - SELECT BugTask.id FROM BugTask, StructuralSubscription |
| 922 | - WHERE |
| 923 | - BugTask.distribution = StructuralSubscription.distribution |
| 924 | - AND StructuralSubscription.sourcepackagename is NULL |
| 925 | - AND StructuralSubscription.subscriber = %(personid)s)""" % |
| 926 | - sqlvalues(personid=params.structural_subscriber)) |
| 927 | - extra_clauses.append(structural_subscriber_clause) |
| 928 | + ssub_match_product = ( |
| 929 | + BugTask.productID == |
| 930 | + StructuralSubscription.productID) |
| 931 | + ssub_match_productseries = ( |
| 932 | + BugTask.productseriesID == |
| 933 | + StructuralSubscription.productseriesID) |
| 934 | + # Prevent circular import problems. |
| 935 | + from lp.registry.model.product import Product |
| 936 | + ssub_match_project = And( |
| 937 | + Product.projectID == |
| 938 | + StructuralSubscription.projectID, |
| 939 | + BugTask.product == Product.id) |
| 940 | + ssub_match_distribution = ( |
| 941 | + BugTask.distributionID == |
| 942 | + StructuralSubscription.distributionID) |
| 943 | + ssub_match_sourcepackagename = ( |
| 944 | + BugTask.sourcepackagenameID == |
| 945 | + StructuralSubscription.sourcepackagenameID) |
| 946 | + ssub_match_null_sourcepackagename = ( |
| 947 | + StructuralSubscription.sourcepackagename == None) |
| 948 | + ssub_match_distribution_with_optional_package = And( |
| 949 | + ssub_match_distribution, Or( |
| 950 | + ssub_match_sourcepackagename, |
| 951 | + ssub_match_null_sourcepackagename)) |
| 952 | + ssub_match_distribution_series = ( |
| 953 | + BugTask.distroseriesID == |
| 954 | + StructuralSubscription.distroseriesID) |
| 955 | + ssub_match_milestone = ( |
| 956 | + BugTask.milestoneID == |
| 957 | + StructuralSubscription.milestoneID) |
| 958 | + |
| 959 | + join_clause = Or( |
| 960 | + ssub_match_product, |
| 961 | + ssub_match_productseries, |
| 962 | + ssub_match_project, |
| 963 | + ssub_match_distribution_with_optional_package, |
| 964 | + ssub_match_distribution_series, |
| 965 | + ssub_match_milestone) |
| 966 | + |
| 967 | + join_tables.append( |
| 968 | + (Product, LeftJoin(Product, BugTask.productID == Product.id))) |
| 969 | + join_tables.append( |
| 970 | + (StructuralSubscription, |
| 971 | + Join(StructuralSubscription, join_clause))) |
| 972 | + extra_clauses.append( |
| 973 | + 'StructuralSubscription.subscriber = %s' |
| 974 | + % sqlvalues(params.structural_subscriber)) |
| 975 | + has_duplicate_results = True |
| 976 | |
| 977 | if params.component: |
| 978 | clauseTables += [SourcePackagePublishingHistory, |
| 979 | @@ -1836,7 +1847,7 @@ |
| 980 | if params.bug_commenter: |
| 981 | bug_commenter_clause = """ |
| 982 | BugTask.id IN ( |
| 983 | - SELECT BugTask.id FROM BugTask, BugMessage, Message |
| 984 | + SELECT DISTINCT BugTask.id FROM BugTask, BugMessage, Message |
| 985 | WHERE Message.owner = %(bug_commenter)s |
| 986 | AND Message.id = BugMessage.message |
| 987 | AND BugTask.bug = BugMessage.bug |
| 988 | @@ -1928,7 +1939,9 @@ |
| 989 | for decor in decorators: |
| 990 | obj = decor(obj) |
| 991 | return obj |
| 992 | - return query, clauseTables, orderby_arg, decorator |
| 993 | + return ( |
| 994 | + query, clauseTables, orderby_arg, decorator, join_tables, |
| 995 | + has_duplicate_results) |
| 996 | |
| 997 | def _buildUpstreamClause(self, params): |
| 998 | """Return an clause for returning upstream data if the data exists. |
| 999 | @@ -2156,101 +2169,137 @@ |
| 1000 | ', '.join(tables), ' AND '.join(clauses)) |
| 1001 | return clause |
| 1002 | |
| 1003 | - def search(self, params, *args, **kwargs): |
| 1004 | - """See `IBugTaskSet`. |
| 1005 | - |
| 1006 | - :param _noprejoins: Private internal parameter to BugTaskSet which |
| 1007 | - disables all use of prejoins : consolidated from code paths that |
| 1008 | - claim they were inefficient and unwanted. |
| 1009 | - """ |
| 1010 | - # Circular. |
| 1011 | - from lp.registry.model.product import Product |
| 1012 | - from lp.bugs.model.bug import Bug |
| 1013 | - _noprejoins = kwargs.get('_noprejoins', False) |
| 1014 | + def buildOrigin(self, join_tables, prejoin_tables, clauseTables): |
| 1015 | + """Build the parameter list for Store.using(). |
| 1016 | + |
| 1017 | + :param join_tables: A sequence of tables that should be joined |
| 1018 | + as returned by buildQuery(). Each element has the form |
| 1019 | + (table, join), where table is the table to join and join |
| 1020 | + is a Storm Join or LeftJoin instance. |
| 1021 | + :param prejoin_tables: A sequence of tables that should additionally |
| 1022 | + be joined. Each element has the form (table, join), |
| 1023 | + where table is the table to join and join is a Storm Join |
| 1024 | + or LeftJoin instance. |
| 1025 | + :param clauseTables: A sequence of tables that should appear in |
| 1026 | + the FROM clause of a query. The join condition is defined in |
| 1027 | + the WHERE clause. |
| 1028 | + |
| 1029 | + Tables may appear simultaneously in join_tables, prejoin_tables |
| 1030 | + and in clauseTables. This method ensures that each table |
| 1031 | + appears exactly once in the returned sequence. |
| 1032 | + """ |
| 1033 | + origin = [BugTask] |
| 1034 | + already_joined = set(origin) |
| 1035 | + for table, join in join_tables: |
| 1036 | + origin.append(join) |
| 1037 | + already_joined.add(table) |
| 1038 | + for table, join in prejoin_tables: |
| 1039 | + if table not in already_joined: |
| 1040 | + origin.append(join) |
| 1041 | + already_joined.add(table) |
| 1042 | + for table in clauseTables: |
| 1043 | + if table not in already_joined: |
| 1044 | + origin.append(table) |
| 1045 | + return origin |
| 1046 | + |
| 1047 | + def _search(self, resultrow, prejoins, params, *args, **kw): |
| 1048 | + """Return a Storm result set for the given search parameters. |
| 1049 | + |
| 1050 | + :param resultrow: The type of data returned by the query. |
| 1051 | + :param prejoins: A sequence of Storm SQL row instances which are |
| 1052 | + pre-joined. |
| 1053 | + :param params: A BugTaskSearchParams instance. |
| 1054 | + :param args: Optional additional BugTaskSearchParams instances. |
| 1055 | + """ |
| 1056 | store = IStore(BugTask) |
| 1057 | - query, clauseTables, orderby, bugtask_decorator = self.buildQuery( |
| 1058 | - params) |
| 1059 | + [query, clauseTables, orderby, bugtask_decorator, join_tables, |
| 1060 | + has_duplicate_results] = self.buildQuery(params) |
| 1061 | if len(args) == 0: |
| 1062 | - if _noprejoins: |
| 1063 | - resultset = store.find(BugTask, |
| 1064 | - AutoTables(SQL("1=1"), clauseTables), |
| 1065 | - query) |
| 1066 | + if has_duplicate_results: |
| 1067 | + origin = self.buildOrigin(join_tables, [], clauseTables) |
| 1068 | + outer_origin = self.buildOrigin([], prejoins, []) |
| 1069 | + subquery = Select(BugTask.id, where=SQL(query), tables=origin) |
| 1070 | + resultset = store.using(*outer_origin).find( |
| 1071 | + resultrow, In(BugTask.id, subquery)) |
| 1072 | + else: |
| 1073 | + origin = self.buildOrigin(join_tables, prejoins, clauseTables) |
| 1074 | + resultset = store.using(*origin).find(resultrow, query) |
| 1075 | + if prejoins: |
| 1076 | + decorator = lambda row: bugtask_decorator(row[0]) |
| 1077 | + else: |
| 1078 | decorator = bugtask_decorator |
| 1079 | - else: |
| 1080 | - tables = clauseTables + [Product, SourcePackageName] |
| 1081 | - origin = [ |
| 1082 | - BugTask, |
| 1083 | - LeftJoin(Bug, BugTask.bug == Bug.id), |
| 1084 | - LeftJoin(Product, BugTask.product == Product.id), |
| 1085 | - LeftJoin( |
| 1086 | - SourcePackageName, |
| 1087 | - BugTask.sourcepackagename == SourcePackageName.id), |
| 1088 | - ] |
| 1089 | - # NB: these may work with AutoTables, but its hard to tell, |
| 1090 | - # this way is known to work. |
| 1091 | - if BugNomination in tables: |
| 1092 | - # The relation is already in query. |
| 1093 | - origin.append(BugNomination) |
| 1094 | - if BugSubscription in tables: |
| 1095 | - # The relation is already in query. |
| 1096 | - origin.append(BugSubscription) |
| 1097 | - if SourcePackageRelease in tables: |
| 1098 | - origin.append(SourcePackageRelease) |
| 1099 | - if SourcePackagePublishingHistory in tables: |
| 1100 | - origin.append(SourcePackagePublishingHistory) |
| 1101 | - resultset = store.using(*origin).find( |
| 1102 | - (BugTask, Product, SourcePackageName, Bug), |
| 1103 | - AutoTables(SQL("1=1"), tables), |
| 1104 | - query) |
| 1105 | - decorator=lambda row: bugtask_decorator(row[0]) |
| 1106 | + |
| 1107 | resultset.order_by(orderby) |
| 1108 | return DecoratedResultSet(resultset, result_decorator=decorator) |
| 1109 | |
| 1110 | bugtask_fti = SQL('BugTask.fti') |
| 1111 | - result = store.find((BugTask, bugtask_fti), query, |
| 1112 | - AutoTables(SQL("1=1"), clauseTables)) |
| 1113 | + inner_resultrow = (BugTask, bugtask_fti) |
| 1114 | + origin = self.buildOrigin(join_tables, [], clauseTables) |
| 1115 | + resultset = store.using(*origin).find(inner_resultrow, query) |
| 1116 | + |
| 1117 | decorators = [bugtask_decorator] |
| 1118 | for arg in args: |
| 1119 | - query, clauseTables, dummy, decorator = self.buildQuery(arg) |
| 1120 | - result = result.union( |
| 1121 | - store.find((BugTask, bugtask_fti), query, |
| 1122 | - AutoTables(SQL("1=1"), clauseTables))) |
| 1123 | + [query, clauseTables, ignore, decorator, join_tables, |
| 1124 | + has_duplicate_results] = self.buildQuery(arg) |
| 1125 | + origin = self.buildOrigin(join_tables, [], clauseTables) |
| 1126 | + next_result = store.using(*origin).find(inner_resultrow, query) |
| 1127 | + resultset = resultset.union(next_result) |
| 1128 | # NB: assumes the decorators are all compatible. |
| 1129 | # This may need revisiting if e.g. searches on behalf of different |
| 1130 | # users are combined. |
| 1131 | decorators.append(decorator) |
| 1132 | |
| 1133 | - def decorator(row): |
| 1134 | + def prejoin_decorator(row): |
| 1135 | bugtask = row[0] |
| 1136 | for decorator in decorators: |
| 1137 | bugtask = decorator(bugtask) |
| 1138 | return bugtask |
| 1139 | |
| 1140 | - # Build up the joins. |
| 1141 | - # TODO: implement _noprejoins for this code path: as of 20100818 it |
| 1142 | - # has been silently disabled because clients of the API were setting |
| 1143 | - # prejoins=[] which had no effect; this TODO simply notes the reality |
| 1144 | - # already existing when it was added. |
| 1145 | - joins = Alias(result._get_select(), "BugTask") |
| 1146 | - joins = Join(joins, Bug, BugTask.bug == Bug.id) |
| 1147 | - joins = LeftJoin(joins, Product, BugTask.product == Product.id) |
| 1148 | - joins = LeftJoin(joins, SourcePackageName, |
| 1149 | - BugTask.sourcepackagename == SourcePackageName.id) |
| 1150 | - |
| 1151 | - result = store.using(joins).find( |
| 1152 | - (BugTask, Bug, Product, SourcePackageName)) |
| 1153 | + def simple_decorator(bugtask): |
| 1154 | + for decorator in decorators: |
| 1155 | + bugtask = decorator(bugtask) |
| 1156 | + return bugtask |
| 1157 | + |
| 1158 | + origin = [Alias(resultset._get_select(), "BugTask")] |
| 1159 | + if prejoins: |
| 1160 | + origin += [join for table, join in prejoins] |
| 1161 | + decorator = prejoin_decorator |
| 1162 | + else: |
| 1163 | + decorator = simple_decorator |
| 1164 | + |
| 1165 | + result = store.using(*origin).find(resultrow) |
| 1166 | result.order_by(orderby) |
| 1167 | return DecoratedResultSet(result, result_decorator=decorator) |
| 1168 | |
| 1169 | + def search(self, params, *args, **kwargs): |
| 1170 | + """See `IBugTaskSet`. |
| 1171 | + |
| 1172 | + :param _noprejoins: Private internal parameter to BugTaskSet which |
| 1173 | + disables all use of prejoins : consolidated from code paths that |
| 1174 | + claim they were inefficient and unwanted. |
| 1175 | + """ |
| 1176 | + # Prevent circular import problems. |
| 1177 | + from lp.registry.model.product import Product |
| 1178 | + from lp.bugs.model.bug import Bug |
| 1179 | + _noprejoins = kwargs.get('_noprejoins', False) |
| 1180 | + if _noprejoins: |
| 1181 | + prejoins = [] |
| 1182 | + resultrow = BugTask |
| 1183 | + else: |
| 1184 | + prejoins = [ |
| 1185 | + (Bug, LeftJoin(Bug, BugTask.bug == Bug.id)), |
| 1186 | + (Product, LeftJoin(Product, BugTask.product == Product.id)), |
| 1187 | + (SourcePackageName, |
| 1188 | + LeftJoin( |
| 1189 | + SourcePackageName, |
| 1190 | + BugTask.sourcepackagename == SourcePackageName.id)), |
| 1191 | + ] |
| 1192 | + resultrow = (BugTask, Bug, Product, SourcePackageName, ) |
| 1193 | + return self._search(resultrow, prejoins, params, *args) |
| 1194 | + |
| 1195 | def searchBugIds(self, params): |
| 1196 | """See `IBugTaskSet`.""" |
| 1197 | - query, clauseTables, orderby, decorator = self.buildQuery( |
| 1198 | - params) |
| 1199 | - store = IStore(BugTask) |
| 1200 | - resultset = store.find(BugTask.bugID, |
| 1201 | - AutoTables(SQL("1=1"), clauseTables), query) |
| 1202 | - resultset.order_by(orderby) |
| 1203 | - return resultset |
| 1204 | + return self._search(BugTask.bugID, [], params).result_set |
| 1205 | |
| 1206 | def getAssignedMilestonesFromSearch(self, search_results): |
| 1207 | """See `IBugTaskSet`.""" |
| 1208 | @@ -2854,8 +2903,8 @@ |
| 1209 | |
| 1210 | if recipients is not None: |
| 1211 | # We need to process subscriptions, so pull all the |
| 1212 | - # subscribes into the cache, then update recipients with |
| 1213 | - # the subscriptions. |
| 1214 | + # subscribers into the cache, then update recipients |
| 1215 | + # with the subscriptions. |
| 1216 | subscribers = list(subscribers) |
| 1217 | for subscription in subscriptions: |
| 1218 | recipients.addStructuralSubscriber( |
| 1219 | |
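The new `buildOrigin()` method above is the heart of the bugtask.py refactoring: a table can arrive via `join_tables`, `prejoin_tables`, or `clauseTables`, but must appear exactly once in the sequence handed to `Store.using()`. A dependency-free sketch of that deduplication logic (the table and join values are illustrative placeholder strings, not real Storm classes or Join objects):

```python
# Minimal sketch of the buildOrigin() deduplication: joins are modelled
# as (table, join_expr) pairs, and a set tracks which tables are
# already part of the FROM list so none is added twice.
def build_origin(base, join_tables, prejoin_tables, clause_tables):
    origin = [base]
    already_joined = {base}
    for table, join in join_tables:
        origin.append(join)
        already_joined.add(table)
    for table, join in prejoin_tables:
        if table not in already_joined:
            origin.append(join)
            already_joined.add(table)
    for table in clause_tables:
        if table not in already_joined:
            origin.append(table)
    return origin

origin = build_origin(
    "BugTask",
    join_tables=[("Product", "LEFT JOIN Product")],
    prejoin_tables=[("Product", "LEFT JOIN Product"),  # duplicate: skipped
                    ("Bug", "JOIN Bug")],
    clause_tables=["Bug", "Message"])                  # Bug already joined
print(origin)  # -> ['BugTask', 'LEFT JOIN Product', 'JOIN Bug', 'Message']
```

This is what lets the structural-subscriber search add its `StructuralSubscription` join once, even when the same tables also show up in `clauseTables` from other search parameters.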
| 1220 | === modified file 'lib/lp/bugs/tests/test_bugtask_search.py' |
| 1221 | --- lib/lp/bugs/tests/test_bugtask_search.py 2010-11-09 06:55:07 +0000 |
| 1222 | +++ lib/lp/bugs/tests/test_bugtask_search.py 2010-11-19 10:27:10 +0000 |
| 1223 | @@ -504,6 +504,23 @@ |
| 1224 | self.assertSearchFinds(params, self.bugtasks[:1]) |
| 1225 | |
| 1226 | |
| 1227 | +class ProjectGroupAndDistributionTests: |
| 1228 | + """Tests which are useful for project groups and distributions.""" |
| 1229 | + |
| 1230 | + def setUpStructuralSubscriptions(self): |
| 1231 | + # Subscribe a user to the search target of this test and to |
| 1232 | + # another target. |
| 1233 | + raise NotImplementedError |
| 1234 | + |
| 1235 | + def test_unique_results_for_multiple_structural_subscriptions(self): |
| 1236 | + # Searching for a subscriber who is subscribed more than once |
| 1237 | + # to a bug task returns that bug task only once. |
| 1238 | + subscriber = self.setUpStructuralSubscriptions() |
| 1239 | + params = self.getBugTaskSearchParams( |
| 1240 | + user=None, structural_subscriber=subscriber) |
| 1241 | + self.assertSearchFinds(params, self.bugtasks) |
| 1242 | + |
| 1243 | + |
| 1244 | class BugTargetTestBase: |
| 1245 | """A base class for the bug target mixin classes.""" |
| 1246 | |
| 1247 | @@ -625,7 +642,8 @@ |
| 1248 | bugtask, self.searchtarget.product) |
| 1249 | |
| 1250 | |
| 1251 | -class ProjectGroupTarget(BugTargetTestBase, BugTargetWithBugSuperVisor): |
| 1252 | +class ProjectGroupTarget(BugTargetTestBase, BugTargetWithBugSuperVisor, |
| 1253 | + ProjectGroupAndDistributionTests): |
| 1254 | """Use a project group as the bug target.""" |
| 1255 | |
| 1256 | def setUp(self): |
| 1257 | @@ -695,6 +713,15 @@ |
| 1258 | 'No bug task found for a product that is not the target of ' |
| 1259 | 'the main test bugtask.') |
| 1260 | |
| 1261 | + def setUpStructuralSubscriptions(self): |
| 1262 | + # See `ProjectGroupAndDistributionTests`. |
| 1263 | + subscriber = self.factory.makePerson() |
| 1264 | + self.subscribeToTarget(subscriber) |
| 1265 | + with person_logged_in(subscriber): |
| 1266 | + self.bugtasks[0].target.addSubscription( |
| 1267 | + subscriber, subscribed_by=subscriber) |
| 1268 | + return subscriber |
| 1269 | + |
| 1270 | |
| 1271 | class MilestoneTarget(BugTargetTestBase): |
| 1272 | """Use a milestone as the bug target.""" |
| 1273 | @@ -728,7 +755,8 @@ |
| 1274 | |
| 1275 | |
| 1276 | class DistributionTarget(BugTargetTestBase, ProductAndDistributionTests, |
| 1277 | - BugTargetWithBugSuperVisor): |
| 1278 | + BugTargetWithBugSuperVisor, |
| 1279 | + ProjectGroupAndDistributionTests): |
| 1280 | """Use a distribution as the bug target.""" |
| 1281 | |
| 1282 | def setUp(self): |
| 1283 | @@ -750,6 +778,18 @@ |
| 1284 | """See `ProductAndDistributionTests`.""" |
| 1285 | return self.factory.makeDistroSeries(distribution=self.searchtarget) |
| 1286 | |
| 1287 | + def setUpStructuralSubscriptions(self): |
| 1288 | + # See `ProjectGroupAndDistributionTests`. |
| 1289 | + subscriber = self.factory.makePerson() |
| 1290 | + sourcepackage = self.factory.makeDistributionSourcePackage( |
| 1291 | + distribution=self.searchtarget) |
| 1292 | + self.bugtasks.append(self.factory.makeBugTask(target=sourcepackage)) |
| 1293 | + self.subscribeToTarget(subscriber) |
| 1294 | + with person_logged_in(subscriber): |
| 1295 | + sourcepackage.addSubscription( |
| 1296 | + subscriber, subscribed_by=subscriber) |
| 1297 | + return subscriber |
| 1298 | + |
| 1299 | |
| 1300 | class DistroseriesTarget(BugTargetTestBase): |
| 1301 | """Use a distro series as the bug target.""" |
| 1302 | @@ -835,7 +875,30 @@ |
| 1303 | ) |
| 1304 | |
| 1305 | |
| 1306 | -class PreloadBugtaskTargets: |
| 1307 | +class MultipleParams: |
| 1308 | + """A mixin class for tests with more than one search parameter object. |
| 1309 | + |
| 1310 | + BugTaskSet.search() can be called with more than one |
| 1311 | + BugTaskSearchParams instance, while BugTaskSet.searchBugIds() |
| 1312 | + accepts exactly one instance. |
| 1313 | + """ |
| 1314 | + |
| 1315 | + def test_two_param_objects(self): |
| 1316 | + # We can pass more than one BugTaskSearchParams instance to |
| 1317 | + # BugTaskSet.search(). |
| 1318 | + params1 = self.getBugTaskSearchParams( |
| 1319 | + user=None, status=BugTaskStatus.FIXCOMMITTED) |
| 1320 | + subscriber = self.factory.makePerson() |
| 1321 | + self.subscribeToTarget(subscriber) |
| 1322 | + params2 = self.getBugTaskSearchParams( |
| 1323 | + user=None, status=BugTaskStatus.NEW, |
| 1324 | + structural_subscriber=subscriber) |
| 1325 | + search_result = self.runSearch(params1, params2) |
| 1326 | + expected = self.resultValuesForBugtasks(self.bugtasks[1:]) |
| 1327 | + self.assertEqual(expected, search_result) |
| 1328 | + |
| 1329 | + |
| 1330 | +class PreloadBugtaskTargets(MultipleParams): |
| 1331 | """Preload bug targets during a BugTaskSet.search() query.""" |
| 1332 | |
| 1333 | def setUp(self): |
| 1334 | @@ -849,7 +912,7 @@ |
| 1335 | return expected_bugtasks |
| 1336 | |
| 1337 | |
| 1338 | -class NoPreloadBugtaskTargets: |
| 1339 | +class NoPreloadBugtaskTargets(MultipleParams): |
| 1340 | """Do not preload bug targets during a BugTaskSet.search() query.""" |
| 1341 | |
| 1342 | def setUp(self): |
| 1343 | |
| 1344 | === modified file 'lib/lp/buildmaster/manager.py' |
| 1345 | --- lib/lp/buildmaster/manager.py 2010-10-28 15:04:15 +0000 |
| 1346 | +++ lib/lp/buildmaster/manager.py 2010-11-19 10:27:10 +0000 |
| 1347 | @@ -151,10 +151,12 @@ |
| 1348 | if failure.check( |
| 1349 | BuildSlaveFailure, CannotBuild, BuildBehaviorMismatch, |
| 1350 | CannotResumeHost, BuildDaemonError, CannotFetchFile): |
| 1351 | - self.logger.info("Scanning failed with: %s" % error_message) |
| 1352 | + self.logger.info("Scanning %s failed with: %s" % ( |
| 1353 | + self.builder_name, error_message)) |
| 1354 | else: |
| 1355 | - self.logger.info("Scanning failed with: %s\n%s" % |
| 1356 | - (failure.getErrorMessage(), failure.getTraceback())) |
| 1357 | + self.logger.info("Scanning %s failed with: %s\n%s" % ( |
| 1358 | + self.builder_name, failure.getErrorMessage(), |
| 1359 | + failure.getTraceback())) |
| 1360 | |
| 1361 | # Decide if we need to terminate the job or fail the |
| 1362 | # builder. |
| 1363 | |
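The logging change above just interpolates the builder name into the failure message so log lines can be attributed to a specific builder. A minimal sketch of the message formatting, with `format_scan_failure` as a hypothetical helper (not a Launchpad function):

```python
def format_scan_failure(builder_name, error_message, traceback=None):
    """Build the scan-failure log message; append a traceback if given."""
    if traceback is None:
        return "Scanning %s failed with: %s" % (builder_name, error_message)
    return "Scanning %s failed with: %s\n%s" % (
        builder_name, error_message, traceback)

print(format_scan_failure("bob", "connection lost"))
# Scanning bob failed with: connection lost
```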
| 1364 | === modified file 'lib/lp/buildmaster/model/builder.py' |
| 1365 | --- lib/lp/buildmaster/model/builder.py 2010-11-10 13:06:05 +0000 |
| 1366 | +++ lib/lp/buildmaster/model/builder.py 2010-11-19 10:27:10 +0000 |
| 1367 | @@ -8,6 +8,7 @@ |
| 1368 | __all__ = [ |
| 1369 | 'Builder', |
| 1370 | 'BuilderSet', |
| 1371 | + 'ProxyWithConnectionTimeout', |
| 1372 | 'rescueBuilderIfLost', |
| 1373 | 'updateBuilderStatus', |
| 1374 | ] |
| 1375 | @@ -99,6 +100,41 @@ |
| 1376 | noisy = False |
| 1377 | |
| 1378 | |
| 1379 | +class ProxyWithConnectionTimeout(xmlrpc.Proxy): |
| 1380 | + """Extend Twisted's Proxy to provide a configurable connection timeout.""" |
| 1381 | + |
| 1382 | + def __init__(self, url, user=None, password=None, allowNone=False, |
| 1383 | + useDateTime=False, timeout=None): |
| 1384 | + xmlrpc.Proxy.__init__( |
| 1385 | + self, url, user, password, allowNone, useDateTime) |
| 1386 | + if timeout is None: |
| 1387 | + self.timeout = config.builddmaster.socket_timeout |
| 1388 | + else: |
| 1389 | + self.timeout = timeout |
| 1390 | + |
| 1391 | + def callRemote(self, method, *args): |
| 1392 | + """Basically a carbon copy of the parent but passes the timeout |
| 1393 | + to connectTCP.""" |
| 1394 | + |
| 1395 | + def cancel(d): |
| 1396 | + factory.deferred = None |
| 1397 | + connector.disconnect() |
| 1398 | + factory = self.queryFactory( |
| 1399 | + self.path, self.host, method, self.user, |
| 1400 | + self.password, self.allowNone, args, cancel, self.useDateTime) |
| 1401 | + if self.secure: |
| 1402 | + from twisted.internet import ssl |
| 1403 | + connector = default_reactor.connectSSL( |
| 1404 | + self.host, self.port or 443, factory, |
| 1405 | + ssl.ClientContextFactory(), |
| 1406 | + timeout=self.timeout) |
| 1407 | + else: |
| 1408 | + connector = default_reactor.connectTCP( |
| 1409 | + self.host, self.port or 80, factory, |
| 1410 | + timeout=self.timeout) |
| 1411 | + return factory.deferred |
| 1412 | + |
| 1413 | + |
| 1414 | class BuilderSlave(object): |
| 1415 | """Add in a few useful methods for the XMLRPC slave. |
| 1416 | |
| 1417 | @@ -141,7 +177,7 @@ |
| 1418 | """ |
| 1419 | rpc_url = urlappend(builder_url.encode('utf-8'), 'rpc') |
| 1420 | if proxy is None: |
| 1421 | - server_proxy = xmlrpc.Proxy(rpc_url, allowNone=True) |
| 1422 | + server_proxy = ProxyWithConnectionTimeout(rpc_url, allowNone=True) |
| 1423 | server_proxy.queryFactory = QuietQueryFactory |
| 1424 | else: |
| 1425 | server_proxy = proxy |
| 1426 | @@ -213,7 +249,7 @@ |
| 1427 | :param libraryfilealias: An `ILibraryFileAlias`. |
| 1428 | """ |
| 1429 | url = libraryfilealias.http_url |
| 1430 | - logger.debug( |
| 1431 | + logger.info( |
| 1432 | "Asking builder on %s to ensure it has file %s (%s, %s)" % ( |
| 1433 | self._file_cache_url, libraryfilealias.filename, url, |
| 1434 | libraryfilealias.content.sha1)) |
| 1435 | @@ -432,7 +468,7 @@ |
| 1436 | return defer.fail(CannotResumeHost('Undefined vm_host.')) |
| 1437 | |
| 1438 | logger = self._getSlaveScannerLogger() |
| 1439 | - logger.debug("Resuming %s (%s)" % (self.name, self.url)) |
| 1440 | + logger.info("Resuming %s (%s)" % (self.name, self.url)) |
| 1441 | |
| 1442 | d = self.slave.resume() |
| 1443 | def got_resume_ok((stdout, stderr, returncode)): |
| 1444 | |
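The constructor of `ProxyWithConnectionTimeout` reduces to a simple fallback: an explicit timeout wins, otherwise the configured default applies. A sketch of that logic, using a stand-in constant for `config.builddmaster.socket_timeout`:

```python
CONFIG_SOCKET_TIMEOUT = 180  # stand-in for config.builddmaster.socket_timeout


def effective_timeout(timeout=None):
    """Mirror the constructor's fallback: explicit timeout wins,
    otherwise use the configured default."""
    if timeout is None:
        return CONFIG_SOCKET_TIMEOUT
    return timeout

print(effective_timeout())    # 180
print(effective_timeout(30))  # 30
```

Note the `is None` check rather than truthiness, so a caller can still pass an explicit timeout of `0`.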
| 1445 | === modified file 'lib/lp/buildmaster/tests/test_builder.py' |
| 1446 | --- lib/lp/buildmaster/tests/test_builder.py 2010-11-10 22:40:05 +0000 |
| 1447 | +++ lib/lp/buildmaster/tests/test_builder.py 2010-11-19 10:27:10 +0000 |
| 1448 | @@ -43,6 +43,10 @@ |
| 1449 | ) |
| 1450 | from lp.buildmaster.interfaces.buildqueue import IBuildQueueSet |
| 1451 | from lp.buildmaster.interfaces.builder import CannotResumeHost |
| 1452 | +from lp.buildmaster.model.builder import ( |
| 1453 | + BuilderSlave, |
| 1454 | + ProxyWithConnectionTimeout, |
| 1455 | + ) |
| 1456 | from lp.buildmaster.model.buildfarmjobbehavior import IdleBuildBehavior |
| 1457 | from lp.buildmaster.model.buildqueue import BuildQueue |
| 1458 | from lp.buildmaster.tests.mock_slaves import ( |
| 1459 | @@ -1059,6 +1063,56 @@ |
| 1460 | self.slave.build(None, None, None, None, None)) |
| 1461 | |
| 1462 | |
| 1463 | +class TestSlaveConnectionTimeouts(TrialTestCase): |
| 1464 | + # Testing that we can override the default 30-second connection |
| 1465 | + # timeout. |
| 1466 | + |
| 1467 | + layer = TwistedLayer |
| 1468 | + |
| 1469 | + def setUp(self): |
| 1470 | + super(TestSlaveConnectionTimeouts, self).setUp() |
| 1471 | + self.slave_helper = SlaveTestHelpers() |
| 1472 | + self.slave_helper.setUp() |
| 1473 | + self.addCleanup(self.slave_helper.cleanUp) |
| 1474 | + self.clock = Clock() |
| 1475 | + self.proxy = ProxyWithConnectionTimeout("fake_url") |
| 1476 | + self.slave = self.slave_helper.getClientSlave( |
| 1477 | + reactor=self.clock, proxy=self.proxy) |
| 1478 | + |
| 1479 | + def test_connection_timeout(self): |
| 1480 | + # The default 30-second timeout should not apply here; only the |
| 1481 | + # configured socket_timeout value should. |
| 1482 | + timeout_config = """ |
| 1483 | + [builddmaster] |
| 1484 | + socket_timeout: 180 |
| 1485 | + """ |
| 1486 | + config.push('timeout', timeout_config) |
| 1487 | + self.addCleanup(config.pop, 'timeout') |
| 1488 | + |
| 1489 | + d = self.slave.echo() |
| 1490 | + # Advance past the 30 second timeout. The real reactor will |
| 1491 | + # never call connectTCP() since we're not spinning it up. This |
| 1492 | + # avoids "connection refused" errors and simulates an |
| 1493 | + # environment where the endpoint doesn't respond. |
| 1494 | + self.clock.advance(31) |
| 1495 | + self.assertFalse(d.called) |
| 1496 | + |
| 1497 | + # Now advance past the real socket timeout and expect a |
| 1498 | + # Failure. |
| 1499 | + |
| 1500 | + def got_timeout(failure): |
| 1501 | + self.assertIsInstance(failure.value, CancelledError) |
| 1502 | + |
| 1503 | + d.addBoth(got_timeout) |
| 1504 | + self.clock.advance(config.builddmaster.socket_timeout + 1) |
| 1505 | + self.assertTrue(d.called) |
| 1506 | + |
| 1507 | + def test_BuilderSlave_uses_ProxyWithConnectionTimeout(self): |
| 1508 | + # Make sure that BuilderSlaves use the custom proxy class. |
| 1509 | + slave = BuilderSlave.makeBuilderSlave("url", "host") |
| 1510 | + self.assertIsInstance(slave._server, ProxyWithConnectionTimeout) |
| 1511 | + |
| 1512 | + |
| 1513 | class TestSlaveWithLibrarian(TrialTestCase): |
| 1514 | """Tests that need more of Launchpad to run.""" |
| 1515 | |
| 1516 | |
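The test above drives time forward with Twisted's `task.Clock` instead of waiting in real time. The same idea can be sketched without Twisted using a minimal fake clock; all names here are illustrative, not Launchpad or Twisted APIs:

```python
class FakeClock:
    """Minimal stand-in for twisted.internet.task.Clock: callbacks are
    queued with a delay and fire only when advance() passes their due time."""

    def __init__(self):
        self.now = 0
        self._pending = []  # list of (due_time, callback)

    def call_later(self, delay, callback):
        self._pending.append((self.now + delay, callback))

    def advance(self, seconds):
        self.now += seconds
        due = [cb for t, cb in self._pending if t <= self.now]
        self._pending = [(t, cb) for t, cb in self._pending if t > self.now]
        for cb in due:
            cb()

clock = FakeClock()
fired = []
clock.call_later(180, lambda: fired.append("timeout"))
clock.advance(31)   # past the default 30s: nothing fires
assert fired == []
clock.advance(150)  # past the configured 180s: the timeout fires
assert fired == ["timeout"]
```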
| 1517 | === modified file 'lib/lp/code/browser/branchlisting.py' |
| 1518 | --- lib/lp/code/browser/branchlisting.py 2010-11-09 07:13:41 +0000 |
| 1519 | +++ lib/lp/code/browser/branchlisting.py 2010-11-19 10:27:10 +0000 |
| 1520 | @@ -94,6 +94,7 @@ |
| 1521 | PersonActiveReviewsView, |
| 1522 | PersonProductActiveReviewsView, |
| 1523 | ) |
| 1524 | +from lp.code.browser.branchmergequeuelisting import HasMergeQueuesMenuMixin |
| 1525 | from lp.code.browser.branchvisibilitypolicy import BranchVisibilityPolicyMixin |
| 1526 | from lp.code.browser.summary import BranchCountSummaryView |
| 1527 | from lp.code.enums import ( |
| 1528 | @@ -849,18 +850,19 @@ |
| 1529 | .scanned()) |
| 1530 | |
| 1531 | |
| 1532 | -class PersonBranchesMenu(ApplicationMenu): |
| 1533 | +class PersonBranchesMenu(ApplicationMenu, HasMergeQueuesMenuMixin): |
| 1534 | |
| 1535 | usedfor = IPerson |
| 1536 | facet = 'branches' |
| 1537 | links = ['registered', 'owned', 'subscribed', 'addbranch', |
| 1538 | - 'active_reviews'] |
| 1539 | + 'active_reviews', 'mergequeues'] |
| 1540 | extra_attributes = [ |
| 1541 | 'active_review_count', |
| 1542 | 'owned_branch_count', |
| 1543 | 'registered_branch_count', |
| 1544 | 'show_summary', |
| 1545 | 'subscribed_branch_count', |
| 1546 | + 'mergequeue_count', |
| 1547 | ] |
| 1548 | |
| 1549 | def _getCountCollection(self): |
| 1550 | |
| 1551 | === added file 'lib/lp/code/browser/branchmergequeuelisting.py' |
| 1552 | --- lib/lp/code/browser/branchmergequeuelisting.py 1970-01-01 00:00:00 +0000 |
| 1553 | +++ lib/lp/code/browser/branchmergequeuelisting.py 2010-11-19 10:27:10 +0000 |
| 1554 | @@ -0,0 +1,105 @@ |
| 1555 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 1556 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 1557 | + |
| 1558 | +"""Base class view for merge queue listings.""" |
| 1559 | + |
| 1560 | +__metaclass__ = type |
| 1561 | + |
| 1562 | +__all__ = [ |
| 1563 | + 'MergeQueueListingView', |
| 1564 | + 'HasMergeQueuesMenuMixin', |
| 1565 | + 'PersonMergeQueueListingView', |
| 1566 | + ] |
| 1567 | + |
| 1568 | +from zope.component import getUtility |
| 1569 | + |
| 1570 | +from canonical.launchpad.browser.feeds import FeedsMixin |
| 1571 | +from canonical.launchpad.webapp import ( |
| 1572 | + LaunchpadView, |
| 1573 | + Link, |
| 1574 | + ) |
| 1575 | +from lp.code.interfaces.branchmergequeuecollection import ( |
| 1576 | + IAllBranchMergeQueues, |
| 1577 | + ) |
| 1578 | +from lp.services.browser_helpers import get_plural_text |
| 1579 | +from lp.services.propertycache import cachedproperty |
| 1580 | + |
| 1581 | + |
| 1582 | +class HasMergeQueuesMenuMixin: |
| 1583 | + """A context menus mixin for objects that can own merge queues.""" |
| 1584 | + |
| 1585 | + def _getCollection(self): |
| 1586 | + return getUtility(IAllBranchMergeQueues).visibleByUser(self.user) |
| 1587 | + |
| 1588 | + @property |
| 1589 | + def person(self): |
| 1590 | + """The `IPerson` for the context of the view. |
| 1591 | + |
| 1592 | + In simple cases this is the context itself, but in others, like the |
| 1593 | + PersonProduct, it is an attribute of the context. |
| 1594 | + """ |
| 1595 | + return self.context |
| 1596 | + |
| 1597 | + def mergequeues(self): |
| 1598 | + return Link( |
| 1599 | + '+merge-queues', |
| 1600 | + get_plural_text( |
| 1601 | + self.mergequeue_count, |
| 1602 | + 'merge queue', 'merge queues'), site='code') |
| 1603 | + |
| 1604 | + @cachedproperty |
| 1605 | + def mergequeue_count(self): |
| 1606 | + return self._getCollection().ownedBy(self.person).count() |
| 1607 | + |
| 1608 | + |
| 1609 | +class MergeQueueListingView(LaunchpadView, FeedsMixin): |
| 1610 | + |
| 1611 | + # No feeds initially |
| 1612 | + feed_types = () |
| 1613 | + |
| 1614 | + branch_enabled = True |
| 1615 | + owner_enabled = True |
| 1616 | + |
| 1617 | + label_template = 'Merge Queues for %(displayname)s' |
| 1618 | + |
| 1619 | + @property |
| 1620 | + def label(self): |
| 1621 | + return self.label_template % { |
| 1622 | + 'displayname': self.context.displayname, |
| 1623 | + 'title': getattr(self.context, 'title', 'no-title')} |
| 1624 | + |
| 1625 | + # Provide a default page_title for distros and other things without |
| 1626 | + # breadcrumbs. |
| 1627 | + page_title = label |
| 1628 | + |
| 1629 | + def _getCollection(self): |
| 1630 | + """Override this to say what queues will be in the listing.""" |
| 1631 | + raise NotImplementedError(self._getCollection) |
| 1632 | + |
| 1633 | + def getVisibleQueuesForUser(self): |
| 1634 | + """Branch merge queues that are visible by the logged in user.""" |
| 1635 | + collection = self._getCollection().visibleByUser(self.user) |
| 1636 | + return collection.getMergeQueues() |
| 1637 | + |
| 1638 | + @cachedproperty |
| 1639 | + def mergequeues(self): |
| 1640 | + return self.getVisibleQueuesForUser() |
| 1641 | + |
| 1642 | + @cachedproperty |
| 1643 | + def mergequeue_count(self): |
| 1644 | + """Return the number of merge queues that will be returned.""" |
| 1645 | + return self._getCollection().visibleByUser(self.user).count() |
| 1646 | + |
| 1647 | + @property |
| 1648 | + def no_merge_queue_message(self): |
| 1649 | + """Shown when there is no table to show.""" |
| 1650 | + return "%s has no merge queues." % self.context.displayname |
| 1651 | + |
| 1652 | + |
| 1653 | +class PersonMergeQueueListingView(MergeQueueListingView): |
| 1654 | + |
| 1655 | + label_template = 'Merge Queues owned by %(displayname)s' |
| 1656 | + owner_enabled = False |
| 1657 | + |
| 1658 | + def _getCollection(self): |
| 1659 | + return getUtility(IAllBranchMergeQueues).ownedBy(self.context) |
| 1660 | |
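`mergequeue_count` above is a `cachedproperty`, so the count query runs at most once per view instance. The stdlib equivalent of that pattern (not Launchpad's own `lp.services.propertycache`) can be sketched as:

```python
import functools


class Listing:
    """Illustrative listing whose count is computed once, then cached."""

    def __init__(self, items):
        self._items = items
        self.compute_calls = 0  # track how often the count is computed

    @functools.cached_property
    def item_count(self):
        self.compute_calls += 1
        return len(self._items)

listing = Listing(["q1", "q2", "q3"])
print(listing.item_count)     # 3
print(listing.item_count)     # 3, served from the cache
print(listing.compute_calls)  # 1
```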
| 1661 | === modified file 'lib/lp/code/browser/configure.zcml' |
| 1662 | --- lib/lp/code/browser/configure.zcml 2010-11-08 17:17:45 +0000 |
| 1663 | +++ lib/lp/code/browser/configure.zcml 2010-11-19 10:27:10 +0000 |
| 1664 | @@ -1318,6 +1318,24 @@ |
| 1665 | for="lp.code.interfaces.sourcepackagerecipe.ISourcePackageRecipe" |
| 1666 | factory="canonical.launchpad.webapp.breadcrumb.NameBreadcrumb" |
| 1667 | permission="zope.Public"/> |
| 1668 | + |
| 1669 | + <browser:page |
| 1670 | + for="lp.registry.interfaces.person.IPerson" |
| 1671 | + layer="lp.code.publisher.CodeLayer" |
| 1672 | + class="lp.code.browser.branchmergequeuelisting.PersonMergeQueueListingView" |
| 1673 | + permission="zope.Public" |
| 1674 | + facet="branches" |
| 1675 | + name="+merge-queues" |
| 1676 | + template="../templates/branchmergequeue-listing.pt"/> |
| 1677 | + |
| 1678 | + <browser:page |
| 1679 | + for="*" |
| 1680 | + layer="lp.code.publisher.CodeLayer" |
| 1681 | + name="+bmq-macros" |
| 1682 | + permission="zope.Public" |
| 1683 | + template="../templates/branchmergequeue-macros.pt"/> |
| 1684 | + |
| 1685 | + |
| 1686 | </facet> |
| 1687 | |
| 1688 | <browser:url |
| 1689 | |
| 1690 | === added file 'lib/lp/code/browser/tests/test_branchmergequeuelisting.py' |
| 1691 | --- lib/lp/code/browser/tests/test_branchmergequeuelisting.py 1970-01-01 00:00:00 +0000 |
| 1692 | +++ lib/lp/code/browser/tests/test_branchmergequeuelisting.py 2010-11-19 10:27:10 +0000 |
| 1693 | @@ -0,0 +1,227 @@ |
| 1694 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 1695 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 1696 | + |
| 1697 | +"""Tests for branch listing.""" |
| 1698 | + |
| 1699 | +__metaclass__ = type |
| 1700 | + |
| 1701 | +import re |
| 1702 | + |
| 1703 | +from mechanize import LinkNotFoundError |
| 1704 | +import soupmatchers |
| 1705 | +from zope.security.proxy import removeSecurityProxy |
| 1706 | + |
| 1707 | +from canonical.launchpad.testing.pages import ( |
| 1708 | + extract_link_from_tag, |
| 1709 | + extract_text, |
| 1710 | + find_tag_by_id, |
| 1711 | + ) |
| 1712 | +from canonical.launchpad.webapp import canonical_url |
| 1713 | +from canonical.testing.layers import DatabaseFunctionalLayer |
| 1714 | +from lp.services.features.model import ( |
| 1715 | + FeatureFlag, |
| 1716 | + getFeatureStore, |
| 1717 | + ) |
| 1718 | +from lp.testing import ( |
| 1719 | + BrowserTestCase, |
| 1720 | + login_person, |
| 1721 | + person_logged_in, |
| 1722 | + TestCaseWithFactory, |
| 1723 | + ) |
| 1724 | +from lp.testing.views import create_initialized_view |
| 1725 | + |
| 1726 | + |
| 1727 | +class MergeQueuesTestMixin: |
| 1728 | + |
| 1729 | + def setUp(self): |
| 1730 | + self.branch_owner = self.factory.makePerson(name='eric') |
| 1731 | + |
| 1732 | + def enable_queue_flag(self): |
| 1733 | + getFeatureStore().add(FeatureFlag( |
| 1734 | + scope=u'default', flag=u'code.branchmergequeue', |
| 1735 | + value=u'on', priority=1)) |
| 1736 | + |
| 1737 | + def _makeMergeQueues(self, nr_queues=3, nr_with_private_branches=0): |
| 1738 | + # We create nr_queues merge queues in total, of which |
| 1739 | + # nr_with_private_branches will have at least one private |
| 1740 | + # branch in the queue. |
| 1741 | + with person_logged_in(self.branch_owner): |
| 1742 | + mergequeues = [ |
| 1743 | + self.factory.makeBranchMergeQueue( |
| 1744 | + owner=self.branch_owner, branches=self._makeBranches()) |
| 1745 | + for i in range(nr_queues-nr_with_private_branches)] |
| 1746 | + mergequeues_with_private_branches = [ |
| 1747 | + self.factory.makeBranchMergeQueue( |
| 1748 | + owner=self.branch_owner, |
| 1749 | + branches=self._makeBranches(nr_private=1)) |
| 1750 | + for i in range(nr_with_private_branches)] |
| 1751 | + |
| 1752 | + return mergequeues, mergequeues_with_private_branches |
| 1753 | + |
| 1754 | + def _makeBranches(self, nr_public=3, nr_private=0): |
| 1755 | + branches = [ |
| 1756 | + self.factory.makeProductBranch(owner=self.branch_owner) |
| 1757 | + for i in range(nr_public)] |
| 1758 | + |
| 1759 | + private_branches = [ |
| 1760 | + self.factory.makeProductBranch( |
| 1761 | + owner=self.branch_owner, private=True) |
| 1762 | + for i in range(nr_private)] |
| 1763 | + |
| 1764 | + branches.extend(private_branches) |
| 1765 | + return branches |
| 1766 | + |
| 1767 | + |
| 1768 | +class TestPersonMergeQueuesView(TestCaseWithFactory, MergeQueuesTestMixin): |
| 1769 | + |
| 1770 | + layer = DatabaseFunctionalLayer |
| 1771 | + |
| 1772 | + def setUp(self): |
| 1773 | + TestCaseWithFactory.setUp(self) |
| 1774 | + MergeQueuesTestMixin.setUp(self) |
| 1775 | + self.user = self.factory.makePerson() |
| 1776 | + |
| 1777 | + def test_mergequeues_with_all_public_branches(self): |
| 1778 | + # Anyone can see mergequeues containing all public branches. |
| 1779 | + mq, mq_with_private = self._makeMergeQueues() |
| 1780 | + login_person(self.user) |
| 1781 | + view = create_initialized_view( |
| 1782 | + self.branch_owner, name="+merge-queues", rootsite='code') |
| 1783 | + self.assertEqual(set(mq), set(view.mergequeues)) |
| 1784 | + |
| 1785 | + def test_mergequeues_with_a_private_branch_for_owner(self): |
| 1786 | + # Only users with access to private branches can see any queues |
| 1787 | + # containing such branches. |
| 1788 | + mq, mq_with_private = ( |
| 1789 | + self._makeMergeQueues(nr_with_private_branches=1)) |
| 1790 | + login_person(self.branch_owner) |
| 1791 | + view = create_initialized_view( |
| 1792 | + self.branch_owner, name="+merge-queues", rootsite='code') |
| 1793 | + mq.extend(mq_with_private) |
| 1794 | + self.assertEqual(set(mq), set(view.mergequeues)) |
| 1795 | + |
| 1796 | + def test_mergequeues_with_a_private_branch_for_other_user(self): |
| 1797 | + # Only users with access to private branches can see any queues |
| 1798 | + # containing such branches. |
| 1799 | + mq, mq_with_private = ( |
| 1800 | + self._makeMergeQueues(nr_with_private_branches=1)) |
| 1801 | + login_person(self.user) |
| 1802 | + view = create_initialized_view( |
| 1803 | + self.branch_owner, name="+merge-queues", rootsite='code') |
| 1804 | + self.assertEqual(set(mq), set(view.mergequeues)) |
| 1805 | + |
| 1806 | + |
| 1807 | +class TestPersonCodePage(BrowserTestCase, MergeQueuesTestMixin): |
| 1808 | + """Tests for the person code homepage. |
| 1809 | + |
| 1810 | + This is the default page shown for a person on the code subdomain. |
| 1811 | + """ |
| 1812 | + |
| 1813 | + layer = DatabaseFunctionalLayer |
| 1814 | + |
| 1815 | + def setUp(self): |
| 1816 | + BrowserTestCase.setUp(self) |
| 1817 | + MergeQueuesTestMixin.setUp(self) |
| 1818 | + self._makeMergeQueues() |
| 1819 | + |
| 1820 | + def test_merge_queue_menu_link_without_feature_flag(self): |
| 1821 | + login_person(self.branch_owner) |
| 1822 | + browser = self.getUserBrowser( |
| 1823 | + canonical_url(self.branch_owner, rootsite='code'), |
| 1824 | + self.branch_owner) |
| 1825 | + self.assertRaises( |
| 1826 | + LinkNotFoundError, |
| 1827 | + browser.getLink, |
| 1828 | + url='+merge-queues') |
| 1829 | + |
| 1830 | + def test_merge_queue_menu_link(self): |
| 1831 | + self.enable_queue_flag() |
| 1832 | + login_person(self.branch_owner) |
| 1833 | + browser = self.getUserBrowser( |
| 1834 | + canonical_url(self.branch_owner, rootsite='code'), |
| 1835 | + self.branch_owner) |
| 1836 | + browser.getLink(url='+merge-queues').click() |
| 1837 | + self.assertEqual( |
| 1838 | + 'http://code.launchpad.dev/~eric/+merge-queues', |
| 1839 | + browser.url) |
| 1840 | + |
| 1841 | + |
| 1842 | +class TestPersonMergeQueuesListPage(BrowserTestCase, MergeQueuesTestMixin): |
| 1843 | + """Tests for the person merge queue list page.""" |
| 1844 | + |
| 1845 | + layer = DatabaseFunctionalLayer |
| 1846 | + |
| 1847 | + def setUp(self): |
| 1848 | + BrowserTestCase.setUp(self) |
| 1849 | + MergeQueuesTestMixin.setUp(self) |
| 1850 | + mq, mq_with_private = self._makeMergeQueues() |
| 1851 | + self.merge_queues = mq |
| 1852 | + self.merge_queues.extend(mq_with_private) |
| 1853 | + |
| 1854 | + def test_merge_queue_list_contents_without_feature_flag(self): |
| 1855 | + login_person(self.branch_owner) |
| 1856 | + browser = self.getUserBrowser( |
| 1857 | + canonical_url(self.branch_owner, rootsite='code', |
| 1858 | + view_name='+merge-queues'), self.branch_owner) |
| 1859 | + table = find_tag_by_id(browser.contents, 'mergequeuetable') |
| 1860 | + self.assertIs(None, table) |
| 1861 | + noqueue_matcher = soupmatchers.HTMLContains( |
| 1862 | + soupmatchers.Tag( |
| 1863 | + 'No merge queues', 'div', |
| 1864 | + text=re.compile( |
| 1865 | + '\w*No merge queues\w*'))) |
| 1866 | + self.assertThat(browser.contents, noqueue_matcher) |
| 1867 | + |
| 1868 | + def test_merge_queue_list_contents(self): |
| 1869 | + self.enable_queue_flag() |
| 1870 | + login_person(self.branch_owner) |
| 1871 | + browser = self.getUserBrowser( |
| 1872 | + canonical_url(self.branch_owner, rootsite='code', |
| 1873 | + view_name='+merge-queues'), self.branch_owner) |
| 1874 | + |
| 1875 | + table = find_tag_by_id(browser.contents, 'mergequeuetable') |
| 1876 | + |
| 1877 | + merge_queue_info = {} |
| 1878 | + for row in table.tbody.fetch('tr'): |
| 1879 | + cells = row('td') |
| 1880 | + row_info = {} |
| 1881 | + queue_name = extract_text(cells[0]) |
| 1882 | + if not queue_name.startswith('queue'): |
| 1883 | + continue |
| 1884 | + qlink = extract_link_from_tag(cells[0].find('a')) |
| 1885 | + row_info['queue_link'] = qlink |
| 1886 | + queue_size = extract_text(cells[1]) |
| 1887 | + row_info['queue_size'] = queue_size |
| 1888 | + queue_branches = cells[2]('a') |
| 1889 | + branch_links = set() |
| 1890 | + for branch_tag in queue_branches: |
| 1891 | + branch_links.add(extract_link_from_tag(branch_tag)) |
| 1892 | + row_info['branch_links'] = branch_links |
| 1893 | + merge_queue_info[queue_name] = row_info |
| 1894 | + |
| 1895 | + expected_queue_names = [queue.name for queue in self.merge_queues] |
| 1896 | + self.assertEqual( |
| 1897 | + set(expected_queue_names), set(merge_queue_info.keys())) |
| 1898 | + |
| 1899 | + # TODO: when the IBranchMergeQueue API is available, remove '4' |
| 1900 | + expected_queue_sizes = dict( |
| 1901 | + [(queue.name, '4') for queue in self.merge_queues]) |
| 1902 | + observed_queue_sizes = dict( |
| 1903 | + [(queue.name, merge_queue_info[queue.name]['queue_size']) |
| 1904 | + for queue in self.merge_queues]) |
| 1905 | + self.assertEqual( |
| 1906 | + expected_queue_sizes, observed_queue_sizes) |
| 1907 | + |
| 1908 | + def branch_links(branches): |
| 1909 | + return [canonical_url(removeSecurityProxy(branch), |
| 1910 | + force_local_path=True) |
| 1911 | + for branch in branches] |
| 1912 | + |
| 1913 | + expected_queue_branches = dict( |
| 1914 | + [(queue.name, set(branch_links(queue.branches))) |
| 1915 | + for queue in self.merge_queues]) |
| 1916 | + observed_queue_branches = dict( |
| 1917 | + [(queue.name, merge_queue_info[queue.name]['branch_links']) |
| 1918 | + for queue in self.merge_queues]) |
| 1919 | + self.assertEqual( |
| 1920 | + expected_queue_branches, observed_queue_branches) |
| 1921 | |
| 1922 | === modified file 'lib/lp/code/configure.zcml' |
| 1923 | --- lib/lp/code/configure.zcml 2010-11-11 11:55:53 +0000 |
| 1924 | +++ lib/lp/code/configure.zcml 2010-11-19 10:27:10 +0000 |
| 1925 | @@ -94,6 +94,12 @@ |
| 1926 | <allow attributes="browserDefault |
| 1927 | __call__"/> |
| 1928 | </class> |
| 1929 | + <class class="lp.code.model.branchmergequeuecollection.GenericBranchMergeQueueCollection"> |
| 1930 | + <allow interface="lp.code.interfaces.branchmergequeuecollection.IBranchMergeQueueCollection"/> |
| 1931 | + </class> |
| 1932 | + <class class="lp.code.model.branchmergequeuecollection.VisibleBranchMergeQueueCollection"> |
| 1933 | + <allow interface="lp.code.interfaces.branchmergequeuecollection.IBranchMergeQueueCollection"/> |
| 1934 | + </class> |
| 1935 | <class class="lp.code.model.branchcollection.GenericBranchCollection"> |
| 1936 | <allow interface="lp.code.interfaces.branchcollection.IBranchCollection"/> |
| 1937 | </class> |
| 1938 | @@ -148,6 +154,11 @@ |
| 1939 | provides="lp.code.interfaces.revisioncache.IRevisionCache"> |
| 1940 | <allow interface="lp.code.interfaces.revisioncache.IRevisionCache"/> |
| 1941 | </securedutility> |
| 1942 | + <securedutility |
| 1943 | + class="lp.code.model.branchmergequeuecollection.GenericBranchMergeQueueCollection" |
| 1944 | + provides="lp.code.interfaces.branchmergequeuecollection.IAllBranchMergeQueues"> |
| 1945 | + <allow interface="lp.code.interfaces.branchmergequeuecollection.IAllBranchMergeQueues"/> |
| 1946 | + </securedutility> |
| 1947 | <adapter |
| 1948 | for="lp.registry.interfaces.person.IPerson" |
| 1949 | provides="lp.code.interfaces.revisioncache.IRevisionCache" |
| 1950 | |
| 1951 | === modified file 'lib/lp/code/interfaces/branchmergequeue.py' |
| 1952 | --- lib/lp/code/interfaces/branchmergequeue.py 2010-10-20 15:32:38 +0000 |
| 1953 | +++ lib/lp/code/interfaces/branchmergequeue.py 2010-11-19 10:27:10 +0000 |
| 1954 | @@ -8,6 +8,7 @@ |
| 1955 | __all__ = [ |
| 1956 | 'IBranchMergeQueue', |
| 1957 | 'IBranchMergeQueueSource', |
| 1958 | + 'user_has_special_merge_queue_access', |
| 1959 | ] |
| 1960 | |
| 1961 | from lazr.restful.declarations import ( |
| 1962 | @@ -21,6 +22,7 @@ |
| 1963 | CollectionField, |
| 1964 | Reference, |
| 1965 | ) |
| 1966 | +from zope.component import getUtility |
| 1967 | from zope.interface import Interface |
| 1968 | from zope.schema import ( |
| 1969 | Datetime, |
| 1970 | @@ -30,6 +32,7 @@ |
| 1971 | ) |
| 1972 | |
| 1973 | from canonical.launchpad import _ |
| 1974 | +from canonical.launchpad.interfaces.launchpad import ILaunchpadCelebrities |
| 1975 | from lp.services.fields import ( |
| 1976 | PersonChoice, |
| 1977 | PublicPersonChoice, |
| 1978 | @@ -113,3 +116,14 @@ |
| 1979 | :param registrant: The registrant of the queue. |
| 1980 | :param branches: A list of branches to add to the queue. |
| 1981 | """ |
| 1982 | + |
| 1983 | + |
| 1984 | +def user_has_special_merge_queue_access(user): |
| 1985 | + """Admins and bazaar experts have special access. |
| 1986 | + |
| 1987 | + :param user: A 'Person' or None. |
| 1988 | + """ |
| 1989 | + if user is None: |
| 1990 | + return False |
| 1991 | + celebs = getUtility(ILaunchpadCelebrities) |
| 1992 | + return user.inTeam(celebs.admin) or user.inTeam(celebs.bazaar_experts) |
| 1993 | |
| 1994 | === added file 'lib/lp/code/interfaces/branchmergequeuecollection.py' |
| 1995 | --- lib/lp/code/interfaces/branchmergequeuecollection.py 1970-01-01 00:00:00 +0000 |
| 1996 | +++ lib/lp/code/interfaces/branchmergequeuecollection.py 2010-11-19 10:27:10 +0000 |
| 1997 | @@ -0,0 +1,64 @@ |
| 1998 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 1999 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 2000 | + |
| 2001 | +# pylint: disable-msg=E0211, E0213 |
| 2002 | + |
| 2003 | +"""A collection of branche merge queues. |
| 2004 | + |
| 2005 | +See `IBranchMergeQueueCollection` for more details. |
| 2006 | +""" |
| 2007 | + |
| 2008 | +__metaclass__ = type |
| 2009 | +__all__ = [ |
| 2010 | + 'IAllBranchMergeQueues', |
| 2011 | + 'IBranchMergeQueueCollection', |
| 2012 | + 'InvalidFilter', |
| 2013 | + ] |
| 2014 | + |
| 2015 | +from zope.interface import Interface |
| 2016 | + |
| 2017 | + |
| 2018 | +class InvalidFilter(Exception): |
| 2019 | + """Raised when an `IBranchMergeQueueCollection` can't apply the filter.""" |
| 2020 | + |
| 2021 | + |
| 2022 | +class IBranchMergeQueueCollection(Interface): |
| 2023 | + """A collection of branch merge queues. |
| 2024 | + |
| 2025 | + An `IBranchMergeQueueCollection` is an immutable collection of branch |
| 2026 | + merge queues. It has two kinds of methods: |
| 2027 | + filter methods and query methods. |
| 2028 | + |
| 2029 | + Query methods get information about the contents of the collection. See |
| 2030 | + `IBranchMergeQueueCollection.count` and |
| 2031 | + `IBranchMergeQueueCollection.getMergeQueues`. |
| 2032 | + |
| 2033 | + Implementations of this interface are not 'content classes'. That is, they |
| 2034 | + do not correspond to a particular row in the database. |
| 2035 | + |
| 2036 | + This interface is intended for use within Launchpad, not to be exported as |
| 2037 | + a public API. |
| 2038 | + """ |
| 2039 | + |
| 2040 | + def count(): |
| 2041 | + """The number of merge queues in this collection.""" |
| 2042 | + |
| 2043 | + def getMergeQueues(): |
| 2044 | + """Return a result set of all merge queues in this collection. |
| 2045 | + |
| 2046 | + The returned result set will also join across the specified tables as |
| 2047 | + defined by the arguments to this function. These extra tables are |
| 2048 | + joined specifically to allow the caller to sort on values not in the |
| 2049 | + BranchMergeQueue table itself. |
| 2050 | + """ |
| 2051 | + |
| 2052 | + def ownedBy(person): |
| 2053 | + """Restrict the collection to queues owned by 'person'.""" |
| 2054 | + |
| 2055 | + def visibleByUser(person): |
| 2056 | + """Restrict the collection to queues that 'person' is allowed to see. |
| 2057 | + """ |
| 2058 | + |
| 2059 | + |
| 2060 | +class IAllBranchMergeQueues(IBranchMergeQueueCollection): |
| 2061 | + """An `IBranchMergeQueueCollection` of all branch merge queues.""" |
| 2062 | |
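The filter/query split that `IBranchMergeQueueCollection` describes can be sketched in a few lines: filter methods return a new, narrowed collection; query methods inspect it. The dict-shaped queues and predicate tuple below are assumptions for illustration, not the Storm-backed implementation:

```python
class QueueCollection:
    """Immutable collection: ownedBy() returns a narrowed copy
    rather than mutating self."""

    def __init__(self, queues, predicates=()):
        self._queues = tuple(queues)
        self._predicates = tuple(predicates)

    def ownedBy(self, person):
        # Filter method: add a predicate, leave self unchanged.
        preds = self._predicates + ((lambda q: q["owner"] == person),)
        return QueueCollection(self._queues, preds)

    def getMergeQueues(self):
        # Query method: apply every accumulated predicate.
        return [q for q in self._queues
                if all(p(q) for p in self._predicates)]

    def count(self):
        return len(self.getMergeQueues())

all_queues = QueueCollection([
    {"name": "q1", "owner": "eric"},
    {"name": "q2", "owner": "mark"},
])
erics = all_queues.ownedBy("eric")
print(erics.count())       # 1
print(all_queues.count())  # 2 -- the original collection is untouched
```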
| 2063 | === modified file 'lib/lp/code/model/branchmergequeue.py' |
| 2064 | --- lib/lp/code/model/branchmergequeue.py 2010-10-28 03:08:41 +0000 |
| 2065 | +++ lib/lp/code/model/branchmergequeue.py 2010-11-19 10:27:10 +0000 |
| 2066 | @@ -7,7 +7,6 @@ |
| 2067 | __all__ = ['BranchMergeQueue'] |
| 2068 | |
| 2069 | import simplejson |
| 2070 | - |
| 2071 | from storm.locals import ( |
| 2072 | Int, |
| 2073 | Reference, |
| 2074 | @@ -68,7 +67,7 @@ |
| 2075 | |
| 2076 | @classmethod |
| 2077 | def new(cls, name, owner, registrant, description=None, |
| 2078 | - configuration=None): |
| 2079 | + configuration=None, branches=None): |
| 2080 | """See `IBranchMergeQueueSource`.""" |
| 2081 | store = IMasterStore(BranchMergeQueue) |
| 2082 | |
| 2083 | @@ -81,6 +80,9 @@ |
| 2084 | queue.registrant = registrant |
| 2085 | queue.description = description |
| 2086 | queue.configuration = configuration |
| 2087 | + if branches is not None: |
| 2088 | + for branch in branches: |
| 2089 | + branch.addToQueue(queue) |
| 2090 | |
| 2091 | store.add(queue) |
| 2092 | return queue |
| 2093 | |
| 2094 | === added file 'lib/lp/code/model/branchmergequeuecollection.py' |
| 2095 | --- lib/lp/code/model/branchmergequeuecollection.py 1970-01-01 00:00:00 +0000 |
| 2096 | +++ lib/lp/code/model/branchmergequeuecollection.py 2010-11-19 10:27:10 +0000 |
| 2097 | @@ -0,0 +1,174 @@ |
| 2098 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 2099 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 2100 | + |
| 2101 | +"""Implementations of `IBranchMergeQueueCollection`.""" |
| 2102 | + |
| 2103 | +__metaclass__ = type |
| 2104 | +__all__ = [ |
| 2105 | + 'GenericBranchMergeQueueCollection', |
| 2106 | + ] |
| 2107 | + |
| 2108 | +from zope.interface import implements |
| 2109 | + |
| 2110 | +from canonical.launchpad.interfaces.lpstorm import IMasterStore |
| 2111 | +from lp.code.interfaces.branchmergequeue import ( |
| 2112 | + user_has_special_merge_queue_access, |
| 2113 | + ) |
| 2114 | +from lp.code.interfaces.branchmergequeuecollection import ( |
| 2115 | + IBranchMergeQueueCollection, |
| 2116 | + InvalidFilter, |
| 2117 | + ) |
| 2118 | +from lp.code.interfaces.codehosting import LAUNCHPAD_SERVICES |
| 2119 | +from lp.code.model.branchmergequeue import BranchMergeQueue |
| 2120 | + |
| 2121 | + |
| 2122 | +class GenericBranchMergeQueueCollection: |
| 2123 | + """See `IBranchMergeQueueCollection`.""" |
| 2124 | + |
| 2125 | + implements(IBranchMergeQueueCollection) |
| 2126 | + |
| 2127 | + def __init__(self, store=None, merge_queue_filter_expressions=None, |
| 2128 | + tables=None, exclude_from_search=None): |
| 2129 | + """Construct a `GenericBranchMergeQueueCollection`. |
| 2130 | + |
| 2131 | + :param store: The store to look in for merge queues. If not specified, |
| 2132 | + use the default store. |
| 2133 | + :param merge_queue_filter_expressions: A list of Storm expressions to |
| 2134 | + restrict the queues in the collection. If unspecified, then |
| 2135 | + there will be no restrictions on the result set. That is, all |
| 2136 | + queues in the store will be in the collection. |
| 2137 | + :param tables: A dict mapping Storm tables to Join expressions. If |
| 2138 | + an expression in merge_queue_filter_expressions refers to a |
| 2139 | + table, then that table *must* be in this dict. |
| 2140 | + """ |
| 2141 | + self._store = store |
| 2142 | + if merge_queue_filter_expressions is None: |
| 2143 | + merge_queue_filter_expressions = [] |
| 2144 | + self._merge_queue_filter_expressions = merge_queue_filter_expressions |
| 2145 | + if tables is None: |
| 2146 | + tables = {} |
| 2147 | + self._tables = tables |
| 2148 | + if exclude_from_search is None: |
| 2149 | + exclude_from_search = [] |
| 2150 | + self._exclude_from_search = exclude_from_search |
| 2151 | + |
| 2152 | + def count(self): |
| 2153 | + return self._getCount() |
| 2154 | + |
| 2155 | + def _getCount(self): |
| 2156 | + """See `IBranchMergeQueueCollection`.""" |
| 2157 | + return self._getMergeQueues().count() |
| 2158 | + |
| 2159 | + @property |
| 2160 | + def store(self): |
| 2161 | + if self._store is None: |
| 2162 | + return IMasterStore(BranchMergeQueue) |
| 2163 | + else: |
| 2164 | + return self._store |
| 2165 | + |
| 2166 | + def _filterBy(self, expressions, table=None, join=None, |
| 2167 | + exclude_from_search=None): |
| 2168 | + """Return a subset of this collection, filtered by 'expressions'.""" |
| 2169 | + tables = self._tables.copy() |
| 2170 | + if table is not None: |
| 2171 | + if join is None: |
| 2172 | + raise InvalidFilter("Cannot specify a table without a join.") |
| 2173 | + tables[table] = join |
| 2174 | + if exclude_from_search is None: |
| 2175 | + exclude_from_search = [] |
| 2176 | + if expressions is None: |
| 2177 | + expressions = [] |
| 2178 | + return self.__class__( |
| 2179 | + self.store, |
| 2180 | + self._merge_queue_filter_expressions + expressions, |
| 2181 | + tables, |
| 2182 | + self._exclude_from_search + exclude_from_search) |
| 2183 | + |
| 2184 | + def _getMergeQueueExpressions(self): |
| 2185 | + """Return the where expressions for this collection.""" |
| 2186 | + return self._merge_queue_filter_expressions |
| 2187 | + |
| 2188 | + def getMergeQueues(self): |
| 2189 | + return list(self._getMergeQueues()) |
| 2190 | + |
| 2191 | + def _getMergeQueues(self): |
| 2192 | + """See `IBranchMergeQueueCollection`.""" |
| 2193 | + tables = [BranchMergeQueue] + self._tables.values() |
| 2194 | + expressions = self._getMergeQueueExpressions() |
| 2195 | + return self.store.using(*tables).find(BranchMergeQueue, *expressions) |
| 2196 | + |
| 2197 | + def ownedBy(self, person): |
| 2198 | + """See `IBranchMergeQueueCollection`.""" |
| 2199 | + return self._filterBy([BranchMergeQueue.owner == person]) |
| 2200 | + |
| 2201 | + def visibleByUser(self, person): |
| 2202 | + """See `IBranchMergeQueueCollection`.""" |
| 2203 | + if (person == LAUNCHPAD_SERVICES or |
| 2204 | + user_has_special_merge_queue_access(person)): |
| 2205 | + return self |
| 2206 | + return VisibleBranchMergeQueueCollection( |
| 2207 | + person, |
| 2208 | + self._store, None, |
| 2209 | + self._tables, self._exclude_from_search) |
| 2210 | + |
| 2211 | + |
| 2212 | +class VisibleBranchMergeQueueCollection(GenericBranchMergeQueueCollection): |
| 2213 | + """A merge queue collection providing only queues visible to a user.""" |
| 2214 | + |
| 2215 | + def __init__(self, person, store=None, |
| 2216 | + merge_queue_filter_expressions=None, tables=None, |
| 2217 | + exclude_from_search=None): |
| 2218 | + super(VisibleBranchMergeQueueCollection, self).__init__( |
| 2219 | + store=store, |
| 2220 | + merge_queue_filter_expressions=merge_queue_filter_expressions, |
| 2221 | + tables=tables, |
| 2222 | + exclude_from_search=exclude_from_search, |
| 2223 | + ) |
| 2224 | + self._user = person |
| 2225 | + |
| 2226 | + def _filterBy(self, expressions, table=None, join=None, |
| 2227 | + exclude_from_search=None): |
| 2228 | + """Return a subset of this collection, filtered by 'expressions'.""" |
| 2229 | + tables = self._tables.copy() |
| 2230 | + if table is not None: |
| 2231 | + if join is None: |
| 2232 | + raise InvalidFilter("Cannot specify a table without a join.") |
| 2233 | + tables[table] = join |
| 2234 | + if exclude_from_search is None: |
| 2235 | + exclude_from_search = [] |
| 2236 | + if expressions is None: |
| 2237 | + expressions = [] |
| 2238 | + return self.__class__( |
| 2239 | + self._user, |
| 2240 | + self.store, |
| 2241 | + self._merge_queue_filter_expressions + expressions, |
| 2242 | + tables, |
| 2243 | + self._exclude_from_search + exclude_from_search) |
| 2244 | + |
| 2245 | + def visibleByUser(self, person): |
| 2246 | + """See `IBranchMergeQueueCollection`.""" |
| 2247 | + if person == self._user: |
| 2248 | + return self |
| 2249 | + raise InvalidFilter( |
| 2250 | + "Cannot filter for merge queues visible by user %r, already " |
| 2251 | + "filtering for %r" % (person, self._user)) |
| 2252 | + |
| 2253 | + def _getCount(self): |
| 2254 | + """See `IBranchMergeQueueCollection`.""" |
| 2255 | + return len(self._getMergeQueues()) |
| 2256 | + |
| 2257 | + def _getMergeQueues(self): |
| 2258 | + """Return the queues visible by self._user. |
| 2259 | + |
| 2260 | + A queue is visible to a user if that user can see all the branches |
| 2261 | + associated with the queue. |
| 2262 | + """ |
| 2263 | + |
| 2264 | + def allBranchesVisible(user, branches): |
| 2265 | + return all(branch.visibleByUser(user) |
| 2266 | + for branch in branches) |
| 2267 | + |
| 2268 | + queues = super( |
| 2269 | + VisibleBranchMergeQueueCollection, self)._getMergeQueues() |
| 2270 | + return [queue for queue in queues |
| 2271 | + if allBranchesVisible(self._user, queue.branches)] |
| 2272 | |
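The collection classes above follow an immutable-filter pattern: each restriction method (`ownedBy`, `visibleByUser`) returns a new collection carrying an extended list of filter expressions instead of mutating `self`, so filters compose safely. A minimal sketch of that pattern, with plain predicates standing in for Storm expressions (names here are illustrative, not the Launchpad API):

```python
# Sketch of the filter-composition pattern in
# GenericBranchMergeQueueCollection, using callables instead of Storm.

class Queue:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner


class QueueCollection:
    def __init__(self, queues, filters=None):
        self._queues = queues
        self._filters = list(filters or [])

    def _filterBy(self, predicate):
        # Return a NEW subset of this collection; self is unchanged.
        return QueueCollection(self._queues, self._filters + [predicate])

    def ownedBy(self, person):
        return self._filterBy(lambda q: q.owner == person)

    def getMergeQueues(self):
        return [q for q in self._queues
                if all(f(q) for f in self._filters)]

    def count(self):
        return len(self.getMergeQueues())


queues = [Queue('a', 'alice'), Queue('b', 'bob')]
collection = QueueCollection(queues)
print(collection.count())  # 2
print([q.name for q in collection.ownedBy('alice').getMergeQueues()])  # ['a']
```

Because `_filterBy` never mutates the receiver, the unfiltered collection keeps working after a filtered view has been derived from it.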
| 2273 | === modified file 'lib/lp/code/model/recipebuilder.py' |
| 2274 | --- lib/lp/code/model/recipebuilder.py 2010-10-27 14:25:19 +0000 |
| 2275 | +++ lib/lp/code/model/recipebuilder.py 2010-11-19 10:27:10 +0000 |
| 2276 | @@ -122,6 +122,8 @@ |
| 2277 | if chroot is None: |
| 2278 | raise CannotBuild("Unable to find a chroot for %s" % |
| 2279 | distroarchseries.displayname) |
| 2280 | + logger.info( |
| 2281 | + "Sending chroot file for recipe build to %s" % self._builder.name) |
| 2282 | d = self._builder.slave.cacheFile(logger, chroot) |
| 2283 | |
| 2284 | def got_cache_file(ignored): |
| 2285 | @@ -131,7 +133,7 @@ |
| 2286 | buildid = "%s-%s" % (self.build.id, build_queue_id) |
| 2287 | cookie = self.buildfarmjob.generateSlaveBuildCookie() |
| 2288 | chroot_sha1 = chroot.content.sha1 |
| 2289 | - logger.debug( |
| 2290 | + logger.info( |
| 2291 | "Initiating build %s on %s" % (buildid, self._builder.url)) |
| 2292 | |
| 2293 | return self._builder.slave.build( |
| 2294 | |
| 2295 | === added file 'lib/lp/code/model/tests/test_branchmergequeuecollection.py' |
| 2296 | --- lib/lp/code/model/tests/test_branchmergequeuecollection.py 1970-01-01 00:00:00 +0000 |
| 2297 | +++ lib/lp/code/model/tests/test_branchmergequeuecollection.py 2010-11-19 10:27:10 +0000 |
| 2298 | @@ -0,0 +1,201 @@ |
| 2299 | +# Copyright 2010 Canonical Ltd. This software is licensed under the |
| 2300 | +# GNU Affero General Public License version 3 (see the file LICENSE). |
| 2301 | + |
| 2302 | +"""Tests for branch merge queue collections.""" |
| 2303 | + |
| 2304 | +__metaclass__ = type |
| 2305 | + |
| 2306 | +from zope.component import getUtility |
| 2307 | +from zope.security.proxy import removeSecurityProxy |
| 2308 | + |
| 2309 | +from canonical.launchpad.interfaces.launchpad import ILaunchpadCelebrities |
| 2310 | +from canonical.launchpad.interfaces.lpstorm import IMasterStore |
| 2311 | +from canonical.testing.layers import DatabaseFunctionalLayer |
| 2312 | +from lp.code.interfaces.branchmergequeuecollection import ( |
| 2313 | + IAllBranchMergeQueues, |
| 2314 | + IBranchMergeQueueCollection, |
| 2315 | + ) |
| 2316 | +from lp.code.interfaces.codehosting import LAUNCHPAD_SERVICES |
| 2317 | +from lp.code.model.branchmergequeue import BranchMergeQueue |
| 2318 | +from lp.code.model.branchmergequeuecollection import ( |
| 2319 | + GenericBranchMergeQueueCollection, |
| 2320 | + ) |
| 2321 | +from lp.testing import TestCaseWithFactory |
| 2322 | + |
| 2323 | + |
| 2324 | +class TestGenericBranchMergeQueueCollection(TestCaseWithFactory): |
| 2325 | + |
| 2326 | + layer = DatabaseFunctionalLayer |
| 2327 | + |
| 2328 | + def setUp(self): |
| 2329 | + TestCaseWithFactory.setUp(self) |
| 2330 | + self.store = IMasterStore(BranchMergeQueue) |
| 2331 | + |
| 2332 | + def test_provides_branchmergequeuecollection(self): |
| 2333 | + # `GenericBranchMergeQueueCollection` provides the |
| 2334 | + # `IBranchMergeQueueCollection` interface. |
| 2335 | + self.assertProvides( |
| 2336 | + GenericBranchMergeQueueCollection(self.store), |
| 2337 | + IBranchMergeQueueCollection) |
| 2338 | + |
| 2339 | + def test_getMergeQueues_no_filter_no_queues(self): |
| 2340 | + # If no filter is specified, then the collection is of all branch |
| 2341 | + # merge queues. By default, there are no branch merge queues. |
| 2342 | + collection = GenericBranchMergeQueueCollection(self.store) |
| 2343 | + self.assertEqual([], list(collection.getMergeQueues())) |
| 2344 | + |
| 2345 | + def test_getMergeQueues_no_filter(self): |
| 2346 | + # If no filter is specified, then the collection is of all branch |
| 2347 | + # merge queues. |
| 2348 | + collection = GenericBranchMergeQueueCollection(self.store) |
| 2349 | + queue = self.factory.makeBranchMergeQueue() |
| 2350 | + self.assertEqual([queue], list(collection.getMergeQueues())) |
| 2351 | + |
| 2352 | + def test_count(self): |
| 2353 | + # The 'count' property of a collection is the number of elements in |
| 2354 | + # the collection. |
| 2355 | + collection = GenericBranchMergeQueueCollection(self.store) |
| 2356 | + self.assertEqual(0, collection.count()) |
| 2357 | + for i in range(3): |
| 2358 | + self.factory.makeBranchMergeQueue() |
| 2359 | + self.assertEqual(3, collection.count()) |
| 2360 | + |
| 2361 | + def test_count_respects_filter(self): |
| 2362 | + # If a collection is a subset of all possible queues, then the count |
| 2363 | + # will be the size of that subset. That is, 'count' respects any |
| 2364 | + # filters that are applied. |
| 2365 | + person = self.factory.makePerson() |
| 2366 | + queue = self.factory.makeBranchMergeQueue(owner=person) |
| 2367 | + queue2 = self.factory.makeBranchMergeQueue() |
| 2368 | + collection = GenericBranchMergeQueueCollection( |
| 2369 | + self.store, [BranchMergeQueue.owner == person]) |
| 2370 | + self.assertEqual(1, collection.count()) |
| 2371 | + |
| 2372 | + |
| 2373 | +class TestBranchMergeQueueCollectionFilters(TestCaseWithFactory): |
| 2374 | + |
| 2375 | + layer = DatabaseFunctionalLayer |
| 2376 | + |
| 2377 | + def setUp(self): |
| 2378 | + TestCaseWithFactory.setUp(self) |
| 2379 | + self.all_queues = getUtility(IAllBranchMergeQueues) |
| 2380 | + |
| 2381 | + def test_count_respects_visibleByUser_filter(self): |
| 2382 | + # IBranchMergeQueueCollection.count() returns the number of queues |
| 2383 | + # that getMergeQueues() yields, even when the visibleByUser filter is |
| 2384 | + # applied. |
| 2385 | + branch = self.factory.makeAnyBranch(private=True) |
| 2386 | + naked_branch = removeSecurityProxy(branch) |
| 2387 | + queue = self.factory.makeBranchMergeQueue(branches=[naked_branch]) |
| 2388 | + branch2 = self.factory.makeAnyBranch(private=True) |
| 2389 | + naked_branch2 = removeSecurityProxy(branch2) |
| 2390 | + queue2 = self.factory.makeBranchMergeQueue(branches=[naked_branch2]) |
| 2391 | + collection = self.all_queues.visibleByUser(naked_branch.owner) |
| 2392 | + self.assertEqual(1, len(collection.getMergeQueues())) |
| 2393 | + self.assertEqual(1, collection.count()) |
| 2394 | + |
| 2395 | + def test_ownedBy(self): |
| 2396 | + # 'ownedBy' returns a new collection restricted to queues owned by |
| 2397 | + # the given person. |
| 2398 | + queue = self.factory.makeBranchMergeQueue() |
| 2399 | + queue2 = self.factory.makeBranchMergeQueue() |
| 2400 | + collection = self.all_queues.ownedBy(queue.owner) |
| 2401 | + self.assertEqual([queue], collection.getMergeQueues()) |
| 2402 | + |
| 2403 | + |
| 2404 | +class TestGenericBranchMergeQueueCollectionVisibleFilter(TestCaseWithFactory): |
| 2405 | + |
| 2406 | + layer = DatabaseFunctionalLayer |
| 2407 | + |
| 2408 | + def setUp(self): |
| 2409 | + TestCaseWithFactory.setUp(self) |
| 2410 | + public_branch = self.factory.makeAnyBranch(name='public') |
| 2411 | + self.queue_with_public_branch = self.factory.makeBranchMergeQueue( |
| 2412 | + branches=[removeSecurityProxy(public_branch)]) |
| 2413 | + private_branch1 = self.factory.makeAnyBranch( |
| 2414 | + private=True, name='private1') |
| 2415 | + naked_private_branch1 = removeSecurityProxy(private_branch1) |
| 2416 | + self.private_branch1_owner = naked_private_branch1.owner |
| 2417 | + self.queue1_with_private_branch = self.factory.makeBranchMergeQueue( |
| 2418 | + branches=[naked_private_branch1]) |
| 2419 | + private_branch2 = self.factory.makeAnyBranch( |
| 2420 | + private=True, name='private2') |
| 2421 | + self.queue2_with_private_branch = self.factory.makeBranchMergeQueue( |
| 2422 | + branches=[removeSecurityProxy(private_branch2)]) |
| 2423 | + self.all_queues = getUtility(IAllBranchMergeQueues) |
| 2424 | + |
| 2425 | + def test_all_queues(self): |
| 2426 | + # Without the visibleByUser filter, all queues are in the |
| 2427 | + # collection. |
| 2428 | + self.assertEqual( |
| 2429 | + sorted([self.queue_with_public_branch, |
| 2430 | + self.queue1_with_private_branch, |
| 2431 | + self.queue2_with_private_branch]), |
| 2432 | + sorted(self.all_queues.getMergeQueues())) |
| 2433 | + |
| 2434 | + def test_anonymous_sees_only_public(self): |
| 2435 | + # Anonymous users can see only queues with public branches. |
| 2436 | + queues = self.all_queues.visibleByUser(None) |
| 2437 | + self.assertEqual([self.queue_with_public_branch], |
| 2438 | + list(queues.getMergeQueues())) |
| 2439 | + |
| 2440 | + def test_random_person_sees_only_public(self): |
| 2441 | + # Logged in users with no special permissions can see only queues with |
| 2442 | + # public branches. |
| 2443 | + person = self.factory.makePerson() |
| 2444 | + queues = self.all_queues.visibleByUser(person) |
| 2445 | + self.assertEqual([self.queue_with_public_branch], |
| 2446 | + list(queues.getMergeQueues())) |
| 2447 | + |
| 2448 | + def test_owner_sees_own_branches(self): |
| 2449 | + # Users can always see the queues with branches that they own, as well |
| 2450 | + # as queues with public branches. |
| 2451 | + queues = self.all_queues.visibleByUser(self.private_branch1_owner) |
| 2452 | + self.assertEqual( |
| 2453 | + sorted([self.queue_with_public_branch, |
| 2454 | + self.queue1_with_private_branch]), |
| 2455 | + sorted(queues.getMergeQueues())) |
| 2456 | + |
| 2457 | + def test_owner_member_sees_own_queues(self): |
| 2458 | + # Members of a team can see queues whose branches are owned by |
| 2459 | + # that team, as well as queues with public branches. |
| 2460 | + team_owner = self.factory.makePerson() |
| 2461 | + team = self.factory.makeTeam(team_owner) |
| 2462 | + private_branch = self.factory.makeAnyBranch( |
| 2463 | + owner=team, private=True, name='team') |
| 2464 | + queue_with_private_branch = self.factory.makeBranchMergeQueue( |
| 2465 | + branches=[removeSecurityProxy(private_branch)]) |
| 2466 | + queues = self.all_queues.visibleByUser(team_owner) |
| 2467 | + self.assertEqual( |
| 2468 | + sorted([self.queue_with_public_branch, |
| 2469 | + queue_with_private_branch]), |
| 2470 | + sorted(queues.getMergeQueues())) |
| 2471 | + |
| 2472 | + def test_launchpad_services_sees_all(self): |
| 2473 | + # The LAUNCHPAD_SERVICES special user sees *everything*. |
| 2474 | + queues = self.all_queues.visibleByUser(LAUNCHPAD_SERVICES) |
| 2475 | + self.assertEqual( |
| 2476 | + sorted(self.all_queues.getMergeQueues()), |
| 2477 | + sorted(queues.getMergeQueues())) |
| 2478 | + |
| 2479 | + def test_admins_see_all(self): |
| 2480 | + # Launchpad administrators see *everything*. |
| 2481 | + admin = self.factory.makePerson() |
| 2482 | + admin_team = removeSecurityProxy( |
| 2483 | + getUtility(ILaunchpadCelebrities).admin) |
| 2484 | + admin_team.addMember(admin, admin_team.teamowner) |
| 2485 | + queues = self.all_queues.visibleByUser(admin) |
| 2486 | + self.assertEqual( |
| 2487 | + sorted(self.all_queues.getMergeQueues()), |
| 2488 | + sorted(queues.getMergeQueues())) |
| 2489 | + |
| 2490 | + def test_bazaar_experts_see_all(self): |
| 2491 | + # Members of the bazaar_experts team see *everything*. |
| 2492 | + bzr_experts = removeSecurityProxy( |
| 2493 | + getUtility(ILaunchpadCelebrities).bazaar_experts) |
| 2494 | + expert = self.factory.makePerson() |
| 2495 | + bzr_experts.addMember(expert, bzr_experts.teamowner) |
| 2496 | + queues = self.all_queues.visibleByUser(expert) |
| 2497 | + self.assertEqual( |
| 2498 | + sorted(self.all_queues.getMergeQueues()), |
| 2499 | + sorted(queues.getMergeQueues())) |
| 2500 | |
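These tests all exercise one rule: a queue is visible to a user only when every branch associated with it is visible to that user. The predicate can be sketched independently of Storm (`FakeBranch` and its visibility rule below are illustrative simplifications, not Launchpad's actual branch privacy model):

```python
# Sketch of the visibility rule checked by the tests above.

def all_branches_visible(user, branches):
    # A queue is visible to `user` only if every associated branch is.
    return all(branch.visibleByUser(user) for branch in branches)


class FakeBranch:
    def __init__(self, private, owner):
        self.private = private
        self.owner = owner

    def visibleByUser(self, user):
        # Simplified rule: public branches are visible to everyone,
        # private branches only to their owner.
        return not self.private or user == self.owner


public = FakeBranch(private=False, owner='alice')
private = FakeBranch(private=True, owner='bob')
print(all_branches_visible('carol', [public]))           # True
print(all_branches_visible('carol', [public, private]))  # False
print(all_branches_visible('bob', [public, private]))    # True
```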
| 2501 | === added file 'lib/lp/code/templates/branchmergequeue-listing.pt' |
| 2502 | --- lib/lp/code/templates/branchmergequeue-listing.pt 1970-01-01 00:00:00 +0000 |
| 2503 | +++ lib/lp/code/templates/branchmergequeue-listing.pt 2010-11-19 10:27:10 +0000 |
| 2504 | @@ -0,0 +1,68 @@ |
| 2505 | +<html |
| 2506 | + xmlns="http://www.w3.org/1999/xhtml" |
| 2507 | + xmlns:tal="http://xml.zope.org/namespaces/tal" |
| 2508 | + xmlns:metal="http://xml.zope.org/namespaces/metal" |
| 2509 | + xmlns:i18n="http://xml.zope.org/namespaces/i18n" |
| 2510 | + metal:use-macro="view/macro:page/main_only" |
| 2511 | + i18n:domain="launchpad"> |
| 2512 | + |
| 2513 | +<body> |
| 2514 | + |
| 2515 | + <div metal:fill-slot="main"> |
| 2516 | + |
| 2517 | + <div tal:condition="not: features/code.branchmergequeue"> |
| 2518 | + <em> |
| 2519 | + No merge queues |
| 2520 | + </em> |
| 2521 | + </div> |
| 2522 | + |
| 2523 | + <div tal:condition="features/code.branchmergequeue"> |
| 2524 | + |
| 2525 | + <tal:has-queues condition="view/mergequeue_count"> |
| 2526 | + |
| 2527 | + <table id="mergequeuetable" class="listing sortable"> |
| 2528 | + <thead> |
| 2529 | + <tr> |
| 2530 | + <th colspan="2">Name</th> |
| 2531 | + <th tal:condition="view/owner_enabled">Owner</th> |
| 2532 | + <th>Queue Size</th> |
| 2533 | + <th>Associated Branches</th> |
| 2534 | + </tr> |
| 2535 | + </thead> |
| 2536 | + <tbody> |
| 2537 | + <tal:mergequeues repeat="mergeQueue view/mergequeues"> |
| 2538 | + <tr> |
| 2539 | + <td colspan="2"> |
| 2540 | + <a tal:attributes="href mergeQueue/fmt:url" |
| 2541 | + tal:content="mergeQueue/name">Merge queue name</a> |
| 2542 | + </td> |
| 2543 | + <td tal:condition="view/owner_enabled"> |
| 2544 | + <a tal:replace="structure mergeQueue/owner/fmt:link"> |
| 2545 | + Owner |
| 2546 | + </a> |
| 2547 | + </td> |
| 2548 | + <td>4</td> |
| 2549 | + <td> |
| 2550 | + <metal:display-branches |
| 2551 | + use-macro="context/@@+bmq-macros/merge_queue_branches"/> |
| 2552 | + </td> |
| 2553 | + </tr> |
| 2554 | + </tal:mergequeues> |
| 2555 | + </tbody> |
| 2556 | + </table> |
| 2557 | + |
| 2558 | + </tal:has-queues> |
| 2559 | + |
| 2560 | + <em id="no-queues" |
| 2561 | + tal:condition="not: view/mergequeue_count" |
| 2562 | + tal:content="view/no_merge_queue_message"> |
| 2563 | + No merge queues |
| 2564 | + </em> |
| 2565 | + |
| 2566 | + </div> |
| 2567 | + |
| 2568 | + </div> |
| 2569 | + |
| 2570 | +</body> |
| 2571 | +</html> |
| 2572 | + |
| 2573 | |
| 2574 | === added file 'lib/lp/code/templates/branchmergequeue-macros.pt' |
| 2575 | --- lib/lp/code/templates/branchmergequeue-macros.pt 1970-01-01 00:00:00 +0000 |
| 2576 | +++ lib/lp/code/templates/branchmergequeue-macros.pt 2010-11-19 10:27:10 +0000 |
| 2577 | @@ -0,0 +1,20 @@ |
| 2578 | + <tal:root |
| 2579 | + xmlns:tal="http://xml.zope.org/namespaces/tal" |
| 2580 | + xmlns:metal="http://xml.zope.org/namespaces/metal" |
| 2581 | + omit-tag=""> |
| 2582 | + |
| 2583 | +<metal:merge_queue_branches define-macro="merge_queue_branches"> |
| 2584 | + <table class="listing"> |
| 2585 | + <tbody> |
| 2586 | + <tal:mergequeue-branches repeat="branch mergeQueue/branches"> |
| 2587 | + <tr> |
| 2588 | + <td> |
| 2589 | + <a tal:attributes="href branch/fmt:url" |
| 2590 | + tal:content="branch/name">Branch name</a> |
| 2591 | + </td> |
| 2592 | + </tr> |
| 2593 | + </tal:mergequeue-branches> |
| 2594 | + </tbody> |
| 2595 | + </table> |
| 2596 | +</metal:merge_queue_branches> |
| 2597 | +</tal:root> |
| 2598 | \ No newline at end of file |
| 2599 | |
| 2600 | === modified file 'lib/lp/code/templates/person-codesummary.pt' |
| 2601 | --- lib/lp/code/templates/person-codesummary.pt 2010-11-08 09:03:59 +0000 |
| 2602 | +++ lib/lp/code/templates/person-codesummary.pt 2010-11-19 10:27:10 +0000 |
| 2603 | @@ -4,7 +4,8 @@ |
| 2604 | xmlns:i18n="http://xml.zope.org/namespaces/i18n" |
| 2605 | id="portlet-person-codesummary" |
| 2606 | class="portlet" |
| 2607 | - tal:define="menu context/menu:branches" |
| 2608 | + tal:define="menu context/menu:branches; |
| 2609 | + features request/features" |
| 2610 | tal:condition="menu/show_summary"> |
| 2611 | |
| 2612 | <table> |
| 2613 | @@ -26,5 +27,11 @@ |
| 2614 | <td class="code-count" tal:content="menu/active_review_count">5</td> |
| 2615 | <td tal:content="structure menu/active_reviews/render" /> |
| 2616 | </tr> |
| 2617 | + <tr tal:condition="features/code.branchmergequeue" id="mergequeue-counts"> |
| 2618 | + <td class="code-count" tal:content="menu/mergequeue_count">5</td> |
| 2619 | + <td tal:condition="menu" |
| 2620 | + tal:content="structure menu/mergequeues/render" |
| 2621 | + /> |
| 2622 | + </tr> |
| 2623 | </table> |
| 2624 | </div> |
| 2625 | |
| 2626 | === modified file 'lib/lp/registry/javascript/tests/test_milestone_table.html' |
| 2627 | --- lib/lp/registry/javascript/tests/test_milestone_table.html 2010-04-28 18:43:25 +0000 |
| 2628 | +++ lib/lp/registry/javascript/tests/test_milestone_table.html 2010-11-19 10:27:10 +0000 |
| 2629 | @@ -9,7 +9,7 @@ |
| 2630 | <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssreset/reset.css"/> |
| 2631 | <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssfonts/fonts.css"/> |
| 2632 | <link rel="stylesheet" href="../../../../canonical/launchpad/icing/yui/cssbase/base.css"/> |
| 2633 | - <link rel="stylesheet" href="../../../canonical/launchpad/javascript/test.css" /> |
| 2634 | + <link rel="stylesheet" href="../../../../canonical/launchpad/javascript/test.css" /> |
| 2635 | |
| 2636 | <!-- The module under test --> |
| 2637 | <script type="text/javascript" src="../milestonetable.js"></script> |
| 2638 | |
| 2639 | === modified file 'lib/lp/services/apachelogparser/base.py' |
| 2640 | --- lib/lp/services/apachelogparser/base.py 2010-09-11 19:25:13 +0000 |
| 2641 | +++ lib/lp/services/apachelogparser/base.py 2010-11-19 10:27:10 +0000 |
| 2642 | @@ -204,15 +204,21 @@ |
| 2643 | |
| 2644 | def get_method_and_path(request): |
| 2645 | """Extract the method of the request and path of the requested file.""" |
| 2646 | - L = request.split() |
| 2647 | - # HTTP 1.0 requests might omit the HTTP version so we must cope with them. |
| 2648 | - if len(L) == 2: |
| 2649 | - method, path = L |
| 2650 | + method, ignore, rest = request.partition(' ') |
| 2651 | + # In the below, the common case is that `first` is the path and `last` is |
| 2652 | + # the protocol. |
| 2653 | + first, ignore, last = rest.rpartition(' ') |
| 2654 | + if first == '': |
| 2655 | + # HTTP 1.0 requests might omit the HTTP version so we cope with them. |
| 2656 | + path = last |
| 2657 | + elif not last.startswith('HTTP'): |
| 2658 | + # We also cope with requests that omit the HTTP version *and* |
| 2659 | + # have a space in the path (see bug 676489 for example). |
| 2660 | + path = rest |
| 2661 | else: |
| 2662 | - method, path, protocol = L |
| 2663 | - |
| 2664 | + # This is the common case. |
| 2665 | + path = first |
| 2666 | if path.startswith('http://') or path.startswith('https://'): |
| 2667 | uri = URI(path) |
| 2668 | path = uri.path |
| 2669 | - |
| 2670 | return method, path |
| 2671 | |
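The rewritten `get_method_and_path` can be exercised on its own; this sketch reproduces the partition-based logic from the hunk above, without the final URI normalisation step:

```python
def get_method_and_path(request):
    """Extract the method and the path of the requested file."""
    method, _, rest = request.partition(' ')
    # Common case: `first` is the path and `last` is the protocol.
    first, _, last = rest.rpartition(' ')
    if first == '':
        # HTTP 1.0 requests might omit the HTTP version entirely.
        path = last
    elif not last.startswith('HTTP'):
        # No HTTP version *and* a space in the path (bug 676489).
        path = rest
    else:
        path = first
    return method, path


print(get_method_and_path('GET /index.html HTTP/1.1'))  # ('GET', '/index.html')
print(get_method_and_path('GET /a b'))                  # ('GET', '/a b')
```

`str.partition` and `str.rpartition` always return a 3-tuple, so no length check is needed, unlike the old `request.split()` approach.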
| 2672 | === modified file 'lib/lp/services/apachelogparser/tests/test_apachelogparser.py' |
| 2673 | --- lib/lp/services/apachelogparser/tests/test_apachelogparser.py 2010-10-04 19:50:45 +0000 |
| 2674 | +++ lib/lp/services/apachelogparser/tests/test_apachelogparser.py 2010-11-19 10:27:10 +0000 |
| 2675 | @@ -29,6 +29,7 @@ |
| 2676 | get_fd_and_file_size, |
| 2677 | get_files_to_parse, |
| 2678 | get_host_date_status_and_request, |
| 2679 | + get_method_and_path, |
| 2680 | parse_file, |
| 2681 | ) |
| 2682 | from lp.services.apachelogparser.model.parsedapachelog import ParsedApacheLog |
| 2683 | @@ -71,6 +72,35 @@ |
| 2684 | date = '[13/Jun/2008:18:38:57 +0100]' |
| 2685 | self.assertEqual(get_day(date), datetime(2008, 6, 13)) |
| 2686 | |
| 2687 | + def test_parsing_path_with_missing_protocol(self): |
| 2688 | + request = (r'GET /56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?' |
| 2689 | + r'N\x1f\x9b') |
| 2690 | + method, path = get_method_and_path(request) |
| 2691 | + self.assertEqual(method, 'GET') |
| 2692 | + self.assertEqual( |
| 2693 | + path, |
| 2694 | + r'/56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?N\x1f\x9b') |
| 2695 | + |
| 2696 | + def test_parsing_path_with_space(self): |
| 2697 | + # See bug 676489. |
| 2698 | + request = (r'GET /56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?' |
| 2699 | + r'N\x1f\x9b Z%7B... HTTP/1.0') |
| 2700 | + method, path = get_method_and_path(request) |
| 2701 | + self.assertEqual(method, 'GET') |
| 2702 | + self.assertEqual( |
| 2703 | + path, |
| 2704 | + r'/56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?N\x1f\x9b Z%7B...') |
| 2705 | + |
| 2706 | + def test_parsing_path_with_space_and_missing_protocol(self): |
| 2707 | + # This is a variation of bug 676489. |
| 2708 | + request = (r'GET /56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?' |
| 2709 | + r'N\x1f\x9b Z%7B...') |
| 2710 | + method, path = get_method_and_path(request) |
| 2711 | + self.assertEqual(method, 'GET') |
| 2712 | + self.assertEqual( |
| 2713 | + path, |
| 2714 | + r'/56222647/deluge-gtk_1.3.0-0ubuntu1_all.deb?N\x1f\x9b Z%7B...') |
| 2715 | + |
| 2716 | |
| 2717 | class Test_get_fd_and_file_size(TestCase): |
| 2718 | |
| 2719 | |
| 2720 | === modified file 'lib/lp/services/mailman/doc/postings.txt' |
| 2721 | --- lib/lp/services/mailman/doc/postings.txt 2010-10-25 12:11:43 +0000 |
| 2722 | +++ lib/lp/services/mailman/doc/postings.txt 2010-11-19 10:27:10 +0000 |
| 2723 | @@ -177,25 +177,6 @@ |
| 2724 | From: itest-one-...@lists.launchpad.dev |
| 2725 | To: anne.person@example.com |
| 2726 | ... |
| 2727 | - Sender: itest-one-bounces+anne.person=example.com@lists.launchpad.dev |
| 2728 | - Errors-To: itest-one-bounces+anne.person=example.com@lists.launchpad.dev |
| 2729 | - ... |
| 2730 | - X-MailFrom: itest-one-bounces+anne.person=example.com@lists.launchpad.dev |
| 2731 | - X-RcptTo: anne.person@example.com |
| 2732 | - <BLANKLINE> |
| 2733 | - Your request to the Itest-one mailing list |
| 2734 | - <BLANKLINE> |
| 2735 | - Posting of your message titled "An unsubscribed post" |
| 2736 | - <BLANKLINE> |
| 2737 | - has been rejected by the list moderator. The moderator gave the |
| 2738 | - following reason for rejecting your request: |
| 2739 | - <BLANKLINE> |
| 2740 | - "[No reason given]" |
| 2741 | - <BLANKLINE> |
| 2742 | - Any questions or comments should be directed to the list administrator |
| 2743 | - at: |
| 2744 | - <BLANKLINE> |
| 2745 | - itest-one-owner@lists.launchpad.dev |
| 2746 | |
| 2747 | Anne posts another message to the mailing list, but she is still not |
| 2748 | subscribed to it. The team administrator deems this message to be spam and |
| 2749 | |
| 2750 | === modified file 'lib/lp/testing/factory.py' |
| 2751 | --- lib/lp/testing/factory.py 2010-11-09 09:46:20 +0000 |
| 2752 | +++ lib/lp/testing/factory.py 2010-11-19 10:27:10 +0000 |
| 2753 | @@ -1119,7 +1119,8 @@ |
| 2754 | return namespace.createBranch(branch_type, name, creator) |
| 2755 | |
| 2756 | def makeBranchMergeQueue(self, registrant=None, owner=None, name=None, |
| 2757 | - description=None, configuration=None): |
| 2758 | + description=None, configuration=None, |
| 2759 | + branches=None): |
| 2760 | """Create a BranchMergeQueue.""" |
| 2761 | if name is None: |
| 2762 | name = unicode(self.getUniqueString('queue')) |
| 2763 | @@ -1134,7 +1135,7 @@ |
| 2764 | self.getUniqueString('key'): self.getUniqueString('value')})) |
| 2765 | |
| 2766 | queue = getUtility(IBranchMergeQueueSource).new( |
| 2767 | - name, owner, registrant, description, configuration) |
| 2768 | + name, owner, registrant, description, configuration, branches) |
| 2769 | return queue |
| 2770 | |
| 2771 | def enableDefaultStackingForProduct(self, product, branch=None): |
| 2772 | |
| 2773 | === modified file 'lib/lp/translations/scripts/tests/test_message_sharing_migration.py' |
| 2774 | --- lib/lp/translations/scripts/tests/test_message_sharing_migration.py 2010-10-18 16:36:46 +0000 |
| 2775 | +++ lib/lp/translations/scripts/tests/test_message_sharing_migration.py 2010-11-19 10:27:10 +0000 |
| 2776 | @@ -18,6 +18,7 @@ |
| 2777 | record_statements, |
| 2778 | TestCaseWithFactory, |
| 2779 | ) |
| 2780 | +from lp.testing.sampledata import ADMIN_EMAIL |
| 2781 | from lp.translations.interfaces.pofiletranslator import IPOFileTranslatorSet |
| 2782 | from lp.translations.model.pomsgid import POMsgID |
| 2783 | from lp.translations.model.potemplate import POTemplate |
| 2784 | @@ -62,8 +63,8 @@ |
| 2785 | # This test needs the privileges of rosettaadmin (to delete |
| 2786 | # POTMsgSets) but it also needs to set up test conditions which |
| 2787 | # requires other privileges. |
| 2788 | + super(TestPOTMsgSetMerging, self).setUp(user=ADMIN_EMAIL) |
| 2789 | self.layer.switchDbUser('postgres') |
| 2790 | - super(TestPOTMsgSetMerging, self).setUp(user='mark@example.com') |
| 2791 | super(TestPOTMsgSetMerging, self).setUpProduct() |
| 2792 | |
| 2793 | def test_matchedPOTMsgSetsShare(self): |
| 2794 | @@ -252,9 +253,9 @@ |
| 2795 | The matching POTMsgSets will be merged by the _mergePOTMsgSets |
| 2796 | call. |
| 2797 | """ |
| 2798 | - self.layer.switchDbUser('postgres') |
| 2799 | super(TestPOTMsgSetMergingAndTranslations, self).setUp( |
| 2800 | - user='mark@example.com') |
| 2801 | + user=ADMIN_EMAIL) |
| 2802 | + self.layer.switchDbUser('postgres') |
| 2803 | super(TestPOTMsgSetMergingAndTranslations, self).setUpProduct() |
| 2804 | |
| 2805 | def test_sharingDivergedMessages(self): |
| 2806 | @@ -374,9 +375,8 @@ |
| 2807 | layer = LaunchpadZopelessLayer |
| 2808 | |
| 2809 | def setUp(self): |
| 2810 | + super(TestTranslationMessageNonMerging, self).setUp(user=ADMIN_EMAIL) |
| 2811 | self.layer.switchDbUser('postgres') |
| 2812 | - super(TestTranslationMessageNonMerging, self).setUp( |
| 2813 | - user='mark@example.com') |
| 2814 | super(TestTranslationMessageNonMerging, self).setUpProduct() |
| 2815 | |
| 2816 | def test_MessagesAreNotSharedAcrossPOTMsgSets(self): |
| 2817 | @@ -402,9 +402,9 @@ |
| 2818 | layer = LaunchpadZopelessLayer |
| 2819 | |
| 2820 | def setUp(self): |
| 2821 | + super(TestTranslationMessageMerging, self).setUp(user=ADMIN_EMAIL) |
| 2822 | + transaction.commit() |
| 2823 | self.layer.switchDbUser('postgres') |
| 2824 | - super(TestTranslationMessageMerging, self).setUp( |
| 2825 | - user='mark@example.com') |
| 2826 | super(TestTranslationMessageMerging, self).setUpProduct() |
| 2827 | |
| 2828 | def test_messagesCanStayDiverged(self): |
| 2829 | @@ -565,8 +565,8 @@ |
| 2830 | layer = LaunchpadZopelessLayer |
| 2831 | |
| 2832 | def setUp(self): |
| 2833 | + super(TestRemoveDuplicates, self).setUp(user=ADMIN_EMAIL) |
| 2834 | self.layer.switchDbUser('postgres') |
| 2835 | - super(TestRemoveDuplicates, self).setUp(user='mark@example.com') |
| 2836 | super(TestRemoveDuplicates, self).setUpProduct() |
| 2837 | |
| 2838 | def test_duplicatesAreCleanedUp(self): |
| 2839 | @@ -738,8 +738,8 @@ |
| 2840 | layer = LaunchpadZopelessLayer |
| 2841 | |
| 2842 | def setUp(self): |
| 2843 | - self.layer.switchDbUser('postgres') |
| 2844 | super(TestSharingMigrationPerformance, self).setUp() |
| 2845 | + self.layer.switchDbUser('postgres') |
| 2846 | super(TestSharingMigrationPerformance, self).setUpProduct() |
| 2847 | |
| 2848 | def _flushDbObjects(self): |
| 2849 | |
| 2850 | === modified file 'lib/lp/translations/windmill/tests/test_languages.py' |
| 2851 | --- lib/lp/translations/windmill/tests/test_languages.py 2010-10-18 12:56:47 +0000 |
| 2852 | +++ lib/lp/translations/windmill/tests/test_languages.py 2010-11-19 10:27:10 +0000 |
| 2853 | @@ -7,6 +7,7 @@ |
| 2854 | __all__ = [] |
| 2855 | |
| 2856 | from canonical.launchpad.windmill.testing.constants import ( |
| 2857 | + FOR_ELEMENT, |
| 2858 | PAGE_LOAD, |
| 2859 | SLEEP, |
| 2860 | ) |
| 2861 | @@ -61,7 +62,8 @@ |
| 2862 | # "Not-matching" message is hidden and languages are visible. |
| 2863 | self.client.asserts.assertProperty( |
| 2864 | id=u'no_filter_matches', |
| 2865 | - validator='className|unseen') |
| 2866 | + validator='className|unseen', |
| 2867 | + timeout=FOR_ELEMENT) |
| 2868 | self._assert_languages_visible({ |
| 2869 | u'German': True, |
| 2870 | u'Mende': True, |
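The recurring change in the test-fixture hunks above reorders `setUp()` ahead of `switchDbUser('postgres')`, so that login happens while the default database user still has the permissions `setUp()` itself needs. A minimal, self-contained sketch of that ordering (all names here — `FakeLayer`, `BaseTestCase`, the admin address — are hypothetical stand-ins for the Launchpad test layer and `ADMIN_EMAIL`, not the real API):

```python
# Sketch of the fixture-ordering fix: log in via the parent setUp() first,
# then switch to the privileged 'postgres' database user. The old order
# ran setUp() under a DB user that lacked the permissions it needed.

calls = []  # records the order in which fixture steps run


class FakeLayer:
    """Hypothetical stand-in for LaunchpadZopelessLayer."""

    def switchDbUser(self, user):
        calls.append('switchDbUser:%s' % user)


class BaseTestCase:
    """Hypothetical stand-in for TestCaseWithFactory."""

    def setUp(self, user=None):
        calls.append('setUp:%s' % user)


class TestPOTMsgSetMerging(BaseTestCase):
    layer = FakeLayer()

    def setUp(self):
        # New order from the diff: parent setUp() (login) comes first...
        super(TestPOTMsgSetMerging, self).setUp(user='admin@example.com')
        # ...then the switch to the powerful 'postgres' DB user for the
        # privileged operations the test exercises.
        self.layer.switchDbUser('postgres')


TestPOTMsgSetMerging().setUp()
```

After `setUp()` runs, `calls` records the login step before the user switch, mirroring every reordered hunk in the diff (one hunk also commits the transaction between the two steps before switching users).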

Sorry, submitted for wrong branch.