Merge lp:~bloodearnest/canonical-identity-provider/sso-dev into lp:canonical-identity-provider/release

Proposed by Simon Davy
Status: Work in progress
Proposed branch: lp:~bloodearnest/canonical-identity-provider/sso-dev
Merge into: lp:canonical-identity-provider/release
Diff against target: 1052 lines (+803/-37)
16 files modified
.bzrignore (+5/-1)
Makefile (+162/-0)
Makefile.common (+35/-28)
Makefile.juju (+120/-0)
README.juju (+220/-0)
django_project/deploy.yaml (+25/-0)
django_project/paths.py (+3/-1)
django_project/settings_base.py (+1/-1)
django_project/settings_devel_juju.py (+54/-0)
django_project/ssh_config (+8/-0)
requirements_devel.txt (+1/-1)
run-tests-juju (+52/-0)
scripts/acceptance-dev-juju.sh (+66/-0)
scripts/setup-localmail.sh (+47/-0)
src/identityprovider/tests/test_command_cleanup.py (+4/-4)
src/webui/tests/test_views_ui.py (+0/-1)
To merge this branch: bzr merge lp:~bloodearnest/canonical-identity-provider/sso-dev
Reviewer Review Type Date Requested Status
Daniel Manrique (community) Approve
Review via email: mp+277474@code.launchpad.net

Description of the change

Merge juju dev work
Notes:

1) This MP supports side-by-side dev envs. The juju one is enabled with the
   I_CAN_HAZ_JUJU=yes env var.
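
   For example, in a bash shell (make bootstrap being the first-time setup
   described in point 3 below):

      export I_CAN_HAZ_JUJU=yes
      make bootstrap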

The following files are forks specific to the juju dev env. When the juju dev
env becomes the default, these will replace or be merged with their original
files.

./django_project/settings_acceptance_juju.py
./django_project/settings_devel_juju.py
./scripts/acceptance-dev-juju.sh
./run-tests-juju
./Makefile.juju
./README.juju

2) The charm is checked out into ../sso-charm, and then used to deploy.
   You can deploy a custom charm branch by specifying CHARM_BRANCH. The
   following directories are bind mounted into the container

   $PWD -> /srv/login.ubuntu.com/devel/code/devel
   $PWD/logs -> /srv/login.ubuntu.com/devel/logs
   $PWD/django_project -> /srv/login.ubuntu.com/devel/etc
   $HOME -> $HOME

   This means all logs will be available in ./logs, and the charm will
   generate its config into ./django_project, for easy inspection. $HOME
   mounting is mainly for convenience.
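
   For example, to deploy from a custom charm branch (the branch URL here is
   hypothetical):

      make deploy CHARM_BRANCH=lp:~your-user/canonical-is-charms/canonical-identity-provider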

3) bootstrap: this now effectively does two things, both separate make
   targets in their own right.

   make dependencies - this is what the old bootstrap did (venv, cfgmgr,
                       wheels, install), plus it now also installs system
                       packages.

   make deploy - this will set up the juju repo and environment and deploy
                       sso, and is idempotent. If you are already deployed,
                       it's pretty fast.

   So, for getting set up the first time, we still use make bootstrap. But once
   we're set up, we probably want to use make dependencies/deploy as needed,
   though bootstrap will still work.

   We shouldn't need to teardown the juju env often. If we do, make
   juju-destroy will do that, and deploy will create it again and deploy.
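
   A typical sequence, using the targets above:

      make bootstrap      # first-time setup: dependencies + deploy
      make dependencies   # later: refresh venv/wheels/system packages only
      make deploy         # later: (re)deploy; fast if already deployed
      make juju-destroy   # tear down the juju env (rarely needed)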

4) config has changed quite a bit to support developing with the charm, but
   has also simplified a lot in the process, for both the charm and sso
   trunk.

   - We use explicit DJANGO_SETTINGS_MODULE values, which is more in line
     with normal django usage (see the example after this list).

     e.g. DJANGO_SETTINGS_MODULE=django_project.settings_devel_juju

     This is exposed on the charm, and allows for explicit staging/prod
     settings (in other projects, anyway - sso doesn't use them because it's
     open source)

   - all these modules import from 'settings', which are the charm-generated
     settings.

   - the symlink from django_project/settings.py to ../../local_settings is
     gone (along with related issues)

   - A developer can create django_project/settings_local.py to override
     settings in development - this file is bzrignored.
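
   For example, to run a management command against an explicit settings
   module (illustrative - in the juju dev env this module is already the
   default):

      make manage ARGS=shell DJANGO_SETTINGS_MODULE=django_project.settings_devel_juju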

5) DB: postgres now runs as system postgres in the container, configured via
   the charm and very close to the old db dev config. The big difference is
   that it's not running on /dev/shm, as this added significant complexity
   for little benefit. Happy to revisit this if people find it an issue.

   So:

   - start-db/stop-db are gone, as they are now obsolete
   - reset-db/setup-db are preserved, but are the same command - drop the db
     and recreate it, then run migrations
   - I have proposed db-setup/db-reset as more tab-completion-friendly
     names, but kept the old names too
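
   For example:

      make db-reset    # drop and recreate the dev database, then run migrations
      make setup-db    # old alias, kept for compatibility; same effect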

6) gunicorn: now runs continuously as a charm-configured service in the
   container, on port 8080, like prod, but with code reloading enabled.

   - gunicorn logs are in the ./logs dir
   - For debugging, use make run as before. It stops the system gunicorn,
     and then runs it manually in the foreground on the same port. It uses
     the same config, but the timeout is increased and logging is directed to
     stdout. When you've finished debugging, it will start gunicorn again
     in the container.
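
   For example:

      make run                      # foreground gunicorn on the default bind (0.0.0.0:8080)
      make run ARGS=0.0.0.0:9000    # or bind to a different address/port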

7) localmail: this is now installed by default in the container, and the dev
   configuration uses it by default. There is no more need to manually start
   an SMTP server.

   - logs will be in ./logs/localmail.log
   - mbox file at ./logs/localmail.mbox
   - simple web view at port 8880 on container
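
   For example, to read the captured mail locally:

      mutt -f logs/localmail.mbox   # or point a browser at port 8880 on the container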

8) acceptance tests have been simplified a lot in the process

 - in dev, we no longer need to run sso/localmail in screen - they are
   always running

 - all server and client side config for dev acceptance tests was moved to
   the default development config. This means no config changes are needed
   to run acceptance tests against a local dev server. This involved changing
   the default test emails, so as not to leak them, and a couple of test
   changes to match.

   No need to juggle files or change the server config at all :)

 - For staging/prod, we use an explicit config to run the tests

   DJANGO_SETTINGS_MODULE=django_project.settings_acceptance_juju

 - the acceptance db setup was moved into the Makefile
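
   For example, against the local dev server:

      make acceptance-dev              # run the selenium suite in ./acceptance
      make acceptance-dev ARGS=--help  # see the runner's options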

9) The Makefiles have seen a lot of work. They are more complex than they need
   to be, in order to support side-by-side dev envs, but that should only be
   temporary.

  - Makefile.common - the tasks that do sso stuff, shared between old and
    new dev environments.

  - Makefile.db - old env db tasks

  - Makefile - new entrypoint that sshes into the container if needed, and
    houses the commands to manage the lxc. If using the old env, it just
    includes Makefile.common and Makefile.db, and the old env's definitions of
    run and bootstrap.

  - Makefile.juju - the new juju specific commands, mainly about charms and
    deploying.
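
  A simplified sketch of the dispatch logic in the new top-level Makefile
  (the full version is in the diff below):

     ifeq ("$(I_CAN_HAZ_JUJU)","yes")
         # on the host: ssh into the lxc and re-run make there;
         # inside the lxc: include Makefile.common and Makefile.juju
     else
         include Makefile.common
         include Makefile.db
     endif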

  Once this work has fully landed, it is anticipated that we will return to
  just a single Makefile, which will include a base common ols Makefile as
  a dependency.

  I also added a make help command, since we'd discussed the idea before and I
  was adding a whole bunch of new commands, so it seemed like a good plan.
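
  For example:

     make help    # lists available targets, using their ## comment descriptions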

Revision history for this message
Daniel Manrique (roadmr) wrote :

Awesome, thanks for this.

Maybe you want a review by someone more familiar with all the makefile wizardry. In general it looks OK to me: the code looks sensible and the thing works very nicely (and I tried some uncommon scenarios). I made some minor comments below, mainly typos. There's one thing that may need clarification, so I didn't "Approve" outright.

review: Needs Information
Revision history for this message
Simon Davy (bloodearnest) wrote :

Yeah, some of the makefile stuff is more complex than I'd like. Make is so good at many things, but it has some rough edges. I'm pretty comfortable with make, but I know not everyone is. I'm hoping vila might spot some improvements I can make.

Are there any specific bits which look too much like magic?

Revision history for this message
Simon Davy (bloodearnest) wrote :

I've fixed the various typos too.

Revision history for this message
Daniel Manrique (roadmr) wrote :

+1 from me but don't forget to add a commit message :)

As for the makefile, nothing looks *too* magical, it's just unfamiliar due to rustiness with Makefiles. Maybe the way targets.mk is generated, but tbh I haven't looked at it too closely and it works anyway \o/.

review: Approve
Revision history for this message
Simon Davy (bloodearnest) wrote :
1366. By Simon Davy

add comment about order-only make pre-requisites

Revision history for this message
Ricardo Kirkner (ricardokirkner) :
Revision history for this message
Simon Davy (bloodearnest) :
1367. By Simon Davy

- whitespace
- remove unneeded sessionid config

1368. By Simon Davy

re-add missing SSH_ARGS

1369. By Simon Davy

 - add ssh_config file to avoid unwieldy cli args
 - fix location of .juju-repo

1370. By Simon Davy

clean up juju repo

Revision history for this message
Natalia Bidart (nataliabidart) :
Revision history for this message
Natalia Bidart (nataliabidart) wrote :

One more question, I ran:

$ make lint
make: /home/nessita/canonical/bloodearnest/sso-dev/env/bin/flake8: Command not found
Makefile.common:152: recipe for target 'lint' failed

any idea what is missing?

Revision history for this message
Simon Davy (bloodearnest) wrote :
Download full text (11.2 KiB)

On Tue, Nov 17, 2015 at 3:18 PM, Natalia Bidart
<email address hidden> wrote:
>
>
> Diff comments:
>
>>
>> === renamed file 'Makefile' => 'Makefile.common'
>> --- Makefile 2015-11-12 17:08:24 +0000
>> +++ Makefile.common 2015-11-17 13:23:47 +0000
>> @@ -1,9 +1,9 @@
>> -# Copyright (C) 2014 Canonical Ltd.
>> -
>> +# Copyright (C) 2014 Canonical Ltd.
>
> Since you are editing this file, would you mind updating the Copyright notice to 2014-2015?

Done

>> +#
>> CM ?= /usr/lib/config-manager/cm.py
>> CONFIGMANAGER = config-manager.txt
>> CONN_CHECK_CONFIG_PATH ?= /tmp/sso_conn_check_config.yaml
>> -DJANGO_SETTINGS_MODULE = settings
>> +DJANGO_SETTINGS_MODULE ?= settings
>> ENV = $(CURDIR)/env
>> JUJU_ENV ?= local
>> JUJU_REPO ?= ../.juju-repo
>> @@ -42,27 +42,32 @@
>> $(TARBALL_BUILD_DIR):
>> @mkdir -p $(TARBALL_BUILD_DIR)
>>
>> +## run django's collectstatic command, STATIC_ROOT defaults to ./staticfiles
>> collectstatic:
>> # this uses a modified collectstatic command, called linkstatic, where the
>> # --link option uses relative urls, so save space when tarballed
>> @DJANGO_SETTINGS_MODULE=django_project.settings_build $(DJANGO_MANAGE) linkstatic --noinput --link
>>
>> +## create a pre-gzipped version of static text assets
>> zipstatic:
>> @echo Gzipping static asset files...
>> @find $(STATIC_ROOT) -name \*.js -or -name \*.css -or -name \*.svg | xargs gzip -kf
>>
>> -# variation of compilemessages target that uses settings_build
>> +## variation of compilemessages target that uses settings_build
>> compilemessages-build:
>> @DJANGO_SETTINGS_MODULE=django_project.settings_build $(MAKE) compilemessages
>>
>> # note: fetch-sourcedeps needs to be manually called before this will work.
>> +## build a deployment tarbal
>
> Typo here? tarball

Fixed

>> build-tarball: install-wheels $(TARBALL_BUILD_PATH)
>>
>> +## write the bzr revno to lib/versioninfo.py
>> version:
>> bzr version-info --format=python > lib/versioninfo.py
>>
>> ### Wheels ###
>>
>> +## create/reinitialise the virtualenv
>> virtualenv:
>> @virtualenv --clear --system-site-packages $(ENV)
>>
>>
>> === added file 'Makefile.juju'
>> --- Makefile.juju 1970-01-01 00:00:00 +0000
>> +++ Makefile.juju 2015-11-17 13:23:47 +0000
>> @@ -0,0 +1,124 @@
>> +# sso specific config
>> +CHARM_SERIES = trusty
>> +CHARM = canonical-identity-provider
>> +CHARM_BRANCH = lp:~ubuntuone-pqm-team/canonical-is-charms/canonical-identity-provider
>> +CHARM_SRC = ../sso-charm
>> +DEPLOYER_CONFIG = $(CURDIR)/django_project/deploy.yaml
>> +DEPLOYER_TARGET = sso-app-dev
>> +
>> +# generic config
>> +LOG_DIRS ?= ./logs/www-oops ./logs/schema-updates
>> +ABS_CHARM_SRC = $(shell readlink -f $(CHARM_SRC))
>> +LOCAL_JUJU_CONFIG = .local.yaml
>> +JUJU_REPOSITORY ?= $(CURDIR)/.juju-repo
>> +JUJU_HOME ?= $(HOME)/.juju
>> +JENV = $(JUJU_HOME)/environments/$(JUJU_ENV).jenv
>> +export JUJU_REPOSITORY JUJU_ENV
>> +export I_CAN_HAZ_JUJU=yes
>> +
>> +# python config, overriding the non-juju config
>> +PYTHONPATH := $(LXC_BASEDIR)/etc:$(SRC_DIR):$(LIB_DIR):$(CURDIR)
>> +export PYTHONPATH
>> +
>> +
>> +
>> +$(CHARM_SRC):
>> + @bzr branch $...

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Tue, Nov 17, 2015 at 3:18 PM, Natalia Bidart
<email address hidden> wrote:
> One more question, I ran:
>
> $ make lint
> make: /home/nessita/canonical/bloodearnest/sso-dev/env/bin/flake8: Command not found
> Makefile.common:152: recipe for target 'lint' failed
>
> any idea what is missing?

Assuming you've bootstrapped (or just run make dependencies), I'm not sure. I
haven't altered any of the dev deps or make targets beyond bootstrap
(which is now dependencies) and make run.

It works for me here.

Revision history for this message
Natalia Bidart (nataliabidart) wrote :

More errors. On the host, on a sso-dev clean branch, I tried:

nessita@dali:~/canonical/bloodearnest/sso-dev$ export I_CAN_HAZ_JUJU=yes
nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
(Makefiles changed - rebuilding lxc target list)
unknown option -- -
usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port]
           [-Q cipher | cipher-auth | mac | kex | key]
           [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
           [-w local_tun[:remote_tun]] [user@]hostname [command]
Makefile:106: recipe for target 'clean' failed
make: *** [clean] Error 255

The main issue I see with these errors is that I have no idea what is going on, nor can I debug it in an easy way.

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Tue, Nov 17, 2015 at 9:05 PM, Natalia Bidart
<email address hidden> wrote:
> More errors. On the host, on a sso-dev clean branch, I tried:
>
> nessita@dali:~/canonical/bloodearnest/sso-dev$ export I_CAN_HAZ_JUJU=yes
> nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
> (Makefiles changed - rebuilding lxc target list)
> unknown option -- -
> usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
> [-D [bind_address:]port] [-E log_file] [-e escape_char]
> [-F configfile] [-I pkcs11] [-i identity_file]
> [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
> [-O ctl_cmd] [-o option] [-p port]
> [-Q cipher | cipher-auth | mac | kex | key]
> [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
> [-w local_tun[:remote_tun]] [user@]hostname [command]
> Makefile:106: recipe for target 'clean' failed
> make: *** [clean] Error 255

Hmm, this suggests your lxd container is not set up correctly; it doesn't
have an IP currently.

The ssh command is on line 106:

@ssh -qtAF $(SSH_CONFIG_FILE) $(IP) -- $(MAKE) $(LXC_MAKE_FLAGS) -C
$(PWD) $@ $(LXC_MAKE_VARS)

I suspect the issue is that the $(IP) is coming out as empty, which is
confusing ssh.

I will try and add an explicit check for an IP address, with a suitable
error message. Previously, I had just been checking that the container was
created, not also that a) it was running and b) it has an IP address, which
are all things required to ssh in, but which can fail.
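
Something along these lines (as in the Makefile in the preview diff below):

    lxc-available: $(LXC_PATH)
        @test -n "$(IP)" || { echo "Cannot find ip address for container $(LXC_NAME) - is it running? Try: make lxc-start"; exit 1; }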

Additionally, clean should be able to be run from host without ssh,
which I will fix

Would echoing the ssh command by default be better? It would make
things clearer in case of error, but at the expense of largely
irrelevant line noise on most command output.

--

Thanks

1371. By Simon Davy

 - make clean now works in host w/o ssh
 - add explicit check for IP address for better error messages

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Wed, Nov 18, 2015 at 10:34 AM, Simon Davy <email address hidden> wrote:
> On Tue, Nov 17, 2015 at 9:05 PM, Natalia Bidart
> <email address hidden> wrote:
>> More errors. On the host, on a sso-dev clean branch, I tried:
>>
>> nessita@dali:~/canonical/bloodearnest/sso-dev$ export I_CAN_HAZ_JUJU=yes
>> nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
>> (Makefiles changed - rebuilding lxc target list)
>> unknown option -- -
>> usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
>> [-D [bind_address:]port] [-E log_file] [-e escape_char]
>> [-F configfile] [-I pkcs11] [-i identity_file]
>> [-L [bind_address:]port:host:hostport] [-l login_name] [-m mac_spec]
>> [-O ctl_cmd] [-o option] [-p port]
>> [-Q cipher | cipher-auth | mac | kex | key]
>> [-R [bind_address:]port:host:hostport] [-S ctl_path] [-W host:port]
>> [-w local_tun[:remote_tun]] [user@]hostname [command]
>> Makefile:106: recipe for target 'clean' failed
>> make: *** [clean] Error 255
>
> Hmm, this suggests your lxd is not set up correctly, it doesn't have
> an IP currently.
>
> The ssh command is on line 106:
>
> @ssh -qtAF $(SSH_CONFIG_FILE) $(IP) -- $(MAKE) $(LXC_MAKE_FLAGS) -C
> $(PWD) $@ $(LXC_MAKE_VARS)
>
> I suspect the issue is that the $(IP) is coming out as empty, which is
> confusing ssh
>
> I will try and add an explicit check for an IP address, with suitable
> error message.

Done. If it can't find an IP address for the container, any command
that needs ssh will fail with a more helpful error.

Another option would be to auto-start the container if it's not started, perhaps?

--
Simon

Revision history for this message
Natalia Bidart (nataliabidart) wrote :

I pulled the changes. A run of make clean without the env var and a run with it both succeeded with the same output:

nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
rm -rf /home/nessita/canonical/bloodearnest/sso-dev/env
rm -rf branches/wheels
rm -rf ../.juju-repo
rm -rf branches/*
rm -rf logs/*.*
rm -rf staticfiles
rm -f lib/versioninfo.py
rm -f targets.mk
find -name '*.pyc' -delete
find -name '*.~*' -delete
nessita@dali:~/canonical/bloodearnest/sso-dev$ export I_CAN_HAZ_JUJU=yes
nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
(Makefiles changed - rebuilding lxc target list)
rm -rf
rm -rf
rm -rf
rm -rf branches/*
rm -rf logs/*.*
rm -rf staticfiles
rm -f lib/versioninfo.py
rm -f targets.mk
find -name '*.pyc' -delete
find -name '*.~*' -delete
nessita@dali:~/canonical/bloodearnest/sso-dev$

Were you expecting this, or is there some randomness around? What confuses me is that I don't see the container in my list of containers:

nessita@dali:~/canonical/bloodearnest/sso-dev$ sudo lxc-ls --fancy
NAME STATE IPV4 IPV6 GROUPS AUTOSTART
----------------------------------------------------------------
cpi-trusty STOPPED - - - NO
juju-trusty-lxc-template STOPPED - - - NO
sca-trusty STOPPED - - - NO
sso-trusty STOPPED - - - NO

Revision history for this message
Natalia Bidart (nataliabidart) wrote :

Replying to:

"FWIW: that script is just:

#!/bin/bash
sudo service gunicorn stop
trap 'sudo service gunicorn start' INT
bash /srv/gunicorn/run_sso_wsgi.sh $@

We could actually get rid of this, perhaps. Some thing like:

run: gunicorn-stop
    @trap 'sudo service gunicorn start' INT;
/srv/gunicorn/run_sso_wsgi.sh --bind=$(ARGS) --reload
--error-logfile=- --access-logfile=- --timeout=99999

would probably work. I can have a look at that if you've prefer to get
rid of the script?"

I would prefer to get rid of the script, mainly because having the script means that when reading the Makefile you have to go and open another script in another folder; and also, sometimes I try to clean up stuff and it is not trivial to know whether a script is used or not, to decide if it can be removed.

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Wed, Nov 18, 2015 at 12:15 PM, Natalia Bidart
<email address hidden> wrote:
> I pulled the changes. A run of make clean without the env var and I run with it both succeded with the same output:
>
> nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
> rm -rf /home/nessita/canonical/bloodearnest/sso-dev/env
> rm -rf branches/wheels
> rm -rf ../.juju-repo
> rm -rf branches/*
> rm -rf logs/*.*
> rm -rf staticfiles
> rm -f lib/versioninfo.py
> rm -f targets.mk
> find -name '*.pyc' -delete
> find -name '*.~*' -delete
> nessita@dali:~/canonical/bloodearnest/sso-dev$ export I_CAN_HAZ_JUJU=yes
> nessita@dali:~/canonical/bloodearnest/sso-dev$ make clean
> (Makefiles changed - rebuilding lxc target list)
> rm -rf
> rm -rf
> rm -rf
> rm -rf branches/*
> rm -rf logs/*.*
> rm -rf staticfiles
> rm -f lib/versioninfo.py
> rm -f targets.mk
> find -name '*.pyc' -delete
> find -name '*.~*' -delete
> nessita@dali:~/canonical/bloodearnest/sso-dev$
>
> Were you expecting this or there is some randomness around?

This is correct. Most make targets are unaltered and shared between
old and new dev envs

> What confuses me is that I don't see the container in my list of containers:
>
> nessita@dali:~/canonical/bloodearnest/sso-dev$ sudo lxc-ls --fancy
> NAME STATE IPV4 IPV6 GROUPS AUTOSTART
> ----------------------------------------------------------------
> cpi-trusty STOPPED - - - NO
> juju-trusty-lxc-template STOPPED - - - NO
> sca-trusty STOPPED - - - NO
> sso-trusty STOPPED - - - NO

lxd and lxc containers are differently managed. Try

lxc list

FYI lxd containers are in /var/lib/lxd/containers, not /var/lib/lxc

--
Simon

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Wed, Nov 18, 2015 at 12:24 PM, Natalia Bidart
<email address hidden> wrote:
> Replying to:
>
> "FWIW: that script is just:
>
> #!/bin/bash
> sudo service gunicorn stop
> trap 'sudo service gunicorn start' INT
> bash /srv/gunicorn/run_sso_wsgi.sh $@
>
> We could actually get rid of this, perhaps. Some thing like:
>
> run: gunicorn-stop
> @trap 'sudo service gunicorn start' INT;
> /srv/gunicorn/run_sso_wsgi.sh --bind=$(ARGS) --reload
> --error-logfile=- --access-logfile=- --timeout=99999
>
> would probably work. I can have a look at that if you've prefer to get
> rid of the script?"
>
> I would prefer to get rid of the script, mainly because having the script means that when reading the Makefile you have to go and open another script in another folder; and also, sometimes I try to clean up stuff and it is not trivial to know whether a script is used or not, to decide if it can be removed.

Ok, I will try this approach.

--
Simon

1372. By Simon Davy

- remove scripts/run_gunicorn.sh, handle trap in makefile instead
- deploy target now has dependencies as a prerequisite
- removed docs about deploy/dependencies targets to avoid confusion - bootstrap should be the main one
- automatically update the sso-charm to latest on deploy

Revision history for this message
Natalia Bidart (nataliabidart) wrote :
Download full text (26.8 KiB)

Thank you for all the answers and improvements. I tried a new run from scratch, issuing:

$ make lxc-delete
$ make clean

$ make lxc-setup
Using saved parent location: bzr+ssh://bazaar.launchpad.net/~bloodearnest/+junk/ols-tools/
No revisions or tags to pull.
./branches/ols-tools/setup-devel-lxd sso /srv/login.ubuntu.com/devel
Using lxc sso
Checking ssh key set up...done
Setting up posgtresql init on /dev/shm on boot...done
Device .home removed from sso
Device .src removed from sso
Device .logs removed from sso
Device .etc removed from sso
Device .home added to sso
Device .src added to sso
Device .logs added to sso
Device .etc added to sso
Creating fake tarball...done.
Setting up nessita for passwordless sudo...done.
Removing landscape-common...done.
Checking for network access...done.
Updating package index...done.
Installing ppa deps...done.
Installing juju stable ppa...done.
Updating package index (again)...done.
Updating system packages...done.
Installing required packages...done.
Cleaning up packages...done.
Configuring lxc sshd...done.
Restarting ssh...done.
Updating sso juju environment to use lxc ip 10.0.3.41
Checking for basenode access...done
Checking for CAT access...done.
Updating basenode...done.
Installing basenode...done.

And then make bootstrap, which fails as follows (I'm connected to the VPN):

nessita@dali:~/canonical/bloodearnest/sso-dev$ make bootstrap
cat dependencies.txt | xargs sudo apt-get install -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-amqplib is already the newest version.
python-bson is already the newest version.
python-iso8601 is already the newest version.
python-launchpadlib is already the newest version.
python-m2crypto is already the newest version.
python-memcache is already the newest version.
python-psutil is already the newest version.
python-psycopg2 is already the newest version.
python-simplejson is already the newest version.
python-testresources is already the newest version.
python-bcrypt is already the newest version.
python-beautifulsoup is already the newest version.
python-virtualenv is already the newest version.
bzr is already the newest version.
python-lxml is already the newest version.
python-tz is already the newest version.
python-yaml is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded.
cat dependencies-devel.txt | xargs sudo apt-get install -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
build-essential is already the newest version.
libxslt1-dev is already the newest version.
make is already the newest version.
memcached is already the newest version.
python-dev is already the newest version.
screen is already the newest version.
swig is already the newest version.
config-manager is already the newest version.
firefox is already the newest version.
gettext is already the newest version.
libpq-dev is already the newest version.
libxml2-dev is already the newest version.
postgresql-client-9.3 is already the newest version.
postgresql-contrib-9....

1373. By Simon Davy

 - fix jenv file not being deleted
 - fix LXC_MAKE_ARGS partial rename
 - run db-reset on lxc-start, to cope with shared memory

Revision history for this message
Simon Davy (bloodearnest) wrote :

>> === modified file 'src/identityprovider/tests/test_command_cleanup.py'
>> --- src/identityprovider/tests/test_command_cleanup.py 2015-04-30 17:20:09 +0000
>> +++ src/identityprovider/tests/test_command_cleanup.py 2015-11-17 13:23:47 +0000
>> @@ -48,8 +48,8 @@
>>
>> def make_test_accounts(self, count=0, date_created=None):
>> for i in xrange(count):
>> - email = self.factory.make_email_address(prefix='isdtest+',
>> - domain='canonical.com')
>> + email = self.factory.make_email_address(prefix='testemail+',
>> + domain='example.com')
>
> What's the rationale for this change? The job needs to clear from the DB all email addressed created by the acceptance tests, which use the isdtest+ prefix, so changing it will not trigger the right cleanup.

So, one of the things I tried to do was "normalise" the dev acceptance
test configuration, so it could just be part of the default devel
configuration. This would mean that a) you wouldn't need access to the
config branch to run acceptance tests and b) you could run the acceptance
tests against the dev server OOTB with no config changes, which
simplified stuff a lot.

The only thing in the *dev* acceptance settings that was at all
sensitive was the isdtest email stuff, as it might leak info about our
production config.

Now, the isdtest email stuff was already in the code base anyway (like
in the above test), but I went ahead and made the default dev config
use <email address hidden>, and updated some tests that were expecting
other config.

I don't *think* this will have broken the clean up job, as that uses
the configured emails, which are unchanged in prod or staging.

But maybe I missed something?

--
Thanks

1374. By Simon Davy

make clean now works properly again

1375. By Simon Davy

merging

Revision history for this message
Daniel Manrique (roadmr) wrote :

Poke... what's the status on this? Simon, could I at least request, if feasible, for you to merge trunk into this? I did so but there was a (minor) conflict, I think I solved it well enough that my local copy is working but I'd be more at ease if I knew you did the right thing while merging :)

The actual reason I'm asking is this: I tried running acceptance tests in my old-style lxc setup and failed miserably. The set of config directives I used to just inject in my ../local_config/settings.py no longer "just works", and after spending a couple of hours trying, I gave up and decided to give sso-dev another whirl. Acceptance tests worked magically out of the box. So I'm wondering if perhaps the work done to remove the settings-by-symlink workflow broke the old way of doing acceptance testing.

Does this make sense to you? What do I need to do now in order to run acceptance tests locally? OTOH if it's easier to just use sso-dev, that's fair enough, but we'd need to push for this to land so that people wanting to run acceptance aren't left in the same limbo as me (or I was just being extra dumb today - always a possibility, particularly on Mondays).

Thanks and apologies for the rant!

1376. By Simon Davy

merging

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Mon, Dec 7, 2015 at 10:37 PM, Daniel Manrique
<email address hidden> wrote:
> Poke... what's the status on this? Simon, could I at least request, if feasible, for you to merge trunk into this?

Done. All tests passing.

> I did so but there was a (minor) conflict, I think I solved it well enough that my local copy is working but I'd be more at ease if I knew you did the right thing while merging :)

Yep, it was just bzr merge failing hard on trivial changes, for
unknown reasons :)

> The actual reason I'm asking is this: I tried running acceptance tests in my old-style lxc setup and failed miserably. The set of config directives I used to just inject in my ../local_config/settings.py no longer "just works", and after spending a couple of hours trying, I gave up and decided to give sso-dev another whirl. Acceptance tests worked magically out of the box. So I'm wondering if perhaps the work done to remove the settings-by-symlink workflow broke the old way of doing acceptance testing.

Hmm, it shouldn't have. ../local_config/settings.py is still used, but
just via PYTHONPATH, not a symlink. I did test the acceptance tests
with that change, without issue. I will try again.

However, the old way of running acceptance tests (via run-tests and
scripts/acceptance-tests) mv's ../local_config/settings.py out of the
way, writes its own version there, runs the server and tests, and then
copies back your old settings.py. So, to my knowledge, you've never
been able to manually tweak ../local_config/settings.py when running
tests this way.

However, if you run the server yourself manually, and run the tests
separately with make run-acceptance, then your local settings.py is
used (and you'll need to have the acceptance relations

The sso-dev work removes all this complexity. There is now no difference
between the devel settings and the acceptance test settings, so no file
mv'ing is needed. It also already has an sso server and mail server
running, so everything is much simpler when it comes to running them.

> Does this make sense to you? What do I need to do now in order to run acceptance tests locally? OTOH if it's easier to just use sso-dev, that's fair enough, but we'd need to push for this to land so that people wanting to run acceptance aren't left in the same limbo as me (or I was just being extra dumb today - always a possibility, particularly on Mondays).

I would like to land, but still waiting on review approval, there's
some outstanding comments. I will ask Natalia to have a look again.

> Thanks and apologies for the rant!

No worries - sorry it's not landed yet :(

--
Simon

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Tue, Dec 8, 2015 at 11:18 AM, Simon Davy <email address hidden> wrote:
> On Mon, Dec 7, 2015 at 10:37 PM, Daniel Manrique
> <email address hidden> wrote:
>> Poke... what's the status on this? Simon, could I at least request, if feasible, for you to merge trunk into this?
>
> Done. All tests passing.
>
>> I did so but there was a (minor) conflict, I think I solved it well enough that my local copy is working but I'd be more at ease if I knew you did the right thing while merging :)
>
> Yep, it was just bzr merge failing hard on trivial changes, for
> unknown reasons :)
>
>> The actual reason I'm asking is this: I tried running acceptance tests in my old-style lxc setup and failed miserably. The set of config directives I used to just inject in my ../local_config/settings.py no longer "just works", and after spending a couple of hours trying, I gave up and decided to give sso-dev another whirl. Acceptance tests worked magically out of the box. So I'm wondering if perhaps the work done to remove the settings-by-symlink workflow broke the old way of doing acceptance testing.
>
> Hmm, it shouldn't have. ../local_config/settings.py is still used, but
> just via PYTHONPATH, not a symlink. I did test the acceptances tests
> with that change, with out issue. I will try again.

Using ./run-tests dev, the acceptance tests ran fine for me on r1374

Thanks

--
Simon

1377. By Simon Davy

merging properly this time

Revision history for this message
Daniel Manrique (roadmr) wrote :

Argh, so I did ./run-tests dev (note, using trunk, not sso-dev) and it still won't run acceptance :( (r1376 here).

What I'm seeing is that ./run-tests dev starts by unconditionally bootstrapping, which I think destroys my ../local_config/settings.py, so by the time the acceptance tests start, the config is all borked and obviously they fail.

No worries though, I have a reasonable workaround by using sso-dev whenever I need acceptance tests run ;) (which is not often, really).

1378. By Simon Davy

merging

1379. By Simon Davy

merging

Revision history for this message
Daniel Manrique (roadmr) wrote :

Ohnoes... here comes Daniel with more crazy problems...

Apparently once the container is shut down and restarted, part of the db configuration changes, so on next boot it fails with (ultimately) this:

django.db.utils.OperationalError: FATAL: password authentication failed for user "ssoadmin"

How to repro (I_CAN_HAZ_JUJU assumed to be set at all times):

(maybe delete any old sso containers with lxc delete)
(set up my crazy /src mount in the ols-dev profile)
get a checkout of sso-dev
merge latest trunk (I did this myself, maybe I screwed up?)
make lxc-setup
make bootstrap
ssh into container, change to /src/blahblah/sso-dev dir
make test ARGS=some.quick.test
*it works*
sudo poweroff (the container!)
lxc start sso
wait until juju stabilizes (on startup a series of config-changeds fire up)
ssh into container, change to /src/blahblah/sso-dev dir
make test ARGS=some.quick.test
* now it fails with the above-shown message *

Logging out of the container and redoing "make bootstrap" from the host brings things to a working state again, but I seem to recall this shouldn't be necessary.

Revision history for this message
Simon Davy (bloodearnest) wrote :

On Wed, Jan 6, 2016 at 10:16 PM, Daniel Manrique
<email address hidden> wrote:
> Ohnoes... here comes Daniel with more crazy problems...
>
> Apparently once the container is shut down and restarted, part of the db configuration changes, so on next boot it fails with (ultimately) this:
>
> django.db.utils.OperationalError: FATAL: password authentication failed for user "ssoadmin"
>
> How to repro (I_CAN_HAZ_JUJU assumed to be set at all times):
>
>
> (maybe delete any old sso containers with lxc delete)
> (set up my crazy /src mount in the ols-dev profile)
> get a checkout of sso-dev
> merge latest trunk (I did this myself, maybe I screwed up?)
> make lxc-setup
> make bootstrap
> ssh into container, change to /src/blahblah/sso-dev dir
> make test ARGS=some.quick.test
> *it works*
> sudo poweroff (the container!)
> lxc start sso
> wait until juju stabilizes (on startup a series of config-changeds fire up)
> ssh into container, change to /src/blahblah/sso-dev dir
> make test ARGS=some.quick.test
> * now it fails with the above-shown message *
>
> Logging out of the container and redoing "make bootstrap" from the host brings things to a working state again, but I seem to recall this shouldn't be necessary.

It shouldn't. I will look into it, and merge trunk in the process

FYI, there are lxc-start and lxc-stop make commands, which I would
encourage using rather than sudo poweroff.

--
Simon

1380. By Simon Davy

merging

1381. By Simon Davy

merging

1382. By Simon Davy

merging

1383. By Simon Davy

fix some small issues from user testing

Revision history for this message
Daniel Manrique (roadmr) wrote :

Hello :)

Unmerged revisions

1383. By Simon Davy

fix some small issues from user testing

1382. By Simon Davy

merging

1381. By Simon Davy

merging

1380. By Simon Davy

merging

1379. By Simon Davy

merging

1378. By Simon Davy

merging

1377. By Simon Davy

merging properly this time

1376. By Simon Davy

merging

1375. By Simon Davy

merging

1374. By Simon Davy

make clean now works properly again

Preview Diff

1=== modified file '.bzrignore'
2--- .bzrignore 2015-10-09 16:03:29 +0000
3+++ .bzrignore 2016-03-04 19:06:06 +0000
4@@ -7,4 +7,8 @@
5 node_modules
6 staticfiles
7 lib/versioninfo.py
8-settings.py
9+.juju-repo
10+EXTRACTED
11+django_project/settings.py
12+django_project/settings_local.py
13+targets.mk
14
15=== added file 'Makefile'
16--- Makefile 1970-01-01 00:00:00 +0000
17+++ Makefile 2016-03-04 19:06:06 +0000
18@@ -0,0 +1,162 @@
19+.DEFAULT_GOAL = help
20+# disable builtin rules, which are not used
21+.SUFFIXES:
22+
23+
24+# sso specific config
25+SSO_LXC ?= sso
26+# base dir inside lxc
27+LXC_BASEDIR = /srv/login.ubuntu.com/devel
28+CODE_DIR=$(LXC_BASEDIR)/code/devel
29+
30+# shared config between all dev envs
31+WHEELS_BRANCH_URL ?= lp:~ubuntuone-pqm-team/$(PROJECT_NAME)/dependencies
32+WHEELS_DIR = branches/wheels
33+ENV = $(CURDIR)/env
34+
35+# commands that can be run on host or container
36+clean:
37+ rm -rf $(ENV)
38+ rm -rf $(WHEELS_DIR)
39+ rm -rf $(JUJU_REPOSITORY)
40+ rm -rf branches/*
41+ rm -rf logs/*.*
42+ rm -rf staticfiles
43+ rm -f lib/versioninfo.py
44+ rm -f targets.mk
45+ find -name '*.pyc' -delete
46+ find -name '*.~*' -delete
47+
48+
49+ifeq ("$(I_CAN_HAZ_JUJU)","yes")
50+
51+LXC_NAME = $(SSO_LXC)
52+JUJU_ENV = $(LXC_NAME)
53+JUJU_HOME ?= $(HOME)/.juju
54+JENV = $(JUJU_HOME)/environments/$(JUJU_ENV).jenv
55+JUJU_REPOSITORY ?= $(CURDIR)/.juju-repo
56+JUJU_HOME ?= $(HOME)/.juju
57+export JUJU_REPOSITORY JUJU_ENV
58+
59+OLS_TOOLS_BRANCH ?= lp:~bloodearnest/+junk/ols-tools
60+TOOLS = branches/ols-tools
61+LXC_PATH = /var/lib/lxd/containers/$(LXC_NAME)
62+LXC_STATE = $(shell lxc info $(LXC_NAME) | grep Status | awk '{ print $$2 }')
63+IP = $(shell lxc info $(LXC_NAME) | grep eth0 | grep -v inet6 | awk '{ print $$3 }')
64+HOST = $(shell hostname)
65+SSH_CONFIG_FILE ?= django_project/ssh_config
66+LXC_TARGETS ?= targets.mk
67+# flags/vars for passing to make invocation inside lxc
68+LXC_MAKE_FLAGS=--no-print-directory --no-builtin-rules
69+LXC_MAKE_VARS = "I_CAN_HAZ_JUJU=yes"
70+ERR_LOG=$(shell mktemp)
71+
72+APT_PROXY ?=
73+export APT_PROXY
74+
75+# multiline error message
76+define RUN_LXC_SETUP
77+
78+lxc container $(LXC_NAME) not found
79+Please run the following to create it:
80+make lxc-setup
81+endef
82+
83+
84+ifneq ($(HOST),$(LXC_NAME))
85+# We are on the host machine, so ssh to the lxc and run the command
86+
87+# use the container dir as a fs-level proxy to whether the lxc exists
88+$(LXC_PATH):
89+ $(error $(RUN_LXC_SETUP))
90+
91+lxc-available: $(LXC_PATH)
92+ @test -n "$(IP)" || { echo "Cannot find ip address for container $(LXC_NAME) - is it running? Try: make lxc-start"; exit 1; }
93+
94+# sadly (or maybe not), these can't be in config-manager.txt, as we need to
95+# bootstrap to get those, and we can't bootstrap until we have an lxc.
96+## fetch the shared ols-tools branch
97+$(TOOLS):
98+ @[ -d $@ ] && bzr pull -d $@ || bzr branch $(OLS_TOOLS_BRANCH) $@
99+.PHONY: $(TOOLS)
100+
101+# this is idempotent (kinda), can be rerun to update the lxc
102+## create/update the development lxc
103+lxc-setup: $(TOOLS)
104+ ./branches/ols-tools/setup-devel-lxd $(LXC_NAME) $(LXC_BASEDIR)
105+
106+## start the lxc and reset the db
107+lxc-start: $(LXC_PATH)
108+ @[ "$(LXC_STATE)" = "Running" ] && echo "lxc $(LXC_NAME) already running" || (echo "Starting lxc $(LXC_NAME)" && lxc start $(LXC_NAME) && sleep 3)
109+ @# reset db, as it's on shared memory, and thus been wiped
110+ $(MAKE) db-reset
111+
112+## stop the lxc, including the juju env
113+lxc-stop: $(LXC_PATH)
114+ @[ "$(LXC_STATE)" != "Running" ] && echo "lxc $(LXC_NAME) already stopped" || (echo "Stopping lxc $(LXC_NAME)" && lxc stop $(LXC_NAME))
115+
116+## stop and delete the lxc
117+lxc-delete: lxc-stop
118+ @echo Destroying lxc $(LXC_NAME)
119+ @lxc delete $(LXC_NAME)
120+ @rm -f $(JENV) # juju env is gone now, so clean up
121+
122+## convenient shortcut to ssh in
123+ssh: lxc-available
124+ JUJU_ENV=$(JUJU_ENV) ssh -tA -F $(SSH_CONFIG_FILE) $(IP) 'cd $(PWD) ; exec "$$SHELL" -l'
125+
126+# TODO: an easy way to set env vars inside the lxc
127+# an explicit list of envvars to transfer as make args
128+# if args is not present, don't set it, or else we override the default to be empty
129+ifdef ARGS
130+LXC_MAKE_VARS += ARGS=\"$(ARGS)\"
131+endif
132+ifdef DJANGO_SETTINGS_MODULE
133+LXC_MAKE_VARS += DJANGO_SETTINGS_MODULE=$(DJANGO_SETTINGS_MODULE)
134+endif
135+ifdef SST_BASE_URL
136+LXC_MAKE_VARS += SST_BASE_URL=$(SST_BASE_URL)
137+endif
138+
139+
140+# These rules build a list of the makefile targets that are available in the lxc
141+# to use for providing tab completion on the host.
142+# They utilise the special re-making behaviour where make will first try and
143+# build any targets that are also included Makefiles, then restart parsing.
144+# So in this case, make will try and build $(LXC_TARGETS) first, then
145+# include it
146+# https://www.gnu.org/software/make/manual/html_node/Remaking-Makefiles.html
147+$(LXC_TARGETS): Makefile Makefile.common Makefile.juju
148+ @echo "(Makefiles changed - rebuilding lxc target list)"
149+ @echo -n "TARGETS=" > $(LXC_TARGETS)
150+ @echo '$(shell make -nrRpq -f Makefile.juju -f Makefile.common | egrep -o "^[a-zA-Z0-9._/-]+:" | grep -v "^\." | grep -v Makefile | cut -d: -f1 | sort -u)' >> $(LXC_TARGETS)
151+
152+# the - means its absence is ignored on the first pass
153+-include $(LXC_TARGETS)
154+
155+# We can now guarantee the file target list is generated and up to date
156+$(TARGETS): lxc-available
157+ @ssh -qtAF $(SSH_CONFIG_FILE) $(IP) -- $(MAKE) $(LXC_MAKE_FLAGS) -C $(PWD) $@ $(LXC_MAKE_VARS)
158+
159+else # we are in the lxc
160+
161+# here to avoid overriding non-juju config. Can be moved once juju is default
162+DJANGO_SETTINGS_MODULE ?= django_project.settings_devel_juju
163+
164+include Makefile.common
165+include Makefile.juju
166+
167+endif
168+
169+else # we are not using juju
170+
171+include Makefile.common
172+include Makefile.db
173+
174+bootstrap: $(LOCAL_SETTINGS_PATH) virtualenv sourcedeps install-wheels-dev
175+
176+run: ARGS=0.0.0.0:8000
177+run:
178+ $(ENV)/bin/gunicorn django_project.wsgi:application --workers=2 --reload --pid=logs/gunicorn.pid --bind=$(ARGS) --timeout=99999
179+
180+endif
181
182=== renamed file 'Makefile' => 'Makefile.common'
183--- Makefile 2016-02-25 17:59:36 +0000
184+++ Makefile.common 2016-03-04 19:06:06 +0000
185@@ -1,12 +1,11 @@
186-# Copyright (C) 2014 Canonical Ltd.
187-
188+# Copyright (C) 2015 Canonical Ltd.
189+#
190 CM ?= /usr/lib/config-manager/cm.py
191 CONFIGMANAGER = config-manager.txt
192 CONN_CHECK_CONFIG_PATH ?= /tmp/sso_conn_check_config.yaml
193 DJANGO_SETTINGS_MODULE ?= settings
194-ENV = $(CURDIR)/env
195 JUJU_ENV ?= local
196-JUJU_REPO ?= ../.juju-repo
197+JUJU_REPOSITORY ?= ../.juju-repo
198 LOCAL_SETTINGS_DIR ?= $(CURDIR)/../local_config
199 LOCAL_SETTINGS_PATH = $(LOCAL_SETTINGS_DIR)/settings.py
200 FLAKE8 = $(ENV)/bin/flake8
201@@ -21,13 +20,11 @@
202 TRANSLATIONS_DIR = $(CURDIR)/branches/translations/po
203 # this is a bit hackey, could be an external script in python (Simon)
204 UNITTEST_TARGETS = $(shell find $(SRC_DIR) -name test_\*.py | sed 's+\(^$(SRC_DIR)/\|.py\)++g' | sed 's+/+.+g')
205-WHEELS_BRANCH_URL ?= lp:~ubuntuone-pqm-team/$(PROJECT_NAME)/dependencies
206-WHEELS_DIR = branches/wheels
207 STATIC_ROOT ?= $(shell grep -R STATIC_ROOT django_project/settings_base.py | awk '{print $$3}')
208 # Create archives in labelled directories (ie. r27/$(PROJECT_NAME).tbz2)
209 TARBALL_BUILD_LABEL ?= $(shell bzr version-info --custom --template="r{revno}\n")
210 TARBALL_FILE_NAME = $(PROJECT_NAME).tbz2
211-TARBALL_BUILDS_DIR ?= $(JUJU_REPO)/builds
212+TARBALL_BUILDS_DIR ?= $(JUJU_REPOSITORY)/builds
213 TARBALL_BUILD_DIR = $(TARBALL_BUILDS_DIR)/$(TARBALL_BUILD_LABEL)
214 TARBALL_BUILD_PATH = $(TARBALL_BUILD_DIR)/$(TARBALL_FILE_NAME)
215
216@@ -59,13 +56,16 @@
217 @DJANGO_SETTINGS_MODULE=django_project.settings_build $(MAKE) compilemessages
218
219 # note: fetch-sourcedeps needs to be manually called before this will work.
220+## build a deployment tarball
221 build-tarball: install-wheels $(TARBALL_BUILD_PATH)
222
223+## write the bzr revno to lib/versioninfo.py
224 version:
225 bzr version-info --format=python > lib/versioninfo.py
226
227 ### Wheels ###
228
229+## create/reinitialise the virtualenv
230 virtualenv:
231 @virtualenv --clear --system-site-packages $(ENV)
232
233@@ -73,24 +73,33 @@
234 update-wheels: virtualenv
235 $(PIP) wheel -w $(WHEELS_DIR) -f $(WHEELS_DIR) $(ARGS)
236
237+## install wheel dependencies
238 install-wheels: ARGS=-r requirements.txt
239 install-wheels: virtualenv
240 $(PIP) install --find-links=$(WHEELS_DIR) --no-index $(ARGS)
241
242+## install development wheel dependencies
243 install-wheels-dev: virtualenv
244 $(MAKE) install-wheels ARGS="-r requirements_devel.txt --pre"
245
246+## setup localmail server in container
247+localmail:
248+ ./scripts/setup-localmail.sh
249+
250 ### dependencies ###
251
252 branches/last_build: $(CONFIGMANAGER)
253 $(CM) update $(CONFIGMANAGER)
254 touch $@
255
256+## update wheels branch
257 wheels:
258 [ -d $(WHEELS_DIR) ] && (cd $(WHEELS_DIR) && bzr pull) || (bzr branch $(WHEELS_BRANCH_URL) $(WHEELS_DIR))
259
260+## fetch sourcedeps and wheels branch
261 fetch-sourcedeps: branches/last_build wheels version
262
263+## fetch sourcedeps
264 sourcedeps: fetch-sourcedeps install-wheels
265
266 ### local config ###
267@@ -101,23 +110,8 @@
268 $(LOCAL_SETTINGS_PATH): $(LOCAL_SETTINGS_DIR)
269 @cp django_project/settings.py.example $(LOCAL_SETTINGS_PATH)
270
271-### for devel: start and stop the local postgresql DB ###
272-
273-include Makefile.db
274-
275 ### what you really need ###
276
277-bootstrap: $(LOCAL_SETTINGS_PATH) virtualenv sourcedeps install-wheels-dev
278-
279-clean:
280- rm -rf $(ENV)
281- rm -rf $(WHEELS_DIR)
282- rm -rf branches/*
283- rm -rf staticfiles
284- rm -f lib/versioninfo.py
285- find -name '*.pyc' -delete
286- find -name '*.~*' -delete
287-
288 conn-check:
289 @echo "Generating conn-check config from Django settings.."
290 @sudo -u $$(ls -ld $(CURDIR) | awk '{print $$3}') PYTHONPATH=$(PYTHONPATH) \
291@@ -142,17 +136,13 @@
292 # Check python files (except for the south migration files that are not pep8
293 # compliant and probably never will).
294 lint:
295- @$(FLAKE8) --exclude='migrations' \
296+ @$(FLAKE8) --exclude='migrations,django_project/settings.py' \
297 --filename='*.py' \
298 src/ django_project/ scripts/
299
300 manage:
301 $(DJANGO_MANAGE) $(ARGS)
302
303-run: ARGS=0.0.0.0:8000
304-run: collectstatic
305- $(ENV)/bin/gunicorn django_project.wsgi:application --workers=2 --reload --pid=logs/gunicorn.pid --bind=$(ARGS) --timeout=99999 --error-logfile=- --access-logfile=-
306-
307 start: bootstrap start-db
308
309 stop: stop-db
310@@ -186,4 +176,21 @@
311 cd -; \
312 done
313
314-.PHONY: clean conn-check version makemessages compilemessages collectstatic run-acceptance docs
315+## display this help message
316+help:
317+ $(info Available targets)
318+ @awk '/^[a-zA-Z\-\_0-9]+:/ { \
319+ nb = sub( /^## /, "", helpMsg ); \
320+ if(nb == 0) { \
321+ helpMsg = $$0; \
322+ nb = sub( /^[^:]*:.* ## /, "", helpMsg ); \
323+ } \
324+ if (nb) \
325+ printf "\033[1;31m%-" width "s\033[0m %s\n", $$1, helpMsg; \
326+ } \
327+ { helpMsg = $$0 }' \
328+ width=$$(grep -o '^[a-zA-Z_0-9]\+:' $(MAKEFILE_LIST) | wc -L) \
329+ $(MAKEFILE_LIST)
330+
331+
332+.PHONY: conn-check version makemessages compilemessages collectstatic run-acceptance docs
333
334=== added file 'Makefile.juju'
335--- Makefile.juju 1970-01-01 00:00:00 +0000
336+++ Makefile.juju 2016-03-04 19:06:06 +0000
337@@ -0,0 +1,120 @@
338+# sso specific config
339+CHARM_SERIES = trusty
340+CHARM = canonical-identity-provider
341+CHARM_BRANCH = lp:~ubuntuone-pqm-team/canonical-is-charms/canonical-identity-provider
342+CHARM_SRC = ../sso-charm
343+DEPLOYER_CONFIG = $(CURDIR)/django_project/deploy.yaml
344+DEPLOYER_TARGET = sso-app-dev
345+
346+
347+LOG_DIRS ?= ./logs/www-oops ./logs/schema-updates
348+ABS_CHARM_SRC = $(shell readlink -f $(CHARM_SRC))
349+LOCAL_JUJU_CONFIG = .local.yaml
350+export I_CAN_HAZ_JUJU=yes
351+
352+# python config, overriding the non-juju config
353+PYTHONPATH := $(LXC_BASEDIR)/etc:$(SRC_DIR):$(LIB_DIR):$(CURDIR)
354+export PYTHONPATH
355+
356+
357+
358+$(CHARM_SRC):
359+ @test -d $@ && bzr pull -d $@ || bzr branch $(CHARM_BRANCH) $@
360+
361+
362+# | is order only dependency - means if the dir is present, we're good. no need to rebuild
363+# https://www.gnu.org/software/make/manual/html_node/Prerequisite-Types.html
364+$(JUJU_REPOSITORY): $(CHARM_SRC)
365+ @mkdir -p $(JUJU_REPOSITORY)/$(CHARM_SERIES)
366+ @rm -f $(JUJU_REPOSITORY)/$(CHARM_SERIES)/$(CHARM)
367+ @ln -sf $(ABS_CHARM_SRC) $(JUJU_REPOSITORY)/$(CHARM_SERIES)/$(CHARM)
368+
369+# make sure log subdirs are created with user ownership, or else they are owned by root.
370+$(LOG_DIRS):
371+ mkdir -p $@
372+ sudo chown $$USER:$$USER $@
373+
374+$(LOCAL_JUJU_CONFIG):
375+ @echo "$(DEPLOYER_TARGET): {services: {sso-app: {options: { user: $$USER }}}}" > $(LOCAL_JUJU_CONFIG)
376+.INTERMEDIATE: $(LOCAL_JUJU_CONFIG)
377+.DELETE_ON_ERROR: $(LOCAL_JUJU_CONFIG)
378+
379+
380+# we need to stop gunicorn so we can relink the python in the venv
381+gunicorn-start:
382+ @test -f /etc/init/gunicorn.conf && sudo service gunicorn start || true
383+gunicorn-stop:
384+ @test -f /etc/init/gunicorn.conf && { sudo service gunicorn stop; sleep 1; } || true
385+
386+dependencies: system-dependencies system-dependencies-devel gunicorn-stop virtualenv sourcedeps install-wheels-dev gunicorn-start
387+
388+system-dependencies: dependencies.txt
389+ cat dependencies.txt | xargs sudo apt-get install -y
390+
391+system-dependencies-devel: dependencies-devel.txt
392+ @cat dependencies-devel.txt | xargs sudo apt-get install -y
393+
394+## install dependencies and deploy charms
395+bootstrap: dependencies deploy
396+
397+# use jenv file as indication the environment is up or not
398+$(JENV):
399+ juju bootstrap --upload-tools
400+
401+# clean up any stale jenv files
402+clean-old-jenv:
403+ @test -f $(JENV) && grep -q $(shell hostname -I) $(JENV) || rm -f $(JENV)
404+
405+juju-bootstrap: clean-old-jenv $(JENV)
406+
407+deploy: dependencies juju-bootstrap $(JUJU_REPOSITORY) $(LOCAL_JUJU_CONFIG) | $(LOG_DIRS)
408+ juju-deployer --local-mods -w 15 -s 0 --debug -c $(CHARM_SRC)/deploy.yaml -c $(DEPLOYER_CONFIG) -c $(LOCAL_JUJU_CONFIG) $(DEPLOYER_TARGET)
409+ $(MAKE) setup-db localmail I_CAN_HAZ_JUJU=yes
410+
411+## destroy juju environment
412+juju-destroy:
413+ juju destroy-environment $(JUJU_ENV) --force || true
414+ @# work around manual provider rubbishness, and remove charms
415+ @echo Cleaning up manual provider leftovers
416+ @sudo rm -rf /var/lib/juju/*
417+ @sudo rm -f /etc/init/juju*
418+ @sudo killall -q jujud || true
419+ @sudo killall -q mongod || true
420+ @# clean up any charm left overs
421+ @sudo rm -f /etc/init/gunicorn*
422+ @sudo rm -f /etc/init.d/gunicorn*
423+ @sudo rm -rf /srv/gunicorn /srv/conn-check
424+
425+## destroy juju environment and rebuild
426+redeploy: juju-destroy deploy
427+
428+# preserving old alias
429+setup-db reset-db: db-setup
430+
431+## setup a clean database
432+db-setup db-reset:
433+ @# protect for when the juju env is not up yet
434+ juju ssh sso-postgresql/0 true &>/dev/null && { $(CHARM_SRC)/scripts/create-dev-db; $(MAKE) migrate; }
435+
436+## migrate the db
437+migrate:
438+ $(DJANGO_MANAGE) migrate --noinput
439+
440+# this stops the system gunicorn and runs it manually, restarting on exit
441+## run server in foreground for debugging
442+run: ARGS=0.0.0.0:8080
443+run: GUNICORN_DEVEL_ARGS=--reload --error-logfile=- --access-logfile=- --timeout=99999
444+run: gunicorn-stop
445+ @trap 'sudo service gunicorn start' INT; bash /srv/gunicorn/run_sso_wsgi.sh --bind=$(ARGS) $(GUNICORN_DEVEL_ARGS)
446+
447+## run acceptance tests in development
448+acceptance-dev:
449+ ./scripts/acceptance-dev-juju.sh $(ARGS)
450+.PHONY: acceptance-dev
451+
452+acceptance-db: db-reset
453+ $(DJANGO_MANAGE) loaddata test
454+ $(DJANGO_MANAGE) create_test_team
455+ # add openid RP config
456+ # we need this to explicitly allow unverified logins
457+ $(DJANGO_MANAGE) add_openid_rp_config $$SST_BASE_URL/consumer --allow-unverified --allowed-user-attribs=\"fullname,nickname,email,language,account_verified\"
458
459=== added file 'README.juju'
460--- README.juju 1970-01-01 00:00:00 +0000
461+++ README.juju 2016-03-04 19:06:06 +0000
462@@ -0,0 +1,220 @@
463+=============================
464+Development environment setup
465+=============================
466+
467+Getting started
468+===============
469+
470+Note: currently you need to set I_CAN_HAZ_JUJU=yes in the environment to enable the juju dev env.
471+
472+0. Get the code
473+::
474+
475+.. note::
476+ You need to install, configure and setup bzr and ssh keys. See BAZAAR below.
477+
478+ bzr branch lp:canonical-identity-provider
479+
480+
481+1. Install lxd
482+::
483+
484+ The supported way to develop Canonical Identity Provider is using Juju and LXD
485+
486+ Follow the instructions here to install/configure lxd
487+
488+ https://linuxcontainers.org/lxd/getting-started-cli/
489+
490+ Optional: you don't need juju on the host (as it will be installed in the
491+ lxc), but it is useful to be able to query the juju system direct from the
492+ host
493+
494+ sudo apt-get install juju-core
495+
496+2. Create the lxd container.
497+::
498+
499+ The following command will create an lxd container, in which everything
500+ will be deployed. This will create the container if it does not exist,
501+ update it, install dependencies, and set up juju properly. It mounts the
502+ current directory into the container, so you can edit your code on the
503+ host. It also mirrors your user and mounts your home dir into the
504+ container. This allows conveniences like ssh/bzr/bash configs to be used
505+ inside the container. Some important configuration options that can be
506+ configured via env vars or make args:
507+
508+ SSO_LXC - name of lxd container and juju environment. Default: sso
509+
510+ APT_PROXY - http://url of local apt proxy, highly recommended to have one
511+ set up. Default: none
512+
513+ DEVELOPER_PACKAGES - list of additional packages to install. Default: none
514+
515+ $ cd canonical-identity-provider
516+ $ make lxc-setup [APT_PROXY=http://...]
517+
518+.. note::
519+
520+ You can access the container via the normal lxd mechanism (e.g lxc
521+ exec), or via ssh, or just do
522+
523+ make ssh
524+
525+ If you create an lxd profile called ols-dev, this will be applied to the
526+ container. This is useful if you have an unusual $HOME set up, amongst
527+ other things. For example, to have all your lxd containers for OLS
528+ projects have a special mount into your $HOME dir:
529+
530+ lxc profile create ols-dev
531+ lxc profile device add ols-dev <name> disk source=/mnt/encrypted path=$HOME/work
532+
533+ By default, the development environment uses django_project/ssh_config
534+ as its ssh config file. This is to provide a good default experience.
535+ However, if you wish to configure ssh into the container yourself, you
536+ can set SSH_CONFIG_FILE env var or make argument. Note: agent
537+ forwarding is required, and is hardcoded by cli arguments in the
538+ makefiles.
539+
540+ You should start, stop or destroy the container using the following:
541+
542+ make lxc-stop
543+ make lxc-start
544+ make lxc-delete # this will wipe everything!
545+
546+
547+3. Bootstrap the project
548+
549+ This will fetch python dependencies, and install them into a virtualenv. It
550+ will also check out the juju charm, and bootstrap/deploy a juju environment
551+ in the container, including postgresql, memcached, and gunicorn. Both of
552+ these steps take a few minutes each the first time you do them, but
553+ subsequent bootstraps are much quicker. When complete, the service should
554+ be accessible on port 8080 of the container.
555+
556+ $ make bootstrap
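+
+ A quick way to confirm the deployment from the host is to look up the
+ container's address and hit port 8080 (assumes the default container name
+ "sso" and that curl is installed):
+
+ $ lxc list sso
+ $ curl -I http://<container-ip>:8080/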
557+
558+.. note::
559+
560+ For more commands, see
561+
562+ $ make help
563+
564+
565+4. Run the tests
566+::
567+
568+ $ make test
569+
570+ To run tests in 4 parallel processes:
571+
572+ $ make test ARGS=-c4
573+
574+
575+5. Debug the instance
576+::
577+ By default, the service is running on port 8080 under upstart, exactly
578+ like production. If you wish to use a debugger, you need to use the
579+ following command. This will stop the upstart-controlled gunicorn and
580+ run it in the foreground, so you can attach to stdin/stdout:
581+
582+ $ make run
583+
584+
585+6. (Optional) Setup reCaptcha
586+::
587+
588+ Captcha is disabled by default. If you wish to enable it, you can do:
589+
590+ juju set sso-app captcha_public_key=<your public key>
591+ juju set sso-app captcha_private_key=<your private key>
592+
593+ You may also need to enable the gargoyle flag for captcha
594+
595+
596+7. (Optional) Accessing the mail server
597+::
598+ The development setup includes an email server (lp:localmail), which
599+ supports delivery via smtp on port 2025, and access via IMAP on port 2143,
600+ or a simple web interface on port 8880.
601+
602+ Additionally, the logs are available in logs/localmail.log, and an mbox file
603+ is at logs/localmail.mbox, which you can read with mutt or another mail client.
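+
+ For example, to peek at recent deliveries from the host (paths as above;
+ mutt is only needed for the mbox option):
+
+ tail -n 20 logs/localmail.log
+ mutt -f logs/localmail.mbox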
604+
605+
606+8. (Optional) Create your own certificate and private key for SAML 2.0
607+::
608+ Required if you want to test interaction with a real Service Provider.
609+
610+ See instructions in the django-saml2-idp upstream project:
611+ http://code.google.com/p/django-saml2-idp/source/browse/trunk/idptest/keys/mk_keys.sh
612+
613+ Once you have the files (certificate.pem and private-key.pem), update the
614+ saml_remotes config on the charm to point to them.
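+
+ For example, a throwaway self-signed pair can be generated with openssl
+ (illustrative subject and validity):
+
+ openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
+     -subj "/CN=localhost" -keyout private-key.pem -out certificate.pem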
615+
616+
617+9. (Optional) Running acceptance tests
618+::
619+
620+ There is a suite of Selenium tests in ./acceptance. To run those against
621+ your container:
622+
623+ make acceptance-dev
624+
625+ For more info:
626+
627+ make acceptance-dev ARGS=--help
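+
+ For example, to run a single test case by regex via the -t option of
+ scripts/acceptance-dev-juju.sh (test name here is hypothetical):
+
+ make acceptance-dev ARGS="-t test_login"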
628+
629+10. (Optional) CSS Hacking
630+::
631+
632+ First install Node.js and npm (https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager), then:
633+
634+ npm install
635+ npm install -g gulp
636+ gulp
637+
638+ This will start a watcher to lint, minify and concatenate CSS.
639+ CSS source can be found in src/identityprovider/static/src/css and is built
640+ to src/identityprovider/static/css/ (see ./gulpfile.js).
641+
642+
643+11. (Optional) Build deployment tarball
644+::
645+ To build a tarball that contains all the code, dependencies and the
646+ virtualenv:
647+
648+ make fetch-sourcedeps
649+ make build-tarball
650+
651+ Note: these are two separate steps because in Mojo specs we need to keep
652+ the dependency-collection step separate from the build step, as the build
653+ step runs in an lxc with no internet access at all.
654+
655+
656+
657+BAZAAR
658+------
659+You can install bzr like this:
660+
661+$ sudo apt-get install bzr
662+$ bzr whoami "Your Name <youremail@example.com>"
663+
664+You will also need to set up SSH keys. If you already have them, you will need
665+to copy them to the machine you wish to use. (Otherwise, Launchpad won't talk
666+to you.) Alternatively, you can register additional keys at:
667+
668+https://launchpad.net/~YOURUSERNAME/+editsshkeys
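+
+If you need to generate a key pair first, a standard invocation is (add the
+resulting public key at the URL above):
+
+$ ssh-keygen -t rsa -b 4096
+$ cat ~/.ssh/id_rsa.pub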
669+
670+Background Reading
671+------------------
672+For those new to this project, this may help bring you up to speed quickly.
673+The Identity Provider uses some conventions that you may not have seen before.
674+These include:
675+
676+- virtualenv
677+- preflight
678+
679+Enjoy!
680+======
681+
682+We hope you enjoy using this software.
683
684=== added file 'django_project/deploy.yaml'
685--- django_project/deploy.yaml 1970-01-01 00:00:00 +0000
686+++ django_project/deploy.yaml 2016-03-04 19:06:06 +0000
687@@ -0,0 +1,25 @@
688+sso-app-dev:
689+ inherits:
690+ - sso-dev-minimal
691+ services:
692+ sso-app:
693+ to: [0]
694+ options:
695+ build_label: devel
696+ deployment: devel
697+ secret_key: development
698+ email_hostport: localhost:2025
699+ django_settings_module: django_project.settings_devel_juju
700+ # use unix socket
701+ db_host: ""
702+ db_port: ""
703+ # use db superuser for dev
704+ db_user: ssoadmin
705+ db_password: ssoadmin
706+ sso-postgresql:
707+ to: [0]
708+ options:
709+ # enable access via local socket
710+ extra_pg_auth: local all sso md5,local all ssoadmin md5
711+ sso-memcached:
712+ to: [0]
713
714=== modified file 'django_project/paths.py'
715--- django_project/paths.py 2015-10-09 16:03:29 +0000
716+++ django_project/paths.py 2016-03-04 19:06:06 +0000
717@@ -29,7 +29,9 @@
718
719
720 def setup_paths():
721- sys.path = list(get_paths(PATHS)) + sys.path + APPEND_PATHS
722+ sys.path = list(get_paths(PATHS)) + sys.path
723+ if 'I_CAN_HAZ_JUJU' not in os.environ:
724+ sys.path += APPEND_PATHS
725 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')
726
727
728
729=== added symlink 'django_project/settings_acceptance_juju.py'
730=== target is u'../branches/config/settings_acceptance_juju.py'
731=== modified file 'django_project/settings_base.py'
732--- django_project/settings_base.py 2016-01-27 12:00:39 +0000
733+++ django_project/settings_base.py 2016-03-04 19:06:06 +0000
734@@ -503,7 +503,7 @@
735 SESSION_SAVE_EVERY_REQUEST = False
736 SESSION_SERIALIZER = 'django.contrib.sessions.serializers.PickleSerializer'
737 SETTINGS_ENCODING = 'utf-8'
738-SETTINGS_MODULE = 'django_project.settings'
739+SETTINGS_MODULE = 'settings'
740 SHORT_DATETIME_FORMAT = 'm/d/Y P'
741 SHORT_DATE_FORMAT = 'm/d/Y'
742 SHOW_USER_NUM_AUTHLOGS = 50
743
744=== added file 'django_project/settings_devel_juju.py'
745--- django_project/settings_devel_juju.py 1970-01-01 00:00:00 +0000
746+++ django_project/settings_devel_juju.py 2016-03-04 19:06:06 +0000
747@@ -0,0 +1,54 @@
748+import os
749+
750+os.environ.setdefault('SSO_ROOT_URL', 'http://0.0.0.0:8080/')
751+from settings import * # noqa
752+
753+ALLOWED_HOSTS = ['*']
754+CSRF_COOKIE_SECURE = False
755+DEBUG = True
756+GARGOYLE_SWITCH_DEFAULTS = {
757+ 'TWOFACTOR': {'is_active': True},
758+ 'CAN_VIEW_SUPPORT_PHONE': {'is_active': True},
759+ 'CAPTCHA': {'is_active': False},
760+ 'PREFLIGHT': {'is_active': True},
761+ 'LOGIN_BY_TOKEN': {'is_active': True},
762+}
763+PASSWORD_HASHERS = [
764+ 'django.contrib.auth.hashers.SHA1PasswordHasher',
765+ 'identityprovider.auth.BCrypt13PasswordHasher',
766+ 'django.contrib.auth.hashers.PBKDF2PasswordHasher',
767+]
768+SESSION_COOKIE_SECURE = False
769+STATSD_CLIENT = 'django_statsd.clients.null'
770+TEMPLATE_DEBUG = True
771+TESTING = True
772+TWOFACTOR_MANDATORY_TEAMS = ['ubuntuone-hackers']
773+# Load a minimal blacklist in development so as to not need too much
774+# time and memory and not break any tests
775+PASSWORD_BLACKLIST_FILE = os.path.join(BASE_DIR, "blacklist-devel.txt")
776+
777+# this config is only needed for running the sst acceptance tests
778+# the server doesn't need this config
779+EMAIL_ADDRESS_PATTERN = 'testemail+%s@example.com'
780+SSO_TEST_ACCOUNT_EMAIL = 'testemail@example.com'
781+SSO_TEST_ACCOUNT_FULL_NAME = 'Test Account'
782+SSO_TEST_ACCOUNT_PASSWORD = 'password'
783+TEST_ACCOUNT_EMAIL = 'testemail+something@example.com'
784+TEST_ACCOUNT_PASSWORD = 'password'
785+TEST_DISCOVER_ROOT = SRC_DIR
786+TEST_DISCOVER_TOP_LEVEL = SRC_DIR
787+TEST_PROFILE = 'test.prof'
788+TEST_RUNNER = 'testing.runner.TestRunner'
789+
790+# localmail imap server
791+IMAP_PORT = 2143
792+IMAP_USERNAME = 'DOESNOTMATTER'
793+IMAP_PASSWORD = 'DOESNOTMATTER'
794+IMAP_SERVER = 'localhost'
795+IMAP_USE_SSL = False
796+
797+
798+try:
799+ from django_project.settings_local import * # noqa
800+except ImportError:
801+ pass
802
803=== added file 'django_project/ssh_config'
804--- django_project/ssh_config 1970-01-01 00:00:00 +0000
805+++ django_project/ssh_config 2016-03-04 19:06:06 +0000
806@@ -0,0 +1,8 @@
807+# disable all key checking and warnings
808+CheckHostIP no
809+UserKnownHostsFile /dev/null
810+StrictHostKeyChecking no
811+LogLevel ERROR
812+# pass through JUJU_ENV
813+SendEnv JUJU_ENV
814+SendEnv I_CAN_HAZ_JUJU
815
816=== modified file 'requirements_devel.txt'
817--- requirements_devel.txt 2016-01-20 18:34:00 +0000
818+++ requirements_devel.txt 2016-03-04 19:06:06 +0000
819@@ -7,7 +7,7 @@
820 fixtures==0.3.12
821 flake8==2.4.0
822 junitxml==0.7
823-localmail==0.3
824+localmail==0.4.1
825 logilab-astng==0.24.3
826 logilab-common==0.59.1
827 mechanize==0.2.5
828
829=== added file 'run-tests-juju'
830--- run-tests-juju 1970-01-01 00:00:00 +0000
831+++ run-tests-juju 2016-03-04 19:06:06 +0000
832@@ -0,0 +1,52 @@
833+#! /bin/bash
834+# How the tests are run in Jenkins by Tarmac
835+#
836+# ./run-tests-juju -- run unit/acceptance tests for an environment.
837+#
838+# Usage: ./run-tests-juju [dev|staging|production] [yes|no]
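+#
+# The optional second argument controls bootstrapping (default "yes").
+# Examples:
+#   ./run-tests-juju            # unit tests; runs lxc-setup and bootstrap first
+#   ./run-tests-juju dev no     # acceptance tests against dev, skip bootstrap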
839+set -e
840+
841+TARGET=$1
842+BOOTSTRAP=${2:-yes}
843+
844+if [ "$TARGET" = "production" ]; then
845+ SST_BASE_URL="https://login.ubuntu.com"
846+elif [ "$TARGET" = "staging" ]; then
847+ SST_BASE_URL="https://login.staging.ubuntu.com"
848+fi
849+
850+# Some Jenkins jobs copy workspace.tar.gz from successful build.
851+if [ -r "workspace.tar.gz" ]; then
852+ tar zxf workspace.tar.gz -C .
853+fi
854+
855+# clean old results
856+rm -rf results/*
857+
858+# make sure that dependencies are up to date
859+if [ "$BOOTSTRAP" = "yes" ]; then
860+ echo "Setting up lxc..."
861+ make lxc-setup
862+ echo "Bootstrapping..."
863+ make clean && make bootstrap
864+fi
865+
866+# Before setting up the DB, ensure the developer generated all needed migrations.
867+# Will abort if there are pending migrations.
868+echo "Checking for missing Django migrations. If the output is not 'No changes detected', the run will abort."
869+make manage ARGS=makemigrations | tee /tmp/sso.log && grep "No changes detected" /tmp/sso.log > /dev/null
870+
871+# run tests
872+if [[ -z ${TARGET} ]]; then
873+ echo "Updating translation messages"
874+ make makemessages
875+ echo "Running canonical-identity-provider tests in tarmac"
876+ make test ARGS="--noinput --failfast"
877+else
878+ # run acceptance tests if TARGET given
879+ if [ ${TARGET} = "dev" ]; then
880+ make acceptance-dev
881+ else
882+ make run-acceptance SST_BASE_URL="$SST_BASE_URL" DJANGO_SETTINGS_MODULE=django_project.settings_acceptance_juju
883+ fi
884+fi
885
886=== added file 'scripts/acceptance-dev-juju.sh'
887--- scripts/acceptance-dev-juju.sh 1970-01-01 00:00:00 +0000
888+++ scripts/acceptance-dev-juju.sh 2016-03-04 19:06:06 +0000
889@@ -0,0 +1,66 @@
890+#!/bin/bash
891+
892+set -ue
893+
894+function usage {
895+ cat << EOF
896+Usage: $0 [-t TESTCASE] [-h|-f] [-p PORT]
897+Run SSO acceptance tests on a random port.
898+ -t [TESTCASE] run using TESTCASE regex. Will not reset db
899+ -h run headless (default)
900+ -f run headful
901+ -p [PORT] use a specific port (e.g. -p 8001)
902+ -c force a clean db (if using -t option)
903+ -1 stop on the first failure (AKA "fail-fast")
904+EOF
905+}
906+
907+# default port (override with -p)
908+PORT=8080
909+TESTCASE=
910+HEADLESS="-x"
911+CLEAN="false"
912+FAILFAST=
913+
914+while getopts ":t:hfc1p:" opt; do
915+ case $opt in
916+ t) TESTCASE=$OPTARG ;;
917+ h) HEADLESS="-x" ;;
918+ f) HEADLESS="" ;;
919+ p) PORT=$OPTARG ;;
920+ c) CLEAN="true" ;;
921+ 1) FAILFAST="--failfast" ;;
922+ \?)
923+ echo "Invalid option: -$OPTARG" >&2
924+ usage
925+ exit 1
926+ ;;
927+ :)
928+ echo "Option -$OPTARG requires an argument." >&2
929+ usage
930+ exit 1
931+ ;;
932+ esac
933+done
934+
935+export PORT=$PORT
936+export SSO_HOSTNAME="0.0.0.0:$PORT"
937+export SSO_ROOT_URL="http://$SSO_HOSTNAME"
938+export SST_BASE_URL="http://$SSO_HOSTNAME"
939+
940+# only reset db if we're doing a full run or explicitly told to
941+if [ -z "$TESTCASE" ] || $CLEAN
942+then
943+ echo "Setting up acceptance database"
944+ make acceptance-db
945+ make collectstatic
946+fi
947+
948+SST_ARGS="-s -r xml -q -c 1 $FAILFAST $HEADLESS"
949+
950+if [ -n "$TESTCASE" ]; then
951+ SST_ARGS="$SST_ARGS \"$TESTCASE\""
952+fi
953+echo $SST_ARGS
954+
955+make run-acceptance ARGS="$SST_ARGS" DJANGO_SETTINGS_MODULE=django_project.settings_devel_juju
956
957=== added file 'scripts/setup-localmail.sh'
958--- scripts/setup-localmail.sh 1970-01-01 00:00:00 +0000
959+++ scripts/setup-localmail.sh 2016-03-04 19:06:06 +0000
960@@ -0,0 +1,47 @@
961+#!/bin/bash
962+set -eu
963+
964+sudo service localmail stop || true
965+
966+LOGS=$PWD/logs
967+LOCALMAIL=/srv/localmail
968+if [ ! -d $LOCALMAIL ]
969+then
970+ sudo mkdir -p $LOCALMAIL
971+ sudo chown $USER:$USER $LOCALMAIL
972+ virtualenv $LOCALMAIL
973+ $LOCALMAIL/bin/pip install -U pip
974+
975+fi
976+$LOCALMAIL/bin/pip install -U localmail
977+upstart=/tmp/localmail.conf
978+cron=/tmp/localmail.sh
979+
980+cat > $upstart <<EOF
981+description "localmail development smtp/imap server"
982+
983+start on (local-filesystems and net-device-up IFACE=eth0)
984+stop on runlevel [!12345]
985+
986+# If the process quits unexpectedly, trigger a respawn
987+respawn
988+respawn limit 10 5
989+
990+setuid $USER
991+setgid $USER
992+chdir $PWD
993+
994+exec $LOCALMAIL/bin/twistd -n --pidfile= --logfile $LOGS/localmail.log localmail --smtp 2025 --imap 2143 --http 8880 --file $LOGS/localmail.mbox
995+EOF
996+
997+cat > $cron <<EOF
998+#!/bin/bash
999+service localmail stop
1000+rm -f $LOGS/localmail.mbox $LOGS/localmail.log
1001+service localmail start
1002+EOF
1003+
1004+sudo mv $upstart /etc/init/localmail.conf
1005+sudo mv $cron /etc/cron.daily/localmail
1006+sudo chmod +x /etc/cron.daily/localmail
1007+sudo service localmail start
1008
1009=== modified file 'src/identityprovider/tests/test_command_cleanup.py'
1010--- src/identityprovider/tests/test_command_cleanup.py 2015-12-09 20:01:10 +0000
1011+++ src/identityprovider/tests/test_command_cleanup.py 2016-03-04 19:06:06 +0000
1012@@ -47,8 +47,8 @@
1013
1014 def make_test_accounts(self, count=0, date_created=None):
1015 for i in xrange(count):
1016- email = self.factory.make_email_address(prefix='isdtest+',
1017- domain='canonical.com')
1018+ email = self.factory.make_email_address(prefix='testemail+',
1019+ domain='example.com')
1020
1021 account = self.factory.make_account(email=email)
1022 if date_created is not None:
1023@@ -134,7 +134,7 @@
1024 def assert_testdata(self, accounts=0, emails=0, passwords=0):
1025 tomorrow = now().date() + timedelta(days=1)
1026 test_emails = EmailAddress.objects.filter(
1027- email__iregex=r'^isdtest\+[^@]+@canonical\.com$',
1028+ email__iregex=r'^testemail\+[^@]+@example\.com$',
1029 date_created__lt=tomorrow)
1030 test_accounts = Account.objects.filter(
1031 displayname__startswith='Test Account')
1032@@ -209,7 +209,7 @@
1033 def test_cleanup_orphaned_accounts(self):
1034 self.make_test_accounts(count=10)
1035 emails = EmailAddress.objects.filter(
1036- email__iregex=r'^isdtest\+[^@]+@canonical\.com$')
1037+ email__iregex=r'^testemail\+[^@]+@example\.com$')
1038 email_ids = emails.values_list('pk')[:5]
1039 EmailAddress.objects.filter(pk__in=email_ids).delete()
1040
1041
1042=== modified file 'src/webui/tests/test_views_ui.py'
1043--- src/webui/tests/test_views_ui.py 2016-01-13 19:11:31 +0000
1044+++ src/webui/tests/test_views_ui.py 2016-03-04 19:06:06 +0000
1045@@ -606,7 +606,6 @@
1046
1047 def assert_logged_out_in_sso(self, response):
1048 self.assertTemplateUsed(response, 'registration/logout.html')
1049- self.assertContains(response, 'You have been logged out')
1050 self.assertIsInstance(response.context['user'], AnonymousUser)
1051
1052 def assert_redirect_as_per_request(self, response, redirected_to):