Merge lp:~caio1982/capomastro/releasedocs into lp:capomastro
- releasedocs
- Merge into trunk
Status: | Merged |
---|---|
Approved by: | Daniel Manrique |
Approved revision: | 203 |
Merged at revision: | 196 |
Proposed branch: | lp:~caio1982/capomastro/releasedocs |
Merge into: | lp:capomastro |
Diff against target: |
463 lines (+292/-82) 6 files modified
docs/source/conf.py (+3/-3) docs/source/deployment.rst (+149/-0) docs/source/index.rst (+3/-3) docs/source/releasing.rst (+110/-0) docs/source/setup.rst (+26/-75) tox.ini (+1/-1) |
To merge this branch: | bzr merge lp:~caio1982/capomastro/releasedocs |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Daniel Manrique (community) | Approve | ||
Review via email: mp+260873@code.launchpad.net |
Commit message
Document the release process of Capomastro, also add deployment instructions.
Description of the change
This is for story 1339 -- as Capomastro developer I want to create a Capomastro release process so have a list of steps to follow for a release instead of winging it every time.
I've published on https:/
Since I'm used to the charm, the packaging and spec runs I'm biased to test it but I gave it all a try and it seemed okay.
Also, I'm afraid it's all too verbose, so please let me know what can be simplified and what needs fixing!
Caio Begotti (caio1982) wrote : | # |
Daniel Manrique (roadmr) wrote : | # |
I like this. I made two comments below. Since this is documentation I think we should fix those, but they're minor so it should be easy.
- 202. By Caio Begotti
-
remove whitespaces found in blank lines during review, oops
- 203. By Caio Begotti
-
remove alien word glued to another, which probably means there is a "them" missing somewhere else now? dunno how it got here anyway
Caio Begotti (caio1982) wrote : | # |
Thanks, Daniel. Just fixed those.
Daniel Manrique (roadmr) wrote : | # |
+1 then, thanks!
Preview Diff
1 | === modified file 'docs/source/conf.py' |
2 | --- docs/source/conf.py 2015-03-17 15:26:28 +0000 |
3 | +++ docs/source/conf.py 2015-06-02 20:49:01 +0000 |
4 | @@ -49,16 +49,16 @@ |
5 | |
6 | # General information about the project. |
7 | project = u'Capomastro' |
8 | -copyright = u'2014, Canonical' |
9 | +copyright = u'2015, Canonical' |
10 | |
11 | # The version info for the project you're documenting, acts as replacement for |
12 | # |version| and |release|, also used in various other places throughout the |
13 | # built documents. |
14 | # |
15 | # The short X.Y version. |
16 | -version = '20141030' |
17 | +version = '20150601' |
18 | # The full version, including alpha/beta/rc tags. |
19 | -release = '20141030' |
20 | +release = '20150601' |
21 | |
22 | # The language for content autogenerated by Sphinx. Refer to documentation |
23 | # for a list of supported languages. |
24 | |
25 | === added file 'docs/source/deployment.rst' |
26 | --- docs/source/deployment.rst 1970-01-01 00:00:00 +0000 |
27 | +++ docs/source/deployment.rst 2015-06-02 20:49:01 +0000 |
28 | @@ -0,0 +1,149 @@ |
29 | +========== |
30 | +Deployment |
31 | +========== |
32 | + |
33 | +.. Most of these notes came from https://docs.google.com/document/d/14elkWYOLCeKKHaFagr4jzKb9TZYWEEdNJG0oqf2auso |
34 | + |
35 | +Please follow the instructions below to deploy Capomastro, choosing whichever approach best suits your scenario. |
36 | + |
37 | +With packages |
38 | +------------- |
39 | + |
40 | +This is the basis for cloud deployments: the software is pre-packaged, so it is much easier to install. However, ensure your system provides Django 1.6, as this is the version currently supported by Capomastro. |
41 | + |
42 | +Add `ppa:ce-infrastructure/capomastro <https://launchpad.net/~ce-infrastructure/+archive/ubuntu/capomastro>`_ to your system, but note that it's a private PPA so you may need to request access to it. Then run: |
43 | + |
44 | +:: |
45 | + |
46 | + sudo apt-get update |
47 | + sudo apt-get install capomastro |
48 | + |
49 | +The package should install and start up RabbitMQ and Celery automatically for you. |
50 | + |
51 | +`Install and configure a Jenkins server as documented upstream <http://pkg.jenkins-ci.org/debian/>`_. |
52 | + |
53 | +`Install its Notifications plugin <https://wiki.jenkins-ci.org/display/JENKINS/Notification+Plugin>`_ too, as Capomastro needs this to receive build status back. |
54 | + |
55 | +On your local system, edit ``/etc/hosts`` and add an entry called ``capomastro`` pointing to the IP address of your Capomastro server. For instance, if you're doing this on the same system you're running the browser on, add: |
56 | + |
57 | +:: |
58 | + |
59 | + 127.0.0.1 capomastro |
60 | + |
61 | +Access http://capomastro and you should see the login page. The default credentials are ``capomastro:capomastro``. |
62 | + |
63 | +All configuration files of Capomastro are stored via symlinks inside ``/etc/capomastro/`` and you may need to tweak them depending on your setup. |
64 | + |
65 | +With Juju |
66 | +--------- |
67 | + |
68 | +For a proper production deployment of Capomastro, it's best to have the corresponding units deployed separately, as this allows better resilience and scalability. Unfortunately, since Capomastro is a complex stack, deploying all these services and then connecting them appropriately is a somewhat elaborate process. |
69 | + |
70 | +The Capomastro Juju charm lives in `lp:~ce-infrastructure/capomastro/charm <https://launchpad.net/~ce-infrastructure/capomastro/charm>`_ and contains a pretty good ``README`` file which details which additional services to deploy and how to connect them. |
71 | + |
72 | +However, this entire process is already scripted and automated using Mojo, so while manual deployment with Juju is possible we encourage using Mojo instead. |
73 | + |
74 | +Setup |
75 | ++++++ |
76 | + |
77 | +Ensure your ``.juju/environments.yaml`` has a section like the one below, replacing the values with those found in your ``novarc`` (be careful with indentation: use spaces, not tabs). The value of ``YOUR_SSH_PUBLIC_KEY`` should be a key you have in your Launchpad account. |
78 | + |
79 | +:: |
80 | + |
81 | + environments: |
82 | + canonistack: |
83 | + type: openstack |
84 | + auth-url: https://keystone.canonistack.canonical.com:443/v2.0/ |
85 | + default-series: trusty |
86 | + username: $OS_USERNAME |
87 | + tenant-name: $OS_TENANT_NAME |
88 | + password: $OS_PASSWORD |
89 | + region: $OS_REGION_NAME |
90 | + authorized-keys: $YOUR_SSH_PUBLIC_KEY |
91 | + |
92 | + |
93 | +Switch to the ``canonistack`` environment by running ``juju switch canonistack``. Currently, `Canonistack <https://wiki.canonical.com/InformationInfrastructure/IS/CanonicalOpenstack>`_ is the best place to test charms if not locally. |
94 | + |
95 | +Bootstrap the first Juju node with ``juju bootstrap -v`` and wait a few minutes. You'll get the address of your bootstrap node and some info will be printed by Juju. |
96 | + |
97 | +If you have trouble related to network access, or if Juju times out connecting to the instance, `ensure your VPN configuration is correct <https://wiki.canonical.com/InformationInfrastructure/IS/HowTo/CompanyOpenVPN>`_ and try SSHing directly to the instance; if that fails, the VPN is probably not working correctly. |
98 | + |
99 | +Now you can do e.g. ``juju status`` and deploy charms normally. |
100 | + |
101 | +With Mojo |
102 | +--------- |
103 | + |
104 | +Mojo automates the complex process outlined in the Juju charm's ``README`` file. Following it by hand gets complicated quickly, but fortunately all the services comprising the Capomastro stack can be deployed with a single Mojo run. |
105 | + |
106 | +By default it tries to deploy Capomastro on an OpenStack-compliant cloud, and it will create and automatically attach volumes to the units that require them (System Image Server, Jenkins and the DB unit). |
107 | + |
108 | +Setup |
109 | ++++++ |
110 | + |
111 | +Install Mojo from ``ppa:mojo-maintainers/ppa`` and also ``python-novaclient``. |
112 | + |
113 | +Fetch `the current spec code for Capomastro that is kept in a repository owned by IS <https://launchpad.net/~canonical-is/canonical-mojo-specs/mojo-pes-capomastro/>`_. Capomastro's spec code is inside the directory ``pes/mojo-pes-capomastro``. |
114 | + |
115 | +:: |
116 | + |
117 | + bzr branch lp:~canonical-is/canonical-mojo-specs/mojo-pes-capomastro local-canonical-mojo-specs |
118 | + |
119 | +Whether you deploy with Mojo on Canonistack, Staging or Production, you'll need 3 Ceph volumes of at least 30GB each: one will contain the DB, another will be used by the System Image Server, and the last will be Jenkins's storage. They need to be created in advance because we want them to be persistent, so note down the IDs of all three volumes. On Production these volumes are likely to be created by IS, but on Staging creating them is up to you. |
120 | + |
121 | +You'll also need to provide a set of required secrets files for Mojo before proceeding. |
122 | + |
123 | +Please `check the README of the spec <https://bazaar.launchpad.net/~canonical-is/canonical-mojo-specs/mojo-pes-capomastro/view/head%3A/pes/mojo-pes-capomastro/README>`_ as it should explain how to get all these configured properly. In any case, the spec itself will abort and tell you what's wrong if anything is missing, so that you can fix the files before trying again. |
124 | + |
125 | +Capomastro provides a Makefile to ease setting up the project and running the spec, so once you have collected the needed secrets just use ``make`` and its targets. |
126 | + |
127 | +On Canonistack |
128 | +++++++++++++++ |
129 | + |
130 | +Getting a proper Canonistack setup is outside the scope of this document, so please refer to `the official instructions <https://wiki.canonical.com/InformationInfrastructure/IS/CanonicalOpenstack>`_ and come back here once it's up and running. Please also `set up a VPN <https://wiki.canonical.com/InformationInfrastructure/IS/HowTo/CompanyOpenVPN>`_ to make things easier for yourself. |
131 | + |
132 | +Once the secrets files are placed in ``/srv/mojo/LOCAL/capomastro/pes/mojo-pes-capomastro/canonistack/`` you're ready to deploy. Using the Makefile: |
133 | + |
134 | +:: |
135 | + |
136 | + MOJO_STAGE_NAME=canonistack make deploy |
137 | + |
138 | +The ``MOJO_STAGE_NAME`` variable tells the spec to run from the ``canonistack`` directory. It defaults to ``staging``, which is why you need to specify it for deployments on Canonistack. |
139 | + |
140 | +On Canonistack the deploy command will take between 2 and 3 hours, while it takes only 30 minutes on a faster cloud provider. Be patient: if it looks stuck, it usually means it is still working. If something fails, it will exit with suitable error messages. |
141 | + |
142 | +Once everything has been deployed and verified, you should be able to browse to the ``apache2`` unit's IP address and see the Capomastro login screen. |
143 | + |
144 | +On Staging |
145 | +++++++++++ |
146 | + |
147 | +As the user ``stg-pes-capomastro`` on ``wendigo.canonical.com`` (the Staging server), you'll be doing a fresh deployment so you can test new releases; running the full spec is recommended. Simply run the following and the spec's Makefile will do the rest: |
148 | + |
149 | +:: |
150 | + |
151 | + make deploy |
152 | + |
153 | + |
154 | +On Production |
155 | ++++++++++++++ |
156 | + |
157 | +Production deployments are controlled by IS and we can't do them ourselves. However, we must guide them during upgrades so they run the Mojo bits we need for each different upgrade. |
158 | + |
159 | +Usually on Production a Mojo upgrade is what you need, as you don't want to re-deploy the whole spec all the time. In this case, telling IS to run the file ``manifest-upgrade`` is enough, but you can also call it with our Makefile like so: |
160 | + |
161 | +:: |
162 | + |
163 | + make upgrade |
164 | + |
165 | +Clean up |
166 | +-------- |
167 | + |
168 | +You'll probably want to destroy any previous deployment first before running the spec again or deploying existing charms manually. From the Capomastro directory inside the spec, run these to clean it all up: |
169 | + |
170 | +:: |
171 | + |
172 | + make destroy |
173 | + juju destroy-environment --yes <juju environment name> |
174 | + juju bootstrap |
175 | + |
176 | + |
177 | +Before redeploying, depending on what you did previously, you may want to delete the Ceph volumes and create new ones. If so, list the current volumes with ``nova volume-list`` and delete each one with ``nova volume-delete <ID>``. Note that in this case you'll need to update the configuration files with the new volume IDs. |
178 | |
179 | === modified file 'docs/source/index.rst' |
180 | --- docs/source/index.rst 2015-03-17 15:05:05 +0000 |
181 | +++ docs/source/index.rst 2015-06-02 20:49:01 +0000 |
182 | @@ -38,9 +38,9 @@ |
183 | credentials |
184 | stats |
185 | development |
186 | - |
187 | - |
188 | - |
189 | + deployment |
190 | + releasing |
191 | + |
192 | |
193 | Indices and tables |
194 | ================== |
195 | |
196 | === added file 'docs/source/releasing.rst' |
197 | --- docs/source/releasing.rst 1970-01-01 00:00:00 +0000 |
198 | +++ docs/source/releasing.rst 2015-06-02 20:49:01 +0000 |
199 | @@ -0,0 +1,110 @@ |
200 | +.. _deployment: deployment.html |
201 | + |
202 | +========= |
203 | +Releasing |
204 | +========= |
205 | + |
206 | +This document explains how Capomastro's release process works. |
207 | + |
208 | +Capomastro has no specific RC or QA process integrated with new releases. Instead, all new versions are supposed to be tested, before they are made public, through Juju and Mojo deployments in a self-contained environment, using a separate PPA containing the new release package. Capomastro doesn't yet have a broad enough audience to justify RC packages, though this may change in the future. |
209 | + |
210 | +In short, the release process goes like this: |
211 | + |
212 | + #. `Merging of all pending branches`_ |
213 | + #. `Update of the package changelog`_ |
214 | + #. `Package release to a testing PPA`_ |
215 | + #. `Merge service charm and spec changes`_ |
216 | + #. `Staging deployment plus testing`_ |
217 | + #. `Production upgrade`_ |
218 | + #. `Release notifications`_ |
219 | + |
220 | +Merging of all pending branches |
221 | +------------------------------- |
222 | + |
223 | +Make sure all relevant pending branches are merged automatically by Tarmac after being reviewed and approved. Tarmac will run the test suite itself so you don't need to worry about it (most of the time). |
224 | + |
225 | + |
226 | +Update of the package changelog |
227 | +------------------------------- |
228 | + |
229 | +The package changelog is also used as the source changelog for now, so use ``dch -v <version> -D <supported distro name> -u high`` to append a new changelog entry to ``debian/changelog`` directly. |
230 | + |
231 | +Currently the version string is the date of the package release, but we could and should use semver strings in the near future. E.g.: |
232 | + |
233 | +:: |
234 | + |
235 | + dch -v 20150601 -D trusty -u high |
236 | + |
237 | +This will render: |
238 | + |
239 | +:: |
240 | + |
241 | + capomastro (20150601) trusty; urgency=high |
242 | + |
243 | + * Fix #123456 (short summary of the bug) |
244 | + |
245 | + -- Caio Begotti <caio.begotti@canonical.com> Mon, 01 Jun 2015 14:56:05 -0300 |
246 | + |
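Since the version string is just the release date, it can also be generated inline rather than typed by hand. A minimal sketch, assuming GNU ``date`` and the ``trusty`` target from the example above (the ``dch`` call is shown commented because it needs devscripts and an existing ``debian/changelog``):

```shell
# Compute today's date-based version string, e.g. 20150601.
version=$(date +%Y%m%d)
echo "capomastro release version: ${version}"

# Then append the changelog entry with it (requires devscripts):
# dch -v "${version}" -D trusty -u high
```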
247 | +Make sure you include the number and a short summary of every bug closed between the last release and this new one. Hotfixes without bug reports must be included too, as well as other changes needed for the service charm or spec. |
248 | + |
249 | +.. Changelogs are manually crafted this way due to many auto merges we have, we have tried to use the Bazaar plugin Checkbox has to produce changelogs but it'd need too many changes to start off |
250 | + |
251 | +Changes to ``debian/changelog`` for a release should be minimal, and their merge proposals can be self-approved on Launchpad as long as they only contain changes to this one file. |
252 | + |
253 | +Package release to a testing PPA |
254 | +-------------------------------- |
255 | + |
256 | +Build the source package of Capomastro with ``debuild -S``, which will ask you for your GPG data to sign the new files. The resulting file ``../capomastro_<version>_source.changes`` will be used to upload this release to a PPA. |
257 | + |
258 | +You can now upload the source package to a testing PPA with ``dput ppa:<testing PPA name> ../capomastro_<version>_source.changes``. It may take Launchpad several minutes before the binary package is available for installation and testing from the PPA. |
259 | + |
260 | +Merge service charm and spec changes |
261 | +------------------------------------ |
262 | + |
263 | +Some changes to Capomastro's source might require an update of the service charm, e.g. when some default settings from ``/etc/capomastro`` need an update because they're templated in the charm code too. Make sure the charm is always in sync with source changes before a release, otherwise we might break Juju deployments! |
264 | + |
265 | +If the charm gets a critical fix, bump its revno in the spec code inside ``common/collect`` so the next deployments will use that revno as base and people won't deploy broken code. |
266 | + |
267 | +Similarly, keep the Mojo spec up to date with however the source expects to be deployed. If some infrastructure bit of Capomastro needs a change due to a newly released feature, make sure the spec supports it. |
268 | + |
269 | +Unfortunately, these steps need to be done manually, as the people involved are expected to know all these parts of the service. |
270 | + |
271 | +Staging deployment plus testing |
272 | +------------------------------- |
273 | + |
274 | +Do a full and clean Mojo run of the Capomastro spec on Wendigo, the staging environment, before releasing a new package. Ideally you'd also test an upgrade of the spec from the last released package to this new one, but that's not always feasible. |
275 | + |
276 | +Please follow the deployment_ document to see how to do spec runs. |
277 | + |
278 | +After it's finished, test some dependency and project builds. You should also test the archival of some artifacts to ensure nothing broke miserably in this release. The whole testing might take a while... |
279 | + |
280 | +Capomastro lacks a test plan and, we're afraid, some of the tests can't be fully automated, so you'll need to use your best judgment when deciding what to test and what to skip during a release. |
281 | + |
282 | +Production upgrade |
283 | +------------------ |
284 | + |
285 | +Once you know the new release works, you can put it in the project's PPA with ``dput ppa:ce-infrastructure/capomastro ../capomastro_<version>_source.changes``. This PPA is also used by the `CI server <https://ci.admin.canonical.com/>`_, so keep it up to date with new releases. |
286 | + |
287 | +Sometimes you may want to upgrade Production to use the new release. Until Capomastro has a safe way to lock users out and pause its builds before an upgrade, you'll need to disable all Django accounts manually in its admin UI and wait until all builds are done before proceeding with a production upgrade. |
288 | + |
289 | +If that's the case for a particular release, create an RT for IS to action the upgrade by sending an e-mail to pes@rt.canonical.com with a meaningful subject line. Make sure you ask them: |
290 | + |
291 | + #. To sync our PPA to the IS one, ``ppa:~canonical-losas/capomastro``. |
292 | + #. To run the latest spec revno on CI so the new release can be tested once more using a pristine environment controlled by IS. |
293 | + #. To run the relevant Mojo manifest you need, usually ``manifest-upgrade``. |
294 | + |
295 | +Finally, raise the priority of the RT you just created, ping the vanguard in Canonical's #webops IRC channel, and follow up on the upgrade. Ideally, service upgrades should happen after 6PM UTC, as it's a period of low service usage. |
296 | + |
297 | +Release notifications |
298 | +--------------------- |
299 | + |
300 | +Every time there's a production change to Capomastro, please send out notifications to its stakeholders and immediate users so they're aware of it. |
301 | + |
302 | +Try to keep the communication short: |
303 | + |
304 | + * List critical issues that have been solved with the release |
305 | + * List major feature changes only |
306 | + |
307 | +If the release changes don't include any of these, consider it a maintenance release and say so in the communication to the Phablet and PES mailing lists. |
308 | + |
309 | +.. It would be nice to have e-mail templates somewhere for all sort of communication we have, like Hexr seems to have |
310 | \ No newline at end of file |
311 | |
312 | === modified file 'docs/source/setup.rst' |
313 | --- docs/source/setup.rst 2015-05-07 17:19:25 +0000 |
314 | +++ docs/source/setup.rst 2015-06-02 20:49:01 +0000 |
315 | @@ -2,14 +2,9 @@ |
316 | Setup |
317 | ===== |
318 | |
319 | -This file was created based on the old historic README file from Capomastro, if |
320 | -anything seems to be missing here please refer to the whole documentation files |
321 | -in the ``docs/`` directory. |
322 | +This file was created based on Capomastro's old README file; if anything seems to be missing here, please refer to the full documentation in the ``docs/`` directory. |
323 | |
324 | -If you're an old school Django guy, you can get started with local testing and |
325 | -development by following the next sections. You'll need the following services |
326 | -in order to bring the application up anyway, no matter if using the packaged |
327 | -code or not: |
328 | +If you're an old-school Django developer, you can get started with local testing and development by following the next sections. You'll need the following services to bring the application up, whether you're using the packaged code or not: |
329 | |
330 | * PostgreSQL, default database |
331 | * Apache with mod WSGI, to expose the UI |
332 | @@ -19,18 +14,11 @@ |
333 | Environment |
334 | ----------- |
335 | |
336 | -Create your environment either with ``mkvirtualenv <name>``, if you have |
337 | -virtualenvwrapper installed, or simply ``tox -e devenv`` since it does some |
338 | -extra steps for you. It's recommended to install ``tox`` and go with the latter |
339 | -option. |
340 | - |
341 | -If you decide for using ``mkvirtualenv`` you'll also need to manually install |
342 | -the Python dependencies with ``pip install -r requirements.txt`` or even ``pip |
343 | -install -r dev-requirements.txt`` if you're going to run the code tests. |
344 | - |
345 | -Then either activate the virtual environment manually with ``source |
346 | -devenv/bin/activate`` or explicitly call Python from the virtual environment |
347 | -with ``devenv/bin/python``. |
348 | +Create your environment either with ``mkvirtualenv <name>``, if you have virtualenvwrapper installed, or simply ``tox -e devenv`` since it does some extra steps for you. It's recommended to install ``tox`` and go with the latter option. |
349 | + |
350 | +If you opt for ``mkvirtualenv``, you'll also need to manually install the Python dependencies with ``pip install -r requirements.txt``, or ``pip install -r dev-requirements.txt`` if you're going to run the code tests. |
351 | + |
352 | +Then either activate the virtual environment manually with ``source devenv/bin/activate`` or explicitly call Python from the virtual environment with ``devenv/bin/python``. |
353 | |
354 | Database |
355 | -------- |
356 | @@ -42,44 +30,30 @@ |
357 | .. include:: ../../testing/requirements/capomastro.sql |
358 | :literal: |
359 | |
360 | -If needed, fixup the database entry in ``capomastro/db_settings.py``, but most |
361 | -likely the default values will work okay unless you're testing database |
362 | -changes. |
363 | +If needed, fix up the database entry in ``capomastro/db_settings.py``, but most likely the default values will work fine unless you're testing database changes. |
364 | |
365 | -You'll need to prepare the database with ``./manage.py syncdb`` (and you'll be |
366 | -prompted to create a superuser at this time). |
367 | +You'll need to prepare the database with ``./manage.py syncdb`` (and you'll be prompted to create a superuser at this time). |
368 | |
369 | Web server |
370 | ---------- |
371 | + |
372 | +Start the Celery worker that will talk to the Rabbit queues with ``celery -A capomastro worker -l debug``. Capomastro will need it to be running first. |
373 | |
374 | -You can manually start Capomastro from its source directory by executing |
375 | -``./manage.py runserver`` otherwise. However, if you want to set it up with |
376 | -WSGI, take a look at the file ``wsgi.conf``. |
377 | - |
378 | -All the remaining setup steps are through the ``/admin/`` interface of |
379 | -Capomastro, so go to http://localhost:8000/admin/ then login with the username |
380 | -and password created before and continue. |
381 | - |
382 | -If you prefer to test things with Gunicorn you'll need a Celery worker running |
383 | -already, as well as RabbitMQ. |
384 | - |
385 | -Just run ``gunicorn -b 0.0.0.0:8000 capomastro.wsgi:application`` and start the |
386 | -Celery worker that will talk to the Rabbit queue with ``celery -A capomastro |
387 | -worker -l debug``. |
388 | +You can manually start Capomastro from its source directory by executing ``./manage.py runserver``. However, if you want to set it up with WSGI, take a look at the file ``wsgi.conf``. |
389 | + |
390 | +All the remaining setup steps are through the ``/admin/`` interface of Capomastro, so go to http://localhost:8000/admin/ then login with the username and password created before and continue. |
391 | + |
392 | +If you prefer to test things with Gunicorn you'll need a Celery worker running already, as well as RabbitMQ. Just run ``gunicorn -b 0.0.0.0:8000 capomastro.wsgi:application``. |
393 | |
394 | Django Admin Setup |
395 | ------------------ |
396 | |
397 | -All that remains now is the first the dependencies setup, then the projects |
398 | -setup, in that particular order. Creating of projects before its dependencies |
399 | -are set is not currently supported. |
400 | +All that remains now is the dependencies setup first, then the projects setup, in that order. Creating projects before their dependencies is supported but has little purpose besides testing. |
401 | |
402 | -In the ``/admin/`` interface of Capomastro you'll need to create a |
403 | -``jenkins.JenkinsServer`` object, with the correct credentials and |
404 | +In the ``/admin/`` interface of Capomastro you'll need to create a ``jenkins.JenkinsServer`` object, with the correct credentials and |
405 | ``REMOTE_ADDR`` set up correctly, so that it can receive callbacks. |
406 | |
407 | -Create a ``JobType`` and a ``Job`` for that ``JobType``. Then |
408 | -create a ``Dependency`` for that ``Job`` model. |
409 | +Create a ``JobType`` and a ``Job`` for that ``JobType``. Then create a ``Dependency`` for that ``Job`` model. |
410 | |
411 | You should have something like this: |
412 | |
413 | @@ -91,33 +65,10 @@ |
414 | JenkinsServer -> Job; |
415 | } |
416 | |
417 | -You can create several dependencies, with different jobs. Your Jenkins server |
418 | -requires the ``notifications`` plugin installed, so make sure it's |
419 | -available otherwise you won't get status back from the builds. |
420 | - |
421 | -The jobs that are created must have a parameter ``BUILD_ID`` and they must have |
422 | -a notification setup, type ``http/json`` and with a callback address of |
423 | -``http://hostname/jenkins/notifications/``. |
424 | - |
425 | -Now you can create a ``Project``, associated with your dependencies, at |
426 | -``http://localhost:8000/projects/create/``. The option "auto track" means that |
427 | -the project will use the latest version of any dependencies automatically. |
428 | - |
429 | -Now you can build your project. This will create a project build, and trigger |
430 | -the tasks to build your project on the Jenkins server. |
431 | - |
432 | -Packaged setup |
433 | --------------- |
434 | - |
435 | -All the configurations of Capomastro (messaging, DB, the service etc) are |
436 | -stored via symlinks inside of ``/etc/capomastro/`` if you're using the Debian |
437 | -package of the application. This directory also contains the file ``wsgi.conf`` |
438 | -which is symlinked to the Apache configuration so the site is enabled by |
439 | -default. The default Django superuser can be used to access the service for the |
440 | -first time, the username and password should be both ``capomastro`` and it's |
441 | -created by the package automatically. |
442 | - |
443 | -All code is actually validated through the application package, so although you |
444 | -can test and develop things really locally, only code from packages is |
445 | -considered deployable. |
446 | - |
447 | +You can create several dependencies, with different jobs. Your Jenkins server requires the ``Notifications`` plugin installed, so make sure it's available, otherwise you won't get status back from the builds. |
448 | + |
449 | +The jobs that are created must have a ``BUILD_ID`` parameter, and they must have a notification configured, of type ``http/json``, with a callback address of ``http://hostname/jenkins/notifications/``. |
450 | + |
451 | +Now you can create a ``Project``, associated with your dependencies, at ``http://localhost:8000/projects/create/``. The option "auto track" means that the project will use the latest version of any dependencies automatically. |
452 | + |
453 | +Now you can build your project. This will create a project build, and trigger the tasks to build your project on the Jenkins server. |
454 | |
455 | === modified file 'tox.ini' |
456 | --- tox.ini 2015-04-15 18:19:38 +0000 |
457 | +++ tox.ini 2015-06-02 20:49:01 +0000 |
458 | @@ -17,4 +17,4 @@ |
459 | [testenv:devenv] |
460 | envdir = devenv |
461 | basepython = python2.7 |
462 | -commands = |
463 | +deps = -r{toxinidir}/dev-requirements.txt |
I think these changes cover https://docs.google.com/document/d/14elkWYOLCeKKHaFagr4jzKb9TZYWEEdNJG0oqf2auso in pretty much everything, but assume many defaults, so let me know if any part is still not very clear or differs between the doc and the commits :-)