Merge lp:~eday/burrow/example-docs into lp:burrow

Proposed by Eric Day
Status: Merged
Approved by: Eric Day
Approved revision: 16
Merged at revision: 16
Proposed branch: lp:~eday/burrow/example-docs
Merge into: lp:burrow
Diff against target: 236 lines (+215/-0)
2 files modified
doc/source/examples.rst (+214/-0)
doc/source/index.rst (+1/-0)
To merge this branch: bzr merge lp:~eday/burrow/example-docs
Reviewer Review Type Date Requested Status
Burrow Core Team Pending
Review via email: mp+61673@code.launchpad.net

Description of the change

Moved example docs from wiki into sphinx.


Preview Diff

1=== added file 'doc/source/example_deployment_dev.png'
2Binary files doc/source/example_deployment_dev.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_dev.png 2011-05-19 22:35:51 +0000 differ
3=== added file 'doc/source/example_deployment_ha.png'
4Binary files doc/source/example_deployment_ha.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_ha.png 2011-05-19 22:35:51 +0000 differ
5=== added file 'doc/source/example_deployment_pub.png'
6Binary files doc/source/example_deployment_pub.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_pub.png 2011-05-19 22:35:51 +0000 differ
7=== added file 'doc/source/examples.rst'
8--- doc/source/examples.rst 1970-01-01 00:00:00 +0000
9+++ doc/source/examples.rst 2011-05-19 22:35:51 +0000
10@@ -0,0 +1,214 @@
11+..
12+ Copyright (C) 2011 OpenStack LLC.
13+
14+ Licensed under the Apache License, Version 2.0 (the "License");
15+ you may not use this file except in compliance with the License.
16+ You may obtain a copy of the License at
17+
18+ http://www.apache.org/licenses/LICENSE-2.0
19+
20+ Unless required by applicable law or agreed to in writing, software
21+ distributed under the License is distributed on an "AS IS" BASIS,
22+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
23+ See the License for the specific language governing permissions and
24+ limitations under the License.
25+
26+Examples
27+********
28+
29+Client Usage Examples
30+=====================
31+
32+These examples are demonstrated using the REST API, but they could
33+be performed with any other supported client protocol or API.
34+
35+Basic Asynchronous Queue
36+------------------------
37+
38+Multiple workers long-poll for messages until a client inserts
39+one. Both workers tell the server to hide the message once it is read
40+so only one worker will be able to see the message. The POST request
41+from a worker is an atomic get/set operation.
42+
43+Worker1: long-polling worker, request blocks until a message is ready
44+
45+``POST /account/queue?limit=1&wait=60&hide=60&detail=all``
46+
47+Worker2: long-polling worker, request blocks until a message is ready
48+
49+``POST /account/queue?limit=1&wait=60&hide=60&detail=all``
50+
51+Client: insert message
52+
53+``PUT /account/queue/id``
54+
55+Worker1: Return from the blocking POST request with the new message
56+and process it
57+
58+Worker1: delete message once it is done processing
59+
60+``DELETE /account/queue/id``
61+
62+If Worker1 crashes or takes longer than 60 seconds to process the
63+message, the hide time will expire on the server and the message will
64+reappear. At this point another worker waiting for messages (such as
65+Worker2) will return with the message. If Worker1 doesn't crash but
66+needs more time, it can update the message with a longer hide time
67+to ensure another worker doesn't get a chance to process it.
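The request above can be composed programmatically. A minimal Python sketch (the `poll_url` helper and its names are illustrative, not part of Burrow):

```python
from urllib.parse import urlencode

def poll_url(account, queue, limit=1, wait=60, hide=60):
    """Build the worker's long-poll POST path: an atomic get/set that
    returns up to `limit` messages and hides each for `hide` seconds."""
    params = urlencode({"limit": limit, "wait": wait,
                        "hide": hide, "detail": "all"})
    return f"/{account}/{queue}?{params}"

# Both workers issue the same request; only one will see a given message.
print(poll_url("account", "queue"))
# /account/queue?limit=1&wait=60&hide=60&detail=all
```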
68+
69+
70+Fast Asynchronous Queue
71+-----------------------
72+
73+In this example message loss is acceptable and workers have the ability
74+to do batch-processing (process multiple messages at once). Workers
75+grab up to 100 messages at a time and tell the server to remove the
76+messages as they are returned. If a worker crashes after the server
77+returns them but before they are successfully processed, they
78+are lost. The DELETE request from a worker is an atomic get/delete
79+operation.
80+
81+Worker1: long-polling worker, request blocks until a message is ready
82+
83+``DELETE /account/queue?limit=100&wait=60&detail=all``
84+
85+Worker2: long-polling worker, request blocks until a message is ready
86+
87+``DELETE /account/queue?limit=100&wait=60&detail=all``
88+
89+Client: insert messages
90+
91+``PUT /account/queue/id1``
92+``PUT /account/queue/id2``
93+``PUT /account/queue/id3``
94+...
95+
96+Worker1: Return from the blocking DELETE request with all new messages
97+and process them
98+
99+Worker2: Return from the blocking DELETE request as more messages
100+are inserted that were not returned to Worker1
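The atomic get/delete semantics can be modeled with a small in-memory sketch (illustrative only; the real server performs this per request against its own store):

```python
def batch_take(queue, limit=100):
    """Model of the server's atomic get/delete: return up to `limit`
    messages and remove them from the queue in one step, so a crashed
    worker loses at most the batch it was handed."""
    taken, queue[:] = queue[:limit], queue[limit:]
    return taken

messages = [f"id{i}" for i in range(250)]
first = batch_take(messages)   # Worker1's DELETE returns 100 messages
second = batch_take(messages)  # Worker2's DELETE returns the next 100
```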
101+
102+
103+Multi-cast Event Notifications
104+------------------------------
105+
106+This type of message access allows multiple workers to read the same
107+message since the message is not hidden or removed by any worker. The
108+server will automatically remove the message once the message TTL
109+has expired.
110+
111+Worker1:
112+
113+``GET /account/queue?wait=60``
114+
115+Worker2:
116+
117+``GET /account/queue?wait=60``
118+
119+Client:
120+
121+``PUT /account/queue/id1?ttl=60``
122+
123+Worker1: Return from blocking GET request with message id1
124+
125+Worker2: Return from blocking GET request with message id1
126+
127+Worker1:
128+
129+``GET /account/queue?wait=60&marker=id1``
130+
131+Worker2:
132+
133+``GET /account/queue?wait=60&marker=id1``
134+
135+The ``marker`` parameter is used to let the server know the last ID
136+that was seen by the worker. Only messages with IDs inserted after
137+this marker will be returned for that request.
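A worker's marker bookkeeping might look like the following sketch (the `multicast_url` helper is hypothetical):

```python
def multicast_url(account, queue, wait=60, marker=None):
    """Build the GET path for a multi-cast read; `marker` is the last
    message ID this worker has already seen."""
    url = f"/{account}/{queue}?wait={wait}"
    return url if marker is None else f"{url}&marker={marker}"

# The first poll has no marker; later polls resume after the last seen
# ID. Each worker keeps its own marker, so every worker sees every message.
```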
138+
139+
140+Deployment Examples
141+===================
142+
143+Configuration of the queue service will be flexible and this section
144+describes three sample deployments. Each of the queue servers (green
145+boxes) can optionally have persistence configured for them. Note that
146+queue servers should not share the same backend storage unless the
147+backend storage is suitable for the HA needs of the deployment. The
148+queue servers will not know of or coordinate with one another in
149+any way; the clients and workers (or proxy, in the last example) are
150+responsible for balancing the load between available queue servers
151+and failing over to another server if one goes down.
152+
153+Developer
154+---------
155+
156+In this deployment, a single queue server is run, and all clients
157+and workers connect to it.
158+
159+.. image:: example_deployment_dev.png
160+
161+
162+Simple HA
163+---------
164+
165+In this deployment, multiple queue servers are run and all clients
166+and workers are given the list of IP addresses for the available
167+queue servers. Clients should either connect to the first available
168+server or, if they want to distribute load amongst all three, use
169+an algorithm that depends on the message. If all messages are unique,
170+a simple round-robin distribution may be sufficient. For messages with
171+client-side IDs that could possibly have duplicates, the client should
172+use a hashing algorithm on the ID to ensure the same ID hits the same
173+server. This allows the queue servers to coalesce all messages with
174+the same ID. Should a queue server go down, the clients can simply
175+re-balance to the new server set. Workers should poll or remain
176+connected to all queue servers to ensure they can pull a message no
177+matter where it is inserted.
178+
179+.. image:: example_deployment_ha.png
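The client-side ID hashing could be as simple as this sketch (MD5 is an arbitrary choice here, not something Burrow mandates):

```python
import hashlib

def pick_server(message_id, servers):
    """Map a client-side message ID to one server deterministically,
    so duplicate IDs always land where they can be coalesced."""
    digest = int(hashlib.md5(message_id.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]

servers = ["qs1", "qs2", "qs3"]
# Re-balancing after a failure just means hashing against the new list.
```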
180+
181+
182+Public Cloud Service
183+--------------------
184+
185+In this deployment, proxy servers are placed in front of each cluster
186+of queue servers. The proxy servers manage spreading the load across
187+the queue cluster instead of relying on the clients and workers to
188+manage multiple connections. This is only suitable when your proxy
189+servers are configured redundantly (such as behind an HA load
190+balancer). For a given account ID, all proxy servers in a zone should
191+hash to the same subset of queue servers (with a default max of three),
192+and use that set to spread the load. This is similar to how Swift
193+spreads objects based on the placement in the hash ring. Once the
194+account ID determines the set of queue servers to use, the queue
195+name and message ID (other components of the unique message ID) will
196+determine which queue server in the set to use. The algorithm used in
197+the proxy should be modular, so you can easily alter how many queue
198+servers to use for an account, and how to distribute to them within
199+that set.
200+
201+.. image:: example_deployment_pub.png
202+
203+For example, if a client creates a message with ID
204+/account_1/queue_A/msg_123, the proxy server will parse out the
205+"account_1" component and use that in the hashing algorithm to get a
206+set of queue servers (let's say it returns the set qs1, qs4, qs5). With
207+this set, the proxy then hashes the rest of the ID "queue_A/msg_123" to
208+determine which queue server to proxy to (let's say it maps to qs4). If
209+a message comes in with the exact same ID, the same algorithm is used
210+to proxy it to the same queue server, possibly allowing the queue
211+server to coalesce the message so it is processed by a worker only
212+once (eliminating the thundering herd problem). If a queue server in
213+the returned set should fail, the proxy can either run with the two
214+remaining servers or choose a third until the original node comes back up.
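The two-level lookup in this example can be sketched as follows (the hash and subset-selection scheme here is illustrative; the actual proxy algorithm is meant to be modular):

```python
import hashlib

def _bucket(text):
    # Stable hash so every proxy in the zone computes the same answer.
    return int(hashlib.md5(text.encode()).hexdigest(), 16)

def route(message_id, servers, replicas=3):
    """Pick the account's server subset, then one server within it."""
    account, rest = message_id.strip("/").split("/", 1)
    start = _bucket(account) % len(servers)
    subset = [servers[(start + i) % len(servers)] for i in range(replicas)]
    return subset[_bucket(rest) % replicas], subset

servers = ["qs1", "qs2", "qs3", "qs4", "qs5"]
target, subset = route("/account_1/queue_A/msg_123", servers)
# The same ID always routes to the same server; a different message for
# the same account stays within the same three-server subset.
```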
215+
216+When the proxy is handling worker requests it will use the same
217+hashing algorithms. When a worker GETs a queue name to read messages,
218+the account portion is parsed and a connection is made to all queue
219+servers. The proxy then aggregates messages from all queue servers
220+handling that account into one view for the worker to consume. The
221+proxy and queue servers may need to use a more efficient multiplexing
222+protocol that can keep state for multiple accounts and requests
223+rather than simple REST-based calls to keep the number of connections
224+reasonable.
225
226=== modified file 'doc/source/index.rst'
227--- doc/source/index.rst 2011-04-20 18:21:51 +0000
228+++ doc/source/index.rst 2011-05-19 22:35:51 +0000
229@@ -50,6 +50,7 @@
230 :maxdepth: 1
231
232 protocols
233+ examples
234
235 Source Code
236 ===========
