Merge lp:~eday/burrow/example-docs into lp:burrow

Proposed by Eric Day
Status: Merged
Approved by: Eric Day
Approved revision: 16
Merged at revision: 16
Proposed branch: lp:~eday/burrow/example-docs
Merge into: lp:burrow
Diff against target: 236 lines (+215/-0)
2 files modified
doc/source/examples.rst (+214/-0)
doc/source/index.rst (+1/-0)
To merge this branch: bzr merge lp:~eday/burrow/example-docs
Reviewer Review Type Date Requested Status
Burrow Core Team Pending
Review via email: mp+61673@code.launchpad.net

Description of the change

Moved example docs from wiki into sphinx.


Preview Diff

=== added file 'doc/source/example_deployment_dev.png'
Binary files doc/source/example_deployment_dev.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_dev.png 2011-05-19 22:35:51 +0000 differ
=== added file 'doc/source/example_deployment_ha.png'
Binary files doc/source/example_deployment_ha.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_ha.png 2011-05-19 22:35:51 +0000 differ
=== added file 'doc/source/example_deployment_pub.png'
Binary files doc/source/example_deployment_pub.png 1970-01-01 00:00:00 +0000 and doc/source/example_deployment_pub.png 2011-05-19 22:35:51 +0000 differ
=== added file 'doc/source/examples.rst'
--- doc/source/examples.rst 1970-01-01 00:00:00 +0000
+++ doc/source/examples.rst 2011-05-19 22:35:51 +0000
@@ -0,0 +1,214 @@
..
    Copyright (C) 2011 OpenStack LLC.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.

Examples
********

Client Usage Examples
=====================

These examples are demonstrated using the REST API, but they could
be performed with any other supported client protocol or API.

Basic Asynchronous Queue
------------------------

Multiple workers long-poll for messages until a client inserts
one. Both workers tell the server to hide the message once it is read
so only one worker will be able to see the message. The POST request
from a worker is an atomic get/set operation.

Worker1: long-polling worker, request blocks until a message is ready

``POST /account/queue?limit=1&wait=60&hide=60&detail=all``

Worker2: long-polling worker, request blocks until a message is ready

``POST /account/queue?limit=1&wait=60&hide=60&detail=all``

Client: insert message

``PUT /account/queue/id``

Worker1: Return from the blocking POST request with the new message
and process it

Worker1: delete the message once it is done processing

``DELETE /account/queue/id``

If Worker1 crashes or takes longer than 60 seconds to process the
message, the hide time will expire on the server and the message will
reappear. At this point another worker waiting for messages (such as
Worker2) will return with the message. If Worker1 doesn't crash but
needs more time, it can update the message with a longer hide time
to ensure another worker doesn't get a chance to process it.
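
The hide behavior described above can be sketched with a small
in-memory model. This is illustrative only: ``HideQueue``,
``get_hide``, and the explicit timestamps are assumptions for the
sketch, not burrow's server implementation.

```python
import time

# Toy in-memory model of the hide semantics; names are hypothetical
# and this is not the burrow server implementation.
class HideQueue:
    def __init__(self):
        self.messages = {}  # message id -> timestamp it is hidden until

    def put(self, msg_id):
        self.messages[msg_id] = 0.0  # visible immediately

    def get_hide(self, hide, now=None):
        """Atomic get/set: return one visible message and hide it."""
        now = time.time() if now is None else now
        for msg_id, hidden_until in self.messages.items():
            if hidden_until <= now:
                self.messages[msg_id] = now + hide
                return msg_id
        return None  # nothing visible; a real worker would keep waiting

    def delete(self, msg_id):
        self.messages.pop(msg_id, None)

q = HideQueue()
q.put("id")
assert q.get_hide(60, now=0.0) == "id"   # Worker1 gets and hides the message
assert q.get_hide(60, now=1.0) is None   # Worker2 sees nothing while hidden
assert q.get_hide(60, now=61.0) == "id"  # hide expired: the message reappears
```

Passing an explicit ``now`` keeps the sketch deterministic; a real
worker would simply repeat the blocking POST.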


Fast Asynchronous Queue
-----------------------

In this example message loss is acceptable and workers have the ability
to do batch-processing (process multiple messages at once). Workers
grab up to 100 messages at a time and tell the server to remove the
messages as they are returned. If a worker crashes after the server
returns them but before they are successfully processed, they are
lost. The DELETE request from a worker is an atomic get/delete
operation.

Worker1: long-polling worker, request blocks until a message is ready

``DELETE /account/queue?limit=100&wait=60&detail=all``

Worker2: long-polling worker, request blocks until a message is ready

``DELETE /account/queue?limit=100&wait=60&detail=all``

Client: insert messages

| ``PUT /account/queue/id1``
| ``PUT /account/queue/id2``
| ``PUT /account/queue/id3``
| ...

Worker1: Return from the blocking DELETE request with all new messages
and process them

Worker2: Return from the blocking DELETE request as more messages
are inserted that were not returned to Worker1
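
The atomic get/delete can be modeled the same way. The function name
and the plain list standing in for the queue are assumptions for the
sketch:

```python
# Sketch of the batch get/delete semantics; `get_delete` and the list
# standing in for the server-side queue are illustrative assumptions.
def get_delete(messages, limit):
    """Atomically return up to `limit` messages and remove them."""
    batch = messages[:limit]
    del messages[:limit]
    return batch

queue = ["id1", "id2", "id3"]
batch = get_delete(queue, limit=100)
assert batch == ["id1", "id2", "id3"]
# The messages are already gone server-side; if the worker crashes
# before processing the batch, these messages are lost.
assert queue == []
```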


Multi-cast Event Notifications
------------------------------

This type of message access allows multiple workers to read the same
message since the message is not hidden or removed by any worker. The
server will automatically remove the message once the message TTL
has expired.

Worker1:

``GET /account/queue?wait=60``

Worker2:

``GET /account/queue?wait=60``

Client:

``PUT /account/queue/id1?ttl=60``

Worker1: Return from blocking GET request with message id1

Worker2: Return from blocking GET request with message id1

Worker1:

``GET /account/queue?wait=60&marker=id1``

Worker2:

``GET /account/queue?wait=60&marker=id1``

The ``marker`` parameter is used to let the server know the last ID
that was seen by the worker. Only messages with IDs inserted after
this marker will be returned for that request.
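
The marker semantics can be sketched with messages kept in insertion
order, where every worker sees every message (the ``get_after`` helper
is hypothetical):

```python
# Illustrative model of marker-based reads: messages stay in insertion
# order and are never hidden or removed by readers.
def get_after(messages, marker=None):
    """Return messages inserted after `marker` (all of them if None)."""
    if marker is None:
        return list(messages)
    return messages[messages.index(marker) + 1:]

queue = ["id1", "id2", "id3"]
assert get_after(queue) == ["id1", "id2", "id3"]         # first read
assert get_after(queue, marker="id1") == ["id2", "id3"]  # skip seen IDs
assert get_after(queue, marker="id3") == []              # nothing newer yet
```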


Deployment Examples
===================

Configuration of the queue service will be flexible, and this section
describes three sample deployments. Each of the queue servers (green
boxes) can optionally have persistence configured for them. Note that
queue servers should not share the same backend storage unless the
backend storage is suitable for the HA needs of the deployment. The
queue servers do not know of or coordinate with one another in any
way; the clients and workers (or proxy, in the last example) are
responsible for balancing the load between available queue servers
and failing over to another server if one goes down.

Developer
---------

In this deployment, a single queue server is run, and all clients
and workers connect to it.

.. image:: example_deployment_dev.png


Simple HA
---------

In this deployment, multiple queue servers are run, and all clients
and workers are given the list of IP addresses for the available
queue servers. Clients should either connect to the first available
server or, if they want to distribute load amongst all three, use
some algorithm depending on the message. If all messages are unique,
a simple round-robin distribution may be sufficient. For messages with
client-side IDs that could have duplicates, the client should use a
hashing algorithm on the ID to ensure the same ID hits the same
server. This allows the queue servers to coalesce all messages with
the same ID. Should a queue server go down, the clients can simply
re-balance to the new server set. Workers should poll or remain
connected to all queue servers to ensure they can pull a message no
matter where it is inserted.

.. image:: example_deployment_ha.png

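One way to sketch the client-side ID hashing described above (the
server addresses and the choice of MD5 are assumptions for
illustration, not a prescribed scheme):

```python
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical addresses

def pick_server(message_id, servers=SERVERS):
    """Hash a client-side message ID so duplicate IDs hit one server."""
    digest = hashlib.md5(message_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same ID always maps to the same server, so that server can
# coalesce duplicates; unique IDs spread roughly evenly across the set.
assert pick_server("msg_123") == pick_server("msg_123")
assert pick_server("msg_123") in SERVERS
```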
Public Cloud Service
--------------------

In this deployment, proxy servers are placed in front of each cluster
of queue servers. The proxy servers manage spreading the load across
the queue cluster instead of relying on the clients and workers to
manage multiple connections. This is only suitable when the proxy
servers are configured redundantly (such as behind an HA load
balancer). For a given account ID, all proxy servers in a zone should
hash to the same subset of queue servers (with a default max of three)
and use that set to spread load across. This is similar to how Swift
spreads objects based on placement in the hash ring. Once the
account ID determines the set of queue servers to use, the queue name
and message ID (the other components of the unique message ID)
determine which queue server in the set to use. The algorithm used in
the proxy should be modular, so you can easily alter how many queue
servers to use for an account and how to distribute to them within
that set.

.. image:: example_deployment_pub.png

For example, if a client creates a message with ID
/account_1/queue_A/msg_123, the proxy server will parse out the
"account_1" component and use it in the hashing algorithm to get a
set of queue servers (let's say it returns the set qs1, qs4, qs5).
With this set, the proxy then hashes the rest of the ID,
"queue_A/msg_123", to determine which queue server to proxy to (let's
say it maps to qs4). If a message comes in with the exact same ID, the
same algorithm is used to proxy it to the same queue server, possibly
allowing the queue server to coalesce the message so it is processed
by a worker only once (eliminating the thundering herd problem). If a
queue server in the returned set should fail, the proxy can either run
with two servers or choose a third server until the original node
comes back up.
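
The two-level routing in this example can be sketched as follows. The
server pool, the MD5-based hashing, and the helper names are all
assumptions for illustration; burrow's actual proxy algorithm is
intended to be pluggable.

```python
import hashlib

QUEUE_SERVERS = ["qs%d" % n for n in range(1, 9)]  # hypothetical pool

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def server_set(account, size=3, servers=QUEUE_SERVERS):
    """Pick `size` servers for an account by walking from its hash point."""
    start = _hash(account) % len(servers)
    return [servers[(start + i) % len(servers)] for i in range(size)]

def route(message_id):
    """Route /account/queue/msg to one server in the account's set."""
    account, rest = message_id.strip("/").split("/", 1)
    chosen = server_set(account)
    return chosen[_hash(rest) % len(chosen)]

# Identical IDs always hit the same queue server, so it can coalesce them.
assert route("/account_1/queue_A/msg_123") == route("/account_1/queue_A/msg_123")
assert route("/account_1/queue_A/msg_123") in server_set("account_1")
```

Because only the account hash selects the set, changing the per-account
set size or the within-set distribution only touches ``server_set`` and
``route``, matching the modularity goal above.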

When the proxy is handling worker requests, it uses the same hashing
algorithms. When a worker GETs a queue name to read messages, the
account portion is parsed and a connection is made to all queue
servers handling that account. The proxy then aggregates messages
from all of those queue servers into one view for the worker to
consume. The proxy and queue servers may need to use a more efficient
multiplexing protocol that can keep state for multiple accounts and
requests, rather than simple REST-based calls, to keep the number of
connections reasonable.
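
The aggregation step can be sketched as merging per-server message
lists into a single view. The server names, the response shape, and
the assumption that each server returns its messages already sorted by
ID are all hypothetical:

```python
import heapq

# Hypothetical per-server responses for one account, each list already
# sorted by message ID as that queue server would return it.
per_server = {
    "qs1": ["id1", "id4"],
    "qs4": ["id2"],
    "qs5": ["id3", "id5"],
}

# The proxy merges the sorted per-server streams into one ordered view
# for the worker to consume.
merged = list(heapq.merge(*per_server.values()))
assert merged == ["id1", "id2", "id3", "id4", "id5"]
```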
=== modified file 'doc/source/index.rst'
--- doc/source/index.rst 2011-04-20 18:21:51 +0000
+++ doc/source/index.rst 2011-05-19 22:35:51 +0000
@@ -50,6 +50,7 @@
    :maxdepth: 1
 
    protocols
+   examples
 
 Source Code
 ===========
