Merge lp:~wallyworld/goose/rate-limit-retry-tests into lp:~gophers/goose/trunk
Status: Merged
Merged at revision: 55
Proposed branch: lp:~wallyworld/goose/rate-limit-retry-tests
Merge into: lp:~gophers/goose/trunk
Prerequisite: lp:~wallyworld/goose/control-test-doubles
Diff against target: 382 lines (+138/-51) (has conflicts), 7 files modified:
  http/client.go (+14/-8)
  identity/userpass.go (+6/-0)
  nova/live_test.go (+36/-0)
  nova/local_test.go (+50/-13)
  testservices/novaservice/service.go (+10/-14)
  testservices/novaservice/service_http.go (+17/-16)
  testservices/service.go (+5/-0)
  Text conflict in identity/userpass.go
To merge this branch: bzr merge lp:~wallyworld/goose/rate-limit-retry-tests
Related bugs: (none)
Reviewer: The Go Language Gophers (Pending)
Review via email: mp+144849@code.launchpad.net
Commit message
Description of the change
Rewrite rate limit retry tests
This branch uses the new service double control hooks introduced in the previous branch to rewrite the tests which check that rate limit exceeded retries are handled properly.
The kludge used previously to induce a retry error has been removed, and now an additional test can also be easily added to check the behaviour if too many rate limit retry responses are received.
So that the tests run fast, I've allowed for the Retry-After header value to be a float (even though a real instance only assigns an int). This means the tests can specify a really short retry time (I used 1ms).
I've also re-added a rate limit retry test for use with the live instance, but improved it so that it exits as soon as the first rate limit exceeded response is received.
Ian Booth (wallyworld) wrote:
John A Meinel (jameinel) wrote:
Overall LGTM.
File http/client.go (right):
http/client.go:195: for i := 0; i <= c.maxRetries; i++ {
<= ? Doesn't that mean we will try 4 times instead of 3?
File identity/
identity/
I think I have a more complete fix for this:
https:/
File nova/live_test.go (right):
nova/live_
Is it possible to use your hooks to check for a retry request, rather
than reading the output string? (I realize this probably didn't exist
before, but it seems like a really good use of it.)
File testservices/
testservices/
an int but we don't want the tests taking too long to run.
Maybe: RFC says that Retry-After should be an int, but we don't want to
wait an entire second during the test suite.
Ian Booth (wallyworld) wrote:
Please take a look.
File http/client.go (right):
http/client.go:195: for i := 0; i <= c.maxRetries; i++ {
On 2013/01/28 06:44:57, jameinel wrote:
> <= ? Doesn't that mean we will try 4 times instead of 3?
The logic was confusing retries with total attempts. Using "retries"
terminology, the client sends the first request and retries X times.
With "sendAttempts" terminology, the client tries "sendAttempts"
requests in total.
I've changed the variable and constant to maxSendAttempts to remove the ambiguity.
File identity/
identity/
On 2013/01/28 06:44:57, jameinel wrote:
> I think I have a more complete fix for this:
https:/
I had a quick look; the MP was missing the pre-req branch so it was very large. It looked OK at first glance. Perhaps we can land this branch now as is and follow up with yours?
File nova/live_test.go (right):
nova/live_
On 2013/01/28 06:44:57, jameinel wrote:
> Is it possible to use your hooks to check for a retry request, rather
> than reading the output string? (I realize this probably didn't exist
> before, but it seems like a really good use of it.)
Possibly, but the hook stuff has been set up to work with the test
doubles (manipulates the state of the test service back end) rather than
a real live instance. The nova client implementation would need to be
retooled to implement the ServiceControl interface. Perhaps this can be
done in a followup branch.
File testservices/
testservices/
an int but we don't want the tests taking too long to run.
On 2013/01/28 06:44:57, jameinel wrote:
> Maybe: RFC says that Retry-After should be an int, but we don't want
> to wait an entire second during the test suite.
Done.
John A Meinel (jameinel) wrote:
...
> https:/
>
> I had a quick look, the mp was missing the pre-req branch so was
> very large. Looked ok at first glance. Perhaps we can land this
> branch now as is and follow up with yours?
Yeah, goose-bot's trunk wasn't up to date with lp:goose, which I
based my patch on. I tried to re-push, so this should be cleaned up.
As for your patch, I think it is an accurate fix, but my concern is
just a more general thing. When we encounter edge cases the code
wasn't covering (a service that is only exported in one region), we
really need a test case for it, because those are the sorts of things
we are going to forget about in 6 months when we go tweaking things
for HP vs Canonistack vs Rackspace, etc.
> https:/
> File nova/live_test.go (right):
>
> https:/
>
> nova/live_
> On 2013/01/28 06:44:57, jameinel wrote:
>> Is it possible to use your hooks to check for a retry request,
>> rather than reading the output string? (I realize this probably
>> didn't exist before, but it seems like a really good use of it.)
>
> Possibly, but the hook stuff has been set up to work with the test
> doubles (manipulates the state of the test service back end) rather
> than a real live instance. The nova client implementation would
> need to be retooled to implement the ServiceControl interface.
> Perhaps this can be done in a followup branch.
I guess it is hooking the client-side stuff, vs hooking the service
side stuff.
> https:/
> File testservices/
>
> https:/
>
> testservices/
> an int but we don't want the tests taking too long to run.
> On 2013/01/28 06:44:57, jameinel wrote:
>> Maybe: RFC says that Retry-After should be an int, but we don't
>> want to wait an entire second during the test suite.
>
> Done.
>
> https:/
LGTM.
John
=:->
John A Meinel (jameinel) wrote:
Ian Booth (wallyworld) wrote:
*** Submitted:
Rewrite rate limit retry tests
This branch uses the new service double control hooks introduced in the
previous branch to rewrite the tests which check that rate limit
exceeded retries are handled properly.
The kludge used previously to induce a retry error has been removed, and
now an additional test can also be easily added to check the behaviour
if too many rate limit retry responses are received.
So that the tests run fast, I've allowed for the Retry-After header
value to be a float (even though a real instance only assigns an int).
This means the tests can specify a really short retry time (I used 1ms).
I've also re-added a rate limit retry test for use with the live
instance, but improved it so that it exits as soon as the first rate
limit exceeded response is received.
R=jameinel
CC=
https:/
Preview Diff
=== modified file 'http/client.go'
--- http/client.go 2012-12-21 04:10:14 +0000
+++ http/client.go 2013-01-29 01:46:23 +0000
@@ -25,9 +25,9 @@

type Client struct {
http.Client
- logger *log.Logger
- authToken string
- maxRetries int
+ logger *log.Logger
+ authToken string
+ maxSendAttempts int
}

type ErrorResponse struct {
@@ -55,11 +55,17 @@
RespReader io.ReadCloser
}

+const (
+ // The maximum number of times to try sending a request before we give up
+ // (assuming any unsuccessful attempts can be sensibly tried again).
+ MaxSendAttempts = 3
+)
+
func New(httpClient http.Client, logger *log.Logger, token string) *Client {
if logger == nil {
logger = log.New(os.Stderr, "", log.LstdFlags)
}
- return &Client{httpClient, logger, token, 3}
+ return &Client{httpClient, logger, token, MaxSendAttempts}
}

// JsonRequest JSON encodes and sends the supplied object (if any) to the specified URL.
@@ -187,7 +193,7 @@
}

func (c *Client) sendRateLimitedRequest(method, URL string, headers http.Header, reqData []byte) (resp *http.Response, err error) {
- for i := 0; i < c.maxRetries; i++ {
+ for i := 0; i < c.maxSendAttempts; i++ {
var reqReader io.Reader
if reqData != nil {
reqReader = bytes.NewReader(reqData)
@@ -209,17 +215,17 @@
if resp.StatusCode != http.StatusRequestEntityTooLarge {
return resp, nil
}
- retryAfter, err := strconv.Atoi(resp.Header.Get("Retry-After"))
+ retryAfter, err := strconv.ParseFloat(resp.Header.Get("Retry-After"), 32)
if err != nil {
return nil, errors.Newf(err, URL, "Invalid Retry-After header")
}
if retryAfter == 0 {
return nil, errors.Newf(err, URL, "Resource limit exeeded at URL %s.", URL)
}
- c.logger.Printf("Too many requests, retrying in %d seconds.", retryAfter)
+ c.logger.Printf("Too many requests, retrying in %dms.", int(retryAfter*1000))
time.Sleep(time.Duration(retryAfter) * time.Second)
}
- return nil, errors.Newf(err, URL, "Maximum number of retries (%d) reached sending request to %s.", c.maxRetries, URL)
+ return nil, errors.Newf(err, URL, "Maximum number of attempts (%d) reached sending request to %s.", c.maxSendAttempts, URL)
}

type HttpError struct {

=== modified file 'identity/userpass.go'
--- identity/userpass.go 2013-01-27 13:37:53 +0000
+++ identity/userpass.go 2013-01-29 01:46:23 +0000
@@ -105,10 +105,16 @@
service.Endpoints = append(service.Endpoints[:i], service.Endpoints[i+1:]...)
}
}
+<<<<<<< TREE
if len(service.Endpoints) == 0 {
fmt.Fprintf(os.Stderr, "Found no endpoints for %v\n", service.Type)
}
details.ServiceURLs[service.Type] = service.Endpoints[0].PublicURL
+=======
+ if len(service.Endpoints) > 0 {
+ details.ServiceURLs[service.Type] = service.Endpoints[0].PublicURL
+ }
+>>>>>>> MERGE-SOURCE
}

return details, nil

=== modified file 'nova/live_test.go'
--- nova/live_test.go 2013-01-21 11:18:33 +0000
+++ nova/live_test.go 2013-01-29 01:46:23 +0000
@@ -1,11 +1,14 @@
package nova_test

import (
+ "bytes"
. "launchpad.net/gocheck"
"launchpad.net/goose/client"
"launchpad.net/goose/errors"
"launchpad.net/goose/identity"
"launchpad.net/goose/nova"
+ "log"
+ "strings"
"time"
)

@@ -419,3 +422,36 @@
c.Check(fip.FixedIP, IsNil)
c.Check(fip.InstanceId, IsNil)
}
+
+// TestRateLimitRetry checks that when we make too many requests and receive a Retry-After response, the retry
+// occurs and the request ultimately succeeds.
+func (s *LiveTests) TestRateLimitRetry(c *C) {
+ // Capture the logged output so we can check for retry messages.
+ var logout bytes.Buffer
+ logger := log.New(&logout, "", log.LstdFlags)
+ client := client.NewClient(s.cred, identity.AuthUserPass, logger)
+ nova := nova.New(client)
+ // Delete the artifact if it already exists.
+ testGroup, err := nova.SecurityGroupByName("test_group")
+ if err != nil {
+ c.Assert(errors.IsNotFound(err), Equals, true)
+ } else {
+ nova.DeleteSecurityGroup(testGroup.Id)
+ c.Assert(err, IsNil)
+ }
+ // Create some artifacts a number of times in succession and ensure each time is successful,
+ // even with retries being required. As soon as we see a retry message, the test has passed
+ // and we exit.
+ for i := 0; i < 50; i++ {
+ testGroup, err = nova.CreateSecurityGroup("test_group", "test")
+ c.Assert(err, IsNil)
+ nova.DeleteSecurityGroup(testGroup.Id)
+ c.Assert(err, IsNil)
+ output := logout.String()
+ if strings.Contains(output, "Too many requests, retrying in") == true {
+ return
+ }
+ }
+ // No retry message logged so test has failed.
+ c.Fail()
+}

=== modified file 'nova/local_test.go'
--- nova/local_test.go 2013-01-29 01:46:22 +0000
+++ nova/local_test.go 2013-01-29 01:46:23 +0000
@@ -2,11 +2,14 @@

import (
"bytes"
+ "fmt"
. "launchpad.net/gocheck"
"launchpad.net/goose/client"
"launchpad.net/goose/errors"
+ goosehttp "launchpad.net/goose/http"
"launchpad.net/goose/identity"
"launchpad.net/goose/nova"
+ "launchpad.net/goose/testservices"
"launchpad.net/goose/testservices/openstackservice"
"log"
"net/http"
@@ -23,9 +26,12 @@
type localLiveSuite struct {
LiveTests
// The following attributes are for using testing doubles.
- Server *httptest.Server
- Mux *http.ServeMux
- oldHandler http.Handler
+ Server *httptest.Server
+ Mux *http.ServeMux
+ oldHandler http.Handler
+ openstack *openstackservice.Openstack
+ retryErrorCount int // The current retry error count.
+ retryErrorCountToSend int // The number of retry errors to send.
}

func (s *localLiveSuite) SetUpSuite(c *C) {
@@ -45,8 +51,8 @@
Region: "some region",
TenantName: "tenant",
}
- openstack := openstackservice.New(s.cred)
- openstack.SetupHTTP(s.Mux)
+ s.openstack = openstackservice.New(s.cred)
+ s.openstack.SetupHTTP(s.Mux)

s.LiveTests.SetUpSuite(c)
}
@@ -59,6 +65,7 @@
}

func (s *localLiveSuite) SetUpTest(c *C) {
+ s.retryErrorCount = 0
s.LiveTests.SetUpTest(c)
}

@@ -68,12 +75,18 @@

// Additional tests to be run against the service double only go here.

-// TestRateLimitRetry checks that when we make too many requests and receive a Retry-After response, the retry
-// occurs and the request ultimately succeeds.
-func (s *localLiveSuite) TestRateLimitRetry(c *C) {
- // Capture the logged output so we can check for retry messages.
- var logout bytes.Buffer
- logger := log.New(&logout, "", log.LstdFlags)
+func (s *localLiveSuite) retryLimitHook(sc testservices.ServiceControl) testservices.ControlProcessor {
+ return func(sc testservices.ServiceControl, args ...interface{}) error {
+ sendError := s.retryErrorCount < s.retryErrorCountToSend
+ if sendError {
+ s.retryErrorCount++
+ return &testservices.RateLimitExceededError{fmt.Errorf("retry limit exceeded")}
+ }
+ return nil
+ }
+}
+
+func (s *localLiveSuite) setupRetryErrorTest(c *C, logger *log.Logger) (*nova.Client, *nova.SecurityGroup) {
client := client.NewClient(s.cred, identity.AuthUserPass, logger)
nova := nova.New(client)
// Delete the artifact if it already exists.
@@ -84,11 +97,35 @@
nova.DeleteSecurityGroup(testGroup.Id)
c.Assert(err, IsNil)
}
- testGroup, err = nova.CreateSecurityGroup("test_group", "test rate limit")
+ testGroup, err = nova.CreateSecurityGroup("test_group", "test")
c.Assert(err, IsNil)
- nova.DeleteSecurityGroup(testGroup.Id)
+ return nova, testGroup
+}
+
+// TestRateLimitRetry checks that when we make too many requests and receive a Retry-After response, the retry
+// occurs and the request ultimately succeeds.
+func (s *localLiveSuite) TestRateLimitRetry(c *C) {
+ // Capture the logged output so we can check for retry messages.
+ var logout bytes.Buffer
+ logger := log.New(&logout, "", log.LstdFlags)
+ nova, testGroup := s.setupRetryErrorTest(c, logger)
+ s.retryErrorCountToSend = goosehttp.MaxSendAttempts - 1
+ s.openstack.Nova.RegisterControlPoint("removeSecurityGroup", s.retryLimitHook(s.openstack.Nova))
+ defer s.openstack.Nova.RegisterControlPoint("removeSecurityGroup", nil)
+ err := nova.DeleteSecurityGroup(testGroup.Id)
c.Assert(err, IsNil)
// Ensure we got at least one retry message.
output := logout.String()
c.Assert(strings.Contains(output, "Too many requests, retrying in"), Equals, true)
}
+
+// TestRateLimitRetryExceeded checks that an error is raised if too many retry responses are received from the server.
+func (s *localLiveSuite) TestRateLimitRetryExceeded(c *C) {
+ nova, testGroup := s.setupRetryErrorTest(c, nil)
+ s.retryErrorCountToSend = goosehttp.MaxSendAttempts
+ s.openstack.Nova.RegisterControlPoint("removeSecurityGroup", s.retryLimitHook(s.openstack.Nova))
+ defer s.openstack.Nova.RegisterControlPoint("removeSecurityGroup", nil)
+ err := nova.DeleteSecurityGroup(testGroup.Id)
+ c.Assert(err, Not(IsNil))
+ c.Assert(err.Error(), Matches, ".*Maximum number of attempts.*")
+}

=== modified file 'testservices/novaservice/service.go'
--- testservices/novaservice/service.go 2013-01-29 01:46:22 +0000
+++ testservices/novaservice/service.go 2013-01-29 01:46:23 +0000
@@ -18,17 +18,16 @@
// contains the service double's internal state.
type Nova struct {
testservices.ServiceInstance
- flavors map[string]nova.FlavorDetail
- servers map[string]nova.ServerDetail
- groups map[int]nova.SecurityGroup
- rules map[int]nova.SecurityGroupRule
- floatingIPs map[int]nova.FloatingIP
- serverGroups map[string][]int
- serverIPs map[string][]int
- nextGroupId int
- nextRuleId int
- nextIPId int
- sendFakeRateLimitResponse bool
+ flavors map[string]nova.FlavorDetail
+ servers map[string]nova.ServerDetail
+ groups map[int]nova.SecurityGroup
+ rules map[int]nova.SecurityGroupRule
+ floatingIPs map[int]nova.FloatingIP
+ serverGroups map[string][]int
+ serverIPs map[string][]int
+ nextGroupId int
+ nextRuleId int
+ nextIPId int
}

// endpoint returns either a versioned or non-versioned service
@@ -82,9 +81,6 @@
floatingIPs: make(map[int]nova.FloatingIP),
serverGroups: make(map[string][]int),
serverIPs: make(map[string][]int),
- // The following attribute controls whether rate limit responses are sent back to the caller.
- // This is switched off when we want to ensure the client eventually gets a proper response.
- sendFakeRateLimitResponse: true,
ServiceInstance: testservices.ServiceInstance{
IdentityService: identityService,
Hostname: hostname,

=== modified file 'testservices/novaservice/service_http.go'
--- testservices/novaservice/service_http.go 2013-01-23 23:37:38 +0000
+++ testservices/novaservice/service_http.go 2013-01-29 01:46:23 +0000
@@ -9,6 +9,7 @@
"io"
"io/ioutil"
"launchpad.net/goose/nova"
+ "launchpad.net/goose/testservices"
"launchpad.net/goose/testservices/identityservice"
"net/http"
"path"
@@ -212,7 +213,8 @@
"",
"text/plain; charset=UTF-8",
"too many requests",
- map[string]string{"Retry-After": "1"},
+ // RFC says that Retry-After should be an int, but we don't want to wait an entire second during the test suite.
+ map[string]string{"Retry-After": "0.001"},
nil,
}
)
@@ -286,15 +288,20 @@
if err == nil {
return
}
- resp, _ := err.(http.Handler)
- if resp == nil {
- resp = &errorResponse{
- http.StatusInternalServerError,
- `{"internalServerError":{"message":"$ERROR$",code:500}}`,
- "application/json",
- err.Error(),
- nil,
- h.n,
+ var resp http.Handler
+ if _, ok := err.(*testservices.RateLimitExceededError); ok {
+ resp = errRateLimitExceeded
+ } else {
+ resp, _ = err.(http.Handler)
+ if resp == nil {
+ resp = &errorResponse{
+ http.StatusInternalServerError,
+ `{"internalServerError":{"message":"$ERROR$",code:500}}`,
+ "application/json",
+ err.Error(),
+ nil,
+ h.n,
+ }
}
}
resp.ServeHTTP(w, r)
@@ -782,12 +789,6 @@
if err == nil {
return errBadRequestDuplicateValue
}
- if req.Group.Description == "test rate limit" && n.sendFakeRateLimitResponse {
- n.sendFakeRateLimitResponse = false
- return errRateLimitExceeded
- } else {
- n.sendFakeRateLimitResponse = true
- }
n.nextGroupId++
nextId := n.nextGroupId
err = n.addSecurityGroup(nova.SecurityGroup{

=== modified file 'testservices/service.go'
--- testservices/service.go 2013-01-29 01:46:22 +0000
+++ testservices/service.go 2013-01-29 01:46:23 +0000
@@ -25,6 +25,11 @@
ControlHooks map[string]ControlProcessor
}

+// Internal Openstack errors.
+type RateLimitExceededError struct {
+ error
+}
+
// ControlProcessor defines a function that is run when a specified control point is reached in the service
// business logic. The function receives the service instance so internal state can be inspected, plus for any
// arguments passed to the currently executing service function.
Reviewers: mp+144849_code.launchpad.net,
Message:
Please take a look.
https://code.launchpad.net/~wallyworld/goose/rate-limit-retry-tests/+merge/144849
Requires: https://code.launchpad.net/~wallyworld/goose/control-test-doubles/+merge/144838
(do not edit description out of merge proposal)
Please review this at https://codereview.appspot.com/7200049/
Affected files:
  A [revision details]
  M http/client.go
  M identity/userpass.go
  M nova/live_test.go
  M nova/local_test.go
  M testservices/novaservice/service.go
  M testservices/novaservice/service_http.go
  M testservices/service.go