I like where this is going; I wish our code base would let it be
smaller.
I'm a little concerned that the new test is going to be timing
dependent, but otherwise LGTM.
https://codereview.appspot.com/80340043/diff/20001/state/apiserver/provisioner/machineerror.go
File state/apiserver/provisioner/machineerror.go (right):

https://codereview.appspot.com/80340043/diff/20001/state/apiserver/provisioner/machineerror.go#newcode20
state/apiserver/provisioner/machineerror.go:20:
I really wish we could inherit from a common NotifyWatcher
implementation and then just implement the "loop()" in our custom code.
I think we could actually do that with a NotifyWatchLooper or some such
that did all of this work and just took an interface like:
type Looper interface {
    Loop(out chan struct{}) error
}
I've done some bits like that in the Workers, where I implemented a
NotifyWorker that takes a Handler which is triggered only when the
changes are actually interesting.
Anyway, not something you have to do here, because our pattern in code
has definitely been to just copy & paste the boilerplate.
https://codereview.appspot.com/80340043/diff/20001/state/apiserver/provisioner/machineerror.go#newcode66
state/apiserver/provisioner/machineerror.go:66: // triggering every 2
minutes.
Your comment here doesn't match the actual ErrorRetryWaitDelay. Maybe
just reference the variable instead of an explicit time?
https://codereview.appspot.com/80340043/diff/20001/state/apiserver/provisioner/machineerror.go#newcode73
state/apiserver/provisioner/machineerror.go:73: case <-time.After(ErrorRetryWaitDelay):
But all of this boilerplate exists *only* for this line (when this
channel triggers, trigger an output).
https://codereview.appspot.com/80340043/diff/20001/worker/provisioner/provisioner_test.go
File worker/provisioner/provisioner_test.go (right):

https://codereview.appspot.com/80340043/diff/20001/worker/provisioner/provisioner_test.go#newcode773
worker/provisioner/provisioner_test.go:773: s.PatchValue(&apiserverprovisioner.ErrorRetryWaitDelay, 50*time.Millisecond)
Can we use testing.ShortWait here instead of another hard-coded time?
https://codereview.appspot.com/80340043/diff/20001/worker/provisioner/provisioner_test.go#newcode804
worker/provisioner/provisioner_test.go:804: // Machine 4 is never
provisioned.
I don't understand why Machine 4 is listed as never provisioned, when
the StartInstance calls claim that it will be provisioned after 2
retries.
Is this sensitive to the timeout times? (we assume we poll at exactly
50ms, and we get 2 polls in the 100ms wait above?)
Anything that requires getting the timing exactly right tends to fail
on the bot. It may be that checkStartInstance properly retries until
success, or something else makes this safe.
Is there a reason ErrorRetryWaitDelay needs to be 50ms here rather than
something even smaller, like 5ms?
https://codereview.appspot.com/80340043/