Merge lp:~themue/juju-core/go-worker-firewaller-machines into lp:~juju/juju-core/trunk
Status: Merged
Approved by: Gustavo Niemeyer
Approved revision: 302
Merged at revision: 313
Proposed branch: lp:~themue/juju-core/go-worker-firewaller-machines
Merge into: lp:~juju/juju-core/trunk
Diff against target: 325 lines (+310/-0), 3 files modified
  worker/firewaller/export_test.go (+16/-0)
  worker/firewaller/firewaller.go (+154/-0)
  worker/firewaller/firewaller_test.go (+140/-0)
To merge this branch: bzr merge lp:~themue/juju-core/go-worker-firewaller-machines
Related bugs: none
Reviewer: The Go Language Gophers (review requested, pending)
Review via email: mp+114938@code.launchpad.net
Commit message
Description of the change
worker: started implementation of the firewaller
Based on the work of Dave and Roger, this proposal implements
the first part of the firewaller worker. The main loop now
contains a machines watcher alongside the environment watcher.
The machines watcher adds and removes the watchers for the
individual machines' units. The changes reported by those unit
watchers will be handled in the next iteration.
Roger Peppe (rogpeppe) wrote:
William Reade (fwereade) wrote:
File worker/
worker/
*state.
Shouldn't be a field; it's only used inside loop().
Also, I think I mentioned this somewhere before... why do the firewaller
and the provisioner each hold independent instances of what should be
the same Environ? Given that we can (AIUI) change environ config, and
use the environ itself, safely, at any time in any goroutine, I don't
see the benefit of duplicating the get-me-
worker/
*state.
Shouldn't be a field; it's only used inside loop().
worker/
On 2012/07/16 16:40:46, rog wrote:
> defer watcher.
defer close(fw.
defer watcher.
(although as stated above I don't think machinesWatcher needs to be a
field)
worker/
fw.tomb.
(except it shouldn't be a field, and IMO shouldn't even exist here)
worker/
watcher.MustErr, as above.
worker/
Feels like a panic situation to me.
worker/
go machine.loop()
In fact, plausibly, m := newMachine(
worker/
fw.tomb.
William Reade (fwereade) wrote:
I think there are several lurking gotchas in the sub-goroutines. While
I'm not *certain* that my analogous implementations in
https:/
https:/
are correct, I am reasonably sure they ensure events happen in the
right order (i.e. no changes after a depart, etc), and that this code
doesn't. Ping me tomorrow if you want to talk about it; the comments
are a little scattered.
File worker/
worker/
Don't we need to wait until the *machine* has finished, not just the
watcher? i.e. calling m.watcher.Stop() here does *not* guarantee that we
won't see any more events from it on machineUnitsChanges; we
need to be sure that the machine.loop goroutine has exited.
I think we need tombs for the sub-goroutines, as I have in the
relation-
(the tomb and Stop() should probably be on machine)
worker/
On 2012/07/16 23:23:59, fwereade wrote:
> fw.tomb.
Feels a bit over-familiar, but better than dropping errors on the floor.
Gustavo Niemeyer (niemeyer) wrote:
This is going in a great direction. Some initial comments.
File worker/
worker/
and closing of ports.
// Firewaller watches the state for ports open or closed
// and reflects those changes onto the backing environment.
worker/
I believe we want to execute the line below no matter what. The err is
nil; the MustErr is supposed to explode, pointing out the inconsistency.
worker/
machine")
"trying to remove machine that wasn't added"
worker/
We can't panic here. stop can fail in pretty usual circumstances that
are outside of the application's control (e.g. network issues). This
kind of error condition must be properly managed as usual.
worker/
s/machine/
worker/
fw *Firewaller, changes chan *machineUnitsChange
newMachineTracker is not creating a machine. Please fix the docs
accordingly.
Also, the channel should be qualified with <- to make it clear that this
is an output channel.
worker/
case <-m.tomb.Dying():
worker/
Checking for ok after using the received change?
worker/
stop working.
// stop stops the machine tracker.
worker/
A whole type just to enclose a bool feels like overkill.
worker/
Is this a unit tracker too?
Roger Peppe (rogpeppe) wrote:
File worker/
worker/
i think this line should go before the machinesWatcher stop, otherwise
we'll see EOF on the channel before the firewaller has properly stopped.
maybe that doesn't matter; i'm not sure.
worker/
On 2012/07/17 16:13:50, niemeyer wrote:
> I believe we want to execute the line below no matter what. The err is
> nil, the MustErr is supposed to explode pointing out the inconsistency.
in fact, if there's EOF from the machinesWatcher, it implies that the
machines watcher has already stopped, so there's no need to call Stop.
worker/
we could store this in the firewaller to make it obvious that the changes
channel is shared between all machines.
Gustavo Niemeyer (niemeyer) wrote:
File worker/
worker/
On 2012/07/17 17:26:24, rog wrote:
> On 2012/07/17 16:13:50, niemeyer wrote:
> > I believe we want to execute the line below no matter what. The err
> > is nil, the MustErr is supposed to explode pointing out the
> > inconsistency.
> in fact, if there's EOF from the machinesWatcher, it implies that the
> machines watcher has already stopped, so there's no need to call Stop.
Good point.
Roger Peppe (rogpeppe) wrote:
Very close to LGTM, but I'd like to see more testing of the error paths.
What happens when zookeeper goes down, for example?
And I'm still unhappy about the non-thread-safe access to firewaller
variables in the tests.
File worker/
worker/
loop.
// The watcher terminated prematurely, so end the loop.
File worker/
worker/
c.Assert(
DeepEquals, allMachines)
worker/
c.Assert(
DeepEquals, allMachines)
Roger Peppe (rogpeppe) wrote:
On 2012/07/18 12:26:39, rog wrote:
> Very close to LGTM, but I'd like to see more testing of the error
> paths.
> What happens when zookeeper goes down, for example?
from IRC:
[13:32:33] <rog> TheMue: here's a thought: currently the firewaller
opens its own state. how about we change the signature to
NewFirewaller(st *state.State)?
[13:33:01] <rog> TheMue: then we can pass in a state that we have a
handle to, and can close that and see what happens
[13:33:34] <rog> TheMue: in fact, that enables both the PA and the
firewaller to use the same state object, which is probably a good thing.
Gustavo Niemeyer (niemeyer) wrote:
On Wed, Jul 18, 2012 at 9:35 AM, <email address hidden> wrote:
> from IRC:
>
> [13:32:33] <rog> TheMue: here's a thought: currently the firewaller
> opens its own state. how about we change the signature to
> NewFirewaller(st *state.State)?
> [13:33:01] <rog> TheMue: then we can pass in a state that we have a
> handle to, and can close that and see what happens
> [13:33:34] <rog> TheMue: in fact, that enables both the PA and the
> firewaller to use the same state object, which is probably a good thing.
+1!
We should change all workers to behave like that (in their respective CLs).
--
gustavo @ http://
Gustavo Niemeyer (niemeyer) wrote:
One more little round.
File worker/
worker/
On 2012/07/18 07:52:44, TheMue wrote:
> On 2012/07/17 16:13:50, niemeyer wrote:
> > We can't panic here. stop can fail in pretty usual circumstances
> > that are outside of the application's control (e.g. network issues).
> > This kind of error condition must be properly managed as usual.
> Changed into logging and keeping the machine tracker, so that a future
> removal won't fail, or at least the finish of the firewaller will
> again try to stop it.
It will not try to stop it again, as far as I can see. You got a Removed
notice for the respective node, and it won't show up again in the
future. There's also no good reason for a machineTracker to fail to
stop, unless there's a real problem. Unless you have something else in
mind, I think this should be a real error rather than a logged message.
worker/
On 2012/07/18 07:52:44, TheMue wrote:
> In case of ok == false I need to send a &machineUnitsChange too. Doing
> that explicitly I would need the same select as the regular one above.
> This way change is nil, it gets sent, and afterwards the loop is left.
Ok, a comment sounds fine.
File worker/
worker/
ports open or closed
Sorry, my typo: s/open/opened/
worker/
&machineUnitsChange
What happens if mt.changes is closed?
We have to keep that kind of concern in mind when working with channels.
Roger Peppe (rogpeppe) wrote:
File worker/
worker/
&machineUnitsChange
On 2012/07/18 13:56:01, niemeyer wrote:
> What happens if mt.changes is closed?
> We have to keep that kind of concern in mind when working with
> channels.
AFAICS mt.changes is the same channel as machineUnitsChanges, so
there's no reason to ever close it IMO (although I see that it is
closed in fact - i'd remove that).
I suggest putting machineUnitsChanges inside the firewaller type, so
this line would be:
case mt.firewaller.machineUnitsChanges <- &machineUnitsChange{mt, change}:
which makes the mux logic more clear, i think.
Gustavo Niemeyer (niemeyer) wrote:
File worker/
worker/
As I mentioned on IRC, I think your approach of logging is actually
right. What's misleading here is the comment: the machine tracker *is*
stopped if we got here. The message should look like:
"machine tracker returned error when stopping: %v"
File worker/
worker/
removedMachine.
This should also be moved above the if block.
- 301. By Frank Mueller
  firewaller: more simplification and better testing after review
- 302. By Frank Mueller
  firewaller: small line swap
Gustavo Niemeyer (niemeyer) wrote:
LGTM with the following handled:
File worker/
worker/
(*Firewaller, error) {
Nice to see how this was simplified by moving logic out.
worker/
Nice cleanup as well!
worker/
remove-machine %v", removedMachine.Id())
Either this should be dropped, or clarified to state what's being
reported. No machines are being removed here. I suggest just dropping.
worker/
%v", m.id)
Ditto.
worker/
Why do we need this here? If the firewaller dies, it will stop the
tracker.
worker/
Ditto.
Gustavo Niemeyer (niemeyer) wrote:
File worker/
worker/
remove-machine %v", removedMachine.Id())
On 2012/07/19 00:16:16, niemeyer wrote:
> Either this should be dropped, or clarified to state what's being
> reported. No machines are being removed here. I suggest just dropping.
Oops. I now realize you're using this to test the logic, which sounds
fine.
Please reword it as:
"firewaller: started tracking machine %d"
and
"firewaller: stopped tracking machine %d"
- 303. By Frank Mueller
  firewaller: final changes and merge of trunk before submit
Preview Diff
=== added directory 'worker/firewaller'
=== added file 'worker/firewaller/export_test.go'
--- worker/firewaller/export_test.go	1970-01-01 00:00:00 +0000
+++ worker/firewaller/export_test.go	2012-07-18 16:28:20 +0000
@@ -0,0 +1,16 @@
+package firewaller
+
+// AllMachines returns the ids of all monitored machines.
+func (fw *Firewaller) AllMachines() []int {
+	all := []int{}
+	for _, machine := range fw.machines {
+		all = append(all, machine.id)
+	}
+	return all
+}
+
+// CloseState allows to close the state of the firewaller
+// externally.
+func (fw *Firewaller) CloseState() error {
+	return fw.st.Close()
+}

=== added file 'worker/firewaller/firewaller.go'
--- worker/firewaller/firewaller.go	1970-01-01 00:00:00 +0000
+++ worker/firewaller/firewaller.go	2012-07-18 16:28:20 +0000
@@ -0,0 +1,154 @@
+package firewaller
+
+import (
+	"launchpad.net/juju-core/log"
+	"launchpad.net/juju-core/state"
+	"launchpad.net/juju-core/state/watcher"
+	"launchpad.net/tomb"
+)
+
+// Firewaller watches the state for ports opened or closed
+// and reflects those changes onto the backing environment.
+type Firewaller struct {
+	st                  *state.State
+	tomb                tomb.Tomb
+	machinesWatcher     *state.MachinesWatcher
+	machines            map[int]*machineTracker
+	machineUnitsChanges chan *machineUnitsChange
+}
+
+// NewFirewaller returns a new Firewaller.
+func NewFirewaller(st *state.State) (*Firewaller, error) {
+	fw := &Firewaller{
+		st:                  st,
+		machinesWatcher:     st.WatchMachines(),
+		machines:            make(map[int]*machineTracker),
+		machineUnitsChanges: make(chan *machineUnitsChange),
+	}
+	go fw.loop()
+	return fw, nil
+}
+
+func (fw *Firewaller) loop() {
+	defer fw.finish()
+	for {
+		select {
+		case <-fw.tomb.Dying():
+			return
+		case change, ok := <-fw.machinesWatcher.Changes():
+			if !ok {
+				return
+			}
+			for _, removedMachine := range change.Removed {
+				m, ok := fw.machines[removedMachine.Id()]
+				if !ok {
+					panic("trying to remove machine that wasn't added")
+				}
+				delete(fw.machines, removedMachine.Id())
+				if err := m.stop(); err != nil {
+					log.Printf("machine tracker %d returned error when stopping: %v", removedMachine.Id(), err)
+				}
+				log.Debugf("firewaller: remove-machine %v", removedMachine.Id())
+			}
+			for _, addedMachine := range change.Added {
+				m := newMachineTracker(addedMachine, fw)
+				fw.machines[addedMachine.Id()] = m
+				log.Debugf("firewaller: add-machine %v", m.id)
+			}
+		case <-fw.machineUnitsChanges:
+			// TODO(mue) fill with life.
+		}
+	}
+}
+
+// finish cleans up when the firewaller is stopping.
+func (fw *Firewaller) finish() {
+	watcher.Stop(fw.machinesWatcher, &fw.tomb)
+	for _, m := range fw.machines {
+		fw.tomb.Kill(m.stop())
+	}
+	fw.tomb.Done()
+}
+
+// Wait waits for the Firewaller to exit.
+func (fw *Firewaller) Wait() error {
+	return fw.tomb.Wait()
+}
+
+// Stop stops the Firewaller and returns any error encountered while stopping.
+func (fw *Firewaller) Stop() error {
+	fw.tomb.Kill(nil)
+	return fw.tomb.Wait()
+}
+
+// machineUnitsChange contains the changed units for one specific machine.
+type machineUnitsChange struct {
+	machine *machineTracker
+	change  *state.MachineUnitsChange
+}
+
+// machineTracker keeps track of the unit changes of a machine.
+type machineTracker struct {
+	tomb       tomb.Tomb
+	firewaller *Firewaller
+	id         int
+	watcher    *state.MachineUnitsWatcher
+	ports      map[state.Port]*unitTracker
+}
+
+// newMachineTracker creates a new machine tracker keeping track of
+// unit changes of the passed machine.
+func newMachineTracker(mst *state.Machine, fw *Firewaller) *machineTracker {
+	mt := &machineTracker{
+		firewaller: fw,
+		id:         mst.Id(),
+		watcher:    mst.WatchUnits(),
+		ports:      make(map[state.Port]*unitTracker),
+	}
+	go mt.loop()
+	return mt
+}
+
+// loop is the backend watching for machine units changes.
+func (mt *machineTracker) loop() {
+	defer mt.tomb.Done()
+	defer mt.watcher.Stop()
+	for {
+		select {
+		case <-mt.firewaller.tomb.Dying():
+			return
+		case <-mt.tomb.Dying():
+			return
+		case change, ok := <-mt.watcher.Changes():
+			// Send change or nil.
+			select {
+			case mt.firewaller.machineUnitsChanges <- &machineUnitsChange{mt, change}:
+			case <-mt.firewaller.tomb.Dying():
+				return
+			case <-mt.tomb.Dying():
+				return
+			}
+			// The watcher terminated prematurely, so end the loop.
+			if !ok {
+				mt.firewaller.tomb.Kill(watcher.MustErr(mt.watcher))
+				return
+			}
+		}
+	}
+}
+
+// stop stops the machine tracker.
+func (mt *machineTracker) stop() error {
+	mt.tomb.Kill(nil)
+	return mt.tomb.Wait()
+}
+
+type unitTracker struct {
+	service *serviceTracker
+	id      string
+	ports   []state.Port
+}
+
+type serviceTracker struct {
+	exposed bool
+}

=== added file 'worker/firewaller/firewaller_test.go'
--- worker/firewaller/firewaller_test.go	1970-01-01 00:00:00 +0000
+++ worker/firewaller/firewaller_test.go	2012-07-18 16:28:20 +0000
@@ -0,0 +1,140 @@
+package firewaller_test
+
+import (
+	"fmt"
+	. "launchpad.net/gocheck"
+	"launchpad.net/juju-core/environs/dummy"
+	"launchpad.net/juju-core/log"
+	"launchpad.net/juju-core/state/testing"
+	coretesting "launchpad.net/juju-core/testing"
+	"launchpad.net/juju-core/worker/firewaller"
+	"sort"
+	"strings"
+	stdtesting "testing"
+	"time"
+)
+
+func TestPackage(t *stdtesting.T) {
+	coretesting.ZkTestPackage(t)
+}
+
+// hookLogger allows the grabbing of debug log statements
+// to compare them inside the tests.
+type hookLogger struct {
+	event     chan string
+	oldTarget log.Logger
+}
+
+var logHook *hookLogger
+
+const prefix = "JUJU:DEBUG firewaller: "
+
+func (h *hookLogger) Output(calldepth int, s string) error {
+	err := h.oldTarget.Output(calldepth, s)
+	if strings.HasPrefix(s, prefix) {
+		h.event <- s[len(prefix):]
+	}
+	return err
+}
+
+func setUpLogHook() {
+	logHook = &hookLogger{
+		event:     make(chan string, 30),
+		oldTarget: log.Target,
+	}
+	log.Target = logHook
+}
+
+func tearDownLogHook() {
+	log.Target = logHook.oldTarget
+}
+
+// assertEvents asserts that the expected events are received from
+// the firewaller, in no particular order.
+func assertEvents(c *C, expect []string) {
+	var got []string
+	for _ = range expect {
+		select {
+		case e := <-logHook.event:
+			got = append(got, e)
+		case <-time.After(500 * time.Millisecond):
+			c.Fatalf("expected %q; timed out after %q", expect, got)
+		}
+	}
+	select {
+	case e := <-logHook.event:
+		got = append(got, e)
+		c.Fatalf("expected %q; too many events %q ", expect, got)
+	case <-time.After(100 * time.Millisecond):
+	}
+	sort.Strings(expect)
+	sort.Strings(got)
+	c.Assert(got, DeepEquals, expect)
+}
+
+type FirewallerSuite struct {
+	coretesting.LoggingSuite
+	testing.StateSuite
+	op <-chan dummy.Operation
+}
+
+var _ = Suite(&FirewallerSuite{})
+
+func (s *FirewallerSuite) SetUpTest(c *C) {
+	s.LoggingSuite.SetUpTest(c)
+
+	op := make(chan dummy.Operation, 500)
+	dummy.Listen(op)
+	s.op = op
+
+	s.StateSuite.SetUpTest(c)
+}
+
+func (s *FirewallerSuite) TearDownTest(c *C) {
+	dummy.Reset()
+	s.LoggingSuite.TearDownTest(c)
+}
+
+func (s *FirewallerSuite) TestStartStop(c *C) {
+	fw, err := firewaller.NewFirewaller(s.State)
+	c.Assert(err, IsNil)
+	c.Assert(fw.Stop(), IsNil)
+}
+
+func (s *FirewallerSuite) TestAddRemoveMachine(c *C) {
+	fw, err := firewaller.NewFirewaller(s.State)
+	c.Assert(err, IsNil)
+
+	setUpLogHook()
+	defer tearDownLogHook()
+
+	m1, err := s.State.AddMachine()
+	c.Assert(err, IsNil)
+	m2, err := s.State.AddMachine()
+	c.Assert(err, IsNil)
+	m3, err := s.State.AddMachine()
+	c.Assert(err, IsNil)
+
+	assertEvents(c, []string{
+		fmt.Sprint("add-machine ", m1.Id()),
+		fmt.Sprint("add-machine ", m2.Id()),
+		fmt.Sprint("add-machine ", m3.Id()),
+	})
+
+	err = s.State.RemoveMachine(m2.Id())
+	c.Assert(err, IsNil)
+
+	assertEvents(c, []string{
+		fmt.Sprint("remove-machine ", m2.Id()),
+	})
+
+	c.Assert(fw.Stop(), IsNil)
+}
+
+func (s *FirewallerSuite) TestFirewallerStopOnStateClose(c *C) {
+	fw, err := firewaller.NewFirewaller(s.State)
+	c.Assert(err, IsNil)
+	fw.CloseState()
+	c.Check(fw.Wait(), ErrorMatches, ".* zookeeper is closing")
+	c.Assert(fw.Stop(), ErrorMatches, ".* zookeeper is closing")
}
mostly LGTM apart from the watcher stopping below,
and the unsafe testing access.

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go
File worker/firewaller/firewaller.go (right):

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go#newcode42
worker/firewaller/firewaller.go:42: // Get environment first.
defer watcher.Stop(fw.machineUnitsChanges, &fw.tomb)

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go#newcode43
worker/firewaller/firewaller.go:43: fw.environWatcher = fw.st.WatchEnvironConfig()
defer watcher.Stop(fw.environWatcher, &fw.tomb)

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go#newcode54
worker/firewaller/firewaller.go:54: return
this isn't killing the machines watcher

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go#newcode69
worker/firewaller/firewaller.go:69: return
this isn't killing the environs watcher

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller.go#newcode90
worker/firewaller/firewaller.go:90: // TODO(mue) fill with life.
i like the image :-)

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller_test.go
File worker/firewaller/firewaller_test.go (right):

https://codereview.appspot.com/6374069/diff/1/worker/firewaller/firewaller_test.go#newcode100
worker/firewaller/firewaller_test.go:100: allMachines := fw.AllMachines()
i liked this initially, but after a moment's thought, i'm not sure about
it - it accesses fw.machines in a non-thread-safe way and it makes the
test slower than it should be.
i'll have a chat online about this - i have an idea :-)

https://codereview.appspot.com/6374069/