Merge lp:~fwereade/juju-core/jujud-integrate-cleaner-resumer into lp:~go-bot/juju-core/trunk
Status: Rejected
Rejected by: William Reade
Proposed branch: lp:~fwereade/juju-core/jujud-integrate-cleaner-resumer
Merge into: lp:~go-bot/juju-core/trunk
Diff against target: 680 lines (+279/-198), 6 files modified:
  cmd/jujud/agent.go (+8/-6)
  cmd/jujud/agent_test.go (+189/-4)
  cmd/jujud/machine.go (+11/-6)
  cmd/jujud/machine_test.go (+67/-178)
  cmd/jujud/unit.go (+3/-3)
  environs/testing/storage.go (+1/-1)
To merge this branch: bzr merge lp:~fwereade/juju-core/jujud-integrate-cleaner-resumer
Related bugs: none
Reviewer: Juju Engineering (status: Pending)
Review via email: mp+170907@code.launchpad.net
Commit message
Description of the change
jujud: JobManageState runs cleaner, resumer
Nothing to see there... but I also got angry and tweaked Agent so I could
just test the damn tasks themselves and not worry about the side effects.
And then I needed to test RunAgentLoop as well to verify that it does the
right thing with the tasks it gets from a mocked Agent.
William Reade (fwereade) wrote:
John A Meinel (jameinel) wrote:
Some questions about the testing infrastructure, but overall
LGTM
File cmd/jujud/agent.go (left):
cmd/jujud/
error
I do really like the rename of RunOnce to StartTasks. RunOnce can be
interpreted as "only ever run this once", which is confusing when you
see it called in a loop.
File cmd/jujud/
cmd/jujud/
I thought all machines that aren't machine/0 were meant to have their
own Nonces. Is this just to have *something*? (As in, it could just as
easily be a random string, as long as you set it and preserve it?)
cmd/jujud/
time.Millisecond
I appreciate not having delays in the test suite.
I wonder about using a more obvious naming convention for package-global
variables that will be mutated in separate files.
Certainly when I read this, it looks like a function-local 'retryDelay'
variable. I realize there is no ':', and I can do the 'grep' to figure
out where 'retryDelay' is defined (somewhere in the cmd/jujud package).
Nothing to do with your proposed changes, just something about "where is
this variable coming from" that I find hard to read with our current
idioms. (In python you at least have to say 'global retryDelay' so you
know to go look for it somewhere else. And even then it would still be
in the same file.)
cmd/jujud/
Not specifically related to your patch, but "type Agent interface" has
no doc strings for me to understand what it is actually trying to do.
And AgentState is defined in 'upgrade.go' while 'Agent' is defined in
agent.go.
Anyway, the point I wanted to make was to think about Mock and what we
are trying to do. The concern is always that a Mock gets out of sync
with the real thing, and your tests just always keep passing because
Mock doesn't migrate.
I'm guessing go's "interface" helps us enough here, in that if Agent
changes its interface mockAgent will be updated to match (at least the
function signatures). Like in this case, you changed from RunOnce to
StartTasks, you do have to update the mock or the compiler will
complain.
In general, I love to see decoupling of the various bits of Juju and
make it easier to test just one layer. Though I generally prefer tested
Doubles (they act like the thing they are doubling) rather than
ultralightweight Mocks (they record what they were called with).
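The two styles being contrasted can be sketched in Go as below; the task interface mirrors the Stop/Wait shape used in cmd/jujud, and all names are illustrative. Note that both fakes must satisfy the interface, which is the compile-time protection against the fakes drifting from the real thing.

```go
package main

import "fmt"

// task mirrors the shape of the cmd/jujud task interface; if its
// methods change, neither fake below compiles.
type task interface {
	Stop() error
	Wait() error
}

// mockTask is the ultralightweight style: it only records what was
// called, and the test inspects the recording afterwards.
type mockTask struct {
	calls []string
}

func (m *mockTask) Stop() error { m.calls = append(m.calls, "Stop"); return nil }
func (m *mockTask) Wait() error { m.calls = append(m.calls, "Wait"); return nil }

// doubleTask is the tested-double style: it behaves like a real task,
// so Stop actually terminates it and Wait returns the configured error.
// Stop must only be called once (closing a closed channel panics).
type doubleTask struct {
	err     error
	stopped chan struct{}
}

func newDoubleTask(err error) *doubleTask {
	return &doubleTask{err: err, stopped: make(chan struct{})}
}

func (d *doubleTask) Stop() error { close(d.stopped); return d.Wait() }
func (d *doubleTask) Wait() error { <-d.stopped; return d.err }

// Compile-time checks that both fakes track the interface.
var (
	_ task = (*mockTask)(nil)
	_ task = (*doubleTask)(nil)
)

func main() {
	m := &mockTask{}
	m.Stop()
	fmt.Println(m.calls)

	d := newDoubleTask(nil)
	fmt.Println(d.Stop())
}
```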
It also seems odd that you are able to mock out running 'heavyweight
tasks', but you still have to have a State object that connects to a
real mongodb running. It feels like if you are going to mock anything,
you would want to remove the State conn...
William Reade (fwereade) wrote:
Conflicts irreparably with API changes to jujud
William Reade (fwereade) wrote:
Thanks for the review; dropping this due to collision.
File cmd/jujud/
cmd/jujud/
On 2013/06/23 12:53:07, jameinel wrote:
> I thought all machines that aren't machine/0 were meant to have their own
> Nonces. Is this just to have *something*? (As in, it could just as easily
> be a random string, as long as you set it and preserve it?)
Ha, yeah, I moved that method without reading it. Not sure why that's
there at all.
cmd/jujud/
time.Millisecond
On 2013/06/23 12:53:07, jameinel wrote:
> I appreciate not having delays in the test suite.
> I wonder about using a more obvious naming convention for
package-global
> variables that will be mutated in separate files.
> Certainly when I read this, it looks like a function-local
'retryDelay'
> variable. I realize there is no ':', and I can do the 'grep' to figure
out where
> 'retryDelay' is defined (somewhere in the cmd/jujud package).
> Nothing to do with your proposed changes, just something about "where
is this
> variable coming from" that I find hard to read with our current
idioms. (In
> python you at least have to say 'global retryDelay' so you know to go
look for
> it somewhere else. And even then it would still be in the same file.)
I think we've got a bit of a general problem with code being pretty
randomly scattered across files in a package... that's not for want of
bikeshedding, though. It's worse here because we're testing in-package
and the usual restrictions don't apply.
cmd/jujud/
On 2013/06/23 12:53:07, jameinel wrote:
> Not specifically related to your patch, but "type Agent interface" has no
> doc strings for me to understand what it is actually trying to do. And
> AgentState is defined in 'upgrade.go' while 'Agent' is defined in agent.go.
I hate spelunking through this package for exactly these reasons :).
> Anyway, the point I wanted to make was to think about Mock and what we are
> trying to do. The concern is always that a Mock gets out of sync with the
> real thing, and your tests just always keep passing because Mock doesn't
> migrate.
> I'm guessing go's "interface" helps us enough here, in that if Agent
> changes its interface mockAgent will be updated to match (at least the
> function signatures). Like in this case, you changed from RunOnce to
> StartTasks, you do have to update the mock or the compiler will complain.
Yeah; doesn't help with semantic changes, but it's handy all the same.
> In general, I love to see decoupling of the various bits of Juju and make
> it easier to test just one layer. Though I generally prefer tested Doubles
> (they act like the thing they are doubling) rather than ultralightweight
> Mocks (they record what they were called with).
No theoretical argument, but I'm not...
Unmerged revisions
- 1318. By William Reade: go fmt :/
- 1317. By William Reade: merge parent
- 1316. By William Reade: machine agent with JobManageState now runs Cleaner, Resumer; also, more unit testing for machine agent
Preview Diff
1 | === modified file 'cmd/jujud/agent.go' |
2 | --- cmd/jujud/agent.go 2013-05-31 08:33:34 +0000 |
3 | +++ cmd/jujud/agent.go 2013-06-21 20:42:27 +0000 |
4 | @@ -126,10 +126,10 @@ |
5 | } |
6 | |
7 | type Agent interface { |
8 | + Tag() string |
9 | Tomb() *tomb.Tomb |
10 | - RunOnce(st *state.State, entity AgentState) error |
11 | Entity(st *state.State) (AgentState, error) |
12 | - Tag() string |
13 | + StartTasks(st *state.State, entity AgentState) []task |
14 | } |
15 | |
16 | // runLoop repeatedly calls runOnce until it returns worker.ErrTerminateAgent |
17 | @@ -187,17 +187,19 @@ |
18 | } |
19 | |
20 | // RunAgentLoop repeatedly connects to the state server |
21 | -// and calls the agent's RunOnce method. |
22 | +// and runs the agent's tasks. |
23 | func RunAgentLoop(c *agent.Conf, a Agent) error { |
24 | - return runLoop(func() error { |
25 | + dying := a.Tomb().Dying() |
26 | + body := func() error { |
27 | st, entity, err := openState(c, a) |
28 | if err != nil { |
29 | return err |
30 | } |
31 | defer st.Close() |
32 | // TODO(rog) connect to API. |
33 | - return a.RunOnce(st, entity) |
34 | - }, a.Tomb().Dying()) |
35 | + return runTasks(dying, a.StartTasks(st, entity)...) |
36 | + } |
37 | + return runLoop(body, dying) |
38 | } |
39 | |
40 | // This mutex ensures that we can have two concurrent workers |
41 | |
42 | === modified file 'cmd/jujud/agent_test.go' |
43 | --- cmd/jujud/agent_test.go 2013-05-31 08:33:34 +0000 |
44 | +++ cmd/jujud/agent_test.go 2013-06-21 20:42:27 +0000 |
45 | @@ -5,9 +5,16 @@ |
46 | |
47 | import ( |
48 | "bytes" |
49 | + stderrors "errors" |
50 | "fmt" |
51 | + "net/http" |
52 | + "os" |
53 | + "path/filepath" |
54 | + "time" |
55 | + |
56 | . "launchpad.net/gocheck" |
57 | "launchpad.net/juju-core/cmd" |
58 | + "launchpad.net/juju-core/constraints" |
59 | "launchpad.net/juju-core/environs/agent" |
60 | "launchpad.net/juju-core/environs/tools" |
61 | "launchpad.net/juju-core/errors" |
62 | @@ -17,10 +24,6 @@ |
63 | "launchpad.net/juju-core/version" |
64 | "launchpad.net/juju-core/worker" |
65 | "launchpad.net/tomb" |
66 | - "net/http" |
67 | - "os" |
68 | - "path/filepath" |
69 | - "time" |
70 | ) |
71 | |
72 | var _ = Suite(&toolSuite{}) |
73 | @@ -258,6 +261,23 @@ |
74 | testing.JujuConnSuite |
75 | } |
76 | |
77 | +// primeMachineAgent adds a new Machine to run the given jobs, and sets up the |
78 | +// machine agent's directory. It returns the new machine, the |
79 | +// agent's configuration and the tools currently running. |
80 | +func (s *agentSuite) primeMachineAgent(c *C, jobs ...state.MachineJob) (*state.Machine, *agent.Conf, *state.Tools) { |
81 | + m, err := s.State.InjectMachine("series", constraints.Value{}, "ardbeg-0", jobs...) |
82 | + c.Assert(err, IsNil) |
83 | + err = m.SetMongoPassword("machine-password") |
84 | + c.Assert(err, IsNil) |
85 | + err = m.SetPassword("machine-api-password") |
86 | + c.Assert(err, IsNil) |
87 | + conf, tools := s.primeAgent(c, state.MachineTag(m.Id()), "machine-password") |
88 | + conf.MachineNonce = state.BootstrapNonce |
89 | + conf.Write() |
90 | + c.Assert(err, IsNil) |
91 | + return m, conf, tools |
92 | +} |
93 | + |
94 | // primeAgent writes the configuration file and tools |
95 | // for an agent with the given entity name. |
96 | // It returns the agent's configuration and the current tools. |
97 | @@ -399,3 +419,168 @@ |
98 | *c = *nc |
99 | return nil |
100 | } |
101 | + |
102 | +type RunAgentLoopSuite struct { |
103 | + agentSuite |
104 | + retryDelay time.Duration |
105 | +} |
106 | + |
107 | +func (s *RunAgentLoopSuite) SetUpTest(c *C) { |
108 | + s.agentSuite.SetUpTest(c) |
109 | + s.retryDelay, retryDelay = retryDelay, time.Millisecond |
110 | +} |
111 | + |
112 | +func (s *RunAgentLoopSuite) TearDownTest(c *C) { |
113 | + retryDelay = s.retryDelay |
114 | + s.agentSuite.TearDownTest(c) |
115 | +} |
116 | + |
117 | +var _ = Suite(&RunAgentLoopSuite{}) |
118 | + |
119 | +// mockAgent allows us to inject our own task list into RunAgentLoop and thus |
120 | +// verify its behaviour without running real, heavyweight, tasks. |
121 | +type mockAgent struct { |
122 | + c *C |
123 | + conf *agent.Conf |
124 | + tomb tomb.Tomb |
125 | + entity AgentState |
126 | + tasks func() []task |
127 | +} |
128 | + |
129 | +func (m *mockAgent) Tag() string { |
130 | + return m.entity.Tag() |
131 | +} |
132 | + |
133 | +func (m *mockAgent) Tomb() *tomb.Tomb { |
134 | + return &m.tomb |
135 | +} |
136 | + |
137 | +func (m *mockAgent) Entity(_ *state.State) (AgentState, error) { |
138 | + return m.entity, nil |
139 | +} |
140 | + |
141 | +func (m *mockAgent) StartTasks(_ *state.State, entity AgentState) []task { |
142 | + m.c.Assert(entity, Equals, m.entity) |
143 | + return m.tasks() |
144 | +} |
145 | + |
146 | +func (s *RunAgentLoopSuite) startLoop(c *C, tasks func() []task) (*tomb.Tomb, <-chan error) { |
147 | + machine, conf, _ := s.primeMachineAgent(c, state.JobHostUnits) |
148 | + agent := &mockAgent{ |
149 | + conf: conf, |
150 | + entity: machine, |
151 | + tasks: tasks, |
152 | + } |
153 | + done := make(chan error) |
154 | + go func() { |
155 | + done <- RunAgentLoop(conf, agent) |
156 | + }() |
157 | + return agent.Tomb(), done |
158 | +} |
159 | + |
160 | +// errorTask returns a new error each time it's run (and then panics). |
161 | +type errorTask struct { |
162 | + c *C |
163 | + waitCount int |
164 | + errs []error |
165 | +} |
166 | + |
167 | +func (t *errorTask) Stop() error { |
168 | + err := t.errs[t.waitCount] |
169 | + t.waitCount++ |
170 | + return err |
171 | +} |
172 | + |
173 | +func (t *errorTask) Wait() error { |
174 | + return t.errs[t.waitCount] |
175 | +} |
176 | + |
177 | +func (t *errorTask) String() string { |
178 | + return fmt.Sprintf("%#v", t) |
179 | +} |
180 | + |
181 | +func (s *RunAgentLoopSuite) TestRetryUntilErrTerminateAgent(c *C) { |
182 | + errTask := &errorTask{c: c, errs: []error{ |
183 | + stderrors.New("abritrary"), |
184 | + stderrors.New("different"), |
185 | + nil, |
186 | + stderrors.New("another"), |
187 | + worker.ErrTerminateAgent, |
188 | + }} |
189 | + _, done := s.startLoop(c, func() []task { |
190 | + return []task{errTask} |
191 | + }) |
192 | + select { |
193 | + case <-time.After(500 * time.Millisecond): |
194 | + c.Fatalf("tasks not stopped") |
195 | + case err := <-done: |
196 | + c.Assert(err, IsNil) |
197 | + } |
198 | +} |
199 | + |
200 | +func (s *RunAgentLoopSuite) TestRetryUntilUpgradedError(c *C) { |
201 | + upgradeErr := &UpgradeReadyError{} |
202 | + errTask := &errorTask{c: c, errs: []error{ |
203 | + stderrors.New("abritrary"), |
204 | + stderrors.New("different"), |
205 | + nil, |
206 | + stderrors.New("another"), |
207 | + upgradeErr, |
208 | + }} |
209 | + _, done := s.startLoop(c, func() []task { |
210 | + return []task{errTask} |
211 | + }) |
212 | + select { |
213 | + case <-time.After(500 * time.Millisecond): |
214 | + c.Fatalf("tasks not stopped") |
215 | + case err := <-done: |
216 | + c.Assert(err, Equals, upgradeErr) |
217 | + } |
218 | +} |
219 | + |
220 | +// waitTask is a task that "runs" until stopped and returns its error. |
221 | +type waitTask struct { |
222 | + tomb tomb.Tomb |
223 | + err error |
224 | +} |
225 | + |
226 | +func (t *waitTask) Stop() error { |
227 | + t.tomb.Kill(t.err) |
228 | + t.tomb.Done() |
229 | + return t.Wait() |
230 | +} |
231 | + |
232 | +func (t *waitTask) Wait() error { |
233 | + return t.tomb.Wait() |
234 | +} |
235 | + |
236 | +func (t *waitTask) String() string { |
237 | + return fmt.Sprintf("%#v", t) |
238 | +} |
239 | + |
240 | +func (s *RunAgentLoopSuite) TestRunUntilStopped(c *C) { |
241 | + tomb, done := s.startLoop(c, func() []task { |
242 | + return []task{&waitTask{}} |
243 | + }) |
244 | + tomb.Kill(stderrors.New("external")) |
245 | + select { |
246 | + case <-time.After(500 * time.Millisecond): |
247 | + c.Fatalf("task not stopped") |
248 | + case err := <-done: |
249 | + c.Assert(err, IsNil) |
250 | + } |
251 | +} |
252 | + |
253 | +func (s *RunAgentLoopSuite) TestRunUntilStoppedError(c *C) { |
254 | + tomb, done := s.startLoop(c, func() []task { |
255 | + return []task{&waitTask{err: stderrors.New("don't stop me now")}} |
256 | + }) |
257 | + tomb.Kill(stderrors.New("external")) |
258 | + select { |
259 | + case <-time.After(500 * time.Millisecond): |
260 | + c.Fatalf("task not stopped") |
261 | + case err := <-done: |
262 | + // It's OK, it'll be logged. |
263 | + c.Assert(err, IsNil) |
264 | + } |
265 | +} |
266 | |
267 | === modified file 'cmd/jujud/machine.go' |
268 | --- cmd/jujud/machine.go 2013-06-18 11:04:41 +0000 |
269 | +++ cmd/jujud/machine.go 2013-06-21 20:42:27 +0000 |
270 | @@ -5,6 +5,9 @@ |
271 | |
272 | import ( |
273 | "fmt" |
274 | + "path/filepath" |
275 | + "time" |
276 | + |
277 | "launchpad.net/gnuflag" |
278 | "launchpad.net/juju-core/charm" |
279 | "launchpad.net/juju-core/cmd" |
280 | @@ -13,12 +16,12 @@ |
281 | "launchpad.net/juju-core/state" |
282 | "launchpad.net/juju-core/state/apiserver" |
283 | "launchpad.net/juju-core/worker" |
284 | + "launchpad.net/juju-core/worker/cleaner" |
285 | "launchpad.net/juju-core/worker/firewaller" |
286 | "launchpad.net/juju-core/worker/machiner" |
287 | "launchpad.net/juju-core/worker/provisioner" |
288 | + "launchpad.net/juju-core/worker/resumer" |
289 | "launchpad.net/tomb" |
290 | - "path/filepath" |
291 | - "time" |
292 | ) |
293 | |
294 | var retryDelay = 3 * time.Second |
295 | @@ -105,7 +108,8 @@ |
296 | return err |
297 | } |
298 | |
299 | -func (a *MachineAgent) RunOnce(st *state.State, e AgentState) error { |
300 | +// StartTasks returns all tasks the machine agent is expected to perform. |
301 | +func (a *MachineAgent) StartTasks(st *state.State, e AgentState) []task { |
302 | m := e.(*state.Machine) |
303 | log.Infof("jobs for machine agent: %v", m.Jobs()) |
304 | dataDir := a.Conf.DataDir |
305 | @@ -123,13 +127,14 @@ |
306 | provisioner.NewProvisioner(st, a.MachineId), |
307 | firewaller.NewFirewaller(st)) |
308 | case state.JobManageState: |
309 | - // Ignore because it's started independently. |
310 | - continue |
311 | + tasks = append(tasks, |
312 | + cleaner.NewCleaner(st), |
313 | + resumer.NewResumer(st)) |
314 | default: |
315 | log.Warningf("ignoring unknown job %q", j) |
316 | } |
317 | } |
318 | - return runTasks(a.tomb.Dying(), tasks...) |
319 | + return tasks |
320 | } |
321 | |
322 | func (a *MachineAgent) Entity(st *state.State) (AgentState, error) { |
323 | |
324 | === modified file 'cmd/jujud/machine_test.go' |
325 | --- cmd/jujud/machine_test.go 2013-06-21 13:33:32 +0000 |
326 | +++ cmd/jujud/machine_test.go 2013-06-21 20:42:27 +0000 |
327 | @@ -5,23 +5,21 @@ |
328 | |
329 | import ( |
330 | "fmt" |
331 | + "path/filepath" |
332 | + "reflect" |
333 | + "sort" |
334 | + "sync" |
335 | + "time" |
336 | + |
337 | . "launchpad.net/gocheck" |
338 | "launchpad.net/juju-core/charm" |
339 | "launchpad.net/juju-core/cmd" |
340 | - "launchpad.net/juju-core/constraints" |
341 | "launchpad.net/juju-core/environs/agent" |
342 | "launchpad.net/juju-core/environs/dummy" |
343 | - envtesting "launchpad.net/juju-core/environs/testing" |
344 | - "launchpad.net/juju-core/errors" |
345 | "launchpad.net/juju-core/state" |
346 | "launchpad.net/juju-core/state/api" |
347 | "launchpad.net/juju-core/state/api/params" |
348 | - "launchpad.net/juju-core/state/watcher" |
349 | "launchpad.net/juju-core/testing" |
350 | - "launchpad.net/juju-core/version" |
351 | - "path/filepath" |
352 | - "reflect" |
353 | - "time" |
354 | ) |
355 | |
356 | type MachineSuite struct { |
357 | @@ -41,23 +39,6 @@ |
358 | s.agentSuite.TearDownSuite(c) |
359 | } |
360 | |
361 | -// primeAgent adds a new Machine to run the given jobs, and sets up the |
362 | -// machine agent's directory. It returns the new machine, the |
363 | -// agent's configuration and the tools currently running. |
364 | -func (s *MachineSuite) primeAgent(c *C, jobs ...state.MachineJob) (*state.Machine, *agent.Conf, *state.Tools) { |
365 | - m, err := s.State.InjectMachine("series", constraints.Value{}, "ardbeg-0", jobs...) |
366 | - c.Assert(err, IsNil) |
367 | - err = m.SetMongoPassword("machine-password") |
368 | - c.Assert(err, IsNil) |
369 | - err = m.SetPassword("machine-api-password") |
370 | - c.Assert(err, IsNil) |
371 | - conf, tools := s.agentSuite.primeAgent(c, state.MachineTag(m.Id()), "machine-password") |
372 | - conf.MachineNonce = state.BootstrapNonce |
373 | - conf.Write() |
374 | - c.Assert(err, IsNil) |
375 | - return m, conf, tools |
376 | -} |
377 | - |
378 | // newAgent returns a new MachineAgent instance |
379 | func (s *MachineSuite) newAgent(c *C, m *state.Machine) *MachineAgent { |
380 | a := &MachineAgent{} |
381 | @@ -92,13 +73,13 @@ |
382 | |
383 | func (s *MachineSuite) TestRunInvalidMachineId(c *C) { |
384 | c.Skip("agents don't yet distinguish between temporary and permanent errors") |
385 | - m, _, _ := s.primeAgent(c, state.JobHostUnits) |
386 | + m, _, _ := s.primeMachineAgent(c, state.JobHostUnits) |
387 | err := s.newAgent(c, m).Run(nil) |
388 | c.Assert(err, ErrorMatches, "some error") |
389 | } |
390 | |
391 | func (s *MachineSuite) TestRunStop(c *C) { |
392 | - m, ac, _ := s.primeAgent(c, state.JobHostUnits) |
393 | + m, ac, _ := s.primeMachineAgent(c, state.JobHostUnits) |
394 | a := s.newAgent(c, m) |
395 | done := make(chan error) |
396 | go func() { |
397 | @@ -111,7 +92,7 @@ |
398 | } |
399 | |
400 | func (s *MachineSuite) TestWithDeadMachine(c *C) { |
401 | - m, _, _ := s.primeAgent(c, state.JobHostUnits, state.JobManageState) |
402 | + m, _, _ := s.primeMachineAgent(c, state.JobHostUnits, state.JobManageState) |
403 | err := m.EnsureDead() |
404 | c.Assert(err, IsNil) |
405 | a := s.newAgent(c, m) |
406 | @@ -126,153 +107,6 @@ |
407 | c.Assert(err, IsNil) |
408 | } |
409 | |
410 | -func (s *MachineSuite) TestDyingMachine(c *C) { |
411 | - m, _, _ := s.primeAgent(c, state.JobHostUnits) |
412 | - a := s.newAgent(c, m) |
413 | - done := make(chan error) |
414 | - go func() { |
415 | - done <- a.Run(nil) |
416 | - }() |
417 | - defer func() { |
418 | - c.Check(a.Stop(), IsNil) |
419 | - }() |
420 | - time.Sleep(1 * time.Second) |
421 | - err := m.Destroy() |
422 | - c.Assert(err, IsNil) |
423 | - select { |
424 | - case err := <-done: |
425 | - c.Assert(err, IsNil) |
426 | - case <-time.After(watcher.Period * 5 / 4): |
427 | - // TODO(rog) Fix this so it doesn't wait for so long. |
428 | - // https://bugs.launchpad.net/juju-core/+bug/1163983 |
429 | - c.Fatalf("timed out waiting for agent to terminate") |
430 | - } |
431 | - err = m.Refresh() |
432 | - c.Assert(err, IsNil) |
433 | - c.Assert(m.Life(), Equals, state.Dead) |
434 | -} |
435 | - |
436 | -func (s *MachineSuite) TestHostUnits(c *C) { |
437 | - m, conf, _ := s.primeAgent(c, state.JobHostUnits) |
438 | - a := s.newAgent(c, m) |
439 | - ctx, reset := patchDeployContext(c, conf.StateInfo, conf.DataDir) |
440 | - defer reset() |
441 | - go func() { c.Check(a.Run(nil), IsNil) }() |
442 | - defer func() { c.Check(a.Stop(), IsNil) }() |
443 | - |
444 | - // check that unassigned units don't trigger any deployments. |
445 | - svc, err := s.State.AddService("wordpress", s.AddTestingCharm(c, "wordpress")) |
446 | - c.Assert(err, IsNil) |
447 | - u0, err := svc.AddUnit() |
448 | - c.Assert(err, IsNil) |
449 | - u1, err := svc.AddUnit() |
450 | - c.Assert(err, IsNil) |
451 | - ctx.waitDeployed(c) |
452 | - |
453 | - // assign u0, check it's deployed. |
454 | - err = u0.AssignToMachine(m) |
455 | - c.Assert(err, IsNil) |
456 | - ctx.waitDeployed(c, u0.Name()) |
457 | - |
458 | - // "start the agent" for u0 to prevent short-circuited remove-on-destroy; |
459 | - // check that it's kept deployed despite being Dying. |
460 | - err = u0.SetStatus(params.StatusStarted, "") |
461 | - c.Assert(err, IsNil) |
462 | - err = u0.Destroy() |
463 | - c.Assert(err, IsNil) |
464 | - ctx.waitDeployed(c, u0.Name()) |
465 | - |
466 | - // add u1 to the machine, check it's deployed. |
467 | - err = u1.AssignToMachine(m) |
468 | - c.Assert(err, IsNil) |
469 | - ctx.waitDeployed(c, u0.Name(), u1.Name()) |
470 | - |
471 | - // make u0 dead; check the deployer recalls the unit and removes it from |
472 | - // state. |
473 | - err = u0.EnsureDead() |
474 | - c.Assert(err, IsNil) |
475 | - ctx.waitDeployed(c, u1.Name()) |
476 | - err = u0.Refresh() |
477 | - c.Assert(errors.IsNotFoundError(err), Equals, true) |
478 | - |
479 | - // short-circuit-remove u1 after it's been deployed; check it's recalled |
480 | - // and removed from state. |
481 | - err = u1.Destroy() |
482 | - c.Assert(err, IsNil) |
483 | - err = u1.Refresh() |
484 | - c.Assert(errors.IsNotFoundError(err), Equals, true) |
485 | - ctx.waitDeployed(c) |
486 | -} |
487 | - |
488 | -func (s *MachineSuite) TestManageEnviron(c *C) { |
489 | - usefulVersion := version.Current |
490 | - usefulVersion.Series = "series" // to match the charm created below |
491 | - envtesting.UploadFakeToolsVersion(c, s.Conn.Environ.Storage(), usefulVersion) |
492 | - m, _, _ := s.primeAgent(c, state.JobManageEnviron) |
493 | - op := make(chan dummy.Operation, 200) |
494 | - dummy.Listen(op) |
495 | - |
496 | - a := s.newAgent(c, m) |
497 | - // Make sure the agent is stopped even if the test fails. |
498 | - defer a.Stop() |
499 | - done := make(chan error) |
500 | - go func() { |
501 | - done <- a.Run(nil) |
502 | - }() |
503 | - |
504 | - // Check that the provisioner and firewaller are alive by doing |
505 | - // a rudimentary check that it responds to state changes. |
506 | - |
507 | - // Add one unit to a service; it should get allocated a machine |
508 | - // and then its ports should be opened. |
509 | - charm := s.AddTestingCharm(c, "dummy") |
510 | - svc, err := s.State.AddService("test-service", charm) |
511 | - c.Assert(err, IsNil) |
512 | - err = svc.SetExposed() |
513 | - c.Assert(err, IsNil) |
514 | - units, err := s.Conn.AddUnits(svc, 1, "") |
515 | - c.Assert(err, IsNil) |
516 | - c.Check(opRecvTimeout(c, s.State, op, dummy.OpStartInstance{}), NotNil) |
517 | - |
518 | - // Wait for the instance id to show up in the state. |
519 | - id1, err := units[0].AssignedMachineId() |
520 | - c.Assert(err, IsNil) |
521 | - m1, err := s.State.Machine(id1) |
522 | - c.Assert(err, IsNil) |
523 | - w := m1.Watch() |
524 | - defer w.Stop() |
525 | - for _ = range w.Changes() { |
526 | - err = m1.Refresh() |
527 | - c.Assert(err, IsNil) |
528 | - if _, ok := m1.InstanceId(); ok { |
529 | - break |
530 | - } |
531 | - } |
532 | - err = units[0].OpenPort("tcp", 999) |
533 | - c.Assert(err, IsNil) |
534 | - |
535 | - c.Check(opRecvTimeout(c, s.State, op, dummy.OpOpenPorts{}), NotNil) |
536 | - |
537 | - err = a.Stop() |
538 | - c.Assert(err, IsNil) |
539 | - |
540 | - select { |
541 | - case err := <-done: |
542 | - c.Assert(err, IsNil) |
543 | - case <-time.After(5 * time.Second): |
544 | - c.Fatalf("timed out waiting for agent to terminate") |
545 | - } |
546 | -} |
547 | - |
548 | -func (s *MachineSuite) TestUpgrade(c *C) { |
549 | - m, conf, currentTools := s.primeAgent(c, state.JobManageState, state.JobManageEnviron, state.JobHostUnits) |
550 | - addAPIInfo(conf, m) |
551 | - err := conf.Write() |
552 | - c.Assert(err, IsNil) |
553 | - a := s.newAgent(c, m) |
554 | - s.testUpgrade(c, a, currentTools) |
555 | -} |
556 | - |
557 | func addAPIInfo(conf *agent.Conf, m *state.Machine) { |
558 | port := testing.FindTCPPort() |
559 | conf.APIInfo = &api.Info{ |
560 | @@ -292,7 +126,7 @@ |
561 | } |
562 | |
563 | func (s *MachineSuite) TestServeAPI(c *C) { |
564 | - stm, conf, _ := s.primeAgent(c, state.JobManageState) |
565 | + stm, conf, _ := s.primeMachineAgent(c, state.JobManageState) |
566 | addAPIInfo(conf, stm) |
567 | err := conf.Write() |
568 | c.Assert(err, IsNil) |
569 | @@ -338,7 +172,7 @@ |
570 | }} |
571 | |
572 | func (s *MachineSuite) TestServeAPIWithBadConf(c *C) { |
573 | - m, conf, _ := s.primeAgent(c, state.JobManageState) |
574 | + m, conf, _ := s.primeMachineAgent(c, state.JobManageState) |
575 | addAPIInfo(conf, m) |
576 | for i, t := range serveAPIWithBadConfTests { |
577 | c.Logf("test %d: %q", i, t.err) |
578 | @@ -375,9 +209,64 @@ |
579 | } |
580 | |
581 | func (s *MachineSuite) TestChangePasswordChanging(c *C) { |
582 | - m, _, _ := s.primeAgent(c, state.JobHostUnits) |
583 | + m, _, _ := s.primeMachineAgent(c, state.JobHostUnits) |
584 | newAgent := func() runner { |
585 | return s.newAgent(c, m) |
586 | } |
587 | s.testAgentPasswordChanging(c, m, newAgent) |
588 | } |
589 | + |
590 | +type jobs []state.MachineJob |
591 | + |
592 | +func (s *MachineSuite) TestStateTasks(c *C) { |
593 | + for i, test := range []struct { |
594 | + info string |
595 | + jobs []state.MachineJob |
596 | + expect []string |
597 | + }{{ |
598 | + jobs: jobs{state.JobHostUnits}, |
599 | + expect: []string{"upgrader", "machiner", "deployer"}, |
600 | + }, { |
601 | + jobs: jobs{state.JobManageEnviron}, |
602 | + expect: []string{"upgrader", "machiner", "provisioning worker", "firewaller"}, |
603 | + }, { |
604 | + jobs: jobs{state.JobManageState}, |
605 | + expect: []string{"upgrader", "machiner", "cleaner", "resumer"}, |
606 | + }, { |
607 | + jobs: jobs{state.JobManageState, state.JobHostUnits}, |
608 | + expect: []string{"upgrader", "machiner", "cleaner", "resumer", "deployer"}, |
609 | + }} { |
610 | + c.Logf("test %d: %v", i, test.jobs) |
611 | + machine, conf, _ := s.primeMachineAgent(c, test.jobs...) |
612 | + expect := make([]string, len(test.expect)) |
613 | + for i, kind := range test.expect { |
614 | + switch kind { |
615 | + case "machiner": |
616 | + kind = fmt.Sprintf("machiner %s", machine.Id()) |
617 | + case "deployer": |
618 | + kind = fmt.Sprintf("deployer for machine-%s", machine.Id()) |
619 | + } |
620 | + expect[i] = kind |
621 | + } |
622 | + sort.Strings(expect) |
623 | + |
624 | + // This is horrible. But it makes this functionality somewhat testable... |
625 | + agent := s.newAgent(c, machine) |
626 | + agent.Conf.Conf = conf |
627 | + |
628 | + var wg = sync.WaitGroup{} |
629 | + var actual []string |
630 | + for _, t := range agent.StartTasks(s.State, machine) { |
631 | + actual = append(actual, t.String()) |
632 | + wg.Add(1) |
633 | + go func(t task) { |
634 | + c.Check(t.Stop(), IsNil) |
635 | + wg.Done() |
636 | + }(t) |
637 | + } |
638 | + wg.Wait() |
639 | + sort.Strings(actual) |
640 | + c.Logf("%#v", actual) |
641 | + c.Check(actual, DeepEquals, expect) |
642 | + } |
643 | +} |
644 | |
645 | === modified file 'cmd/jujud/unit.go' |
646 | --- cmd/jujud/unit.go 2013-05-28 14:22:24 +0000 |
647 | +++ cmd/jujud/unit.go 2013-06-21 20:42:27 +0000 |
648 | @@ -66,8 +66,8 @@ |
649 | return err |
650 | } |
651 | |
652 | -// RunOnce runs a unit agent once. |
653 | -func (a *UnitAgent) RunOnce(st *state.State, e AgentState) error { |
654 | +// StartTasks returns all tasks the unit agent is expected to perform. |
655 | +func (a *UnitAgent) StartTasks(st *state.State, e AgentState) []task { |
656 | unit := e.(*state.Unit) |
657 | dataDir := a.Conf.DataDir |
658 | tasks := []task{ |
659 | @@ -78,7 +78,7 @@ |
660 | tasks = append(tasks, |
661 | newDeployer(st, unit.WatchSubordinateUnits(), dataDir)) |
662 | } |
663 | - return runTasks(a.tomb.Dying(), tasks...) |
664 | + return tasks |
665 | } |
666 | |
667 | func (a *UnitAgent) Entity(st *state.State) (AgentState, error) { |
668 | |
669 | === modified file 'environs/testing/storage.go' |
670 | --- environs/testing/storage.go 2013-06-20 13:27:50 +0000 |
671 | +++ environs/testing/storage.go 2013-06-21 20:42:27 +0000 |
672 | @@ -46,7 +46,7 @@ |
673 | listener net.Listener |
674 | } |
675 | |
676 | -// NewEC2HTTPTestStorage creates a storage server for tests |
677 | +// NewEC2HTTPTestStorage creates a storage server for tests |
678 | // with the HTTPStorageReader. |
679 | func NewEC2HTTPTestStorage(ip string) (*EC2HTTPTestStorage, error) { |
680 | var err error |
Reviewers: mp+170907@code.launchpad.net
Message:
Please take a look.
https://code.launchpad.net/~fwereade/juju-core/jujud-integrate-cleaner-resumer/+merge/170907
Please review this at https://codereview.appspot.com/10439046/
Affected files:
A [revision details]
M cmd/jujud/agent.go
M cmd/jujud/agent_test.go
M cmd/jujud/machine.go
M cmd/jujud/machine_test.go
M cmd/jujud/unit.go
M environs/testing/storage.go