Merge lp:~rogpeppe/juju-core/mfoord-wrapsingletonworkers into lp:~go-bot/juju-core/trunk
- mfoord-wrapsingletonworkers
- Merge into trunk
Status: | Merged |
---|---|
Approved by: | Roger Peppe |
Approved revision: | no longer in the source branch. |
Merged at revision: | 2608 |
Proposed branch: | lp:~rogpeppe/juju-core/mfoord-wrapsingletonworkers |
Merge into: | lp:~go-bot/juju-core/trunk |
Diff against target: |
346 lines (+144/-17) 5 files modified
agent/mongo/mongo.go (+8/-0)
cmd/jujud/machine.go (+66/-15)
cmd/jujud/machine_test.go (+49/-0)
replicaset/replicaset.go (+8/-2)
replicaset/replicaset_test.go (+13/-0)
To merge this branch: | bzr merge lp:~rogpeppe/juju-core/mfoord-wrapsingletonworkers |
Related bugs: |
Reviewer | Review Type | Date Requested | Status |
---|---|---|---|
Juju Engineering | Pending | ||
Review via email: mp+215035@code.launchpad.net |
Commit message
cmd/jujud: wrap singular workers
We start the various workers that require only a single
instance to be running via the runner returned by singular.New.
This is awkward to test - we just do a basic sanity check
that the right workers are started through the singular
runner.
Description of the change
cmd/jujud: wrap singular workers
We start the various workers that require only a single
instance to be running via the runner returned by singular.New.
This is awkward to test - we just do a basic sanity check
that the right workers are started through the singular
runner.
Roger Peppe (rogpeppe) wrote : | # |
Please take a look.
Nate Finch (natefinch) wrote : | # |
LGTM with just a couple minor suggestions.
File cmd/jujud/
cmd/jujud/
could this code get moved to the loop over entity.jobs below? It
doesn't seem to need to be here, and it puts the declaration far from
the use of the variable.
File cmd/jujud/
cmd/jujud/
jc.DeepEquals, []string{
This checks order of the slice, too, right? Does the order matter? If
not, we may want to use jc.SameContents here, so we don't fail the test
if the order changes.
Roger Peppe (rogpeppe) wrote : | # |
File cmd/jujud/
cmd/jujud/
On 2014/04/10 11:14:38, nate.finch wrote:
> could this code get moved to the loop over entity.jobs below? It
> doesn't seem to need to be here, and it puts the declaration far from
> the use of the variable.
i don't think we can do this, because newSingularRunner can fail, and if that
happens we don't want to leave the other jobs running.
https:/
File cmd/jujud/
https:/
cmd/jujud/
jc.DeepEquals, []string{
On 2014/04/10 11:14:38, nate.finch wrote:
> This checks order of the slice, too, right? Does the order matter? If
> not, we may want to use jc.SameContents here, so we don't fail the test
> if the order changes.
the slice is sorted (see the implementation of the started method)
Go Bot (go-bot) wrote : | # |
The attempt to merge lp:~rogpeppe/juju-core/mfoord-wrapsingletonworkers into lp:juju-core failed. Below is the output from the failed tests.
[57 "ok"/"?" go test package results elided; the package paths are truncated to "launchpad." in the captured output]
Go Bot (go-bot) wrote : | # |
The attempt to merge lp:~rogpeppe/juju-core/mfoord-wrapsingletonworkers into lp:juju-core failed. Below is the output from the failed tests.
[57 "ok"/"?" go test package results elided; the package paths are truncated to "launchpad." in the captured output]
Preview Diff
=== modified file 'agent/mongo/mongo.go'
--- agent/mongo/mongo.go	2014-04-08 20:10:22 +0000
+++ agent/mongo/mongo.go	2014-04-10 14:24:10 +0000
@@ -51,6 +51,14 @@
 	addrs := obj.Addresses()
 
 	masterHostPort, err := replicaset.MasterHostPort(session)
+
+	// If the replica set has not been configured, then we
+	// can have only one master and the caller must
+	// be that master.
+	if err == replicaset.ErrMasterNotConfigured {
+		return true, nil
+	}
+
 	if err != nil {
 		return false, err
 	}
 
=== modified file 'cmd/jujud/machine.go'
--- cmd/jujud/machine.go	2014-04-08 20:44:59 +0000
+++ cmd/jujud/machine.go	2014-04-10 14:24:10 +0000
@@ -11,6 +11,7 @@
 	"time"
 
 	"github.com/juju/loggo"
+	"labix.org/v2/mgo"
 	"launchpad.net/gnuflag"
 	"launchpad.net/tomb"
 
@@ -50,6 +51,7 @@
 	"launchpad.net/juju-core/worker/provisioner"
 	"launchpad.net/juju-core/worker/resumer"
 	"launchpad.net/juju-core/worker/rsyslog"
+	"launchpad.net/juju-core/worker/singular"
 	"launchpad.net/juju-core/worker/terminationworker"
 	"launchpad.net/juju-core/worker/upgrader"
 )
@@ -64,11 +66,15 @@
 type eitherState interface{}
 
 var (
 	retryDelay      = 3 * time.Second
 	jujuRun         = "/usr/local/bin/juju-run"
 	useMultipleCPUs = utils.UseMultipleCPUs
+
+	// The following are defined as variables to
+	// allow the tests to intercept calls to the functions.
 	ensureMongoServer        = mongo.EnsureMongoServer
 	maybeInitiateMongoServer = mongo.MaybeInitiateMongoServer
+	newSingularRunner        = singular.New
 
 	// reportOpenedAPI is exposed for tests to know when
 	// the State has been successfully opened.
@@ -239,15 +245,22 @@
 	}
 
 	rsyslogMode := rsyslog.RsyslogModeForwarding
+	runner := newRunner(connectionIsFatal(st), moreImportant)
+	var singularRunner worker.Runner
 	for _, job := range entity.Jobs() {
 		if job == params.JobManageEnviron {
 			rsyslogMode = rsyslog.RsyslogModeAccumulate
+			conn := singularAPIConn{st, st.Agent()}
+			singularRunner, err = newSingularRunner(runner, conn)
+			if err != nil {
+				return nil, fmt.Errorf("cannot make singular API Runner: %v", err)
+			}
 			break
 		}
 	}
-	runner := newRunner(connectionIsFatal(st), moreImportant)
 
-	// Run the upgrader and the upgrade-steps worker without waiting for the upgrade steps to complete.
+	// Run the upgrader and the upgrade-steps worker without waiting for
+	// the upgrade steps to complete.
 	runner.StartWorker("upgrader", func() (worker.Worker, error) {
 		return upgrader.NewUpgrader(st.Upgrader(), agentConfig), nil
 	})
@@ -255,7 +268,8 @@
 		return a.upgradeWorker(st, entity.Jobs(), agentConfig), nil
 	})
 
-	// All other workers must wait for the upgrade steps to complete before starting.
+	// All other workers must wait for the upgrade steps to complete
+	// before starting.
 	a.startWorkerAfterUpgrade(runner, "machiner", func() (worker.Worker, error) {
 		return machiner.NewMachiner(st.Machiner(), agentConfig), nil
 	})
@@ -272,7 +286,8 @@
 		return newRsyslogConfigWorker(st.Rsyslog(), agentConfig, rsyslogMode)
 	})
 
-	// If not a local provider bootstrap machine, start the worker to manage SSH keys.
+	// If not a local provider bootstrap machine, start the worker to
+	// manage SSH keys.
 	providerType := agentConfig.Value(agent.ProviderType)
 	if providerType != provider.Local || a.MachineId != bootstrapMachineId {
 		a.startWorkerAfterUpgrade(runner, "authenticationworker", func() (worker.Worker, error) {
@@ -293,16 +308,17 @@
 			return deployer.NewDeployer(apiDeployer, context), nil
 		})
 	case params.JobManageEnviron:
-		a.startWorkerAfterUpgrade(runner, "environ-provisioner", func() (worker.Worker, error) {
+		a.startWorkerAfterUpgrade(singularRunner, "environ-provisioner", func() (worker.Worker, error) {
 			return provisioner.NewEnvironProvisioner(st.Provisioner(), agentConfig), nil
 		})
 		// TODO(axw) 2013-09-24 bug #1229506
-		// Make another job to enable the firewaller. Not all environments
-		// are capable of managing ports centrally.
-		a.startWorkerAfterUpgrade(runner, "firewaller", func() (worker.Worker, error) {
+		// Make another job to enable the firewaller. Not all
+		// environments are capable of managing ports
+		// centrally.
+		a.startWorkerAfterUpgrade(singularRunner, "firewaller", func() (worker.Worker, error) {
 			return firewaller.NewFirewaller(st.Firewaller())
 		})
-		a.startWorkerAfterUpgrade(runner, "charm-revision-updater", func() (worker.Worker, error) {
+		a.startWorkerAfterUpgrade(singularRunner, "charm-revision-updater", func() (worker.Worker, error) {
 			return charmrevisionworker.NewRevisionUpdateWorker(st.CharmRevisionUpdater()), nil
 		})
 	case params.JobManageStateDeprecated:
@@ -399,7 +415,12 @@
 	}
 	reportOpenedState(st)
 
+	singularStateConn := singularStateConn{st.MongoSession(), m}
 	runner := newRunner(connectionIsFatal(st), moreImportant)
+	singularRunner, err := newSingularRunner(runner, singularStateConn)
+	if err != nil {
+		return nil, fmt.Errorf("cannot make singular State Runner: %v", err)
+	}
 
 	// Take advantage of special knowledge here in that we will only ever want
 	// the storage provider on one machine, and that is the "bootstrap" node.
@@ -444,16 +465,16 @@
 		return apiserver.NewServer(
 			st, fmt.Sprintf(":%d", port), cert, key, dataDir, logDir)
 	})
-	a.startWorkerAfterUpgrade(runner, "cleaner", func() (worker.Worker, error) {
+	a.startWorkerAfterUpgrade(singularRunner, "cleaner", func() (worker.Worker, error) {
 		return cleaner.NewCleaner(st), nil
 	})
-	a.startWorkerAfterUpgrade(runner, "resumer", func() (worker.Worker, error) {
+	a.startWorkerAfterUpgrade(singularRunner, "resumer", func() (worker.Worker, error) {
 		// The action of resumer is so subtle that it is not tested,
 		// because we can't figure out how to do so without brutalising
 		// the transaction log.
 		return resumer.NewResumer(st), nil
 	})
-	a.startWorkerAfterUpgrade(runner, "minunitsworker", func() (worker.Worker, error) {
+	a.startWorkerAfterUpgrade(singularRunner, "minunitsworker", func() (worker.Worker, error) {
 		return minunitsworker.NewMinUnitsWorker(st), nil
 	})
 	case state.JobManageStateDeprecated:
@@ -670,3 +691,33 @@
 	}
 	return fmt.Errorf("uninstall failed: %v", errors)
 }
+
+// singularAPIConn implements singular.Conn on
+// top of an API connection.
+type singularAPIConn struct {
+	apiState   *api.State
+	agentState *apiagent.State
+}
+
+func (c singularAPIConn) IsMaster() (bool, error) {
+	return c.agentState.IsMaster()
+}
+
+func (c singularAPIConn) Ping() error {
+	return c.apiState.Ping()
+}
+
+// singularStateConn implements singular.Conn on
+// top of a State connection.
+type singularStateConn struct {
+	session *mgo.Session
+	machine *state.Machine
+}
+
+func (c singularStateConn) IsMaster() (bool, error) {
+	return mongo.IsMaster(c.session, c.machine)
+}
+
+func (c singularStateConn) Ping() error {
+	return c.session.Ping()
+}
 
=== modified file 'cmd/jujud/machine_test.go'
--- cmd/jujud/machine_test.go	2014-04-08 20:42:04 +0000
+++ cmd/jujud/machine_test.go	2014-04-10 14:24:10 +0000
@@ -9,6 +9,7 @@
 	"path/filepath"
 	"reflect"
 	"strings"
+	"sync"
 	"time"
 
 	"github.com/juju/testing"
@@ -39,6 +40,7 @@
 	"launchpad.net/juju-core/tools"
 	"launchpad.net/juju-core/upstart"
 	"launchpad.net/juju-core/utils"
+	"launchpad.net/juju-core/utils/set"
 	"launchpad.net/juju-core/utils/ssh"
 	sshtesting "launchpad.net/juju-core/utils/ssh/testing"
 	"launchpad.net/juju-core/version"
@@ -48,11 +50,13 @@
 	"launchpad.net/juju-core/worker/instancepoller"
 	"launchpad.net/juju-core/worker/machineenvironmentworker"
 	"launchpad.net/juju-core/worker/rsyslog"
+	"launchpad.net/juju-core/worker/singular"
 	"launchpad.net/juju-core/worker/upgrader"
 )
 
 type commonMachineSuite struct {
 	agentSuite
+	singularRecord *singularRunnerRecord
 	lxctesting.TestSuite
 }
 
@@ -85,6 +89,9 @@
 	fakeCmd(filepath.Join(testpath, "stop"))
 
 	s.PatchValue(&upstart.InitDir, c.MkDir())
+
+	s.singularRecord = &singularRunnerRecord{}
+	testing.PatchValue(&newSingularRunner, s.singularRecord.newSingularRunner)
 }
 
 func fakeCmd(path string) {
@@ -373,6 +380,15 @@
 	case <-time.After(5 * time.Second):
 		c.Fatalf("timed out waiting for agent to terminate")
 	}
+
+	c.Assert(s.singularRecord.started(), jc.DeepEquals, []string{
+		"charm-revision-updater",
+		"cleaner",
+		"environ-provisioner",
+		"firewaller",
+		"minunitsworker",
+		"resumer",
+	})
 }
 
 func (s *MachineSuite) TestManageEnvironRunsInstancePoller(c *gc.C) {
@@ -948,3 +964,36 @@
 	}
 	c.Assert(success, gc.Equals, true)
 }
+
+type singularRunnerRecord struct {
+	mu             sync.Mutex
+	startedWorkers set.Strings
+}
+
+func (r *singularRunnerRecord) newSingularRunner(runner worker.Runner, conn singular.Conn) (worker.Runner, error) {
+	sr, err := singular.New(runner, conn)
+	if err != nil {
+		return nil, err
+	}
+	return &fakeSingularRunner{
+		Runner: sr,
+		record: r,
+	}, nil
+}
+
+// started returns the names of all singular-started workers.
+func (r *singularRunnerRecord) started() []string {
+	return r.startedWorkers.SortedValues()
+}
+
+type fakeSingularRunner struct {
+	worker.Runner
+	record *singularRunnerRecord
+}
+
+func (r *fakeSingularRunner) StartWorker(name string, start func() (worker.Worker, error)) error {
+	r.record.mu.Lock()
+	defer r.record.mu.Unlock()
+	r.record.startedWorkers.Add(name)
+	return r.Runner.StartWorker(name, start)
+}
 
=== modified file 'replicaset/replicaset.go'
--- replicaset/replicaset.go	2014-04-09 16:45:35 +0000
+++ replicaset/replicaset.go	2014-04-10 14:24:10 +0000
@@ -241,13 +241,19 @@
 	return results, nil
 }
 
-// MasterHostPort returns the "address:port" string for the
-// primary mongo server in the replicaset.
+var ErrMasterNotConfigured = fmt.Errorf("mongo master not configured")
+
+// MasterHostPort returns the "address:port" string for the primary
+// mongo server in the replicaset. It returns ErrMasterNotConfigured if
+// the replica set has not yet been initiated.
 func MasterHostPort(session *mgo.Session) (string, error) {
 	results, err := IsMaster(session)
 	if err != nil {
 		return "", err
 	}
+	if results.PrimaryAddress == "" {
+		return "", ErrMasterNotConfigured
+	}
 	return results.PrimaryAddress, nil
 }
 
=== modified file 'replicaset/replicaset_test.go'
--- replicaset/replicaset_test.go	2014-04-09 16:38:25 +0000
+++ replicaset/replicaset_test.go	2014-04-10 14:24:10 +0000
@@ -274,6 +274,8 @@
 			break
 		}
 		c.Logf("attempting Set got error: %v", err)
+		c.Logf("current session mode: %v", session.Mode())
+		session.Refresh()
 	}
 	c.Logf("Set() %d attempts in %s", attemptCount, time.Since(start))
 	c.Assert(err, gc.IsNil)
@@ -354,6 +356,17 @@
 	c.Assert(result, gc.Equals, expected)
 }
 
+func (s *MongoSuite) TestMasterHostPortOnUnconfiguredReplicaSet(c *gc.C) {
+	inst := &coretesting.MgoInstance{}
+	err := inst.Start(true)
+	c.Assert(err, gc.IsNil)
+	defer inst.Destroy()
+	session := inst.MustDial()
+	hp, err := MasterHostPort(session)
+	c.Assert(err, gc.Equals, ErrMasterNotConfigured)
+	c.Assert(hp, gc.Equals, "")
+}
+
 func (s *MongoSuite) TestCurrentStatus(c *gc.C) {
 	session := root.MustDial()
 	defer session.Close()
Reviewers: mp+215035@code.launchpad.net,
Message:
Please take a look.
Description:
cmd/jujud: wrap singular workers
We start the various workers that require only a single
instance to be running via the runner returned by singular.New.
This is awkward to test - we just do a basic sanity check
that the right workers are started through the singular
runner.
https://code.launchpad.net/~rogpeppe/juju-core/mfoord-wrapsingletonworkers/+merge/215035
(do not edit description out of merge proposal)
Please review this at https://codereview.appspot.com/86200043/
Affected files (+113, -15 lines):
  A [revision details]
  M cmd/jujud/machine.go
  M cmd/jujud/machine_test.go