Merge lp:~jameinel/juju-core/ensure-availability-default-3-1311083 into lp:~go-bot/juju-core/trunk
Status: Merged
Approved by: John A Meinel
Approved revision: no longer in the source branch
Merged at revision: 2670
Proposed branch: lp:~jameinel/juju-core/ensure-availability-default-3-1311083
Merge into: lp:~go-bot/juju-core/trunk
Diff against target: 474 lines (+232/-65), 6 files modified:
  cmd/juju/ensureavailability.go (+12/-13)
  cmd/juju/ensureavailability_test.go (+56/-27)
  state/addmachine.go (+13/-6)
  state/apiserver/client/client.go (+17/-5)
  state/apiserver/client/client_test.go (+78/-12)
  state/state_test.go (+56/-2)
To merge this branch: bzr merge lp:~jameinel/juju-core/ensure-availability-default-3-1311083
Related bugs:
Reviewer: Juju Engineering (review requested, pending)
Review via email: mp+216706@code.launchpad.net
Commit message
cmd/juju/
I filed bug #1311083 because we currently have a default of '-1' and we
immediately fail when running the command. I don't think this is very
good UI for a user. I think we can make an opinionated statement that
n=3 is going to be the best for someone who wants to run this command.
And that is what this patch does.
The new default is not strictly '3': we use 3 if you currently have only 1 state server, and otherwise preserve however many state servers you already have (starting new servers as needed to bring you back up to your requested availability).
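The defaulting rule described above can be sketched as a small Go function. This is a minimal standalone illustration of the logic the patch adds to state/addmachine.go; the real code works on a `desiredStateServerCount` local and `currentInfo.VotingMachineIds`, so the helper below is hypothetical:

```go
package main

import "fmt"

// desiredStateServerCount sketches the new defaulting rule: a requested
// count of 0 means "preserve the current number of voting state servers,
// or bootstrap HA with 3 if there is currently at most one".
func desiredStateServerCount(requested, currentVoting int) int {
	if requested != 0 {
		// An explicit request always wins.
		return requested
	}
	if currentVoting <= 1 {
		// Not yet HA: grow to the opinionated default of 3.
		return 3
	}
	// Already HA: keep whatever count was previously requested.
	return currentVoting
}

func main() {
	fmt.Println(desiredStateServerCount(0, 1)) // single server grows to 3
	fmt.Println(desiredStateServerCount(0, 5)) // existing count of 5 preserved
	fmt.Println(desiredStateServerCount(7, 1)) // explicit request wins
}
```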
Description of the change
cmd/juju/
I filed bug #1311083 because we currently have a default of '-1' and we
immediately fail when running the command. I don't think this is very
good UI for a user. I think we can make an opinionated statement that
n=3 is going to be the best for someone who wants to run this command.
And that is what this patch does.
John A Meinel (jameinel) wrote:
Andrew Wilkins (axwalk) wrote:
On 2014/04/22 12:32:29, jameinel wrote:
> Please take a look.
I originally just defaulted it to 1, but rog asked me to take out the
default so that we can, in the future, change the command to maintain
the current availability. That would enable a simple cron job to
periodically run "juju ensure-availability", replacing dead servers and
reinstating available ones.
John A Meinel (jameinel) wrote:
On 2014/04/22 12:42:23, axw wrote:
> On 2014/04/22 12:32:29, jameinel wrote:
> > Please take a look.
> I originally just defaulted it to 1, but rog asked me to take out the
> default so that we can, in the future, change the command to maintain
> the current availability. That would enable a simple cron job to
> periodically run "juju ensure-availability", replacing dead servers
> and reinstating available ones.
ensure-availability 1 can't ever do anything, because if 1 machine is
dead, there is nobody to contact to set up availability. And if you pass
1 to a system that has 3, you get a message that "you cannot reduce the
number", because we would be trying to decrease from 3 to 1.
There is room to say that the default for "ensure-availability" is
actually -1, which means "3 if you are currently running with only 1,
and N if you are currently running with N".
I would be happy to make that change, so that we allow -1 or 0 to be
passed up, and then interpret it according to the above rules.
But *quite honestly* I don't expect anyone to ever run with anything but
1 or 3, so making it overly flexible doesn't actually gain us much. (It
is possible that in a year or more we'll have found ways to scale beyond
3 productively, but I don't think we're anywhere close to that yet.)
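The accepted-values rule that falls out of this discussion (a count must be non-negative, and either zero or odd) can be sketched as follows. This is an illustration only: `validCount` is a hypothetical helper, while the real check lives inline in `EnsureAvailabilityCommand.Init` and `State.EnsureAvailability`:

```go
package main

import "fmt"

// validCount mirrors the updated validation: the requested state server
// count must be non-negative, and either zero (meaning "use the default
// behaviour") or odd (an even replica set cannot break voting ties).
func validCount(n int) bool {
	return n >= 0 && (n%2 == 1 || n == 0)
}

func main() {
	for _, n := range []int{-1, 0, 1, 2, 3} {
		fmt.Println(n, validCount(n))
	}
}
```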
Roger Peppe (rogpeppe) wrote:
On 2014/04/22 14:11:08, jameinel wrote:
> On 2014/04/22 12:42:23, axw wrote:
> > On 2014/04/22 12:32:29, jameinel wrote:
> > > Please take a look.
> >
> > I originally just defaulted it to 1, but rog asked me to take out the
> > default so that we can, in the future, change the command to maintain
> > the current availability. That would enable a simple cron job to
> > periodically run "juju ensure-availability", replacing dead servers
> > and reinstating available ones.
> ensure-availability 1 can't ever do anything. Because if 1 machine is
> dead, there is nobody to contact to set up availability. And if you
> pass 1 to a system that has 3, you get a message about "you cannot
> reduce the number", because we would be trying to decrease from 3 to 1.
That should be fixed eventually - we should allow people to reduce their
state server count.
> There is room to say that the default for "ensure-availability" is
> actually -1, which means "3 if you are currently running with only 1,
> and N if you are currently running with N".
That was the plan.
> I would be happy to make that change, so that we allow -1 or 0 to be
> passed up, and then interpret it according to the above rules.
SGTM
Ian Booth (wallyworld) wrote:
LGTM
https:/
File cmd/juju/
https:/
cmd/juju/
nitpick - add another example showing use of -n, e.g.
  juju ensure-availability -n 5
      Ensure that 5 state servers ....
John A Meinel (jameinel) wrote:
Please take a look.
Andrew Wilkins (axwalk) wrote:
On 2014/04/23 06:55:47, jameinel wrote:
> Please take a look.
LGTM
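One other change reviewed above is in state/apiserver/client/client.go: when the caller does not specify a series, the patch copies it from the first voting state server so new state servers stay in sync with existing ones. A minimal sketch of that rule (the function `pickTemplateSeries` and its signature are hypothetical; the real code looks up the machine via `c.api.state.Machine` and calls `Series()` on it):

```go
package main

import (
	"errors"
	"fmt"
)

// pickTemplateSeries returns the series to use for new state server
// machines: an explicitly requested series wins, otherwise copy the
// series of the first voting state server so the group stays in sync.
func pickTemplateSeries(requested string, votingSeries []string) (string, error) {
	if requested != "" {
		return requested, nil
	}
	if len(votingSeries) == 0 {
		// The patch treats this as an internal error rather than panicking.
		return "", errors.New("internal error, failed to find any voting machines")
	}
	return votingSeries[0], nil
}

func main() {
	s, _ := pickTemplateSeries("", []string{"quantal", "quantal", "trusty"})
	fmt.Println(s) // copies the first voting machine's series
	s, _ = pickTemplateSeries("trusty", []string{"quantal"})
	fmt.Println(s) // explicit request wins
}
```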
Preview Diff
=== modified file 'cmd/juju/ensureavailability.go'
--- cmd/juju/ensureavailability.go 2014-04-01 04:53:43 +0000
+++ cmd/juju/ensureavailability.go 2014-04-23 06:53:26 +0000
@@ -33,18 +33,17 @@
 An odd number of state servers is required.
 
 Examples:
-  juju ensure-availability -n 3
-      Ensure that 3 state servers are available,
-      with newly created state server machines
-      having the default series and constraints.
+  juju ensure-availability
+      Ensure that the system is still in highly available mode. If
+      there is only 1 state server running, this will ensure there
+      are 3 running. If you have previously requested more than 3,
+      then that number will be ensured.
   juju ensure-availability -n 5 --series=trusty
-      Ensure that 5 state servers are available,
-      with newly created state server machines
-      having the "trusty" series.
+      Ensure that 5 state servers are available, with newly created
+      state server machines having the "trusty" series.
   juju ensure-availability -n 7 --constraints mem=8G
-      Ensure that 7 state servers are available,
-      with newly created state server machines
-      having the default series, and at least
+      Ensure that 7 state servers are available, with newly created
+      state server machines having the default series, and at least
       8GB RAM.
 `

@@ -58,7 +57,7 @@
 
 func (c *EnsureAvailabilityCommand) SetFlags(f *gnuflag.FlagSet) {
     c.EnvCommandBase.SetFlags(f)
-    f.IntVar(&c.NumStateServers, "n", -1, "number of state servers to make available")
+    f.IntVar(&c.NumStateServers, "n", 0, "number of state servers to make available")
     f.StringVar(&c.Series, "series", "", "the charm series")
     f.Var(constraints.ConstraintsValue{&c.Constraints}, "constraints", "additional machine constraints")
 }
@@ -68,8 +67,8 @@
     if err != nil {
         return err
     }
-    if c.NumStateServers%2 != 1 || c.NumStateServers <= 0 {
-        return fmt.Errorf("must specify a number of state servers odd and greater than zero")
+    if c.NumStateServers < 0 || (c.NumStateServers%2 != 1 && c.NumStateServers != 0) {
+        return fmt.Errorf("must specify a number of state servers odd and non-negative")
     }
     return cmd.CheckEmpty(args)
 }

=== modified file 'cmd/juju/ensureavailability_test.go'
--- cmd/juju/ensureavailability_test.go 2014-04-10 13:00:21 +0000
+++ cmd/juju/ensureavailability_test.go 2014-04-23 06:53:26 +0000
@@ -12,17 +12,44 @@
     "launchpad.net/juju-core/constraints"
     jujutesting "launchpad.net/juju-core/juju/testing"
     "launchpad.net/juju-core/state"
-    "launchpad.net/juju-core/testing"
+    "launchpad.net/juju-core/state/presence"
+    coretesting "launchpad.net/juju-core/testing"
 )
 
 type EnsureAvailabilitySuite struct {
     jujutesting.RepoSuite
+    machine0Pinger *presence.Pinger
 }
 
 var _ = gc.Suite(&EnsureAvailabilitySuite{})
 
+func (s *EnsureAvailabilitySuite) SetUpTest(c *gc.C) {
+    s.RepoSuite.SetUpTest(c)
+    // Add a state server to the environment, and ensure that it is
+    // considered 'alive' so that calls don't spawn new instances
+    _, err := s.State.AddMachine("precise", state.JobManageEnviron)
+    c.Assert(err, gc.IsNil)
+    m, err := s.BackingState.Machine("0")
+    c.Assert(err, gc.IsNil)
+    s.machine0Pinger, err = m.SetAgentAlive()
+    c.Assert(err, gc.IsNil)
+    s.BackingState.StartSync()
+    err = m.WaitAgentAlive(coretesting.LongWait)
+    c.Assert(err, gc.IsNil)
+}
+
+func (s *EnsureAvailabilitySuite) TearDownTest(c *gc.C) {
+    // We have to Kill the Pinger before TearDownTest, otherwise the State
+    // connection is already closed.
+    if s.machine0Pinger != nil {
+        s.machine0Pinger.Kill()
+        s.machine0Pinger = nil
+    }
+    s.RepoSuite.TearDownTest(c)
+}
+
 func runEnsureAvailability(c *gc.C, args ...string) error {
-    _, err := testing.RunCommand(c, &EnsureAvailabilityCommand{}, args)
+    _, err := coretesting.RunCommand(c, &EnsureAvailabilityCommand{}, args)
     return err
 }
 
@@ -39,17 +66,17 @@
 }
 
 func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityWithSeries(c *gc.C) {
-    err := runEnsureAvailability(c, "--series", "series", "-n", "1")
+    err := runEnsureAvailability(c, "--series", "series", "-n", "3")
     c.Assert(err, gc.IsNil)
-    m, err := s.State.Machine("0")
+    m, err := s.State.Machine("1")
     c.Assert(err, gc.IsNil)
     c.Assert(m.Series(), gc.DeepEquals, "series")
 }
 
 func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityWithConstraints(c *gc.C) {
-    err := runEnsureAvailability(c, "--constraints", "mem=4G", "-n", "1")
+    err := runEnsureAvailability(c, "--constraints", "mem=4G", "-n", "3")
     c.Assert(err, gc.IsNil)
-    m, err := s.State.Machine("0")
+    m, err := s.State.Machine("1")
     c.Assert(err, gc.IsNil)
     mcons, err := m.Constraints()
     c.Assert(err, gc.IsNil)
@@ -62,6 +89,9 @@
         err := runEnsureAvailability(c, "-n", "1")
         c.Assert(err, gc.IsNil)
     }
+    machines, err := s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
+    c.Assert(machines, gc.HasLen, 1)
     m, err := s.State.Machine("0")
     c.Assert(err, gc.IsNil)
     mcons, err := m.Constraints()
@@ -81,22 +111,7 @@
 }
 
 func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityMultiple(c *gc.C) {
-    err := runEnsureAvailability(c, "-n", "1")
-    c.Assert(err, gc.IsNil)
-
-    // make sure machine-0 remains alive for the second call to
-    // EnsureAvailability, or machine-0 will get bumped down to
-    // non-voting.
-    m0, err := s.BackingState.Machine("0")
-    c.Assert(err, gc.IsNil)
-    pinger, err := m0.SetAgentAlive()
-    c.Assert(err, gc.IsNil)
-    defer pinger.Kill()
-    s.BackingState.StartSync()
-    err = m0.WaitAgentAlive(testing.LongWait)
-    c.Assert(err, gc.IsNil)
-
-    err = runEnsureAvailability(c, "-n", "3", "--constraints", "mem=4G")
+    err := runEnsureAvailability(c, "-n", "3", "--constraints", "mem=4G")
     c.Assert(err, gc.IsNil)
 
     machines, err := s.State.AllMachines()
@@ -114,14 +129,28 @@
 }
 
 func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityErrors(c *gc.C) {
-    err := runEnsureAvailability(c)
-    c.Assert(err, gc.ErrorMatches, "must specify a number of state servers odd and greater than zero")
-    for _, n := range []int{-1, 0, 2} {
+    for _, n := range []int{-1, 2} {
         err := runEnsureAvailability(c, "-n", fmt.Sprint(n))
-        c.Assert(err, gc.ErrorMatches, "must specify a number of state servers odd and greater than zero")
+        c.Assert(err, gc.ErrorMatches, "must specify a number of state servers odd and non-negative")
     }
-    err = runEnsureAvailability(c, "-n", "3")
+    err := runEnsureAvailability(c, "-n", "3")
     c.Assert(err, gc.IsNil)
     err = runEnsureAvailability(c, "-n", "1")
     c.Assert(err, gc.ErrorMatches, "cannot reduce state server count")
 }
+
+func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityAllows0(c *gc.C) {
+    err := runEnsureAvailability(c, "-n", "0")
+    c.Assert(err, gc.IsNil)
+    machines, err := s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
+    c.Assert(machines, gc.HasLen, 3)
+}
+
+func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityDefaultsTo3(c *gc.C) {
+    err := runEnsureAvailability(c)
+    c.Assert(err, gc.IsNil)
+    machines, err := s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
+    c.Assert(machines, gc.HasLen, 3)
+}

=== modified file 'state/addmachine.go'
--- state/addmachine.go 2014-04-21 23:10:05 +0000
+++ state/addmachine.go 2014-04-23 06:53:26 +0000
@@ -497,8 +497,8 @@
 // the number of live state servers equal to numStateServers. The given
 // constraints and series will be attached to any new machines.
 func (st *State) EnsureAvailability(numStateServers int, cons constraints.Value, series string) error {
-    if numStateServers%2 != 1 || numStateServers <= 0 {
-        return fmt.Errorf("number of state servers must be odd and greater than zero")
+    if numStateServers < 0 || (numStateServers != 0 && numStateServers%2 != 1) {
+        return fmt.Errorf("number of state servers must be odd and non-negative")
     }
     if numStateServers > replicaset.MaxPeers {
         return fmt.Errorf("state server count is too large (allowed %d)", replicaset.MaxPeers)
@@ -508,7 +508,14 @@
     if err != nil {
         return err
     }
-    if len(currentInfo.VotingMachineIds) > numStateServers {
+    desiredStateServerCount := numStateServers
+    if desiredStateServerCount == 0 {
+        desiredStateServerCount = len(currentInfo.VotingMachineIds)
+        if desiredStateServerCount <= 1 {
+            desiredStateServerCount = 3
+        }
+    }
+    if len(currentInfo.VotingMachineIds) > desiredStateServerCount {
         return fmt.Errorf("cannot reduce state server count")
     }

@@ -522,15 +529,15 @@
             voteCount++
         }
     }
-    if voteCount == numStateServers && len(intent.remove) == 0 {
+    if voteCount == desiredStateServerCount && len(intent.remove) == 0 {
         return nil
     }
     // Promote as many machines as we can to fulfil the shortfall.
-    if n := numStateServers - voteCount; n < len(intent.promote) {
+    if n := desiredStateServerCount - voteCount; n < len(intent.promote) {
         intent.promote = intent.promote[:n]
     }
     voteCount += len(intent.promote)
-    intent.newCount = numStateServers - voteCount
+    intent.newCount = desiredStateServerCount - voteCount
     logger.Infof("%d new machines; promoting %v", intent.newCount, intent.promote)
     ops, err := st.ensureAvailabilityIntentionOps(intent, currentInfo, cons, series)
     if err != nil {

=== modified file 'state/apiserver/client/client.go'
--- state/apiserver/client/client.go 2014-04-17 12:53:23 +0000
+++ state/apiserver/client/client.go 2014-04-23 06:53:26 +0000
@@ -1044,11 +1044,23 @@
 func (c *Client) EnsureAvailability(args params.EnsureAvailability) error {
     series := args.Series
     if series == "" {
-        cfg, err := c.api.state.EnvironConfig()
-        if err != nil {
-            return err
-        }
-        series = config.PreferredSeries(cfg)
+        ssi, err := c.api.state.StateServerInfo()
+        if err != nil {
+            return err
+        }
+        // We should always have at least one voting machine
+        // If we *really* wanted we could just pick whatever series is
+        // in the majority, but really, if we always copy the value of
+        // the first one, then they'll stay in sync.
+        if len(ssi.VotingMachineIds) == 0 {
+            // Better than a panic()?
+            return fmt.Errorf("internal error, failed to find any voting machines")
+        }
+        templateMachine, err := c.api.state.Machine(ssi.VotingMachineIds[0])
+        if err != nil {
+            return err
+        }
+        series = templateMachine.Series()
     }
     return c.api.state.EnsureAvailability(args.NumStateServers, args.Constraints, series)
 }

=== modified file 'state/apiserver/client/client_test.go'
--- state/apiserver/client/client_test.go 2014-04-17 12:53:23 +0000
+++ state/apiserver/client/client_test.go 2014-04-23 06:53:26 +0000
@@ -2143,27 +2143,51 @@
     return pinger
 }
 
+var (
+    emptyCons     = constraints.Value{}
+    defaultSeries = ""
+)
+
 func (s *clientSuite) TestClientEnsureAvailabilitySeries(c *gc.C) {
-    err := s.APIState.Client().EnsureAvailability(1, constraints.Value{}, "")
+    _, err := s.State.AddMachine("quantal", state.JobManageEnviron)
     c.Assert(err, gc.IsNil)
+    // We have to ensure the agents are alive, or EnsureAvailability will
+    // create more to replace them.
     pinger := s.setAgentAlive(c, "0")
     defer pinger.Kill()
-    err = s.APIState.Client().EnsureAvailability(3, constraints.Value{}, "non-default")
-    c.Assert(err, gc.IsNil)
     machines, err := s.State.AllMachines()
     c.Assert(err, gc.IsNil)
+    c.Assert(machines, gc.HasLen, 1)
+    c.Assert(machines[0].Series(), gc.Equals, "quantal")
+    err = s.APIState.Client().EnsureAvailability(3, emptyCons, defaultSeries)
+    c.Assert(err, gc.IsNil)
+    machines, err = s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
     c.Assert(machines, gc.HasLen, 3)
-    c.Assert(machines[0].Series(), gc.Equals, "precise")
-    c.Assert(machines[1].Series(), gc.Equals, "non-default")
-    c.Assert(machines[2].Series(), gc.Equals, "non-default")
+    c.Assert(machines[0].Series(), gc.Equals, "quantal")
+    c.Assert(machines[1].Series(), gc.Equals, "quantal")
+    c.Assert(machines[2].Series(), gc.Equals, "quantal")
+    defer s.setAgentAlive(c, "1").Kill()
+    defer s.setAgentAlive(c, "2").Kill()
+    err = s.APIState.Client().EnsureAvailability(5, emptyCons, "non-default")
+    c.Assert(err, gc.IsNil)
+    machines, err = s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
+    c.Assert(machines, gc.HasLen, 5)
+    c.Assert(machines[0].Series(), gc.Equals, "quantal")
+    c.Assert(machines[1].Series(), gc.Equals, "quantal")
+    c.Assert(machines[2].Series(), gc.Equals, "quantal")
+    c.Assert(machines[3].Series(), gc.Equals, "non-default")
+    c.Assert(machines[4].Series(), gc.Equals, "non-default")
 }
 
 func (s *clientSuite) TestClientEnsureAvailabilityConstraints(c *gc.C) {
-    err := s.APIState.Client().EnsureAvailability(1, constraints.Value{}, "")
+    _, err := s.State.AddMachine("quantal", state.JobManageEnviron)
     c.Assert(err, gc.IsNil)
     pinger := s.setAgentAlive(c, "0")
     defer pinger.Kill()
-    err = s.APIState.Client().EnsureAvailability(3, constraints.MustParse("mem=4G"), "")
+    err = s.APIState.Client().EnsureAvailability(
+        3, constraints.MustParse("mem=4G"), defaultSeries)
     c.Assert(err, gc.IsNil)
     machines, err := s.State.AllMachines()
     c.Assert(err, gc.IsNil)
@@ -2180,11 +2204,53 @@
     }
 }
 
+func (s *clientSuite) TestClientEnsureAvailability0Preserves(c *gc.C) {
+    _, err := s.State.AddMachine("quantal", state.JobManageEnviron)
+    c.Assert(err, gc.IsNil)
+    pinger := s.setAgentAlive(c, "0")
+    defer pinger.Kill()
+    // A value of 0 says either "if I'm not HA, make me HA" or "preserve my
+    // current HA settings".
+    err = s.APIState.Client().EnsureAvailability(0, emptyCons, defaultSeries)
+    c.Assert(err, gc.IsNil)
+    machines, err := s.State.AllMachines()
+    c.Assert(machines, gc.HasLen, 3)
+    defer s.setAgentAlive(c, "1").Kill()
+    // Now, we keep agent 1 alive, but not agent 2, calling
+    // EnsureAvailability(0) again will cause us to start another machine
+    err = s.APIState.Client().EnsureAvailability(0, emptyCons, defaultSeries)
+    c.Assert(err, gc.IsNil)
+    machines, err = s.State.AllMachines()
+    c.Assert(machines, gc.HasLen, 4)
+}
+
+func (s *clientSuite) TestClientEnsureAvailability0Preserves5(c *gc.C) {
+    _, err := s.State.AddMachine("quantal", state.JobManageEnviron)
+    c.Assert(err, gc.IsNil)
+    pinger := s.setAgentAlive(c, "0")
+    defer pinger.Kill()
+    // Start off with 5 servers
+    err = s.APIState.Client().EnsureAvailability(5, emptyCons, defaultSeries)
+    c.Assert(err, gc.IsNil)
+    machines, err := s.State.AllMachines()
+    c.Assert(machines, gc.HasLen, 5)
+    defer s.setAgentAlive(c, "1").Kill()
+    defer s.setAgentAlive(c, "2").Kill()
+    defer s.setAgentAlive(c, "3").Kill()
+    // Keeping all alive but one, will bring up 1 more server to preserve 5
+    err = s.APIState.Client().EnsureAvailability(0, emptyCons, defaultSeries)
+    c.Assert(err, gc.IsNil)
+    machines, err = s.State.AllMachines()
+    c.Assert(machines, gc.HasLen, 6)
+}
+
 func (s *clientSuite) TestClientEnsureAvailabilityErrors(c *gc.C) {
-    var emptyCons constraints.Value
-    defaultSeries := ""
-    err := s.APIState.Client().EnsureAvailability(0, emptyCons, defaultSeries)
-    c.Assert(err, gc.ErrorMatches, "number of state servers must be odd and greater than zero")
+    _, err := s.State.AddMachine("quantal", state.JobManageEnviron)
+    c.Assert(err, gc.IsNil)
+    pinger := s.setAgentAlive(c, "0")
+    defer pinger.Kill()
+    err = s.APIState.Client().EnsureAvailability(-1, emptyCons, defaultSeries)
+    c.Assert(err, gc.ErrorMatches, "number of state servers must be odd and non-negative")
     err = s.APIState.Client().EnsureAvailability(3, emptyCons, defaultSeries)
     c.Assert(err, gc.IsNil)
     err = s.APIState.Client().EnsureAvailability(1, emptyCons, defaultSeries)

=== modified file 'state/state_test.go'
--- state/state_test.go 2014-04-21 23:10:05 +0000
+++ state/state_test.go 2014-04-23 06:53:26 +0000
@@ -2926,9 +2926,9 @@
 }
 
 func (s *StateSuite) TestEnsureAvailabilityFailsWithBadCount(c *gc.C) {
-    for _, n := range []int{-1, 0, 2, 6} {
+    for _, n := range []int{-1, 2, 6} {
         err := s.State.EnsureAvailability(n, constraints.Value{}, "")
-        c.Assert(err, gc.ErrorMatches, "number of state servers must be odd and greater than zero")
+        c.Assert(err, gc.ErrorMatches, "number of state servers must be odd and non-negative")
     }
     err := s.State.EnsureAvailability(replicaset.MaxPeers+2, constraints.Value{}, "")
     c.Assert(err, gc.ErrorMatches, `state server count is too large \(allowed \d+\)`)
@@ -3069,6 +3069,60 @@
     c.Assert(m0.IsManager(), jc.IsFalse)
 }
 
+func (s *StateSuite) TestEnsureAvailabilityMaintainsVoteList(c *gc.C) {
+    err := s.State.EnsureAvailability(5, constraints.Value{}, "quantal")
+    c.Assert(err, gc.IsNil)
+    s.assertStateServerInfo(c,
+        []string{"0", "1", "2", "3", "4"},
+        []string{"0", "1", "2", "3", "4"})
+    // Mark machine-0 as dead, so we'll want to create another one again
+    s.PatchValue(state.StateServerAvailable, func(m *state.Machine) (bool, error) {
+        return m.Id() != "0", nil
+    })
+    err = s.State.EnsureAvailability(0, constraints.Value{}, "quantal")
+    c.Assert(err, gc.IsNil)
+
+    // New state server machine "5" is created; "0" still exists in MachineIds,
+    // but no longer in VotingMachineIds.
+    s.assertStateServerInfo(c,
+        []string{"0", "1", "2", "3", "4", "5"},
+        []string{"1", "2", "3", "4", "5"})
+    m0, err := s.State.Machine("0")
+    c.Assert(err, gc.IsNil)
+    c.Assert(m0.WantsVote(), jc.IsFalse)
+    c.Assert(m0.IsManager(), jc.IsTrue) // job still intact for now
+    m3, err := s.State.Machine("5")
+    c.Assert(err, gc.IsNil)
+    c.Assert(m3.WantsVote(), jc.IsTrue)
+    c.Assert(m3.IsManager(), jc.IsTrue)
+}
+
+func (s *StateSuite) TestEnsureAvailabilityDefaultsTo3(c *gc.C) {
+    err := s.State.EnsureAvailability(0, constraints.Value{}, "quantal")
+    c.Assert(err, gc.IsNil)
+    s.assertStateServerInfo(c, []string{"0", "1", "2"}, []string{"0", "1", "2"})
+    // Mark machine-0 as dead, so we'll want to create it again
+    s.PatchValue(state.StateServerAvailable, func(m *state.Machine) (bool, error) {
+        return m.Id() != "0", nil
+    })
+    err = s.State.EnsureAvailability(0, constraints.Value{}, "quantal")
+    c.Assert(err, gc.IsNil)
+
+    // New state server machine "3" is created; "0" still exists in MachineIds,
+    // but no longer in VotingMachineIds.
+    s.assertStateServerInfo(c,
+        []string{"0", "1", "2", "3"},
+        []string{"1", "2", "3"})
+    m0, err := s.State.Machine("0")
+    c.Assert(err, gc.IsNil)
+    c.Assert(m0.WantsVote(), jc.IsFalse)
+    c.Assert(m0.IsManager(), jc.IsTrue) // job still intact for now
+    m3, err := s.State.Machine("3")
+    c.Assert(err, gc.IsNil)
+    c.Assert(m3.WantsVote(), jc.IsTrue)
+    c.Assert(m3.IsManager(), jc.IsTrue)
+}
+
 func (s *StateSuite) TestEnsureAvailabilityConcurrentSame(c *gc.C) {
     s.PatchValue(state.StateServerAvailable, func(m *state.Machine) (bool, error) {
         return true, nil
Reviewers: mp+216706_code.launchpad.net

Message:
Please take a look.

Description:
ensureavailability: default to 3 servers

cmd/juju/
I filed bug #1311083 because we currently have a default of '-1' and we
immediately fail when running the command. I don't think this is very
good UI for a user. I think we can make an opinionated statement that
n=3 is going to be the best for someone who wants to run this command.
And that is what this patch does.

https://code.launchpad.net/~jameinel/juju-core/ensure-availability-default-3-1311083/+merge/216706

(do not edit description out of merge proposal)

Please review this at https://codereview.appspot.com/90160044/

Affected files (+13, -5 lines):
  A [revision details]
  M cmd/juju/ensureavailability.go
  M cmd/juju/ensureavailability_test.go

Index: [revision details]
=== added file '[revision details]'
--- [revision details] 2012-01-01 00:00:00 +0000
+++ [revision details] 2012-01-01 00:00:00 +0000
@@ -0,0 +1,2 @@
+Old revision: tarmac-20140422051657-1svpzqvmysbn4tqm
+New revision: <email address hidden>

Index: cmd/juju/ensureavailability.go
=== modified file 'cmd/juju/ensureavailability.go'
--- cmd/juju/ensureavailability.go 2014-04-01 04:53:43 +0000
+++ cmd/juju/ensureavailability.go 2014-04-22 12:28:07 +0000
@@ -33,7 +33,7 @@
 An odd number of state servers is required.
 
 Examples:
-  juju ensure-availability -n 3
+  juju ensure-availability
      Ensure that 3 state servers are available,
      with newly created state server machines
      having the default series and constraints.
@@ -58,7 +58,7 @@
 func (c *EnsureAvailabilityCommand) SetFlags(f *gnuflag.FlagSet) {
     c.EnvCommandBase.SetFlags(f)
-    f.IntVar(&c.NumStateServers, "n", -1, "number of state servers to make available")
+    f.IntVar(&c.NumStateServers, "n", 3, "number of state servers to make available")
     f.StringVar(&c.Series, "series", "", "the charm series")
     f.Var(constraints.ConstraintsValue{&c.Constraints}, "constraints", "additional machine constraints")
 }

Index: cmd/juju/ensureavailability_test.go
=== modified file 'cmd/juju/ensureavailability_test.go'
--- cmd/juju/ensureavailability_test.go 2014-04-10 13:00:21 +0000
+++ cmd/juju/ensureavailability_test.go 2014-04-22 12:28:07 +0000
@@ -114,14 +114,20 @@
 }
 
 func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityErrors(c *gc.C) {
-    err := runEnsureAvailability(c)
-    c.Assert(err, gc.ErrorMatches, "must specify a number of state servers odd and greater than zero")
     for _, n := range []int{-1, 0, 2} {
         err := runEnsureAvailability(c, "-n", fmt.Sprint(n))
         c.Assert(err, gc.ErrorMatches, "must specify a number of state servers odd and greater than zero")
     }
-    err = runEnsureAvailability(c, "-n", "3")
+    err := runEnsureAvailability(c, "-n", "3")
     c.Assert(err, gc.IsNil)
     err = runEnsureAvailability(c, "-n", "1")
     c.Assert(err, gc.ErrorMatches, "cannot reduce state server count")
 }
+
+func (s *EnsureAvailabilitySuite) TestEnsureAvailabilityDefaultsTo3(c *gc.C) {
+    err := runEnsureAvailability(c)
+    c.Assert(err, gc.IsNil)
+    machines, err := s.State.AllMachines()
+    c.Assert(err, gc.IsNil)
+    c.Assert(...