Merge lp:~doanac/lava-lab/lava-scripts into lp:lava-lab
Status: Needs review
Proposed branch: lp:~doanac/lava-lab/lava-scripts
Merge into: lp:lava-lab
Diff against target: 380 lines (+341/-0), 7 files modified
  _grains/lava.py (+6/-0)
  scripts/lava-add-worker (+22/-0)
  scripts/lava-start (+20/-0)
  scripts/lava-status (+19/-0)
  scripts/lava-stop (+22/-0)
  scripts/lava-upgrade (+34/-0)
  scripts/lava_salt.py (+218/-0)
To merge this branch: bzr merge lp:~doanac/lava-lab/lava-scripts
Related bugs: none
Reviewer: Linaro Validation Team (status: Pending)
Review via email: mp+146547@code.launchpad.net
Commit message
Description of the change
start work on some helper functions that should allow us to manage lava deployments using salt
Andy Doan (doanac) wrote:
On 02/05, Antonio Terceiro wrote:
> One general comment: you don't need to run l-d-t as root. I did the following
> on the vagrant-bootstrap script inside the l-d-t repo:
>
> if ! test -d /srv/lava/instances/development; then
>   cd $basedir
>   su vagrant -c './lava-deployment-tool setup -nd'
>   su vagrant -c './lava-deployment-tool install -nd development'
> fi
You must be setting up your "vagrant" user to allow password-less sudo?
We don't do that for our instance-manager and it seems a little scary
(to me) to allow that.
> so you could do something like that, and you won't need to chown/chmod away, or
> even add that check on l-d-t to allow running as root.
Antonio Terceiro (terceiro) wrote:
On Tue, Feb 05, 2013 at 06:46:22PM -0000, Andy Doan wrote:
> On 02/05, Antonio Terceiro wrote:
> > One general comment: you don't need to run l-d-t as root. I did the following
> > on the vagrant-bootstrap script inside the l-d-t repo:
> >
> > if ! test -d /srv/lava/instances/development; then
> >   cd $basedir
> >   su vagrant -c './lava-deployment-tool setup -nd'
> >   su vagrant -c './lava-deployment-tool install -nd development'
> > fi
>
> You must be setting up your "vagrant" user to allow password-less sudo?
> We don't do that for our instance-manager and it seems a little scary
> (to me) to allow that.
in that case we could set up password-less sudo for instance-manager
exclusively for running lava-deployment-tool
--
Antonio Terceiro
Software Engineer - Linaro
http://www.linaro.org
Michael Hudson-Doyle (mwhudson) wrote : | # |
Andy Doan <email address hidden> writes:
> Andy Doan has proposed merging lp:~doanac/lava-lab/lava-scripts into lp:lava-lab.
>
> Requested reviews:
> Linaro Validation Team (linaro-validation)
>
> For more details, see:
> https:/
>
> start work on some helper functions that should allow us to manage lava deployments using salt
> --
> https:/
> You are subscribed to branch lp:lava-lab.
> === modified file '_grains/lava.py'
> --- _grains/lava.py 2013-01-10 16:37:55 +0000
> +++ _grains/lava.py 2013-02-05 04:19:20 +0000
> @@ -9,4 +9,10 @@
> p = os.path.join(instdir, f)
> if os.path.isdir(p):
> insts.append(p)
> + # This seems to be a bug in salt. Even after you reload/resync grains
> + # the module still uses what it read at the time the minion started up.
> + # this makes this dynamic so we don't have to restart salt just to see
> + # a new instance
> + if 'lava_instances' in __grains__:
> + __grains__['lava_instances'] = insts
That's because grains are supposed to be invariant. From the Salt
documentation on grains:
It is important to remember that grains are bits of information
loaded when the salt minion starts, so this information is
static. This means that the information in grains is unchanging,
therefore the nature of the data is static. So grains information
are things like the running kernel, or the operating system.
I think grains-
sure. Might be worth asking #saltstack IRC...
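[Editor's note: not part of the branch. A minimal, self-contained sketch in plain Python (not Salt's actual loader) of the caching behavior under discussion: grain modules run once at minion start and the result is cached, which is why the patch mutates __grains__ in place to make later reads see new instances. The names FakeMinion and list_instances are hypothetical stand-ins.]

```python
def list_instances(instdir, listdir):
    """Stand-in for the _grains/lava.py grain module: snapshot of instances."""
    return {'lava_instances': sorted(listdir(instdir))}


class FakeMinion:
    """Caches grains once at 'startup', the way the salt-minion does."""
    def __init__(self, instdir, listdir):
        self.instdir = instdir
        self.listdir = listdir
        self.grains = list_instances(instdir, listdir)  # loaded only once

    def grain(self, key):
        return self.grains[key]  # always served from the startup cache

    def resync(self):
        # the workaround: the grain module overwrites the cached value itself
        self.grains.update(list_instances(self.instdir, self.listdir))


insts = ['/srv/lava/instances/production']
minion = FakeMinion('/srv/lava/instances', lambda d: list(insts))

insts.append('/srv/lava/instances/staging')  # a new instance appears on disk
stale = minion.grain('lava_instances')       # still the startup snapshot
minion.resync()                              # re-run the grain module
fresh = minion.grain('lava_instances')       # now reflects both instances
```

Without the in-place update in resync(), the cached dict would keep serving the startup snapshot until the minion restarted, which matches the behavior the patch comment describes.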
Andy Doan (doanac) wrote:
On 02/05, Antonio Terceiro wrote:
> in that case we could setup password-less sudo for instance-manager
> exclusively for running the lava-deployment
l-d-t runs too many sudo commands to make a sane sudoers entry. It's
really an all-or-nothing approach. So I'd say we just allow this hack in
l-d-t for now.
Unmerged revisions

69. By Andy Doan

    minor bug from last commit

    make sure we return the most interesting part of this function

68. By Andy Doan

    add support for installing a remote worker instance

    this makes adding a new worker node as simple as possible

67. By Andy Doan

    add some lava scripts to help manage lab

    lava-status - shows the nodes contributing to a lava deployment, if
      they are running, and which is the master instance
    lava-{start,stop} - start/stop each lava instance contributing to a
      lava deployment
    lava-upgrade - performs an upgrade for a lava-deployment
Preview Diff
=== modified file '_grains/lava.py'
--- _grains/lava.py	2013-01-10 16:37:55 +0000
+++ _grains/lava.py	2013-02-05 04:19:20 +0000
@@ -9,4 +9,10 @@
         p = os.path.join(instdir, f)
         if os.path.isdir(p):
             insts.append(p)
+    # This seems to be a bug in salt. Even after you reload/resync grains
+    # the module still uses what it read at the time the minion started up.
+    # this makes this dynamic so we don't have to restart salt just to see
+    # a new instance
+    if 'lava_instances' in __grains__:
+        __grains__['lava_instances'] = insts
     return {'lava_instances': insts}

=== added directory 'scripts'
=== added file 'scripts/lava-add-worker'
--- scripts/lava-add-worker	1970-01-01 00:00:00 +0000
+++ scripts/lava-add-worker	2013-02-05 04:19:20 +0000
@@ -0,0 +1,22 @@
+#!/usr/bin/env python
+
+import argparse
+
+import lava_salt
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=lava_salt.add_worker.__doc__)
+    parser.add_argument('minion', metavar='<minion>',
+                        help='The host to install the lava worker instance on.')
+    parser.add_argument('ip', metavar='<ip>',
+                        help='The public IP address for the minion.')
+    parser.add_argument('instance', metavar='<instance>',
+                        help='The instance name we are creating on the worker')
+    parser.add_argument('--dry-run', dest='dryrun', action='store_true',
+                        help='Just display what would be changed')
+    args = parser.parse_args()
+
+    client = lava_salt.salt_client()
+    ret = lava_salt.add_worker(client, args.minion, args.ip, args.instance, args.dryrun)
+    print ret[args.minion]

=== added file 'scripts/lava-start'
--- scripts/lava-start	1970-01-01 00:00:00 +0000
+++ scripts/lava-start	2013-02-05 04:19:20 +0000
@@ -0,0 +1,20 @@
+#!/usr/bin/env python
+
+import argparse
+
+import lava_salt
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=lava_salt.start.__doc__)
+    parser.add_argument('instance', metavar='<instance>',
+                        help='The instance name to start')
+    args = parser.parse_args()
+
+    client = lava_salt.salt_client()
+    ret = lava_salt.start(client, args.instance)
+    if ret:
+        print 'salt started the following instances:'
+        print ret
+    else:
+        print 'Instance already running on all minions'

=== added file 'scripts/lava-status'
--- scripts/lava-status	1970-01-01 00:00:00 +0000
+++ scripts/lava-status	2013-02-05 04:19:20 +0000
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+
+import argparse
+
+import lava_salt
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=lava_salt.info.__doc__)
+    parser.add_argument('instance', metavar='<instance>',
+                        help='The instance name to stop')
+    args = parser.parse_args()
+
+    client = lava_salt.salt_client()
+
+    for host, props in lava_salt.info(client, args.instance).iteritems():
+        running = lava_salt.STATE_STRING[props['running']]
+        master = props['master']
+        print '{0}: running({1}) master({2})'.format(host, running, master)

=== added file 'scripts/lava-stop'
--- scripts/lava-stop	1970-01-01 00:00:00 +0000
+++ scripts/lava-stop	2013-02-05 04:19:20 +0000
@@ -0,0 +1,22 @@
+#!/usr/bin/env python
+
+import argparse
+
+import lava_salt
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=lava_salt.stop.__doc__)
+    parser.add_argument('--just-workers', dest='just_workers', action='store_true',
+                        help='Just stop worker instances, not the master')
+    parser.add_argument('instance', metavar='<instance>',
+                        help='The instance name to stop')
+    args = parser.parse_args()
+
+    client = lava_salt.salt_client()
+    ret = lava_salt.stop(client, args.instance, args.just_workers)
+    if ret:
+        print 'salt stopped the following instances:'
+        print ret
+    else:
+        print 'Instance not running on any minions'

=== added file 'scripts/lava-upgrade'
--- scripts/lava-upgrade	1970-01-01 00:00:00 +0000
+++ scripts/lava-upgrade	2013-02-05 04:19:20 +0000
@@ -0,0 +1,34 @@
+#!/usr/bin/env python
+
+import argparse
+
+import lava_salt
+
+
+def _indented(buff, indent_char):
+    indent_char = '\n' + indent_char
+    return ' ' + indent_char.join(buff.split('\n'))
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=lava_salt.upgrade.__doc__)
+    parser.add_argument('instance', metavar='<instance>',
+                        help='The instance name to upgrade')
+    parser.add_argument('--dry-run', dest='dryrun', action='store_true',
+                        help='Just display what would be changed')
+    args = parser.parse_args()
+
+    client = lava_salt.salt_client()
+    m_ret, w_ret = lava_salt.upgrade(client, args.instance, args.dryrun)
+
+    print 'Master:'
+    for host, msg in m_ret.iteritems():
+        print ' {0}:'.format(host)
+        print '  upgrade:\n{0}'.format(_indented(msg, '  |'))
+
+    print '\nWorkers:'
+    for host, rets in w_ret.iteritems():
+        print ' {0}:'.format(host)
+        print '  stop:\n{0}'.format(_indented(rets['stop'], '  |'))
+        print '  upgrade:\n{0}'.format(_indented(rets['upgrade'], '  |'))
+        print '  start:\n{0}'.format(_indented(rets['start'], '  |'))

=== added file 'scripts/lava_salt.py'
--- scripts/lava_salt.py	1970-01-01 00:00:00 +0000
+++ scripts/lava_salt.py	2013-02-05 04:19:20 +0000
@@ -0,0 +1,218 @@
+import salt.client
+
+RUNNING = 0
+STOPPED = 1
+UNKNOWN = 3
+
+STATE_STRING = {
+    RUNNING: 'running',
+    STOPPED: 'stopped',
+    UNKNOWN: '???',
+}
+
+LDT = '/home/instance-manager/lava-deployment-tool/lava-deployment-tool'
+
+
+def salt_client():
+    return salt.client.LocalClient()
+
+
+def info(client, instance):
+    """
+    Shows whether an instance of LAVA is running or not on its configured hosts.
+    """
+    cmd = 'status lava-instance LAVA_INSTANCE={0}'.format(instance)
+    inst_path = '/srv/lava/instances/{0}'.format(instance)
+    worker_file = '{0}/sbin/mount-masterfs'.format(inst_path)
+
+    inf = {}
+
+    ret = client.cmd('*', 'grains.item', ['lava_instances'])
+    for k, v in ret.iteritems():
+        if inst_path in v:
+            ret = client.cmd(k, 'cmd.run', [cmd])
+            running = UNKNOWN
+            if ret[k] == 'status: Unknown instance: %s' % instance:
+                running = STOPPED
+            elif ret[k] == 'lava-instance (%s) start/running' % instance:
+                running = RUNNING
+
+            ret = client.cmd(k, 'file.file_exists', [worker_file])
+            master = not ret[k]
+
+            inf[k] = {'running': running, 'master': master}
+    return inf
+
+
+def stop(client, instance, just_workers=False):
+    """
+    Issues a command to stop a given instance name on all minions where the
+    LAVA instance appears to be running.
+    """
+    cmd = 'stop lava-instance LAVA_INSTANCE={0}'.format(instance)
+
+    hosts = []
+    for host, props in info(client, instance).iteritems():
+        if props['running'] != STOPPED:
+            if not just_workers or not props['master']:
+                hosts.append(host)
+
+    if len(hosts):
+        return client.cmd(hosts, 'cmd.run', [cmd], expr_form='list')
+
+
+def start(client, instance):
+    """
+    Issues a command to start a given instance name on all minions where the
+    LAVA instance appears to not be running.
+    """
+    cmd = 'start lava-instance LAVA_INSTANCE={0}'.format(instance)
+
+    hosts = []
+    for host, props in info(client, instance).iteritems():
+        if props['running'] != RUNNING:
+            hosts.append(host)
+
+    if len(hosts):
+        return client.cmd(hosts, 'cmd.run', [cmd], expr_form='list')
+
+
+def upgrade(client, instance, dry_run=True):
+    """
+    Runs lava-deployment-tool upgrade for a LAVA setup. It first shuts down
+    each worker node instance. Then it runs lava-deployment-tool upgrade on
+    the master. Lastly, it runs lava-deployment-tool upgradeworker on the
+    worker nodes.
+    """
+    timeout = 300  # 5 minutes
+    workers = []
+    master = None
+
+    for host, props in info(client, instance).iteritems():
+        if props['master']:
+            assert not master, 'Detected multiple master instances in LAVA deployment'
+            master = host
+        else:
+            workers.append(host)
+
+    assert master, 'No master instance found in LAVA deployment'
+
+    w_ret = {}
+    for h in workers:
+        w_ret[h] = {'stop': 'dry-run', 'upgrade': 'dry-run', 'start': 'dry-run'}
+
+    if dry_run:
+        m_ret = {master: 'dry-run: upgrade master'}
+        return m_ret, w_ret
+
+    # first stop workers. This prevents a DB error if the upgrade changes
+    # the schema.
+    ret = stop(client, instance, True)
+    if ret:
+        for host, msg in ret.iteritems():
+            w_ret[host]['stop'] = msg
+
+    # now upgrade the master node
+    cmd = 'SKIP_ROOT_CHECK=yes {0} upgrade {1}'.format(LDT, instance)
+    m_ret = client.cmd(master, 'cmd.run', [cmd], timeout=timeout)
+
+    # now upgrade the workers
+    cmd = 'SKIP_ROOT_CHECK=yes {0} upgradeworker {1}'.format(LDT, instance)
+    if len(workers):
+        ret = client.cmd(workers, 'cmd.run', [cmd], timeout=timeout, expr_form='list')
+        for host, msg in ret.iteritems():
+            w_ret[host]['upgrade'] = msg
+
+    ret = start(client, instance)
+    if ret:
+        for host, msg in ret.iteritems():
+            if host in w_ret:
+                w_ret[host]['start'] = msg
+
+    # last thing: l-d-t ran as root, lets chmod things
+    cmd = 'chown -R instance-manager:instance-manager /srv/lava/instances/{0}/code/*'.format(instance)
+    client.cmd(workers + [master], 'cmd.run', [cmd], expr_form='list')
+
+    return m_ret, w_ret
+
+
+def _update_props(inifile_content, props):
+    for line in inifile_content.split('\n'):
+        if not line.strip().startswith('#'):
+            key, val = line.split('=')
+            if key in props:
+                props[key] = val.replace("'", '')
+
+
+def add_worker(client, minion, minion_ip, instance, dry_run=True):
+    """
+    Creates a new lava workernode on a salt-minion.
+    """
+
+    args = {
+        'LAVA_SERVER_IP': None,
+        'LAVA_SYS_USER': None,
+        'LAVA_PROXY': None,
+        'masterdir': '/srv/lava/instances/{0}'.format(instance),
+        'workerip': minion_ip,
+        'ldt': LDT,
+        'instance': instance,
+        'dbuser': None,
+        'dbpass': None,
+        'dbname': None,
+        'dbserver': None,
+    }
+
+    # ensure the instance exists and isn't already installed on the minion
+    master = None
+    for host, props in info(client, instance).iteritems():
+        if props['master']:
+            assert not master, 'Detected multiple master instances in LAVA deployment'
+            master = host
+        assert minion != host, 'LAVA instance already deployed on minion'
+
+    assert master, 'No master instance found in LAVA deployment'
+
+    # determine settings needed by looking at master instance
+    cmd = 'cat {0}/instance.conf'.format(args['masterdir'])
+    ret = client.cmd(master, 'cmd.run', [cmd])
+    _update_props(ret[master], args)
+
+    # get the db information
+    cmd = 'cat {0}/etc/lava-server/default_database.conf'.format(args['masterdir'])
+    ret = client.cmd(master, 'cmd.run', [cmd])
+    _update_props(ret[master], args)
+    if not args['dbserver']:
+        args['dbserver'] = args['LAVA_SERVER_IP']
+
+    cmd = ('SKIP_ROOT_CHECK=yes '
+           'LAVA_DB_SERVER={dbserver} LAVA_DB_NAME={dbname} '
+           'LAVA_DB_USER={dbuser} LAVA_DB_PASSWORD={dbpass} '
+           'LAVA_REMOTE_FS_HOST={LAVA_SERVER_IP} '
+           'LAVA_REMOTE_FS_USER={LAVA_SYS_USER} LAVA_REMOTE_FS_DIR={masterdir} '
+           'LAVA_PROXY="{LAVA_PROXY}" LAVA_SERVER_IP={workerip} '
+           '{ldt} installworker -n {instance} 2>&1 | tee /tmp/ldt.log'.format(**args))
+
+    if dry_run:
+        return {minion: 'dry-run: {0}'.format(cmd)}
+
+    ret = client.cmd(minion, 'cmd.run', [cmd], timeout=600)
+
+    # l-d-t ran as root, lets chmod things
+    cmd = 'chown -R instance-manager:instance-manager /srv/lava/instances/{0}/code/*'.format(instance)
+    client.cmd(minion, 'cmd.run', [cmd])

+    # now add the pubkey of the minion to the master's list of authorized keys
+    cmd = 'cat /srv/lava/instances/{0}/home/.ssh/id_rsa.pub'.format(instance)
+    pubkey = client.cmd(minion, 'cmd.run', [cmd])
+    pubkey = pubkey[minion].replace('ssh key used by LAVA for sshfs', minion)
+    authorized_keys = '{0}/home/.ssh/authorized_keys'.format(args['masterdir'])
+    client.cmd(master, 'file.append', [authorized_keys, pubkey])
+
+    # salt doens't seem to properly deal with grains that are dynamic.
+    # and calling this only once doesn't get the grain updated
+    # see _grains/lava.py for further details
+    client.cmd(minion, 'saltutil.sync_grains', [])
+    client.cmd(minion, 'saltutil.sync_grains', [])
+
+    return ret
This looks awesome!
One general comment: you don't need to run l-d-t as root. I did the following
on the vagrant-bootstrap script inside the l-d-t repo:
if ! test -d /srv/lava/instances/development; then
  cd $basedir
  su vagrant -c './lava-deployment-tool setup -nd'
  su vagrant -c './lava-deployment-tool install -nd development'
fi
so you could do something like that, and you won't need to chown/chmod away, or
even add that check on l-d-t to allow running as root.
--
Antonio Terceiro
Software Engineer - Linaro
http://www.linaro.org