Merge ~gjolly/ubuntu/+source/sshuttle:merge-1.3.1-1-devel into ubuntu/+source/sshuttle:debian/sid
Proposed by: Gauthier Jolly
Status: Needs review
Proposed branch: ~gjolly/ubuntu/+source/sshuttle:merge-1.3.1-1-devel (lp:~gjolly/ubuntu/+source/sshuttle)
Merge into: ubuntu/+source/sshuttle:debian/sid
Diff against target: 850 lines (+793/-1), 4 files modified:
  debian/changelog (+87/-0)
  debian/control (+2/-1)
  debian/tests/control (+3/-0)
  debian/tests/cross-release (+701/-0)
Related bugs: none
Reviewers:
  Vladimir Petko (community): Abstain
  git-ubuntu import: Pending
Commit message
Manual merge of the package. We were carrying a delta in d/control that is not needed anymore.
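The dropped delta was the removal of the python3-distutils dependency (see the 1.1.1-2ubuntu2 changelog entry in the diff below); since Debian has dropped that dependency as well, the delta can be sanity-checked against the new Debian packaging. A trivial, illustrative check:

$ grep -n distutils debian/control || echo "no distutils dependency left to drop"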
Description of the change
Revision history for this message

Gauthier Jolly (gjolly) wrote:
Closing in favor of the sync: https:/
Unmerged commits
- 62ff4b4... by Gauthier Jolly
  update-maintainer
- ec4423b... by Gauthier Jolly
  reconstruct-changelog
- b92fc59... by Gauthier Jolly
  merge-changelogs
- 083bff7... by Gauthier Jolly
  d/t/control, cross-release: Add autopkgtest for cross-release compatibility checks
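The commits above follow the usual git-ubuntu merge workflow: rebase the old Ubuntu delta onto the new Debian import, then regenerate the changelog and maintainer commits. A rough sketch of that workflow is given below; the exact invocations are an approximation and are not taken from this merge proposal:

$ git ubuntu clone sshuttle && cd sshuttle
$ git ubuntu merge start ubuntu/devel          # tags old/ubuntu, old/debian, new/debian
$ git rebase -i --onto new/debian old/debian   # keep only the delta still needed (the autopkgtest)
$ git ubuntu merge finish ubuntu/devel         # produces the changelog/maintainer commits listed above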
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog |
2 | index 749f9d2..ccb9464 100644 |
3 | --- a/debian/changelog |
4 | +++ b/debian/changelog |
5 | @@ -1,3 +1,13 @@ |
6 | +sshuttle (1.3.1-1ubuntu1) questing; urgency=medium |
7 | + |
8 | + * Merge with Debian unstable. Remaining changes: |
9 | + - d/t/control,cross-release: Add autopkgtest for cross-release |
10 | + compatibility checks |
11 | + * Drop the removal of python3-distutils from d/control as distutils has also |
12 | + been dropped from the upstream Debian control file. |
13 | + |
14 | + -- Gauthier Jolly <contact@gjolly.fr> Fri, 02 May 2025 14:24:08 +0000 |
15 | + |
16 | sshuttle (1.3.1-1) unstable; urgency=medium |
17 | |
18 | * New upstream version. |
19 | @@ -37,6 +47,21 @@ sshuttle (1.1.2-1) unstable; urgency=medium |
20 | |
21 | -- Brian May <bam@debian.org> Mon, 19 Feb 2024 11:55:11 +1100 |
22 | |
23 | +sshuttle (1.1.1-2ubuntu2) noble; urgency=medium |
24 | + |
25 | + * Drop dependency on python3-distutils. |
26 | + |
27 | + -- Matthias Klose <doko@ubuntu.com> Sat, 09 Mar 2024 12:23:57 +0100 |
28 | + |
29 | +sshuttle (1.1.1-2ubuntu1) noble; urgency=low |
30 | + |
31 | + * Merge from Debian unstable. Remaining changes: |
32 | + - d/t/control,cross-release: Add autopkgtest for |
33 | + cross-release compatibility checks |
34 | + * Drop all patches as included in new release. |
35 | + |
36 | + -- James Page <james.page@ubuntu.com> Wed, 14 Feb 2024 09:49:52 +0000 |
37 | + |
38 | sshuttle (1.1.1-2) unstable; urgency=medium |
39 | |
40 | [ Debian Janitor ] |
41 | @@ -60,6 +85,36 @@ sshuttle (1.1.0-1) unstable; urgency=medium |
42 | |
43 | -- Brian May <bam@debian.org> Fri, 28 Jan 2022 09:57:26 +1100 |
44 | |
45 | +sshuttle (1.0.5-1ubuntu4) jammy; urgency=medium |
46 | + |
47 | + * d/p/*use-pty.patch: Cherry-picked from upstream master to fix |
48 | + shuttle permissions failure (LP: #1965829). |
49 | + |
50 | + -- Corey Bryant <corey.bryant@canonical.com> Mon, 21 Mar 2022 16:50:35 -0400 |
51 | + |
52 | +sshuttle (1.0.5-1ubuntu3) impish; urgency=medium |
53 | + |
54 | + * d/t/cross-release: |
55 | + - fix flakiness and speed of test |
56 | + - install net-tools in testbed instances |
57 | + |
58 | + -- Dan Streetman <ddstreet@canonical.com> Wed, 23 Jun 2021 16:34:30 -0400 |
59 | + |
60 | +sshuttle (1.0.5-1ubuntu2) impish; urgency=medium |
61 | + |
62 | + * d/t/cross-release: reduce total test time by waiting less |
63 | + for expected timeouts, and fix when we notice sshuttle started |
64 | + |
65 | + -- Dan Streetman <ddstreet@canonical.com> Mon, 10 May 2021 10:46:14 -0400 |
66 | + |
67 | +sshuttle (1.0.5-1ubuntu1) hirsute; urgency=medium |
68 | + |
69 | + * Merge with Debian; remaining changes: |
70 | + - d/t/control, d/t/cross-release: |
71 | + - add autopkgtest for cross-release compatibility checks |
72 | + |
73 | + -- Matthias Klose <doko@ubuntu.com> Tue, 16 Mar 2021 10:25:36 +0100 |
74 | + |
75 | sshuttle (1.0.5-1) unstable; urgency=medium |
76 | |
77 | * New upstream version. |
78 | @@ -67,6 +122,37 @@ sshuttle (1.0.5-1) unstable; urgency=medium |
79 | |
80 | -- Brian May <bam@debian.org> Tue, 29 Dec 2020 11:00:34 +1100 |
81 | |
82 | +sshuttle (1.0.4-1ubuntu4) groovy; urgency=medium |
83 | + |
84 | + * d/t/cross-release: |
85 | + - test without providing --python param |
86 | + |
87 | + -- Dan Streetman <ddstreet@canonical.com> Wed, 30 Sep 2020 17:23:22 -0400 |
88 | + |
89 | +sshuttle (1.0.4-1ubuntu3) groovy; urgency=medium |
90 | + |
91 | + * d/t/cross-release: |
92 | + - fixes for autopkgtest |
93 | + |
94 | + -- Dan Streetman <ddstreet@canonical.com> Sat, 19 Sep 2020 08:28:03 -0400 |
95 | + |
96 | +sshuttle (1.0.4-1ubuntu2) groovy; urgency=medium |
97 | + |
98 | + * d/t/cross-release: fix error in checking sshuttle version |
99 | + |
100 | + -- Dan Streetman <ddstreet@canonical.com> Fri, 18 Sep 2020 19:22:08 -0400 |
101 | + |
102 | +sshuttle (1.0.4-1ubuntu1) groovy; urgency=medium |
103 | + |
104 | + * d/p/lp1873368/0001-Fix-python2-server-compatibility.patch, |
105 | + d/p/lp1873368/0002-Fix-flake8-line-too-long.patch, |
106 | + d/p/lp1873368/0003-Fix-python2-client-compatibility.patch: |
107 | + - fix compatibility with remote py2 (LP: #1873368) |
108 | + * d/t/control, d/t/cross-release: |
109 | + - add autopkgtest for cross-release compatibility checks |
110 | + |
111 | + -- Dan Streetman <ddstreet@canonical.com> Fri, 18 Sep 2020 13:57:01 -0400 |
112 | + |
113 | sshuttle (1.0.4-1) unstable; urgency=low |
114 | |
115 | [ Debian Janitor ] |
116 | @@ -286,3 +372,4 @@ sshuttle (0.42-1) unstable; urgency=low |
117 | * Write manpage for the Debian release |
118 | |
119 | -- Javier Fernandez-Sanguino Pen~a <jfs@debian.org> Wed, 27 Oct 2010 02:50:49 +0200 |
120 | + |
121 | diff --git a/debian/control b/debian/control |
122 | index ccbd6f5..458ac61 100644 |
123 | --- a/debian/control |
124 | +++ b/debian/control |
125 | @@ -1,7 +1,8 @@ |
126 | Source: sshuttle |
127 | Section: net |
128 | Priority: optional |
129 | -Maintainer: Brian May <bam@debian.org> |
130 | +Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com> |
131 | +XSBC-Original-Maintainer: Brian May <bam@debian.org> |
132 | Build-Depends: debhelper-compat (= 13), dh-python, |
133 | python3-all, python3-pytest, |
134 | python3-sphinx, |
135 | diff --git a/debian/tests/control b/debian/tests/control |
136 | new file mode 100644 |
137 | index 0000000..42bf2b9 |
138 | --- /dev/null |
139 | +++ b/debian/tests/control |
140 | @@ -0,0 +1,3 @@ |
141 | +Tests: cross-release |
142 | +Restrictions: allow-stderr, isolation-machine, needs-root, breaks-testbed, skippable |
143 | +Depends: @, lxd, ssh, python3, python3-apt, python3-distro-info |
144 | diff --git a/debian/tests/cross-release b/debian/tests/cross-release |
145 | new file mode 100644 |
146 | index 0000000..575d6a4 |
147 | --- /dev/null |
148 | +++ b/debian/tests/cross-release |
149 | @@ -0,0 +1,701 @@ |
150 | +#!/usr/bin/python3 |
151 | +# |
152 | +# This test uses lxd to create a container for each supported Ubuntu release, |
153 | +# and test if sshuttle works from the local testbed (using the sshuttle under test) |
154 | +# to the remote container. |
155 | +# |
156 | +# This also tests the reverse, by creating a container matching the testbed's release, |
157 | +# and connecting from each supported Ubuntu release's container. Note that the reverse |
158 | +# direction tests *do not* test the sshuttle under test by this autopkgtest, since |
159 | +# on a "remote" system sshuttle is not involved at all, and does not even need to be |
160 | +# installed; the reverse direction test primarily tests if anything *else* has changed |
161 | +# that breaks the *existing* sshuttle on the instance (most likely, changes in python). |
162 | + |
163 | +import apt_pkg |
164 | +import functools |
165 | +import ipaddress |
166 | +import json |
167 | +import os |
168 | +import re |
169 | +import sys |
170 | +import subprocess |
171 | +import tempfile |
172 | +import time |
173 | +import unittest |
174 | + |
175 | +from aptsources.distro import get_distro |
176 | +from contextlib import suppress |
177 | +from distro_info import UbuntuDistroInfo |
178 | +from pathlib import Path |
179 | + |
180 | + |
181 | +DISTROINFO = UbuntuDistroInfo() |
182 | +VALID_RELEASES = set(DISTROINFO.supported_esm() + DISTROINFO.supported() + [DISTROINFO.devel()]) |
183 | +RELEASES = [] |
184 | +TESTBED = None |
185 | +SSH_KEY = None |
186 | +SSH_CONFIG = None |
187 | +IF_NAME = 'eth1' |
188 | +BR_NAME = 'br1' |
189 | + |
190 | +# really silly that users need to call this...especially just to use version_compare |
191 | +apt_pkg.init_system() |
192 | + |
193 | +class Testbed(object): |
194 | + def __init__(self): |
195 | + self.net = {} |
196 | + self.index = {} |
197 | + self._find_subnets() |
198 | + |
199 | + @property |
200 | + def shared_glob(self): |
201 | + '''Get the 10.X.* glob-format subnet for ~/.ssh/config usage''' |
202 | + b = self.net.get('shared').exploded.split('.')[1] |
203 | + return f'10.{b}.0.*' |
204 | + |
205 | + @property |
206 | + def shared_next(self): |
207 | + '''This is the next unique ip address on the shared subnet''' |
208 | + return self._next_interface('shared') |
209 | + |
210 | + @property |
211 | + def remote(self): |
212 | + '''This is the unique private subnet used for each "remote" instance''' |
213 | + return self._interface('remote') |
214 | + |
215 | + @property |
216 | + def reverse(self): |
217 | + '''This is the unique private subnet used for the single "reverse remote" instance''' |
218 | + return self._interface('reverse') |
219 | + |
220 | + def _network(self, b): |
221 | + return ipaddress.ip_network(f'10.{b}.0.0/24') |
222 | + |
223 | + def _network_to_interface(self, network, index=0): |
224 | + addr = list(network.hosts())[index].exploded |
225 | + return ipaddress.ip_interface(f'{addr}/{network.prefixlen}') |
226 | + |
227 | + def _interface(self, name, index=0): |
228 | + return self._network_to_interface(self.net.get(name), index) |
229 | + |
230 | + def _next_interface(self, name): |
231 | + index = self.index.get(name) |
232 | + self.index[name] = index + 1 |
233 | + return self._interface(name, index) |
234 | + |
235 | + def _find_subnets(self): |
236 | + start = 254 |
237 | + for name in ['shared', 'remote', 'reverse']: |
238 | + start = self._find_subnet(name, start) |
239 | + |
240 | + def _find_subnet(self, name, start): |
241 | + for b in range(start, 0, -1): |
242 | + network = self._network(b) |
243 | + iface = self._network_to_interface(network) |
244 | + result = subprocess.run(f'ip r get {iface.ip}'.split(), |
245 | + encoding='utf-8', stdout=subprocess.PIPE, stderr=subprocess.PIPE) |
246 | + # we try to find a subnet that isn't locally reachable |
247 | + # if returncode != 0, then the subnet isn't reachable (i.e. no gateway) |
248 | + # if 'via' is in stdout, then the subnet isn't locally reachable |
249 | + if result.returncode != 0 or 'via' in result.stdout: |
250 | + self.net[name] = network |
251 | + self.index[name] = 0 |
252 | + return b - 1 |
253 | + else: |
254 | + raise Exception('Could not find any 10.* subnet to use for private addresses') |
255 | + |
256 | +@functools.lru_cache |
257 | +def get_arch(): |
258 | + return run_cmd('dpkg --print-architecture').stdout.strip() |
259 | + |
260 | +def is_expected_failure(src, dst, python): |
261 | + if not python and is_expected_failure_nopy(src, dst): |
262 | + return True |
263 | + if python == 'python2' and is_expected_failure_py2(src, dst): |
264 | + return True |
265 | + if python == 'python3' and is_expected_failure_py3(src, dst): |
266 | + return True |
267 | + |
268 | + # otherwise, we don't expect failure |
269 | + return False |
270 | + |
271 | +def is_expected_failure_nopy(src, dst): |
272 | + # failure due to regression in patch to detect python command |
273 | + # should be fixed in version after this; LP: #1897961 |
274 | + if src.release == 'xenial' and apt_pkg.version_compare(src.sshuttle_version, '0.76-1ubuntu1.1') <= 0: |
275 | + return True |
276 | + |
277 | +def is_expected_failure_py2(src, dst): |
278 | + # failure due to regression from initial fix for py3.8 fix |
279 | + # should be fixed in version after this; LP: #1873368 |
280 | + if src.release == 'focal' and apt_pkg.version_compare(src.sshuttle_version, '0.78.5-1ubuntu1') <= 0: |
281 | + return True |
282 | + |
283 | +def is_expected_failure_py3(src, dst): |
284 | + # expected failure: trusty -> any |
285 | + # since trusty is now ESM only, this isn't expected to be fixed |
286 | + if src.release == 'trusty': |
287 | + return True |
288 | + |
289 | + # failure with py3.8 (or later) target, which is default py3 in focal (or later) |
290 | + if DISTROINFO.version(dst.release) >= DISTROINFO.version('focal'): |
291 | + # should be fixed in version after each of these; LP: #1873368 |
292 | + if src.release == 'xenial' and apt_pkg.version_compare(src.sshuttle_version, '0.76-1ubuntu1') <= 0: |
293 | + return True |
294 | + if src.release == 'bionic' and apt_pkg.version_compare(src.sshuttle_version, '0.78.3-1ubuntu1') <= 0: |
295 | + return True |
296 | + if src.release == 'focal' and apt_pkg.version_compare(src.sshuttle_version, '0.78.5-1ubuntu1') <= 0: |
297 | + return True |
298 | + |
299 | + # otherwise, we don't expect failure |
300 | + return False |
301 | + |
302 | +def set_releases(releases): |
303 | + invalid_releases = list(set(releases) - VALID_RELEASES) |
304 | + if invalid_releases: |
305 | + print(f'ignoring invalid release(s): {", ".join(invalid_releases)}') |
306 | + valid_releases = list(set(releases) & VALID_RELEASES) |
307 | + if valid_releases: |
308 | + print(f'limiting remote release(s) to: {", ".join(valid_releases)}') |
309 | + RELEASES.clear() |
310 | + RELEASES.extend(valid_releases) |
311 | + |
312 | +def load_tests(loader, standard_tests, pattern): |
313 | + suite = unittest.TestSuite() |
314 | + for release in sorted(RELEASES or VALID_RELEASES): |
315 | + cls = type(f'SshuttleTest_{release}', (SshuttleTest,), |
316 | + {'release': release}) |
317 | + suite.addTests(loader.loadTestsFromTestCase(cls)) |
318 | + return suite |
319 | + |
320 | +def setUpModule(): |
321 | + global TESTBED |
322 | + |
323 | + _run_cmd('lxd init --auto', check=True) |
324 | + |
325 | + TESTBED = Testbed() |
326 | + |
327 | + add_shared_bridge() |
328 | + add_private_subnets() |
329 | + init_ssh_config() |
330 | + init_base_test_class() |
331 | + |
332 | +def tearDownModule(): |
333 | + remove_ssh_config() |
334 | + remove_private_subnets() |
335 | + remove_shared_bridge() |
336 | + del SshuttleTest.reverse_remote |
337 | + |
338 | +def add_shared_bridge(): |
339 | + _run_cmd(f'ip l add dev {BR_NAME} type bridge') |
340 | + _run_cmd(f'ip l set up dev {BR_NAME}') |
341 | + _run_cmd(f'ip a add {TESTBED.shared_next} dev {BR_NAME}') |
342 | + |
343 | +def add_private_subnets(): |
344 | + # Force the private addrs unreachable so we don't try to reach them out our normal gateway |
345 | + _run_cmd(f'ip r add {TESTBED.remote.network} dev lo') |
346 | + _run_cmd(f'ip r add {TESTBED.reverse.network} dev lo') |
347 | + |
348 | +def remove_private_subnets(): |
349 | + _run_cmd(f'ip r del {TESTBED.remote.network} dev lo') |
350 | + _run_cmd(f'ip r del {TESTBED.reverse.network} dev lo') |
351 | + |
352 | +def remove_shared_bridge(): |
353 | + _run_cmd(f'ip l del dev {BR_NAME}') |
354 | + |
355 | +def init_ssh_config(): |
356 | + global SSH_KEY |
357 | + global SSH_CONFIG |
358 | + |
359 | + id_rsa = Path('/root/.ssh/id_rsa') |
360 | + if not id_rsa.exists(): |
361 | + _run_cmd(['ssh-keygen', '-f', str(id_rsa), '-P', ''], check=True) |
362 | + SSH_KEY = id_rsa.with_suffix('.pub').read_text(encoding='utf-8') |
363 | + |
364 | + SSH_CONFIG = '\n'.join([f'Host {TESTBED.remote.ip} {TESTBED.reverse.ip} {TESTBED.shared_glob}', |
365 | + ' StrictHostKeyChecking no', |
366 | + ' UserKnownHostsFile /dev/null', |
367 | + ' ConnectTimeout 10', |
368 | + ' ConnectionAttempts 18', |
369 | + '']) |
370 | + config = Path('/root/.ssh/config') |
371 | + if config.exists(): |
372 | + content = config.read_text(encoding='utf-8') or '' |
373 | + if content and not content.endswith('\n'): |
374 | + content += '\n' |
375 | + else: |
376 | + content = '' |
377 | + content += SSH_CONFIG |
378 | + config.write_text(content, encoding='utf-8') |
379 | + |
380 | +def remove_ssh_config(): |
381 | + config = Path('/root/.ssh/config') |
382 | + content = config.read_text(encoding='utf-8') |
383 | + config.write_text(content.replace(SSH_CONFIG, ''), encoding='utf-8') |
384 | + |
385 | +def init_base_test_class(): |
386 | + cls = SshuttleTest |
387 | + |
388 | + cls.release = get_distro().codename |
389 | + reverse_remote = Remote(f'reverse-remote-{cls.release}', cls.release) |
390 | + reverse_remote.add_ssh_key(SSH_KEY) |
391 | + reverse_remote.add_ssh_config(SSH_CONFIG) |
392 | + reverse_remote.private = TESTBED.reverse |
393 | + reverse_remote.add_start_cmd(f'ip a add {reverse_remote.private} dev lo') |
394 | + reverse_remote.snapshot_create() |
395 | + cls.reverse_remote = reverse_remote |
396 | + |
397 | +def _run_cmd(cmd, **kwargs): |
398 | + if type(cmd) == str: |
399 | + cmd = cmd.split() |
400 | + return subprocess.run(cmd, **kwargs) |
401 | + |
402 | +def run_cmd(cmd, **kwargs): |
403 | + kwargs.setdefault('stdout', subprocess.PIPE) |
404 | + kwargs.setdefault('stderr', subprocess.STDOUT) |
405 | + kwargs.setdefault('encoding', 'utf-8') |
406 | + return _run_cmd(cmd, **kwargs) |
407 | + |
408 | + |
409 | +class Remote(object): |
410 | + def __init__(self, name, release): |
411 | + self.name = name |
412 | + self.release = release |
413 | + self.shared = TESTBED.shared_next |
414 | + self._start_cmds = [] |
415 | + |
416 | + cmd = f'lxc delete --force {self.name}' |
417 | + self.log(cmd) |
418 | + run_cmd(cmd) |
419 | + |
420 | + image = f'ubuntu-daily:{release}' |
421 | + cmd = f'lxc launch --quiet {image} {self.name}' |
422 | + self.log(cmd) |
423 | + result = run_cmd(cmd) |
424 | + if result.returncode != 0: |
425 | + raise Exception(f'Could not launch {self.name}: {result.stdout}') |
426 | + |
427 | + cmd = f'lxc config device add {self.name} {IF_NAME} nic name={IF_NAME} nictype=bridged parent={BR_NAME}' |
428 | + self.log(cmd) |
429 | + result = run_cmd(cmd) |
430 | + if result.returncode != 0: |
431 | + raise Exception(f'Could not add {IF_NAME}: {result.stdout}') |
432 | + |
433 | + self.add_start_cmd(f'ip l set up dev {IF_NAME}') |
434 | + self.add_start_cmd(f'ip a add {self.shared} dev {IF_NAME}') |
435 | + |
436 | + self._wait_for_networking() |
437 | + self._create_ssh_key() |
438 | + self._add_local_ppas() |
439 | + self._add_proposed() |
440 | + self._apt_update_upgrade() |
441 | + self._install_net_tools() |
442 | + self._install_sshuttle() |
443 | + self._install_python() |
444 | + self.stop(force=False) |
445 | + |
446 | + def log(self, msg): |
447 | + print(f'{self.name}: {msg}') |
448 | + |
449 | + def save_journal(self, testname, remotename): |
450 | + artifacts_dir = os.getenv('AUTOPKGTEST_ARTIFACTS') |
451 | + if not artifacts_dir: |
452 | + self.log('AUTOPKGTEST_ARTIFACTS unset, not saving container journal') |
453 | + return |
454 | + |
455 | + dst = Path(artifacts_dir) / testname / remotename |
456 | + dst.mkdir(parents=True, exist_ok=True) |
457 | + |
458 | + self.lxc_exec('journalctl --sync --flush') |
459 | + self.lxc_file_pull('/var/log/journal', dst, recursive=True) |
460 | + |
461 | + def _wait_for_networking(self): |
462 | + self.log(f'Waiting for {self.name} to finish starting') |
463 | + for sec in range(120): |
464 | + if 'via' in self.lxc_exec('ip r show default').stdout: |
465 | + break |
466 | + time.sleep(0.5) |
467 | + else: |
468 | + raise Exception(f'Timed out waiting for remote {self.name} networking') |
469 | + |
470 | + def _create_ssh_key(self): |
471 | + self.log('creating ssh key') |
472 | + self.lxc_exec(['ssh-keygen', '-f', '/root/.ssh/id_rsa', '-P', '']) |
473 | + self._ssh_key = self.lxc_exec('cat /root/.ssh/id_rsa.pub').stdout |
474 | + |
475 | + def _add_local_ppas(self): |
476 | + paths = list(Path('/etc/apt/sources.list.d').glob('*.list')) |
477 | + paths.append(Path('/etc/apt/sources.list')) |
478 | + ppas = [] |
479 | + for path in paths: |
480 | + for line in path.read_text(encoding='utf-8').splitlines(): |
481 | + match = re.match(r'^deb .*ppa.launchpad.net/(?P<team>\w+)/(?P<ppa>\w+)/ubuntu', line) |
482 | + if match: |
483 | + ppas.append(f'ppa:{match.group("team")}/{match.group("ppa")}') |
484 | + for ppa in ppas: |
485 | + self.log(f'adding PPA {ppa}') |
486 | + self.lxc_exec(['add-apt-repository', '-y', ppa]) |
487 | + |
488 | + def _add_proposed(self): |
489 | + with tempfile.TemporaryDirectory() as d: |
490 | + f = Path(d) / 'tempfile' |
491 | + self.lxc_file_pull('/etc/apt/sources.list', str(f)) |
492 | + for line in f.read_text(encoding='utf-8').splitlines(): |
493 | + match = re.match(rf'^deb (?P<uri>\S+) {self.release} main.*', line) |
494 | + if match: |
495 | + uri = match.group('uri') |
496 | + components = 'main universe restricted multiverse' |
497 | + proposed_line = f'deb {uri} {self.release}-proposed {components}' |
498 | + self.log(f'adding {self.release}-proposed using {uri}') |
499 | + self.lxc_exec(['add-apt-repository', '-y', proposed_line]) |
500 | + return |
501 | + |
502 | + def _apt_update_upgrade(self): |
503 | + self.log('upgrading packages') |
504 | + self.lxc_apt('update') |
505 | + self.lxc_apt('upgrade -y') |
506 | + |
507 | + def _install_net_tools(self): |
508 | + self.log('installing net-tools') |
509 | + result_install = self.lxc_apt('install -y net-tools') |
510 | + result_which = self.lxc_exec('which netstat') |
511 | + if result_which.returncode != 0: |
512 | + err = result_install.stdout + result_which.stdout |
513 | + raise Exception(f'could not install net-tools: {err}') |
514 | + |
515 | + def _install_sshuttle(self): |
516 | + self.log('installing sshuttle') |
517 | + result_install = self.lxc_apt('install -y sshuttle') |
518 | + result_which = self.lxc_exec('which sshuttle') |
519 | + if result_which.returncode != 0: |
520 | + err = result_install.stdout + result_which.stdout |
521 | + raise Exception(f'could not install sshuttle: {err}') |
522 | + self.sshuttle_version = self.lxc_exec('dpkg-query -f ${Version} -W sshuttle').stdout |
523 | + |
524 | + def _install_python(self): |
525 | + self.log('installing python') |
526 | + self.lxc_apt('install -y python') |
527 | + for python in ['python2', 'python3']: |
528 | + result_install = self.lxc_apt(['install', '-y', python]) |
529 | + result_which = self.lxc_exec(['which', python]) |
530 | + if result_which.returncode != 0: |
531 | + err = result_install.stdout + result_which.stdout |
532 | + raise Exception(f'could not install {python}: {err}') |
533 | + |
534 | + def snapshot_create(self, name='default'): |
535 | + self.log(f'creating snapshot: {name}') |
536 | + self.stop(force=False) |
537 | + subprocess.run(['lxc', 'snapshot', self.name, name], check=True) |
538 | + |
539 | + def snapshot_restore(self, name='default', start=True): |
540 | + self.log(f'restoring snapshot: {name}') |
541 | + self.stop() |
542 | + subprocess.run(['lxc', 'restore', self.name, name], check=True) |
543 | + if start: |
544 | + self.start() |
545 | + |
546 | + def snapshot_update(self, name='default'): |
547 | + self.log(f'updating snapshot: {name}') |
548 | + subprocess.run(['lxc', 'delete', '--force', f'{self.name}/{name}'], check=True) |
549 | + self.snapshot_create(name) |
550 | + |
551 | + @functools.cached_property |
552 | + def ssh_key(self): |
553 | + return self._ssh_key |
554 | + |
555 | + def add_start_cmd(self, cmd): |
556 | + self.log(f'adding start cmd: {cmd}') |
557 | + self._start_cmds.append(cmd) |
558 | + |
559 | + def add_file_content(self, path, content): |
560 | + with tempfile.TemporaryDirectory() as d: |
561 | + localfile = Path(d) / Path(path).name |
562 | + self.lxc_file_pull(path, str(localfile)) |
563 | + existing_content = localfile.read_text(encoding='utf-8') or '' |
564 | + if content not in existing_content: |
565 | + if existing_content and not existing_content.endswith('\n'): |
566 | + existing_content += '\n' |
567 | + existing_content += content |
568 | + localfile.write_text(existing_content) |
569 | + self.lxc_file_push(str(localfile), path) |
570 | + |
571 | + def add_ssh_key(self, key): |
572 | + self.log(f'adding ssh key: {key.strip()}') |
573 | + self.add_file_content('/root/.ssh/authorized_keys', key) |
574 | + |
575 | + def add_ssh_config(self, config): |
576 | + self.log('adding ssh config') |
577 | + self.add_file_content('/root/.ssh/config', config) |
578 | + |
579 | + def lxc_exec(self, cmd, **kwargs): |
580 | + if type(cmd) == str: |
581 | + cmd = cmd.split() |
582 | + return run_cmd(['lxc', 'exec', self.name, '--'] + cmd, **kwargs) |
583 | + |
584 | + def lxc_apt(self, cmd, **kwargs): |
585 | + if type(cmd) == str: |
586 | + cmd = cmd.split() |
587 | + return run_cmd(['lxc', 'exec', self.name, '--env', 'DEBIAN_FRONTEND=noninteractive', '--', 'apt'] + cmd, **kwargs) |
588 | + |
589 | + def lxc_file_pull(self, remote, local, fail_if_missing=False, recursive=False): |
590 | + remote = f'{self.name}{remote}' |
591 | + self.log(f'{local} <- {remote}') |
592 | + cmd = ['lxc', 'file', 'pull', remote, local] |
593 | + if recursive: |
594 | + cmd += ['--recursive', '--create-dirs'] |
595 | + try: |
596 | + run_cmd(cmd, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) |
597 | + except subprocess.CalledProcessError: |
598 | + if fail_if_missing: |
599 | + raise |
600 | + if recursive: |
601 | + self.log('remote dir missing, ignoring') |
602 | + return |
603 | + localpath = Path(local) |
604 | + if localpath.is_dir(): |
605 | + localpath = localpath / Path(remote).name |
606 | + localpath.touch() |
607 | + self.log(f'remote file missing, created empty file {localpath}') |
608 | + |
609 | + def lxc_file_push(self, local, remote): |
610 | + remote = f'{self.name}{remote}' |
611 | + self.log(f'{local} -> {remote}') |
612 | + run_cmd(['lxc', 'file', 'push', local, remote], check=True) |
613 | + |
614 | + @property |
615 | + def json(self): |
616 | + listjson = run_cmd('lxc list --format json').stdout |
617 | + filtered = list(filter(lambda i: i['name'] == self.name, json.loads(listjson))) |
618 | + if len(filtered) != 1: |
619 | + raise Exception(f'Expected only 1 lxc list entry for {self.name}, found {len(filtered)}:\n{listjson}') |
620 | + return filtered[0] |
621 | + |
622 | + @property |
623 | + def is_running(self): |
624 | + return self.json['status'] == 'Running' |
625 | + |
626 | + def start(self): |
627 | + if not self.is_running: |
628 | + cmd = f'lxc start {self.name}' |
629 | + self.log(cmd) |
630 | + result = run_cmd(cmd, check=True) |
631 | + if result.stdout: |
632 | + self.log(result.stdout) |
633 | + self._wait_for_networking() |
634 | + for cmd in self._start_cmds: |
635 | + self.lxc_exec(cmd) |
636 | + |
637 | + def stop(self, force=True): |
638 | + cmd = 'lxc stop' |
639 | + if force: |
640 | + cmd += ' --force' |
641 | + cmd += f' {self.name}' |
642 | + self.log(cmd) |
643 | + result = run_cmd(cmd) |
644 | + if result.stdout: |
645 | + self.log(result.stdout) |
646 | + |
647 | + def __del__(self): |
648 | + run_cmd(['lxc', 'delete', '--force', self.name], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) |
649 | + |
650 | + |
651 | +class VerboseAssertionError(AssertionError): |
652 | + __logs = [] |
653 | + |
654 | + def __init__(self, *args): |
655 | + logs = list(args) + self.read_log() |
656 | + super(VerboseAssertionError, self).__init__('\n'.join(logs)) |
657 | + |
658 | + @classmethod |
659 | + def add_log(cls, msg): |
660 | + cls.__logs.append(str(msg)) |
661 | + |
662 | + @classmethod |
663 | + def read_log(cls): |
664 | + log = cls.__logs |
665 | + cls.__logs = [] |
666 | + return log |
667 | + |
668 | + @classmethod |
669 | + def clear_log(cls): |
670 | + cls.read_log() |
671 | + |
672 | + |
673 | +class SshuttleTest(unittest.TestCase): |
674 | + release = None |
675 | + failureException = VerboseAssertionError |
676 | + |
677 | + @classmethod |
678 | + def is_arch_supported(cls): |
679 | + if cls.release == 'trusty': |
680 | + return get_arch() == 'amd64' |
681 | + return True |
682 | + |
683 | + @classmethod |
684 | + def setUpClass(cls): |
685 | + # note that some of the cls attrs used here are set by setUpModule() |
686 | + |
687 | + # this is set by the subclass, and required |
688 | + assert(cls.release) |
689 | + |
690 | + if not cls.is_arch_supported(): |
691 | + raise unittest.SkipTest(f'Release {cls.release} not available for {get_arch()}') |
692 | + |
693 | + remote = Remote(f'remote-{cls.release}', cls.release) |
694 | + remote.add_ssh_key(SSH_KEY) |
695 | + remote.add_ssh_config(SSH_CONFIG) |
696 | + remote.private = TESTBED.remote |
697 | + remote.add_start_cmd(f'ip a add {remote.private} dev lo') |
698 | + remote.snapshot_create() |
699 | + cls.remote = remote |
700 | + |
701 | + cls.reverse_remote.snapshot_restore() |
702 | + cls.reverse_remote.add_ssh_key(cls.remote.ssh_key) |
703 | + cls.reverse_remote.snapshot_update() |
704 | + |
705 | + @classmethod |
706 | + def tearDownClass(cls): |
707 | + del cls.remote |
708 | + |
709 | + def setUp(self): |
710 | + self.name = f'testbed-{self.release}' |
711 | + self.reverse_remote.snapshot_restore() |
712 | + self.remote.snapshot_restore() |
713 | + self.sshuttle_process = None |
714 | + self.sshuttle_log = tempfile.NamedTemporaryFile() |
715 | + self.failureException.clear_log() |
716 | + |
717 | + def tearDown(self): |
718 | + self.sshuttle_stop() |
719 | + self.sshuttle_log.close() |
720 | + self.remote.stop() |
721 | + self.reverse_remote.stop() |
722 | + |
723 | + @functools.cached_property |
724 | + def sshuttle_version(self): |
725 | + return run_cmd('dpkg-query -f ${Version} -W sshuttle').stdout |
726 | + |
727 | + def sshuttle_started_ok(self): |
728 | + output = Path(self.sshuttle_log.name).read_text(encoding='utf-8') |
729 | + # Unfortunately we have to just grep the output to see if it 'connected' |
730 | + # and the specific output format has changed across versions |
731 | + # Since all output so far includes 'Connected' we'll use that word |
732 | + return 'connected' in output.lower() |
733 | + |
734 | + def sshuttle_start(self, dst, python): |
735 | + sshuttle_cmd = 'sshuttle' |
736 | + if python: |
737 | + sshuttle_cmd += f' --python {python}' |
738 | + sshuttle_cmd += f' -r {dst.shared.ip} {dst.private.network}' |
739 | + if dst is self.reverse_remote: |
740 | + sshuttle_cmd = f'lxc exec {self.remote.name} -- {sshuttle_cmd}' |
741 | + print(f'running: {sshuttle_cmd}') |
742 | + self.sshuttle_process = subprocess.Popen(sshuttle_cmd.split(), encoding='utf-8', |
743 | + stdout=self.sshuttle_log, stderr=self.sshuttle_log) |
744 | + print('waiting for sshuttle to start...', end='', flush=True) |
745 | + for sec in range(300): |
746 | + if self.sshuttle_process.poll() is not None: |
747 | + print('sshuttle failed :-(', flush=True) |
748 | + break |
749 | + if self.sshuttle_started_ok(): |
750 | + print('started', flush=True) |
751 | + break |
752 | + time.sleep(1) |
753 | + print('.', end='', flush=True) |
754 | + else: |
755 | + print("WARNING: timed out waiting for sshuttle to start, the test may fail") |
756 | + if self.sshuttle_process.poll() is not None: |
757 | + self.fail('sshuttle process failed to start') |
758 | + |
759 | + def sshuttle_stop(self): |
760 | + if self.sshuttle_process and self.sshuttle_process.poll() is None: |
761 | + print('stopping sshuttle...') |
762 | + self.sshuttle_process.terminate() |
763 | + with suppress(subprocess.TimeoutExpired): |
764 | + self.sshuttle_process.communicate(timeout=30) |
765 | + print('sshuttle stopped') |
766 | + self.sshuttle_process = None |
767 | + return |
768 | + |
769 | + print('sshuttle did not respond, killing sshuttle...') |
770 | + self.sshuttle_process.kill() |
771 | + with suppress(subprocess.TimeoutExpired): |
772 | + self.sshuttle_process.communicate(timeout=30) |
773 | + print('sshuttle stopped') |
774 | + self.sshuttle_process = None |
775 | + return |
776 | + |
777 | + self.fail('sshuttle subprocess refused to stop') |
778 | + |
779 | + def ssh_to(self, remote, expect_timeout=False): |
780 | + ssh_cmd = 'ssh' |
781 | + if expect_timeout: |
782 | + # No need to wait long if we expect it to timeout |
783 | + ssh_cmd += ' -o ConnectionAttempts=2' |
784 | + ssh_cmd += f' -v {remote.private.ip} -- cat /proc/sys/kernel/hostname' |
785 | + if remote is self.reverse_remote: |
786 | + ssh_cmd = f'lxc exec {self.remote.name} -- {ssh_cmd}' |
787 | + print(f'running: {ssh_cmd}') |
788 | + result = run_cmd(ssh_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) |
789 | + failed = result.returncode != 0 |
790 | + if failed: |
791 | + if result.stderr: |
792 | + # just print the last line here |
793 | + print(result.stderr.splitlines()[-1]) |
794 | + if failed != expect_timeout: |
795 | + self.failureException.add_log(result.stdout) |
796 | + self.failureException.add_log(result.stderr) |
797 | + elif result.stdout: |
798 | + print(f'Connected to: {result.stdout.strip()}') |
799 | + msg = 'ssh' |
800 | + if failed: |
801 | + msg += ' failed' |
802 | + if expect_timeout: |
803 | + msg += ' (as expected)' |
804 | + else: |
805 | + msg += ' connected' |
806 | + print(msg, flush=True) |
807 | + return not failed |
808 | + |
809 | + def test_local_to_remote_nopy(self): |
810 | + self._test_to_remote(self, self.remote, None) |
811 | + |
812 | + def test_local_to_remote_py2(self): |
813 | + self._test_to_remote(self, self.remote, 'python2') |
814 | + |
815 | + def test_local_to_remote_py3(self): |
816 | + self._test_to_remote(self, self.remote, 'python3') |
817 | + |
818 | + def test_remote_to_reverse_remote_nopy(self): |
819 | + self._test_to_remote(self.remote, self.reverse_remote, None) |
820 | + |
821 | + def test_remote_to_reverse_remote_py2(self): |
822 | + self._test_to_remote(self.remote, self.reverse_remote, 'python2') |
823 | + |
824 | + def test_remote_to_reverse_remote_py3(self): |
825 | + self._test_to_remote(self.remote, self.reverse_remote, 'python3') |
826 | + |
827 | + def _test_to_remote(self, src, dst, python): |
828 | + self.failureException.add_log(f'Test detail: {src.name} sshuttle {src.sshuttle_version} to {dst.name} {python if python else ""}') |
829 | + print('this ssh connection should timeout:') |
830 | + self.assertFalse(self.ssh_to(dst, expect_timeout=True)) |
831 | + try: |
832 | + self.sshuttle_start(dst, python) |
833 | + print('this ssh connection should not timeout:') |
834 | + self.assertTrue(self.ssh_to(dst)) |
835 | + except AssertionError: |
836 | + if is_expected_failure(src, dst, python): |
837 | + self.skipTest('This is an expected failure, ignoring test failure') |
838 | + else: |
839 | + self.failureException.add_log(Path(self.sshuttle_log.name).read_text(encoding='utf-8')) |
840 | + testname = '.'.join(self.id().split('.')[-2:]) |
841 | + self.remote.save_journal(testname, 'remote') |
842 | + self.reverse_remote.save_journal(testname, 'reverse_remote') |
843 | + raise |
844 | + |
845 | + |
846 | +if __name__ == '__main__': |
847 | + if len(sys.argv) > 1: |
848 | + set_releases(sys.argv[1:]) |
849 | + del sys.argv[1:] |
850 | + unittest.main(verbosity=2) |
Running autopkgtests results in BADPKG since we no longer package lxd:
---
The following packages have unmet dependencies:
 command-line : Depends: sshuttle but it is not going to be installed
                Depends: lxd but it is not installable
satisfy:
E: Unable to correct problems, you have held broken packages.
cross-release        FAIL badpkg
blame: arg:sshuttle_1.3.1-1ubuntu1_all.deb deb:sshuttle
badpkg: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
autopkgtest [11:31:45]: @@@@@@@@@@@@@@@@@@@@ summary
cross-release        FAIL badpkg
----
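For reference, debian/tests/cross-release accepts release codenames as positional arguments (they are passed to set_releases() in the script's __main__ block), so once the lxd dependency problem above is resolved the test can also be exercised by hand on a host that satisfies d/tests/control (root, lxd, ssh, python3, python3-apt, python3-distro-info). Note that it reconfigures the host (lxd init --auto, an extra bridge and routes), so a disposable VM is advisable. The release names below are only an example:

$ sudo ./debian/tests/cross-release noble jammy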