Merge ~gjolly/ubuntu/+source/sshuttle:merge-1.3.1-1-devel into ubuntu/+source/sshuttle:debian/sid
- Git repository: lp:~gjolly/ubuntu/+source/sshuttle
- Branch: merge-1.3.1-1-devel
- Merge into: debian/sid

Proposed by: Gauthier Jolly
Status: Needs review
Proposed branch: ~gjolly/ubuntu/+source/sshuttle:merge-1.3.1-1-devel
Merge into: ubuntu/+source/sshuttle:debian/sid
Diff against target: 850 lines (+793/-1), 4 files modified
  - debian/changelog (+87/-0)
  - debian/control (+2/-1)
  - debian/tests/control (+3/-0)
  - debian/tests/cross-release (+701/-0)
Related bugs:

Reviewers:
  - Vladimir Petko (community): Abstain
  - git-ubuntu import: Pending
Review via email:
Commit message
Manual merge of the package. We were carrying a delta in d/control that is not needed anymore.
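A quick way to confirm that claim on the imported debian/sid tree is to check that nothing in d/control still references distutils. This is a hypothetical sanity check, not part of the proposal:

```sh
# Hypothetical check against the debian/sid import (paths assumed):
# the old Ubuntu delta only removed python3-distutils from debian/control,
# so if Debian's control file no longer mentions distutils, the delta is obsolete.
grep -n distutils debian/control \
  && echo "distutils still referenced - keep the delta" \
  || echo "no distutils reference - the delta can be dropped"
```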
Description of the change
Revision history for this message

Gauthier Jolly (gjolly) wrote:
Closing in favor of the sync: https:/
Unmerged commits

- 62ff4b4... by Gauthier Jolly: update-maintainer
- ec4423b... by Gauthier Jolly: reconstruct-changelog
- b92fc59... by Gauthier Jolly: merge-changelogs
- 083bff7... by Gauthier Jolly: d/t/control, cross-release: Add autopkgtest for cross-release compatibility checks
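Three of these commits (update-maintainer, reconstruct-changelog, merge-changelogs) are the bookkeeping commits that the git-ubuntu merge workflow produces; the fourth re-applies the retained autopkgtest delta. A rough sketch of the corresponding commands follows; the exact invocations are an assumption and may differ from what was actually run:

```sh
# Sketch of a typical git-ubuntu merge (command names from the git-ubuntu
# workflow; refs and flags here are illustrative, not a transcript of this MP):
git ubuntu clone sshuttle && cd sshuttle
git ubuntu merge start ubuntu/devel    # tags the old Ubuntu and new Debian bases
# ...rebase the remaining Ubuntu delta (the d/t/control + cross-release commit)...
git ubuntu merge finish ubuntu/devel   # creates reconstruct-changelog / merge-changelogs
update-maintainer                      # from ubuntu-dev-tools; sets Maintainer to Ubuntu Developers
```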
Preview Diff
diff --git a/debian/changelog b/debian/changelog
index 749f9d2..ccb9464 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,13 @@
+sshuttle (1.3.1-1ubuntu1) questing; urgency=medium
+
+  * Merge with Debian unstable. Remaining changes:
+    - d/t/control,cross-release: Add autopkgtest for cross-release
+      compatibility checks
+  * Drop the removal of python3-distutils from d/control as distutils has also
+    been dropped from the upstream Debian control file.
+
+ -- Gauthier Jolly <contact@gjolly.fr>  Fri, 02 May 2025 14:24:08 +0000
+
 sshuttle (1.3.1-1) unstable; urgency=medium
 
   * New upstream version.
@@ -37,6 +47,21 @@ sshuttle (1.1.2-1) unstable; urgency=medium
 
  -- Brian May <bam@debian.org>  Mon, 19 Feb 2024 11:55:11 +1100
 
+sshuttle (1.1.1-2ubuntu2) noble; urgency=medium
+
+  * Drop dependency on python3-distutils.
+
+ -- Matthias Klose <doko@ubuntu.com>  Sat, 09 Mar 2024 12:23:57 +0100
+
+sshuttle (1.1.1-2ubuntu1) noble; urgency=low
+
+  * Merge from Debian unstable. Remaining changes:
+    - d/t/control,cross-release: Add autopkgtest for
+      cross-release compatibility checks
+  * Drop all patches as included in new release.
+
+ -- James Page <james.page@ubuntu.com>  Wed, 14 Feb 2024 09:49:52 +0000
+
 sshuttle (1.1.1-2) unstable; urgency=medium
 
   [ Debian Janitor ]
@@ -60,6 +85,36 @@ sshuttle (1.1.0-1) unstable; urgency=medium
 
  -- Brian May <bam@debian.org>  Fri, 28 Jan 2022 09:57:26 +1100
 
+sshuttle (1.0.5-1ubuntu4) jammy; urgency=medium
+
+  * d/p/*use-pty.patch: Cherry-picked from upstream master to fix
+    shuttle permissions failure (LP: #1965829).
+
+ -- Corey Bryant <corey.bryant@canonical.com>  Mon, 21 Mar 2022 16:50:35 -0400
+
+sshuttle (1.0.5-1ubuntu3) impish; urgency=medium
+
+  * d/t/cross-release:
+    - fix flakiness and speed of test
+    - install net-tools in testbed instances
+
+ -- Dan Streetman <ddstreet@canonical.com>  Wed, 23 Jun 2021 16:34:30 -0400
+
+sshuttle (1.0.5-1ubuntu2) impish; urgency=medium
+
+  * d/t/cross-release: reduce total test time by waiting less
+    for expected timeouts, and fix when we notice sshuttle started
+
+ -- Dan Streetman <ddstreet@canonical.com>  Mon, 10 May 2021 10:46:14 -0400
+
+sshuttle (1.0.5-1ubuntu1) hirsute; urgency=medium
+
+  * Merge with Debian; remaining changes:
+    - d/t/control, d/t/cross-release:
+      - add autopkgtest for cross-release compatibility checks
+
+ -- Matthias Klose <doko@ubuntu.com>  Tue, 16 Mar 2021 10:25:36 +0100
+
 sshuttle (1.0.5-1) unstable; urgency=medium
 
   * New upstream version.
@@ -67,6 +122,37 @@ sshuttle (1.0.5-1) unstable; urgency=medium
 
  -- Brian May <bam@debian.org>  Tue, 29 Dec 2020 11:00:34 +1100
 
+sshuttle (1.0.4-1ubuntu4) groovy; urgency=medium
+
+  * d/t/cross-release:
+    - test without providing --python param
+
+ -- Dan Streetman <ddstreet@canonical.com>  Wed, 30 Sep 2020 17:23:22 -0400
+
+sshuttle (1.0.4-1ubuntu3) groovy; urgency=medium
+
+  * d/t/cross-release:
+    - fixes for autopkgtest
+
+ -- Dan Streetman <ddstreet@canonical.com>  Sat, 19 Sep 2020 08:28:03 -0400
+
+sshuttle (1.0.4-1ubuntu2) groovy; urgency=medium
+
+  * d/t/cross-release: fix error in checking sshuttle version
+
+ -- Dan Streetman <ddstreet@canonical.com>  Fri, 18 Sep 2020 19:22:08 -0400
+
+sshuttle (1.0.4-1ubuntu1) groovy; urgency=medium
+
+  * d/p/lp1873368/0001-Fix-python2-server-compatibility.patch,
+    d/p/lp1873368/0002-Fix-flake8-line-too-long.patch,
+    d/p/lp1873368/0003-Fix-python2-client-compatibility.patch:
+    - fix compatibility with remote py2 (LP: #1873368)
+  * d/t/control, d/t/cross-release:
+    - add autopkgtest for cross-release compatibility checks
+
+ -- Dan Streetman <ddstreet@canonical.com>  Fri, 18 Sep 2020 13:57:01 -0400
+
 sshuttle (1.0.4-1) unstable; urgency=low
 
   [ Debian Janitor ]
@@ -286,3 +372,4 @@ sshuttle (0.42-1) unstable; urgency=low
   * Write manpage for the Debian release
 
  -- Javier Fernandez-Sanguino Pen~a <jfs@debian.org>  Wed, 27 Oct 2010 02:50:49 +0200
+
diff --git a/debian/control b/debian/control
index ccbd6f5..458ac61 100644
--- a/debian/control
+++ b/debian/control
@@ -1,7 +1,8 @@
 Source: sshuttle
 Section: net
 Priority: optional
-Maintainer: Brian May <bam@debian.org>
+Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
+XSBC-Original-Maintainer: Brian May <bam@debian.org>
 Build-Depends: debhelper-compat (= 13), dh-python,
  python3-all, python3-pytest,
  python3-sphinx,
diff --git a/debian/tests/control b/debian/tests/control
new file mode 100644
index 0000000..42bf2b9
--- /dev/null
+++ b/debian/tests/control
@@ -0,0 +1,3 @@
+Tests: cross-release
+Restrictions: allow-stderr, isolation-machine, needs-root, breaks-testbed, skippable
+Depends: @, lxd, ssh, python3, python3-apt, python3-distro-info
diff --git a/debian/tests/cross-release b/debian/tests/cross-release
new file mode 100644
index 0000000..575d6a4
--- /dev/null
+++ b/debian/tests/cross-release
@@ -0,0 +1,701 @@
150 | 1 | #!/usr/bin/python3 | ||
151 | 2 | # | ||
152 | 3 | # This test uses lxd to create a container for each supported Ubuntu release, | ||
153 | 4 | # and test if sshuttle works from the local testbed (using the sshuttle under test) | ||
154 | 5 | # to the remote container. | ||
155 | 6 | # | ||
156 | 7 | # This also tests the reverse, by creating a container matching the testbed's release, | ||
157 | 8 | # and connecting from each supported Ubuntu release's container. Note that the reverse | ||
158 | 9 | # direction tests *do not* test the sshuttle under test by this autopkgtest, since | ||
159 | 10 | # on a "remote" system sshuttle is not involved at all, and does not even need to be | ||
160 | 11 | # installed; the reverse direction test primarily tests if anything *else* has changed | ||
161 | 12 | # that breaks the *existing* sshuttle on the instance (most likely, changes in python). | ||
162 | 13 | |||
163 | 14 | import apt_pkg | ||
164 | 15 | import functools | ||
165 | 16 | import ipaddress | ||
166 | 17 | import json | ||
167 | 18 | import os | ||
168 | 19 | import re | ||
169 | 20 | import sys | ||
170 | 21 | import subprocess | ||
171 | 22 | import tempfile | ||
172 | 23 | import time | ||
173 | 24 | import unittest | ||
174 | 25 | |||
175 | 26 | from aptsources.distro import get_distro | ||
176 | 27 | from contextlib import suppress | ||
177 | 28 | from distro_info import UbuntuDistroInfo | ||
178 | 29 | from pathlib import Path | ||
179 | 30 | |||
180 | 31 | |||
181 | 32 | DISTROINFO = UbuntuDistroInfo() | ||
182 | 33 | VALID_RELEASES = set(DISTROINFO.supported_esm() + DISTROINFO.supported() + [DISTROINFO.devel()]) | ||
183 | 34 | RELEASES = [] | ||
184 | 35 | TESTBED = None | ||
185 | 36 | SSH_KEY = None | ||
186 | 37 | SSH_CONFIG = None | ||
187 | 38 | IF_NAME = 'eth1' | ||
188 | 39 | BR_NAME = 'br1' | ||
189 | 40 | |||
190 | 41 | # really silly that users need to call this...especially just to use version_compare | ||
191 | 42 | apt_pkg.init_system() | ||
192 | 43 | |||
193 | 44 | class Testbed(object): | ||
194 | 45 | def __init__(self): | ||
195 | 46 | self.net = {} | ||
196 | 47 | self.index = {} | ||
197 | 48 | self._find_subnets() | ||
198 | 49 | |||
199 | 50 | @property | ||
200 | 51 | def shared_glob(self): | ||
201 | 52 | '''Get the 10.X.* glob-format subnet for ~/.ssh/config usage''' | ||
202 | 53 | b = self.net.get('shared').exploded.split('.')[1] | ||
203 | 54 | return f'10.{b}.0.*' | ||
204 | 55 | |||
205 | 56 | @property | ||
206 | 57 | def shared_next(self): | ||
207 | 58 | '''This is the next unique ip address on the shared subnet''' | ||
208 | 59 | return self._next_interface('shared') | ||
209 | 60 | |||
210 | 61 | @property | ||
211 | 62 | def remote(self): | ||
212 | 63 | '''This is the unique private subnet used for each "remote" instance''' | ||
213 | 64 | return self._interface('remote') | ||
214 | 65 | |||
215 | 66 | @property | ||
216 | 67 | def reverse(self): | ||
217 | 68 | '''This is the unique private subnet used for the single "reverse remote" instance''' | ||
218 | 69 | return self._interface('reverse') | ||
219 | 70 | |||
220 | 71 | def _network(self, b): | ||
221 | 72 | return ipaddress.ip_network(f'10.{b}.0.0/24') | ||
222 | 73 | |||
223 | 74 | def _network_to_interface(self, network, index=0): | ||
224 | 75 | addr = list(network.hosts())[index].exploded | ||
225 | 76 | return ipaddress.ip_interface(f'{addr}/{network.prefixlen}') | ||
226 | 77 | |||
227 | 78 | def _interface(self, name, index=0): | ||
228 | 79 | return self._network_to_interface(self.net.get(name), index) | ||
229 | 80 | |||
230 | 81 | def _next_interface(self, name): | ||
231 | 82 | index = self.index.get(name) | ||
232 | 83 | self.index[name] = index + 1 | ||
233 | 84 | return self._interface(name, index) | ||
234 | 85 | |||
235 | 86 | def _find_subnets(self): | ||
236 | 87 | start = 254 | ||
237 | 88 | for name in ['shared', 'remote', 'reverse']: | ||
238 | 89 | start = self._find_subnet(name, start) | ||
239 | 90 | |||
240 | 91 | def _find_subnet(self, name, start): | ||
241 | 92 | for b in range(start, 0, -1): | ||
242 | 93 | network = self._network(b) | ||
243 | 94 | iface = self._network_to_interface(network) | ||
244 | 95 | result = subprocess.run(f'ip r get {iface.ip}'.split(), | ||
245 | 96 | encoding='utf-8', stdout=subprocess.PIPE, stderr=subprocess.PIPE) | ||
246 | 97 | # we try to find a subnet that isn't locally reachable | ||
247 | 98 | # if returncode != 0, then the subnet isn't reachable (i.e. no gateway) | ||
248 | 99 | # if 'via' is in stdout, then the subnet isn't locally reachable | ||
249 | 100 | if result.returncode != 0 or 'via' in result.stdout: | ||
250 | 101 | self.net[name] = network | ||
251 | 102 | self.index[name] = 0 | ||
252 | 103 | return b - 1 | ||
253 | 104 | else: | ||
254 | 105 | raise Exception('Could not find any 10.* subnet to use for private addresses') | ||
255 | 106 | |||
256 | 107 | @functools.lru_cache | ||
257 | 108 | def get_arch(): | ||
258 | 109 | return run_cmd('dpkg --print-architecture').stdout.strip() | ||
259 | 110 | |||
260 | 111 | def is_expected_failure(src, dst, python): | ||
261 | 112 | if not python and is_expected_failure_nopy(src, dst): | ||
262 | 113 | return True | ||
263 | 114 | if python == 'python2' and is_expected_failure_py2(src, dst): | ||
264 | 115 | return True | ||
265 | 116 | if python == 'python3' and is_expected_failure_py3(src, dst): | ||
266 | 117 | return True | ||
267 | 118 | |||
268 | 119 | # otherwise, we don't expect failure | ||
269 | 120 | return False | ||
270 | 121 | |||
271 | 122 | def is_expected_failure_nopy(src, dst): | ||
272 | 123 | # failure due to regression in patch to detect python command | ||
273 | 124 | # should be fixed in version after this; LP: #1897961 | ||
274 | 125 | if src.release == 'xenial' and apt_pkg.version_compare(src.sshuttle_version, '0.76-1ubuntu1.1') <= 0: | ||
275 | 126 | return True | ||
276 | 127 | |||
277 | 128 | def is_expected_failure_py2(src, dst): | ||
278 | 129 | # failure due to regression from initial fix for py3.8 fix | ||
279 | 130 | # should be fixed in version after this; LP: #1873368 | ||
280 | 131 | if src.release == 'focal' and apt_pkg.version_compare(src.sshuttle_version, '0.78.5-1ubuntu1') <= 0: | ||
281 | 132 | return True | ||
282 | 133 | |||
283 | 134 | def is_expected_failure_py3(src, dst): | ||
284 | 135 | # expected failure: trusty -> any | ||
285 | 136 | # since trusty is now ESM only, this isn't expected to be fixed | ||
286 | 137 | if src.release == 'trusty': | ||
287 | 138 | return True | ||
288 | 139 | |||
289 | 140 | # failure with py3.8 (or later) target, which is default py3 in focal (or later) | ||
290 | 141 | if DISTROINFO.version(dst.release) >= DISTROINFO.version('focal'): | ||
291 | 142 | # should be fixed in version after each of these; LP: #1873368 | ||
292 | 143 | if src.release == 'xenial' and apt_pkg.version_compare(src.sshuttle_version, '0.76-1ubuntu1') <= 0: | ||
293 | 144 | return True | ||
294 | 145 | if src.release == 'bionic' and apt_pkg.version_compare(src.sshuttle_version, '0.78.3-1ubuntu1') <= 0: | ||
295 | 146 | return True | ||
296 | 147 | if src.release == 'focal' and apt_pkg.version_compare(src.sshuttle_version, '0.78.5-1ubuntu1') <= 0: | ||
297 | 148 | return True | ||
298 | 149 | |||
299 | 150 | # otherwise, we don't expect failure | ||
300 | 151 | return False | ||
301 | 152 | |||
302 | 153 | def set_releases(releases): | ||
303 | 154 | invalid_releases = list(set(releases) - VALID_RELEASES) | ||
304 | 155 | if invalid_releases: | ||
305 | 156 | print(f'ignoring invalid release(s): {", ".join(invalid_releases)}') | ||
306 | 157 | valid_releases = list(set(releases) & VALID_RELEASES) | ||
307 | 158 | if valid_releases: | ||
308 | 159 | print(f'limiting remote release(s) to: {", ".join(valid_releases)}') | ||
309 | 160 | RELEASES.clear() | ||
310 | 161 | RELEASES.extend(valid_releases) | ||
311 | 162 | |||
312 | 163 | def load_tests(loader, standard_tests, pattern): | ||
313 | 164 | suite = unittest.TestSuite() | ||
314 | 165 | for release in sorted(RELEASES or VALID_RELEASES): | ||
315 | 166 | cls = type(f'SshuttleTest_{release}', (SshuttleTest,), | ||
316 | 167 | {'release': release}) | ||
317 | 168 | suite.addTests(loader.loadTestsFromTestCase(cls)) | ||
318 | 169 | return suite | ||
319 | 170 | |||
320 | 171 | def setUpModule(): | ||
321 | 172 | global TESTBED | ||
322 | 173 | |||
323 | 174 | _run_cmd('lxd init --auto', check=True) | ||
324 | 175 | |||
325 | 176 | TESTBED = Testbed() | ||
326 | 177 | |||
327 | 178 | add_shared_bridge() | ||
328 | 179 | add_private_subnets() | ||
329 | 180 | init_ssh_config() | ||
330 | 181 | init_base_test_class() | ||
331 | 182 | |||
332 | 183 | def tearDownModule(): | ||
333 | 184 | remove_ssh_config() | ||
334 | 185 | remove_private_subnets() | ||
335 | 186 | remove_shared_bridge() | ||
336 | 187 | del SshuttleTest.reverse_remote | ||
337 | 188 | |||
338 | 189 | def add_shared_bridge(): | ||
339 | 190 | _run_cmd(f'ip l add dev {BR_NAME} type bridge') | ||
340 | 191 | _run_cmd(f'ip l set up dev {BR_NAME}') | ||
341 | 192 | _run_cmd(f'ip a add {TESTBED.shared_next} dev {BR_NAME}') | ||
342 | 193 | |||
343 | 194 | def add_private_subnets(): | ||
344 | 195 | # Force the private addrs unreachable so we don't try to reach them out our normal gateway | ||
345 | 196 | _run_cmd(f'ip r add {TESTBED.remote.network} dev lo') | ||
346 | 197 | _run_cmd(f'ip r add {TESTBED.reverse.network} dev lo') | ||
347 | 198 | |||
348 | 199 | def remove_private_subnets(): | ||
349 | 200 | _run_cmd(f'ip r del {TESTBED.remote.network} dev lo') | ||
350 | 201 | _run_cmd(f'ip r del {TESTBED.reverse.network} dev lo') | ||
351 | 202 | |||
352 | 203 | def remove_shared_bridge(): | ||
353 | 204 | _run_cmd(f'ip l del dev {BR_NAME}') | ||
354 | 205 | |||
355 | 206 | def init_ssh_config(): | ||
356 | 207 | global SSH_KEY | ||
357 | 208 | global SSH_CONFIG | ||
358 | 209 | |||
359 | 210 | id_rsa = Path('/root/.ssh/id_rsa') | ||
360 | 211 | if not id_rsa.exists(): | ||
361 | 212 | _run_cmd(['ssh-keygen', '-f', str(id_rsa), '-P', ''], check=True) | ||
362 | 213 | SSH_KEY = id_rsa.with_suffix('.pub').read_text(encoding='utf-8') | ||
363 | 214 | |||
364 | 215 | SSH_CONFIG = '\n'.join([f'Host {TESTBED.remote.ip} {TESTBED.reverse.ip} {TESTBED.shared_glob}', | ||
365 | 216 | ' StrictHostKeyChecking no', | ||
366 | 217 | ' UserKnownHostsFile /dev/null', | ||
367 | 218 | ' ConnectTimeout 10', | ||
368 | 219 | ' ConnectionAttempts 18', | ||
369 | 220 | '']) | ||
370 | 221 | config = Path('/root/.ssh/config') | ||
371 | 222 | if config.exists(): | ||
372 | 223 | content = config.read_text(encoding='utf-8') or '' | ||
373 | 224 | if content and not content.endswith('\n'): | ||
374 | 225 | content += '\n' | ||
375 | 226 | else: | ||
376 | 227 | content = '' | ||
377 | 228 | content += SSH_CONFIG | ||
378 | 229 | config.write_text(content, encoding='utf-8') | ||
379 | 230 | |||
380 | 231 | def remove_ssh_config(): | ||
381 | 232 | config = Path('/root/.ssh/config') | ||
382 | 233 | content = config.read_text(encoding='utf-8') | ||
383 | 234 | config.write_text(content.replace(SSH_CONFIG, ''), encoding='utf-8') | ||
384 | 235 | |||
385 | 236 | def init_base_test_class(): | ||
386 | 237 | cls = SshuttleTest | ||
387 | 238 | |||
388 | 239 | cls.release = get_distro().codename | ||
389 | 240 | reverse_remote = Remote(f'reverse-remote-{cls.release}', cls.release) | ||
390 | 241 | reverse_remote.add_ssh_key(SSH_KEY) | ||
391 | 242 | reverse_remote.add_ssh_config(SSH_CONFIG) | ||
392 | 243 | reverse_remote.private = TESTBED.reverse | ||
393 | 244 | reverse_remote.add_start_cmd(f'ip a add {reverse_remote.private} dev lo') | ||
394 | 245 | reverse_remote.snapshot_create() | ||
395 | 246 | cls.reverse_remote = reverse_remote | ||
396 | 247 | |||
397 | 248 | def _run_cmd(cmd, **kwargs): | ||
398 | 249 | if type(cmd) == str: | ||
399 | 250 | cmd = cmd.split() | ||
400 | 251 | return subprocess.run(cmd, **kwargs) | ||
401 | 252 | |||
402 | 253 | def run_cmd(cmd, **kwargs): | ||
403 | 254 | kwargs.setdefault('stdout', subprocess.PIPE) | ||
404 | 255 | kwargs.setdefault('stderr', subprocess.STDOUT) | ||
405 | 256 | kwargs.setdefault('encoding', 'utf-8') | ||
406 | 257 | return _run_cmd(cmd, **kwargs) | ||
407 | 258 | |||
408 | 259 | |||
409 | 260 | class Remote(object): | ||
410 | 261 | def __init__(self, name, release): | ||
411 | 262 | self.name = name | ||
412 | 263 | self.release = release | ||
413 | 264 | self.shared = TESTBED.shared_next | ||
414 | 265 | self._start_cmds = [] | ||
415 | 266 | |||
416 | 267 | cmd = f'lxc delete --force {self.name}' | ||
417 | 268 | self.log(cmd) | ||
418 | 269 | run_cmd(cmd) | ||
419 | 270 | |||
420 | 271 | image = f'ubuntu-daily:{release}' | ||
421 | 272 | cmd = f'lxc launch --quiet {image} {self.name}' | ||
422 | 273 | self.log(cmd) | ||
423 | 274 | result = run_cmd(cmd) | ||
424 | 275 | if result.returncode != 0: | ||
425 | 276 | raise Exception(f'Could not launch {self.name}: {result.stdout}') | ||
426 | 277 | |||
427 | 278 | cmd = f'lxc config device add {self.name} {IF_NAME} nic name={IF_NAME} nictype=bridged parent={BR_NAME}' | ||
428 | 279 | self.log(cmd) | ||
429 | 280 | result = run_cmd(cmd) | ||
430 | 281 | if result.returncode != 0: | ||
431 | 282 | raise Exception(f'Could not add {IF_NAME}: {result.stdout}') | ||
432 | 283 | |||
433 | 284 | self.add_start_cmd(f'ip l set up dev {IF_NAME}') | ||
434 | 285 | self.add_start_cmd(f'ip a add {self.shared} dev {IF_NAME}') | ||
435 | 286 | |||
436 | 287 | self._wait_for_networking() | ||
437 | 288 | self._create_ssh_key() | ||
438 | 289 | self._add_local_ppas() | ||
439 | 290 | self._add_proposed() | ||
440 | 291 | self._apt_update_upgrade() | ||
441 | 292 | self._install_net_tools() | ||
442 | 293 | self._install_sshuttle() | ||
443 | 294 | self._install_python() | ||
444 | 295 | self.stop(force=False) | ||
445 | 296 | |||
446 | 297 | def log(self, msg): | ||
447 | 298 | print(f'{self.name}: {msg}') | ||
448 | 299 | |||
449 | 300 | def save_journal(self, testname, remotename): | ||
450 | 301 | artifacts_dir = os.getenv('AUTOPKGTEST_ARTIFACTS') | ||
451 | 302 | if not artifacts_dir: | ||
452 | 303 | self.log('AUTOPKGTEST_ARTIFACTS unset, not saving container journal') | ||
453 | 304 | return | ||
454 | 305 | |||
455 | 306 | dst = Path(artifacts_dir) / testname / remotename | ||
456 | 307 | dst.mkdir(parents=True, exist_ok=True) | ||
457 | 308 | |||
458 | 309 | self.lxc_exec('journalctl --sync --flush') | ||
459 | 310 | self.lxc_file_pull('/var/log/journal', dst, recursive=True) | ||
460 | 311 | |||
461 | 312 | def _wait_for_networking(self): | ||
462 | 313 | self.log(f'Waiting for {self.name} to finish starting') | ||
463 | 314 | for sec in range(120): | ||
464 | 315 | if 'via' in self.lxc_exec('ip r show default').stdout: | ||
465 | 316 | break | ||
466 | 317 | time.sleep(0.5) | ||
467 | 318 | else: | ||
468 | 319 | raise Exception(f'Timed out waiting for remote {self.name} networking') | ||
469 | 320 | |||
470 | 321 | def _create_ssh_key(self): | ||
471 | 322 | self.log('creating ssh key') | ||
472 | 323 | self.lxc_exec(['ssh-keygen', '-f', '/root/.ssh/id_rsa', '-P', '']) | ||
473 | 324 | self._ssh_key = self.lxc_exec('cat /root/.ssh/id_rsa.pub').stdout | ||
474 | 325 | |||
475 | 326 | def _add_local_ppas(self): | ||
476 | 327 | paths = list(Path('/etc/apt/sources.list.d').glob('*.list')) | ||
477 | 328 | paths.append(Path('/etc/apt/sources.list')) | ||
478 | 329 | ppas = [] | ||
479 | 330 | for path in paths: | ||
480 | 331 | for line in path.read_text(encoding='utf-8').splitlines(): | ||
481 | 332 | match = re.match(r'^deb .*ppa.launchpad.net/(?P<team>\w+)/(?P<ppa>\w+)/ubuntu', line) | ||
482 | 333 | if match: | ||
483 | 334 | ppas.append(f'ppa:{match.group("team")}/{match.group("ppa")}') | ||
484 | 335 | for ppa in ppas: | ||
485 | 336 | self.log(f'adding PPA {ppa}') | ||
486 | 337 | self.lxc_exec(['add-apt-repository', '-y', ppa]) | ||
487 | 338 | |||
488 | 339 | def _add_proposed(self): | ||
489 | 340 | with tempfile.TemporaryDirectory() as d: | ||
490 | 341 | f = Path(d) / 'tempfile' | ||
491 | 342 | self.lxc_file_pull('/etc/apt/sources.list', str(f)) | ||
492 | 343 | for line in f.read_text(encoding='utf-8').splitlines(): | ||
493 | 344 | match = re.match(rf'^deb (?P<uri>\S+) {self.release} main.*', line) | ||
494 | 345 | if match: | ||
495 | 346 | uri = match.group('uri') | ||
496 | 347 | components = 'main universe restricted multiverse' | ||
497 | 348 | proposed_line = f'deb {uri} {self.release}-proposed {components}' | ||
498 | 349 | self.log(f'adding {self.release}-proposed using {uri}') | ||
499 | 350 | self.lxc_exec(['add-apt-repository', '-y', proposed_line]) | ||
500 | 351 | return | ||
501 | 352 | |||
502 | 353 | def _apt_update_upgrade(self): | ||
503 | 354 | self.log('upgrading packages') | ||
504 | 355 | self.lxc_apt('update') | ||
505 | 356 | self.lxc_apt('upgrade -y') | ||
506 | 357 | |||
507 | 358 | def _install_net_tools(self): | ||
508 | 359 | self.log('installing net-tools') | ||
509 | 360 | result_install = self.lxc_apt('install -y net-tools') | ||
510 | 361 | result_which = self.lxc_exec('which netstat') | ||
511 | 362 | if result_which.returncode != 0: | ||
512 | 363 | err = result_install.stdout + result_which.stdout | ||
513 | 364 | raise Exception(f'could not install net-tools: {err}') | ||
514 | 365 | |||
515 | 366 | def _install_sshuttle(self): | ||
516 | 367 | self.log('installing sshuttle') | ||
517 | 368 | result_install = self.lxc_apt('install -y sshuttle') | ||
518 | 369 | result_which = self.lxc_exec('which sshuttle') | ||
519 | 370 | if result_which.returncode != 0: | ||
520 | 371 | err = result_install.stdout + result_which.stdout | ||
521 | 372 | raise Exception(f'could not install sshuttle: {err}') | ||
522 | 373 | self.sshuttle_version = self.lxc_exec('dpkg-query -f ${Version} -W sshuttle').stdout | ||
523 | 374 | |||
524 | 375 | def _install_python(self): | ||
525 | 376 | self.log('installing python') | ||
526 | 377 | self.lxc_apt('install -y python') | ||
527 | 378 | for python in ['python2', 'python3']: | ||
528 | 379 | result_install = self.lxc_apt(['install', '-y', python]) | ||
529 | 380 | result_which = self.lxc_exec(['which', python]) | ||
530 | 381 | if result_which.returncode != 0: | ||
531 | 382 | err = result_install.stdout + result_which.stdout | ||
532 | 383 | raise Exception(f'could not install {python}: {err}') | ||
533 | 384 | |||
534 | 385 | def snapshot_create(self, name='default'): | ||
535 | 386 | self.log(f'creating snapshot: {name}') | ||
536 | 387 | self.stop(force=False) | ||
537 | 388 | subprocess.run(['lxc', 'snapshot', self.name, name], check=True) | ||
538 | 389 | |||
539 | 390 | def snapshot_restore(self, name='default', start=True): | ||
540 | 391 | self.log(f'restoring snapshot: {name}') | ||
541 | 392 | self.stop() | ||
542 | 393 | subprocess.run(['lxc', 'restore', self.name, name], check=True) | ||
543 | 394 | if start: | ||
544 | 395 | self.start() | ||
545 | 396 | |||
546 | 397 | def snapshot_update(self, name='default'): | ||
547 | 398 | self.log(f'updating snapshot: {name}') | ||
548 | 399 | subprocess.run(['lxc', 'delete', '--force', f'{self.name}/{name}'], check=True) | ||
549 | 400 | self.snapshot_create(name) | ||
550 | 401 | |||
551 | 402 | @functools.cached_property | ||
552 | 403 | def ssh_key(self): | ||
553 | 404 | return self._ssh_key | ||
554 | 405 | |||
555 | 406 | def add_start_cmd(self, cmd): | ||
556 | 407 | self.log(f'adding start cmd: {cmd}') | ||
557 | 408 | self._start_cmds.append(cmd) | ||
558 | 409 | |||
559 | 410 | def add_file_content(self, path, content): | ||
560 | 411 | with tempfile.TemporaryDirectory() as d: | ||
561 | 412 | localfile = Path(d) / Path(path).name | ||
562 | 413 | self.lxc_file_pull(path, str(localfile)) | ||
563 | 414 | existing_content = localfile.read_text(encoding='utf-8') or '' | ||
564 | 415 | if content not in existing_content: | ||
565 | 416 | if existing_content and not existing_content.endswith('\n'): | ||
566 | 417 | existing_content += '\n' | ||
567 | 418 | existing_content += content | ||
568 | 419 | localfile.write_text(existing_content) | ||
569 | 420 | self.lxc_file_push(str(localfile), path) | ||
570 | 421 | |||
571 | 422 | def add_ssh_key(self, key): | ||
572 | 423 | self.log(f'adding ssh key: {key.strip()}') | ||
573 | 424 | self.add_file_content('/root/.ssh/authorized_keys', key) | ||
574 | 425 | |||
575 | 426 | def add_ssh_config(self, config): | ||
576 | 427 | self.log('adding ssh config') | ||
577 | 428 | self.add_file_content('/root/.ssh/config', config) | ||
578 | 429 | |||
579 | 430 | def lxc_exec(self, cmd, **kwargs): | ||
580 | 431 | if type(cmd) == str: | ||
581 | 432 | cmd = cmd.split() | ||
582 | 433 | return run_cmd(['lxc', 'exec', self.name, '--'] + cmd, **kwargs) | ||
583 | 434 | |||
584 | 435 | def lxc_apt(self, cmd, **kwargs): | ||
585 | 436 | if type(cmd) == str: | ||
586 | 437 | cmd = cmd.split() | ||
587 | 438 | return run_cmd(['lxc', 'exec', self.name, '--env', 'DEBIAN_FRONTEND=noninteractive', '--', 'apt'] + cmd, **kwargs) | ||
588 | 439 | |||
589 | 440 | def lxc_file_pull(self, remote, local, fail_if_missing=False, recursive=False): | ||
590 | 441 | remote = f'{self.name}{remote}' | ||
591 | 442 | self.log(f'{local} <- {remote}') | ||
592 | 443 | cmd = ['lxc', 'file', 'pull', remote, local] | ||
593 | 444 | if recursive: | ||
594 | 445 | cmd += ['--recursive', '--create-dirs'] | ||
595 | 446 | try: | ||
596 | 447 | run_cmd(cmd, check=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) | ||
597 | 448 | except subprocess.CalledProcessError: | ||
598 | 449 | if fail_if_missing: | ||
599 | 450 | raise | ||
600 | 451 | if recursive: | ||
601 | 452 | self.log('remote dir missing, ignoring') | ||
602 | 453 | return | ||
603 | 454 | localpath = Path(local) | ||
604 | 455 | if localpath.is_dir(): | ||
605 | 456 | localpath = localpath / Path(remote).name | ||
606 | 457 | localpath.touch() | ||
607 | 458 | self.log(f'remote file missing, created empty file {localpath}') | ||
608 | 459 | |||
609 | 460 | def lxc_file_push(self, local, remote): | ||
610 | 461 | remote = f'{self.name}{remote}' | ||
611 | 462 | self.log(f'{local} -> {remote}') | ||
612 | 463 | run_cmd(['lxc', 'file', 'push', local, remote], check=True) | ||
613 | 464 | |||
614 | 465 | @property | ||
615 | 466 | def json(self): | ||
616 | 467 | listjson = run_cmd('lxc list --format json').stdout | ||
617 | 468 | filtered = list(filter(lambda i: i['name'] == self.name, json.loads(listjson))) | ||
618 | 469 | if len(filtered) != 1: | ||
619 | 470 | raise Exception(f'Expected only 1 lxc list entry for {self.name}, found {len(filtered)}:\n{listjson}') | ||
620 | 471 | return filtered[0] | ||
621 | 472 | |||
622 | 473 | @property | ||
623 | 474 | def is_running(self): | ||
624 | 475 | return self.json['status'] == 'Running' | ||
625 | 476 | |||
626 | 477 | def start(self): | ||
627 | 478 | if not self.is_running: | ||
628 | 479 | cmd = f'lxc start {self.name}' | ||
629 | 480 | self.log(cmd) | ||
630 | 481 | result = run_cmd(cmd, check=True) | ||
631 | 482 | if result.stdout: | ||
632 | 483 | self.log(result.stdout) | ||
633 | 484 | self._wait_for_networking() | ||
634 | 485 | for cmd in self._start_cmds: | ||
635 | 486 | self.lxc_exec(cmd) | ||
636 | 487 | |||
637 | 488 | def stop(self, force=True): | ||
638 | 489 | cmd = 'lxc stop' | ||
639 | 490 | if force: | ||
640 | 491 | cmd += ' --force' | ||
641 | 492 | cmd += f' {self.name}' | ||
642 | 493 | self.log(cmd) | ||
643 | 494 | result = run_cmd(cmd) | ||
644 | 495 | if result.stdout: | ||
645 | 496 | self.log(result.stdout) | ||
646 | 497 | |||
647 | 498 | def __del__(self): | ||
648 | 499 | run_cmd(['lxc', 'delete', '--force', self.name], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) | ||
649 | 500 | |||
650 | 501 | |||
651 | 502 | class VerboseAssertionError(AssertionError): | ||
652 | 503 | __logs = [] | ||
653 | 504 | |||
654 | 505 | def __init__(self, *args): | ||
655 | 506 | logs = list(args) + self.read_log() | ||
656 | 507 | super(VerboseAssertionError, self).__init__('\n'.join(logs)) | ||
657 | 508 | |||
658 | 509 | @classmethod | ||
659 | 510 | def add_log(cls, msg): | ||
660 | 511 | cls.__logs.append(str(msg)) | ||
661 | 512 | |||
662 | 513 | @classmethod | ||
663 | 514 | def read_log(cls): | ||
664 | 515 | log = cls.__logs | ||
665 | 516 | cls.__logs = [] | ||
666 | 517 | return log | ||
667 | 518 | |||
668 | 519 | @classmethod | ||
669 | 520 | def clear_log(cls): | ||
670 | 521 | cls.read_log() | ||
671 | 522 | |||
672 | 523 | |||
673 | 524 | class SshuttleTest(unittest.TestCase): | ||
674 | 525 | release = None | ||
675 | 526 | failureException = VerboseAssertionError | ||
676 | 527 | |||
677 | 528 | @classmethod | ||
678 | 529 | def is_arch_supported(cls): | ||
679 | 530 | if cls.release == 'trusty': | ||
680 | 531 | return get_arch() == 'amd64' | ||
681 | 532 | return True | ||
682 | 533 | |||
683 | 534 | @classmethod | ||
684 | 535 | def setUpClass(cls): | ||
685 | 536 | # note that some of the cls attrs used here are set by setUpModule() | ||
686 | 537 | |||
687 | 538 | # this is set by the subclass, and required | ||
688 | 539 | assert(cls.release) | ||
689 | 540 | |||
690 | 541 | if not cls.is_arch_supported(): | ||
691 | 542 | raise unittest.SkipTest(f'Release {cls.release} not available for {get_arch()}') | ||
692 | 543 | |||
693 | 544 | remote = Remote(f'remote-{cls.release}', cls.release) | ||
694 | 545 | remote.add_ssh_key(SSH_KEY) | ||
695 | 546 | remote.add_ssh_config(SSH_CONFIG) | ||
696 | 547 | remote.private = TESTBED.remote | ||
697 | 548 | remote.add_start_cmd(f'ip a add {remote.private} dev lo') | ||
698 | 549 | remote.snapshot_create() | ||
699 | 550 | cls.remote = remote | ||
700 | 551 | |||
701 | 552 | cls.reverse_remote.snapshot_restore() | ||
702 | 553 | cls.reverse_remote.add_ssh_key(cls.remote.ssh_key) | ||
703 | 554 | cls.reverse_remote.snapshot_update() | ||
704 | 555 | |||
705 | 556 | @classmethod | ||
706 | 557 | def tearDownClass(cls): | ||
707 | 558 | del cls.remote | ||
708 | 559 | |||
709 | 560 | def setUp(self): | ||
710 | 561 | self.name = f'testbed-{self.release}' | ||
711 | 562 | self.reverse_remote.snapshot_restore() | ||
712 | 563 | self.remote.snapshot_restore() | ||
713 | 564 | self.sshuttle_process = None | ||
714 | 565 | self.sshuttle_log = tempfile.NamedTemporaryFile() | ||
715 | 566 | self.failureException.clear_log() | ||
716 | 567 | |||
717 | 568 | def tearDown(self): | ||
718 | 569 | self.sshuttle_stop() | ||
719 | 570 | self.sshuttle_log.close() | ||
720 | 571 | self.remote.stop() | ||
721 | 572 | self.reverse_remote.stop() | ||
722 | 573 | |||
723 | 574 | @functools.cached_property | ||
724 | 575 | def sshuttle_version(self): | ||
725 | 576 | return run_cmd('dpkg-query -f ${Version} -W sshuttle').stdout | ||
726 | 577 | |||
727 | 578 | def sshuttle_started_ok(self): | ||
728 | 579 | output = Path(self.sshuttle_log.name).read_text(encoding='utf-8') | ||
729 | 580 | # Unfortunately we have to just grep the output to see if it 'connected' | ||
730 | 581 | # and the specific output format has changed across versions | ||
731 | 582 | # Since all output so far includes 'Connected' we'll use that word | ||
732 | 583 | return 'connected' in output.lower() | ||
733 | 584 | |||
734 | 585 | def sshuttle_start(self, dst, python): | ||
735 | 586 | sshuttle_cmd = 'sshuttle' | ||
736 | 587 | if python: | ||
737 | 588 | sshuttle_cmd += f' --python {python}' | ||
738 | 589 | sshuttle_cmd += f' -r {dst.shared.ip} {dst.private.network}' | ||
739 | 590 | if dst is self.reverse_remote: | ||
740 | 591 | sshuttle_cmd = f'lxc exec {self.remote.name} -- {sshuttle_cmd}' | ||
741 | 592 | print(f'running: {sshuttle_cmd}') | ||
742 | 593 | self.sshuttle_process = subprocess.Popen(sshuttle_cmd.split(), encoding='utf-8', | ||
743 | 594 | stdout=self.sshuttle_log, stderr=self.sshuttle_log) | ||
744 | 595 | print('waiting for sshuttle to start...', end='', flush=True) | ||
745 | 596 | for sec in range(300): | ||
746 | 597 | if self.sshuttle_process.poll() is not None: | ||
747 | 598 | print('sshuttle failed :-(', flush=True) | ||
748 | 599 | break | ||
749 | 600 | if self.sshuttle_started_ok(): | ||
750 | 601 | print('started', flush=True) | ||
751 | 602 | break | ||
752 | 603 | time.sleep(1) | ||
753 | 604 | print('.', end='', flush=True) | ||
754 | 605 | else: | ||
755 | 606 | print("WARNING: timed out waiting for sshuttle to start, the test may fail") | ||
756 | 607 | if self.sshuttle_process.poll() is not None: | ||
757 | 608 | self.fail('sshuttle process failed to start') | ||
758 | 609 | |||
759 | 610 | def sshuttle_stop(self): | ||
760 | 611 | if self.sshuttle_process and self.sshuttle_process.poll() is None: | ||
761 | 612 | print('stopping sshuttle...') | ||
762 | 613 | self.sshuttle_process.terminate() | ||
763 | 614 | with suppress(subprocess.TimeoutExpired): | ||
764 | 615 | self.sshuttle_process.communicate(timeout=30) | ||
765 | 616 | print('sshuttle stopped') | ||
766 | 617 | self.sshuttle_process = None | ||
767 | 618 | return | ||
768 | 619 | |||
769 | 620 | print('sshuttle did not respond, killing sshuttle...') | ||
770 | 621 | self.sshuttle_process.kill() | ||
771 | 622 | with suppress(subprocess.TimeoutExpired): | ||
772 | 623 | self.sshuttle_process.communicate(timeout=30) | ||
773 | 624 | print('sshuttle stopped') | ||
774 | 625 | self.sshuttle_process = None | ||
775 | 626 | return | ||
776 | 627 | |||
777 | 628 | self.fail('sshuttle subprocess refused to stop') | ||
778 | 629 | |||
779 | 630 | def ssh_to(self, remote, expect_timeout=False): | ||
780 | 631 | ssh_cmd = 'ssh' | ||
781 | 632 | if expect_timeout: | ||
782 | 633 | # No need to wait long if we expect it to timeout | ||
783 | 634 | ssh_cmd += ' -o ConnectionAttempts=2' | ||
784 | 635 | ssh_cmd += f' -v {remote.private.ip} -- cat /proc/sys/kernel/hostname' | ||
785 | 636 | if remote is self.reverse_remote: | ||
786 | 637 | ssh_cmd = f'lxc exec {self.remote.name} -- {ssh_cmd}' | ||
787 | 638 | print(f'running: {ssh_cmd}') | ||
788 | 639 | result = run_cmd(ssh_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) | ||
789 | 640 | failed = result.returncode != 0 | ||
790 | 641 | if failed: | ||
791 | 642 | if result.stderr: | ||
792 | 643 | # just print the last line here | ||
793 | 644 | print(result.stderr.splitlines()[-1]) | ||
794 | 645 | if failed != expect_timeout: | ||
795 | 646 | self.failureException.add_log(result.stdout) | ||
796 | 647 | self.failureException.add_log(result.stderr) | ||
797 | 648 | elif result.stdout: | ||
798 | 649 | print(f'Connected to: {result.stdout.strip()}') | ||
799 | 650 | msg = 'ssh' | ||
800 | 651 | if failed: | ||
801 | 652 | msg += ' failed' | ||
802 | 653 | if expect_timeout: | ||
803 | 654 | msg += ' (as expected)' | ||
804 | 655 | else: | ||
805 | 656 | msg += ' connected' | ||
806 | 657 | print(msg, flush=True) | ||
807 | 658 | return not failed | ||
808 | 659 | |||
809 | 660 | def test_local_to_remote_nopy(self): | ||
810 | 661 | self._test_to_remote(self, self.remote, None) | ||
811 | 662 | |||
812 | 663 | def test_local_to_remote_py2(self): | ||
813 | 664 | self._test_to_remote(self, self.remote, 'python2') | ||
814 | 665 | |||
815 | 666 | def test_local_to_remote_py3(self): | ||
816 | 667 | self._test_to_remote(self, self.remote, 'python3') | ||
817 | 668 | |||
818 | 669 | def test_remote_to_reverse_remote_nopy(self): | ||
819 | 670 | self._test_to_remote(self.remote, self.reverse_remote, None) | ||
820 | 671 | |||
821 | 672 | def test_remote_to_reverse_remote_py2(self): | ||
822 | 673 | self._test_to_remote(self.remote, self.reverse_remote, 'python2') | ||
823 | 674 | |||
824 | 675 | def test_remote_to_reverse_remote_py3(self): | ||
825 | 676 | self._test_to_remote(self.remote, self.reverse_remote, 'python3') | ||
826 | 677 | |||
827 | 678 | def _test_to_remote(self, src, dst, python): | ||
828 | 679 | self.failureException.add_log(f'Test detail: {src.name} sshuttle {src.sshuttle_version} to {dst.name} {python if python else ""}') | ||
829 | 680 | print('this ssh connection should timeout:') | ||
830 | 681 | self.assertFalse(self.ssh_to(dst, expect_timeout=True)) | ||
831 | 682 | try: | ||
832 | 683 | self.sshuttle_start(dst, python) | ||
833 | 684 | print('this ssh connection should not timeout:') | ||
834 | 685 | self.assertTrue(self.ssh_to(dst)) | ||
835 | 686 | except AssertionError: | ||
836 | 687 | if is_expected_failure(src, dst, python): | ||
837 | 688 | self.skipTest('This is an expected failure, ignoring test failure') | ||
838 | 689 | else: | ||
839 | 690 | self.failureException.add_log(Path(self.sshuttle_log.name).read_text(encoding='utf-8')) | ||
840 | 691 | testname = '.'.join(self.id().split('.')[-2:]) | ||
841 | 692 | self.remote.save_journal(testname, 'remote') | ||
842 | 693 | self.reverse_remote.save_journal(testname, 'reverse_remote') | ||
843 | 694 | raise | ||
844 | 695 | |||
845 | 696 | |||
846 | 697 | if __name__ == '__main__': | ||
847 | 698 | if len(sys.argv) > 1: | ||
848 | 699 | set_releases(sys.argv[1:]) | ||
849 | 700 | del sys.argv[1:] | ||
850 | 701 | unittest.main(verbosity=2) |
Running the autopkgtests fails with BADPKG since lxd is no longer packaged as a deb in Ubuntu:
---
The following packages have unmet dependencies:
 satisfy:command-line : Depends: sshuttle but it is not going to be installed
                        Depends: lxd but it is not installable
E: Unable to correct problems, you have held broken packages.
cross-release        FAIL badpkg
blame: arg:sshuttle_1.3.1-1ubuntu1_all.deb deb:sshuttle sshuttle
badpkg: Test dependencies are unsatisfiable. A common reason is that your testbed is out of date with respect to the archive, and you need to use a current testbed or run apt-get update or use -U.
autopkgtest [11:31:45]: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ summary
cross-release        FAIL badpkg
----
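The Depends line in d/t/control can no longer be satisfied from the archive because lxd is only shipped as a snap on current releases. One possible way to run the test anyway, offered here as an assumption and not as part of this proposal, is to drop lxd from the test Depends and provide it to the testbed via a setup command, along these lines:

```sh
# Hypothetical local run; the image name and paths are illustrative only.
# Assumes "lxd" has been removed from the Depends line in debian/tests/control,
# since a snap-installed lxd cannot satisfy a deb dependency.
# The test itself runs "lxd init --auto", so only the snap install is needed here.
autopkgtest --setup-commands='snap install lxd' \
    sshuttle_1.3.1-1ubuntu1_all.deb . \
    -- qemu autopkgtest-questing-amd64.img
```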