Merge ~paelzer/ubuntu/+source/qemu:groovy-lp-1890881-lp-1891187 into ubuntu/+source/qemu:ubuntu/groovy-devel
Status: Merged
Approved by: Christian Ehrhardt
Approved revision: a5ec710aeadc3b7a01ace5e2cfb8f7eb3062972d
Merge reported by: Christian Ehrhardt
Merged at revision: a5ec710aeadc3b7a01ace5e2cfb8f7eb3062972d
Proposed branch: ~paelzer/ubuntu/+source/qemu:groovy-lp-1890881-lp-1891187
Merge into: ubuntu/+source/qemu:ubuntu/groovy-devel
Diff against target: 1308 lines (+1250/-0), 9 files modified
- debian/changelog (+16/-0)
- debian/patches/series (+7/-0)
- debian/patches/ubuntu/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch (+45/-0)
- debian/patches/ubuntu/lp1890881-linux-user-completely-re-write-init_guest_space.patch (+725/-0)
- debian/patches/ubuntu/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch (+154/-0)
- debian/patches/ubuntu/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch (+78/-0)
- debian/patches/ubuntu/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch (+66/-0)
- debian/patches/ubuntu/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch (+57/-0)
- debian/patches/ubuntu/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch (+102/-0)
Related bugs:
Reviewer | Review Type | Date Requested | Status
---|---|---|---
Rafael David Tinoco (community) | | | Approve
Canonical Server | | | Pending
Canonical Server packageset reviewers | | | Pending
git-ubuntu developers | | | Pending

Review via email: mp+389514@code.launchpad.net
Commit message
Description of the change
Christian Ehrhardt (paelzer) wrote:
Rafael David Tinoco (rafaeldtinoco) wrote:
Okay, for the first fix you already opened the (security) bug, and documenting the LP number makes sense now. For the second fix (armhf container), I followed the whole patchset and its dependencies (including the follow-up commit fixes), and it also makes sense to me. Apart from that, everything is very straightforward, so I'm +1 with no further questions.
Christian Ehrhardt (paelzer) wrote:
To ssh://git.
* [new tag] upload/
Uploading to ubuntu (via ftp to upload.ubuntu.com):
Uploading qemu_5.
Uploading qemu_5.
Uploading qemu_5.
Uploading qemu_5.
Successfully uploaded packages.
Christian Ehrhardt (paelzer) wrote:
This migrated.
Preview Diff
1 | diff --git a/debian/changelog b/debian/changelog |
2 | index ea3bc89..973aa23 100644 |
3 | --- a/debian/changelog |
4 | +++ b/debian/changelog |
5 | @@ -1,3 +1,19 @@ |
6 | +qemu (1:5.0-5ubuntu5) groovy; urgency=medium |
7 | + |
8 | + * fix qemu-user-static initialization to allow executing systemd |
9 | + (LP: #1890881) |
10 | + - d/p/u/lp1890881-linux-user-completely-re-write-init_guest_space.patch |
11 | + - d/p/u/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch |
12 | + - d/p/u/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch |
13 | + - d/p/u/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch |
14 | + - d/p/u/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch |
15 | + - d/p/u/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch |
16 | + * fix assertion failue in net_tx_pkt_add_raw_fragment (LP: #1891187) |
17 | + CVE-2020-16092 |
18 | + - d/p/u/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch |
19 | + |
20 | + -- Christian Ehrhardt <christian.ehrhardt@canonical.com> Wed, 19 Aug 2020 07:19:42 +0200 |
21 | + |
22 | qemu (1:5.0-5ubuntu4) groovy; urgency=medium |
23 | |
24 | * xen: provide compat links to what libxen-dev reports where to find |
25 | diff --git a/debian/patches/series b/debian/patches/series |
26 | index c1c26c4..a5e3921 100644 |
27 | --- a/debian/patches/series |
28 | +++ b/debian/patches/series |
29 | @@ -73,3 +73,10 @@ ubuntu/lp-1887763-util-add-qemu_get_host_physmem-utility-function.patch |
30 | ubuntu/lp-1887763-accel-tcg-better-handle-memory-constrained-systems.patch |
31 | ubuntu/lp-1883984-target-s390x-Fix-SQXBR.patch |
32 | lp-1890154-s390x-protvirt-allow-to-IPL-secure-guests-with-no-re.patch |
33 | +ubuntu/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch |
34 | +ubuntu/lp1890881-linux-user-completely-re-write-init_guest_space.patch |
35 | +ubuntu/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch |
36 | +ubuntu/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch |
37 | +ubuntu/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch |
38 | +ubuntu/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch |
39 | +ubuntu/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch |
40 | diff --git a/debian/patches/ubuntu/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch b/debian/patches/ubuntu/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch |
41 | new file mode 100644 |
42 | index 0000000..5492cfd |
43 | --- /dev/null |
44 | +++ b/debian/patches/ubuntu/lp-1891187-hw-net-net_tx_pkt-fix-assertion-failure-in-net_tx.patch |
45 | @@ -0,0 +1,45 @@ |
46 | +From 035e69b063835a5fd23cacabd63690a3d84532a8 Mon Sep 17 00:00:00 2001 |
47 | +From: Mauro Matteo Cascella <mcascell@redhat.com> |
48 | +Date: Sat, 1 Aug 2020 18:42:38 +0200 |
49 | +Subject: [PATCH] hw/net/net_tx_pkt: fix assertion failure in |
50 | + net_tx_pkt_add_raw_fragment() |
51 | + |
52 | +An assertion failure issue was found in the code that processes network packets |
53 | +while adding data fragments into the packet context. It could be abused by a |
54 | +malicious guest to abort the QEMU process on the host. This patch replaces the |
55 | +affected assert() with a conditional statement, returning false if the current |
56 | +data fragment exceeds max_raw_frags. |
57 | + |
58 | +Reported-by: Alexander Bulekov <alxndr@bu.edu> |
59 | +Reported-by: Ziming Zhang <ezrakiez@gmail.com> |
60 | +Reviewed-by: Dmitry Fleytman <dmitry.fleytman@gmail.com> |
61 | +Signed-off-by: Mauro Matteo Cascella <mcascell@redhat.com> |
62 | +Signed-off-by: Jason Wang <jasowang@redhat.com> |
63 | + |
64 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=035e69b063835 |
65 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1891187 |
66 | +Last-Update: 2020-08-18 |
67 | + |
68 | +--- |
69 | + hw/net/net_tx_pkt.c | 5 ++++- |
70 | + 1 file changed, 4 insertions(+), 1 deletion(-) |
71 | + |
72 | +diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c |
73 | +index 9560e4a49e..da262edc3e 100644 |
74 | +--- a/hw/net/net_tx_pkt.c |
75 | ++++ b/hw/net/net_tx_pkt.c |
76 | +@@ -379,7 +379,10 @@ bool net_tx_pkt_add_raw_fragment(struct NetTxPkt *pkt, hwaddr pa, |
77 | + hwaddr mapped_len = 0; |
78 | + struct iovec *ventry; |
79 | + assert(pkt); |
80 | +- assert(pkt->max_raw_frags > pkt->raw_frags); |
81 | ++ |
82 | ++ if (pkt->raw_frags >= pkt->max_raw_frags) { |
83 | ++ return false; |
84 | ++ } |
85 | + |
86 | + if (!len) { |
87 | + return true; |
88 | +-- |
89 | +2.28.0 |
90 | + |
91 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-completely-re-write-init_guest_space.patch b/debian/patches/ubuntu/lp1890881-linux-user-completely-re-write-init_guest_space.patch |
92 | new file mode 100644 |
93 | index 0000000..ea287eb |
94 | --- /dev/null |
95 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-completely-re-write-init_guest_space.patch |
96 | @@ -0,0 +1,725 @@ |
97 | +From ee94743034bfb443cf246eda4971bdc15d8ee066 Mon Sep 17 00:00:00 2001 |
98 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
99 | +Date: Wed, 13 May 2020 18:51:28 +0100 |
100 | +Subject: [PATCH] linux-user: completely re-write init_guest_space |
101 | +MIME-Version: 1.0 |
102 | +Content-Type: text/plain; charset=UTF-8 |
103 | +Content-Transfer-Encoding: 8bit |
104 | + |
105 | +First we ensure all guest space initialisation logic comes through |
106 | +probe_guest_base once we understand the nature of the binary we are |
107 | +loading. The convoluted init_guest_space routine is removed and |
108 | +replaced with a number of pgb_* helpers which are called depending on |
109 | +what requirements we have when loading the binary. |
110 | + |
111 | +We first try to do what is requested by the host. Failing that we try |
112 | +and satisfy the guest requested base address. If all those options |
113 | +fail we fall back to finding a space in the memory map using our |
114 | +recently written read_self_maps() helper. |
115 | + |
116 | +There are some additional complications we try and take into account |
117 | +when looking for holes in the address space. We try not to go directly |
118 | +after the system brk() space so there is space for a little growth. We |
119 | +also don't want to have to use negative offsets which would result in |
120 | +slightly less efficient code on x86 when it's unable to use the |
121 | +segment offset register. |
122 | + |
123 | +Less mind-binding gotos and hopefully clearer logic throughout. |
124 | + |
125 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
126 | +Acked-by: Laurent Vivier <laurent@vivier.eu> |
127 | + |
128 | +Message-Id: <20200513175134.19619-5-alex.bennee@linaro.org> |
129 | + |
130 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=ee947430 |
131 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
132 | +Last-Update: 2020-08-19 |
133 | + |
134 | +--- |
135 | + linux-user/elfload.c | 503 +++++++++++++++++++++--------------------- |
136 | + linux-user/flatload.c | 6 + |
137 | + linux-user/main.c | 23 +- |
138 | + linux-user/qemu.h | 31 ++- |
139 | + 4 files changed, 277 insertions(+), 286 deletions(-) |
140 | + |
141 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
142 | +index 619c054cc4..01a9323a63 100644 |
143 | +--- a/linux-user/elfload.c |
144 | ++++ b/linux-user/elfload.c |
145 | +@@ -11,6 +11,7 @@ |
146 | + #include "qemu/queue.h" |
147 | + #include "qemu/guest-random.h" |
148 | + #include "qemu/units.h" |
149 | ++#include "qemu/selfmap.h" |
150 | + |
151 | + #ifdef _ARCH_PPC64 |
152 | + #undef ARCH_DLINFO |
153 | +@@ -382,68 +383,30 @@ enum { |
154 | + |
155 | + /* The commpage only exists for 32 bit kernels */ |
156 | + |
157 | +-/* Return 1 if the proposed guest space is suitable for the guest. |
158 | +- * Return 0 if the proposed guest space isn't suitable, but another |
159 | +- * address space should be tried. |
160 | +- * Return -1 if there is no way the proposed guest space can be |
161 | +- * valid regardless of the base. |
162 | +- * The guest code may leave a page mapped and populate it if the |
163 | +- * address is suitable. |
164 | +- */ |
165 | +-static int init_guest_commpage(unsigned long guest_base, |
166 | +- unsigned long guest_size) |
167 | +-{ |
168 | +- unsigned long real_start, test_page_addr; |
169 | +- |
170 | +- /* We need to check that we can force a fault on access to the |
171 | +- * commpage at 0xffff0fxx |
172 | +- */ |
173 | +- test_page_addr = guest_base + (0xffff0f00 & qemu_host_page_mask); |
174 | +- |
175 | +- /* If the commpage lies within the already allocated guest space, |
176 | +- * then there is no way we can allocate it. |
177 | +- * |
178 | +- * You may be thinking that that this check is redundant because |
179 | +- * we already validated the guest size against MAX_RESERVED_VA; |
180 | +- * but if qemu_host_page_mask is unusually large, then |
181 | +- * test_page_addr may be lower. |
182 | +- */ |
183 | +- if (test_page_addr >= guest_base |
184 | +- && test_page_addr < (guest_base + guest_size)) { |
185 | +- return -1; |
186 | +- } |
187 | ++#define ARM_COMMPAGE (intptr_t)0xffff0f00u |
188 | + |
189 | +- /* Note it needs to be writeable to let us initialise it */ |
190 | +- real_start = (unsigned long) |
191 | +- mmap((void *)test_page_addr, qemu_host_page_size, |
192 | +- PROT_READ | PROT_WRITE, |
193 | +- MAP_ANONYMOUS | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); |
194 | ++static bool init_guest_commpage(void) |
195 | ++{ |
196 | ++ void *want = g2h(ARM_COMMPAGE & -qemu_host_page_size); |
197 | ++ void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE, |
198 | ++ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); |
199 | + |
200 | +- /* If we can't map it then try another address */ |
201 | +- if (real_start == -1ul) { |
202 | +- return 0; |
203 | ++ if (addr == MAP_FAILED) { |
204 | ++ perror("Allocating guest commpage"); |
205 | ++ exit(EXIT_FAILURE); |
206 | + } |
207 | +- |
208 | +- if (real_start != test_page_addr) { |
209 | +- /* OS didn't put the page where we asked - unmap and reject */ |
210 | +- munmap((void *)real_start, qemu_host_page_size); |
211 | +- return 0; |
212 | ++ if (addr != want) { |
213 | ++ return false; |
214 | + } |
215 | + |
216 | +- /* Leave the page mapped |
217 | +- * Populate it (mmap should have left it all 0'd) |
218 | +- */ |
219 | +- |
220 | +- /* Kernel helper versions */ |
221 | +- __put_user(5, (uint32_t *)g2h(0xffff0ffcul)); |
222 | ++ /* Set kernel helper versions; rest of page is 0. */ |
223 | ++ __put_user(5, (uint32_t *)g2h(0xffff0ffcu)); |
224 | + |
225 | +- /* Now it's populated make it RO */ |
226 | +- if (mprotect((void *)test_page_addr, qemu_host_page_size, PROT_READ)) { |
227 | ++ if (mprotect(addr, qemu_host_page_size, PROT_READ)) { |
228 | + perror("Protecting guest commpage"); |
229 | +- exit(-1); |
230 | ++ exit(EXIT_FAILURE); |
231 | + } |
232 | +- |
233 | +- return 1; /* All good */ |
234 | ++ return true; |
235 | + } |
236 | + |
237 | + #define ELF_HWCAP get_elf_hwcap() |
238 | +@@ -2075,239 +2038,267 @@ static abi_ulong create_elf_tables(abi_ulong p, int argc, int envc, |
239 | + return sp; |
240 | + } |
241 | + |
242 | +-unsigned long init_guest_space(unsigned long host_start, |
243 | +- unsigned long host_size, |
244 | +- unsigned long guest_start, |
245 | +- bool fixed) |
246 | +-{ |
247 | +- /* In order to use host shmat, we must be able to honor SHMLBA. */ |
248 | +- unsigned long align = MAX(SHMLBA, qemu_host_page_size); |
249 | +- unsigned long current_start, aligned_start; |
250 | +- int flags; |
251 | +- |
252 | +- assert(host_start || host_size); |
253 | +- |
254 | +- /* If just a starting address is given, then just verify that |
255 | +- * address. */ |
256 | +- if (host_start && !host_size) { |
257 | +-#if defined(TARGET_ARM) && !defined(TARGET_AARCH64) |
258 | +- if (init_guest_commpage(host_start, host_size) != 1) { |
259 | +- return (unsigned long)-1; |
260 | +- } |
261 | ++#ifndef ARM_COMMPAGE |
262 | ++#define ARM_COMMPAGE 0 |
263 | ++#define init_guest_commpage() true |
264 | + #endif |
265 | +- return host_start; |
266 | +- } |
267 | + |
268 | +- /* Setup the initial flags and start address. */ |
269 | +- current_start = host_start & -align; |
270 | +- flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE; |
271 | +- if (fixed) { |
272 | +- flags |= MAP_FIXED; |
273 | +- } |
274 | ++static void pgb_fail_in_use(const char *image_name) |
275 | ++{ |
276 | ++ error_report("%s: requires virtual address space that is in use " |
277 | ++ "(omit the -B option or choose a different value)", |
278 | ++ image_name); |
279 | ++ exit(EXIT_FAILURE); |
280 | ++} |
281 | + |
282 | +- /* Otherwise, a non-zero size region of memory needs to be mapped |
283 | +- * and validated. */ |
284 | ++static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr, |
285 | ++ abi_ulong guest_hiaddr, long align) |
286 | ++{ |
287 | ++ const int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE; |
288 | ++ void *addr, *test; |
289 | + |
290 | +-#if defined(TARGET_ARM) && !defined(TARGET_AARCH64) |
291 | +- /* On 32-bit ARM, we need to map not just the usable memory, but |
292 | +- * also the commpage. Try to find a suitable place by allocating |
293 | +- * a big chunk for all of it. If host_start, then the naive |
294 | +- * strategy probably does good enough. |
295 | +- */ |
296 | +- if (!host_start) { |
297 | +- unsigned long guest_full_size, host_full_size, real_start; |
298 | +- |
299 | +- guest_full_size = |
300 | +- (0xffff0f00 & qemu_host_page_mask) + qemu_host_page_size; |
301 | +- host_full_size = guest_full_size - guest_start; |
302 | +- real_start = (unsigned long) |
303 | +- mmap(NULL, host_full_size, PROT_NONE, flags, -1, 0); |
304 | +- if (real_start == (unsigned long)-1) { |
305 | +- if (host_size < host_full_size - qemu_host_page_size) { |
306 | +- /* We failed to map a continous segment, but we're |
307 | +- * allowed to have a gap between the usable memory and |
308 | +- * the commpage where other things can be mapped. |
309 | +- * This sparseness gives us more flexibility to find |
310 | +- * an address range. |
311 | +- */ |
312 | +- goto naive; |
313 | +- } |
314 | +- return (unsigned long)-1; |
315 | ++ if (!QEMU_IS_ALIGNED(guest_base, align)) { |
316 | ++ fprintf(stderr, "Requested guest base 0x%lx does not satisfy " |
317 | ++ "host minimum alignment (0x%lx)\n", |
318 | ++ guest_base, align); |
319 | ++ exit(EXIT_FAILURE); |
320 | ++ } |
321 | ++ |
322 | ++ /* Sanity check the guest binary. */ |
323 | ++ if (reserved_va) { |
324 | ++ if (guest_hiaddr > reserved_va) { |
325 | ++ error_report("%s: requires more than reserved virtual " |
326 | ++ "address space (0x%" PRIx64 " > 0x%lx)", |
327 | ++ image_name, (uint64_t)guest_hiaddr, reserved_va); |
328 | ++ exit(EXIT_FAILURE); |
329 | + } |
330 | +- munmap((void *)real_start, host_full_size); |
331 | +- if (real_start & (align - 1)) { |
332 | +- /* The same thing again, but with extra |
333 | +- * so that we can shift around alignment. |
334 | +- */ |
335 | +- unsigned long real_size = host_full_size + qemu_host_page_size; |
336 | +- real_start = (unsigned long) |
337 | +- mmap(NULL, real_size, PROT_NONE, flags, -1, 0); |
338 | +- if (real_start == (unsigned long)-1) { |
339 | +- if (host_size < host_full_size - qemu_host_page_size) { |
340 | +- goto naive; |
341 | +- } |
342 | +- return (unsigned long)-1; |
343 | +- } |
344 | +- munmap((void *)real_start, real_size); |
345 | +- real_start = ROUND_UP(real_start, align); |
346 | ++ } else { |
347 | ++ if ((guest_hiaddr - guest_base) > ~(uintptr_t)0) { |
348 | ++ error_report("%s: requires more virtual address space " |
349 | ++ "than the host can provide (0x%" PRIx64 ")", |
350 | ++ image_name, (uint64_t)guest_hiaddr - guest_base); |
351 | ++ exit(EXIT_FAILURE); |
352 | + } |
353 | +- current_start = real_start; |
354 | + } |
355 | +- naive: |
356 | +-#endif |
357 | + |
358 | +- while (1) { |
359 | +- unsigned long real_start, real_size, aligned_size; |
360 | +- aligned_size = real_size = host_size; |
361 | ++ /* |
362 | ++ * Expand the allocation to the entire reserved_va. |
363 | ++ * Exclude the mmap_min_addr hole. |
364 | ++ */ |
365 | ++ if (reserved_va) { |
366 | ++ guest_loaddr = (guest_base >= mmap_min_addr ? 0 |
367 | ++ : mmap_min_addr - guest_base); |
368 | ++ guest_hiaddr = reserved_va; |
369 | ++ } |
370 | + |
371 | +- /* Do not use mmap_find_vma here because that is limited to the |
372 | +- * guest address space. We are going to make the |
373 | +- * guest address space fit whatever we're given. |
374 | +- */ |
375 | +- real_start = (unsigned long) |
376 | +- mmap((void *)current_start, host_size, PROT_NONE, flags, -1, 0); |
377 | +- if (real_start == (unsigned long)-1) { |
378 | +- return (unsigned long)-1; |
379 | +- } |
380 | ++ /* Reserve the address space for the binary, or reserved_va. */ |
381 | ++ test = g2h(guest_loaddr); |
382 | ++ addr = mmap(test, guest_hiaddr - guest_loaddr, PROT_NONE, flags, -1, 0); |
383 | ++ if (test != addr) { |
384 | ++ pgb_fail_in_use(image_name); |
385 | ++ } |
386 | ++} |
387 | + |
388 | +- /* Check to see if the address is valid. */ |
389 | +- if (host_start && real_start != current_start) { |
390 | +- qemu_log_mask(CPU_LOG_PAGE, "invalid %lx && %lx != %lx\n", |
391 | +- host_start, real_start, current_start); |
392 | +- goto try_again; |
393 | ++/* Return value for guest_base, or -1 if no hole found. */ |
394 | ++static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
395 | ++ long align) |
396 | ++{ |
397 | ++ GSList *maps, *iter; |
398 | ++ uintptr_t this_start, this_end, next_start, brk; |
399 | ++ intptr_t ret = -1; |
400 | ++ |
401 | ++ assert(QEMU_IS_ALIGNED(guest_loaddr, align)); |
402 | ++ |
403 | ++ maps = read_self_maps(); |
404 | ++ |
405 | ++ /* Read brk after we've read the maps, which will malloc. */ |
406 | ++ brk = (uintptr_t)sbrk(0); |
407 | ++ |
408 | ++ /* The first hole is before the first map entry. */ |
409 | ++ this_start = mmap_min_addr; |
410 | ++ |
411 | ++ for (iter = maps; iter; |
412 | ++ this_start = next_start, iter = g_slist_next(iter)) { |
413 | ++ uintptr_t align_start, hole_size; |
414 | ++ |
415 | ++ this_end = ((MapInfo *)iter->data)->start; |
416 | ++ next_start = ((MapInfo *)iter->data)->end; |
417 | ++ align_start = ROUND_UP(this_start, align); |
418 | ++ |
419 | ++ /* Skip holes that are too small. */ |
420 | ++ if (align_start >= this_end) { |
421 | ++ continue; |
422 | ++ } |
423 | ++ hole_size = this_end - align_start; |
424 | ++ if (hole_size < guest_size) { |
425 | ++ continue; |
426 | + } |
427 | + |
428 | +- /* Ensure the address is properly aligned. */ |
429 | +- if (real_start & (align - 1)) { |
430 | +- /* Ideally, we adjust like |
431 | +- * |
432 | +- * pages: [ ][ ][ ][ ][ ] |
433 | +- * old: [ real ] |
434 | +- * [ aligned ] |
435 | +- * new: [ real ] |
436 | +- * [ aligned ] |
437 | +- * |
438 | +- * But if there is something else mapped right after it, |
439 | +- * then obviously it won't have room to grow, and the |
440 | +- * kernel will put the new larger real someplace else with |
441 | +- * unknown alignment (if we made it to here, then |
442 | +- * fixed=false). Which is why we grow real by a full page |
443 | +- * size, instead of by part of one; so that even if we get |
444 | +- * moved, we can still guarantee alignment. But this does |
445 | +- * mean that there is a padding of < 1 page both before |
446 | +- * and after the aligned range; the "after" could could |
447 | +- * cause problems for ARM emulation where it could butt in |
448 | +- * to where we need to put the commpage. |
449 | +- */ |
450 | +- munmap((void *)real_start, host_size); |
451 | +- real_size = aligned_size + align; |
452 | +- real_start = (unsigned long) |
453 | +- mmap((void *)real_start, real_size, PROT_NONE, flags, -1, 0); |
454 | +- if (real_start == (unsigned long)-1) { |
455 | +- return (unsigned long)-1; |
456 | ++ /* If this hole contains brk, give ourselves some room to grow. */ |
457 | ++ if (this_start <= brk && brk < this_end) { |
458 | ++ hole_size -= guest_size; |
459 | ++ if (sizeof(uintptr_t) == 8 && hole_size >= 1 * GiB) { |
460 | ++ align_start += 1 * GiB; |
461 | ++ } else if (hole_size >= 16 * MiB) { |
462 | ++ align_start += 16 * MiB; |
463 | ++ } else { |
464 | ++ align_start = (this_end - guest_size) & -align; |
465 | ++ if (align_start < this_start) { |
466 | ++ continue; |
467 | ++ } |
468 | + } |
469 | +- aligned_start = ROUND_UP(real_start, align); |
470 | +- } else { |
471 | +- aligned_start = real_start; |
472 | + } |
473 | + |
474 | +-#if defined(TARGET_ARM) && !defined(TARGET_AARCH64) |
475 | +- /* On 32-bit ARM, we need to also be able to map the commpage. */ |
476 | +- int valid = init_guest_commpage(aligned_start - guest_start, |
477 | +- aligned_size + guest_start); |
478 | +- if (valid == -1) { |
479 | +- munmap((void *)real_start, real_size); |
480 | +- return (unsigned long)-1; |
481 | +- } else if (valid == 0) { |
482 | +- goto try_again; |
483 | ++ /* Record the lowest successful match. */ |
484 | ++ if (ret < 0) { |
485 | ++ ret = align_start - guest_loaddr; |
486 | + } |
487 | +-#endif |
488 | +- |
489 | +- /* If nothing has said `return -1` or `goto try_again` yet, |
490 | +- * then the address we have is good. |
491 | +- */ |
492 | +- break; |
493 | +- |
494 | +- try_again: |
495 | +- /* That address didn't work. Unmap and try a different one. |
496 | +- * The address the host picked because is typically right at |
497 | +- * the top of the host address space and leaves the guest with |
498 | +- * no usable address space. Resort to a linear search. We |
499 | +- * already compensated for mmap_min_addr, so this should not |
500 | +- * happen often. Probably means we got unlucky and host |
501 | +- * address space randomization put a shared library somewhere |
502 | +- * inconvenient. |
503 | +- * |
504 | +- * This is probably a good strategy if host_start, but is |
505 | +- * probably a bad strategy if not, which means we got here |
506 | +- * because of trouble with ARM commpage setup. |
507 | +- */ |
508 | +- if (munmap((void *)real_start, real_size) != 0) { |
509 | +- error_report("%s: failed to unmap %lx:%lx (%s)", __func__, |
510 | +- real_start, real_size, strerror(errno)); |
511 | +- abort(); |
512 | ++ /* If this hole contains the identity map, select it. */ |
513 | ++ if (align_start <= guest_loaddr && |
514 | ++ guest_loaddr + guest_size <= this_end) { |
515 | ++ ret = 0; |
516 | + } |
517 | +- current_start += align; |
518 | +- if (host_start == current_start) { |
519 | +- /* Theoretically possible if host doesn't have any suitably |
520 | +- * aligned areas. Normally the first mmap will fail. |
521 | +- */ |
522 | +- return (unsigned long)-1; |
523 | ++ /* If this hole ends above the identity map, stop looking. */ |
524 | ++ if (this_end >= guest_loaddr) { |
525 | ++ break; |
526 | + } |
527 | + } |
528 | ++ free_self_maps(maps); |
529 | + |
530 | +- qemu_log_mask(CPU_LOG_PAGE, "Reserved 0x%lx bytes of guest address space\n", host_size); |
531 | +- |
532 | +- return aligned_start; |
533 | ++ return ret; |
534 | + } |
535 | + |
536 | +-static void probe_guest_base(const char *image_name, |
537 | +- abi_ulong loaddr, abi_ulong hiaddr) |
538 | ++static void pgb_static(const char *image_name, abi_ulong orig_loaddr, |
539 | ++ abi_ulong orig_hiaddr, long align) |
540 | + { |
541 | +- /* Probe for a suitable guest base address, if the user has not set |
542 | +- * it explicitly, and set guest_base appropriately. |
543 | +- * In case of error we will print a suitable message and exit. |
544 | +- */ |
545 | +- const char *errmsg; |
546 | +- if (!have_guest_base && !reserved_va) { |
547 | +- unsigned long host_start, real_start, host_size; |
548 | ++ uintptr_t loaddr = orig_loaddr; |
549 | ++ uintptr_t hiaddr = orig_hiaddr; |
550 | ++ uintptr_t addr; |
551 | + |
552 | +- /* Round addresses to page boundaries. */ |
553 | +- loaddr &= qemu_host_page_mask; |
554 | +- hiaddr = HOST_PAGE_ALIGN(hiaddr); |
555 | ++ if (hiaddr != orig_hiaddr) { |
556 | ++ error_report("%s: requires virtual address space that the " |
557 | ++ "host cannot provide (0x%" PRIx64 ")", |
558 | ++ image_name, (uint64_t)orig_hiaddr); |
559 | ++ exit(EXIT_FAILURE); |
560 | ++ } |
561 | + |
562 | +- if (loaddr < mmap_min_addr) { |
563 | +- host_start = HOST_PAGE_ALIGN(mmap_min_addr); |
564 | ++ loaddr &= -align; |
565 | ++ if (ARM_COMMPAGE) { |
566 | ++ /* |
567 | ++ * Extend the allocation to include the commpage. |
568 | ++ * For a 64-bit host, this is just 4GiB; for a 32-bit host, |
569 | ++ * the address arithmetic will wrap around, but the difference |
570 | ++ * will produce the correct allocation size. |
571 | ++ */ |
572 | ++ if (sizeof(uintptr_t) == 8 || loaddr >= 0x80000000u) { |
573 | ++ hiaddr = (uintptr_t)4 << 30; |
574 | + } else { |
575 | +- host_start = loaddr; |
576 | +- if (host_start != loaddr) { |
577 | +- errmsg = "Address overflow loading ELF binary"; |
578 | +- goto exit_errmsg; |
579 | +- } |
580 | ++ loaddr = ARM_COMMPAGE & -align; |
581 | + } |
582 | +- host_size = hiaddr - loaddr; |
583 | ++ } |
584 | + |
585 | +- /* Setup the initial guest memory space with ranges gleaned from |
586 | +- * the ELF image that is being loaded. |
587 | ++ addr = pgb_find_hole(loaddr, hiaddr - loaddr, align); |
588 | ++ if (addr == -1) { |
589 | ++ /* |
590 | ++ * If ARM_COMMPAGE, there *might* be a non-consecutive allocation |
591 | ++ * that can satisfy both. But as the normal arm32 link base address |
592 | ++ * is ~32k, and we extend down to include the commpage, making the |
593 | ++ * overhead only ~96k, this is unlikely. |
594 | + */ |
595 | +- real_start = init_guest_space(host_start, host_size, loaddr, false); |
596 | +- if (real_start == (unsigned long)-1) { |
597 | +- errmsg = "Unable to find space for application"; |
598 | +- goto exit_errmsg; |
599 | +- } |
600 | +- guest_base = real_start - loaddr; |
601 | ++ error_report("%s: Unable to allocate %#zx bytes of " |
602 | ++ "virtual address space", image_name, |
603 | ++ (size_t)(hiaddr - loaddr)); |
604 | ++ exit(EXIT_FAILURE); |
605 | ++ } |
606 | ++ |
607 | ++ guest_base = addr; |
608 | ++} |
609 | ++ |
610 | ++static void pgb_dynamic(const char *image_name, long align) |
611 | ++{ |
612 | ++ /* |
613 | ++ * The executable is dynamic and does not require a fixed address. |
614 | ++ * All we need is a commpage that satisfies align. |
615 | ++ * If we do not need a commpage, leave guest_base == 0. |
616 | ++ */ |
617 | ++ if (ARM_COMMPAGE) { |
618 | ++ uintptr_t addr, commpage; |
619 | + |
620 | +- qemu_log_mask(CPU_LOG_PAGE, "Relocating guest address space from 0x" |
621 | +- TARGET_ABI_FMT_lx " to 0x%lx\n", |
622 | +- loaddr, real_start); |
623 | ++ /* 64-bit hosts should have used reserved_va. */ |
624 | ++ assert(sizeof(uintptr_t) == 4); |
625 | ++ |
626 | ++ /* |
627 | ++ * By putting the commpage at the first hole, that puts guest_base |
628 | ++ * just above that, and maximises the positive guest addresses. |
629 | ++ */ |
630 | ++ commpage = ARM_COMMPAGE & -align; |
631 | ++ addr = pgb_find_hole(commpage, -commpage, align); |
632 | ++ assert(addr != -1); |
633 | ++ guest_base = addr; |
634 | + } |
635 | +- return; |
636 | ++} |
637 | + |
638 | +-exit_errmsg: |
639 | +- fprintf(stderr, "%s: %s\n", image_name, errmsg); |
640 | +- exit(-1); |
641 | ++static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr, |
642 | ++ abi_ulong guest_hiaddr, long align) |
643 | ++{ |
644 | ++ const int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE; |
645 | ++ void *addr, *test; |
646 | ++ |
647 | ++ if (guest_hiaddr > reserved_va) { |
648 | ++ error_report("%s: requires more than reserved virtual " |
649 | ++ "address space (0x%" PRIx64 " > 0x%lx)", |
650 | ++ image_name, (uint64_t)guest_hiaddr, reserved_va); |
651 | ++ exit(EXIT_FAILURE); |
652 | ++ } |
653 | ++ |
654 | ++ /* Widen the "image" to the entire reserved address space. */ |
655 | ++ pgb_static(image_name, 0, reserved_va, align); |
656 | ++ |
657 | ++ /* Reserve the memory on the host. */ |
658 | ++ assert(guest_base != 0); |
659 | ++ test = g2h(0); |
660 | ++ addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0); |
661 | ++ if (addr == MAP_FAILED) { |
662 | ++ error_report("Unable to reserve 0x%lx bytes of virtual address " |
663 | ++ "space for use as guest address space (check your " |
664 | ++ "virtual memory ulimit setting or reserve less " |
665 | ++ "using -R option)", reserved_va); |
666 | ++ exit(EXIT_FAILURE); |
667 | ++ } |
668 | ++ assert(addr == test); |
669 | + } |
670 | + |
671 | ++void probe_guest_base(const char *image_name, abi_ulong guest_loaddr, |
672 | ++ abi_ulong guest_hiaddr) |
673 | ++{ |
674 | ++ /* In order to use host shmat, we must be able to honor SHMLBA. */ |
675 | ++ uintptr_t align = MAX(SHMLBA, qemu_host_page_size); |
676 | ++ |
677 | ++ if (have_guest_base) { |
678 | ++ pgb_have_guest_base(image_name, guest_loaddr, guest_hiaddr, align); |
679 | ++ } else if (reserved_va) { |
680 | ++ pgb_reserved_va(image_name, guest_loaddr, guest_hiaddr, align); |
681 | ++ } else if (guest_loaddr) { |
682 | ++ pgb_static(image_name, guest_loaddr, guest_hiaddr, align); |
683 | ++ } else { |
684 | ++ pgb_dynamic(image_name, align); |
685 | ++ } |
686 | ++ |
687 | ++ /* Reserve and initialize the commpage. */ |
688 | ++ if (!init_guest_commpage()) { |
689 | ++ /* |
690 | ++ * With have_guest_base, the user has selected the address and |
691 | ++ * we are trying to work with that. Otherwise, we have selected |
692 | ++ * free space and init_guest_commpage must succeeded. |
693 | ++ */ |
694 | ++ assert(have_guest_base); |
695 | ++ pgb_fail_in_use(image_name); |
696 | ++ } |
697 | ++ |
698 | ++ assert(QEMU_IS_ALIGNED(guest_base, align)); |
699 | ++ qemu_log_mask(CPU_LOG_PAGE, "Locating guest address space " |
700 | ++ "@ 0x%" PRIx64 "\n", (uint64_t)guest_base); |
701 | ++} |
702 | + |
703 | + /* Load an ELF image into the address space. |
704 | + |
705 | +@@ -2399,6 +2390,12 @@ static void load_elf_image(const char *image_name, int image_fd, |
706 | + * MMAP_MIN_ADDR or the QEMU application itself. |
707 | + */ |
708 | + probe_guest_base(image_name, loaddr, hiaddr); |
709 | ++ } else { |
710 | ++ /* |
711 | ++ * The binary is dynamic, but we still need to |
712 | ++ * select guest_base. In this case we pass a size. |
713 | ++ */ |
714 | ++ probe_guest_base(image_name, 0, hiaddr - loaddr); |
715 | + } |
716 | + } |
717 | + |
718 | +diff --git a/linux-user/flatload.c b/linux-user/flatload.c |
719 | +index 66901f39cc..8fb448f0bf 100644 |
720 | +--- a/linux-user/flatload.c |
721 | ++++ b/linux-user/flatload.c |
722 | +@@ -441,6 +441,12 @@ static int load_flat_file(struct linux_binprm * bprm, |
723 | + indx_len = MAX_SHARED_LIBS * sizeof(abi_ulong); |
724 | + indx_len = (indx_len + 15) & ~(abi_ulong)15; |
725 | + |
726 | ++ /* |
727 | ++ * Alloate the address space. |
728 | ++ */ |
729 | ++ probe_guest_base(bprm->filename, 0, |
730 | ++ text_len + data_len + extra + indx_len); |
731 | ++ |
732 | + /* |
733 | + * there are a couple of cases here, the separate code/data |
734 | + * case, and then the fully copied to RAM case which lumps |
735 | +diff --git a/linux-user/main.c b/linux-user/main.c |
736 | +index 2cd443237d..e18c1fb952 100644 |
737 | +--- a/linux-user/main.c |
738 | ++++ b/linux-user/main.c |
739 | +@@ -24,6 +24,7 @@ |
740 | + #include "qemu-version.h" |
741 | + #include <sys/syscall.h> |
742 | + #include <sys/resource.h> |
743 | ++#include <sys/shm.h> |
744 | + |
745 | + #include "qapi/error.h" |
746 | + #include "qemu.h" |
747 | +@@ -747,28 +748,6 @@ int main(int argc, char **argv, char **envp) |
748 | + target_environ = envlist_to_environ(envlist, NULL); |
749 | + envlist_free(envlist); |
750 | + |
751 | +- /* |
752 | +- * Now that page sizes are configured in tcg_exec_init() we can do |
753 | +- * proper page alignment for guest_base. |
754 | +- */ |
755 | +- guest_base = HOST_PAGE_ALIGN(guest_base); |
756 | +- |
757 | +- if (reserved_va || have_guest_base) { |
758 | +- guest_base = init_guest_space(guest_base, reserved_va, 0, |
759 | +- have_guest_base); |
760 | +- if (guest_base == (unsigned long)-1) { |
761 | +- fprintf(stderr, "Unable to reserve 0x%lx bytes of virtual address " |
762 | +- "space for use as guest address space (check your virtual " |
763 | +- "memory ulimit setting or reserve less using -R option)\n", |
764 | +- reserved_va); |
765 | +- exit(EXIT_FAILURE); |
766 | +- } |
767 | +- |
768 | +- if (reserved_va) { |
769 | +- mmap_next_start = reserved_va; |
770 | +- } |
771 | +- } |
772 | +- |
773 | + /* |
774 | + * Read in mmap_min_addr kernel parameter. This value is used |
775 | + * When loading the ELF image to determine whether guest_base |
776 | +diff --git a/linux-user/qemu.h b/linux-user/qemu.h |
777 | +index 792c74290f..ce902f5132 100644 |
778 | +--- a/linux-user/qemu.h |
779 | ++++ b/linux-user/qemu.h |
780 | +@@ -219,18 +219,27 @@ void init_qemu_uname_release(void); |
781 | + void fork_start(void); |
782 | + void fork_end(int child); |
783 | + |
784 | +-/* Creates the initial guest address space in the host memory space using |
785 | +- * the given host start address hint and size. The guest_start parameter |
786 | +- * specifies the start address of the guest space. guest_base will be the |
787 | +- * difference between the host start address computed by this function and |
788 | +- * guest_start. If fixed is specified, then the mapped address space must |
789 | +- * start at host_start. The real start address of the mapped memory space is |
790 | +- * returned or -1 if there was an error. |
791 | ++/** |
792 | ++ * probe_guest_base: |
793 | ++ * @image_name: the executable being loaded |
794 | ++ * @loaddr: the lowest fixed address in the executable |
795 | ++ * @hiaddr: the highest fixed address in the executable |
796 | ++ * |
797 | ++ * Creates the initial guest address space in the host memory space. |
798 | ++ * |
799 | ++ * If @loaddr == 0, then no address in the executable is fixed, |
800 | ++ * i.e. it is fully relocatable. In that case @hiaddr is the size |
801 | ++ * of the executable. |
802 | ++ * |
803 | ++ * This function will not return if a valid value for guest_base |
804 | ++ * cannot be chosen. On return, the executable loader can expect |
805 | ++ * |
806 | ++ * target_mmap(loaddr, hiaddr - loaddr, ...) |
807 | ++ * |
808 | ++ * to succeed. |
809 | + */ |
810 | +-unsigned long init_guest_space(unsigned long host_start, |
811 | +- unsigned long host_size, |
812 | +- unsigned long guest_start, |
813 | +- bool fixed); |
814 | ++void probe_guest_base(const char *image_name, |
815 | ++ abi_ulong loaddr, abi_ulong hiaddr); |
816 | + |
817 | + #include "qemu/log.h" |
818 | + |
819 | +-- |
820 | +2.28.0 |
821 | + |
822 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch b/debian/patches/ubuntu/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch |
823 | new file mode 100644 |
824 | index 0000000..7733e6a |
825 | --- /dev/null |
826 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-deal-with-address-wrap-for-ARM_COMMPAGE-o.patch |
827 | @@ -0,0 +1,154 @@ |
828 | +From 5c3e87f345ac93de9260f12c408d2afd87a6ab3b Mon Sep 17 00:00:00 2001 |
829 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
830 | +Date: Fri, 5 Jun 2020 16:49:27 +0100 |
831 | +Subject: [PATCH] linux-user: deal with address wrap for ARM_COMMPAGE on 32 bit |
832 | +MIME-Version: 1.0 |
833 | +Content-Type: text/plain; charset=UTF-8 |
834 | +Content-Transfer-Encoding: 8bit |
835 | + |
836 | +We rely on the pointer to wrap when accessing the high address of the |
837 | +COMMPAGE so it lands somewhere reasonable. However on 32 bit hosts we |
838 | +cannot afford just to map the entire 4gb address range. The old mmap |
839 | +trial and error code handled this by just checking we could map both |
840 | +the guest_base and the computed COMMPAGE address. |
841 | + |
842 | +We can't just manipulate loadaddr to get what we want so we introduce |
843 | +an offset which pgb_find_hole can apply when looking for a gap for |
844 | +guest_base that ensures there is space left to map the COMMPAGE |
845 | +afterwards. |
846 | + |
847 | +This is arguably a little inefficient for the one 32 bit |
848 | +value (kuser_helper_version) we need to keep there given all the |
849 | +actual code entries are picked up during the translation phase. |
850 | + |
851 | +Fixes: ee94743034b |
852 | +Bug: https://bugs.launchpad.net/qemu/+bug/1880225 |
853 | +Cc: Bug 1880225 <1880225@bugs.launchpad.net> |
854 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
855 | +Tested-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com> |
856 | +Cc: Richard Henderson <richard.henderson@linaro.org> |
857 | +Cc: Peter Maydell <peter.maydell@linaro.org> |
858 | +Message-Id: <20200605154929.26910-13-alex.bennee@linaro.org> |
859 | + |
860 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=5c3e87f345ac93de9260f12c408d2afd87a6ab3b |
861 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
862 | +Last-Update: 2020-08-19 |
863 | + |
864 | +--- |
865 | + linux-user/elfload.c | 31 +++++++++++++++++-------------- |
866 | + 1 file changed, 17 insertions(+), 14 deletions(-) |
867 | + |
868 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
869 | +index 475d243f3b..b5cb21384a 100644 |
870 | +--- a/linux-user/elfload.c |
871 | ++++ b/linux-user/elfload.c |
872 | +@@ -389,7 +389,7 @@ static bool init_guest_commpage(void) |
873 | + { |
874 | + void *want = g2h(ARM_COMMPAGE & -qemu_host_page_size); |
875 | + void *addr = mmap(want, qemu_host_page_size, PROT_READ | PROT_WRITE, |
876 | +- MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); |
877 | ++ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); |
878 | + |
879 | + if (addr == MAP_FAILED) { |
880 | + perror("Allocating guest commpage"); |
881 | +@@ -2113,7 +2113,8 @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr, |
882 | + * only dumbly iterate up the host address space seeing if the |
883 | + * allocation would work. |
884 | + */ |
885 | +-static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, long align) |
886 | ++static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, |
887 | ++ long align, uintptr_t offset) |
888 | + { |
889 | + uintptr_t base; |
890 | + |
891 | +@@ -2123,7 +2124,7 @@ static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, lon |
892 | + while (true) { |
893 | + uintptr_t align_start, end; |
894 | + align_start = ROUND_UP(base, align); |
895 | +- end = align_start + guest_size; |
896 | ++ end = align_start + guest_size + offset; |
897 | + |
898 | + /* if brk is anywhere in the range give ourselves some room to grow. */ |
899 | + if (align_start <= brk && brk < end) { |
900 | +@@ -2138,7 +2139,7 @@ static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, lon |
901 | + PROT_NONE, flags, -1, 0); |
902 | + if (mmap_start != MAP_FAILED) { |
903 | + munmap((void *) align_start, guest_size); |
904 | +- return (uintptr_t) mmap_start; |
905 | ++ return (uintptr_t) mmap_start + offset; |
906 | + } |
907 | + base += qemu_host_page_size; |
908 | + } |
909 | +@@ -2147,7 +2148,7 @@ static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, lon |
910 | + |
911 | + /* Return value for guest_base, or -1 if no hole found. */ |
912 | + static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
913 | +- long align) |
914 | ++ long align, uintptr_t offset) |
915 | + { |
916 | + GSList *maps, *iter; |
917 | + uintptr_t this_start, this_end, next_start, brk; |
918 | +@@ -2161,7 +2162,7 @@ static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
919 | + brk = (uintptr_t)sbrk(0); |
920 | + |
921 | + if (!maps) { |
922 | +- return pgd_find_hole_fallback(guest_size, brk, align); |
923 | ++ return pgd_find_hole_fallback(guest_size, brk, align, offset); |
924 | + } |
925 | + |
926 | + /* The first hole is before the first map entry. */ |
927 | +@@ -2173,7 +2174,7 @@ static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
928 | + |
929 | + this_end = ((MapInfo *)iter->data)->start; |
930 | + next_start = ((MapInfo *)iter->data)->end; |
931 | +- align_start = ROUND_UP(this_start, align); |
932 | ++ align_start = ROUND_UP(this_start + offset, align); |
933 | + |
934 | + /* Skip holes that are too small. */ |
935 | + if (align_start >= this_end) { |
936 | +@@ -2223,6 +2224,7 @@ static void pgb_static(const char *image_name, abi_ulong orig_loaddr, |
937 | + { |
938 | + uintptr_t loaddr = orig_loaddr; |
939 | + uintptr_t hiaddr = orig_hiaddr; |
940 | ++ uintptr_t offset = 0; |
941 | + uintptr_t addr; |
942 | + |
943 | + if (hiaddr != orig_hiaddr) { |
944 | +@@ -2236,18 +2238,19 @@ static void pgb_static(const char *image_name, abi_ulong orig_loaddr, |
945 | + if (ARM_COMMPAGE) { |
946 | + /* |
947 | + * Extend the allocation to include the commpage. |
948 | +- * For a 64-bit host, this is just 4GiB; for a 32-bit host, |
949 | +- * the address arithmetic will wrap around, but the difference |
950 | +- * will produce the correct allocation size. |
951 | ++ * For a 64-bit host, this is just 4GiB; for a 32-bit host we |
952 | ++ * need to ensure there is space bellow the guest_base so we |
953 | ++ * can map the commpage in the place needed when the address |
954 | ++ * arithmetic wraps around. |
955 | + */ |
956 | + if (sizeof(uintptr_t) == 8 || loaddr >= 0x80000000u) { |
957 | +- hiaddr = (uintptr_t)4 << 30; |
958 | ++ hiaddr = (uintptr_t) 4 << 30; |
959 | + } else { |
960 | +- loaddr = ARM_COMMPAGE & -align; |
961 | ++ offset = -(ARM_COMMPAGE & -align); |
962 | + } |
963 | + } |
964 | + |
965 | +- addr = pgb_find_hole(loaddr, hiaddr - loaddr, align); |
966 | ++ addr = pgb_find_hole(loaddr, hiaddr - loaddr, align, offset); |
967 | + if (addr == -1) { |
968 | + /* |
969 | + * If ARM_COMMPAGE, there *might* be a non-consecutive allocation |
970 | +@@ -2282,7 +2285,7 @@ static void pgb_dynamic(const char *image_name, long align) |
971 | + * just above that, and maximises the positive guest addresses. |
972 | + */ |
973 | + commpage = ARM_COMMPAGE & -align; |
974 | +- addr = pgb_find_hole(commpage, -commpage, align); |
975 | ++ addr = pgb_find_hole(commpage, -commpage, align, 0); |
976 | + assert(addr != -1); |
977 | + guest_base = addr; |
978 | + } |
979 | +-- |
980 | +2.28.0 |
981 | + |
982 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch b/debian/patches/ubuntu/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch |
983 | new file mode 100644 |
984 | index 0000000..97cd189 |
985 | --- /dev/null |
986 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-don-t-use-MAP_FIXED-in-pgd_find_hole_fall.patch |
987 | @@ -0,0 +1,78 @@ |
988 | +From 2667e069e7b5807c69f32109d930967bc1b222cb Mon Sep 17 00:00:00 2001 |
989 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
990 | +Date: Fri, 24 Jul 2020 07:45:01 +0100 |
991 | +Subject: [PATCH] linux-user: don't use MAP_FIXED in pgd_find_hole_fallback |
992 | +MIME-Version: 1.0 |
993 | +Content-Type: text/plain; charset=UTF-8 |
994 | +Content-Transfer-Encoding: 8bit |
995 | + |
996 | +Plain MAP_FIXED has the undesirable behaviour of splatting exiting |
997 | +maps so we don't actually achieve what we want when looking for gaps. |
998 | +We should be using MAP_FIXED_NOREPLACE. As this isn't always available |
999 | +we need to potentially check the returned address to see if the kernel |
1000 | +gave us what we asked for. |
1001 | + |
1002 | +Fixes: ad592e37dfc ("linux-user: provide fallback pgd_find_hole for bare chroots") |
1003 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
1004 | +Reviewed-by: Richard Henderson <richard.henderson@linaro.org> |
1005 | +Message-Id: <20200724064509.331-9-alex.bennee@linaro.org> |
1006 | + |
1007 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=2667e069e7b5807c69f32109d930967bc1b222cb |
1008 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
1009 | +Last-Update: 2020-08-19 |
1010 | + |
1011 | +--- |
1012 | + include/qemu/osdep.h | 3 +++ |
1013 | + linux-user/elfload.c | 10 ++++++---- |
1014 | + 2 files changed, 9 insertions(+), 4 deletions(-) |
1015 | + |
1016 | +diff --git a/include/qemu/osdep.h b/include/qemu/osdep.h |
1017 | +index 0b1298b3c9..20872e793e 100644 |
1018 | +--- a/include/qemu/osdep.h |
1019 | ++++ b/include/qemu/osdep.h |
1020 | +@@ -173,6 +173,9 @@ extern int daemon(int, int); |
1021 | + #ifndef MAP_ANONYMOUS |
1022 | + #define MAP_ANONYMOUS MAP_ANON |
1023 | + #endif |
1024 | ++#ifndef MAP_FIXED_NOREPLACE |
1025 | ++#define MAP_FIXED_NOREPLACE 0 |
1026 | ++#endif |
1027 | + #ifndef ENOMEDIUM |
1028 | + #define ENOMEDIUM ENODEV |
1029 | + #endif |
1030 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
1031 | +index 7e7f642332..fe9dfe795d 100644 |
1032 | +--- a/linux-user/elfload.c |
1033 | ++++ b/linux-user/elfload.c |
1034 | +@@ -2134,12 +2134,15 @@ static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, |
1035 | + /* we have run out of space */ |
1036 | + return -1; |
1037 | + } else { |
1038 | +- int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE | MAP_FIXED; |
1039 | ++ int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE | |
1040 | ++ MAP_FIXED_NOREPLACE; |
1041 | + void * mmap_start = mmap((void *) align_start, guest_size, |
1042 | + PROT_NONE, flags, -1, 0); |
1043 | + if (mmap_start != MAP_FAILED) { |
1044 | + munmap((void *) align_start, guest_size); |
1045 | +- return (uintptr_t) mmap_start + offset; |
1046 | ++ if (MAP_FIXED_NOREPLACE || mmap_start == (void *) align_start) { |
1047 | ++ return (uintptr_t) mmap_start + offset; |
1048 | ++ } |
1049 | + } |
1050 | + base += qemu_host_page_size; |
1051 | + } |
1052 | +@@ -2307,9 +2310,8 @@ static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr, |
1053 | + /* Widen the "image" to the entire reserved address space. */ |
1054 | + pgb_static(image_name, 0, reserved_va, align); |
1055 | + |
1056 | +-#ifdef MAP_FIXED_NOREPLACE |
1057 | ++ /* osdep.h defines this as 0 if it's missing */ |
1058 | + flags |= MAP_FIXED_NOREPLACE; |
1059 | +-#endif |
1060 | + |
1061 | + /* Reserve the memory on the host. */ |
1062 | + assert(guest_base != 0); |
1063 | +-- |
1064 | +2.28.0 |
1065 | + |
1066 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch b/debian/patches/ubuntu/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch |
1067 | new file mode 100644 |
1068 | index 0000000..4f365f2 |
1069 | --- /dev/null |
1070 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-elfload-use-MAP_FIXED_NOREPLACE-in-pgb_re.patch |
1071 | @@ -0,0 +1,66 @@ |
1072 | +From c1f6ad798c7bb328a6f387f2509bf86305383d37 Mon Sep 17 00:00:00 2001 |
1073 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
1074 | +Date: Wed, 1 Jul 2020 14:56:45 +0100 |
1075 | +Subject: [PATCH] linux-user/elfload: use MAP_FIXED_NOREPLACE in |
1076 | + pgb_reserved_va |
1077 | +MIME-Version: 1.0 |
1078 | +Content-Type: text/plain; charset=UTF-8 |
1079 | +Content-Transfer-Encoding: 8bit |
1080 | + |
1081 | +Given we assert the requested address matches what we asked we should |
1082 | +also make that clear in the mmap flags. Otherwise we see failures in |
1083 | +the GitLab environment for some currently unknown but allowable |
1084 | +reason. We use MAP_FIXED_NOREPLACE if we can so we don't just clobber |
1085 | +an existing mapping. Also include the strerror string for a bit more |
1086 | +info on failure. |
1087 | + |
1088 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
1089 | + |
1090 | +Message-Id: <20200701135652.1366-34-alex.bennee@linaro.org> |
1091 | + |
1092 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=c1f6ad798c7bb328a6f387f2509bf86305383d37 |
1093 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
1094 | +Last-Update: 2020-08-19 |
1095 | + |
1096 | +--- |
1097 | + linux-user/elfload.c | 10 +++++++--- |
1098 | + 1 file changed, 7 insertions(+), 3 deletions(-) |
1099 | + |
1100 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
1101 | +index b5cb21384a..7e7f642332 100644 |
1102 | +--- a/linux-user/elfload.c |
1103 | ++++ b/linux-user/elfload.c |
1104 | +@@ -2294,7 +2294,7 @@ static void pgb_dynamic(const char *image_name, long align) |
1105 | + static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr, |
1106 | + abi_ulong guest_hiaddr, long align) |
1107 | + { |
1108 | +- const int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE; |
1109 | ++ int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE; |
1110 | + void *addr, *test; |
1111 | + |
1112 | + if (guest_hiaddr > reserved_va) { |
1113 | +@@ -2307,15 +2307,19 @@ static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr, |
1114 | + /* Widen the "image" to the entire reserved address space. */ |
1115 | + pgb_static(image_name, 0, reserved_va, align); |
1116 | + |
1117 | ++#ifdef MAP_FIXED_NOREPLACE |
1118 | ++ flags |= MAP_FIXED_NOREPLACE; |
1119 | ++#endif |
1120 | ++ |
1121 | + /* Reserve the memory on the host. */ |
1122 | + assert(guest_base != 0); |
1123 | + test = g2h(0); |
1124 | + addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0); |
1125 | + if (addr == MAP_FAILED) { |
1126 | + error_report("Unable to reserve 0x%lx bytes of virtual address " |
1127 | +- "space for use as guest address space (check your " |
1128 | ++ "space (%s) for use as guest address space (check your " |
1129 | + "virtual memory ulimit setting or reserve less " |
1130 | +- "using -R option)", reserved_va); |
1131 | ++ "using -R option)", reserved_va, strerror(errno)); |
1132 | + exit(EXIT_FAILURE); |
1133 | + } |
1134 | + assert(addr == test); |
1135 | +-- |
1136 | +2.28.0 |
1137 | + |
1138 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch b/debian/patches/ubuntu/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch |
1139 | new file mode 100644 |
1140 | index 0000000..0d95c34 |
1141 | --- /dev/null |
1142 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-limit-check-to-HOST_LONG_BITS-TARGET_ABI_.patch |
1143 | @@ -0,0 +1,57 @@ |
1144 | +From a932eec49d9ec106c7952314ad1adc28f0986076 Mon Sep 17 00:00:00 2001 |
1145 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
1146 | +Date: Thu, 21 May 2020 14:57:48 +0100 |
1147 | +Subject: [PATCH] linux-user: limit check to HOST_LONG_BITS < TARGET_ABI_BITS |
1148 | +MIME-Version: 1.0 |
1149 | +Content-Type: text/plain; charset=UTF-8 |
1150 | +Content-Transfer-Encoding: 8bit |
1151 | + |
1152 | +Newer clangs rightly spot that you can never exceed the full address |
1153 | +space of 64 bit hosts with: |
1154 | + |
1155 | + linux-user/elfload.c:2076:41: error: result of comparison 'unsigned |
1156 | + long' > 18446744073709551615 is always false |
1157 | + [-Werror,-Wtautological-type-limit-compare] |
1158 | + 4685 if ((guest_hiaddr - guest_base) > ~(uintptr_t)0) { |
1159 | + 4686 ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~ |
1160 | + 4687 1 error generated. |
1161 | + |
1162 | +So lets limit the check to 32 bit hosts only. |
1163 | + |
1164 | +Fixes: ee94743034bf |
1165 | +Reported-by: Thomas Huth <thuth@redhat.com> |
1166 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
1167 | +Message-Id: <20200525131823.715-8-thuth@redhat.com> |
1168 | +[thuth: Use HOST_LONG_BITS < TARGET_ABI_BITS instead of HOST_LONG_BITS == 32] |
1169 | +Signed-off-by: Thomas Huth <thuth@redhat.com> |
1170 | + |
1171 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=a932eec49d9ec106c7952314ad1adc28f0986076 |
1172 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
1173 | +Last-Update: 2020-08-19 |
1174 | + |
1175 | +--- |
1176 | + linux-user/elfload.c | 2 ++ |
1177 | + 1 file changed, 2 insertions(+) |
1178 | + |
1179 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
1180 | +index 01a9323a63..ebc663ea0b 100644 |
1181 | +--- a/linux-user/elfload.c |
1182 | ++++ b/linux-user/elfload.c |
1183 | +@@ -2073,12 +2073,14 @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr, |
1184 | + exit(EXIT_FAILURE); |
1185 | + } |
1186 | + } else { |
1187 | ++#if HOST_LONG_BITS < TARGET_ABI_BITS |
1188 | + if ((guest_hiaddr - guest_base) > ~(uintptr_t)0) { |
1189 | + error_report("%s: requires more virtual address space " |
1190 | + "than the host can provide (0x%" PRIx64 ")", |
1191 | + image_name, (uint64_t)guest_hiaddr - guest_base); |
1192 | + exit(EXIT_FAILURE); |
1193 | + } |
1194 | ++#endif |
1195 | + } |
1196 | + |
1197 | + /* |
1198 | +-- |
1199 | +2.28.0 |
1200 | + |
1201 | diff --git a/debian/patches/ubuntu/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch b/debian/patches/ubuntu/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch |
1202 | new file mode 100644 |
1203 | index 0000000..4b124c3 |
1204 | --- /dev/null |
1205 | +++ b/debian/patches/ubuntu/lp1890881-linux-user-provide-fallback-pgd_find_hole-for-bare-c.patch |
1206 | @@ -0,0 +1,102 @@ |
1207 | +From ad592e37dfccf730378a44c5fa79acb603a7678d Mon Sep 17 00:00:00 2001 |
1208 | +From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org> |
1209 | +Date: Fri, 5 Jun 2020 16:49:26 +0100 |
1210 | +Subject: [PATCH] linux-user: provide fallback pgd_find_hole for bare chroots |
1211 | +MIME-Version: 1.0 |
1212 | +Content-Type: text/plain; charset=UTF-8 |
1213 | +Content-Transfer-Encoding: 8bit |
1214 | + |
1215 | +When running QEMU out of a chroot environment we may not have access |
1216 | +to /proc/self/maps. As there is no other "official" way to introspect |
1217 | +our memory map we need to fall back to the original technique of |
1218 | +repeatedly trying to mmap an address range until we find one that |
1219 | +works. |
1220 | + |
1221 | +Fortunately it's not quite as ugly as the original code given we |
1222 | +already re-factored the complications of dealing with the |
1223 | +ARM_COMMPAGE. We do make an attempt to skip over brk() which is about |
1224 | +the only concrete piece of information we have about the address map |
1225 | +at this moment. |
1226 | + |
1227 | +Fixes: ee9474303 |
1228 | +Reported-by: Peter Maydell <peter.maydell@linaro.org> |
1229 | +Signed-off-by: Alex Bennée <alex.bennee@linaro.org> |
1230 | +Message-Id: <20200605154929.26910-12-alex.bennee@linaro.org> |
1231 | + |
1232 | +Origin: upstream, https://git.qemu.org/?p=qemu.git;a=commit;h=ad592e37dfccf730378a44c5fa79acb603a7678d |
1233 | +Bug-Ubuntu: https://bugs.launchpad.net/bugs/1890881 |
1234 | +Last-Update: 2020-08-19 |
1235 | + |
1236 | +--- |
1237 | + linux-user/elfload.c | 48 ++++++++++++++++++++++++++++++++++++++++++++ |
1238 | + 1 file changed, 48 insertions(+) |
1239 | + |
1240 | +diff --git a/linux-user/elfload.c b/linux-user/elfload.c |
1241 | +index ebc663ea0b..475d243f3b 100644 |
1242 | +--- a/linux-user/elfload.c |
1243 | ++++ b/linux-user/elfload.c |
1244 | +@@ -2101,6 +2101,50 @@ static void pgb_have_guest_base(const char *image_name, abi_ulong guest_loaddr, |
1245 | + } |
1246 | + } |
1247 | + |
1248 | ++/** |
1249 | ++ * pgd_find_hole_fallback: potential mmap address |
1250 | ++ * @guest_size: size of available space |
1251 | ++ * @brk: location of break |
1252 | ++ * @align: memory alignment |
1253 | ++ * |
1254 | ++ * This is a fallback method for finding a hole in the host address |
1255 | ++ * space if we don't have the benefit of being able to access |
1256 | ++ * /proc/self/map. It can potentially take a very long time as we can |
1257 | ++ * only dumbly iterate up the host address space seeing if the |
1258 | ++ * allocation would work. |
1259 | ++ */ |
1260 | ++static uintptr_t pgd_find_hole_fallback(uintptr_t guest_size, uintptr_t brk, long align) |
1261 | ++{ |
1262 | ++ uintptr_t base; |
1263 | ++ |
1264 | ++ /* Start (aligned) at the bottom and work our way up */ |
1265 | ++ base = ROUND_UP(mmap_min_addr, align); |
1266 | ++ |
1267 | ++ while (true) { |
1268 | ++ uintptr_t align_start, end; |
1269 | ++ align_start = ROUND_UP(base, align); |
1270 | ++ end = align_start + guest_size; |
1271 | ++ |
1272 | ++ /* if brk is anywhere in the range give ourselves some room to grow. */ |
1273 | ++ if (align_start <= brk && brk < end) { |
1274 | ++ base = brk + (16 * MiB); |
1275 | ++ continue; |
1276 | ++ } else if (align_start + guest_size < align_start) { |
1277 | ++ /* we have run out of space */ |
1278 | ++ return -1; |
1279 | ++ } else { |
1280 | ++ int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE | MAP_FIXED; |
1281 | ++ void * mmap_start = mmap((void *) align_start, guest_size, |
1282 | ++ PROT_NONE, flags, -1, 0); |
1283 | ++ if (mmap_start != MAP_FAILED) { |
1284 | ++ munmap((void *) align_start, guest_size); |
1285 | ++ return (uintptr_t) mmap_start; |
1286 | ++ } |
1287 | ++ base += qemu_host_page_size; |
1288 | ++ } |
1289 | ++ } |
1290 | ++} |
1291 | ++ |
1292 | + /* Return value for guest_base, or -1 if no hole found. */ |
1293 | + static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
1294 | + long align) |
1295 | +@@ -2116,6 +2160,10 @@ static uintptr_t pgb_find_hole(uintptr_t guest_loaddr, uintptr_t guest_size, |
1296 | + /* Read brk after we've read the maps, which will malloc. */ |
1297 | + brk = (uintptr_t)sbrk(0); |
1298 | + |
1299 | ++ if (!maps) { |
1300 | ++ return pgd_find_hole_fallback(guest_size, brk, align); |
1301 | ++ } |
1302 | ++ |
1303 | + /* The first hole is before the first map entry. */ |
1304 | + this_start = mmap_min_addr; |
1305 | + |
1306 | +-- |
1307 | +2.28.0 |
1308 | + |
PPA: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/4214/+packages
Bugs:
- https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1891187
- https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1890881