~proxmox-arm/proxmox-amr64/+git/qemu-server:stable-7

Last commit made on 2024-04-16
Get this branch:
git clone -b stable-7 https://git.launchpad.net/~proxmox-arm/proxmox-amr64/+git/qemu-server

Recent commits

27bc41f... by Fiona Ebner <email address hidden>

bump version to 7.4-6

Signed-off-by: Fiona Ebner <email address hidden>

62e5c96... by Fiona Ebner <email address hidden>

block resize: avoid passing zero size to QMP command

Commit 7246e8f9 ("Set zero $size and continue if volume_resize()
returns false") mentions that this is needed for "some storages with
backing block devices to do online resize" and since this patch came
together [0] with pve-storage commit a4aee43 ("Fix RBD resize with
krbd option enabled."), it's safe to assume that RBD with krbd is
meant. But it should be the same situation for any external plugin
relying on the same behavior.

Other storages backed by block devices, like LVM(-thin) and ZFS,
return 1 and the new size respectively, and that code is older than
the above-mentioned commits. So the RBD plugin really should just have
returned a positive value to be in line with those, and there should
be no need to pass 0 to the block_resize QMP command either.

Passing 0 is a hack in any case, because the block_resize QMP command
does no special handling for the value 0. It's just that, for a block
device, QEMU won't try to resize it (and thus won't fail for
shrinkage). But the size in the raw driver's BlockDriverState is
temporarily set to 0 (which is not nice) until the sector count is
refreshed: raw_co_getlength is then called, which queries the new size
and, as a side effect, sets the size in the raw driver's
BlockDriverState again. Since bdrv_getlength is a coroutine wrapper
starting with QEMU 8.0.0, it's just better to avoid setting a
completely wrong value even temporarily. Just pass the actually
requested size, as is done for LVM(-thin) and ZFS.

Since this patch was originally written, Friedrich found that this
can actually cause real issues:
1. Start a VM with an RBD image without krbd
2. Change the storage config to use krbd
3. Resize the disk
Likely, this is because the resize via the storage layer and the QMP
command resizing the disk to "0" happen simultaneously. The exact
reason has not yet been determined, but the issue is gone in Proxmox
VE 8 and reappears after reverting this patch.

Long-term, it makes sense not to rely on the storage flag, but to look
at how the disk is actually attached in QEMU to decide how to do the
resize.

[0]: https://lists.proxmox.com/pipermail/pve-devel/2017-January/025060.html

Signed-off-by: Fiona Ebner <email address hidden>
(cherry picked from commit 2e4357c537287edd47d6031fec8bffc7b0ce2425)
[FE: mention actual issue in commit message]
Signed-off-by: Fiona Ebner <email address hidden>
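
In code, the change boils down to roughly the following (a sketch, not
the verbatim diff; mon_cmd, $deviceid and the other names are the
surrounding qemu-server Perl helpers and variables):

    # before: plugins returning a false value from volume_resize()
    # (RBD with krbd) caused size=0 to be passed on to QMP:
    # $size = 0 if !PVE::Storage::volume_resize($storecfg, $volid, $size, $running);

    # after: resize via the storage layer, then tell QEMU the actually
    # requested size, matching the LVM(-thin) and ZFS code paths
    PVE::Storage::volume_resize($storecfg, $volid, $size, $running);
    mon_cmd($vmid, "block_resize", device => $deviceid, size => int($size));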

dbd435c... by Thomas Lamprecht

bump version to 7.4-5

Signed-off-by: Thomas Lamprecht <email address hidden>

042b515... by Fabian Grünbichler

fix #4822: vzdump: fix pbs encryption for no-disk guests

These guests are backed up directly with proxmox-backup-client, and
the invocation was missing the key parameters.

Signed-off-by: Fabian Grünbichler <email address hidden>
Signed-off-by: Thomas Lamprecht <email address hidden>
(cherry picked from commit fbd3dde73543e7715ca323bebea539db1a95d480)
Signed-off-by: Thomas Lamprecht <email address hidden>
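
A sketch of the kind of fix (illustrative, not the exact patch; $cmd
and $keyfile are hypothetical names for the proxmox-backup-client
argument list and the storage's encryption key path):

    # pass the PBS storage's encryption key along, as is already done
    # for guests backed up via QEMU's backup infrastructure
    push @$cmd, '--keyfile', $keyfile if defined($keyfile) && -e $keyfile;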

260100e... by Fabian Grünbichler

fix #4085: properly activate cicustom storage(s)

PVE::Storage::path() neither activates the storage of the passed-in
volume nor ensures that the returned value is actually a file or block
device, so this fixes two issues at once.
PVE::Storage::abs_filesystem_path() takes care of both, while still
calling path() under the hood (since $volid here is always a proper
volume ID, unless we change the cicustom schema at some point in the
future).

Reviewed-by: Fiona Ebner <email address hidden>
Signed-off-by: Fabian Grünbichler <email address hidden>
(cherry picked from commit 9946d6fa576cc33ab979005c79d692a0724a60e1)
Signed-off-by: Thomas Lamprecht <email address hidden>
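
A minimal sketch of the switch (both functions are existing
PVE::Storage APIs; $storecfg and $volid as in the surrounding code):

    # path() only maps the volume ID to a path:
    # my $path = PVE::Storage::path($storecfg, $volid);

    # abs_filesystem_path() additionally activates the volume's storage
    # and ensures the result is an existing file or block device
    my $path = PVE::Storage::abs_filesystem_path($storecfg, $volid);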

878be3a... by Alexandre Derumier

nbd-stop: increase timeout to 25s

Stopping the NBD server can seemingly take a bit longer than expected,
and waiting a bit longer is better than erroring out during migration.

Signed-off-by: Alexandre Derumier <email address hidden>
Reviewed-by: Fiona Ebner <email address hidden>
(cherry picked from commit 6cb2338f5338b47b960b71e0bcd1dd08ca5b8054)
Signed-off-by: Thomas Lamprecht <email address hidden>
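
A sketch of the change (assuming, as for other QMP calls in
qemu-server, that mon_cmd accepts a per-command timeout):

    sub nbd_stop {
        my ($vmid) = @_;

        # give the NBD server up to 25 seconds to shut down instead of
        # failing the migration with the shorter default timeout
        mon_cmd($vmid, 'nbd-server-stop', timeout => 25);
    }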

a6bc3e0... by Fiona Ebner <email address hidden>

fix #4522: api: vncproxy: also set environment variable for ticket without websocket

Since commit 2dc0eb61 ("qm: assume correct VNC setup in 'vncproxy',
disallow passwordless"), 'qm vncproxy' simply fails when the
LC_PVE_TICKET environment variable is not set. As the ticket is not
only required in combination with the websocket parameter, drop that
conditional.

For the non-serial case, this was the last remaining effect of the
'websocket' parameter, so update the parameter description.

Signed-off-by: Fiona Ebner <email address hidden>
(cherry picked from commit 62c190492154d932c27ace030c0e84eda5f81a3f)
Signed-off-by: Thomas Lamprecht <email address hidden>
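
A minimal sketch of the effect (illustrative, not the verbatim diff):
the ticket is now always exported for 'qm vncproxy', not only when the
websocket parameter is set:

    # before: $ENV{LC_PVE_TICKET} = $ticket if $websocket;
    $ENV{LC_PVE_TICKET} = $ticket;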

4f044b6... by Fiona Ebner <email address hidden>

api: vncproxy: update description of websocket parameter

Since commit 3e7567e0 ("do not use novnc wsproxy"), the websocket
upgrade is done via the HTTP server.

Signed-off-by: Fiona Ebner <email address hidden>
(cherry picked from commit 876d993886c1d674fc004c8bf1895316dc5d4a94)
Signed-off-by: Thomas Lamprecht <email address hidden>

2e6ea19... by Friedrich Weber <email address hidden>

vm start: set higher timeout if using PCI passthrough

The default VM startup timeout is `max(30, VM memory in GiB)` seconds.
Multiple reports in the forum [0] [1] and the bug tracker [2] suggest
this is too short when using PCI passthrough with a large amount of VM
memory, since QEMU needs to map the whole memory during startup (see
comment #2 in [2]). As a result, VM startup fails with "got timeout".

To work around this, set a larger default timeout if at least one PCI
device is passed through. The question remains how to choose an
appropriate timeout. Users reported the following startup times:

ref | RAM | time | ratio (s/GiB)
---------------------------------
[1] | 60G | 135s | 2.25
[1] | 70G | 157s | 2.24
[1] | 80G | 277s | 3.46
[2] | 65G | 213s | 3.28
[2] | 96G | >290s | >3.02

The data does not really indicate any simple (e.g. linear)
relationship between RAM and startup time (not even among data from
the same source). However, to keep the heuristic simple, assume linear
growth and multiply the default timeout by 4 if at least one
`hostpci[n]` option is present, obtaining `4 * max(30, VM memory in
GiB)`. This covers all cases above and should still leave some
headroom.

[0]: https://forum.proxmox.com/threads/83765/post-552071
[1]: https://forum.proxmox.com/threads/126398/post-592826
[2]: https://bugzilla.proxmox.com/show_bug.cgi?id=3502

Suggested-by: Fiona Ebner <email address hidden>
Signed-off-by: Friedrich Weber <email address hidden>
(cherry picked from commit 95f1de689e3c898382f8fcc721b024718a0c910a)
Signed-off-by: Thomas Lamprecht <email address hidden>
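
A minimal sketch of the heuristic (illustrative; the actual helper in
qemu-server may differ; memory in the VM config is given in MiB):

    use List::Util qw(max);

    # default: at least 30 seconds, growing linearly with VM memory
    my $timeout = max(30, int(($conf->{memory} // 512) / 1024));

    # PCI passthrough: QEMU maps the whole guest memory during startup,
    # which can take much longer, so quadruple the timeout
    $timeout *= 4 if grep { /^hostpci\d+$/ } keys %$conf;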

6d376be... by Fiona Ebner <email address hidden>

fix #2816: restore: remove timeout when allocating disks

Ten minutes is not long enough when disks are large and/or network
storage is used and preallocation is not disabled. The default for
qcow2 is metadata preallocation, so there are still reports of the
issue [0][1]. If allocation really does not finish, as the comment
describing the timeout feared it might, just let the user cancel it.

Also note that when restoring a PBS backup, there is no timeout for
disk allocation, and there don't seem to be any user complaints yet.

The 5 second timeout for receiving the config from vma is kept,
because certain corruptions in the VMA header can lead to the
operation hanging there.

There is no need for the $tmp variable before setting back the old
timeout, because that timeout is at least one second, so in practice
we will always manage to set $oldtimeout to undef in time. Currently,
there shouldn't even be an outer timeout in the first place, because
the only call path leading here is via the create API (also used by
qmrestore), neither of which sets a timeout.

[0]: https://forum.proxmox.com/threads/126825/
[1]: https://forum.proxmox.com/threads/128093/

Signed-off-by: Fiona Ebner <email address hidden>
(cherry picked from commit 853757ccec20d5e84d6a1cc656a66beaf3d3e94c)
Signed-off-by: Thomas Lamprecht <email address hidden>
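
A sketch of the relevant alarm handling (simplified; $oldtimeout holds
the previously active outer alarm, if any):

    # the 5 second alarm only guards reading the config from the VMA
    # stream; before allocating disks, restore the outer timeout (or
    # disable the alarm entirely), so allocation can take as long as it
    # needs, without an intermediate $tmp variable
    alarm($oldtimeout || 0);    # alarm(0) disables the alarm
    $oldtimeout = undef;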