3c1ccfd...
by
Daniel Wagner <email address hidden>
nvmet: expose max queues to configfs
Allow setting the maximum number of queues the target supports. This is
useful for testing the host's reconnect behavior when the number of
supported queues changes.
Signed-off-by: Daniel Wagner <email address hidden>
Reviewed-by: Hannes Reinecke <email address hidden>
Signed-off-by: Christoph Hellwig <email address hidden>
(cherry picked from commit 3e980f5995e0bb4d86fef873a9c9ad66721580d0)
Signed-off-by: Michael Reed <email address hidden>
908e21f...
by
Daniel Wagner <email address hidden>
nvme-rdma: handle number of queue changes
On reconnect, the number of queues might have changed.
If more queues are available than before, we try to access queues
which are not yet initialized.
If fewer queues are available than before, the connection attempt
fails because the target no longer supports the old number of queues,
and we end up in a reconnect loop.
Thus, only start the queues which are currently present in the tagset,
limited by the number of available queues. Then update the tagset, and
any new queues can be started.
Signed-off-by: Daniel Wagner <email address hidden>
Reviewed-by: Sagi Grimberg <email address hidden>
Reviewed-by: Hannes Reinecke <email address hidden>
Signed-off-by: Christoph Hellwig <email address hidden>
(cherry picked from commit 1c467e259599864ec925d5b85066a0960320fb3c)
Signed-off-by: Michael Reed <email address hidden>
6c8ff15...
by
Daniel Wagner <email address hidden>
nvme-tcp: handle number of queue changes
On reconnect, the number of queues might have changed.
If more queues are available than before, we try to access queues
which are not yet initialized.
If fewer queues are available than before, the connection attempt
fails because the target no longer supports the old number of queues,
and we end up in a reconnect loop.
Thus, only start the queues which are currently present in the tagset,
limited by the number of available queues. Then update the tagset, and
any new queues can be started.
Signed-off-by: Daniel Wagner <email address hidden>
Reviewed-by: Sagi Grimberg <email address hidden>
Reviewed-by: Hannes Reinecke <email address hidden>
Signed-off-by: Christoph Hellwig <email address hidden>
(cherry picked from commit 09035f86496d8dea7a05a07f6dcb8083c0a3d885)
Signed-off-by: Michael Reed <email address hidden>
nvme-fabrics: parse nvme connect Linux error codes
This fixes the incorrect assumption that errval is always an unsigned
NVMe status code; it may also be a negative Linux error code.
Signed-off-by: Amit Engel <email address hidden>
Signed-off-by: Christoph Hellwig <email address hidden>
(cherry picked from commit ec9e96b5230148294c7abcaf3a4c592d3720b62d)
Signed-off-by: Michael Reed <email address hidden>
Fixes a problem described in 50252e4b5e989
("aio: fix use-after-free due to missing POLLFREE handling")
and copies the approach used there.
In short, we have to forcibly eject a poll entry when we encounter
POLLFREE. We can't rely on io_poll_get_ownership(), as we can't wait
for potentially running task-work handlers, so we use the fact that
the wait queues are RCU-freed. See Eric's patch and comments for more
details.
Reported-by: Eric Biggers <email address hidden>
Link: https://<email address hidden>
Reported-and-tested-by: <email address hidden>
Fixes: 221c5eb233823 ("io_uring: add support for IORING_OP_POLL")
Signed-off-by: Pavel Begunkov <email address hidden>
Link: https://lore.kernel.org/r/4ed56b6f548f7ea337603a82315750449412748a<email address hidden>
[axboe: drop non-functional change from patch]
Signed-off-by: Jens Axboe <email address hidden>
[pavel: backport]
Signed-off-by: Pavel Begunkov <email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>
(cherry picked from commit e9d7ca0c4640cbebe6840ee3bac66a25a9bacaf5 linux-5.15.y)
CVE-2022-3176
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Kamal Mostafa <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>
1383e35...
by
Pavel Begunkov <email address hidden>
1bc84c40088 ("io_uring: remove poll entry from list when canceling all")
removed a potential overflow condition for the poll references. They
are currently limited to 20 bits, even though 31 bits are available;
the upper bit is used to mark for cancelation.
Bump the poll-ref space to 31 bits, making that kind of situation much
harder to trigger in general. Overflow checking and handling will be
added separately.
Fixes: aa43477b0402 ("io_uring: poll rework")
Signed-off-by: Jens Axboe <email address hidden>
[pavel: backport]
Signed-off-by: Pavel Begunkov <email address hidden>
Signed-off-by: Greg Kroah-Hartman <email address hidden>
(cherry picked from commit c41e79a0c46457dc87d56db59c4dc93be2e38568 linux-5.15.y)
CVE-2022-3176
Signed-off-by: Thadeu Lima de Souza Cascardo <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Kamal Mostafa <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>