~nathan-sweetman/ubuntu/+source/linux/+git/jammy:master

Last commit made on 2022-07-12
Get this branch:
git clone -b master https://git.launchpad.net/~nathan-sweetman/ubuntu/+source/linux/+git/jammy
Only Nathan Sweetman can upload to this branch.

Recent commits

b850948... by Stefan Bader

UBUNTU: Ubuntu-5.15.0-43.46

Signed-off-by: Stefan Bader <email address hidden>

0a29860... by Stefan Bader

UBUNTU: debian/dkms-versions -- update from kernel-versions (main/2022.07.11)

BugLink: https://bugs.launchpad.net/bugs/1786013
Signed-off-by: Stefan Bader <email address hidden>

15ab2b7... by Stefan Bader

UBUNTU: link-to-tracker: update tracking bug

BugLink: https://bugs.launchpad.net/bugs/1981243
Properties: no-test-build
Signed-off-by: Stefan Bader <email address hidden>

1afe66d... by Yu Kuai <email address hidden>

nbd: fix io hung while disconnecting device

BugLink: https://bugs.launchpad.net/bugs/1896350

In our tests, "qemu-nbd" triggers an I/O hang:

INFO: task qemu-nbd:11445 blocked for more than 368 seconds.
      Not tainted 5.18.0-rc3-next-20220422-00003-g2176915513ca #884
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:qemu-nbd state:D stack: 0 pid:11445 ppid: 1 flags:0x00000000
Call Trace:
 <TASK>
 __schedule+0x480/0x1050
 ? _raw_spin_lock_irqsave+0x3e/0xb0
 schedule+0x9c/0x1b0
 blk_mq_freeze_queue_wait+0x9d/0xf0
 ? ipi_rseq+0x70/0x70
 blk_mq_freeze_queue+0x2b/0x40
 nbd_add_socket+0x6b/0x270 [nbd]
 nbd_ioctl+0x383/0x510 [nbd]
 blkdev_ioctl+0x18e/0x3e0
 __x64_sys_ioctl+0xac/0x120
 do_syscall_64+0x35/0x80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fd8ff706577
RSP: 002b:00007fd8fcdfebf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000040000000 RCX: 00007fd8ff706577
RDX: 000000000000000d RSI: 000000000000ab00 RDI: 000000000000000f
RBP: 000000000000000f R08: 000000000000fbe8 R09: 000055fe497c62b0
R10: 00000002aff20000 R11: 0000000000000246 R12: 000000000000006d
R13: 0000000000000000 R14: 00007ffe82dc5e70 R15: 00007fd8fcdff9c0

"qemu-nbd -d" will call ioctl 'NBD_DISCONNECT' first; however, the
following message was found:

block nbd0: Send disconnect failed -32

which indicates that something is wrong with the server. Then,
"qemu-nbd -d" will call ioctl 'NBD_CLEAR_SOCK'; however, that ioctl can't
clear requests after commit 2516ab1543fd ("nbd: only clear the queue on
device teardown"). In the meantime, the requests can't complete through the
timeout handler either, because nbd_xmit_timeout() will always return
'BLK_EH_RESET_TIMER', which means such requests will never be completed in
this situation.

Now that the flag 'NBD_CMD_INFLIGHT' can make sure requests won't be
completed multiple times, switch back to calling nbd_clear_sock() in
nbd_clear_sock_ioctl(), so that inflight requests can be cleared.

Signed-off-by: Yu Kuai <email address hidden>
Reviewed-by: Josef Bacik <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Jens Axboe <email address hidden>

(cherry picked from commit 09dadb5985023e27d4740ebd17e6fea4640110e5)
Signed-off-by: Matthew Ruffell <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Bartlomiej Zolnierkiewicz <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

8b95e9d... by Yu Kuai <email address hidden>

nbd: don't clear 'NBD_CMD_INFLIGHT' flag if request is not completed

BugLink: https://bugs.launchpad.net/bugs/1896350

Otherwise I/O will hang, because a request will only be completed if its
cmd has the 'NBD_CMD_INFLIGHT' flag set.

Fixes: 07175cb1baf4 ("nbd: make sure request completion won't concurrent")
Signed-off-by: Yu Kuai <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Jens Axboe <email address hidden>

(backported from 2895f1831e911ca87d4efdf43e35eb72a0c7e66e)
[mruffell: context adjustment removing percpu_ref_put in recv_work()]
Signed-off-by: Matthew Ruffell <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Bartlomiej Zolnierkiewicz <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

da4fa02... by Yu Kuai <email address hidden>

nbd: make sure request completion won't concurrent

BugLink: https://bugs.launchpad.net/bugs/1896350

commit cddce0116058 ("nbd: Aovid double completion of a request")
tried to fix nbd_clear_que() and recv_work() completing a
request concurrently. However, the problem still exists:

t1 t2 t3

nbd_disconnect_and_put
 flush_workqueue
                      recv_work
                       blk_mq_complete_request
                        blk_mq_complete_request_remote -> this is true
                         WRITE_ONCE(rq->state, MQ_RQ_COMPLETE)
                          blk_mq_raise_softirq
                                             blk_done_softirq
                                              blk_complete_reqs
                                               nbd_complete_rq
                                                blk_mq_end_request
                                                 blk_mq_free_request
                                                  WRITE_ONCE(rq->state, MQ_RQ_IDLE)
  nbd_clear_que
   blk_mq_tagset_busy_iter
    nbd_clear_req
                                                   __blk_mq_free_request
                                                    blk_mq_put_tag
     blk_mq_complete_request -> complete again

There are three places where a request can be completed in nbd:
recv_work(), nbd_clear_que() and nbd_xmit_timeout(). Since they
all hold cmd->lock before completing the request, the problem is
easy to avoid by setting and checking a cmd flag.

Signed-off-by: Yu Kuai <email address hidden>
Reviewed-by: Ming Lei <email address hidden>
Reviewed-by: Josef Bacik <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Jens Axboe <email address hidden>

(cherry picked from 07175cb1baf4c51051b1fbd391097e349f9a02a9)
Signed-off-by: Matthew Ruffell <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Bartlomiej Zolnierkiewicz <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

6ce2185... by Yu Kuai <email address hidden>

nbd: don't handle response without a corresponding request message

BugLink: https://bugs.launchpad.net/bugs/1896350

While handling a response message from the server, nbd_read_stat()
will try to get the request by tag and then complete it. However,
this is problematic if nbd hasn't sent a corresponding request
message:

t1 t2
                        submit_bio
                         nbd_queue_rq
                          blk_mq_start_request
recv_work
 nbd_read_stat
  blk_mq_tag_to_rq
 blk_mq_complete_request
                          nbd_send_cmd

Thus add a new cmd flag, 'NBD_CMD_INFLIGHT'; it is set in
nbd_send_cmd() and checked in nbd_read_stat().

Note that this patch can't fix that blk_mq_tag_to_rq() might
return a freed request; that will be fixed in following
patches.

Signed-off-by: Yu Kuai <email address hidden>
Reviewed-by: Ming Lei <email address hidden>
Reviewed-by: Josef Bacik <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Jens Axboe <email address hidden>

(cherry picked from 4e6eef5dc25b528e08ac5b5f64f6ca9d9987241d)
Signed-off-by: Matthew Ruffell <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Acked-by: Bartlomiej Zolnierkiewicz <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

252cba8... by Michael Reed

UBUNTU: [Config] Enable config option CONFIG_PCIE_EDR

BugLink: https://bugs.launchpad.net/bugs/1965241

PCIE_EDR enables support for handling the events generated when a PCIe
port disconnects to contain an error. Per the comments in the commit that
adds this option and its help text, if the OS enables DPC (Downstream Port
Containment), which allows it to control PCIe ports in parallel to the
firmware, it should also enable EDR.
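Under that reasoning, the two options would typically be enabled together in the kernel config. A sketch of such a fragment (the option names match mainline Kconfig; whether to build them in is a distro choice):

```
# DPC lets the OS contain errors at a downstream port
CONFIG_PCIE_DPC=y
# EDR handles the resulting port-disconnect events
CONFIG_PCIE_EDR=y
```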

Signed-off-by: Michael Reed <email address hidden>
Acked-by: Stefan Bader <email address hidden>
[Added annotation enforcement and a bug reference, and adjusted the
 annotation to force arm64 to the same setting as before]
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

b6ab5bd... by Lukas Wunner <email address hidden>

PCI: pciehp: Ignore Link Down/Up caused by error-induced Hot Reset

Stuart Hayes reports that an error handled by DPC at a Root Port results
in pciehp gratuitously bringing down a subordinate hotplug port:

  RP -- UP -- DP -- UP -- DP (hotplug) -- EP

pciehp brings the slot down because the Link to the Endpoint goes down.
That is caused by a Hot Reset being propagated as a result of DPC.
Per PCIe Base Spec 5.0, section 6.6.1 "Conventional Reset":

  For a Switch, the following must cause a hot reset to be sent on all
  Downstream Ports: [...]

  * The Data Link Layer of the Upstream Port reporting DL_Down status.
    In Switches that support Link speeds greater than 5.0 GT/s, the
    Upstream Port must direct the LTSSM of each Downstream Port to the
    Hot Reset state, but not hold the LTSSMs in that state. This permits
    each Downstream Port to begin Link training immediately after its
    hot reset completes. This behavior is recommended for all Switches.

  * Receiving a hot reset on the Upstream Port.

Once DPC recovers, pcie_do_recovery() walks down the hierarchy and
invokes pcie_portdrv_slot_reset() to restore each port's config space.
At that point, a hotplug interrupt is signaled per PCIe Base Spec r5.0,
section 6.7.3.4 "Software Notification of Hot-Plug Events":

  If the Port is enabled for edge-triggered interrupt signaling using
  MSI or MSI-X, an interrupt message must be sent every time the logical
  AND of the following conditions transitions from FALSE to TRUE: [...]

  * The Hot-Plug Interrupt Enable bit in the Slot Control register is
    set to 1b.

  * At least one hot-plug event status bit in the Slot Status register
    and its associated enable bit in the Slot Control register are both
    set to 1b.

Prevent pciehp from gratuitously bringing down the slot by clearing the
error-induced Data Link Layer State Changed event before restoring
config space. Afterwards, check whether the link has unexpectedly
failed to retrain and synthesize a DLLSC event if so.

Allow each pcie_port_service_driver (one of them being pciehp) to define
a slot_reset callback and re-use the existing pm_iter() function to
iterate over the callbacks.

Thereby, the Endpoint driver remains bound throughout error recovery and
may restore the device to working state.

Surprise removal during error recovery is detected through a Presence
Detect Changed event. The hotplug port is expected to not signal that
event as a result of a Hot Reset.

The issue isn't DPC-specific; it also occurs when an error is handled by
AER through aer_root_reset(). So while the issue was noticed only now,
it's been around since 2006, when AER support was first introduced.

BugLink: https://bugs.launchpad.net/bugs/1965241

[bhelgaas: drop PCI_ERROR_RECOVERY Kconfig, split pm_iter() rename to
preparatory patch]
Link: https://<email address hidden>/
Fixes: 6c2b374d7485 ("PCI-Express AER implemetation: AER core and aerdriver")
Link: https://lore.kernel.org<email address hidden>
Reported-by: Stuart Hayes <email address hidden>
Tested-by: Stuart Hayes <email address hidden>
Signed-off-by: Lukas Wunner <email address hidden>
Signed-off-by: Bjorn Helgaas <email address hidden>
Cc: <email address hidden> # v2.6.19+: ba952824e6c1: PCI/portdrv: Report reset for frozen channel
Cc: Keith Busch <email address hidden>

(cherry picked from commit ea401499e943c307e6d44af6c2b4e068643e7884)
Signed-off-by: Michael Reed <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>

1b8fc84... by Lukas Wunner <email address hidden>

PCI/portdrv: Rename pm_iter() to pcie_port_device_iter()

Rename pm_iter() to pcie_port_device_iter() and make it visible outside
CONFIG_PM and portdrv_core.c so it can be used for pciehp slot reset
recovery.

BugLink: https://bugs.launchpad.net/bugs/1965241

[bhelgaas: split into its own patch]
Link: https://<email address hidden>/
Link: https://lore.kernel.org<email address hidden>
Signed-off-by: Lukas Wunner <email address hidden>
Signed-off-by: Bjorn Helgaas <email address hidden>

(cherry picked from commit 3134689f98f9e09004a4727370adc46e7635b4be)
Signed-off-by: Michael Reed <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Kleber Sacilotto de Souza <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>