This reverts commit 3829acb7f33d2bcf746b2df598c9a3066713fc2d.
With the broken "underlay in a VRF" test case in test_vxlan_under_vrf.sh
now fixed, we should remove the SAUCE patches that mark this failure as
an expected failure, so that future regressions can be caught. This also
reduces maintenance cost.
Signed-off-by: Po-Hsu Lin <email address hidden>
Acked-by: Luke Nowakowski-Krijger <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Luke Nowakowski-Krijger <email address hidden>
This reverts commit 7d9c6353c36a560ca73ca1bea35e16f62b4bec69.
With the broken "underlay in a VRF" test case in test_vxlan_under_vrf.sh
now fixed, we should remove the SAUCE patches that mark this failure as
an expected failure, so that future regressions can be caught. This also
reduces maintenance cost.
Signed-off-by: Po-Hsu Lin <email address hidden>
Acked-by: Luke Nowakowski-Krijger <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Luke Nowakowski-Krijger <email address hidden>
f25bf64... by Joakim Tjernlund <email address hidden>
This change adds a new flag to declare a controller's wideband speech
capability. This is required since no reliable mechanism exists over HCI
to query the controller's and driver's compatibility with wideband
speech.
Signed-off-by: Alain Michaud <email address hidden>
Signed-off-by: Marcel Holtmann <email address hidden>
(cherry picked from commit 3e4e3f73b9f4944ebd8100dbe107f2325aa79c6d)
Signed-off-by: Chia-Lin Kao (AceLan) <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>
From: Christian Borntraeger <email address hidden>
The switch to a keyed guest does not require a classic SSKE, as the other
guest CPUs do not access the key before the switch is complete. Using the
non-quiescing (NQ) SSKE is faster, especially with multiple guests.
Signed-off-by: Christian Borntraeger <email address hidden>
Suggested-by: Janis Schoetterl-Glausch <email address hidden>
Reviewed-by: Claudio Imbrenda <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Christian Borntraeger <email address hidden>
Signed-off-by: Heiko Carstens <email address hidden>
(cherry picked from commit 3ae11dbcfac906a8c3a480e98660a823130dc16a)
Signed-off-by: Frank Heimes <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>
20055f2... by Christian Borntraeger <email address hidden>
s390/gmap: voluntarily schedule during key setting
With large guests, or many guests, using storage keys, it is possible to
create large latencies or stalls during initial key setting:
rcu: INFO: rcu_sched self-detected stall on CPU
rcu: 18-....: (2099 ticks this GP) idle=54e/1/0x4000000000000002 softirq=35598716/35598716 fqs=998
(t=2100 jiffies g=155867385 q=20879)
Task dump for CPU 18:
CPU 1/KVM R running task 0 1030947 256019 0x06000004
Call Trace:
sched_show_task
rcu_dump_cpu_stacks
rcu_sched_clock_irq
update_process_times
tick_sched_handle
tick_sched_timer
__hrtimer_run_queues
hrtimer_interrupt
do_IRQ
ext_int_handler
ptep_zap_key
The mmap lock is held during the page walk, but since it is a semaphore,
scheduling is still possible; the same is true for the KVM SRCU.
To minimize overhead, do this only once per segment table entry or large
page.
Signed-off-by: Christian Borntraeger <email address hidden>
Reviewed-by: Alexander Gordeev <email address hidden>
Reviewed-by: Claudio Imbrenda <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Christian Borntraeger <email address hidden>
Signed-off-by: Heiko Carstens <email address hidden>
(cherry picked from commit 6d5946274df1fff539a7eece458a43be733d1db8)
Signed-off-by: Frank Heimes <email address hidden>
Acked-by: Stefan Bader <email address hidden>
Acked-by: Tim Gardner <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>
In case the vma will continue to be used after unlinking its associated
anon_vma, we need to reset the vma->anon_vma pointer to NULL, so that
when a fault later happens within this vma again, a new anon_vma will be
prepared. This way, the vma will only be checked for reverse mapping of
pages that were faulted in after the unlink_anon_vmas call.
Currently, the mremap with MREMAP_DONTUNMAP scenario continues to use
the vma after moving its page table entries to a new vma. For other
scenarios, the vma itself is freed after the unlink_anon_vmas call.
Link: https://<email address hidden>
Signed-off-by: Li Xinhai <email address hidden>
Cc: Andrea Arcangeli <email address hidden>
Cc: Brian Geffon <email address hidden>
Cc: Kirill A. Shutemov <email address hidden>
Cc: Lokesh Gidra <email address hidden>
Cc: Minchan Kim <email address hidden>
Cc: Vlastimil Babka <email address hidden>
Signed-off-by: Andrew Morton <email address hidden>
Signed-off-by: Linus Torvalds <email address hidden>
(cherry picked from commit ee8ab1903e3d912d8f10bedbf96c3b6a1c8cbede)
Signed-off-by: Tim Gardner <email address hidden>
Acked-by: Khaled Elmously <email address hidden>
Acked-by: Cengiz Can <email address hidden>
Signed-off-by: Stefan Bader <email address hidden>