~john-cabaj/ubuntu/+source/linux-gcp/+git/jammy-gcp:dqo-qpl_feature

Last commit made on 2024-01-12
Get this branch:
git clone -b dqo-qpl_feature https://git.launchpad.net/~john-cabaj/ubuntu/+source/linux-gcp/+git/jammy-gcp
Only John Cabaj can upload to this branch.

Branch information

Name:
dqo-qpl_feature
Repository:
lp:~john-cabaj/ubuntu/+source/linux-gcp/+git/jammy-gcp

Recent commits

1c11c50... by Coco Li <email address hidden>

gro: add ability to control gro max packet size

BugLink: https://bugs.launchpad.net/bugs/2040522

Eric Dumazet suggested allowing users to modify the maximum GRO packet size.

We have seen GRO being disabled by users of appliances (such as
wifi access points) because of claimed bufferbloat issues, or
worked around in sch_cake by splitting GRO/GSO packets.

Instead of disabling GRO completely, one can choose to limit
the maximum packet size of GRO packets, depending on their
latency constraints.

This patch adds a per-device gro_max_size attribute
that can be changed with the ip link command.

ip link set dev eth0 gro_max_size 16000
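
As a hedged illustration only (a userspace model; fake_dev and
gro_can_merge are made-up names, not kernel symbols), the new cap
simply bounds how large an aggregated GRO packet is allowed to grow:

    /* Illustrative userspace model, not kernel code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct fake_dev {
        size_t gro_max_size;   /* stands in for dev->gro_max_size */
    };

    /* Merging stops once the aggregate would exceed the per-device cap. */
    static bool gro_can_merge(const struct fake_dev *dev,
                              size_t held_len, size_t incoming_len)
    {
        return held_len + incoming_len <= dev->gro_max_size;
    }

    int main(void)
    {
        /* ip link set dev eth0 gro_max_size 16000 */
        struct fake_dev eth0 = { .gro_max_size = 16000 };

        printf("merge 9000+1500:  %s\n",
               gro_can_merge(&eth0, 9000, 1500) ? "yes" : "no");
        printf("merge 15000+1500: %s\n",
               gro_can_merge(&eth0, 15000, 1500) ? "yes" : "no");
        return 0;
    }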

Suggested-by: Eric Dumazet <email address hidden>
Signed-off-by: Coco Li <email address hidden>
Signed-off-by: Eric Dumazet <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(backported from commit eac1b93c14d645ef147b049ace0d5230df755548)
[john-cabaj: context changes]
Signed-off-by: John Cabaj <email address hidden>

3117ab1... by Coco Li <email address hidden>

IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver

BugLink: https://bugs.launchpad.net/bugs/2040522

IPv6/TCP and GRO stacks can build big TCP packets with an added
temporary Hop By Hop header.

If GSO is not involved, then the temporary header needs to be removed in
the driver. This patch provides a generic helper for drivers that need
to modify their headers in place.

Tested:
Compiled and ran with ethtool -K eth1 tso off
Could send Big TCP packets
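
A hedged userspace sketch of the in-place removal this helper performs
(a standalone model, not the in-kernel helper; strip_hbh_jumbo and the
hard-coded 40-byte IPv6 header handling are illustrative):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NEXTHDR_HOP 0

    struct hop_jumbo_hdr {              /* 8 bytes, see later commit */
        uint8_t  nexthdr;
        uint8_t  hdrlen;
        uint8_t  tlv_type;              /* jumbo option type, 0xC2 */
        uint8_t  tlv_len;               /* 4 */
        uint32_t jumbo_payload_len;     /* big endian */
    };

    /* buf points at the IPv6 header; len is the full packet length.
     * Returns the new length after removing a leading HBH jumbo header. */
    static size_t strip_hbh_jumbo(uint8_t *buf, size_t len)
    {
        const size_t ipv6_hlen = 40;
        struct hop_jumbo_hdr hbh;

        if (len < ipv6_hlen + sizeof(hbh) || buf[6] != NEXTHDR_HOP)
            return len;                 /* no HBH header: nothing to do */

        memcpy(&hbh, buf + ipv6_hlen, sizeof(hbh));
        if (hbh.tlv_type != 0xC2 || hbh.tlv_len != 4)
            return len;                 /* not the jumbo option */

        /* Restore the real next header and a regular 16-bit payload length. */
        buf[6] = hbh.nexthdr;
        uint16_t payload_len = htons((uint16_t)(len - ipv6_hlen - sizeof(hbh)));
        memcpy(buf + 4, &payload_len, sizeof(payload_len));

        /* Close the gap left by the 8-byte HBH header. */
        memmove(buf + ipv6_hlen, buf + ipv6_hlen + sizeof(hbh),
                len - ipv6_hlen - sizeof(hbh));
        return len - sizeof(hbh);
    }

    int main(void)
    {
        uint8_t pkt[52] = {0};          /* 40 + 8 HBH + 4 bytes payload */
        pkt[0] = 0x60;                  /* version 6 */
        pkt[6] = NEXTHDR_HOP;
        struct hop_jumbo_hdr hbh = {
            .nexthdr = 6,               /* TCP */
            .tlv_type = 0xC2,
            .tlv_len = 4,
            .jumbo_payload_len = htonl(12),
        };
        memcpy(pkt + 40, &hbh, sizeof(hbh));

        size_t new_len = strip_hbh_jumbo(pkt, sizeof(pkt));
        printf("len %zu -> %zu, nexthdr %u\n", sizeof(pkt), new_len, pkt[6]);
        return 0;
    }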

Signed-off-by: Coco Li <email address hidden>
Link: https://<email address hidden>
Signed-off-by: Jakub Kicinski <email address hidden>
(cherry picked from commit 89300468e2b2ec216c7827ba04ac45c129794403)
Signed-off-by: John Cabaj <email address hidden>

46295af... by Eric Dumazet <email address hidden>

ipv6/gso: remove temporary HBH/jumbo header

BugLink: https://bugs.launchpad.net/bugs/2040522

The IPv6 TCP and GRO stacks will soon be able to build big TCP packets,
with an added temporary Hop By Hop header.

If GSO is involved for these large packets, we need to remove
the temporary HBH header before segmentation happens.

v2: perform HBH removal from ipv6_gso_segment() instead of
    skb_segment() (Alexander feedback)

Signed-off-by: Eric Dumazet <email address hidden>
Acked-by: Alexander Duyck <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 09f3d1a3a52c696208008618a67e2c7c3fb16d41)
Signed-off-by: John Cabaj <email address hidden>

3fa930d... by Eric Dumazet <email address hidden>

ipv6: add struct hop_jumbo_hdr definition

BugLink: https://bugs.launchpad.net/bugs/2040522

Following patches will need to add and remove local IPv6 jumbogram
options to enable BIG TCP.
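
A hedged userspace rendering of the layout this commit introduces
(field names mirror the description; standard fixed-width types stand
in for kernel types):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct hop_jumbo_hdr {
        uint8_t  nexthdr;           /* protocol that follows the HBH header */
        uint8_t  hdrlen;            /* HBH length in 8-byte units minus 1 (0 here) */
        uint8_t  tlv_type;          /* jumbo option type, 0xC2 */
        uint8_t  tlv_len;           /* option data length: 4 */
        uint32_t jumbo_payload_len; /* big-endian length, used when > 65535 */
    };

    /* The whole option fits in a single 8-byte HBH extension header. */
    static_assert(sizeof(struct hop_jumbo_hdr) == 8,
                  "jumbo option is one 8-byte HBH extension header");

    int main(void)
    {
        printf("hop_jumbo_hdr is %zu bytes\n", sizeof(struct hop_jumbo_hdr));
        return 0;
    }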

Signed-off-by: Eric Dumazet <email address hidden>
Acked-by: Alexander Duyck <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 7c96d8ec96bb71aac54c9f872aaa65d7411ab864)
Signed-off-by: John Cabaj <email address hidden>

80ed2c4... by Eric Dumazet <email address hidden>

net: annotate accesses to dev->gso_max_segs

BugLink: https://bugs.launchpad.net/bugs/2040522

dev->gso_max_segs is written under RTNL protection, or when the device is
not yet visible, but is read locklessly.

Add netif_set_gso_max_segs() helper.

Add the READ_ONCE()/WRITE_ONCE() pairs, and use netif_set_gso_max_segs()
where we can to better document what is going on.
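
READ_ONCE()/WRITE_ONCE() have no userspace equivalent, so as a hedged
analogy only, C11 relaxed atomics play the same role below; fake_dev and
fake_set_gso_max_segs are made-up stand-ins for dev->gso_max_segs and
netif_set_gso_max_segs():

    #include <stdatomic.h>
    #include <stdio.h>

    struct fake_dev {
        _Atomic unsigned int gso_max_segs;   /* stands in for dev->gso_max_segs */
    };

    /* Analogue of the netif_set_gso_max_segs() helper: the one sanctioned writer. */
    static void fake_set_gso_max_segs(struct fake_dev *dev, unsigned int segs)
    {
        atomic_store_explicit(&dev->gso_max_segs, segs, memory_order_relaxed);
    }

    /* Lockless reader, playing the role of a READ_ONCE() in the data path. */
    static unsigned int fake_get_gso_max_segs(struct fake_dev *dev)
    {
        return atomic_load_explicit(&dev->gso_max_segs, memory_order_relaxed);
    }

    int main(void)
    {
        struct fake_dev eth0 = { .gso_max_segs = 0 };

        fake_set_gso_max_segs(&eth0, 65535);
        printf("gso_max_segs = %u\n", fake_get_gso_max_segs(&eth0));
        return 0;
    }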

Signed-off-by: Eric Dumazet <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(backported from commit 6d872df3e3b91532b142de9044e5b4984017a55f)
[john-cabaj: context changes]
Signed-off-by: John Cabaj <email address hidden>

e5ae751... by Jakub Kicinski <email address hidden>

net: don't allow user space to lift the device limits

BugLink: https://bugs.launchpad.net/bugs/2040522

Up until commit 46e6b992c250 ("rtnetlink: allow GSO maximums to
be set on device creation") the gso_max_segs and gso_max_size
of a device were not controlled from user space.

The quoted commit added the ability to control them because of
the following setup:

 netns A | netns B
     veth<->veth eth0

If eth0 has TSO limitations and the user wants to efficiently forward
traffic between eth0 and the veths, they should copy the TSO
limitations of eth0 onto the veths. This would happen automatically
for macvlans or ipvlan, but veth users are not so lucky (given the
loose coupling).

Unfortunately the commit in question allowed users to also override
the limits on real HW devices.

It may be useful to control the max GSO size and someone may be using
that ability (not that I know of any user), so create a separate set
of knobs to reliably record the TSO limitations. Validate the user
requests.
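
A hedged sketch of the validation idea (an illustrative userspace model;
tso_max_size here stands in for the driver-recorded limit the commit
describes, and fake_set_gso_max_size is a made-up name): user-tunable
GSO knobs may lower, but never lift, the recorded device limit.

    #include <stdbool.h>
    #include <stdio.h>

    struct fake_dev {
        unsigned int tso_max_size;   /* driver-recorded hardware limit */
        unsigned int gso_max_size;   /* user-tunable, capped by the above */
    };

    static bool fake_set_gso_max_size(struct fake_dev *dev, unsigned int size)
    {
        if (size > dev->tso_max_size)
            return false;            /* reject attempts to lift the device limit */
        dev->gso_max_size = size;
        return true;
    }

    int main(void)
    {
        struct fake_dev eth0 = { .tso_max_size = 65536, .gso_max_size = 65536 };

        printf("set 16384:  %s\n",
               fake_set_gso_max_size(&eth0, 16384) ? "ok" : "rejected");
        printf("set 262144: %s\n",
               fake_set_gso_max_size(&eth0, 262144) ? "ok" : "rejected");
        return 0;
    }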

Signed-off-by: Jakub Kicinski <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(backported from commit 14d7b8122fd591693a2388b98563707ba72c6780)
[john-cabaj: context changes]
Signed-off-by: John Cabaj <email address hidden>

e02711c... by Rushil Gupta <email address hidden>

gve: update gve.rst

BugLink: https://bugs.launchpad.net/bugs/2040522

Add a note about QPL and RDA mode

Signed-off-by: Rushil Gupta <email address hidden>
Reviewed-by: Willem de Bruijn <email address hidden>
Signed-off-by: Praveen Kaligineedi <email address hidden>
Signed-off-by: Bailey Forrest <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 5a3f8d1231073fc5f0b6f38ab8337d424ba0cfe4)
Signed-off-by: John Cabaj <email address hidden>

5c62d8b... by Eric Dumazet <email address hidden>

gve: fix frag_list chaining

BugLink: https://bugs.launchpad.net/bugs/2040522

gve_rx_append_frags() is able to build skbs chained with frag_list,
like the GRO engine does.

The problem is that shinfo->frag_list should only be used
for the head of the chain.

All other links should use skb->next pointer.

Otherwise, built skbs are not valid and can cause crashes.

Equivalent code in GRO (skb_gro_receive()) is:

    if (NAPI_GRO_CB(p)->last == p)
        skb_shinfo(p)->frag_list = skb;
    else
        NAPI_GRO_CB(p)->last->next = skb;
    NAPI_GRO_CB(p)->last = skb;
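
The same pattern, as a hedged standalone model with toy stand-ins for
sk_buff (toy_skb and append_frag are illustrative names, not driver or
GRO symbols):

    #include <stdio.h>

    /* Toy stand-in for sk_buff: only the fields the chaining logic touches. */
    struct toy_skb {
        const char     *name;
        struct toy_skb *next;        /* links fragments after the first */
        struct toy_skb *frag_list;   /* only valid on the head skb */
    };

    /* Append 'skb' to the chain hanging off 'head': the first fragment goes
     * on head->frag_list, all later ones on ->next of the previous fragment. */
    static void append_frag(struct toy_skb *head, struct toy_skb **last,
                            struct toy_skb *skb)
    {
        if (*last == head)
            head->frag_list = skb;   /* first fragment of the chain */
        else
            (*last)->next = skb;     /* subsequent fragments use ->next */
        *last = skb;
    }

    int main(void)
    {
        struct toy_skb head = { .name = "head" };
        struct toy_skb a = { .name = "a" }, b = { .name = "b" };
        struct toy_skb *last = &head;

        append_frag(&head, &last, &a);
        append_frag(&head, &last, &b);

        for (struct toy_skb *p = head.frag_list; p; p = p->next)
            printf("frag: %s\n", p->name);
        return 0;
    }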

Fixes: 9b8dd5e5ea48 ("gve: DQO: Add RX path")
Signed-off-by: Eric Dumazet <email address hidden>
Cc: Bailey Forrest <email address hidden>
Cc: Willem de Bruijn <email address hidden>
Cc: Catherine Sullivan <email address hidden>
Reviewed-by: David Ahern <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit 817c7cd2043a83a3d8147f40eea1505ac7300b62)
Signed-off-by: John Cabaj <email address hidden>

8fc29f7... by Rushil Gupta <email address hidden>

gve: RX path for DQO-QPL

BugLink: https://bugs.launchpad.net/bugs/2040522

The RX path allocates the QPL page pool at queue creation, and
tries to reuse these pages through page recycling. This patch
ensures that on refill no non-QPL pages are posted to the device.

When the driver is running low on free buffers, an on-demand
allocation step kicks in that allocates a non-QPL page for the
skb, freeing up the QPL page in use.

gve_try_recycle_buf was moved to gve_rx_append_frags so that the driver
does not attempt to mark the buffer as used if a non-QPL page was
allocated on demand.
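
A hedged model of that fallback decision (illustrative names and
watermark only, not driver symbols): only QPL buffers are posted to the
device, and the skb payload switches to an on-demand non-QPL page when
the pool runs low so the QPL page can be recycled.

    #include <stdbool.h>
    #include <stdio.h>

    struct rx_qpl_pool {
        int free_bufs;               /* QPL buffers still available for posting */
        int low_watermark;           /* below this we are "running low" */
    };

    /* When QPL buffers are plentiful the page stays with the skb and is
     * recycled later; when running low, copy into an on-demand non-QPL page
     * so the QPL page can be recycled immediately. */
    static bool use_ondemand_page(const struct rx_qpl_pool *pool)
    {
        return pool->free_bufs <= pool->low_watermark;
    }

    int main(void)
    {
        struct rx_qpl_pool pool = { .free_bufs = 2, .low_watermark = 4 };

        printf("running low: %s\n", use_ondemand_page(&pool)
               ? "yes, allocate non-QPL page for skb"
               : "no, hand QPL page to skb and recycle later");
        return 0;
    }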

Signed-off-by: Rushil Gupta <email address hidden>
Reviewed-by: Willem de Bruijn <email address hidden>
Signed-off-by: Praveen Kaligineedi <email address hidden>
Signed-off-by: Bailey Forrest <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit e7075ab4fb6b39730dfbfbfa3a5505d678f01d2c)
Signed-off-by: John Cabaj <email address hidden>

222abc2... by Rushil Gupta <email address hidden>

gve: Tx path for DQO-QPL

BugLink: https://bugs.launchpad.net/bugs/2040522

Each QPL page is divided into GVE_TX_BUFS_PER_PAGE_DQO buffers.
When a packet needs to be transmitted, we break the packet into chunks
of at most GVE_TX_BUF_SIZE_DQO bytes and transmit each chunk using a TX
descriptor.
We allocate the TX buffers from the free list in dqo_tx.
We store these TX buffer indices in an array in the pending_packet
structure.

The TX buffers are returned to the free list in dqo_compl after
receiving a packet completion or when removing packets from the
miss-completions list.
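
A hedged sketch of the chunking arithmetic (the 2048-byte buffer size
and 4096-byte page below are illustrative values, not quoted from the
driver):

    #include <stdio.h>

    #define TX_BUF_SIZE   2048u                      /* plays GVE_TX_BUF_SIZE_DQO */
    #define PAGE_SIZE_    4096u
    #define BUFS_PER_PAGE (PAGE_SIZE_ / TX_BUF_SIZE) /* plays GVE_TX_BUFS_PER_PAGE_DQO */

    /* One TX descriptor is needed per fixed-size chunk of the packet. */
    static unsigned int tx_bufs_needed(unsigned int pkt_len)
    {
        return (pkt_len + TX_BUF_SIZE - 1) / TX_BUF_SIZE;   /* DIV_ROUND_UP */
    }

    int main(void)
    {
        const unsigned int lens[] = { 64, 1500, 9000 };

        printf("%u buffers per QPL page\n", BUFS_PER_PAGE);
        for (unsigned int i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
            printf("%u byte packet -> %u TX buffers/descriptors\n",
                   lens[i], tx_bufs_needed(lens[i]));
        return 0;
    }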

Signed-off-by: Rushil Gupta <email address hidden>
Reviewed-by: Willem de Bruijn <email address hidden>
Signed-off-by: Praveen Kaligineedi <email address hidden>
Signed-off-by: Bailey Forrest <email address hidden>
Signed-off-by: David S. Miller <email address hidden>
(cherry picked from commit a6fb8d5a8b6925f1e635818d3dd2d89531d4a058)
Signed-off-by: John Cabaj <email address hidden>