lp:~lttng/lttng-modules/trunk
- Get this branch:
- bzr branch lp:~lttng/lttng-modules/trunk
Import details
This branch is an import of the HEAD branch of the Git repository at git://git.lttng.org/lttng-modules.git.
Recent revisions
- 1752. By Michael Jeanson <email address hidden>

fix: tie compaction probe build to CONFIG_COMPACTION

The definition of 'struct compact_control' in 'mm/internal.h' depends on
CONFIG_COMPACTION being defined. Only build the compaction probe when
this configuration option is enabled.

Thanks to Bruce Ashfield <email address hidden> for reporting this
issue.

Change-Id: I81e77aa9c1bf10452c152d432fe5224df0db42c9
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

- 1751. By Mathieu Desnoyers

fix: net: skb: introduce kfree_skb_reason() (v5.15.58..v5.16)

See upstream commit:

commit c504e5c2f9648a1e5c2be01e8c3f59d394192bd3
Author: Menglong Dong <email address hidden>
Date: Sun Jan 9 14:36:26 2022 +0800

net: skb: introduce kfree_skb_reason()

Introduce the interface kfree_skb_reason(), which is able to pass
the reason why the skb is dropped to the 'kfree_skb' tracepoint.

Add the 'reason' field to 'trace_kfree_skb', so users can get more
detailed information about abnormal skbs with 'drop_monitor' or
eBPF.

All drop reasons are defined in the enum 'skb_drop_reason', and
they will be printed as strings in the 'kfree_skb' tracepoint in
the format 'reason: XXX'.

( Maybe the reasons should be defined in a uapi header file, so that
user space can use them? )

Signed-off-by: Mathieu Desnoyers <email address hidden>
Change-Id: Ib3c039207739dad10f097cf76474e0822e351273

- 1750. By Michael Jeanson <email address hidden>

fix: workqueue: Fix type of cpu in trace event (v5.19)

See upstream commit:

commit 873a400938b31a1e443c4d94b560b78300787540
Author: Wonhyuk Yang <email address hidden>
Date: Wed May 4 11:32:03 2022 +0900

workqueue: Fix type of cpu in trace event

The trace event "workqueue_queue_work" uses the unsigned int type for
req_cpu and cpu. This causes confusing cpu numbers in logs like the
following:

$ cat /sys/kernel/debug/tracing/trace
cat-317 [001] ...: workqueue_queue_work: ... req_cpu=8192 cpu=4294967295

So, change the unsigned type to a signed type in the trace event. After
applying this patch, the cpu number will be printed as -1 instead of
4294967295, as follows:

$ cat /sys/kernel/debug/tracing/trace
cat-1338 [002] ...: workqueue_queue_work: ... req_cpu=8192 cpu=-1

Change-Id: I478083c350b6ec314d87e9159dc5b342b96daed7
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

- 1749. By Michael Jeanson <email address hidden>

fix: fs: Remove flags parameter from aops->write_begin (v5.19)

See upstream commit:

commit 9d6b0cd7579844761ed68926eb3073bab1dca87b
Author: Matthew Wilcox (Oracle) <email address hidden>
Date: Tue Feb 22 14:31:43 2022 -0500

fs: Remove flags parameter from aops->write_begin

There are no more aop flags left, so remove the parameter.

Change-Id: I82725b93e13d749f52a631b2ac60df81a5e839f8
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

- 1748. By Michael Jeanson <email address hidden>

fix: mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked() (v5.19)

See upstream commit:

commit 10e0f7530205799e7e971aba699a7cb3a47456de
Author: Wonhyuk Yang <email address hidden>
Date: Thu May 19 14:08:54 2022 -0700

mm/page_alloc: fix tracepoint mm_page_alloc_zone_locked()

Currently, the trace point mm_page_alloc_zone_locked() doesn't show
correct information.

First, when alloc_flag has ALLOC_HARDER/ALLOC_CMA, a page can be
allocated from MIGRATE_HIGHATOMIC/MIGRATE_CMA. Nevertheless, the
tracepoint uses the requested migration type, not MIGRATE_HIGHATOMIC
and MIGRATE_CMA.

Second, after commit 44042b4498728 ("mm/page_alloc: allow high-order
pages to be stored on the per-cpu lists") the percpu-list can store
high-order pages. But the trace point determines whether it is a refill
of the percpu-list by comparing the requested order with 0.

To handle these problems, make mm_page_alloc_zone_locked() only be
called by __rmqueue_smallest with the correct migration type. With a
new argument called percpu_refill, it can show roughly whether it is a
refill of the percpu-list.

Change-Id: I2e4a57393757f12b9c5a4566c4d1102ee2474a09
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

- 1747. By Mathieu Desnoyers

Fix: event notifier: racy use of last subbuffer record

The lttng-modules event notifiers use the ring buffer internally. When
reading the payload of the last event in a sub-buffer with a multi-part
read (e.g. two read system calls), we should not "put" the sub-buffer
holding this data, else continuing to read the data in the following
read system call can observe corrupted data if it has been concurrently
overwritten by the producer.

Signed-off-by: Mathieu Desnoyers <email address hidden>
Change-Id: Idb051e50ee8a25958cfd63a9b143f4943ca2e01a

- 1746. By Mathieu Desnoyers

Fix: bytecode interpreter context_get_index() leaves byte order uninitialized

Observed Issue
==============

When using the event notification capture feature to capture a context
field, e.g. '$ctx.cpu_id', the captured value is often observed in
reverse byte order.

Cause
=====

Within the bytecode interpreter, context_get_index() leaves the "rev_bo"
field uninitialized in the top of stack.

This only affects the event notification capture bytecode because the
BYTECODE_OP_GET_SYMBOL bytecode instruction (as of lttng-tools 2.13)
is only generated for capture bytecode in lttng-tools. Therefore, only
capture bytecode targeting contexts is affected by this issue. The
reason why lttng-tools uses the "legacy" bytecode instruction to get
context (BYTECODE_OP_GET_CONTEXT_REF) for the filter bytecode is to
preserve backward compatibility of filtering when interacting with
applications linked against LTTng-UST 2.12.

Solution
========

Initialize the rev_bo field based on the context field type
reverse_byte_order field.

Known drawbacks
===============

None.

Signed-off-by: Mathieu Desnoyers <email address hidden>
Change-Id: I1483642b0b8f6bc28d5b68be170a04fb419fd9b3

- 1745. By Michael Jeanson <email address hidden>

fix: 'random' tracepoints removed in stable kernels

The upstream commit 14c174633f349cb41ea90c2c0aaddac157012f74 removing
the 'random' tracepoints is being backported to multiple stable kernel
branches. I don't see how that qualifies as a fix, but here we are.

Use the presence of 'include/trace/events/random.h' in the kernel source
tree instead of the rather tortuous version check to determine if we
need to build 'lttng-probe-random.ko'.

Change-Id: I8f5f2f4c9e09c61127c49c7949b22dd3fab0460d
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

- 1744. By He Zhe <email address hidden>

fix: random: remove unused tracepoints (v5.10, v5.15)

The following kernel commit has been backported to v5.10.119 and v5.15.44.

commit 14c174633f349cb41ea90c2c0aaddac157012f74
Author: Jason A. Donenfeld <email address hidden>
Date: Thu Feb 10 16:40:44 2022 +0100

random: remove unused tracepoints

These explicit tracepoints aren't really used and show signs of aging.
It's work to keep these up to date, and before I attempted to keep them
up to date, they weren't up to date, which indicates that they're not
really used. These days there are better ways of introspecting anyway.

Signed-off-by: He Zhe <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>
Change-Id: I0b7eb8aa78b5bd2039e20ae3e1da4c5eb9018789

- 1743. By Michael Jeanson <email address hidden>

fix: sched/tracing: Append prev_state to tp args instead (v5.18)

See upstream commit:

commit 9c2136be0878c88c53dea26943ce40bb03ad8d8d
Author: Delyan Kratunov <email address hidden>
Date: Wed May 11 18:28:36 2022 +0000

sched/tracing: Append prev_state to tp args instead

Commit fa2c3254d7cf (sched/tracing: Don't re-read p->state when
emitting sched_switch event, 2022-01-20) added a new prev_state
argument to the sched_switch tracepoint, before the prev task_struct
pointer.

This reordering of arguments broke BPF programs that use the raw
tracepoint (e.g. tp_btf programs). The type of the second argument has
changed and existing programs that assume a task_struct* argument
(e.g. for bpf_task_storage access) will now fail to verify.

If we instead append the new argument to the end, all existing programs
would continue to work and can conditionally extract the prev_state
argument on supported kernel versions.

Change-Id: Ife2ec88a8bea2743562590cbd357068d7773863f
Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)