lp:~lttng/lttng-modules/trunk

Created by Ubuntu LTTng on 2011-05-12 and last modified on 2019-05-22
Get this branch:
bzr branch lp:~lttng/lttng-modules/trunk

Branch information

Owner:
Ubuntu LTTng
Project:
lttng-modules
Status:
Development

Import details

Import Status: Reviewed

This branch is an import of the HEAD branch of the Git repository at git://git.lttng.org/lttng-modules.git.

The next import is scheduled to run in 3 hours.

Last successful import was 2 hours ago.

Import started 2 hours ago on izar and finished 2 hours ago taking 15 seconds — see the log
Import started 8 hours ago on alnitak and finished 8 hours ago taking 15 seconds — see the log
Import started 14 hours ago on izar and finished 14 hours ago taking 15 seconds — see the log
Import started 20 hours ago on alnitak and finished 20 hours ago taking 15 seconds — see the log
Import started on 2019-05-23 on izar and finished on 2019-05-23 taking 15 seconds — see the log
Import started on 2019-05-22 on alnitak and finished on 2019-05-22 taking 20 seconds — see the log
Import started on 2019-05-22 on alnitak and finished on 2019-05-22 taking 20 seconds — see the log
Import started on 2019-05-22 on alnitak and finished on 2019-05-22 taking 15 seconds — see the log
Import started on 2019-05-22 on alnitak and finished on 2019-05-22 taking 20 seconds — see the log
Import started on 2019-05-21 on izar and finished on 2019-05-21 taking 15 seconds — see the log

Recent revisions

1224. By Mathieu Desnoyers on 2019-05-22

Introduce callstack stackwalk implementation header

Introduce a new implementation header for the stackwalk-based API, added
in Linux 5.2 and gradually integrated within each architecture.

Signed-off-by: Mathieu Desnoyers <email address hidden>

1223. By Mathieu Desnoyers on 2019-05-22

Prepare callstack common code for stackwalk

Prepare the callstack common code for stackwalk implementation,
moving more legacy code to the legacy implementation header.

Signed-off-by: Mathieu Desnoyers <email address hidden>

1222. By Mathieu Desnoyers on 2019-05-22

Introduce callstack legacy implementation header

Split the callstack code: keep boilerplate code within the
C implementation file, and move the parts which depend on the
"legacy" (pre-stackwalk) stacktrace kernel API to a separate
implementation header.

This is a preparation step to introduce a new implementation
header for the stackwalk API, added in Linux 5.2 and gradually
integrated within each architecture.

Signed-off-by: Mathieu Desnoyers <email address hidden>
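
The two revisions above split the callstack code between a legacy and a
stackwalk implementation header. As a rough illustration only (a minimal
sketch, not the actual lttng-modules headers; MAX_STACK_DEPTH and
capture_callstack() are hypothetical names), the two kernel APIs involved
look like this:

  #include <linux/version.h>
  #include <linux/stacktrace.h>

  #define MAX_STACK_DEPTH 128

  static unsigned long entries[MAX_STACK_DEPTH];

  static unsigned int capture_callstack(void)
  {
  #if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,2,0))
          /* Stackwalk-based API (v5.2+): returns the number of entries stored. */
          return stack_trace_save(entries, MAX_STACK_DEPTH, 0);
  #else
          /* Legacy stacktrace API: fills a caller-provided struct stack_trace. */
          struct stack_trace trace = {
                  .nr_entries = 0,
                  .max_entries = MAX_STACK_DEPTH,
                  .entries = entries,
                  .skip = 0,
          };

          save_stack_trace(&trace);
          return trace.nr_entries;
  #endif
  }

A real tracer would capture into per-CPU or per-context buffers rather than a
single static array; the sketch only contrasts the two kernel interfaces.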

1221. By Michael Jeanson <email address hidden> on 2019-05-22

fix: random: only read from /dev/random after its pool has received 128 bits (v5.2)

See upstream commit:

  commit eb9d1bf079bb438d1a066d72337092935fc770f6
  Author: Theodore Ts'o <email address hidden>
  Date: Wed Feb 20 16:06:38 2019 -0500

    random: only read from /dev/random after its pool has received 128 bits

    Immediately after boot, we allow reads from /dev/random before its
    entropy pool has been fully initialized. Fix this so that reads are not
    allowed until the blocking pool has received 128 bits.

    We do this by repurposing the initialized flag in the entropy pool
    struct, and use the initialized flag in the blocking pool to indicate
    whether it is safe to pull from the blocking pool.

    To do this, we needed to rework when we decide to push entropy from the
    input pool to the blocking pool, since the initialized flag for the
    input pool was used for this purpose. To simplify things, we no
    longer use the initialized flag for that purpose, nor do we use the
    entropy_total field any more.

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

1220. By Michael Jeanson <email address hidden> on 2019-05-22

fix: mm: move recent_rotated pages calculation to shrink_inactive_list() (v5.2)

See upstream commit:

  commit 886cf1901db962cee5f8b82b9b260079a5e8a4eb
  Author: Kirill Tkhai <email address hidden>
  Date: Mon May 13 17:16:51 2019 -0700

    mm: move recent_rotated pages calculation to shrink_inactive_list()

    Patch series "mm: Generalize putback functions"

    putback_inactive_pages() and move_active_pages_to_lru() are almost
    similar, so this patchset merges them in a single function.

    This patch (of 4):

    The patch moves the calculation from putback_inactive_pages() to
    shrink_inactive_list(). This makes putback_inactive_pages() look more
    similar to move_active_pages_to_lru().

    To do that, we account activated pages in reclaim_stat::nr_activate.
    Since a page may change its LRU type from anon to file cache inside
    shrink_page_list() (see ClearPageSwapBacked()), we have to account pages
    for both types. So, nr_activate becomes an array.

    Previously we used nr_activate to account PGACTIVATE events, but now we
    account them in the pgactivate variable (since it counts the number of
    pages in general, not the sum of hpage_nr_pages).

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

1219. By Michael Jeanson <email address hidden> on 2019-05-22

fix: mm/vmscan: simplify trace_reclaim_flags and trace_shrink_flags (v5.2)

See upstream commit:

  commit 60b62ff7cc4217ac3de76535fa4c1510a798dbcb
  Author: Yafang Shao <email address hidden>
  Date: Mon May 13 17:23:08 2019 -0700

    mm/vmscan: simplify trace_reclaim_flags and trace_shrink_flags

    trace_reclaim_flags and trace_shrink_flags are almost the same.
    We can simplify them to avoid redundant code.

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

1218. By Michael Jeanson <email address hidden> on 2019-05-22

fix: mm/vmscan: drop may_writepage and classzone_idx from direct reclaim begin template (v5.2)

See upstream commit:

  commit 3481c37ffa1de58ef140d0fe9eabf56305e74666
  Author: Yafang Shao <email address hidden>
  Date: Mon May 13 17:19:14 2019 -0700

    mm/vmscan: drop may_writepage and classzone_idx from direct reclaim begin template

    There are three tracepoints using this template, which are
    mm_vmscan_direct_reclaim_begin,
    mm_vmscan_memcg_reclaim_begin,
    mm_vmscan_memcg_softlimit_reclaim_begin.

    Regarding mm_vmscan_direct_reclaim_begin,
    sc.may_writepage is !laptop_mode, which is a static setting, and
    reclaim_idx is derived from gfp_mask, which is already shown in this
    tracepoint.

    Regarding mm_vmscan_memcg_reclaim_begin,
    may_writepage is !laptop_mode too, and reclaim_idx is (MAX_NR_ZONES-1),
    which are both static values.

    mm_vmscan_memcg_softlimit_reclaim_begin is the same as
    mm_vmscan_memcg_reclaim_begin.

    So we can drop them all.

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>

1217. By Michael Jeanson <email address hidden> on 2019-05-22

fix: timer/trace: Improve timer tracing (v5.2)

See upstream commit:

  commit f28d3d5346e97e60c81f933ac89ccf015430e5cf
  Author: Anna-Maria Gleixner <email address hidden>
  Date: Thu Mar 21 13:09:21 2019 +0100

    timer/trace: Improve timer tracing

    Timers are added to the timer wheel off by one. This is required to
    prevent early timer expiry in case a timer is queued directly before
    jiffies is incremented.

    When reading a timer trace and relying only on the expiry time of the
    timer in the timer_start trace point and on the "now" timestamp in the
    timer_expire_entry trace point, it seems that the timer fires late. With
    the current timer_expire_entry trace point information only now=jiffies is
    printed but not the value of base->clk. This makes it impossible to draw a
    conclusion about the index of base->clk and makes it impossible to examine
    timer problems without additional trace points.

    Therefore add the base->clk value to the timer_expire_entry trace
    point, to be able to calculate the index the timer base is located at
    during collecting expired timers.

Signed-off-by: Michael Jeanson <email address hidden>
Signed-off-by: Mathieu Desnoyers <email address hidden>
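
These "(v5.2)" fixes typically adjust the lttng-modules instrumentation to
follow upstream tracepoint prototype changes behind kernel version guards. A
rough sketch of the pattern for the change above (the probe and argument
names here are illustrative, not the actual lttng-modules probe code):

  #include <linux/version.h>
  #include <linux/timer.h>

  #if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,2,0))
  /* Since v5.2, timer_expire_entry also exposes the timer base clock. */
  void probe_timer_expire_entry(struct timer_list *timer, unsigned long baseclk);
  #else
  void probe_timer_expire_entry(struct timer_list *timer);
  #endif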

1216. By Mathieu Desnoyers on 2019-05-17

Cleanup: bitfields: streamline use of underscores

Do not prefix macro arguments with underscores. Use one leading
underscore as prefix for local variables defined within macros.

Signed-off-by: Mathieu Desnoyers <email address hidden>
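
As a minimal sketch of the convention described above (a hypothetical macro,
not taken from the lttng-modules tree): macro arguments keep plain names,
while locals declared inside the macro body get one leading underscore so
they do not clash with the caller's identifiers:

  /* Hypothetical example: plain argument names, one leading underscore
   * for the local variable introduced by the macro itself. */
  #define swap_values(type, a, b)         \
          do {                            \
                  type _tmp = (a);        \
                  (a) = (b);              \
                  (b) = _tmp;             \
          } while (0)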

1215. By Mathieu Desnoyers on 2019-05-17

Silence compiler "always false comparison" warning

Compiling the bitfield test with gcc -Wextra generates this warning:

 ../../include/babeltrace/bitfield-internal.h:38:45: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
 #define _bt_is_signed_type(type) ((type) -1 < (type) 0)

This is the intent of the macro. Disable compiler warnings around use of
that macro.

Signed-off-by: Mathieu Desnoyers <email address hidden>
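
As a sketch of one way to silence such a warning locally (GCC diagnostic
pragmas; not necessarily the exact mechanism used in this commit), note that
the always-false comparison is kept on purpose: it is exactly how the macro
detects signedness.

  /* (type) -1 < (type) 0 is false for unsigned types: that is the point. */
  #define _bt_is_signed_type(type)        ((type) -1 < (type) 0)

  #pragma GCC diagnostic push
  #pragma GCC diagnostic ignored "-Wtype-limits"
  static const int uint_is_signed = _bt_is_signed_type(unsigned int);  /* 0 */
  #pragma GCC diagnostic pop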

Branch metadata

Branch format:
Branch format 7
Repository format:
Bazaar repository format 2a (needs bzr 1.16 or later)
This branch contains Public information.
Everyone can see this information.
