memcg_failcnt in mm from ubuntu_ltp failed on B 4.15

Bug #1845919 reported by Po-Hsu Lin
This bug affects 2 people
Affects              Status      Importance  Assigned to  Milestone
ubuntu-kernel-tests  New         Undecided   Unassigned
linux (Ubuntu)       Incomplete  Undecided   Unassigned
linux-kvm (Ubuntu)   Confirmed   Undecided   Unassigned

Bug Description

Issue found on node amaura, with Bionic 4.15 AMD64:
 tag=memcg_test_3 stime=1569410880 dur=0 exit=exited stat=0 core=no cu=9 cs=17
 startup='Wed Sep 25 11:28:01 2019'
 memcg_failcnt 1 TINFO: Starting test 1
 sh: echo: I/O error
 memcg_failcnt 1 TINFO: set /dev/memcg/memory.use_hierarchy to 0 failed
 memcg_failcnt 1 TINFO: Running memcg_process --mmap-anon -s 8192
 memcg_failcnt 1 TBROK: timeouted on memory.usage_in_bytes
 tag=memcg_failcnt stime=1569410881 dur=10 exit=exited stat=2 core=no cu=9 cs=2

This is probably a test case issue; it needs to be investigated.
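For context, the TBROK comes from a poll-and-timeout pattern: the test repeatedly reads memory.usage_in_bytes and gives up when the value never reaches what the child process allocated. A minimal sketch of that pattern, with a hypothetical helper name and a fake reader standing in for the cgroup file (not the actual LTP shell code):

```python
import time

def poll_until(read_value, expected, timeout_s=5.0, interval_s=0.1,
               sleep=time.sleep, clock=time.monotonic):
    """Poll read_value() until it returns `expected`; False on timeout.

    Mirrors the pattern behind "TBROK: timeouted on memory.usage_in_bytes":
    the cgroup file is re-read in a loop, and the test aborts if the charge
    never reaches the size the child allocated.
    """
    deadline = clock() + timeout_s
    while True:
        if read_value() == expected:
            return True
        if clock() >= deadline:
            return False
        sleep(interval_s)

# Fake reader standing in for /dev/memcg/<group>/memory.usage_in_bytes:
readings = iter([0, 0, 8192])
reached = poll_until(lambda: next(readings, 8192), 8192,
                     timeout_s=1.0, interval_s=0.0, sleep=lambda s: None)
```

With a real reader that never reports the expected charge, the helper returns False after timeout_s, which is exactly the failure mode the logs show.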

Po-Hsu Lin (cypressyew)
tags: added: 4.15 amd64 bionic sru-20190902 ubuntu-ltp
Revision history for this message
Ubuntu Kernel Bot (ubuntu-kernel-bot) wrote : Missing required logs.

This bug is missing log files that will aid in diagnosing the problem. While running an Ubuntu kernel (not a mainline or third-party kernel), please enter the following command in a terminal window:

apport-collect 1845919

and then change the status of the bug to 'Confirmed'.

If, due to the nature of the issue you have encountered, you are unable to run this command, please add a comment stating that fact and change the bug status to 'Confirmed'.

This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Changed in linux (Ubuntu):
status: New → Incomplete
Po-Hsu Lin (cypressyew) wrote :

Failed on Disco KVM with

startup='Fri Nov 15 03:59:24 2019'
 memcg_failcnt 1 TINFO: Starting test 1
 /opt/ltp/testcases/bin/memcg_failcnt.sh: 522: echo: echo: I/O error
 memcg_failcnt 1 TINFO: set /dev/memcg/memory.use_hierarchy to 0 failed
 memcg_failcnt 1 TINFO: Running memcg_process --mmap-anon -s 8192
 memcg_failcnt 1 TPASS: memory.failcnt is 10, > 0 as expected
 memcg_failcnt 2 TINFO: Starting test 2
 /opt/ltp/testcases/bin/memcg_failcnt.sh: 522: echo: echo: I/O error
 memcg_failcnt 2 TINFO: set /dev/memcg/memory.use_hierarchy to 0 failed
 memcg_failcnt 2 TINFO: Running memcg_process --mmap-file -s 8192
 memcg_failcnt 2 TBROK: timeouted on memory.usage_in_bytes
 tag=memcg_failcnt stime=1573790364 dur=11 exit=exited stat=2 core=no cu=28 cs=19

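The recurring "echo: I/O error" means the write to memory.use_hierarchy itself is failing; on cgroup v1 this knob cannot be cleared once the group has children or the parent has hierarchy enabled, so on affected systems this setup step is expected to fail. A hedged sketch of surfacing such a rejected write as a message rather than a raw shell error (generic helper and assumed path, not LTP code):

```python
def write_knob(path, value):
    """Write a value to a cgroup control file; return an error string on
    failure instead of raising, mirroring the test's
    'set /dev/memcg/memory.use_hierarchy to 0 failed' message."""
    try:
        with open(path, "w") as f:
            f.write(str(value))
        return None
    except OSError as e:
        return e.strerror or str(e)

# On a box without the cgroup v1 mount the open itself fails, which already
# exercises the error path:
err = write_knob("/nonexistent-memcg/memory.use_hierarchy", 0)
```

On a system with /dev/memcg mounted, a kernel-rejected write would land in the same OSError branch.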
tags: added: sru-20191111
tags: added: sru-20210315
Kelsey Steele (kelsey-steele) wrote :

found on bionic s390x 4.15.0-141.145 host s2lp3

03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 1 TINFO: timeout per run is 0h 5m 0s
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 1 TINFO: set /dev/memcg/memory.use_hierarchy to 0 failed
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 1 TINFO: Setting shmmax
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 1 TINFO: Running memcg_process --mmap-anon -s 8192
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 1 TPASS: memory.failcnt is 9, > 0 as expected
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 2 TINFO: Running memcg_process --mmap-file -s 8192
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 2 TBROK: timed out on memory.usage_in_bytes
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 2 TINFO: AppArmor enabled, this may affect test results
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 2 TINFO: it can be disabled with TST_DISABLE_APPARMOR=1 (requires super/root)
03/25 19:08:57 DEBUG| utils:0153| [stdout] memcg_failcnt 2 TINFO: loaded AppArmor profiles: none
03/25 19:08:57 DEBUG| utils:0153| [stdout]
03/25 19:08:57 DEBUG| utils:0153| [stdout] Summary:
03/25 19:08:57 DEBUG| utils:0153| [stdout] passed 1
03/25 19:08:57 DEBUG| utils:0153| [stdout] failed 0
03/25 19:08:57 DEBUG| utils:0153| [stdout] skipped 0
03/25 19:08:57 DEBUG| utils:0153| [stdout] warnings 0
03/25 19:08:57 DEBUG| utils:0153| [stdout] tag=memcg_failcnt stime=1616696713 dur=10 exit=exited stat=2 core=no cu=4 cs=12

tags: added: fips sru-20210412
tags: added: s390x
tags: added: hirsute sru-20210531
Kleber Sacilotto de Souza (kleber-souza) wrote :

The same issue seems to be happening with memcg_move_charge_at_immigrate on hirsute/linux 5.11.

tag=memcg_max_usage_in_bytes stime=1626909193 dur=5 exit=exited stat=0 core

startup='Wed Jul 21 23:13:18 2021'
memcg_move_charge_at_immigrate_test 1 TINFO: timeout per run is 0h 5m 0s
[...]
memcg_move_charge_at_immigrate_test 3 TINFO: Test move file
memcg_move_charge_at_immigrate_test 3 TINFO: Running memcg_process --mmap-anon --shm --mmap-file -s 135168
memcg_move_charge_at_immigrate_test 3 TINFO: Warming up pid: 762538
memcg_move_charge_at_immigrate_test 3 TINFO: Process is still here after warm up: 762538
memcg_move_charge_at_immigrate_test 3 TBROK: timed out on memory.usage_in_bytes
memcg_move_charge_at_immigrate_test 3 TINFO: AppArmor enabled, this may affect test results
memcg_move_charge_at_immigrate_test 3 TINFO: it can be disabled with TST_DISABLE_APPARMOR=1 (requires super/root)
memcg_move_charge_at_immigrate_test 3 TINFO: loaded AppArmor profiles: none

Kleber Sacilotto de Souza (kleber-souza) wrote :

Found with focal/linux 5.4.0-85.95 on node kernel04 s390x.

tags: added: 5.4
Po-Hsu Lin (cypressyew) wrote :

B-KVM 4.15.0-1102.104

tags: added: sru-20211018
Krzysztof Kozlowski (krzk) wrote :

Also: 2022.01.31/bionic/linux-ibm-gt-5.4/5.4.0-1009.11

Launchpad Janitor (janitor) wrote :

Status changed to 'Confirmed' because the bug affects multiple users.

Changed in linux-kvm (Ubuntu):
status: New → Confirmed
tags: added: sru-20220131
Po-Hsu Lin (cypressyew) wrote :

On B-4.15.0-192.203 in cycle 20220808, this issue was only observed on s390x zVM.

tags: added: sru-20220808