use_pty:FALSE /usr/share/restraint/plugins/run_task_plugins bash ./runtest.sh
[ 03:11:42 ] Running: 'dnf install -y --skip-broken python3-pip python3-wheel python3-augeas augeas-libs python3-netifaces'
Last metadata expiration check: 2:25:14 ago on Wed 28 Sep 2022 12:46:28 AM EDT.
Package python3-pip-21.2.3-6.el9.noarch is already installed.
Package augeas-libs-1.13.0-2.el9.ppc64le is already installed.
Dependencies resolved.
================================================================================
 Package            Arch     Version         Repository     Size
================================================================================
Installing:
 python3-augeas     noarch   0.5.0-25.el9    BUILDROOT-C9S  27 k
 python3-netifaces  ppc64le  0.10.6-15.el9   BUILDROOT-C9S  24 k
 python3-wheel      noarch   1:0.36.2-7.el9  BUILDROOT-C9S  71 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 122 k
Installed size: 363 k
Downloading Packages:
(1/3): python3-augeas-0.5.0-25.el9.noarch.rpm    81 kB/s |  27 kB     00:00
(2/3): python3-netifaces-0.10.6-15.el9.ppc64le.  42 kB/s |  24 kB     00:00
(3/3): python3-wheel-0.36.2-7.el9.noarch.rpm    118 kB/s |  71 kB     00:00
--------------------------------------------------------------------------------
Total                                           201 kB/s | 122 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : python3-wheel-1:0.36.2-7.el9.noarch                    1/3
  Installing       : python3-netifaces-0.10.6-15.el9.ppc64le                2/3
  Installing       : python3-augeas-0.5.0-25.el9.noarch                     3/3
  Running scriptlet: python3-augeas-0.5.0-25.el9.noarch                     3/3
  Verifying        : python3-augeas-0.5.0-25.el9.noarch                     1/3
  Verifying        : python3-netifaces-0.10.6-15.el9.ppc64le                2/3
  Verifying        : python3-wheel-1:0.36.2-7.el9.noarch                    3/3

Installed:
  python3-augeas-0.5.0-25.el9.noarch
  python3-netifaces-0.10.6-15.el9.ppc64le
  python3-wheel-1:0.36.2-7.el9.noarch

Complete!
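The `[ hh:mm:ss ] Running: '...'` lines throughout this log come from the harness's command wrapper, which echoes a timestamped banner before executing each command. A minimal sketch of that pattern (`run_cmd` is an illustrative name, not the harness's actual function):

```shell
# Hypothetical reimplementation of the timestamped "Running: '...'" banner
# seen in this log; the real wrapper lives in restraint/stqe.
run_cmd() {
    echo "[ $(date +%H:%M:%S) ] Running: '$*'"
    "$@"
}

run_cmd echo hello
```

The wrapper prints the banner and then the command's own output, which is why each quoted command in the log is immediately followed by its stdout.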
[ 03:11:46 ] Running: 'dnf install -y gcc cmake openssl-devel python3-devel libffi-devel zlib-devel'
Last metadata expiration check: 2:25:19 ago on Wed 28 Sep 2022 12:46:28 AM EDT.
Package gcc-11.3.1-2.1.el9.ppc64le is already installed.
Package openssl-devel-1:3.0.1-41.el9.ppc64le is already installed.
Package python3-devel-3.9.14-1.el9.ppc64le is already installed.
Package libffi-devel-3.4.2-7.el9.ppc64le is already installed.
Package zlib-devel-1.2.11-34.el9.ppc64le is already installed.
Dependencies resolved.
================================================================================
 Package           Arch     Version         Repository     Size
================================================================================
Installing:
 cmake             ppc64le  3.20.2-7.el9    BUILDROOT-C9S  6.6 M
Installing dependencies:
 cmake-data        noarch   3.20.2-7.el9    BUILDROOT-C9S  1.5 M
 cmake-filesystem  ppc64le  3.20.2-7.el9    BUILDROOT-C9S   16 k
 cmake-rpm-macros  noarch   3.20.2-7.el9    BUILDROOT-C9S   15 k
 libuv             ppc64le  1:1.42.0-1.el9  BUILDROOT-C9S  157 k

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 8.3 M
Installed size: 36 M
Downloading Packages:
(1/5): cmake-filesystem-3.20.2-7.el9.ppc64le.rp 145 kB/s |  16 kB     00:00
(2/5): cmake-rpm-macros-3.20.2-7.el9.noarch.rpm 265 kB/s |  15 kB     00:00
(3/5): cmake-data-3.20.2-7.el9.noarch.rpm       5.7 MB/s | 1.5 MB     00:00
(4/5): libuv-1.42.0-1.el9.ppc64le.rpm           1.0 MB/s | 157 kB     00:00
(5/5): cmake-3.20.2-7.el9.ppc64le.rpm            14 MB/s | 6.6 MB     00:00
--------------------------------------------------------------------------------
Total                                            17 MB/s | 8.3 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : cmake-rpm-macros-3.20.2-7.el9.noarch                   1/5
  Installing       : cmake-filesystem-3.20.2-7.el9.ppc64le                  2/5
  Installing       : libuv-1:1.42.0-1.el9.ppc64le                           3/5
  Installing       : cmake-data-3.20.2-7.el9.noarch                         4/5
  Installing       : cmake-3.20.2-7.el9.ppc64le                             5/5
  Running scriptlet: cmake-3.20.2-7.el9.ppc64le                             5/5
  Verifying        : cmake-3.20.2-7.el9.ppc64le                             1/5
  Verifying        : cmake-data-3.20.2-7.el9.noarch                         2/5
  Verifying        : cmake-filesystem-3.20.2-7.el9.ppc64le                  3/5
  Verifying        : cmake-rpm-macros-3.20.2-7.el9.noarch                   4/5
  Verifying        : libuv-1:1.42.0-1.el9.ppc64le                           5/5

Installed:
  cmake-3.20.2-7.el9.ppc64le
  cmake-data-3.20.2-7.el9.noarch
  cmake-filesystem-3.20.2-7.el9.ppc64le
  cmake-rpm-macros-3.20.2-7.el9.noarch
  libuv-1:1.42.0-1.el9.ppc64le

Complete!
[ 03:11:57 ] Running: 'python3 -m pip install cffi --no-binary=cffi'
Collecting cffi
  Downloading cffi-1.15.1.tar.gz (508 kB)
Collecting pycparser
  Downloading pycparser-2.21-py2.py3-none-any.whl (118 kB)
Skipping wheel build for cffi, due to binaries being disabled for it.
Installing collected packages: pycparser, cffi
    Running setup.py install for cffi: started
    Running setup.py install for cffi: finished with status 'done'
Successfully installed cffi-1.15.1 pycparser-2.21
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager.
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[ 03:12:07 ] Running: 'python3 -m pip install stqe --no-binary=stqe'
Collecting stqe
  Downloading stqe-0.1.12.tar.gz (188 kB)
Collecting libsan
  Downloading libsan-0.3.11-py3-none-any.whl (199 kB)
Requirement already satisfied: python-augeas in /usr/lib/python3.9/site-packages (from stqe) (0.5.0)
Collecting fmf==1.1.0
  Downloading fmf-1.1.0-py3-none-any.whl (36 kB)
Collecting pexpect
  Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting jsonschema
  Downloading jsonschema-4.16.0-py3-none-any.whl (83 kB)
Collecting filelock
  Downloading filelock-3.8.0-py3-none-any.whl (10 kB)
Collecting ruamel.yaml
  Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting attrs>=17.4.0
  Downloading attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Downloading pyrsistent-0.18.1.tar.gz (100 kB)
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'done'
    Preparing wheel metadata: started
    Preparing wheel metadata: finished with status 'done'
Collecting distro
  Downloading distro-1.7.0-py3-none-any.whl (20 kB)
Requirement already satisfied: six in /usr/lib/python3.9/site-packages (from libsan->stqe) (1.15.0)
Collecting requests
  Downloading requests-2.28.1-py3-none-any.whl (62 kB)
Collecting future
  Downloading future-0.18.2.tar.gz (829 kB)
Requirement already satisfied: netifaces in /usr/lib64/python3.9/site-packages (from libsan->stqe) (0.10.6)
Collecting ipaddress
  Downloading ipaddress-1.0.23-py2.py3-none-any.whl (18 kB)
Collecting ptyprocess>=0.5
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting charset-normalizer<3,>=2
  Downloading charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting urllib3<1.27,>=1.21.1
  Downloading urllib3-1.26.12-py2.py3-none-any.whl (140 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2022.9.24-py3-none-any.whl (161 kB)
Collecting ruamel.yaml.clib>=0.2.6
  Downloading ruamel.yaml.clib-0.2.6.tar.gz (180 kB)
Skipping wheel build for stqe, due to binaries being disabled for it.
Building wheels for collected packages: pyrsistent, future, ruamel.yaml.clib
  Building wheel for pyrsistent (PEP 517): started
  Building wheel for pyrsistent (PEP 517): finished with status 'done'
  Created wheel for pyrsistent: filename=pyrsistent-0.18.1-cp39-cp39-linux_ppc64le.whl size=111125 sha256=4581535bbe89fa7f9cd38b97b02fcfaee25a0c9184553460ccd64d8dd4a1509b
  Stored in directory: /root/.cache/pip/wheels/87/fe/e6/fc8deeb581a41e462eafaf19fee96f51cdc8391e0be1c8088a
  Building wheel for future (setup.py): started
  Building wheel for future (setup.py): finished with status 'done'
  Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491070 sha256=f08480abaf85f9ddd7934bcec355d7f4cf65424d3c5976bc5bd6c5166006b4b7
  Stored in directory: /root/.cache/pip/wheels/2f/a0/d3/4030d9f80e6b3be787f19fc911b8e7aa462986a40ab1e4bb94
  Building wheel for ruamel.yaml.clib (setup.py): started
  Building wheel for ruamel.yaml.clib (setup.py): finished with status 'done'
  Created wheel for ruamel.yaml.clib: filename=ruamel.yaml.clib-0.2.6-cp39-cp39-linux_ppc64le.whl size=650426 sha256=ef9e0a69a0d264611d3d901f51daaa9e4c058a9479f95b32444f7f2a7d27b0a8
  Stored in directory: /root/.cache/pip/wheels/b1/c4/5d/d96e5c09189f4d6d2a9ffb0d7af04ee06d11a20f613f5f3496
Successfully built pyrsistent future ruamel.yaml.clib
Installing collected packages: urllib3, ruamel.yaml.clib, pyrsistent, idna, charset-normalizer, certifi, attrs, ruamel.yaml, requests, ptyprocess, jsonschema, ipaddress, future, filelock, distro, pexpect, libsan, fmf, stqe
    Running setup.py install for stqe: started
    Running setup.py install for stqe: finished with status 'done'
Successfully installed attrs-22.1.0 certifi-2022.9.24 charset-normalizer-2.1.1 distro-1.7.0 filelock-3.8.0 fmf-1.1.0 future-0.18.2 idna-3.4 ipaddress-1.0.23 jsonschema-4.16.0 libsan-0.3.11 pexpect-4.8.0 ptyprocess-0.7.0 pyrsistent-0.18.1 requests-2.28.1 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.6 stqe-0.1.12 urllib3-1.26.12
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[ 03:12:38 ] Running: 'stqe-test run -c lvm/lvm-thinp-basic.conf'
==============================================================================================================
Running test 'lvm/thinp/lvm-thinp-modules.py'
==============================================================================================================
INFO: [2022-09-28 03:12:38] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvm-thinp-modules.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:12:38] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:12:38] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:12:39] Running: 'echo 101000000 > /tmp/previous-dump-check'...
INFO: [2022-09-28 03:12:39] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:12:39] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:12:39] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
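The "Checking for errors on dmesg" step above boils down to two greps over the kernel ring buffer. A sketch of that check, under the assumption that the harness fails when either pattern matches (`check_dmesg_errors` is an illustrative name); it is run here against a saved sample rather than the live `dmesg` output:

```shell
# Hypothetical reimplementation of the dmesg error scan performed by the test init.
check_dmesg_errors() {
    # $1: file holding captured dmesg output
    if grep -qi ' segfault ' "$1" || grep -qi 'Call Trace:' "$1"; then
        echo "FAIL: errors found on dmesg"
        return 1
    fi
    echo "PASS: No errors on dmesg have been found."
}

# Demo against a clean captured sample instead of the live ring buffer
cat > /tmp/dmesg.sample <<'EOF'
[    1.000000] EXT4-fs (sda5): mounted filesystem with ordered data mode
[    2.000000] loop: module loaded
EOF
check_dmesg_errors /tmp/dmesg.sample
```

A line containing `Call Trace:` or ` segfault ` would flip the result to FAIL, which is exactly the condition the harness reports.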
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:12:39] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 76084sec preferred_lft 76084sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591957sec preferred_lft 604757sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:12:39] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   66M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:12:39] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is not installed
INFO: dnf is installed (dnf-4.12.0-4.el9.noarch)
INFO: [2022-09-28 03:12:39] Running: 'dnf install -y lvm2'...
Last metadata expiration check: 2:26:12 ago on Wed 28 Sep 2022 12:46:28 AM EDT.
Dependencies resolved.
================================================================================
 Package                        Arch     Version           Repository     Size
================================================================================
Installing:
 lvm2                           ppc64le  9:2.03.16-3.el9   BUILDROOT-C9S  1.5 M
Installing dependencies:
 device-mapper-event            ppc64le  9:1.02.185-3.el9  BUILDROOT-C9S   33 k
 device-mapper-event-libs       ppc64le  9:1.02.185-3.el9  BUILDROOT-C9S   32 k
 device-mapper-persistent-data  ppc64le  0.9.0-13.el9      BUILDROOT-C9S  885 k
 lvm2-libs                      ppc64le  9:2.03.16-3.el9   BUILDROOT-C9S  1.0 M

Transaction Summary
================================================================================
Install  5 Packages

Total download size: 3.5 M
Installed size: 12 M
Downloading Packages:
(1/5): device-mapper-event-1.02.185-3.el9.ppc64 197 kB/s |  33 kB     00:00
(2/5): device-mapper-event-libs-1.02.185-3.el9. 166 kB/s |  32 kB     00:00
(3/5): device-mapper-persistent-data-0.9.0-13.e 3.1 MB/s | 885 kB     00:00
(4/5): lvm2-2.03.16-3.el9.ppc64le.rpm           8.7 MB/s | 1.5 MB     00:00
(5/5): lvm2-libs-2.03.16-3.el9.ppc64le.rpm      4.3 MB/s | 1.0 MB     00:00
--------------------------------------------------------------------------------
Total                                           8.1 MB/s | 3.5 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : device-mapper-event-libs-9:1.02.185-3.el9.ppc64le      1/5
  Installing       : device-mapper-event-9:1.02.185-3.el9.ppc64le           2/5
  Running scriptlet: device-mapper-event-9:1.02.185-3.el9.ppc64le           2/5
Created symlink /etc/systemd/system/sockets.target.wants/dm-event.socket → /usr/lib/systemd/system/dm-event.socket.

  Installing       : lvm2-libs-9:2.03.16-3.el9.ppc64le                      3/5
  Installing       : device-mapper-persistent-data-0.9.0-13.el9.ppc64le     4/5
  Installing       : lvm2-9:2.03.16-3.el9.ppc64le                           5/5
  Running scriptlet: lvm2-9:2.03.16-3.el9.ppc64le                           5/5
Created symlink /etc/systemd/system/sysinit.target.wants/lvm2-monitor.service → /usr/lib/systemd/system/lvm2-monitor.service.
Created symlink /etc/systemd/system/sysinit.target.wants/lvm2-lvmpolld.socket → /usr/lib/systemd/system/lvm2-lvmpolld.socket.

  Verifying        : device-mapper-event-9:1.02.185-3.el9.ppc64le           1/5
  Verifying        : device-mapper-event-libs-9:1.02.185-3.el9.ppc64le      2/5
  Verifying        : device-mapper-persistent-data-0.9.0-13.el9.ppc64le     3/5
  Verifying        : lvm2-9:2.03.16-3.el9.ppc64le                           4/5
  Verifying        : lvm2-libs-9:2.03.16-3.el9.ppc64le                      5/5

Installed:
  device-mapper-event-9:1.02.185-3.el9.ppc64le
  device-mapper-event-libs-9:1.02.185-3.el9.ppc64le
  device-mapper-persistent-data-0.9.0-13.el9.ppc64le
  lvm2-9:2.03.16-3.el9.ppc64le
  lvm2-libs-9:2.03.16-3.el9.ppc64le

Complete!
INFO: lvm2 was successfully installed
################################################################################
INFO: Starting Thin Provisioning Module test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:12:48] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:12:48] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
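The loop-device setup the test performs is just a file allocation followed by `losetup`. A sketch of those two steps (`create_loop` is an illustrative name; the sizes and paths match the log, and the root/loop-control guard is an addition so the allocation part can run unprivileged):

```shell
# Create a 128 MiB backing file and, when the environment allows it,
# attach it as a loop device -- mirroring the fallocate + losetup pair above.
create_loop() {
    img=$1
    size_mb=$2
    # fallocate is what the harness runs; truncate is a portable fallback
    fallocate -l "${size_mb}M" "$img" 2>/dev/null || truncate -s "${size_mb}M" "$img"
    if [ "$(id -u)" -eq 0 ] && [ -e /dev/loop-control ]; then
        # -f picks the first free /dev/loopN; --show prints its name
        losetup -f --show "$img" || true
    fi
}

create_loop /var/tmp/loop0.img 128
stat -c %s /var/tmp/loop0.img   # 134217728 bytes = 128 MiB
```

The harness attaches each image to a fixed `/dev/loopN` instead of using `-f`; either way the resulting devices can then back physical volumes for `vgcreate`, as the log shows next.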
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:12:48] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:12:48] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:12:48] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:12:48] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:12:48] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:12:48] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: [2022-09-28 03:12:48] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop3 /dev/loop1 /dev/loop2'...
  Physical volume "/dev/loop0" successfully created.
  Physical volume "/dev/loop3" successfully created.
  Physical volume "/dev/loop1" successfully created.
  Physical volume "/dev/loop2" successfully created.
  Volume group "testvg" successfully created
  Creating devices file /etc/lvm/devices/system.devices
INFO: [2022-09-28 03:12:49] Running: 'modprobe -r dm_thin_pool'...
PASS: modprobe -r dm_thin_pool
load & unload dm_thin_pool 100 times
INFO: [2022-09-28 03:12:49] Running: 'for i in `seq 100`; do modprobe dm_thin_pool;modprobe -r dm_thin_pool; done'...
PASS: for i in `seq 100`; do modprobe dm_thin_pool;modprobe -r dm_thin_pool; done
INFO: [2022-09-28 03:13:18] Running: 'modprobe -r dm_cache_cleaner'...
PASS: modprobe -r dm_cache_cleaner
INFO: [2022-09-28 03:13:18] Running: 'modprobe -r dm_cache_smq'...
PASS: modprobe -r dm_cache_smq
INFO: [2022-09-28 03:13:18] Running: 'modprobe -r dm_cache'...
PASS: modprobe -r dm_cache
INFO: [2022-09-28 03:13:18] Running: 'modprobe -r dm_persistent_data'...
PASS: modprobe -r dm_persistent_data
INFO: [2022-09-28 03:13:18] Running: 'lvs -omodules | grep thin-pool'...
PASS: lvs -omodules | grep thin-pool [exited with error, as expected]
INFO: [2022-09-28 03:13:18] Running: 'lsmod | egrep -w '^dm_thin_pool''...
PASS: lsmod | egrep -w '^dm_thin_pool' [exited with error, as expected]
INFO: [2022-09-28 03:13:18] Running: 'lsmod | egrep -w '^dm_persistent_data''...
PASS: lsmod | egrep -w '^dm_persistent_data' [exited with error, as expected]
INFO: [2022-09-28 03:13:18] Running: 'modinfo dm_thin_pool'...
filename:       /lib/modules/5.14.0-169.mr1370_220927_1944.el9.ppc64le/kernel/drivers/md/dm-thin-pool.ko.xz
license:        GPL
author:         Joe Thornber
description:    device-mapper thin provisioning target
rhelversion:    9.2
srcversion:     D103BBD858A26363E7CDD0A
depends:        dm-persistent-data,dm-bio-prison,dm-mod
intree:         Y
name:           dm_thin_pool
vermagic:       5.14.0-169.mr1370_220927_1944.el9.ppc64le SMP mod_unload modversions mprofile-kernel relocatable
sig_id:         PKCS#7
signer:         CentOS Stream kernel signing key
sig_key:        2E:99:B4:FC:7D:F6:D1:F5:80:28:AC:8E:C7:F9:B4:F8:C6:FB:71:AD
sig_hashalgo:   sha256
signature:      5B:9B:0F:B6:33:10:21:C1:8E:A0:C4:86:A8:40:82:71:EF:4E:F3:E6:
                D9:B3:8C:14:0D:D0:DC:C6:F8:BC:6A:88:C2:20:A8:6C:AA:46:2F:11:
                1A:9A:A3:35:35:2B:A4:75:DC:A5:0D:52:1A:9C:7C:11:4C:83:58:06:
                3D:13:FC:01:60:87:FA:DB:0C:77:7B:FF:ED:06:9E:64:0C:B7:76:11:
                F8:4B:40:EF:9C:38:9E:23:AC:50:66:87:6F:2D:88:03:6A:AA:85:B0:
                84:CC:CB:9B:FA:58:73:3D:F1:47:8C:21:4F:AF:CD:C7:0E:35:DE:AF:
                55:8F:F9:2D:C1:5C:5E:0C:12:01:4F:76:C9:B0:02:E2:87:ED:80:CE:
                6B:FD:EF:02:40:12:2F:93:45:4B:8D:89:8C:10:DA:1A:4E:C9:F2:B7:
                EB:25:3B:C8:7E:D6:61:81:B9:ED:4A:95:DE:92:4F:9B:60:D5:00:53:
                63:92:3A:37:A3:F5:AF:58:36:EC:64:F5:DF:21:B0:63:AF:0F:1A:42:
                3B:B9:C3:02:03:DD:6C:B0:BA:D2:7A:4F:35:4D:95:21:C0:44:32:12:
                D0:73:C1:0F:5C:D1:B5:33:F1:14:68:6B:07:95:F1:3F:C2:1C:36:C8:
                81:F4:7A:84:92:63:92:83:38:17:5A:BF:F1:A9:D5:C2:A4:73:35:8F:
                B3:5C:54:57:A7:64:CC:43:92:34:E9:5E:E0:EC:13:1A:FA:6C:4A:80:
                02:A2:0A:77:28:43:31:CB:B8:EE:80:F4:DA:F7:1F:65:D9:B7:17:29:
                74:EB:3F:A7:97:FF:50:59:4F:05:A3:E8:CC:A0:FB:09:06:34:37:7A:
                13:14:C8:AF:31:63:18:22:7D:00:F7:E1:FF:6D:DB:D9:AA:D7:8A:6D:
                C8:90:97:04:50:25:2D:06:F9:1A:93:6C:01:22:5D:95:F0:8A:5F:23:
                7C:69:68:BC:43:E5:93:72:6A:FA:8E:F7:07:44:77:91:04:77:71:20:
                12:58:D5:D3
parm:           snapshot_copy_throttle:A percentage of time allocated for copy on write (uint)
parm:           no_space_timeout:Out of data space queue IO timeout in seconds (uint)
PASS: modinfo dm_thin_pool
INFO: [2022-09-28 03:13:18] Running: 'modinfo dm_persistent_data'...
filename:       /lib/modules/5.14.0-169.mr1370_220927_1944.el9.ppc64le/kernel/drivers/md/persistent-data/dm-persistent-data.ko.xz
description:    Immutable metadata library for dm
author:         Joe Thornber
license:        GPL
rhelversion:    9.2
srcversion:     DF4419032177883A510A86A
depends:        dm-bufio,libcrc32c
intree:         Y
name:           dm_persistent_data
vermagic:       5.14.0-169.mr1370_220927_1944.el9.ppc64le SMP mod_unload modversions mprofile-kernel relocatable
sig_id:         PKCS#7
signer:         CentOS Stream kernel signing key
sig_key:        2E:99:B4:FC:7D:F6:D1:F5:80:28:AC:8E:C7:F9:B4:F8:C6:FB:71:AD
sig_hashalgo:   sha256
signature:      37:87:A1:FE:C7:CA:E4:60:8E:5B:32:BE:0C:98:97:68:AB:86:79:C0:
                CE:B8:ED:AD:0B:5C:CF:7D:F6:FC:92:DD:BA:A2:E8:1F:3B:0E:CF:07:
                64:E8:A5:CA:71:C3:0A:9D:9F:4C:0E:DF:12:9C:2E:B0:81:50:C3:81:
                89:A9:87:87:ED:CB:91:FC:48:C2:B3:04:EB:47:1A:C8:14:90:1B:C3:
                F0:19:E2:3C:01:BF:6E:A0:BE:81:8D:FD:D8:78:18:6F:D7:3C:39:FE:
                FA:D2:EC:64:7D:04:C3:50:1C:10:CD:87:6B:67:05:D4:F5:E2:70:28:
                AA:B7:C2:78:E6:B2:C4:79:0C:C2:BE:AB:F1:A4:89:CA:C8:C0:D7:01:
                19:0A:C6:CA:F1:D5:57:F9:9B:F7:1E:A3:A0:F2:57:63:53:85:04:70:
                58:51:A2:D2:D0:89:CF:DE:71:8D:A0:A0:AC:36:03:C3:41:D0:77:BD:
                5E:F1:29:BB:33:3B:2E:29:F6:3D:B0:F7:C8:D9:99:62:93:44:2D:B8:
                75:DD:5D:49:9A:1C:ED:14:7F:2B:EA:93:F5:7C:E7:E2:A1:CB:53:44:
                73:4E:8D:C8:D1:4D:55:B2:7C:EE:A1:7D:AD:5B:47:1C:DC:7C:CB:98:
                9D:83:C4:54:A0:0B:63:C6:68:AD:A4:C9:3F:A6:7F:4D:9E:04:20:D4:
                93:8E:BE:7D:E2:E7:E1:DF:39:FB:31:83:44:91:2C:88:55:45:EE:52:
                FE:D4:A2:C4:85:2E:9A:BE:62:1F:3D:75:D3:ED:38:D4:7F:D6:D8:63:
                B4:05:68:4F:16:6B:2D:BB:24:E0:CF:94:65:3B:00:11:41:23:F6:57:
                DC:87:F3:ED:65:82:8A:95:EF:F9:C1:00:A9:96:36:EF:99:93:B2:8E:
                AB:77:18:7C:72:DA:1D:05:91:19:75:97:6E:98:EA:33:66:58:CC:44:
                07:46:9E:9B:93:49:8F:1C:58:B7:6C:C8:0F:32:A2:5B:5D:58:0F:FB:
                DD:09:FB:43
PASS: modinfo dm_persistent_data
INFO: [2022-09-28 03:13:18] Running: 'modinfo -d dm_thin_pool | grep "^device-mapper thin provisioning target$"'...
device-mapper thin provisioning target
PASS: modinfo -d dm_thin_pool | grep "^device-mapper thin provisioning target$"
INFO: [2022-09-28 03:13:18] Running: 'modinfo -d dm_persistent_data | grep "^Immutable metadata library for dm$"'...
Immutable metadata library for dm
PASS: modinfo -d dm_persistent_data | grep "^Immutable metadata library for dm$"
INFO: [2022-09-28 03:13:18] Running: 'lvcreate -l1 -T testvg/pool1'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "pool1" created.
PASS: lvcreate -l1 -T testvg/pool1
INFO: [2022-09-28 03:13:19] Running: 'lvcreate -V100m -T testvg/pool1 -n lv1'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv1" created.
  WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB).
PASS: lvcreate -V100m -T testvg/pool1 -n lv1
INFO: [2022-09-28 03:13:20] Running: 'lsmod | egrep -w '^dm_persistent_data' | awk '{print $3}''...
1
PASS: lsmod | egrep -w '^dm_persistent_data' | awk '{print $3}' == 1
INFO: [2022-09-28 03:13:20] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
2
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 2
INFO: [2022-09-28 03:13:20] Running: 'lvcreate -V100m -T testvg/pool1 -n lv2'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv2" created.
  WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB).
PASS: lvcreate -V100m -T testvg/pool1 -n lv2
INFO: [2022-09-28 03:13:21] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
3
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 3
INFO: [2022-09-28 03:13:21] Running: 'lvcreate -l1 -T testvg/pool2'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "pool2" created.
PASS: lvcreate -l1 -T testvg/pool2
INFO: [2022-09-28 03:13:21] Running: 'lvcreate -V100m -T testvg/pool2 -n lv21'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv21" created.
  WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools (8.00 MiB).
PASS: lvcreate -V100m -T testvg/pool2 -n lv21
INFO: [2022-09-28 03:13:23] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
5
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 5
INFO: [2022-09-28 03:13:23] Running: 'lvchange -an testvg/lv21'...
PASS: lvchange -an testvg/lv21
INFO: [2022-09-28 03:13:23] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
4
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 4
INFO: [2022-09-28 03:13:23] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2022-09-28 03:13:24] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
3
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 3
INFO: [2022-09-28 03:13:24] Running: 'lvcreate -i1 -l1 -T testvg/pool3'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "pool3" created.
PASS: lvcreate -i1 -l1 -T testvg/pool3
INFO: [2022-09-28 03:13:25] Running: 'lvcreate -V100m -T testvg/pool3 -n lv31'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv31" created.
  WARNING: Sum of all thin volume sizes (400.00 MiB) exceeds the size of thin pools (12.00 MiB).
PASS: lvcreate -V100m -T testvg/pool3 -n lv31
INFO: [2022-09-28 03:13:26] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
5
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 5
INFO: [2022-09-28 03:13:26] Running: 'lvcreate -V100m -T testvg/pool3 -n lv32'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv32" created.
  WARNING: Sum of all thin volume sizes (500.00 MiB) exceeds the size of thin pools and the size of whole volume group (496.00 MiB).
PASS: lvcreate -V100m -T testvg/pool3 -n lv32
INFO: [2022-09-28 03:13:27] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
8
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '6', but returned '8'
INFO: [2022-09-28 03:13:27] Running: 'lvcreate -i2 -l1 -T testvg/pool4'...
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
  Logical volume "pool4" created.
PASS: lvcreate -i2 -l1 -T testvg/pool4
INFO: [2022-09-28 03:13:28] Running: 'lvcreate -V100m -T testvg/pool4 -n lv41'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv41" created.
  WARNING: Sum of all thin volume sizes (600.00 MiB) exceeds the size of thin pools and the size of whole volume group (496.00 MiB).
PASS: lvcreate -V100m -T testvg/pool4 -n lv41
INFO: [2022-09-28 03:13:29] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
10
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '8', but returned '10'
INFO: [2022-09-28 03:13:29] Running: 'lvchange -an testvg/lv41'...
PASS: lvchange -an testvg/lv41
INFO: [2022-09-28 03:13:29] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
9
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '7', but returned '9'
INFO: [2022-09-28 03:13:29] Running: 'lvchange -an testvg/pool4'...
PASS: lvchange -an testvg/pool4
INFO: [2022-09-28 03:13:30] Running: 'lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}''...
8
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '6', but returned '8'
INFO: [2022-09-28 03:13:30] Running: 'lsmod | grep -w dm_thin_pool'...
dm_thin_pool          393216  8
dm_persistent_data    393216  1 dm_thin_pool
dm_bio_prison         327680  1 dm_thin_pool
dm_mod                458752  44 dm_thin_pool,dm_bufio
PASS: lsmod | grep -w dm_thin_pool
INFO: [2022-09-28 03:13:30] Running: 'lvs -o +modules testvg'...
  LV    VG     Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert Modules
  lv1   testvg Vwi-a-tz-- 100.00m pool1        0.00                                    thin,thin-pool
  lv2   testvg Vwi-a-tz-- 100.00m pool1        0.00                                    thin,thin-pool
  lv21  testvg Vwi-a-tz-- 100.00m pool2        0.00                                    thin,thin-pool
  lv31  testvg Vwi-a-tz-- 100.00m pool3        0.00                                    thin,thin-pool
  lv32  testvg Vwi-a-tz-- 100.00m pool3        0.00                                    thin,thin-pool
  lv41  testvg Vwi---tz-- 100.00m pool4                                                thin,thin-pool
  pool1 testvg twi-aotz--   4.00m              0.00   11.04                            thin-pool
  pool2 testvg twi-aotz--   4.00m              0.00   10.94                            thin-pool
  pool3 testvg twi-aotz--   4.00m              0.00   11.04                            thin-pool
  pool4 testvg twi---tz--   8.00m                                                      thin-pool
PASS: lvs -o +modules testvg
INFO: [2022-09-28 03:13:30] Running: 'modprobe -r dm_thin_pool'...
modprobe: FATAL: Module dm_thin_pool is in use.
PASS: modprobe -r dm_thin_pool [exited with error, as expected]
INFO: [2022-09-28 03:13:30] Running: 'lsmod | egrep -w '^dm_thin_pool''...
dm_thin_pool          393216  8
PASS: lsmod | egrep -w '^dm_thin_pool'
INFO: [2022-09-28 03:13:30] Running: 'lsmod | egrep -w '^dm_persistent_data''...
dm_persistent_data    393216  1 dm_thin_pool
PASS: lsmod | egrep -w '^dm_persistent_data'
INFO: [2022-09-28 03:13:30] Running: 'lvremove -ff testvg'...
  Logical volume "lv41" successfully removed.
  Logical volume "pool4" successfully removed.
  Logical volume "lv31" successfully removed.
  Logical volume "lv32" successfully removed.
  Logical volume "pool3" successfully removed.
  Logical volume "lv21" successfully removed.
  Logical volume "pool2" successfully removed.
  Logical volume "lv1" successfully removed.
  Logical volume "lv2" successfully removed.
  Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:13:33] Running: 'modprobe -r dm_thin_pool'...
PASS: modprobe -r dm_thin_pool
INFO: [2022-09-28 03:13:33] Running: 'lsmod | egrep -w '^dm_thin_pool''...
PASS: lsmod | egrep -w '^dm_thin_pool' [exited with error, as expected]
INFO: [2022-09-28 03:13:33] Running: 'lsmod | egrep -w '^dm_persistent_data''...
PASS: lsmod | egrep -w '^dm_persistent_data' [exited with error, as expected]
INFO: [2022-09-28 03:13:33] Running: 'vgremove --force testvg'...
  Volume group "testvg" successfully removed
INFO: [2022-09-28 03:13:34] Running: 'pvremove /dev/loop0'...
  Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:13:35] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:13:35] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:13:35] Running: 'pvremove /dev/loop3'...
  Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:13:36] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:13:36] Running: 'rm -f /var/tmp/loop3.img'...
INFO: [2022-09-28 03:13:37] Running: 'pvremove /dev/loop1'...
  Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:13:38] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:13:38] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:13:38] Running: 'pvremove /dev/loop2'...
  Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:13:39] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:13:39] Running: 'rm -f /var/tmp/loop2.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:13:39] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:13:39] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:13:39] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:13:39] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:13:39] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
################################ Test Summary ##################################
PASS: modprobe -r dm_thin_pool
PASS: for i in `seq 100`; do modprobe dm_thin_pool;modprobe -r dm_thin_pool; done
PASS: modprobe -r dm_cache_cleaner
PASS: modprobe -r dm_cache_smq
PASS: modprobe -r dm_cache
PASS: modprobe -r dm_persistent_data
PASS: lvs -omodules | grep thin-pool [exited with error, as expected]
PASS: lsmod | egrep -w '^dm_thin_pool' [exited with error, as expected]
PASS: lsmod | egrep -w '^dm_persistent_data' [exited with error, as expected]
PASS: modinfo dm_thin_pool
PASS: modinfo dm_persistent_data
PASS: modinfo -d dm_thin_pool | grep "^device-mapper thin provisioning target$"
PASS: modinfo -d dm_persistent_data | grep "^Immutable metadata library for dm$"
PASS: lvcreate -l1 -T testvg/pool1
PASS: lvcreate -V100m -T testvg/pool1 -n lv1
PASS: lsmod | egrep -w '^dm_persistent_data' | awk '{print $3}' == 1
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 2
PASS: lvcreate -V100m -T testvg/pool1 -n lv2
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 3
PASS: lvcreate -l1 -T testvg/pool2
PASS: lvcreate -V100m -T testvg/pool2 -n lv21
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 5
PASS: lvchange -an testvg/lv21
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 4
PASS: lvchange -an testvg/pool2
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 3
PASS: lvcreate -i1 -l1 -T testvg/pool3
PASS: lvcreate -V100m -T testvg/pool3 -n lv31
PASS: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' == 5
PASS: lvcreate -V100m -T testvg/pool3 -n lv32
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '6', but returned '8'
PASS: lvcreate -i2 -l1 -T testvg/pool4
PASS: lvcreate -V100m -T testvg/pool4 -n lv41
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '8', but returned '10'
PASS: lvchange -an testvg/lv41
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '7', but returned '9'
PASS: lvchange -an testvg/pool4
ERROR: lsmod | egrep -w '^dm_thin_pool' | awk '{print $3}' should return '6', but returned '8'
PASS: lsmod | grep -w dm_thin_pool
PASS: lvs -o +modules testvg
PASS: modprobe -r dm_thin_pool [exited with error, as expected]
PASS: lsmod | egrep -w '^dm_thin_pool'
PASS: lsmod | egrep -w '^dm_persistent_data'
PASS: lvremove -ff testvg
PASS: modprobe -r dm_thin_pool
PASS: lsmod | egrep -w '^dm_thin_pool' [exited with error, as expected]
PASS: lsmod | egrep -w '^dm_persistent_data' [exited with error, as expected]
PASS: Search for error on the server
#############################
Total tests that passed: 44
Total tests that failed: 4
Total tests that skipped: 0
################################################################################
FAIL: test failed
==============================================================================================================
Running test 'lvm/thinp/lvchange-thin.py'
==============================================================================================================
INFO: [2022-09-28 03:13:41] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvchange-thin.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:13:41] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:13:41] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:13:41] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:13:41] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:13:41] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:13:41] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 76022sec preferred_lft 76022sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591978sec preferred_lft 604778sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:13:41] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   67M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:13:41] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting LV Change Thin Provisioning test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:13:41] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:13:41] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:13:41] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:13:41] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:13:41] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:13:41] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:13:41] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:13:42] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:13:42] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2022-09-28 03:13:42] Running: 'lvcreate -V100m -L100m -T testvg/pool1 -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv1" created.
INFO: [2022-09-28 03:13:43] Running: 'lvcreate -V100m -i2 -L100m -T testvg/pool2 -n lv2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
Logical volume "lv2" created.
PASS: testvg/pool1 discards == passdown
INFO: [2022-09-28 03:13:45] Running: 'lvchange --discards ignore testvg/pool1'...
Cannot change support for discards while pool volume testvg/pool1 is active.
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == passdown
INFO: [2022-09-28 03:13:45] Running: 'lvchange --discards nopassdown testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards nopassdown testvg/pool1
PASS: testvg/pool1 discards == nopassdown
INFO: [2022-09-28 03:13:46] Running: 'lvchange --discards ignore testvg/pool1'...
Cannot change support for discards while pool volume testvg/pool1 is active.
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == nopassdown
INFO: [2022-09-28 03:13:46] Running: 'lvchange --discards passdown testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards passdown testvg/pool1
PASS: testvg/pool1 discards == passdown
INFO: [2022-09-28 03:13:47] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2022-09-28 03:13:47] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: testvg/pool1 lv_attr == twi---tz--
INFO: [2022-09-28 03:13:48] Running: 'lvchange --discards ignore testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards ignore testvg/pool1
INFO: [2022-09-28 03:13:48] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2022-09-28 03:13:48] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: testvg/pool1 lv_attr == twi-aotz--
INFO: [2022-09-28 03:13:49] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2022-09-28 03:13:49] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2022-09-28 03:13:49] Running: 'lvchange --discards nopassdown testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards nopassdown testvg/pool1
INFO: [2022-09-28 03:13:50] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2022-09-28 03:13:50] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == nopassdown
INFO: [2022-09-28 03:13:50] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2022-09-28 03:13:51] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2022-09-28 03:13:51] Running: 'lvchange --discards ignore testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards ignore testvg/pool1
INFO: [2022-09-28 03:13:52] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2022-09-28 03:13:52] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
INFO: [2022-09-28 03:13:52] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2022-09-28 03:13:52] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2022-09-28 03:13:53] Running: 'lvchange --discards passdown testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange --discards passdown testvg/pool1
INFO: [2022-09-28 03:13:53] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2022-09-28 03:13:54] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == passdown
INFO: [2022-09-28 03:13:54] Running: 'lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV''...
Command not permitted on LV testvg/pool1.
PASS: lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool1 lv_attr == twi-aotz--
INFO: [2022-09-28 03:13:54] Running: 'lvchange --refresh testvg/pool1'...
PASS: lvchange --refresh testvg/pool1
INFO: [2022-09-28 03:13:54] Running: 'lvchange --monitor n testvg/pool1'...
PASS: lvchange --monitor n testvg/pool1
INFO: [2022-09-28 03:13:55] Running: 'lvchange --monitor y testvg/pool1'...
PASS: lvchange --monitor y testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
INFO: [2022-09-28 03:13:55] Running: 'lvchange -Cy testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -Cy testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == contiguous
INFO: [2022-09-28 03:13:55] Running: 'lvchange -Cn testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -Cn testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: testvg/pool1 lv_read_ahead == auto
INFO: [2022-09-28 03:13:56] Running: 'lvchange -r 256 testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -r 256 testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 128.00k
INFO: [2022-09-28 03:13:57] Running: 'lvchange -r none testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -r none testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 0
INFO: [2022-09-28 03:13:57] Running: 'lvchange -r auto testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -r auto testvg/pool1
PASS: testvg/pool1 lv_read_ahead == auto
INFO: [2022-09-28 03:13:57] Running: 'lvchange -Zn testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -Zn testvg/pool1
PASS: testvg/pool1 zero ==
INFO: [2022-09-28 03:13:58] Running: 'lvchange -Z y testvg/pool1'...
Logical volume testvg/pool1 changed.
PASS: lvchange -Z y testvg/pool1
PASS: testvg/pool1 zero == zero
PASS: testvg/pool2 discards == passdown
INFO: [2022-09-28 03:13:58] Running: 'lvchange --discards ignore testvg/pool2'...
Cannot change support for discards while pool volume testvg/pool2 is active.
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == passdown
INFO: [2022-09-28 03:13:58] Running: 'lvchange --discards nopassdown testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards nopassdown testvg/pool2
PASS: testvg/pool2 discards == nopassdown
INFO: [2022-09-28 03:13:59] Running: 'lvchange --discards ignore testvg/pool2'...
Cannot change support for discards while pool volume testvg/pool2 is active.
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == nopassdown
INFO: [2022-09-28 03:13:59] Running: 'lvchange --discards passdown testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards passdown testvg/pool2
PASS: testvg/pool2 discards == passdown
INFO: [2022-09-28 03:14:00] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2022-09-28 03:14:00] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: testvg/pool2 lv_attr == twi---tz--
INFO: [2022-09-28 03:14:01] Running: 'lvchange --discards ignore testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards ignore testvg/pool2
INFO: [2022-09-28 03:14:01] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2022-09-28 03:14:01] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: testvg/pool2 lv_attr == twi-aotz--
INFO: [2022-09-28 03:14:02] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2022-09-28 03:14:02] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2022-09-28 03:14:02] Running: 'lvchange --discards nopassdown testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards nopassdown testvg/pool2
INFO: [2022-09-28 03:14:03] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2022-09-28 03:14:03] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == nopassdown
INFO: [2022-09-28 03:14:03] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2022-09-28 03:14:04] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2022-09-28 03:14:04] Running: 'lvchange --discards ignore testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards ignore testvg/pool2
INFO: [2022-09-28 03:14:04] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2022-09-28 03:14:05] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
INFO: [2022-09-28 03:14:05] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2022-09-28 03:14:06] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2022-09-28 03:14:06] Running: 'lvchange --discards passdown testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange --discards passdown testvg/pool2
INFO: [2022-09-28 03:14:07] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2022-09-28 03:14:07] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == passdown
INFO: [2022-09-28 03:14:07] Running: 'lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV''...
Command not permitted on LV testvg/pool2.
PASS: lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool2 lv_attr == twi-aotz--
INFO: [2022-09-28 03:14:08] Running: 'lvchange --refresh testvg/pool2'...
PASS: lvchange --refresh testvg/pool2
INFO: [2022-09-28 03:14:08] Running: 'lvchange --monitor n testvg/pool2'...
PASS: lvchange --monitor n testvg/pool2
INFO: [2022-09-28 03:14:08] Running: 'lvchange --monitor y testvg/pool2'...
PASS: lvchange --monitor y testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:09] Running: 'lvchange -Cy testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -Cy testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == contiguous
INFO: [2022-09-28 03:14:09] Running: 'lvchange -Cn testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -Cn testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: testvg/pool2 lv_read_ahead == auto
INFO: [2022-09-28 03:14:09] Running: 'lvchange -r 256 testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -r 256 testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 128.00k
INFO: [2022-09-28 03:14:10] Running: 'lvchange -r none testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -r none testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 0
INFO: [2022-09-28 03:14:10] Running: 'lvchange -r auto testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -r auto testvg/pool2
PASS: testvg/pool2 lv_read_ahead == auto
INFO: [2022-09-28 03:14:11] Running: 'lvchange -Zn testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -Zn testvg/pool2
PASS: testvg/pool2 zero ==
INFO: [2022-09-28 03:14:11] Running: 'lvchange -Z y testvg/pool2'...
Logical volume testvg/pool2 changed.
PASS: lvchange -Z y testvg/pool2
PASS: testvg/pool2 zero == zero
INFO: [2022-09-28 03:14:11] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi---tz--
INFO: [2022-09-28 03:14:12] Running: 'ls /dev/testvg/lv1'...
ls: cannot access '/dev/testvg/lv1': No such file or directory
PASS: ls /dev/testvg/lv1 [exited with error, as expected]
INFO: [2022-09-28 03:14:12] Running: 'lvchange -a y testvg/lv1'...
PASS: lvchange -a y testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
INFO: [2022-09-28 03:14:12] Running: 'ls /dev/testvg/lv1'...
/dev/testvg/lv1
PASS: ls /dev/testvg/lv1
INFO: [2022-09-28 03:14:12] Running: 'lvchange -pr testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -pr testvg/lv1
PASS: testvg/lv1 lv_attr == Vri-a-tz--
INFO: [2022-09-28 03:14:13] Running: 'lvchange -p rw testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -p rw testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
INFO: [2022-09-28 03:14:13] Running: 'lvchange --refresh testvg/lv1'...
PASS: lvchange --refresh testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:14] Running: 'lvchange -Cy testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -Cy testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == contiguous
INFO: [2022-09-28 03:14:14] Running: 'lvchange -Cn testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -Cn testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:14] Running: 'lvchange -r 256 testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -r 256 testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 128.00k
INFO: [2022-09-28 03:14:15] Running: 'lvchange -r none testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -r none testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 0
INFO: [2022-09-28 03:14:15] Running: 'lvchange -r auto testvg/lv1'...
Logical volume testvg/lv1 changed.
PASS: lvchange -r auto testvg/lv1
PASS: testvg/lv1 lv_read_ahead == auto
INFO: [2022-09-28 03:14:15] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi---tz--
INFO: [2022-09-28 03:14:16] Running: 'ls /dev/testvg/lv2'...
ls: cannot access '/dev/testvg/lv2': No such file or directory
PASS: ls /dev/testvg/lv2 [exited with error, as expected]
INFO: [2022-09-28 03:14:16] Running: 'lvchange -a y testvg/lv2'...
PASS: lvchange -a y testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
INFO: [2022-09-28 03:14:16] Running: 'ls /dev/testvg/lv2'...
/dev/testvg/lv2
PASS: ls /dev/testvg/lv2
INFO: [2022-09-28 03:14:16] Running: 'lvchange -pr testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -pr testvg/lv2
PASS: testvg/lv2 lv_attr == Vri-a-tz--
INFO: [2022-09-28 03:14:17] Running: 'lvchange -p rw testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -p rw testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
INFO: [2022-09-28 03:14:17] Running: 'lvchange --refresh testvg/lv2'...
PASS: lvchange --refresh testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:18] Running: 'lvchange -Cy testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -Cy testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == contiguous
INFO: [2022-09-28 03:14:18] Running: 'lvchange -Cn testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -Cn testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:18] Running: 'lvchange -r 256 testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -r 256 testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 128.00k
INFO: [2022-09-28 03:14:19] Running: 'lvchange -r none testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -r none testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 0
INFO: [2022-09-28 03:14:19] Running: 'lvchange -r auto testvg/lv2'...
Logical volume testvg/lv2 changed.
PASS: lvchange -r auto testvg/lv2
PASS: testvg/lv2 lv_read_ahead == auto
INFO: [2022-09-28 03:14:20] Running: 'lvcreate -s testvg/lv1 -n lv3'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv3" created.
WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (280.00 MiB).
PASS: lvcreate -s testvg/lv1 -n lv3
PASS: testvg/lv3 lv_attr == Vwi---tz-k
INFO: [2022-09-28 03:14:20] Running: 'ls /dev/testvg/lv3'...
ls: cannot access '/dev/testvg/lv3': No such file or directory
PASS: ls /dev/testvg/lv3 [exited with error, as expected]
INFO: [2022-09-28 03:14:20] Running: 'lvchange -ay -K testvg/lv3'...
PASS: lvchange -ay -K testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
INFO: [2022-09-28 03:14:21] Running: 'ls /dev/testvg/lv3'...
/dev/testvg/lv3
PASS: ls /dev/testvg/lv3
INFO: [2022-09-28 03:14:21] Running: 'lvchange -pr testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -pr testvg/lv3
PASS: testvg/lv3 lv_attr == Vri-a-tz-k
INFO: [2022-09-28 03:14:21] Running: 'lvchange -p rw testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -p rw testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
INFO: [2022-09-28 03:14:22] Running: 'lvchange --refresh testvg/lv3'...
PASS: lvchange --refresh testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:22] Running: 'lvchange -Cy testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -Cy testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == contiguous
INFO: [2022-09-28 03:14:23] Running: 'lvchange -Cn testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -Cn testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
INFO: [2022-09-28 03:14:23] Running: 'lvchange -r 256 testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -r 256 testvg/lv3
PASS: testvg/lv3 lv_read_ahead == 128.00k
INFO: [2022-09-28 03:14:24] Running: 'lvchange -r none testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -r none testvg/lv3
PASS: testvg/lv3 lv_read_ahead == 0
INFO: [2022-09-28 03:14:24] Running: 'lvchange -r auto testvg/lv3'...
Logical volume testvg/lv3 changed.
PASS: lvchange -r auto testvg/lv3
PASS: testvg/lv3 lv_read_ahead == auto
INFO: [2022-09-28 03:14:25] Running: 'vgremove --force testvg'...
Logical volume "lv2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "lv3" successfully removed.
Logical volume "pool1" successfully removed.
Volume group "testvg" successfully removed
INFO: [2022-09-28 03:14:27] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:14:28] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:14:28] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:14:28] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:14:29] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:14:29] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:29] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:14:31] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:14:31] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2022-09-28 03:14:31] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:14:32] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:14:32] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:14:32] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:32] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:32] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:32] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:32] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:14:32] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:14:32] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: testvg/pool1 discards == passdown
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == passdown
PASS: lvchange --discards nopassdown testvg/pool1
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange --discards passdown testvg/pool1
PASS: testvg/pool1 discards == passdown
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: testvg/pool1 lv_attr == twi---tz--
PASS: lvchange --discards ignore testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards nopassdown testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards ignore testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards passdown testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: lvchange --refresh testvg/pool1
PASS: lvchange --monitor n testvg/pool1
PASS: lvchange --monitor y testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: testvg/pool1 lv_read_ahead == auto
PASS: lvchange -r 256 testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 0
PASS: lvchange -r auto testvg/pool1
PASS: testvg/pool1 lv_read_ahead == auto
PASS: lvchange -Zn testvg/pool1
PASS: testvg/pool1 zero ==
PASS: lvchange -Z y testvg/pool1
PASS: testvg/pool1 zero == zero
PASS: testvg/pool2 discards == passdown
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == passdown
PASS: lvchange --discards nopassdown testvg/pool2
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange --discards passdown testvg/pool2
PASS: testvg/pool2 discards == passdown
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: testvg/pool2 lv_attr == twi---tz--
PASS: lvchange --discards ignore testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: testvg/pool2 lv_attr == twi-aotz--
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards nopassdown testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards ignore testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards passdown testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool2 lv_attr == twi-aotz--
PASS: lvchange --refresh testvg/pool2
PASS: lvchange --monitor n testvg/pool2
PASS: lvchange --monitor y testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: testvg/pool2 lv_read_ahead == auto
PASS: lvchange -r 256 testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 0
PASS: lvchange -r auto testvg/pool2
PASS: testvg/pool2 lv_read_ahead == auto
PASS: lvchange -Zn testvg/pool2
PASS: testvg/pool2 zero ==
PASS: lvchange -Z y testvg/pool2
PASS: testvg/pool2 zero == zero
PASS: lvchange -an testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi---tz--
PASS: ls /dev/testvg/lv1 [exited with error, as expected]
PASS: lvchange -a y testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: ls /dev/testvg/lv1
PASS: lvchange -pr testvg/lv1
PASS: testvg/lv1 lv_attr == Vri-a-tz--
PASS: lvchange -p rw testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: lvchange --refresh testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
PASS: lvchange -r 256 testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 0
PASS: lvchange -r auto testvg/lv1
PASS: testvg/lv1 lv_read_ahead == auto
PASS: lvchange -an testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi---tz--
PASS: ls /dev/testvg/lv2 [exited with error, as expected]
PASS: lvchange -a y testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
PASS: ls /dev/testvg/lv2
PASS: lvchange -pr testvg/lv2
PASS: testvg/lv2 lv_attr == Vri-a-tz--
PASS: lvchange -p rw testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
PASS: lvchange --refresh testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/lv2 PASS: testvg/lv2 lv_allocation_policy == inherit PASS: lvchange -r 256 testvg/lv2 PASS: testvg/lv2 lv_read_ahead == 128.00k PASS: lvchange -r none testvg/lv2 PASS: testvg/lv2 lv_read_ahead == 0 PASS: lvchange -r auto testvg/lv2 PASS: testvg/lv2 lv_read_ahead == auto PASS: lvcreate -s testvg/lv1 -n lv3 PASS: testvg/lv3 lv_attr == Vwi---tz-k PASS: ls /dev/testvg/lv3 [exited with error, as expected] PASS: lvchange -ay -K testvg/lv3 PASS: testvg/lv3 lv_attr == Vwi-a-tz-k PASS: ls /dev/testvg/lv3 PASS: lvchange -pr testvg/lv3 PASS: testvg/lv3 lv_attr == Vri-a-tz-k PASS: lvchange -p rw testvg/lv3 PASS: testvg/lv3 lv_attr == Vwi-a-tz-k PASS: lvchange --refresh testvg/lv3 PASS: testvg/lv3 lv_allocation_policy == inherit PASS: lvchange -Cy testvg/lv3 PASS: testvg/lv3 lv_allocation_policy == contiguous PASS: lvchange -Cn testvg/lv3 PASS: testvg/lv3 lv_allocation_policy == inherit PASS: lvchange -r 256 testvg/lv3 PASS: testvg/lv3 lv_read_ahead == 128.00k PASS: lvchange -r none testvg/lv3 PASS: testvg/lv3 lv_read_ahead == 0 PASS: lvchange -r auto testvg/lv3 PASS: testvg/lv3 lv_read_ahead == auto PASS: Search for error on the server ############################# Total tests that passed: 181 Total tests that failed: 0 Total tests that skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvconf-thinp.py' ============================================================================================================== INFO: [2022-09-28 03:14:34] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvconf-thinp.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:14:34] Running: 'cat /proc/sys/kernel/tainted'... 
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:34] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:34] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:34] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:34] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:14:34] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75969sec preferred_lft 75969sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591965sec preferred_lft 604765sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:14:34] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   67M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:14:34] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
INFO: vg_remove - testvg does not exist. Skipping...
INFO: [2022-09-28 03:14:34] Running: 'cp -f /etc/lvm/lvm.conf /etc/lvm/lvm.conf.copy'...
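The harness copies `/etc/lvm/lvm.conf` aside before the test and restores it with `mv -f` during teardown. The same save/restore pattern can be sketched generically; the function names below (`backup_conf`, `restore_conf`) are illustrative helpers, not part of the harness, and the example operates on an arbitrary file so it can run unprivileged:

```shell
# Sketch of the config save/restore pattern seen in the log:
#   cp -f FILE FILE.copy   before mutating the config
#   mv -f FILE.copy FILE   to restore it during cleanup
backup_conf() {   # backup_conf FILE -> snapshot FILE to FILE.copy
    cp -f "$1" "$1.copy"
}

restore_conf() {  # restore_conf FILE -> put the snapshot back (removes the copy)
    mv -f "$1.copy" "$1"
}
```

Using `mv -f` for the restore (as the log does) both reinstates the original content and removes the temporary copy in one step.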
################################################################################
INFO: Starting Thin Provisioning lvconf test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:14:34] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:14:34] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:14:34] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:35] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:35] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Volume group "testvg" successfully created
INFO: [2022-09-28 03:14:35] Running: 'lvcreate -l1 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l1 -T testvg/pool
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop2.img PVID none last seen on /dev/loop2 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop3.img PVID none last seen on /dev/loop3 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop2.img PVID none last seen on /dev/loop2 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop3.img PVID none last seen on /dev/loop3 not found.) does not match lvs output format
PASS: tmeta and tdata are in different devices
INFO: [2022-09-28 03:14:36] Running: 'lvcreate -i3 -l1 -T testvg/pool2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 12.00 MiB (3 extents).
Number of stripes (3) must not exceed number of physical volumes (2)
PASS: lvcreate -i3 -l1 -T testvg/pool2 [exited with error, as expected]
INFO: [2022-09-28 03:14:36] Running: 'lvcreate -l1 -T testvg/pool3'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool3" created.
PASS: lvcreate -l1 -T testvg/pool3
INFO: [2022-09-28 03:14:36] Running: 'lvcreate -i2 -l1 -T testvg/pool4'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Logical volume "pool4" created.
PASS: lvcreate -i2 -l1 -T testvg/pool4
PASS: thin_pool_autoextend_threshold == '100'
PASS: thin_pool_autoextend_percent == '20'
PASS: thin_pool_metadata_require_separate_pvs == '0'
INFO: [2022-09-28 03:14:37] Running: 'vgremove --force testvg'...
Logical volume "pool4" successfully removed.
Logical volume "pool3" successfully removed.
Logical volume "pool" successfully removed.
Volume group "testvg" successfully removed
INFO: [2022-09-28 03:14:39] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:14:40] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:14:40] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:14:40] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:14:41] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:14:41] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:41] Running: 'mv -f /etc/lvm/lvm.conf.copy /etc/lvm/lvm.conf'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:14:41] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:41] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:41] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:42] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:42] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:14:42] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l1 -T testvg/pool
PASS: tmeta and tdata are in different devices
PASS: lvcreate -i3 -l1 -T testvg/pool2 [exited with error, as expected]
PASS: lvcreate -l1 -T testvg/pool3
PASS: lvcreate -i2 -l1 -T testvg/pool4
PASS: thin_pool_autoextend_threshold == '100'
PASS: thin_pool_autoextend_percent == '20'
PASS: thin_pool_metadata_require_separate_pvs == '0'
PASS: Search for error on the server
#############################
Total tests that passed: 9
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvconvert-thinpool.py'
==============================================================================================================
INFO: [2022-09-28 03:14:43] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvconvert-thinpool.py'...
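Each test in this run provisions its physical volumes from loop devices backed by sparse files, and tears them down in the reverse order. A minimal sketch of that setup/teardown follows; the `run` wrapper is mine (it echoes commands and, by default, does not execute them, since `losetup`/`vgcreate` need root and free `/dev/loopN` nodes), and the image paths and VG name are simply the ones this log happens to use:

```shell
# Dry-run sketch of the loop-backed VG lifecycle seen in the log.
# DRYRUN=1 (default) only prints the commands; set DRYRUN=0 and run as
# root to execute them for real.
run() {
    echo "+ $*"
    [ "${DRYRUN:-1}" = "1" ] || "$@"
}

setup_vg() {    # setup_vg VGNAME SIZE_MB DEV...
    vg=$1; size=$2; shift 2
    i=0
    for dev in "$@"; do
        img=/var/tmp/loop$i.img
        run fallocate -l "${size}M" "$img"   # sparse backing file
        run losetup "$dev" "$img"            # attach as loop device
        i=$((i + 1))
    done
    run vgcreate --force "$vg" "$@"          # PVs created implicitly
}

teardown_vg() { # teardown_vg VGNAME DEV...
    vg=$1; shift
    run vgremove --force "$vg"
    i=0
    for dev in "$@"; do
        run pvremove "$dev"                  # wipe the PV label
        run losetup -d "$dev"                # detach the loop device
        run rm -f /var/tmp/loop$i.img        # delete the backing file
        i=$((i + 1))
    done
}
```

Calling `setup_vg testvg 128 /dev/loop0 /dev/loop1` prints the same `fallocate`/`losetup`/`vgcreate` sequence the harness logs above.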
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:14:43] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:43] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:43] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:43] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:43] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:14:43] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75960sec preferred_lft 75960sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591956sec preferred_lft 604756sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:14:43] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   67M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:14:43] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting Thin Pool Convert test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:14:44] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:14:44] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:14:44] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:44] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:14:44] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:14:44] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:14:44] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:14:44] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:14:44] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2022-09-28 03:14:44] Running: 'lvcreate -l20 -n testvg/pool'...
Logical volume "pool" created.
PASS: lvcreate -l20 -n testvg/pool
INFO: [2022-09-28 03:14:45] Running: 'lvconvert --thinpool testvg/pool -y'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Converted testvg/pool to thin pool.
WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi-a-tz--
INFO: [2022-09-28 03:14:45] Running: 'lvremove -ff testvg'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:14:46] Running: 'lvcreate --zero n -an -l20 -n testvg/pool'...
Logical volume "pool" created.
WARNING: Logical volume testvg/pool not zeroed.
PASS: lvcreate --zero n -an -l20 -n testvg/pool
INFO: [2022-09-28 03:14:46] Running: 'lvconvert --thinpool testvg/pool -y'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Converted testvg/pool to thin pool.
WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi---tz--
PASS: testvg/pool discards == passdown
INFO: [2022-09-28 03:14:47] Running: 'lvremove -ff testvg'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:14:47] Running: 'lvcreate -l20 -n testvg/pool'...
Logical volume "pool" created.
PASS: lvcreate -l20 -n testvg/pool
INFO: [2022-09-28 03:14:47] Running: 'lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y'...
Thin pool volume with chunk size 256.00 KiB can address at most 63.50 TiB of data.
Converted testvg/pool to thin pool.
WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
PASS: lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y
PASS: testvg/pool chunksize == 256.00k
PASS: testvg/pool discards == nopassdown
PASS: testvg/pool lv_metadata_size == 4.00m
PASS: testvg/pool lv_size == 80.00m
INFO: [2022-09-28 03:14:48] Running: 'lvremove -ff testvg'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:14:49] Running: 'lvcreate -l20 -n testvg/pool'...
Logical volume "pool" created.
PASS: lvcreate -l20 -n testvg/pool
INFO: [2022-09-28 03:14:49] Running: 'lvcreate -l10 -n testvg/metadata'...
Logical volume "metadata" created.
PASS: lvcreate -l10 -n testvg/metadata
INFO: [2022-09-28 03:14:50] Running: 'lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Converted testvg/pool and testvg/metadata to thin pool.
WARNING: Converting testvg/pool and testvg/metadata to thin pool's data and metadata volumes with metadata wiping.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
PASS: lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata
PASS: testvg/pool lv_size == 80.00m
PASS: testvg/pool lv_metadata_size == 40.00m
INFO: [2022-09-28 03:14:51] Running: 'lvremove -ff testvg'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:14:52] Running: 'vgremove --force testvg'...
Volume group "testvg" successfully removed
INFO: [2022-09-28 03:14:52] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:14:53] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:14:53] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:14:53] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:14:54] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:14:54] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:14:55] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:14:56] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:14:56] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2022-09-28 03:14:56] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:14:57] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:14:58] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:14:58] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:58] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:58] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:58] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:58] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:14:58] Running: 'modprobe -r dm_thin_pool'...
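The `lvconvert-thinpool.py` run above exercises three conversion variants, each a single `lvconvert` call. They can be sketched as the functions below (names are mine, not the harness's); the same dry-run guard as before is used, since the real commands destroy the LV's contents and require root:

```shell
# Dry-run sketches of the three lvconvert-to-thin-pool variants in the log.
run() {
    echo "+ $*"
    [ "${DRYRUN:-1}" = "1" ] || "$@"
}

# Variant 1: plain conversion; the LV becomes the pool's data volume,
# metadata is allocated automatically, and the old contents are wiped.
convert_default() {       # convert_default VG/LV
    run lvconvert --thinpool "$1" -y
}

# Variant 2: explicit geometry, as exercised above: 256 KiB chunks,
# zeroing on, no discard passdown, 4 MiB metadata, readahead 16.
convert_tuned() {         # convert_tuned VG/LV
    run lvconvert --thinpool "$1" -c 256 -Z y --discards nopassdown \
        --poolmetadatasize 4M -r 16 -y
}

# Variant 3: pair a data LV with a pre-created metadata LV; both are
# consumed by the pool (hence the 40.00m lv_metadata_size check above).
convert_with_metadata() { # convert_with_metadata VG/DATALV VG/METALV
    run lvconvert -y --thinpool "$1" --poolmetadata "$2"
}
```

Note the chunk-size trade-off visible in the log: 64 KiB chunks cap the pool at <15.88 TiB of addressable data, while 256 KiB chunks raise that to 63.50 TiB.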
################################ Test Summary ##################################
PASS: lvcreate -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi-a-tz--
PASS: lvremove -ff testvg
PASS: lvcreate --zero n -an -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi---tz--
PASS: testvg/pool discards == passdown
PASS: lvremove -ff testvg
PASS: lvcreate -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y
PASS: testvg/pool chunksize == 256.00k
PASS: testvg/pool discards == nopassdown
PASS: testvg/pool lv_metadata_size == 4.00m
PASS: testvg/pool lv_size == 80.00m
PASS: lvremove -ff testvg
PASS: lvcreate -l20 -n testvg/pool
PASS: lvcreate -l10 -n testvg/metadata
PASS: lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata
PASS: testvg/pool lv_size == 80.00m
PASS: testvg/pool lv_metadata_size == 40.00m
PASS: lvremove -ff testvg
PASS: Search for error on the server
#############################
Total tests that passed: 23
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvconvert-thin-lv.py'
==============================================================================================================
INFO: [2022-09-28 03:14:59] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvconvert-thin-lv.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:14:59] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:14:59] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:14:59] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:14:59] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:14:59] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:14:59] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75944sec preferred_lft 75944sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591940sec preferred_lft 604740sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:14:59] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   68M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:14:59] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting Thin Pool Convert test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:15:00] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:15:00] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:15:00] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:15:00] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:15:00] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:15:00] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:15:00] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:15:00] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:15:00] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2022-09-28 03:15:00] Running: 'lvcreate -l25 -n testvg/thin'...
Logical volume "thin" created.
PASS: lvcreate -l25 -n testvg/thin
INFO: [2022-09-28 03:15:00] Running: 'mkfs.ext4 -F /dev/mapper/testvg-thin'...
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: 0/102400 done
Creating filesystem with 102400 1k blocks and 25584 inodes
Filesystem UUID: 2172de3d-9149-49ac-bdef-b068233124d9
Superblock backups stored on blocks:
	8193, 24577, 40961, 57345, 73729
Allocating group tables: 0/13 done
Writing inode tables: 0/13 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/13 done
INFO: [2022-09-28 03:15:01] Running: 'mount /dev/mapper/testvg-thin /mnt/thin'...
INFO: [2022-09-28 03:15:01] Running: 'dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0603357 s, 86.9 MB/s
PASS: dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync
INFO: [2022-09-28 03:15:01] Running: 'md5sum /mnt/thin/5m > pre_md5'...
PASS: md5sum /mnt/thin/5m > pre_md5
INFO: [2022-09-28 03:15:01] Running: 'lvcreate -l50 -T -n testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l50 -T -n testvg/pool
test case:1
INFO: [2022-09-28 03:15:02] Running: 'lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y'...
Logical volume "thin_origin" created.
Converted testvg/thin to thin volume with external origin testvg/thin_origin.
PASS: lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y
1.1 checking that the md5 checksum is not changed
INFO: [2022-09-28 03:15:04] Running: 'md5sum /mnt/thin/5m > post_md5'...
PASS: md5sum /mnt/thin/5m > post_md5
INFO: [2022-09-28 03:15:04] Running: 'diff pre_md5 post_md5'...
PASS: diff pre_md5 post_md5
1.2 checking that the thin LV is converted
PASS: testvg/thin lv_size == 100.00m
PASS: testvg/thin pool_lv == pool
PASS: testvg/thin lv_attr == Vwi-aotz--
PASS: testvg/thin origin == thin_origin
1.3 checking that a readonly LV is created for the pre-existing data
PASS: testvg/thin_origin lv_attr == ori-------
1.4 checking that new data is stored in the pool
INFO: [2022-09-28 03:15:04] Running: 'dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync'...
10+0 records in
10+0 records out
10485760 bytes (10 MB, 10 MiB) copied, 0.120672 s, 86.9 MB/s
PASS: dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync
PASS: Data percentage increased correctly
1.5 checking that deleting the pre-existing data does not impact the origin
INFO: [2022-09-28 03:15:05] Running: 'rm -rf /mnt/thin/5m'...
PASS: rm -rf /mnt/thin/5m
INFO: [2022-09-28 03:15:05] Running: 'umount /mnt/thin'...
INFO: [2022-09-28 03:15:06] Running: 'lvremove -ff testvg/thin'...
Logical volume "thin" successfully removed.
PASS: lvremove -ff testvg/thin
INFO: [2022-09-28 03:15:06] Running: 'lvchange -ay testvg/thin_origin'...
PASS: lvchange -ay testvg/thin_origin
INFO: [2022-09-28 03:15:06] Running: 'mount /dev/mapper/testvg-thin_origin /mnt/thin'...
mount: /mnt/thin: WARNING: source write-protected, mounted read-only.
INFO: [2022-09-28 03:15:06] Running: 'md5sum /mnt/thin/5m > origin_md5'...
PASS: md5sum /mnt/thin/5m > origin_md5
INFO: [2022-09-28 03:15:06] Running: 'diff pre_md5 origin_md5'...
PASS: diff pre_md5 origin_md5
INFO: [2022-09-28 03:15:06] Running: 'umount /mnt/thin'...
INFO: [2022-09-28 03:15:07] Running: 'vgremove --force testvg'...
Logical volume "pool" successfully removed.
Logical volume "thin_origin" successfully removed.
Volume group "testvg" successfully removed
INFO: [2022-09-28 03:15:08] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:15:09] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:15:09] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:15:09] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:15:10] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:15:11] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:15:11] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:15:12] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:15:12] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2022-09-28 03:15:12] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:15:13] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:15:13] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:15:13] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:15:13] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:15:14] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:15:14] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:15:14] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:15:14] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -l25 -n testvg/thin PASS: dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync PASS: md5sum /mnt/thin/5m > pre_md5 PASS: lvcreate -l50 -T -n testvg/pool PASS: lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y PASS: md5sum /mnt/thin/5m > post_md5 PASS: diff pre_md5 post_md5 PASS: testvg/thin lv_size == 100.00m PASS: testvg/thin pool_lv == pool PASS: testvg/thin lv_attr == Vwi-aotz-- PASS: testvg/thin origin == thin_origin PASS: testvg/thin_origin lv_attr == ori------- PASS: dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync PASS: Data percentage increased correctly PASS: rm -rf /mnt/thin/5m PASS: lvremove -ff testvg/thin PASS: lvchange -ay testvg/thin_origin PASS: md5sum /mnt/thin/5m > origin_md5 PASS: diff pre_md5 origin_md5 PASS: Search for error on the server ############################# Total tests that passed: 20 Total tests that failed: 0 Total tests that skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvcreate-poolmetadataspare.py' ============================================================================================================== INFO: [2022-09-28 03:15:15] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvcreate-poolmetadataspare.py'... 
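The core operation of the test that just passed is converting an existing LV into a thin volume whose original data survives as a read-only external origin. A minimal dry-run sketch of that sequence, mirroring the commands in the log above (this is not the harness's own code; `testvg` and the LV names are taken from the log, and `DRY_RUN=1` prints the commands instead of executing them, so no root or loop devices are needed):

```shell
#!/bin/sh
# Sketch of the external-origin conversion flow exercised above.
# DRY_RUN=1 (the default here) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run lvcreate -l25 -n testvg/thin        # plain LV that already holds data
run lvcreate -l50 -T -n testvg/pool     # empty thin pool in the same VG
# Convert: the old LV's data becomes the read-only external origin
# "thin_origin"; new writes to "thin" land in the pool.
run lvconvert --thinpool testvg/pool --thin testvg/thin \
    --originname thin_origin -y
```

Set `DRY_RUN=0` and run as root against a scratch VG to execute it for real; the md5sum comparison in the log is what verifies the data is unchanged across the conversion.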
################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:15:15] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:15:15] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:15:15] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:15:15] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:15:15] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:15:15] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75928sec preferred_lft 75928sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591991sec preferred_lft 604791sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff ### File system disk space usage: ### INFO: [2022-09-28 03:15:15] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 68M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:15:15] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting Thinp Metadata Spare ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:15:16] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2022-09-28 03:15:16] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:15:16] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2022-09-28 03:15:16] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:15:16] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2022-09-28 03:15:16] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:15:16] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:16] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:16] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:15:16] Running: 'lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool0" created. 
WARNING: recovery of pools without pool metadata spare LV is not automated. PASS: lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n PASS: lvol0_pmspare does not exist INFO: [2022-09-28 03:15:17] Running: 'lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m PASS: testvg/lvol0_pmspare lv_size == 4.00m INFO: [2022-09-28 03:15:18] Running: 'lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m PASS: testvg/lvol0_pmspare lvsize == 8.00m INFO: [2022-09-28 03:15:18] Running: 'lvremove -ff testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. Logical volume "pool0" successfully removed. PASS: lvremove -ff testvg INFO: [2022-09-28 03:15:25] Running: 'lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. WARNING: recovery of pools without pool metadata spare LV is not automated. PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n PASS: lvol0_pmspare does not exist INFO: [2022-09-28 03:15:26] Running: 'lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y PASS: testvg/lvol0_pmspare lv_size == 4.00m INFO: [2022-09-28 03:15:27] Running: 'vgremove --force testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:15:28] Running: 'pvremove /dev/loop0'... 
Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:15:29] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:15:29] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:15:29] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:15:30] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:15:30] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2022-09-28 03:15:31] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:15:32] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:15:32] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:15:32] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:15:33] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:15:33] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:15:33] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:15:33] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:15:33] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:15:33] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:15:33] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. 
WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:15:33] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n PASS: lvol0_pmspare does not exist PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m PASS: testvg/lvol0_pmspare lv_size == 4.00m PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m PASS: testvg/lvol0_pmspare lvsize == 8.00m PASS: lvremove -ff testvg PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n PASS: lvol0_pmspare does not exist PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y PASS: testvg/lvol0_pmspare lv_size == 4.00m PASS: Search for error on the server ############################# Total tests that passed: 12 Total tests that failed: 0 Total tests that skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvcreate-mirror.py' ============================================================================================================== INFO: [2022-09-28 03:15:35] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvcreate-mirror.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:15:35] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:15:35] Running: 'cat /tmp/previous-tainted'... 
77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:15:35] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:15:35] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:15:35] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:15:35] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75908sec preferred_lft 75908sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591972sec preferred_lft 604772sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff 
### File system disk space usage: ### INFO: [2022-09-28 03:15:35] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 68M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:15:35] Running: 'rpm -q device-mapper-multipath'... package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting Thin Provisioning Mirror test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:15:35] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2022-09-28 03:15:35] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:15:35] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2022-09-28 03:15:35] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:15:35] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2022-09-28 03:15:35] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:15:35] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:35] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:35] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... 
Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:15:36] Running: 'lvcreate -L4M --thin testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. PASS: lvcreate -L4M --thin testvg/pool PASS: testvg/[pool_tdata] stripes == 1 FAIL:(libsan.host.linux) package_version() - Unsupported release: centos ERROR: Could not query lvm2 version INFO: [2022-09-28 03:15:37] Running: 'vgremove --force testvg'... Logical volume "pool" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:15:38] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:15:39] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:15:39] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:15:39] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:15:40] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:15:40] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2022-09-28 03:15:41] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:15:42] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:15:42] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:15:42] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:15:43] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:15:43] Running: 'rm -f /var/tmp/loop3.img'... 
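Every test in this run uses the same scratch-storage pattern: back four PVs with sparse files via loop devices, build `testvg` on them, and tear everything down in reverse order afterwards. A dry-run sketch of that pattern, assembled from the log's own commands (file paths and sizes match the log; `DRY_RUN=1` only prints the commands, so it is safe to run unprivileged):

```shell
#!/bin/sh
# Loop-device scratch-PV setup/teardown pattern used by these tests.
# DRY_RUN=1 (the default here) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

SIZE=128M   # the metadata-spare and mirror tests use 128M; lvextend uses 256M

# Setup: sparse backing file -> loop device, four times, then one VG.
for i in 0 1 2 3; do
    run fallocate -l "$SIZE" "/var/tmp/loop$i.img"
    run losetup "/dev/loop$i" "/var/tmp/loop$i.img"
done
run vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# ... test body runs here ...

# Teardown mirrors setup in reverse: VG, then PV labels, loop devices,
# and finally the backing files.
run vgremove --force testvg
for i in 0 1 2 3; do
    run pvremove "/dev/loop$i"
    run losetup -d "/dev/loop$i"
    run rm -f "/var/tmp/loop$i.img"
done
```

Keeping teardown strictly in reverse order matters: `losetup -d` fails while LVM still holds the device, which is why the log always shows `vgremove` and `pvremove` before the loop devices are detached.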
INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:15:43] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:15:43] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:15:43] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:15:43] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:15:43] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:15:43] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -L4M --thin testvg/pool PASS: testvg/[pool_tdata] stripes == 1 ERROR: Could not query lvm2 version PASS: Search for error on the server ############################# Total tests that passed: 3 Total tests that failed: 1 Total tests that skipped: 0 ################################################################################ FAIL: test failed ============================================================================================================== Running test 'lvm/thinp/lvextend-thinp.py' ============================================================================================================== INFO: [2022-09-28 03:15:45] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvextend-thinp.py'... 
################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:15:45] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:15:45] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:15:45] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:15:45] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:15:45] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:15:45] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75898sec preferred_lft 75898sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591961sec preferred_lft 604761sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff ### File system disk space usage: ### INFO: [2022-09-28 03:15:45] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 68M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:15:45] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting Thin Extend test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 256 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:15:45] Running: 'fallocate -l 256M /var/tmp/loop0.img'... INFO: [2022-09-28 03:15:45] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 256 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:15:45] Running: 'fallocate -l 256M /var/tmp/loop1.img'... INFO: [2022-09-28 03:15:45] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 256 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:15:45] Running: 'fallocate -l 256M /var/tmp/loop2.img'... INFO: [2022-09-28 03:15:45] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 256 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:15:45] Running: 'fallocate -l 256M /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:45] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:15:45] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:15:46] Running: 'lvcreate -l2 -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. 
PASS: lvcreate -l2 -T testvg/pool1 INFO: [2022-09-28 03:15:46] Running: 'lvcreate -i2 -l2 -T testvg/pool2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -i2 -l2 -T testvg/pool2 INFO: [2022-09-28 03:15:47] Running: 'lvextend -l+2 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 PASS: testvg/pool1 lv_size == 16.00m INFO: [2022-09-28 03:15:47] Running: 'lvextend -L+8 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L+8 -n testvg/pool1 PASS: testvg/pool1 lv_size == 24.00m INFO: [2022-09-28 03:15:48] Running: 'lvextend -L+8M -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L+8M -n testvg/pool1 PASS: testvg/pool1 lv_size == 32.00m INFO: [2022-09-28 03:15:49] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop3'... Size of logical volume testvg/pool1_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop3 PASS: testvg/pool1 lv_size == 40.00m INFO: [2022-09-28 03:15:49] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41'... Size of logical volume testvg/pool1_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41 PASS: testvg/pool1 lv_size == 48.00m INFO: [2022-09-28 03:15:50] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)''... 
testvg [pool1_tdata] /dev/loop2(40) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)' INFO: [2022-09-28 03:15:50] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37'... Size of logical volume testvg/pool1_tdata changed from 48.00 MiB (12 extents) to 56.00 MiB (14 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37 PASS: testvg/pool1 lv_size == 56.00m INFO: [2022-09-28 03:15:50] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)''... testvg [pool1_tdata] /dev/loop1(35) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)' INFO: [2022-09-28 03:15:50] Running: 'lvextend -l16 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 56.00 MiB (14 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l16 -n testvg/pool1 PASS: testvg/pool1 lv_size == 64.00m INFO: [2022-09-28 03:15:51] Running: 'lvextend -L72m -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L72m -n testvg/pool1 PASS: testvg/pool1 lv_size == 72.00m INFO: [2022-09-28 03:15:52] Running: 'lvextend -l+100%FREE --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 988.00 MiB (247 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+100%FREE --test testvg/pool1 INFO: [2022-09-28 03:15:52] Running: 'lvextend -l+10%PVS --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. 
PASS: lvextend -l+10%PVS --test testvg/pool1 INFO: [2022-09-28 03:15:52] Running: 'lvextend -l+10%VG -t testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+10%VG -t testvg/pool1 INFO: [2022-09-28 03:15:52] Running: 'lvextend -l+100%VG -t testvg/pool1'... TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 229 available PASS: lvextend -l+100%VG -t testvg/pool1 [exited with error, as expected] INFO: [2022-09-28 03:15:52] Running: 'lvextend -l+2 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -l+2 -n testvg/pool2 PASS: testvg/pool2 lv_size == 16.00m INFO: [2022-09-28 03:15:53] Running: 'lvextend -L+8 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -L+8 -n testvg/pool2 PASS: testvg/pool2 lv_size == 24.00m INFO: [2022-09-28 03:15:54] Running: 'lvextend -L+8M -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -L+8M -n testvg/pool2 PASS: testvg/pool2 lv_size == 32.00m INFO: [2022-09-28 03:15:54] Running: 'lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). 
Logical volume testvg/pool2 successfully resized.
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2
PASS: testvg/pool2 lv_size == 40.00m
INFO: [2022-09-28 03:15:55] Running: 'lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31'...
Using stripesize of last segment 64.00 KiB
Size of logical volume testvg/pool2_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents).
Logical volume testvg/pool2 successfully resized.
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31
PASS: testvg/pool2 lv_size == 48.00m
INFO: [2022-09-28 03:15:55] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)''...
testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20)
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)'
INFO: [2022-09-28 03:15:55] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)''...
testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20)
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)'
INFO: [2022-09-28 03:15:56] Running: 'lvextend -l16 -n testvg/pool2'...
Using stripesize of last segment 64.00 KiB
Size of logical volume testvg/pool2_tdata changed from 48.00 MiB (12 extents) to 64.00 MiB (16 extents).
Logical volume testvg/pool2 successfully resized.
PASS: lvextend -l16 -n testvg/pool2
PASS: testvg/pool2 lv_size == 64.00m
INFO: [2022-09-28 03:15:56] Running: 'lvextend -L72m -n testvg/pool2'...
Using stripesize of last segment 64.00 KiB
Size of logical volume testvg/pool2_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents).
Logical volume testvg/pool2 successfully resized.
PASS: lvextend -L72m -n testvg/pool2
PASS: testvg/pool2 lv_size == 72.00m
INFO: [2022-09-28 03:15:57] Running: 'lvextend -l+100%FREE --test testvg/pool2'...
Using stripesize of last segment 64.00 KiB
Rounding size (231 extents) down to stripe boundary size for segment (230 extents)
Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 896.00 MiB (224 extents).
Logical volume testvg/pool2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvextend -l+100%FREE --test testvg/pool2
INFO: [2022-09-28 03:15:57] Running: 'lvextend -l+10%PVS --test testvg/pool2'...
Using stripesize of last segment 64.00 KiB
Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents).
Logical volume testvg/pool2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvextend -l+10%PVS --test testvg/pool2
INFO: [2022-09-28 03:15:57] Running: 'lvextend -l+10%VG -t testvg/pool2'...
Using stripesize of last segment 64.00 KiB
Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents).
Logical volume testvg/pool2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvextend -l+10%VG -t testvg/pool2
INFO: [2022-09-28 03:15:57] Running: 'lvextend -l+100%VG -t testvg/pool2'...
Using stripesize of last segment 64.00 KiB
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Insufficient free space: 252 extents needed, but only 213 available
PASS: lvextend -l+100%VG -t testvg/pool2 [exited with error, as expected]
INFO: [2022-09-28 03:15:58] Running: 'lvremove -ff testvg'...
Logical volume "pool2" successfully removed.
Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:15:59] Running: 'lvcreate -l10 -V8m -T testvg/pool1 -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv1" created.
PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1
INFO: [2022-09-28 03:16:01] Running: 'lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv2" created.
PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2
INFO: [2022-09-28 03:16:02] Running: 'lvextend -l4 testvg/lv1'...
Size of logical volume testvg/lv1 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvextend -l4 testvg/lv1
PASS: testvg/lv1 lv_size == 16.00m
INFO: [2022-09-28 03:16:03] Running: 'lvextend -L24 -n testvg/lv1'...
Size of logical volume testvg/lv1 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvextend -L24 -n testvg/lv1
PASS: testvg/lv1 lv_size == 24.00m
INFO: [2022-09-28 03:16:03] Running: 'lvextend -l+100%FREE --test testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (948.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (916.00 MiB).
PASS: lvextend -l+100%FREE --test testvg/lv1
INFO: [2022-09-28 03:16:03] Running: 'lvextend -l+100%PVS --test testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (<1.02 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+100%PVS --test testvg/lv1
INFO: [2022-09-28 03:16:04] Running: 'lvextend -l+50%VG -t testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (536.00 MiB) exceeds the size of thin pools (80.00 MiB).
PASS: lvextend -l+50%VG -t testvg/lv1
INFO: [2022-09-28 03:16:04] Running: 'lvextend -l+120%VG -t testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (1.21 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+120%VG -t testvg/lv1
INFO: [2022-09-28 03:16:04] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'...
Discarding device blocks: 0/24576 done
Creating filesystem with 24576 1k blocks and 6144 inodes
Filesystem UUID: 78c5a432-db99-4490-8632-beebec347bda
Superblock backups stored on blocks:
	8193
Allocating group tables: 0/3 done
Writing inode tables: 0/3 done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: 0/3 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2022-09-28 03:16:04] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'...
INFO: [2022-09-28 03:16:04] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.060297 s, 87.0 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
INFO: [2022-09-28 03:16:04] Running: 'lvextend -l+2 -r testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents).
Logical volume testvg/lv1 successfully resized.
Filesystem at /dev/mapper/testvg-lv1 is mounted on /mnt/lv; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-lv1 is now 32768 (1k) blocks long.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -r testvg/lv1
PASS: testvg/lv1 lv_size == 32.00m
INFO: [2022-09-28 03:16:06] Running: 'lvcreate -K -s testvg/lv1 -n snap1'...
Logical volume "snap1" created.
PASS: lvcreate -K -s testvg/lv1 -n snap1
INFO: [2022-09-28 03:16:06] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'...
Discarding device blocks: 0/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: 43ca28ab-06f8-4e7d-a413-777fe818b574
Superblock backups stored on blocks:
	8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/4 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2022-09-28 03:16:07] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'...
INFO: [2022-09-28 03:16:07] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0604529 s, 86.7 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
INFO: [2022-09-28 03:16:07] Running: 'lvextend -l+2 -rf testvg/snap1'...
Size of logical volume testvg/snap1 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents).
Logical volume testvg/snap1 successfully resized.
Filesystem at /dev/mapper/testvg-snap1 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-snap1 is now 40960 (1k) blocks long.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
INFO: [2022-09-28 03:16:08] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   33M  5.1M   26M  17% /mnt/snap
INFO: [2022-09-28 03:16:08] Running: 'lvextend -L48 -rf testvg/snap1'...
Size of logical volume testvg/snap1 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/snap1 successfully resized.
Filesystem at /dev/mapper/testvg-snap1 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-snap1 is now 49152 (1k) blocks long.
WARNING: Sum of all thin volume sizes (88.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -L48 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
INFO: [2022-09-28 03:16:09] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   40M  5.1M   33M  14% /mnt/snap
INFO: [2022-09-28 03:16:09] Running: 'umount /mnt/lv'...
INFO: [2022-09-28 03:16:09] Running: 'umount /mnt/snap'...
INFO: [2022-09-28 03:16:09] Running: 'lvextend -l4 testvg/lv2'...
Size of logical volume testvg/lv2 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvextend -l4 testvg/lv2
PASS: testvg/lv2 lv_size == 16.00m
INFO: [2022-09-28 03:16:10] Running: 'lvextend -L24 -n testvg/lv2'...
Size of logical volume testvg/lv2 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvextend -L24 -n testvg/lv2
PASS: testvg/lv2 lv_size == 24.00m
INFO: [2022-09-28 03:16:10] Running: 'lvextend -l+100%FREE --test testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (1020.00 MiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+100%FREE --test testvg/lv2
INFO: [2022-09-28 03:16:10] Running: 'lvextend -l+100%PVS --test testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (<1.09 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+100%PVS --test testvg/lv2
INFO: [2022-09-28 03:16:11] Running: 'lvextend -l+50%VG -t testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (608.00 MiB) exceeds the size of thin pools (80.00 MiB).
PASS: lvextend -l+50%VG -t testvg/lv2
INFO: [2022-09-28 03:16:11] Running: 'lvextend -l+120%VG -t testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (<1.29 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+120%VG -t testvg/lv2
INFO: /mnt/lv already exists
INFO: [2022-09-28 03:16:11] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'...
Discarding device blocks: 0/24576 done
Creating filesystem with 24576 1k blocks and 6144 inodes
Filesystem UUID: 3670f839-474f-45d9-9ede-5cb54d9732ad
Superblock backups stored on blocks:
	8193
Allocating group tables: 0/3 done
Writing inode tables: 0/3 done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: 0/3 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2022-09-28 03:16:11] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'...
INFO: [2022-09-28 03:16:11] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0605218 s, 86.6 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
INFO: [2022-09-28 03:16:11] Running: 'lvextend -l+2 -r testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents).
Logical volume testvg/lv2 successfully resized.
Filesystem at /dev/mapper/testvg-lv2 is mounted on /mnt/lv; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-lv2 is now 32768 (1k) blocks long.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -r testvg/lv2
PASS: testvg/lv2 lv_size == 32.00m
INFO: [2022-09-28 03:16:13] Running: 'lvcreate -K -s testvg/lv2 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (144.00 MiB) exceeds the size of thin pools (80.00 MiB).
PASS: lvcreate -K -s testvg/lv2 -n snap2
INFO: /mnt/snap already exists
INFO: [2022-09-28 03:16:13] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'...
Discarding device blocks: 0/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: c56dd138-6dc4-4ccf-a1fa-5937dc384076
Superblock backups stored on blocks:
	8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/4 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2022-09-28 03:16:14] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'...
INFO: [2022-09-28 03:16:14] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0603857 s, 86.8 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
INFO: [2022-09-28 03:16:14] Running: 'lvextend -l+2 -rf testvg/snap2'...
Size of logical volume testvg/snap2 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/snap2 successfully resized.
Filesystem at /dev/mapper/testvg-snap2 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-snap2 is now 40960 (1k) blocks long.
WARNING: Sum of all thin volume sizes (152.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
INFO: [2022-09-28 03:16:15] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   33M  5.1M   26M  17% /mnt/snap
INFO: [2022-09-28 03:16:15] Running: 'lvextend -L48 -rf testvg/snap2'...
Size of logical volume testvg/snap2 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/snap2 successfully resized.
Filesystem at /dev/mapper/testvg-snap2 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/mapper/testvg-snap2 is now 49152 (1k) blocks long.
WARNING: Sum of all thin volume sizes (160.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -L48 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
INFO: [2022-09-28 03:16:16] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   40M  5.1M   33M  14% /mnt/snap
INFO: [2022-09-28 03:16:16] Running: 'umount /mnt/lv'...
INFO: [2022-09-28 03:16:16] Running: 'umount /mnt/snap'...
INFO: [2022-09-28 03:16:17] Running: 'vgremove --force testvg'...
Logical volume "lv2" successfully removed.
Logical volume "snap2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "snap1" successfully removed.
Logical volume "pool1" successfully removed.
Volume group "testvg" successfully removed
INFO: [2022-09-28 03:16:19] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:16:20] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:16:20] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:16:20] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:16:21] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:16:21] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:16:21] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:16:23] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:16:23] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2022-09-28 03:16:23] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:16:24] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:16:24] Running: 'rm -f /var/tmp/loop3.img'...
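The percent-based lvextend runs above operate on whole physical extents: the log reports a 252-extent VG with 229 extents free, and '+10%VG' grew the 18-extent pool from 72.00 MiB to 176.00 MiB (44 extents). A minimal sketch of that arithmetic, assuming the 4 MiB extent size implied by the log (8.00 MiB == 2 extents); the helper names are illustrative, not part of LVM:

```python
import math

EXTENT_MIB = 4  # VG extent size implied by the log (8.00 MiB == 2 extents)

def extents(size_mib: float) -> int:
    """Convert a size in MiB to a count of physical extents (rounded up)."""
    return math.ceil(size_mib / EXTENT_MIB)

def extend_by_percent(current: int, base: int, pct: float) -> int:
    """New extent count after 'lvextend -l+<pct>%<base>'; the percentage is
    rounded up to whole extents, matching the sizes seen in the log."""
    return current + math.ceil(base * pct / 100)

vg_extents = 252    # from 'Insufficient free space: 252 extents needed'
free_extents = 229  # from '... but only 229 available'
pool = extents(72)  # pool1 at 72.00 MiB == 18 extents

print(extend_by_percent(pool, vg_extents, 10))    # +10%VG   -> 44 extents (176 MiB)
print(extend_by_percent(pool, free_extents, 100)) # +100%FREE -> 247 extents (988 MiB)
```

This also explains why '+100%VG' fails as expected: adding 252 extents would need more than the 229 free.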
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:16:24] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:16:24] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel taint has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:16:24] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:16:24] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:16:24] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
Module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:16:24] Running: 'modprobe -r dm_snapshot'...
Module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:16:24] Running: 'modprobe -r dm_thin_pool'...
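The taint value 77824 read from /proc/sys/kernel/tainted above is a bitmask; a small decoding sketch, assuming the standard bit assignments from the kernel's tainted-kernels documentation (only the three bits present in this value are spelled out, and the decoder itself is illustrative):

```python
# Bit meanings per Documentation/admin-guide/tainted-kernels.rst (subset).
TAINT_BITS = {
    12: "O: out-of-tree module has been loaded",
    13: "E: unsigned module has been loaded",
    16: "X: auxiliary taint (distro-defined)",
}

def decode_taint(value: int) -> list:
    """Return the taint bit positions set in the sysctl value."""
    return [bit for bit in range(32) if value & (1 << bit)]

for bit in decode_taint(77824):  # 77824 == 0x13000 -> bits 12, 13, 16
    print(bit, TAINT_BITS.get(bit, "(other)"))
```

A taint from out-of-tree/unsigned modules is consistent with the custom mr1370 test kernel this run boots, which is why the harness treats it as already handled.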
################################ Test Summary ##################################
PASS: lvcreate -l2 -T testvg/pool1
PASS: lvcreate -i2 -l2 -T testvg/pool2
PASS: lvextend -l+2 -n testvg/pool1
PASS: testvg/pool1 lv_size == 16.00m
PASS: lvextend -L+8 -n testvg/pool1
PASS: testvg/pool1 lv_size == 24.00m
PASS: lvextend -L+8M -n testvg/pool1
PASS: testvg/pool1 lv_size == 32.00m
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop3
PASS: testvg/pool1 lv_size == 40.00m
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41
PASS: testvg/pool1 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)'
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37
PASS: testvg/pool1 lv_size == 56.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)'
PASS: lvextend -l16 -n testvg/pool1
PASS: testvg/pool1 lv_size == 64.00m
PASS: lvextend -L72m -n testvg/pool1
PASS: testvg/pool1 lv_size == 72.00m
PASS: lvextend -l+100%FREE --test testvg/pool1
PASS: lvextend -l+10%PVS --test testvg/pool1
PASS: lvextend -l+10%VG -t testvg/pool1
PASS: lvextend -l+100%VG -t testvg/pool1 [exited with error, as expected]
PASS: lvextend -l+2 -n testvg/pool2
PASS: testvg/pool2 lv_size == 16.00m
PASS: lvextend -L+8 -n testvg/pool2
PASS: testvg/pool2 lv_size == 24.00m
PASS: lvextend -L+8M -n testvg/pool2
PASS: testvg/pool2 lv_size == 32.00m
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2
PASS: testvg/pool2 lv_size == 40.00m
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31
PASS: testvg/pool2 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)'
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)'
PASS: lvextend -l16 -n testvg/pool2
PASS: testvg/pool2 lv_size == 64.00m
PASS: lvextend -L72m -n testvg/pool2
PASS: testvg/pool2 lv_size == 72.00m
PASS: lvextend -l+100%FREE --test testvg/pool2
PASS: lvextend -l+10%PVS --test testvg/pool2
PASS: lvextend -l+10%VG -t testvg/pool2
PASS: lvextend -l+100%VG -t testvg/pool2 [exited with error, as expected]
PASS: lvremove -ff testvg
PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2
PASS: lvextend -l4 testvg/lv1
PASS: testvg/lv1 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv1
PASS: testvg/lv1 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv1
PASS: lvextend -l+100%PVS --test testvg/lv1
PASS: lvextend -l+50%VG -t testvg/lv1
PASS: lvextend -l+120%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv1
PASS: testvg/lv1 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: lvextend -l4 testvg/lv2
PASS: testvg/lv2 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv2
PASS: testvg/lv2 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv2
PASS: lvextend -l+100%PVS --test testvg/lv2
PASS: lvextend -l+50%VG -t testvg/lv2
PASS: lvextend -l+120%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv2
PASS: testvg/lv2 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: Search for error on the server
#############################
Total tests that passed: 82
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvreduce-thinp.py'
==============================================================================================================
INFO: [2022-09-28 03:16:26] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvreduce-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:16:26] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:16:26] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel taint has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:16:26] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages have been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:16:26] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:16:26] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version:
Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:16:26] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75857sec preferred_lft 75857sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591996sec preferred_lft 604796sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:16:26] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   69M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:16:26] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting Thin Reduce test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 256
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:16:27] Running: 'fallocate -l 256M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:16:27] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 256
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:16:27] Running: 'fallocate -l 256M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:16:27] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 256
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:16:27] Running: 'fallocate -l 256M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:16:27] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 256
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:16:27] Running: 'fallocate -l 256M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:16:27] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:16:27] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2022-09-28 03:16:27] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv1" created.
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
INFO: [2022-09-28 03:16:29] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
Logical volume "lv2" created.
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
INFO: [2022-09-28 03:16:30] Running: 'lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1'...
PASS: lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1 [exited with error, as expected]
INFO: [2022-09-28 03:16:31] Running: 'grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err'...
Thin pool volumes testvg/pool1_tdata cannot be reduced in size yet.
PASS: grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err
INFO: [2022-09-28 03:16:31] Running: 'lvremove -ff testvg'...
Logical volume "lv2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
INFO: [2022-09-28 03:16:33] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv1" created.
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
INFO: [2022-09-28 03:16:34] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
Logical volume "lv2" created.
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
INFO: [2022-09-28 03:16:36] Running: 'lvreduce -f -l-2 testvg/lv1'...
Size of logical volume testvg/lv1 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents).
Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 92.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 92.00m INFO: [2022-09-28 03:16:36] Running: 'lvreduce -f -L-8 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 84.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L-8 -n testvg/lv1 PASS: testvg/lv1 lv_size == 84.00m INFO: [2022-09-28 03:16:37] Running: 'lvreduce -f -L-8m -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 76.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L-8m -n testvg/lv1 PASS: testvg/lv1 lv_size == 76.00m INFO: [2022-09-28 03:16:37] Running: 'lvreduce -f -l18 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 72.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l18 -n testvg/lv1 PASS: testvg/lv1 lv_size == 72.00m INFO: [2022-09-28 03:16:38] Running: 'lvreduce -f -L64m -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 64.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L64m -n testvg/lv1 PASS: testvg/lv1 lv_size == 64.00m INFO: [2022-09-28 03:16:38] Running: 'lvreduce -f -l-1%FREE --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents). 
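Each `lvreduce` form exercised above maps to a new size the same way, again assuming 4 MiB extents: `-l` counts extents (negative means relative), `-L` counts size (MiB is the default unit, so `-L-8` and `-L-8m` are equivalent), and unsigned values are absolute targets.

```shell
#!/bin/sh
# How each lvreduce invocation above maps lv1's size, in MiB,
# assuming the 4 MiB extent size shown in the log.
extent_mib=4
size=100
size=$(( size - 2 * extent_mib ))   # -l-2  : minus two extents      -> 92
size=$(( size - 8 ))                # -L-8  : minus 8 (MiB default)  -> 84
size=$(( size - 8 ))                # -L-8m : minus 8 MiB            -> 76
size=$(( 18 * extent_mib ))         # -l18  : absolute 18 extents    -> 72
size=64                             # -L64m : absolute 64 MiB
echo "$size"
```

The `lv_size ==` assertions in the summary check exactly this sequence.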
Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 4.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%FREE --test testvg/lv1 INFO: [2022-09-28 03:16:38] Running: 'lvreduce -f -l-1%PVS --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%PVS --test testvg/lv1 INFO: [2022-09-28 03:16:38] Running: 'lvreduce -f -l-1%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%VG -t testvg/lv1 INFO: [2022-09-28 03:16:38] Running: 'lvreduce -f -l-1%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%VG -t testvg/lv1 INFO: /mnt/lv already exist INFO: [2022-09-28 03:16:39] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'... 
Discarding device blocks: 0/65536 done Creating filesystem with 65536 1k blocks and 16384 inodes Filesystem UUID: 2a2d9fb5-96ab-4624-a6f9-0dafee232aef Superblock backups stored on blocks: 8193, 24577, 40961, 57345 Allocating group tables: 0/8 done Writing inode tables: 0/8 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/8 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:16:39] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'... INFO: [2022-09-28 03:16:39] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0615614 s, 85.2 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 INFO: [2022-09-28 03:16:39] Running: 'yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1'... Do you want to unmount "/mnt/lv" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-lv1: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks Resizing the filesystem on /dev/mapper/testvg-lv1 to 57344 (1k) blocks. The filesystem on /dev/mapper/testvg-lv1 is now 57344 (1k) blocks long. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents). Logical volume testvg/lv1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 56.00m INFO: [2022-09-28 03:16:40] Running: 'lvcreate -K -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (212.00 MiB) exceeds the size of thin pools (204.00 MiB). PASS: lvcreate -K -s testvg/lv1 -n snap1 INFO: /mnt/snap already exist INFO: [2022-09-28 03:16:41] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'... 
Discarding device blocks: 0/57344 done Creating filesystem with 57344 1k blocks and 14336 inodes Filesystem UUID: 808f2c8c-c0f5-45ac-be1e-4a8f29641f8a Superblock backups stored on blocks: 8193, 24577, 40961 Allocating group tables: 0/7 done Writing inode tables: 0/7 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/7 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:16:42] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'... INFO: [2022-09-28 03:16:42] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0603137 s, 86.9 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 INFO: [2022-09-28 03:16:42] Running: 'yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap1: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks Resizing the filesystem on /dev/mapper/testvg-snap1 to 49152 (1k) blocks. The filesystem on /dev/mapper/testvg-snap1 is now 49152 (1k) blocks long. Size of logical volume testvg/snap1 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents). Logical volume testvg/snap1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m INFO: [2022-09-28 03:16:43] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 40M 5.1M 32M 14% /mnt/snap INFO: [2022-09-28 03:16:43] Running: 'yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap1: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks Resizing the filesystem on /dev/mapper/testvg-snap1 to 40960 (1k) blocks. The filesystem on /dev/mapper/testvg-snap1 is now 40960 (1k) blocks long. 
Size of logical volume testvg/snap1 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m INFO: [2022-09-28 03:16:44] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 33M 5.1M 25M 17% /mnt/snap INFO: [2022-09-28 03:16:44] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:16:44] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:16:45] Running: 'lvreduce -f -l-2 testvg/lv2'... Size of logical volume testvg/lv2 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 92.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 92.00m INFO: [2022-09-28 03:16:45] Running: 'lvreduce -f -L-8 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 84.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L-8 -n testvg/lv2 PASS: testvg/lv2 lv_size == 84.00m INFO: [2022-09-28 03:16:46] Running: 'lvreduce -f -L-8m -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 76.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L-8m -n testvg/lv2 PASS: testvg/lv2 lv_size == 76.00m INFO: [2022-09-28 03:16:46] Running: 'lvreduce -f -l18 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 72.00 MiB. 
THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l18 -n testvg/lv2 PASS: testvg/lv2 lv_size == 72.00m INFO: [2022-09-28 03:16:47] Running: 'lvreduce -f -L64m -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 64.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -L64m -n testvg/lv2 PASS: testvg/lv2 lv_size == 64.00m INFO: [2022-09-28 03:16:47] Running: 'lvreduce -f -l-1%FREE --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 4.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%FREE --test testvg/lv2 INFO: [2022-09-28 03:16:48] Running: 'lvreduce -f -l-1%PVS --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%PVS --test testvg/lv2 INFO: [2022-09-28 03:16:48] Running: 'lvreduce -f -l-1%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%VG -t testvg/lv2 INFO: [2022-09-28 03:16:48] Running: 'lvreduce -f -l-1%VG -t testvg/lv2'... 
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvreduce -f -l-1%VG -t testvg/lv2 INFO: /mnt/lv already exist INFO: [2022-09-28 03:16:48] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'... Discarding device blocks: 0/65536 done Creating filesystem with 65536 1k blocks and 16384 inodes Filesystem UUID: 0ca73a66-6dd3-41d3-93fa-8e67ea2fab5f Superblock backups stored on blocks: 8193, 24577, 40961, 57345 Allocating group tables: 0/8 done Writing inode tables: 0/8 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/8 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:16:49] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'... INFO: [2022-09-28 03:16:49] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0605588 s, 86.6 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 INFO: [2022-09-28 03:16:49] Running: 'yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2'... Do you want to unmount "/mnt/lv" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-lv2: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks Resizing the filesystem on /dev/mapper/testvg-lv2 to 57344 (1k) blocks. The filesystem on /dev/mapper/testvg-lv2 is now 57344 (1k) blocks long. Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents). Logical volume testvg/lv2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 56.00m INFO: [2022-09-28 03:16:50] Running: 'lvcreate -K -s testvg/lv2 -n snap2'... 
WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (208.00 MiB) exceeds the size of thin pools (204.00 MiB). PASS: lvcreate -K -s testvg/lv2 -n snap2 INFO: /mnt/snap already exist INFO: [2022-09-28 03:16:51] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'... Discarding device blocks: 0/57344 done Creating filesystem with 57344 1k blocks and 14336 inodes Filesystem UUID: cfe82ef3-7aa9-43a5-888d-bd9308902a66 Superblock backups stored on blocks: 8193, 24577, 40961 Allocating group tables: 0/7 done Writing inode tables: 0/7 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/7 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:16:51] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'... INFO: [2022-09-28 03:16:51] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0605188 s, 86.6 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 INFO: [2022-09-28 03:16:51] Running: 'yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap2: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks Resizing the filesystem on /dev/mapper/testvg-snap2 to 49152 (1k) blocks. The filesystem on /dev/mapper/testvg-snap2 is now 49152 (1k) blocks long. Size of logical volume testvg/snap2 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents). Logical volume testvg/snap2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 48.00m INFO: [2022-09-28 03:16:52] Running: 'df -h /mnt/snap'... 
Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 40M 5.1M 32M 14% /mnt/snap INFO: [2022-09-28 03:16:52] Running: 'yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap2: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks Resizing the filesystem on /dev/mapper/testvg-snap2 to 40960 (1k) blocks. The filesystem on /dev/mapper/testvg-snap2 is now 40960 (1k) blocks long. Size of logical volume testvg/snap2 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m INFO: [2022-09-28 03:16:54] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 33M 5.1M 25M 17% /mnt/snap INFO: [2022-09-28 03:16:54] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:16:54] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:16:54] Running: 'vgremove --force testvg'... Logical volume "lv2" successfully removed. Logical volume "snap2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "snap1" successfully removed. Logical volume "pool1" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:16:56] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:16:58] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:16:58] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:16:58] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:16:59] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:16:59] Running: 'rm -f /var/tmp/loop1.img'... 
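The filesystem-aware reduces above (`lvreduce -rf`) shrink the ext4 filesystem via resize2fs before LVM shrinks the LV; the block counts resize2fs reports are simply the new LV size expressed in the filesystem's 1 KiB blocks. A quick check of that arithmetic for the 64 MiB → 56 MiB step:

```shell
#!/bin/sh
# 'lvreduce -rf -l-2' on a 64 MiB LV: two 4 MiB extents come off,
# and resize2fs shrinks the ext4 fs to match, counted in 1 KiB blocks.
extent_mib=4
new_mib=$(( 64 - 2 * extent_mib ))   # 56 MiB, as the log reports
echo "$new_mib"
echo $(( new_mib * 1024 ))           # 57344 one-KiB blocks for resize2fs
```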
INFO: [2022-09-28 03:16:59] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:17:00] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:17:00] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:17:01] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:17:02] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:17:02] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:17:02] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:17:02] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:17:02] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:17:02] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:17:02] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2022-09-28 03:17:02] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:17:02] Running: 'modprobe -r dm_thin_pool'... 
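The taint value 77824 that the checker keeps reading from `/proc/sys/kernel/tainted` is a bitmask and decomposes as 2^12 + 2^13 + 2^16. A small decoder (the flag names in the comment follow the kernel's tainted-kernels documentation and are an assumption here, not something this log states):

```shell
#!/bin/sh
# Decode a /proc/sys/kernel/tainted value into its set bit positions.
# 77824 = 2^12 + 2^13 + 2^16; per the kernel's tainted-kernels docs these
# are O (out-of-tree module), E (unsigned module) and X (auxiliary taint).
taint=77824
bit=0
while [ "$taint" -gt 0 ]; do
    if [ $(( taint & 1 )) -eq 1 ]; then
        echo "taint bit $bit set"
    fi
    taint=$(( taint >> 1 ))
    bit=$(( bit + 1 ))
done
```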
################################ Test Summary ##################################
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
PASS: lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1 [exited with error, as expected]
PASS: grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err
PASS: lvremove -ff testvg
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
PASS: lvreduce -f -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 92.00m
PASS: lvreduce -f -L-8 -n testvg/lv1
PASS: testvg/lv1 lv_size == 84.00m
PASS: lvreduce -f -L-8m -n testvg/lv1
PASS: testvg/lv1 lv_size == 76.00m
PASS: lvreduce -f -l18 -n testvg/lv1
PASS: testvg/lv1 lv_size == 72.00m
PASS: lvreduce -f -L64m -n testvg/lv1
PASS: testvg/lv1 lv_size == 64.00m
PASS: lvreduce -f -l-1%FREE --test testvg/lv1
PASS: lvreduce -f -l-1%PVS --test testvg/lv1
PASS: lvreduce -f -l-1%VG -t testvg/lv1
PASS: lvreduce -f -l-1%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvreduce -f -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 92.00m
PASS: lvreduce -f -L-8 -n testvg/lv2
PASS: testvg/lv2 lv_size == 84.00m
PASS: lvreduce -f -L-8m -n testvg/lv2
PASS: testvg/lv2 lv_size == 76.00m
PASS: lvreduce -f -l18 -n testvg/lv2
PASS: testvg/lv2 lv_size == 72.00m
PASS: lvreduce -f -L64m -n testvg/lv2
PASS: testvg/lv2 lv_size == 64.00m
PASS: lvreduce -f -l-1%FREE --test testvg/lv2
PASS: lvreduce -f -l-1%PVS --test testvg/lv2
PASS: lvreduce -f -l-1%VG -t testvg/lv2
PASS: lvreduce -f -l-1%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: Search for error on the server
#############################
Total tests that passed: 54
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvremove-thinp.py'
==============================================================================================================
INFO: [2022-09-28 03:17:03] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvremove-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:17:04] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:17:04] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:17:04] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:17:04] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:17:04] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:17:04] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75819sec preferred_lft 75819sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591958sec preferred_lft 604758sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff ### File system disk space usage: ### INFO: [2022-09-28 03:17:04] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 69M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:17:04] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting Thin Remove test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 256 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:17:04] Running: 'fallocate -l 256M /var/tmp/loop0.img'... INFO: [2022-09-28 03:17:04] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 256 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:17:04] Running: 'fallocate -l 256M /var/tmp/loop1.img'... INFO: [2022-09-28 03:17:04] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 256 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:17:04] Running: 'fallocate -l 256M /var/tmp/loop2.img'... INFO: [2022-09-28 03:17:04] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 256 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:17:04] Running: 'fallocate -l 256M /var/tmp/loop3.img'... INFO: [2022-09-28 03:17:04] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:17:04] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:17:05] Running: 'lvcreate -l20 -T testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. 
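Here the pool is sized in extents rather than bytes: with the 4 MiB extent size this log implies, `lvcreate -l20 -T testvg/pool` makes an 80 MiB pool, which is the figure in the over-provisioning warnings that follow (each 100 MiB thin LV alone exceeds the 80 MiB pool).

```shell
#!/bin/sh
# 'lvcreate -l20 -T testvg/pool': 20 extents x 4 MiB = 80 MiB pool,
# the size quoted in the "Sum of all thin volume sizes" warnings below.
extent_mib=4
echo $(( 20 * extent_mib ))
```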
PASS: lvcreate -l20 -T testvg/pool INFO: [2022-09-28 03:17:05] Running: 'yes 2>/dev/null | lvremove testvg/pool'... Logical volume "pool" successfully removed. Do you really want to remove active logical volume testvg/pool? [y/n]: PASS: yes 2>/dev/null | lvremove testvg/pool INFO: [2022-09-28 03:17:06] Running: 'lvcreate -l20 -T testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. PASS: lvcreate -l20 -T testvg/pool INFO: [2022-09-28 03:17:07] Running: 'lvremove -f testvg/pool'... Logical volume "pool" successfully removed. PASS: lvremove -f testvg/pool INFO: [2022-09-28 03:17:08] Running: 'lvcreate -l20 -T testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. PASS: lvcreate -l20 -T testvg/pool INFO: [2022-09-28 03:17:08] Running: 'lvremove -ff testvg/pool'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg/pool INFO: [2022-09-28 03:17:09] Running: 'lvcreate -l20 -T testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. PASS: lvcreate -l20 -T testvg/pool INFO: [2022-09-28 03:17:10] Running: 'lvremove -ff testvg'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg INFO: [2022-09-28 03:17:10] Running: 'lvcreate -l20 -V 100m -T testvg/pool -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -l20 -V 100m -T testvg/pool -n lv1 INFO: [2022-09-28 03:17:12] Running: 'lvremove -ff testvg/lv1'... 
Logical volume "lv1" successfully removed. PASS: lvremove -ff testvg/lv1 INFO: [2022-09-28 03:17:13] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv1 INFO: [2022-09-28 03:17:13] Running: 'lvcreate -V 100m -T testvg/pool -n lv2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv2 INFO: [2022-09-28 03:17:14] Running: 'lvcreate -V 100m -T testvg/pool -n lv3'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv3" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv3 INFO: [2022-09-28 03:17:14] Running: 'lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3'... Logical volume "lv1" successfully removed. Logical volume "lv2" successfully removed. Logical volume "lv3" successfully removed. PASS: lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3 PASS: testvg/pool data_percent == 0.00 INFO: [2022-09-28 03:17:15] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'... WARNING: You have not turned on protection against thin pools running out of space. 
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv1 INFO: [2022-09-28 03:17:16] Running: 'lvcreate -V 100m -T testvg/pool -n lv2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv2 INFO: [2022-09-28 03:17:16] Running: 'lvcreate -V 100m -T testvg/pool -n lv3'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv3" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -V 100m -T testvg/pool -n lv3 INFO: [2022-09-28 03:17:17] Running: 'lvremove -ff /dev/testvg/lv[1-3]'... Logical volume "lv1" successfully removed. Logical volume "lv2" successfully removed. Logical volume "lv3" successfully removed. PASS: lvremove -ff /dev/testvg/lv[1-3] INFO: [2022-09-28 03:17:18] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). 
PASS: lvcreate -V 100m -T testvg/pool -n lv1 INFO: [2022-09-28 03:17:18] Running: 'lvcreate -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -s testvg/lv1 -n snap1 INFO: [2022-09-28 03:17:19] Running: 'lvcreate -s testvg/snap1 -n snap2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB). PASS: lvcreate -s testvg/snap1 -n snap2 INFO: [2022-09-28 03:17:19] Running: 'lvremove -ff testvg/snap1 testvg/snap2'... Logical volume "snap1" successfully removed. Logical volume "snap2" successfully removed. PASS: lvremove -ff testvg/snap1 testvg/snap2 INFO: [2022-09-28 03:17:20] Running: 'vgremove --force testvg'... Logical volume "lv1" successfully removed. Logical volume "pool" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:17:21] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:17:22] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:17:22] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:17:22] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:17:23] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:17:23] Running: 'rm -f /var/tmp/loop1.img'... 
INFO: [2022-09-28 03:17:24] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:17:25] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:17:25] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:17:25] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:17:26] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:17:26] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:17:27] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:17:27] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:17:27] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:17:27] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:17:27] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2022-09-28 03:17:27] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:17:27] Running: 'modprobe -r dm_thin_pool'... 
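The `testvg/pool data_percent == 0.00` assertion earlier checks that, once every thin volume is removed, the pool's used-data percentage falls back to zero, since removing a thin LV releases its mappings in the pool. A minimal sketch of that kind of field comparison, run here against canned `lvs` output rather than a live pool (the sample value mirrors the log; this script is not part of the test suite):

```shell
#!/bin/sh
# Simulated output of: lvs --noheadings -o data_percent testvg/pool
# (on a live system, capture it with command substitution instead).
lvs_output="  0.00"

# lvs left-pads report fields, so strip whitespace before comparing.
data_percent=$(printf '%s' "$lvs_output" | tr -d '[:space:]')

if [ "$data_percent" = "0.00" ]; then
    echo "PASS: testvg/pool data_percent == 0.00"
else
    echo "FAIL: data_percent is $data_percent"
    exit 1
fi
```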
################################ Test Summary ##################################
PASS: lvcreate -l20 -T testvg/pool
PASS: yes 2>/dev/null | lvremove testvg/pool
PASS: lvcreate -l20 -T testvg/pool
PASS: lvremove -f testvg/pool
PASS: lvcreate -l20 -T testvg/pool
PASS: lvremove -ff testvg/pool
PASS: lvcreate -l20 -T testvg/pool
PASS: lvremove -ff testvg
PASS: lvcreate -l20 -V 100m -T testvg/pool -n lv1
PASS: lvremove -ff testvg/lv1
PASS: lvcreate -V 100m -T testvg/pool -n lv1
PASS: lvcreate -V 100m -T testvg/pool -n lv2
PASS: lvcreate -V 100m -T testvg/pool -n lv3
PASS: lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3
PASS: testvg/pool data_percent == 0.00
PASS: lvcreate -V 100m -T testvg/pool -n lv1
PASS: lvcreate -V 100m -T testvg/pool -n lv2
PASS: lvcreate -V 100m -T testvg/pool -n lv3
PASS: lvremove -ff /dev/testvg/lv[1-3]
PASS: lvcreate -V 100m -T testvg/pool -n lv1
PASS: lvcreate -s testvg/lv1 -n snap1
PASS: lvcreate -s testvg/snap1 -n snap2
PASS: lvremove -ff testvg/snap1 testvg/snap2
PASS: Search for error on the server
#############################
Total tests that passed: 24
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvrename-thinp.py'
==============================================================================================================
INFO: [2022-09-28 03:17:28] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvrename-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:17:28] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:17:28] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:17:28] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:17:29] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:17:29] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:17:29] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75795sec preferred_lft 75795sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591933sec preferred_lft 604733sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:17:29] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   69M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:17:29] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting Thin Rename test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:17:29] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:17:29] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:17:29] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:17:29] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:17:29] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:17:29] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:17:29] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:17:29] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:17:29] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:17:29] Running: 'lvcreate -l20 -V100M -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool1 (80.00 MiB). PASS: lvcreate -l20 -V100M -T testvg/pool1 -n lv1 INFO: [2022-09-28 03:17:31] Running: 'lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pools (160.00 MiB). PASS: lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2 INFO: [2022-09-28 03:17:32] Running: 'lvs -a testvg'... 
LV              VG     Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
lv1             testvg Vwi-a-tz-- 100.00m pool1        0.00
lv2             testvg Vwi-a-tz-- 100.00m pool2        0.00
[lvol0_pmspare] testvg ewi-------   4.00m
pool1           testvg twi-aotz--  80.00m              0.00   10.94
[pool1_tdata]   testvg Twi-ao----  80.00m
[pool1_tmeta]   testvg ewi-ao----   4.00m
pool2           testvg twi-aotz--  80.00m              0.00   10.94
[pool2_tdata]   testvg Twi-ao----  80.00m
[pool2_tmeta]   testvg ewi-ao----   4.00m
PASS: lvs -a testvg
INFO: [2022-09-28 03:17:32] Running: 'lvrename testvg pool1 bakpool1'...
Renamed "pool1" to "bakpool1" in volume group "testvg"
PASS: lvrename testvg pool1 bakpool1
INFO: [2022-09-28 03:17:33] Running: 'lvs testvg/bakpool1'...
LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
bakpool1 testvg twi-aotz-- 80.00m             0.00   10.94
PASS: lvs testvg/bakpool1
INFO: [2022-09-28 03:17:33] Running: 'lvs testvg/pool1'...
Failed to find logical volume "testvg/pool1"
PASS: lvs testvg/pool1 [exited with error, as expected]
PASS: testvg/lv1 pool_lv == bakpool1
INFO: [2022-09-28 03:17:33] Running: 'lvrename testvg lv1 baklv1'...
Renamed "lv1" to "baklv1" in volume group "testvg"
PASS: lvrename testvg lv1 baklv1
INFO: [2022-09-28 03:17:34] Running: 'lvs testvg/baklv1'...
LV     VG     Attr       LSize   Pool     Origin Data%  Meta%  Move Log Cpy%Sync Convert
baklv1 testvg Vwi-a-tz-- 100.00m bakpool1        0.00
PASS: lvs testvg/baklv1
INFO: [2022-09-28 03:17:34] Running: 'lvs testvg/lv1'...
Failed to find logical volume "testvg/lv1"
PASS: lvs testvg/lv1 [exited with error, as expected]
PASS: testvg/baklv1 pool_lv == bakpool1
INFO: [2022-09-28 03:17:34] Running: 'lvrename testvg pool2 bakpool2'...
Renamed "pool2" to "bakpool2" in volume group "testvg"
PASS: lvrename testvg pool2 bakpool2
INFO: [2022-09-28 03:17:35] Running: 'lvs testvg/bakpool2'...
LV       VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
bakpool2 testvg twi-aotz-- 80.00m             0.00   10.94
PASS: lvs testvg/bakpool2
INFO: [2022-09-28 03:17:35] Running: 'lvs testvg/pool2'...
Failed to find logical volume "testvg/pool2" PASS: lvs testvg/pool2 [exited with error, as expected] PASS: testvg/lv2 pool_lv == bakpool2 INFO: [2022-09-28 03:17:35] Running: 'lvrename testvg lv2 baklv2'... Renamed "lv2" to "baklv2" in volume group "testvg" PASS: lvrename testvg lv2 baklv2 INFO: [2022-09-28 03:17:36] Running: 'lvs testvg/baklv2'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert baklv2 testvg Vwi-a-tz-- 100.00m bakpool2 0.00 PASS: lvs testvg/baklv2 INFO: [2022-09-28 03:17:36] Running: 'lvs testvg/lv2'... Failed to find logical volume "testvg/lv2" PASS: lvs testvg/lv2 [exited with error, as expected] PASS: testvg/baklv2 pool_lv == bakpool2 INFO: [2022-09-28 03:17:36] Running: 'vgremove --force testvg'... Logical volume "baklv2" successfully removed. Logical volume "bakpool2" successfully removed. Logical volume "baklv1" successfully removed. Logical volume "bakpool1" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:17:38] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:17:40] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:17:40] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:17:40] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:17:41] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:17:41] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2022-09-28 03:17:41] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:17:42] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:17:42] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:17:43] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. 
INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:17:44] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:17:44] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:17:44] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:17:44] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:17:44] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:17:44] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:17:44] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:17:44] Running: 'modprobe -r dm_thin_pool'... 
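Each rename in the run above is verified three ways: the new name resolves (`lvs testvg/bakpool1`), the old name fails as expected (`lvs testvg/pool1`), and the thin volume's `pool_lv` field follows the renamed pool. A minimal sketch of that last comparison, run here against a canned `lvs --noheadings -o pool_lv` string rather than a live VG (the sample value mirrors the log; this script is not part of the test suite):

```shell
#!/bin/sh
# Simulated output of: lvs --noheadings -o pool_lv testvg/lv1
# (after 'lvrename testvg pool1 bakpool1'; on a live system capture
# this with command substitution instead of hard-coding it).
lvs_output="  bakpool1"
expected="bakpool1"

# lvs pads report fields with whitespace, so trim before comparing.
actual=$(printf '%s' "$lvs_output" | tr -d '[:space:]')

if [ "$actual" = "$expected" ]; then
    echo "PASS: testvg/lv1 pool_lv == $expected"
else
    echo "FAIL: pool_lv is '$actual', expected '$expected'"
    exit 1
fi
```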
################################ Test Summary ##################################
PASS: lvcreate -l20 -V100M -T testvg/pool1 -n lv1
PASS: lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2
PASS: lvs -a testvg
PASS: lvrename testvg pool1 bakpool1
PASS: lvs testvg/bakpool1
PASS: lvs testvg/pool1 [exited with error, as expected]
PASS: testvg/lv1 pool_lv == bakpool1
PASS: lvrename testvg lv1 baklv1
PASS: lvs testvg/baklv1
PASS: lvs testvg/lv1 [exited with error, as expected]
PASS: testvg/baklv1 pool_lv == bakpool1
PASS: lvrename testvg pool2 bakpool2
PASS: lvs testvg/bakpool2
PASS: lvs testvg/pool2 [exited with error, as expected]
PASS: testvg/lv2 pool_lv == bakpool2
PASS: lvrename testvg lv2 baklv2
PASS: lvs testvg/baklv2
PASS: lvs testvg/lv2 [exited with error, as expected]
PASS: testvg/baklv2 pool_lv == bakpool2
PASS: Search for error on the server
#############################
Total tests that passed: 20
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvresize-thinp.py'
==============================================================================================================
INFO: [2022-09-28 03:17:46] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvresize-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:17:46] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:17:46] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:17:46] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:17:46] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:17:46] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:17:46] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75777sec preferred_lft 75777sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591916sec preferred_lft 604716sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:17:46] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   69M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:17:46] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
################################################################################
INFO: Starting Thin Resize test
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
INFO: Creating loop device /var/tmp/loop0.img with size 256
INFO: Creating file /var/tmp/loop0.img
INFO: [2022-09-28 03:17:46] Running: 'fallocate -l 256M /var/tmp/loop0.img'...
INFO: [2022-09-28 03:17:46] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 256
INFO: Creating file /var/tmp/loop1.img
INFO: [2022-09-28 03:17:46] Running: 'fallocate -l 256M /var/tmp/loop1.img'...
INFO: [2022-09-28 03:17:46] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 256
INFO: Creating file /var/tmp/loop2.img
INFO: [2022-09-28 03:17:46] Running: 'fallocate -l 256M /var/tmp/loop2.img'...
INFO: [2022-09-28 03:17:46] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 256
INFO: Creating file /var/tmp/loop3.img
INFO: [2022-09-28 03:17:46] Running: 'fallocate -l 256M /var/tmp/loop3.img'...
INFO: [2022-09-28 03:17:46] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2022-09-28 03:17:46] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created ################################################################################ INFO: Starting extend test ################################################################################ INFO: [2022-09-28 03:17:47] Running: 'lvcreate -l2 -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. PASS: lvcreate -l2 -T testvg/pool1 INFO: [2022-09-28 03:17:47] Running: 'lvcreate -i2 -l2 -T testvg/pool2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -i2 -l2 -T testvg/pool2 INFO: [2022-09-28 03:17:48] Running: 'lvresize -l+2 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 PASS: testvg/pool1 lv_size == 16.00m INFO: [2022-09-28 03:17:48] Running: 'lvresize -L+8 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -L+8 -n testvg/pool1 PASS: testvg/pool1 lv_size == 24.00m INFO: [2022-09-28 03:17:49] Running: 'lvresize -L+8M -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -L+8M -n testvg/pool1 PASS: testvg/pool1 lv_size == 32.00m INFO: [2022-09-28 03:17:49] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop3'... 
Size of logical volume testvg/pool1_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop3 PASS: testvg/pool1 lv_size == 40.00m INFO: [2022-09-28 03:17:50] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41'... Size of logical volume testvg/pool1_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41 PASS: testvg/pool1 lv_size == 48.00m INFO: [2022-09-28 03:17:50] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)''... testvg [pool1_tdata] /dev/loop2(40) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)' INFO: [2022-09-28 03:17:51] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37'... Size of logical volume testvg/pool1_tdata changed from 48.00 MiB (12 extents) to 56.00 MiB (14 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37 PASS: testvg/pool1 lv_size == 56.00m INFO: [2022-09-28 03:17:51] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)''... testvg [pool1_tdata] /dev/loop1(35) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)' INFO: [2022-09-28 03:17:52] Running: 'lvresize -l16 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 56.00 MiB (14 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l16 -n testvg/pool1 PASS: testvg/pool1 lv_size == 64.00m INFO: [2022-09-28 03:17:52] Running: 'lvresize -L72m -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool1 successfully resized. 
PASS: lvresize -L72m -n testvg/pool1 PASS: testvg/pool1 lv_size == 72.00m INFO: [2022-09-28 03:17:53] Running: 'lvresize -l+100%FREE --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 988.00 MiB (247 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+100%FREE --test testvg/pool1 INFO: [2022-09-28 03:17:53] Running: 'lvresize -l+10%PVS --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%PVS --test testvg/pool1 INFO: [2022-09-28 03:17:53] Running: 'lvresize -l+10%VG -t testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%VG -t testvg/pool1 INFO: [2022-09-28 03:17:53] Running: 'lvresize -l+100%VG -t testvg/pool1'... TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 229 available PASS: lvresize -l+100%VG -t testvg/pool1 [exited with error, as expected] INFO: [2022-09-28 03:17:54] Running: 'lvresize -l+2 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 PASS: testvg/pool2 lv_size == 16.00m INFO: [2022-09-28 03:17:54] Running: 'lvresize -L+8 -n testvg/pool2'... 
Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L+8 -n testvg/pool2 PASS: testvg/pool2 lv_size == 24.00m INFO: [2022-09-28 03:17:55] Running: 'lvresize -L+8M -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L+8M -n testvg/pool2 PASS: testvg/pool2 lv_size == 32.00m INFO: [2022-09-28 03:17:55] Running: 'lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2 PASS: testvg/pool2 lv_size == 40.00m INFO: [2022-09-28 03:17:56] Running: 'lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31 PASS: testvg/pool2 lv_size == 48.00m INFO: [2022-09-28 03:17:56] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)' INFO: [2022-09-28 03:17:56] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)' INFO: [2022-09-28 03:17:57] Running: 'lvresize -l16 -n testvg/pool2'... 
Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 48.00 MiB (12 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l16 -n testvg/pool2 PASS: testvg/pool2 lv_size == 64.00m INFO: [2022-09-28 03:17:57] Running: 'lvresize -L72m -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L72m -n testvg/pool2 PASS: testvg/pool2 lv_size == 72.00m INFO: [2022-09-28 03:17:58] Running: 'lvresize -l+100%FREE --test testvg/pool2'... Using stripesize of last segment 64.00 KiB Rounding size (231 extents) down to stripe boundary size for segment (230 extents) Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 896.00 MiB (224 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+100%FREE --test testvg/pool2 INFO: [2022-09-28 03:17:58] Running: 'lvresize -l+10%PVS --test testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%PVS --test testvg/pool2 INFO: [2022-09-28 03:17:58] Running: 'lvresize -l+10%VG -t testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%VG -t testvg/pool2 INFO: [2022-09-28 03:17:58] Running: 'lvresize -l+100%VG -t testvg/pool2'... 
Using stripesize of last segment 64.00 KiB TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 213 available PASS: lvresize -l+100%VG -t testvg/pool2 [exited with error, as expected] INFO: [2022-09-28 03:17:58] Running: 'lvremove -ff testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg INFO: [2022-09-28 03:18:00] Running: 'lvcreate -l10 -V8m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1 INFO: [2022-09-28 03:18:01] Running: 'lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv2" created. PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2 INFO: [2022-09-28 03:18:03] Running: 'lvextend -l4 testvg/lv1'... Size of logical volume testvg/lv1 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -l4 testvg/lv1 PASS: testvg/lv1 lv_size == 16.00m INFO: [2022-09-28 03:18:03] Running: 'lvextend -L24 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -L24 -n testvg/lv1 PASS: testvg/lv1 lv_size == 24.00m INFO: [2022-09-28 03:18:04] Running: 'lvextend -l+100%FREE --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. 
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (948.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (916.00 MiB). PASS: lvextend -l+100%FREE --test testvg/lv1 INFO: [2022-09-28 03:18:04] Running: 'lvextend -l+100%PVS --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.02 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%PVS --test testvg/lv1 INFO: [2022-09-28 03:18:04] Running: 'lvextend -l+50%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (536.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvextend -l+50%VG -t testvg/lv1 INFO: [2022-09-28 03:18:04] Running: 'lvextend -l+120%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 
Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (1.21 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+120%VG -t testvg/lv1 INFO: /mnt/lv already exist INFO: [2022-09-28 03:18:05] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'... Discarding device blocks: 0/24576 done Creating filesystem with 24576 1k blocks and 6144 inodes Filesystem UUID: 3adb5c9f-c33c-435b-a273-6ae854a77301 Superblock backups stored on blocks: 8193 Allocating group tables: 0/3 done Writing inode tables: 0/3 done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: 0/3 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:05] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'... INFO: [2022-09-28 03:18:05] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0604564 s, 86.7 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 INFO: [2022-09-28 03:18:05] Running: 'lvextend -l+2 -r testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/lv1 successfully resized. Filesystem at /dev/mapper/testvg-lv1 is mounted on /mnt/lv; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-lv1 is now 32768 (1k) blocks long. resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -r testvg/lv1 PASS: testvg/lv1 lv_size == 32.00m INFO: [2022-09-28 03:18:06] Running: 'lvcreate -K -s testvg/lv1 -n snap1'... Logical volume "snap1" created. PASS: lvcreate -K -s testvg/lv1 -n snap1 INFO: /mnt/snap already exist INFO: [2022-09-28 03:18:07] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'... 
Discarding device blocks: 0/32768 done Creating filesystem with 32768 1k blocks and 8192 inodes Filesystem UUID: a16be753-5782-499e-90a9-d1518c535177 Superblock backups stored on blocks: 8193, 24577 Allocating group tables: 0/4 done Writing inode tables: 0/4 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/4 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:07] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'... INFO: [2022-09-28 03:18:07] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.059986 s, 87.4 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 INFO: [2022-09-28 03:18:07] Running: 'lvextend -l+2 -rf testvg/snap1'... Size of logical volume testvg/snap1 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap1 successfully resized. Filesystem at /dev/mapper/testvg-snap1 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-snap1 is now 40960 (1k) blocks long. resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m INFO: [2022-09-28 03:18:08] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 33M 5.1M 26M 17% /mnt/snap INFO: [2022-09-28 03:18:08] Running: 'lvextend -L48 -rf testvg/snap1'... Size of logical volume testvg/snap1 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/snap1 successfully resized. 
Filesystem at /dev/mapper/testvg-snap1 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-snap1 is now 49152 (1k) blocks long. WARNING: Sum of all thin volume sizes (88.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -L48 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m INFO: [2022-09-28 03:18:09] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 40M 5.1M 33M 14% /mnt/snap INFO: [2022-09-28 03:18:09] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:18:09] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:18:09] Running: 'lvextend -l4 testvg/lv2'... Size of logical volume testvg/lv2 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/lv2 successfully resized. PASS: lvextend -l4 testvg/lv2 PASS: testvg/lv2 lv_size == 16.00m INFO: [2022-09-28 03:18:10] Running: 'lvextend -L24 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/lv2 successfully resized. PASS: lvextend -L24 -n testvg/lv2 PASS: testvg/lv2 lv_size == 24.00m INFO: [2022-09-28 03:18:10] Running: 'lvextend -l+100%FREE --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (1020.00 MiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). 
PASS: lvextend -l+100%FREE --test testvg/lv2 INFO: [2022-09-28 03:18:11] Running: 'lvextend -l+100%PVS --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.09 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%PVS --test testvg/lv2 INFO: [2022-09-28 03:18:11] Running: 'lvextend -l+50%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (608.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvextend -l+50%VG -t testvg/lv2 INFO: [2022-09-28 03:18:11] Running: 'lvextend -l+120%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. 
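The `-l+N%VG` results above can be checked against the VG size the warnings report (1008.00 MiB, i.e. 252 extents at 4 MiB each): the option adds N percent of the VG's total extent count to the LV, rounded up to a whole extent. A sketch of that arithmetic, with the extent size and rounding behavior inferred from the log:

```python
import math

EXTENT_MIB = 4                   # inferred: sizes are always multiples of 4 MiB
VG_EXTENTS = 1008 // EXTENT_MIB  # "size of whole volume group (1008.00 MiB)" -> 252 extents

def extend_by_pct_vg(current_extents: int, pct: float) -> int:
    """Extent count after 'lvextend -l+<pct>%VG', rounding partial extents up."""
    return current_extents + math.ceil(VG_EXTENTS * pct / 100)

# Matches the log: 24 MiB (6 extents) + 50%VG -> 528 MiB (132 extents)
assert extend_by_pct_vg(6, 50) == 132
# Matches the log: 24 MiB (6 extents) + 120%VG -> <1.21 GiB (309 extents)
assert extend_by_pct_vg(6, 120) == 309
```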
WARNING: Sum of all thin volume sizes (<1.29 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+120%VG -t testvg/lv2 INFO: /mnt/lv already exist INFO: [2022-09-28 03:18:11] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'... Discarding device blocks: 0/24576 done Creating filesystem with 24576 1k blocks and 6144 inodes Filesystem UUID: e06070f8-3f29-42a3-9c07-16e88ab31197 Superblock backups stored on blocks: 8193 Allocating group tables: 0/3 done Writing inode tables: 0/3 done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: 0/3 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:12] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'... INFO: [2022-09-28 03:18:12] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.060837 s, 86.2 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 INFO: [2022-09-28 03:18:12] Running: 'lvextend -l+2 -r testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/lv2 successfully resized. Filesystem at /dev/mapper/testvg-lv2 is mounted on /mnt/lv; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-lv2 is now 32768 (1k) blocks long. resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -r testvg/lv2 PASS: testvg/lv2 lv_size == 32.00m INFO: [2022-09-28 03:18:13] Running: 'lvcreate -K -s testvg/lv2 -n snap2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (144.00 MiB) exceeds the size of thin pools (80.00 MiB). 
PASS: lvcreate -K -s testvg/lv2 -n snap2 INFO: /mnt/snap already exist INFO: [2022-09-28 03:18:13] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'... Discarding device blocks: 0/32768 done Creating filesystem with 32768 1k blocks and 8192 inodes Filesystem UUID: 4b7b7251-813e-4c1d-927f-93525f6b87e6 Superblock backups stored on blocks: 8193, 24577 Allocating group tables: 0/4 done Writing inode tables: 0/4 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/4 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:14] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'... INFO: [2022-09-28 03:18:14] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0606879 s, 86.4 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 INFO: [2022-09-28 03:18:14] Running: 'lvextend -l+2 -rf testvg/snap2'... Size of logical volume testvg/snap2 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/snap2 successfully resized. Filesystem at /dev/mapper/testvg-snap2 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-snap2 is now 40960 (1k) blocks long. WARNING: Sum of all thin volume sizes (152.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m INFO: [2022-09-28 03:18:15] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 33M 5.1M 26M 17% /mnt/snap INFO: [2022-09-28 03:18:15] Running: 'lvextend -L48 -rf testvg/snap2'... 
Size of logical volume testvg/snap2 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/snap2 successfully resized. Filesystem at /dev/mapper/testvg-snap2 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/mapper/testvg-snap2 is now 49152 (1k) blocks long. WARNING: Sum of all thin volume sizes (160.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -L48 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 48.00m INFO: [2022-09-28 03:18:16] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 40M 5.1M 33M 14% /mnt/snap INFO: [2022-09-28 03:18:16] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:18:16] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:18:16] Running: 'lvremove -ff testvg'... Logical volume "lv2" successfully removed. Logical volume "snap2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "snap1" successfully removed. Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg ################################################################################ INFO: Starting reduce test ################################################################################ INFO: [2022-09-28 03:18:18] Running: 'lvcreate -L400M -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. PASS: lvcreate -L400M -T testvg/pool1 INFO: [2022-09-28 03:18:19] Running: 'lvresize -l-2 -n testvg/pool1'... Thin pool volumes testvg/pool1_tdata cannot be reduced in size yet. 
PASS: lvresize -l-2 -n testvg/pool1 [exited with error, as expected] PASS: testvg/pool1 lv_size == 400.00m INFO: [2022-09-28 03:18:19] Running: 'lvremove -ff testvg'... Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg INFO: [2022-09-28 03:18:20] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1 INFO: [2022-09-28 03:18:21] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents). Logical volume "lv2" created. PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2 INFO: [2022-09-28 03:18:23] Running: 'lvresize -f -l-2 testvg/lv1'... Size of logical volume testvg/lv1 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 92.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 92.00m INFO: [2022-09-28 03:18:23] Running: 'lvresize -f -L-8 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 84.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -L-8 -n testvg/lv1 PASS: testvg/lv1 lv_size == 84.00m INFO: [2022-09-28 03:18:24] Running: 'lvresize -f -L-8m -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 76.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) 
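The striped pool (`lvcreate -i2`) shows boundary rounding in both directions: a 100 MiB (25 extent) request is rounded up to 104 MiB (26 extents), while a 231-extent `+100%FREE` extension earlier was rounded down to 230. With two stripes, extent counts are kept at a multiple of the stripe count; a sketch of that rounding, assuming the behavior shown in these messages:

```python
STRIPES = 2  # lvcreate -i2: two stripes

def round_up_to_stripes(extents: int, stripes: int = STRIPES) -> int:
    # ceiling to the next multiple of the stripe count
    return -(-extents // stripes) * stripes

def round_down_to_stripes(extents: int, stripes: int = STRIPES) -> int:
    # floor to the previous multiple of the stripe count
    return extents // stripes * stripes

# "Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents)."
assert round_up_to_stripes(25) == 26
# "Rounding size (231 extents) down to stripe boundary size for segment (230 extents)"
assert round_down_to_stripes(231) == 230
```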
PASS: lvresize -f -L-8m -n testvg/lv1 PASS: testvg/lv1 lv_size == 76.00m INFO: [2022-09-28 03:18:25] Running: 'lvresize -f -l18 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 72.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l18 -n testvg/lv1 PASS: testvg/lv1 lv_size == 72.00m INFO: [2022-09-28 03:18:25] Running: 'lvresize -f -L64m -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents). Logical volume testvg/lv1 successfully resized. WARNING: Reducing active logical volume to 64.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -L64m -n testvg/lv1 PASS: testvg/lv1 lv_size == 64.00m INFO: [2022-09-28 03:18:26] Running: 'lvresize -f -l-1%FREE --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 4.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%FREE --test testvg/lv1 INFO: [2022-09-28 03:18:26] Running: 'lvresize -f -l-1%PVS --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%PVS --test testvg/lv1 INFO: [2022-09-28 03:18:26] Running: 'lvresize -f -l-1%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. 
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%VG -t testvg/lv1 INFO: [2022-09-28 03:18:26] Running: 'lvresize -f -l-1%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%VG -t testvg/lv1 INFO: /mnt/lv already exist INFO: [2022-09-28 03:18:26] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'... Discarding device blocks: 0/65536 done Creating filesystem with 65536 1k blocks and 16384 inodes Filesystem UUID: 7d590a9c-6b11-4dc2-bdfb-87a1ab6d582a Superblock backups stored on blocks: 8193, 24577, 40961, 57345 Allocating group tables: 0/8 done Writing inode tables: 0/8 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/8 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:27] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'... INFO: [2022-09-28 03:18:27] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0606409 s, 86.5 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 INFO: [2022-09-28 03:18:27] Running: 'yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1'... Do you want to unmount "/mnt/lv" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-lv1: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks Resizing the filesystem on /dev/mapper/testvg-lv1 to 57344 (1k) blocks. The filesystem on /dev/mapper/testvg-lv1 is now 57344 (1k) blocks long. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents).
Logical volume testvg/lv1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 56.00m INFO: [2022-09-28 03:18:28] Running: 'lvcreate -K -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (212.00 MiB) exceeds the size of thin pools (204.00 MiB). PASS: lvcreate -K -s testvg/lv1 -n snap1 INFO: /mnt/snap already exist INFO: [2022-09-28 03:18:29] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'... Discarding device blocks: 0/57344 done Creating filesystem with 57344 1k blocks and 14336 inodes Filesystem UUID: 09443d2e-906d-4bca-834b-47640a366bb5 Superblock backups stored on blocks: 8193, 24577, 40961 Allocating group tables: 0/7 done Writing inode tables: 0/7 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/7 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:29] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'... INFO: [2022-09-28 03:18:29] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0605558 s, 86.6 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 INFO: [2022-09-28 03:18:29] Running: 'yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap1: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks Resizing the filesystem on /dev/mapper/testvg-snap1 to 49152 (1k) blocks. The filesystem on /dev/mapper/testvg-snap1 is now 49152 (1k) blocks long. Size of logical volume testvg/snap1 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents). 
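The resize2fs block counts above follow directly from the LV sizes: mkfs.ext4 chose 1 KiB blocks for these small LVs, so a 64 MiB LV holds 65536 blocks and shrinking it by two 4 MiB extents leaves 57344. A sketch of that correspondence, with block and extent sizes taken from the log:

```python
EXTENT_MIB = 4    # inferred: LV sizes change in 4 MiB steps
FS_BLOCK_KIB = 1  # "Creating filesystem with 65536 1k blocks"

def fs_blocks(size_mib: int) -> int:
    """Number of filesystem blocks resize2fs reports for an LV of this size."""
    return size_mib * 1024 // FS_BLOCK_KIB

assert fs_blocks(64) == 65536                    # before the shrink
assert fs_blocks(64 - 2 * EXTENT_MIB) == 57344   # after 'lvresize -rf -l-2'
```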
Logical volume testvg/snap1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m INFO: [2022-09-28 03:18:31] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 40M 5.1M 32M 14% /mnt/snap INFO: [2022-09-28 03:18:31] Running: 'yes 2>/dev/null | lvresize -L40 -rf testvg/snap1'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap1: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks Resizing the filesystem on /dev/mapper/testvg-snap1 to 40960 (1k) blocks. The filesystem on /dev/mapper/testvg-snap1 is now 40960 (1k) blocks long. Size of logical volume testvg/snap1 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m INFO: [2022-09-28 03:18:32] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 33M 5.1M 25M 17% /mnt/snap INFO: [2022-09-28 03:18:32] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:18:32] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:18:32] Running: 'lvresize -f -l-2 testvg/lv2'... Size of logical volume testvg/lv2 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 92.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 92.00m INFO: [2022-09-28 03:18:32] Running: 'lvresize -f -L-8 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 84.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) 
PASS: lvresize -f -L-8 -n testvg/lv2 PASS: testvg/lv2 lv_size == 84.00m INFO: [2022-09-28 03:18:33] Running: 'lvresize -f -L-8m -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 76.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -L-8m -n testvg/lv2 PASS: testvg/lv2 lv_size == 76.00m INFO: [2022-09-28 03:18:34] Running: 'lvresize -f -l18 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 72.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l18 -n testvg/lv2 PASS: testvg/lv2 lv_size == 72.00m INFO: [2022-09-28 03:18:34] Running: 'lvresize -f -L64m -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents). Logical volume testvg/lv2 successfully resized. WARNING: Reducing active logical volume to 64.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -L64m -n testvg/lv2 PASS: testvg/lv2 lv_size == 64.00m INFO: [2022-09-28 03:18:35] Running: 'lvresize -f -l-1%FREE --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 4.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%FREE --test testvg/lv2 INFO: [2022-09-28 03:18:35] Running: 'lvresize -f -l-1%PVS --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. 
WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%PVS --test testvg/lv2 INFO: [2022-09-28 03:18:35] Running: 'lvresize -f -l-1%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%VG -t testvg/lv2 INFO: [2022-09-28 03:18:35] Running: 'lvresize -f -l-1%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Reducing active logical volume to 8.00 MiB. THIS MAY DESTROY YOUR DATA (filesystem etc.) PASS: lvresize -f -l-1%VG -t testvg/lv2 INFO: /mnt/lv already exist INFO: [2022-09-28 03:18:35] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'... Discarding device blocks: 0/65536 done Creating filesystem with 65536 1k blocks and 16384 inodes Filesystem UUID: 14ee93a4-b929-47f7-a25a-995fe021afe3 Superblock backups stored on blocks: 8193, 24577, 40961, 57345 Allocating group tables: 0/8 done Writing inode tables: 0/8 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/8 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:36] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'... INFO: [2022-09-28 03:18:36] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0608896 s, 86.1 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 INFO: [2022-09-28 03:18:36] Running: 'yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2'... Do you want to unmount "/mnt/lv" ? 
[Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-lv2: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks Resizing the filesystem on /dev/mapper/testvg-lv2 to 57344 (1k) blocks. The filesystem on /dev/mapper/testvg-lv2 is now 57344 (1k) blocks long. Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents). Logical volume testvg/lv2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 56.00m INFO: [2022-09-28 03:18:37] Running: 'lvcreate -K -s testvg/lv2 -n snap2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (208.00 MiB) exceeds the size of thin pools (204.00 MiB). PASS: lvcreate -K -s testvg/lv2 -n snap2 INFO: /mnt/snap already exist INFO: [2022-09-28 03:18:38] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'... Discarding device blocks: 0/57344 done Creating filesystem with 57344 1k blocks and 14336 inodes Filesystem UUID: a7952c12-13d4-4e2b-aa25-1041f9d04f70 Superblock backups stored on blocks: 8193, 24577, 40961 Allocating group tables: 0/7 done Writing inode tables: 0/7 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/7 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2022-09-28 03:18:38] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'... INFO: [2022-09-28 03:18:38] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0603327 s, 86.9 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 INFO: [2022-09-28 03:18:38] Running: 'yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2'... Do you want to unmount "/mnt/snap" ? 
[Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap2: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks Resizing the filesystem on /dev/mapper/testvg-snap2 to 49152 (1k) blocks. The filesystem on /dev/mapper/testvg-snap2 is now 49152 (1k) blocks long. Size of logical volume testvg/snap2 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents). Logical volume testvg/snap2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 48.00m INFO: [2022-09-28 03:18:40] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 40M 5.1M 32M 14% /mnt/snap INFO: [2022-09-28 03:18:40] Running: 'yes 2>/dev/null | lvresize -L40 -rf testvg/snap2'... Do you want to unmount "/mnt/snap" ? [Y|n] y fsck from util-linux 2.37.4 /dev/mapper/testvg-snap2: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks Resizing the filesystem on /dev/mapper/testvg-snap2 to 40960 (1k) blocks. The filesystem on /dev/mapper/testvg-snap2 is now 40960 (1k) blocks long. Size of logical volume testvg/snap2 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap2 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m INFO: [2022-09-28 03:18:41] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 33M 5.1M 25M 17% /mnt/snap INFO: [2022-09-28 03:18:41] Running: 'umount /mnt/lv'... INFO: [2022-09-28 03:18:41] Running: 'umount /mnt/snap'... INFO: [2022-09-28 03:18:41] Running: 'lvremove -ff testvg'... Logical volume "lv2" successfully removed. Logical volume "snap2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "snap1" successfully removed. Logical volume "pool1" successfully removed. 
PASS: lvremove -ff testvg INFO: [2022-09-28 03:18:43] Running: 'vgremove --force testvg'... Volume group "testvg" successfully removed INFO: [2022-09-28 03:18:44] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:18:45] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:18:45] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:18:45] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:18:46] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:18:46] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2022-09-28 03:18:46] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:18:48] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:18:48] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:18:48] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:18:49] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:18:49] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:18:49] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:18:49] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:18:49] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:18:49] Running: 'dmesg | grep -i ' segfault ''... 
INFO: [2022-09-28 03:18:49] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2022-09-28 03:18:49] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:18:49] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -l2 -T testvg/pool1 PASS: lvcreate -i2 -l2 -T testvg/pool2 PASS: lvresize -l+2 -n testvg/pool1 PASS: testvg/pool1 lv_size == 16.00m PASS: lvresize -L+8 -n testvg/pool1 PASS: testvg/pool1 lv_size == 24.00m PASS: lvresize -L+8M -n testvg/pool1 PASS: testvg/pool1 lv_size == 32.00m PASS: lvresize -l+2 -n testvg/pool1 /dev/loop3 PASS: testvg/pool1 lv_size == 40.00m PASS: lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41 PASS: testvg/pool1 lv_size == 48.00m PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)' PASS: lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37 PASS: testvg/pool1 lv_size == 56.00m PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)' PASS: lvresize -l16 -n testvg/pool1 PASS: testvg/pool1 lv_size == 64.00m PASS: lvresize -L72m -n testvg/pool1 PASS: testvg/pool1 lv_size == 72.00m PASS: lvresize -l+100%FREE --test testvg/pool1 PASS: lvresize -l+10%PVS --test testvg/pool1 PASS: lvresize -l+10%VG -t testvg/pool1 PASS: lvresize -l+100%VG -t testvg/pool1 [exited with error, as expected] PASS: lvresize -l+2 -n testvg/pool2 PASS: testvg/pool2 lv_size == 16.00m PASS: lvresize -L+8 -n testvg/pool2 PASS: testvg/pool2 lv_size == 24.00m PASS: lvresize -L+8M -n testvg/pool2 PASS: testvg/pool2 lv_size == 32.00m PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2 PASS: testvg/pool2 lv_size == 40.00m PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 
/dev/loop2:20-31 PASS: testvg/pool2 lv_size == 48.00m PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)' PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)' PASS: lvresize -l16 -n testvg/pool2 PASS: testvg/pool2 lv_size == 64.00m PASS: lvresize -L72m -n testvg/pool2 PASS: testvg/pool2 lv_size == 72.00m PASS: lvresize -l+100%FREE --test testvg/pool2 PASS: lvresize -l+10%PVS --test testvg/pool2 PASS: lvresize -l+10%VG -t testvg/pool2 PASS: lvresize -l+100%VG -t testvg/pool2 [exited with error, as expected] PASS: lvremove -ff testvg PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1 PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2 PASS: lvextend -l4 testvg/lv1 PASS: testvg/lv1 lv_size == 16.00m PASS: lvextend -L24 -n testvg/lv1 PASS: testvg/lv1 lv_size == 24.00m PASS: lvextend -l+100%FREE --test testvg/lv1 PASS: lvextend -l+100%PVS --test testvg/lv1 PASS: lvextend -l+50%VG -t testvg/lv1 PASS: lvextend -l+120%VG -t testvg/lv1 PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 PASS: lvextend -l+2 -r testvg/lv1 PASS: testvg/lv1 lv_size == 32.00m PASS: lvcreate -K -s testvg/lv1 -n snap1 PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 PASS: lvextend -l+2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m PASS: lvextend -L48 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m PASS: lvextend -l4 testvg/lv2 PASS: testvg/lv2 lv_size == 16.00m PASS: lvextend -L24 -n testvg/lv2 PASS: testvg/lv2 lv_size == 24.00m PASS: lvextend -l+100%FREE --test testvg/lv2 PASS: lvextend -l+100%PVS --test testvg/lv2 PASS: lvextend -l+50%VG -t testvg/lv2 PASS: lvextend -l+120%VG -t testvg/lv2 PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 PASS: lvextend -l+2 -r testvg/lv2 PASS: testvg/lv2 lv_size == 32.00m PASS: lvcreate -K -s testvg/lv2 -n snap2 PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 PASS: lvextend -l+2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m PASS: lvextend -L48 -rf testvg/snap2 PASS: testvg/snap2 
lv_size == 48.00m PASS: lvremove -ff testvg PASS: lvcreate -L400M -T testvg/pool1 PASS: lvresize -l-2 -n testvg/pool1 [exited with error, as expected] PASS: testvg/pool1 lv_size == 400.00m PASS: lvremove -ff testvg PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1 PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2 PASS: lvresize -f -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 92.00m PASS: lvresize -f -L-8 -n testvg/lv1 PASS: testvg/lv1 lv_size == 84.00m PASS: lvresize -f -L-8m -n testvg/lv1 PASS: testvg/lv1 lv_size == 76.00m PASS: lvresize -f -l18 -n testvg/lv1 PASS: testvg/lv1 lv_size == 72.00m PASS: lvresize -f -L64m -n testvg/lv1 PASS: testvg/lv1 lv_size == 64.00m PASS: lvresize -f -l-1%FREE --test testvg/lv1 PASS: lvresize -f -l-1%PVS --test testvg/lv1 PASS: lvresize -f -l-1%VG -t testvg/lv1 PASS: lvresize -f -l-1%VG -t testvg/lv1 PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 56.00m PASS: lvcreate -K -s testvg/lv1 -n snap1 PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m PASS: lvresize -f -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 92.00m PASS: lvresize -f -L-8 -n testvg/lv2 PASS: testvg/lv2 lv_size == 84.00m PASS: lvresize -f -L-8m -n testvg/lv2 PASS: testvg/lv2 lv_size == 76.00m PASS: lvresize -f -l18 -n testvg/lv2 PASS: testvg/lv2 lv_size == 72.00m PASS: lvresize -f -L64m -n testvg/lv2 PASS: testvg/lv2 lv_size == 64.00m PASS: lvresize -f -l-1%FREE --test testvg/lv2 PASS: lvresize -f -l-1%PVS --test testvg/lv2 PASS: lvresize -f -l-1%VG -t testvg/lv2 PASS: lvresize -f -l-1%VG -t testvg/lv2 PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2 PASS: testvg/lv2 lv_size == 56.00m PASS: lvcreate -K -s testvg/lv2 -n snap2 PASS: dd 
if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 48.00m PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m PASS: lvremove -ff testvg PASS: Search for error on the server ############################# Total tests that passed: 136 Total tests that failed: 0 Total tests that skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvscan-thinp.py' ============================================================================================================== INFO: [2022-09-28 03:18:51] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvscan-thinp.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:18:51] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:18:51] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:18:51] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:18:51] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:18:51] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. 
WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:18:51] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75712sec preferred_lft 75712sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591951sec preferred_lft 604751sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff ### File system disk space usage: ### INFO: [2022-09-28 03:18:51] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 70M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:18:51] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting LV Scan Thin Provisioning test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:18:51] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2022-09-28 03:18:52] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:18:52] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2022-09-28 03:18:52] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:18:52] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2022-09-28 03:18:52] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:18:52] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2022-09-28 03:18:52] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:18:52] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:18:52] Running: 'lvcreate -V100m -l10 -T testvg/pool -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. 
WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB). PASS: lvcreate -V100m -l10 -T testvg/pool -n lv1 INFO: [2022-09-28 03:18:54] Running: 'lvcreate -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB). PASS: lvcreate -s testvg/lv1 -n snap1 INFO: [2022-09-28 03:18:54] Running: 'lvcreate -s testvg/snap1 -n snap2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB). PASS: lvcreate -s testvg/snap1 -n snap2 INFO: [2022-09-28 03:18:55] Running: 'lvscan'... ACTIVE '/dev/testvg/pool' [40.00 MiB] inherit ACTIVE '/dev/testvg/lv1' [100.00 MiB] inherit inactive '/dev/testvg/snap1' [100.00 MiB] inherit inactive '/dev/testvg/snap2' [100.00 MiB] inherit INFO: [2022-09-28 03:18:55] Running: 'lvs testvg'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv1 testvg Vwi-a-tz-- 100.00m pool 0.00 pool testvg twi-aotz-- 40.00m 0.00 10.94 snap1 testvg Vwi---tz-k 100.00m pool lv1 snap2 testvg Vwi---tz-k 100.00m pool snap1 INFO: [2022-09-28 03:18:55] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"'... 
ACTIVE '/dev/testvg/pool' [40.00 MiB] inherit PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:55] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"'... ACTIVE '/dev/testvg/lv1' [100.00 MiB] inherit PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:55] Running: 'lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"'... inactive '/dev/testvg/snap1' [100.00 MiB] inherit PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:56] Running: 'lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"'... inactive '/dev/testvg/snap2' [100.00 MiB] inherit PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:56] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"'... PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" [exited with error, as expected] INFO: [2022-09-28 03:18:56] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"'... PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" [exited with error, as expected] INFO: [2022-09-28 03:18:56] Running: 'lvscan -a'... ACTIVE '/dev/testvg/pool' [40.00 MiB] inherit ACTIVE '/dev/testvg/lv1' [100.00 MiB] inherit inactive '/dev/testvg/snap1' [100.00 MiB] inherit inactive '/dev/testvg/snap2' [100.00 MiB] inherit inactive '/dev/testvg/lvol0_pmspare' [4.00 MiB] inherit ACTIVE '/dev/testvg/pool_tmeta' [4.00 MiB] inherit ACTIVE '/dev/testvg/pool_tdata' [40.00 MiB] inherit INFO: [2022-09-28 03:18:56] Running: 'lvs -a testvg'... 
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv1 testvg Vwi-a-tz-- 100.00m pool 0.00 [lvol0_pmspare] testvg ewi------- 4.00m pool testvg twi-aotz-- 40.00m 0.00 10.94 [pool_tdata] testvg Twi-ao---- 40.00m [pool_tmeta] testvg ewi-ao---- 4.00m snap1 testvg Vwi---tz-k 100.00m pool lv1 snap2 testvg Vwi---tz-k 100.00m pool snap1 INFO: [2022-09-28 03:18:56] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"'... ACTIVE '/dev/testvg/pool' [40.00 MiB] inherit PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:57] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"'... ACTIVE '/dev/testvg/lv1' [100.00 MiB] inherit PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:57] Running: 'lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"'... inactive '/dev/testvg/snap1' [100.00 MiB] inherit PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:57] Running: 'lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"'... inactive '/dev/testvg/snap2' [100.00 MiB] inherit PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:57] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"'... ACTIVE '/dev/testvg/pool_tdata' [40.00 MiB] inherit PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:57] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"'... ACTIVE '/dev/testvg/pool_tmeta' [4.00 MiB] inherit PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" INFO: [2022-09-28 03:18:58] Running: 'vgremove --force testvg'... 
Logical volume "lv1" successfully removed. Logical volume "snap1" successfully removed. Logical volume "snap2" successfully removed. Logical volume "pool" successfully removed. Volume group "testvg" successfully removed INFO: [2022-09-28 03:18:59] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2022-09-28 03:19:00] Running: 'losetup -d /dev/loop0'... INFO: [2022-09-28 03:19:00] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2022-09-28 03:19:00] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2022-09-28 03:19:01] Running: 'losetup -d /dev/loop1'... INFO: [2022-09-28 03:19:01] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2022-09-28 03:19:02] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2022-09-28 03:19:03] Running: 'losetup -d /dev/loop2'... INFO: [2022-09-28 03:19:03] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2022-09-28 03:19:03] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2022-09-28 03:19:04] Running: 'losetup -d /dev/loop3'... INFO: [2022-09-28 03:19:04] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:19:04] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:19:04] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:19:04] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. 
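The lvscan assertions in this test boil down to whitespace-anchored `egrep` patterns run over `lvscan` output. They can be exercised without any LVM at all against a captured line (sample copied from the log; GNU grep's `\s` extension is assumed, as in the test itself):

```shell
# Replaying one of the test's lvscan checks against a captured line,
# no root or LVM required.
sample="  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit"

if echo "$sample" | grep -qE "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"; then
    echo "matched"
fi
```

Note the escaped `\[`/`\]` around the size: unescaped brackets would start a character class and the pattern would silently match the wrong thing.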
INFO: [2022-09-28 03:19:04] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:19:04] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2022-09-28 03:19:04] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2022-09-28 03:19:04] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -V100m -l10 -T testvg/pool -n lv1 PASS: lvcreate -s testvg/lv1 -n snap1 PASS: lvcreate -s testvg/snap1 -n snap2 PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit" PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" [exited with error, as expected] PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" [exited with error, as expected] PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit" PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit" PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" PASS: Search for error on the server ############################# Total tests that passed: 16 Total tests that failed: 0 Total tests that 
skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvs-thinp.py' ============================================================================================================== INFO: [2022-09-28 03:19:06] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvs-thinp.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2022-09-28 03:19:06] Running: 'cat /proc/sys/kernel/tainted'... 77824 WARN: Kernel is tainted! INFO: [2022-09-28 03:19:06] Running: 'cat /tmp/previous-tainted'... 77824 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2022-09-28 03:19:06] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2022-09-28 03:19:06] Running: 'dmesg | grep -i ' segfault ''... INFO: [2022-09-28 03:19:06] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. WARN: Could not find recipe ID INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux Kernel tainted: 77824 ### IP settings: ### INFO: [2022-09-28 03:19:06] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0 valid_lft 75697sec preferred_lft 75697sec inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute valid_lft 2591998sec preferred_lft 604798sec inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff 4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff 5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff ### File system disk space usage: ### INFO: [2022-09-28 03:19:06] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 31G 0 31G 0% /dev/shm tmpfs 13G 70M 13G 1% /run /dev/sda5 1.8T 18G 1.8T 1% / /dev/sda1 459M 306M 125M 72% /boot tmpfs 6.2G 64K 6.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 197G 271G 43% /var/crash INFO: [2022-09-28 03:19:06] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le) ################################################################################ INFO: Starting lvs Thin Provisioning test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2022-09-28 03:19:07] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2022-09-28 03:19:07] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2022-09-28 03:19:07] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2022-09-28 03:19:07] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2022-09-28 03:19:07] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2022-09-28 03:19:07] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2022-09-28 03:19:07] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2022-09-28 03:19:07] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2022-09-28 03:19:07] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2022-09-28 03:19:07] Running: 'lvcreate -l1 -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. 
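Each of these tests rebuilds the same scratch VG from four loop-file PVs, as in the `fallocate`/`losetup`/`vgcreate` sequence above. A dry-run sketch of that flow (it only prints the commands, since `losetup` and `vgcreate` need root and a real LVM install; the `run` wrapper is illustrative):

```shell
# Dry-run sketch of the harness's loop-backed VG setup. `run` just echoes;
# swap it for eval "$@" on a disposable machine to execute for real.
run() { echo "+ $*"; }

pvs=""
for i in 0 1 2 3; do
    img=/var/tmp/loop$i.img
    run fallocate -l 128M "$img"     # 128 MiB backing file per PV
    run losetup /dev/loop$i "$img"   # attach it as a loop device
    pvs="$pvs /dev/loop$i"
done
run vgcreate --force testvg $pvs     # four 128 MiB PVs -> one scratch VG
```

Teardown mirrors it in reverse, as the log shows: `vgremove --force`, `pvremove`, `losetup -d`, then `rm -f` on the backing files.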
PASS: lvcreate -l1 -T testvg/pool1 PASS: testvg/pool1 thin_count == 0 INFO: [2022-09-28 03:19:08] Running: 'lvcreate -V100m -T testvg/pool1 -n lv1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB). PASS: lvcreate -V100m -T testvg/pool1 -n lv1 PASS: testvg/pool1 thin_count == 1 PASS: testvg/pool1 lv_name == pool1 PASS: testvg/pool1 lv_size == 4.00m PASS: testvg/pool1 lv_metadata_size == 4.00m PASS: testvg/pool1 lv_attr == twi-aotz-- PASS: testvg/pool1 modules == thin-pool PASS: testvg/pool1 metadata_lv == [pool1_tmeta] PASS: testvg/pool1 data_lv == [pool1_tdata] INFO: [2022-09-28 03:19:11] Running: 'lvs testvg/pool1 -o+metadata_percent'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Meta% pool1 testvg twi-aotz-- 4.00m 0.00 10.94 10.94 PASS: lvs testvg/pool1 -o+metadata_percent PASS: testvg/pool1 chunksize == 64.00k PASS: testvg/pool1 transaction_id == 1 PASS: testvg/lv1 pool_lv == pool1 PASS: testvg/lv1 lv_name == lv1 PASS: testvg/lv1 lv_size == 100.00m PASS: testvg/lv1 lv_attr == Vwi-a-tz-- PASS: testvg/lv1 modules == thin,thin-pool INFO: [2022-09-28 03:19:12] Running: 'lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----''... [pool1_tdata] testvg Twi-ao---- 4.00m PASS: lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----' INFO: [2022-09-28 03:19:12] Running: 'lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----''... [pool1_tmeta] testvg ewi-ao---- 4.00m PASS: lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----' INFO: [2022-09-28 03:19:12] Running: 'lvs -a testvg | egrep 'Meta%''... 
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert PASS: lvs -a testvg | egrep 'Meta%' INFO: [2022-09-28 03:19:12] Running: 'lvs -a testvg | egrep 'Data%''... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert PASS: lvs -a testvg | egrep 'Data%' INFO: [2022-09-28 03:19:13] Running: 'lvcreate -V100m -T testvg/pool1 -n lv2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB). PASS: lvcreate -V100m -T testvg/pool1 -n lv2 PASS: testvg/pool1 transaction_id == 2 PASS: testvg/pool1 thin_count == 2 PASS: testvg/pool1 zero == zero INFO: [2022-09-28 03:19:14] Running: 'lvcreate -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB). PASS: lvcreate -s testvg/lv1 -n snap1 PASS: testvg/pool1 transaction_id == 3 PASS: testvg/pool1 thin_count == 3 PASS: testvg/snap1 origin == lv1 PASS: testvg/snap1 lv_name == snap1 PASS: testvg/snap1 lv_size == 100.00m PASS: testvg/snap1 lv_attr == Vwi---tz-k PASS: testvg/snap1 modules == thin,thin-pool INFO: [2022-09-28 03:19:15] Running: 'grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf'... # thin_pool_autoextend_threshold = 70 # thin_pool_autoextend_threshold = 100 # thin_pool_autoextend_percent = 20 # thin_pool_autoextend_percent = 20 PASS: grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf INFO: [2022-09-28 03:19:15] Running: 'lvcreate -l25 -V84m -T testvg/pool2 -n lv3'... 
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "lv3" created.
PASS: lvcreate -l25 -V84m -T testvg/pool2 -n lv3
INFO: [2022-09-28 03:19:16] Running: 'dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84'...
84+0 records in
84+0 records out
88080384 bytes (88 MB, 84 MiB) copied, 0.0329288 s, 2.7 GB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 84.00
INFO: [2022-09-28 03:19:47] Running: 'journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 84.*% full''...
Sep 28 03:19:26 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool2-tpool data is now 84.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 84.*% full'
INFO: [2022-09-28 03:19:47] Running: 'lvextend -L88m testvg/lv3'...
  Size of logical volume testvg/lv3 changed from 84.00 MiB (21 extents) to 88.00 MiB (22 extents).
  Logical volume testvg/lv3 successfully resized.
PASS: lvextend -L88m testvg/lv3
INFO: [2022-09-28 03:19:47] Running: 'dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88'...
88+0 records in
88+0 records out
92274688 bytes (92 MB, 88 MiB) copied, 0.897674 s, 103 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 88.00
INFO: [2022-09-28 03:20:18] Running: 'journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 88.*% full''...
Sep 28 03:19:57 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool2-tpool data is now 88.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 88.*% full'
INFO: [2022-09-28 03:20:18] Running: 'lvs testvg/pool2 -o+metadata_percent'...
  LV    VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Meta%
  pool2 testvg twi-aotz-- 100.00m             88.00  11.52                            11.52
PASS: lvs testvg/pool2 -o+metadata_percent
INFO: [2022-09-28 03:20:19] Running: 'lvcreate -L100m -V100m -T testvg/pool3 -n lv4'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "lv4" created.
PASS: lvcreate -L100m -V100m -T testvg/pool3 -n lv4
INFO: [2022-09-28 03:20:20] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81'...
81+0 records in
81+0 records out
84934656 bytes (85 MB, 81 MiB) copied, 0.0337971 s, 2.5 GB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81
PASS: testvg/pool3 data_percent == 81.00
INFO: [2022-09-28 03:20:51] Running: 'journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 81.*% full''...
Sep 28 03:20:30 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool3-tpool data is now 81.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 81.*% full'
INFO: [2022-09-28 03:20:51] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86'...
86+0 records in
86+0 records out
90177536 bytes (90 MB, 86 MiB) copied, 0.863986 s, 104 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86
PASS: testvg/pool3 data_percent == 86.00
INFO: [2022-09-28 03:21:22] Running: 'journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 86.*% full''...
Sep 28 03:21:00 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool3-tpool data is now 86.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 86.*% full'
INFO: [2022-09-28 03:21:22] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91'...
91+0 records in
91+0 records out
95420416 bytes (95 MB, 91 MiB) copied, 0.866069 s, 110 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91
PASS: testvg/pool3 data_percent == 91.00
INFO: [2022-09-28 03:21:53] Running: 'journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 91.*% full''...
Sep 28 03:21:30 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool3-tpool data is now 91.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 91.*% full'
INFO: [2022-09-28 03:21:53] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96'...
96+0 records in
96+0 records out
100663296 bytes (101 MB, 96 MiB) copied, 0.801151 s, 126 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96
PASS: testvg/pool3 data_percent == 96.00
INFO: [2022-09-28 03:22:24] Running: 'journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 96.*% full''...
Sep 28 03:22:00 ibm-p9b-16.ibm2.lab.eng.bos.redhat.com dmeventd[498517]: WARNING: Thin pool testvg-pool3-tpool data is now 96.00% full.
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 96.*% full'
INFO: [2022-09-28 03:22:24] Running: 'lvs -o +invalid_option testvg/lv1 2>/dev/null'...
PASS: lvs -o +invalid_option testvg/lv1 2>/dev/null [exited with error, as expected]
INFO: [2022-09-28 03:22:24] Running: 'lvs -a testvg'...
  LV              VG     Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1             testvg Vwi-a-tz-- 100.00m pool1        0.00
  lv2             testvg Vwi-a-tz-- 100.00m pool1        0.00
  lv3             testvg Vwi-a-tz--  88.00m pool2        100.00
  lv4             testvg Vwi-a-tz-- 100.00m pool3        96.00
  [lvol0_pmspare] testvg ewi-------   4.00m
  pool1           testvg twi-aotz--   4.00m              0.00   11.04
  [pool1_tdata]   testvg Twi-ao----   4.00m
  [pool1_tmeta]   testvg ewi-ao----   4.00m
  pool2           testvg twi-aotz-- 100.00m              88.00  11.52
  [pool2_tdata]   testvg Twi-ao---- 100.00m
  [pool2_tmeta]   testvg ewi-ao----   4.00m
  pool3           testvg twi-aotz-- 100.00m              96.00  11.62
  [pool3_tdata]   testvg Twi-ao---- 100.00m
  [pool3_tmeta]   testvg ewi-ao----   4.00m
  snap1           testvg Vwi---tz-k 100.00m pool1 lv1
INFO: [2022-09-28 03:22:24] Running: 'vgremove --force testvg'...
  Logical volume "lv4" successfully removed.
  Logical volume "pool3" successfully removed.
  Logical volume "lv3" successfully removed.
  Logical volume "pool2" successfully removed.
  Logical volume "lv1" successfully removed.
  Logical volume "lv2" successfully removed.
  Logical volume "snap1" successfully removed.
  Logical volume "pool1" successfully removed.
  Volume group "testvg" successfully removed
INFO: [2022-09-28 03:22:27] Running: 'pvremove /dev/loop0'...
  Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2022-09-28 03:22:28] Running: 'losetup -d /dev/loop0'...
INFO: [2022-09-28 03:22:28] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2022-09-28 03:22:28] Running: 'pvremove /dev/loop1'...
  Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2022-09-28 03:22:29] Running: 'losetup -d /dev/loop1'...
INFO: [2022-09-28 03:22:29] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2022-09-28 03:22:30] Running: 'pvremove /dev/loop2'...
  Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2022-09-28 03:22:31] Running: 'losetup -d /dev/loop2'...
INFO: [2022-09-28 03:22:31] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2022-09-28 03:22:31] Running: 'pvremove /dev/loop3'...
  Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2022-09-28 03:22:32] Running: 'losetup -d /dev/loop3'...
INFO: [2022-09-28 03:22:32] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:22:32] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:22:32] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:22:32] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:22:32] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:22:32] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:22:32] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2022-09-28 03:22:33] Running: 'modprobe -r dm_thin_pool'...
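The fill-and-verify steps above (dd a known number of MiB into a thin LV, then assert the pool's data_percent) reduce to simple arithmetic. A minimal sketch of that expectation, using a hypothetical helper that is not part of the stqe suite, with values taken from this log:

```python
# Hypothetical helper (not from stqe): the data_percent value lvs is
# expected to report after dd writes part of a thin pool's capacity.
def expected_data_percent(written_mib: int, pool_mib: int) -> str:
    # lvs prints pool data usage as a percentage with two decimal places.
    return f"{written_mib / pool_mib * 100:.2f}"

# From the log: pool2 is 100 MiB (25 extents x 4 MiB); after writing 84 MiB
# the test asserts 'testvg/pool2 data_percent == 84.00', and pool3 reaches
# 96.00 after a 96 MiB write.
print(expected_data_percent(84, 100))
print(expected_data_percent(96, 100))
```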
################################ Test Summary ##################################
PASS: lvcreate -l1 -T testvg/pool1
PASS: testvg/pool1 thin_count == 0
PASS: lvcreate -V100m -T testvg/pool1 -n lv1
PASS: testvg/pool1 thin_count == 1
PASS: testvg/pool1 lv_name == pool1
PASS: testvg/pool1 lv_size == 4.00m
PASS: testvg/pool1 lv_metadata_size == 4.00m
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: testvg/pool1 modules == thin-pool
PASS: testvg/pool1 metadata_lv == [pool1_tmeta]
PASS: testvg/pool1 data_lv == [pool1_tdata]
PASS: lvs testvg/pool1 -o+metadata_percent
PASS: testvg/pool1 chunksize == 64.00k
PASS: testvg/pool1 transaction_id == 1
PASS: testvg/lv1 pool_lv == pool1
PASS: testvg/lv1 lv_name == lv1
PASS: testvg/lv1 lv_size == 100.00m
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: testvg/lv1 modules == thin,thin-pool
PASS: lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----'
PASS: lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----'
PASS: lvs -a testvg | egrep 'Meta%'
PASS: lvs -a testvg | egrep 'Data%'
PASS: lvcreate -V100m -T testvg/pool1 -n lv2
PASS: testvg/pool1 transaction_id == 2
PASS: testvg/pool1 thin_count == 2
PASS: testvg/pool1 zero == zero
PASS: lvcreate -s testvg/lv1 -n snap1
PASS: testvg/pool1 transaction_id == 3
PASS: testvg/pool1 thin_count == 3
PASS: testvg/snap1 origin == lv1
PASS: testvg/snap1 lv_name == snap1
PASS: testvg/snap1 lv_size == 100.00m
PASS: testvg/snap1 lv_attr == Vwi---tz-k
PASS: testvg/snap1 modules == thin,thin-pool
PASS: grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf
PASS: lvcreate -l25 -V84m -T testvg/pool2 -n lv3
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 84.00
PASS: journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 84.*% full'
PASS: lvextend -L88m testvg/lv3
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 88.00
PASS: journalctl -n 100 | grep 'testvg-pool2-tpool .*is now 88.*% full'
PASS: lvs testvg/pool2 -o+metadata_percent
PASS: lvcreate -L100m -V100m -T testvg/pool3 -n lv4
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81
PASS: testvg/pool3 data_percent == 81.00
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 81.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86
PASS: testvg/pool3 data_percent == 86.00
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 86.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91
PASS: testvg/pool3 data_percent == 91.00
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 91.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96
PASS: testvg/pool3 data_percent == 96.00
PASS: journalctl -n 100 | grep 'testvg-pool3-tpool .*is now 96.*% full'
PASS: lvs -o +invalid_option testvg/lv1 2>/dev/null [exited with error, as expected]
PASS: Search for error on the server
#############################
Total tests that passed: 62
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvm-thinp-misc.py'
==============================================================================================================
INFO: [2022-09-28 03:22:34] Running: 'python3 /usr/local/lib/python3.9/site-packages/stqe/tests/lvm/thinp/lvm-thinp-misc.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:22:34] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:22:34] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:22:34] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:22:34] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:22:34] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux ibm-p9b-16.ibm2.lab.eng.bos.redhat.com 5.14.0-169.mr1370_220927_1944.el9.ppc64le #1 SMP Tue Sep 27 19:55:30 UTC 2022 ppc64le ppc64le ppc64le GNU/Linux
Kernel tainted: 77824
### IP settings: ###
INFO: [2022-09-28 03:22:34] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enP2p1s0f0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether ac:1f:6b:56:e7:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.16.214.104/23 brd 10.16.215.255 scope global dynamic noprefixroute enP2p1s0f0
       valid_lft 75489sec preferred_lft 75489sec
    inet6 2620:52:0:10d6:ae1f:6bff:fe56:e7b8/64 scope global dynamic noprefixroute
       valid_lft 2591892sec preferred_lft 604692sec
    inet6 fe80::ae1f:6bff:fe56:e7b8/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enP2p1s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:b9 brd ff:ff:ff:ff:ff:ff
4: enP2p1s0f2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:ba brd ff:ff:ff:ff:ff:ff
5: enP2p1s0f3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ac:1f:6b:56:e7:bb brd ff:ff:ff:ff:ff:ff
### File system disk space usage: ###
INFO: [2022-09-28 03:22:34] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           31G     0   31G   0% /dev/shm
tmpfs                                           13G   70M   13G   1% /run
/dev/sda5                                      1.8T   18G  1.8T   1% /
/dev/sda1                                      459M  306M  125M  72% /boot
tmpfs                                          6.2G   64K  6.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G  197G  271G  43% /var/crash
INFO: [2022-09-28 03:22:34] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.16-3.el9.ppc64le)
################################################################################
INFO: Starting lvm Thin Provisioning Misc test
################################################################################
INFO: [2022-09-28 03:22:34] Running: 'lvm segtypes | grep -w "thin$"'...
thin
PASS: lvm segtypes | grep -w "thin$"
INFO: [2022-09-28 03:22:34] Running: 'lvm segtypes | grep -w "thin-pool$"'...
thin-pool
PASS: lvm segtypes | grep -w "thin-pool$"
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2022-09-28 03:22:34] Running: 'cat /proc/sys/kernel/tainted'...
77824
WARN: Kernel is tainted!
INFO: [2022-09-28 03:22:35] Running: 'cat /tmp/previous-tainted'...
77824
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2022-09-28 03:22:35] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2022-09-28 03:22:35] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2022-09-28 03:22:35] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
WARN: Could not find recipe ID
INFO: No kdump log found for this server
PASS: Search for error on the server
################################ Test Summary ##################################
PASS: lvm segtypes | grep -w "thin$"
PASS: lvm segtypes | grep -w "thin-pool$"
PASS: Search for error on the server
#############################
Total tests that passed: 3
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Generating test result report
==============================================================================================================
Test name: lvm/thinp/lvm-thinp-modules.py            Status: FAIL   Elapsed Time: 01m03s
Test name: lvm/thinp/lvchange-thin.py                Status: PASS   Elapsed Time: 53s
Test name: lvm/thinp/lvconf-thinp.py                 Status: PASS   Elapsed Time: 09s
Test name: lvm/thinp/lvconvert-thinpool.py           Status: PASS   Elapsed Time: 16s
Test name: lvm/thinp/lvconvert-thin-lv.py            Status: PASS   Elapsed Time: 16s
Test name: lvm/thinp/lvcreate-poolmetadataspare.py   Status: PASS   Elapsed Time: 20s
Test name: lvm/thinp/lvcreate-mirror.py              Status: FAIL   Elapsed Time: 10s
Test name: lvm/thinp/lvextend-thinp.py               Status: PASS   Elapsed Time: 41s
Test name: lvm/thinp/lvreduce-thinp.py               Status: PASS   Elapsed Time: 37s
Test name: lvm/thinp/lvremove-thinp.py               Status: PASS   Elapsed Time: 25s
Test name: lvm/thinp/lvrename-thinp.py               Status: PASS   Elapsed Time: 18s
Test name: lvm/thinp/lvresize-thinp.py               Status: PASS   Elapsed Time: 01m05s
Test name: lvm/thinp/lvscan-thinp.py                 Status: PASS   Elapsed Time: 15s
Test name: lvm/thinp/lvs-thinp.py                    Status: PASS   Elapsed Time: 03m28s
Test name: lvm/thinp/lvm-thinp-misc.py               Status: PASS   Elapsed Time: 02s
==============================================================================================================
Total - PASS: 13 FAIL: 2 SKIP: 0 WARN: 0
Total Time: 09m58s
==============================================================================================================
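The journalctl greps throughout the run wait for dmeventd to report pool fullness in the journal. The message format below is copied from this log; the 80% trigger threshold is an assumption for illustration (the exact reporting policy belongs to dmeventd and is not shown here):

```python
# Sketch of the message the 'journalctl | grep' checks match. The pool name
# and percent come from the log; the 80% threshold is an assumed default.
def pool_full_warning(pool: str, data_percent: float, threshold: float = 80.0):
    if data_percent >= threshold:
        return f"WARNING: Thin pool {pool} data is now {data_percent:.2f}% full."
    return None  # below threshold: dmeventd logs nothing for grep to find

print(pool_full_warning("testvg-pool3-tpool", 96.0))
```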