use_pty:FALSE /usr/share/restraint/plugins/run_task_plugins bash ./runtest.sh
[ 04:15:38 ] Running: 'dnf install -y --skip-broken python3-pip python3-wheel python3-augeas augeas-libs python3-netifaces'
Last metadata expiration check: -5472 days, 6:01:33 ago on Mon 18 Jan 2038 10:14:06 PM EST.
Package python3-pip-21.2.3-6.el9.noarch is already installed.
Package augeas-libs-1.13.0-3.el9.x86_64 is already installed.
Dependencies resolved.
================================================================================
 Package              Arch      Version          Repository       Size
================================================================================
Installing:
 python3-augeas       noarch    0.5.0-25.el9     BUILDROOT-C9S    27 k
 python3-netifaces    x86_64    0.10.6-15.el9    BUILDROOT-C9S    22 k
 python3-wheel        noarch    1:0.36.2-7.el9   BUILDROOT-C9S    71 k

Transaction Summary
================================================================================
Install  3 Packages

Total download size: 121 k
Installed size: 320 k
Downloading Packages:
(1/3): python3-augeas-0.5.0-25.el9.noarch.rpm   2.1 MB/s |  27 kB     00:00
(2/3): python3-netifaces-0.10.6-15.el9.x86_64.r 1.4 MB/s |  22 kB     00:00
(3/3): python3-wheel-0.36.2-7.el9.noarch.rpm    3.6 MB/s |  71 kB     00:00
--------------------------------------------------------------------------------
Total                                           4.0 MB/s | 121 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                       1/1
  Installing       : python3-wheel-1:0.36.2-7.el9.noarch                   1/3
  Installing       : python3-netifaces-0.10.6-15.el9.x86_64                2/3
  Installing       : python3-augeas-0.5.0-25.el9.noarch                    3/3
  Running scriptlet: python3-augeas-0.5.0-25.el9.noarch                    3/3
  Verifying        : python3-augeas-0.5.0-25.el9.noarch                    1/3
  Verifying        : python3-netifaces-0.10.6-15.el9.x86_64                2/3
  Verifying        : python3-wheel-1:0.36.2-7.el9.noarch                   3/3

Installed:
  python3-augeas-0.5.0-25.el9.noarch
  python3-netifaces-0.10.6-15.el9.x86_64
  python3-wheel-1:0.36.2-7.el9.noarch

Complete!
[ 04:15:42 ] Running: 'python3 -m pip install virtualenv && python3 -m venv /opt/stqe-venv && source /opt/stqe-venv/bin/activate'
Collecting virtualenv
  Downloading virtualenv-20.17.1-py3-none-any.whl (8.8 MB)
Collecting distlib<1,>=0.3.6
  Downloading distlib-0.3.6-py2.py3-none-any.whl (468 kB)
Collecting platformdirs<3,>=2.4
  Downloading platformdirs-2.6.2-py3-none-any.whl (14 kB)
Collecting filelock<4,>=3.4.1
  Downloading filelock-3.9.0-py3-none-any.whl (9.7 kB)
Installing collected packages: platformdirs, filelock, distlib, virtualenv
Successfully installed distlib-0.3.6 filelock-3.9.0 platformdirs-2.6.2 virtualenv-20.17.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[ 04:15:53 ] Running: 'python3 -m pip install wheel'
Collecting wheel
  Downloading wheel-0.38.4-py3-none-any.whl (36 kB)
Installing collected packages: wheel
Successfully installed wheel-0.38.4
WARNING: You are using pip version 21.2.3; however, version 22.3.1 is available.
You should consider upgrading via the '/opt/stqe-venv/bin/python3 -m pip install --upgrade pip' command.
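The virtualenv bootstrap above can be reproduced outside the harness. A minimal sketch (the `/opt/stqe-venv` path comes from this log; here a writable temp path is substituted so the sketch does not require root, and the network-dependent `pip install` steps are omitted):

```shell
# Create an isolated virtualenv for the test tooling and activate it
# (the log uses /opt/stqe-venv; any writable path works).
VENV="${VENV:-$(mktemp -d)/stqe-venv}"
python3 -m venv "$VENV"      # stdlib venv; installing 'virtualenv' as the log does is optional on Python 3
. "$VENV/bin/activate"       # from here on, python3/pip resolve inside the venv
```

After activation, `python3 -m pip install wheel stqe` mirrors the remaining steps in the log.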
[ 04:15:54 ] Running: 'python3 -m pip install stqe --no-binary=stqe'
Collecting stqe
  Downloading stqe-0.1.20.tar.gz (198 kB)
Collecting libsan
  Downloading libsan-0.4.0-py3-none-any.whl (205 kB)
Collecting python-augeas
  Downloading python-augeas-1.1.0.tar.gz (93 kB)
Collecting fmf==1.1.0
  Downloading fmf-1.1.0-py3-none-any.whl (36 kB)
Collecting pexpect
  Downloading pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting ruamel.yaml
  Downloading ruamel.yaml-0.17.21-py3-none-any.whl (109 kB)
Collecting jsonschema
  Downloading jsonschema-4.17.3-py3-none-any.whl (90 kB)
Collecting filelock
  Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting attrs>=17.4.0
  Downloading attrs-22.2.0-py3-none-any.whl (60 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Downloading pyrsistent-0.19.3-py3-none-any.whl (57 kB)
Collecting six
  Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
Collecting netapp-ontap
  Downloading netapp_ontap-9.11.1.0-py3-none-any.whl (27.1 MB)
Collecting ipaddress
  Downloading ipaddress-1.0.23-py2.py3-none-any.whl (18 kB)
Collecting requests
  Downloading requests-2.28.2-py3-none-any.whl (62 kB)
Collecting distro
  Downloading distro-1.8.0-py3-none-any.whl (20 kB)
Collecting netifaces
  Downloading netifaces-0.11.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl (32 kB)
Collecting marshmallow>=3.2.1
  Downloading marshmallow-3.19.0-py3-none-any.whl (49 kB)
Collecting urllib3>=1.26.7
  Downloading urllib3-1.26.14-py2.py3-none-any.whl (140 kB)
Collecting requests-toolbelt>=0.9.1
  Downloading requests_toolbelt-0.10.1-py2.py3-none-any.whl (54 kB)
Collecting packaging>=17.0
  Downloading packaging-23.0-py3-none-any.whl (42 kB)
Collecting charset-normalizer<4,>=2
  Downloading charset_normalizer-3.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (198 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2022.12.7-py3-none-any.whl (155 kB)
Collecting idna<4,>=2.5
  Downloading idna-3.4-py3-none-any.whl (61 kB)
Collecting ptyprocess>=0.5
  Downloading ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting cffi>=1.0.0
  Using cached cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (441 kB)
Collecting pycparser
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting ruamel.yaml.clib>=0.2.6
  Downloading ruamel.yaml.clib-0.2.7-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (519 kB)
Skipping wheel build for stqe, due to binaries being disabled for it.
Building wheels for collected packages: future, python-augeas
  Building wheel for future (setup.py): started
  Building wheel for future (setup.py): finished with status 'done'
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492036 sha256=50c6dcc891894682156dbff0417baf053af08f87a419ed656963b3b83b63562f
  Stored in directory: /root/.cache/pip/wheels/bf/5d/6a/2e53874f7ec4e2bede522385439531fafec8fafe005b5c3d1b
  Building wheel for python-augeas (setup.py): started
  Building wheel for python-augeas (setup.py): finished with status 'done'
  Created wheel for python-augeas: filename=python_augeas-1.1.0-py3-none-any.whl size=21249 sha256=f79367d870ea82f6545535c412f876abf216387514362b6e9c7ce792dc16a189
  Stored in directory: /root/.cache/pip/wheels/6e/68/47/25baeabcc9e90f81938e12ace4cb58eeba9deaeaae8bc3d577
Successfully built future python-augeas
Installing collected packages: urllib3, idna, charset-normalizer, certifi, requests, pycparser, packaging, ruamel.yaml.clib, requests-toolbelt, pyrsistent, marshmallow, cffi, attrs, six, ruamel.yaml, python-augeas, ptyprocess, netifaces, netapp-ontap, jsonschema, ipaddress, future, filelock, distro, pexpect, libsan, fmf, stqe
  Running setup.py install for stqe: started
  Running setup.py install for stqe: finished with status 'done'
Successfully installed attrs-22.2.0 certifi-2022.12.7 cffi-1.15.1 charset-normalizer-3.0.1 distro-1.8.0 filelock-3.9.0 fmf-1.1.0 future-0.18.3 idna-3.4 ipaddress-1.0.23 jsonschema-4.17.3 libsan-0.4.0 marshmallow-3.19.0 netapp-ontap-9.11.1.0 netifaces-0.11.0 packaging-23.0 pexpect-4.8.0 ptyprocess-0.7.0 pycparser-2.21 pyrsistent-0.19.3 python-augeas-1.1.0 requests-2.28.2 requests-toolbelt-0.10.1 ruamel.yaml-0.17.21 ruamel.yaml.clib-0.2.7 six-1.16.0 stqe-0.1.20 urllib3-1.26.14
WARNING: You are using pip version 21.2.3; however, version 22.3.1 is available.
You should consider upgrading via the '/opt/stqe-venv/bin/python3 -m pip install --upgrade pip' command.
[ 04:16:27 ] Running: '/opt/stqe-venv/bin/stqe-test run -c lvm/lvm-thinp-basic.conf'
==============================================================================================================
Running test 'lvm/thinp/lvchange-thin.py'
==============================================================================================================
INFO: [2023-01-26 04:16:28] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvchange-thin.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:16:29] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:16:29] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:16:29] Running: 'echo 101000000 > /tmp/previous-dump-check'...
INFO: [2023-01-26 04:16:29] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:16:29] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:16:29] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:16:29] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:16:30] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:16:30] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:16:30] Running: 'cat /root/console.log | grep -i ' segfault ''...
[   38.631436] lldpad[1167]: segfault at 0 ip 000056490128b2b9 sp 00007fff6761ebc0 error 6 in lldpad[56490127a000+43000]
[ 2150.011414] mmap18[161328]: segfault at 7fd1177fbff8 ip 0000000000404e06 sp 00007fd1177fc000 error 6 in mmap18[404000+1a000]
[ 2150.045559] mmap18[161330]: segfault at 7fd1177efff8 ip 0000000000404e06 sp 00007fd1177f0000 error 6 in mmap18[404000+1a000]
[ 2183.070208] pkey01[161652]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.103547] pkey01[161653]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.136816] pkey01[161654]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.170529] pkey01[161655]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.204489] pkey01[161656]: segfault at 7f9c8ba00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.237226] pkey01[161657]: segfault at 7f9c8ba00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.270028] pkey01[161658]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.304043] pkey01[161659]: segfault at 7f9c8bc00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.337038] pkey01[161660]: segfault at 7f9c8ba00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2183.369826] pkey01[161661]: segfault at 7f9c8ba00000 ip 00000000004052f0 sp 00007ffe6aa105e0 error 4 in pkey01[404000+1a000]
[ 2536.826710] select03[166330]: segfault at 7fb8133c5000 ip 00007fb81314500a sp 00007ffdc8d637b0 error 4 in libc.so.6[7fb813028000+175000]
[ 2637.960041] shmat01[167171]: segfault at 7efe00690000 ip 00000000004050b2 sp 00007ffc8d7d0690 error 6 in shmat01[404000+1a000]
INFO found segfault on /root/console.log
INFO: log_submit() - /root/console.log uploaded successfully
INFO: [2023-01-26 04:16:30] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
INFO: No kdump log found for this server
submit /var/log/messages, named messages.log
INFO: [2023-01-26 04:16:30] Running: 'cp /var/log/messages messages.log'...
INFO: log_submit() - messages.log uploaded successfully
INFO: sos is not installed
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:16:31] Running: 'ip a'...
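The console-log check above boils down to two case-insensitive greps over the diffed log. A standalone sketch of the same scan, run against a small illustrative sample (the sample lines are for demonstration, not from a live machine):

```shell
# Scan a console log for the two error signatures the harness looks for:
# ' segfault ' lines and 'Call Trace:' lines, both case-insensitive.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[   38.631436] lldpad[1167]: segfault at 0 ip 000056490128b2b9 sp 00007fff6761ebc0 error 6 in lldpad[56490127a000+43000]
[  512.000000] systemd[1]: Started a harmless unit.
EOF
SEGFAULTS=$(grep -ci ' segfault ' "$LOG" || true)   # count of segfault records
TRACES=$(grep -ci 'Call Trace:' "$LOG" || true)     # count of oops backtrace markers
echo "segfaults=$SEGFAULTS call_traces=$TRACES"
```

Note the spaces around `' segfault '` in the pattern, matching the harness invocation: they avoid matching words that merely contain "segfault".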
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77976sec preferred_lft 77976sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591975sec preferred_lft 604775sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:16:31] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.3G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:16:31] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64)
################################################################################
INFO: Starting LV Change Thin Provisioning test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2023-01-26 04:16:32] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2023-01-26 04:16:32] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2023-01-26 04:16:32] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2023-01-26 04:16:33] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2023-01-26 04:16:33] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2023-01-26 04:16:33] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2023-01-26 04:16:33] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2023-01-26 04:16:33] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: [2023-01-26 04:16:33] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop3 /dev/loop1 /dev/loop2'...
  Physical volume "/dev/loop0" successfully created.
  Physical volume "/dev/loop3" successfully created.
  Physical volume "/dev/loop1" successfully created.
  Physical volume "/dev/loop2" successfully created.
  Volume group "testvg" successfully created
INFO: [2023-01-26 04:16:34] Running: 'lvcreate -V100m -L100m -T testvg/pool1 -n lv1'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "lv1" created.
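The scratch-device pattern above (file-backed loop device becomes an LVM PV) can be sketched on its own. Assumptions: `losetup`/`vgcreate` need root, so the privileged half is guarded; `--find --show` picks a free loop device rather than hard-coding `/dev/loop0` as the harness does, and `mktemp` avoids clobbering an existing image:

```shell
# Back a scratch PV with a 128 MiB file, as the test setup does.
IMG=$(mktemp /var/tmp/loopXXXX.img)
fallocate -l 128M "$IMG"                      # allocate the backing file
if [ "$(id -u)" -eq 0 ]; then
    DEV=$(losetup --find --show "$IMG")       # attach first free loop device, print its name
    vgcreate --force testvg "$DEV"            # the loop device becomes a PV in testvg
fi
```

Teardown mirrors the end of the log: `vgremove --force testvg`, `pvremove`, `losetup -d`, then remove the backing file.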
INFO: [2023-01-26 04:16:36] Running: 'lvcreate -V100m -i2 -L100m -T testvg/pool2 -n lv2'...
  Using default stripesize 64.00 KiB.
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
  Logical volume "lv2" created.
PASS: testvg/pool1 discards == passdown
INFO: [2023-01-26 04:16:38] Running: 'lvchange --discards ignore testvg/pool1'...
  Cannot change support for discards while pool volume testvg/pool1 is active.
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == passdown
INFO: [2023-01-26 04:16:39] Running: 'lvchange --discards nopassdown testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards nopassdown testvg/pool1
PASS: testvg/pool1 discards == nopassdown
INFO: [2023-01-26 04:16:39] Running: 'lvchange --discards ignore testvg/pool1'...
  Cannot change support for discards while pool volume testvg/pool1 is active.
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == nopassdown
INFO: [2023-01-26 04:16:40] Running: 'lvchange --discards passdown testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards passdown testvg/pool1
PASS: testvg/pool1 discards == passdown
INFO: [2023-01-26 04:16:41] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2023-01-26 04:16:41] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: testvg/pool1 lv_attr == twi---tz--
INFO: [2023-01-26 04:16:47] Running: 'lvchange --discards ignore testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards ignore testvg/pool1
INFO: [2023-01-26 04:16:47] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2023-01-26 04:16:48] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: testvg/pool1 lv_attr == twi-aotz--
INFO: [2023-01-26 04:16:53] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2023-01-26 04:16:54] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2023-01-26 04:16:54] Running: 'lvchange --discards nopassdown testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards nopassdown testvg/pool1
INFO: [2023-01-26 04:16:54] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2023-01-26 04:16:55] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == nopassdown
INFO: [2023-01-26 04:16:56] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2023-01-26 04:16:56] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2023-01-26 04:16:56] Running: 'lvchange --discards ignore testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards ignore testvg/pool1
INFO: [2023-01-26 04:16:57] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2023-01-26 04:16:57] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
INFO: [2023-01-26 04:16:58] Running: 'lvchange -an testvg/pool1'...
PASS: lvchange -an testvg/pool1
INFO: [2023-01-26 04:16:58] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
INFO: [2023-01-26 04:16:59] Running: 'lvchange --discards passdown testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange --discards passdown testvg/pool1
INFO: [2023-01-26 04:16:59] Running: 'lvchange -ay testvg/pool1'...
PASS: lvchange -ay testvg/pool1
INFO: [2023-01-26 04:17:00] Running: 'lvchange -ay testvg/lv1'...
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == passdown
INFO: [2023-01-26 04:17:00] Running: 'lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV''...
  Command not permitted on LV testvg/pool1.
PASS: lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool1 lv_attr == twi-aotz--
INFO: [2023-01-26 04:17:01] Running: 'lvchange --refresh testvg/pool1'...
PASS: lvchange --refresh testvg/pool1
INFO: [2023-01-26 04:17:01] Running: 'lvchange --monitor n testvg/pool1'...
PASS: lvchange --monitor n testvg/pool1
INFO: [2023-01-26 04:17:01] Running: 'lvchange --monitor y testvg/pool1'...
PASS: lvchange --monitor y testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:02] Running: 'lvchange -Cy testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -Cy testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == contiguous
INFO: [2023-01-26 04:17:02] Running: 'lvchange -Cn testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -Cn testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: testvg/pool1 lv_read_ahead == auto
INFO: [2023-01-26 04:17:03] Running: 'lvchange -r 256 testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -r 256 testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 128.00k
INFO: [2023-01-26 04:17:04] Running: 'lvchange -r none testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -r none testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 0
INFO: [2023-01-26 04:17:04] Running: 'lvchange -r auto testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -r auto testvg/pool1
PASS: testvg/pool1 lv_read_ahead == auto
INFO: [2023-01-26 04:17:05] Running: 'lvchange -Zn testvg/pool1'...
  Logical volume testvg/pool1 changed.
PASS: lvchange -Zn testvg/pool1
PASS: testvg/pool1 zero ==
INFO: [2023-01-26 04:17:06] Running: 'lvchange -Z y testvg/pool1'...
  Logical volume testvg/pool1 changed.
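The `-r 256` / `lv_read_ahead == 128.00k` pairing in these assertions is unit conversion: `lvchange -r` (`--readahead`) takes a count of 512-byte sectors, while `lvs` reports read-ahead in KiB, so 256 sectors comes back as 128 KiB:

```shell
# lvchange -r N sets read-ahead to N 512-byte sectors;
# lvs reports the same value in KiB: 256 * 512 / 1024 = 128.
SECTORS=256
KIB=$(( SECTORS * 512 / 1024 ))
echo "${KIB}.00k"   # matches the 'lv_read_ahead == 128.00k' checks in this log
```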
PASS: lvchange -Z y testvg/pool1
PASS: testvg/pool1 zero == zero
PASS: testvg/pool2 discards == passdown
INFO: [2023-01-26 04:17:07] Running: 'lvchange --discards ignore testvg/pool2'...
  Cannot change support for discards while pool volume testvg/pool2 is active.
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == passdown
INFO: [2023-01-26 04:17:07] Running: 'lvchange --discards nopassdown testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards nopassdown testvg/pool2
PASS: testvg/pool2 discards == nopassdown
INFO: [2023-01-26 04:17:08] Running: 'lvchange --discards ignore testvg/pool2'...
  Cannot change support for discards while pool volume testvg/pool2 is active.
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == nopassdown
INFO: [2023-01-26 04:17:08] Running: 'lvchange --discards passdown testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards passdown testvg/pool2
PASS: testvg/pool2 discards == passdown
INFO: [2023-01-26 04:17:09] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2023-01-26 04:17:09] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: testvg/pool2 lv_attr == twi---tz--
INFO: [2023-01-26 04:17:15] Running: 'lvchange --discards ignore testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards ignore testvg/pool2
INFO: [2023-01-26 04:17:16] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2023-01-26 04:17:16] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: testvg/pool2 lv_attr == twi-aotz--
INFO: [2023-01-26 04:17:22] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2023-01-26 04:17:22] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2023-01-26 04:17:23] Running: 'lvchange --discards nopassdown testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards nopassdown testvg/pool2
INFO: [2023-01-26 04:17:23] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2023-01-26 04:17:24] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == nopassdown
INFO: [2023-01-26 04:17:24] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2023-01-26 04:17:24] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2023-01-26 04:17:25] Running: 'lvchange --discards ignore testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards ignore testvg/pool2
INFO: [2023-01-26 04:17:25] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2023-01-26 04:17:26] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
INFO: [2023-01-26 04:17:26] Running: 'lvchange -an testvg/pool2'...
PASS: lvchange -an testvg/pool2
INFO: [2023-01-26 04:17:27] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
INFO: [2023-01-26 04:17:27] Running: 'lvchange --discards passdown testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange --discards passdown testvg/pool2
INFO: [2023-01-26 04:17:28] Running: 'lvchange -ay testvg/pool2'...
PASS: lvchange -ay testvg/pool2
INFO: [2023-01-26 04:17:28] Running: 'lvchange -ay testvg/lv2'...
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == passdown
INFO: [2023-01-26 04:17:29] Running: 'lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV''...
  Command not permitted on LV testvg/pool2.
PASS: lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool2 lv_attr == twi-aotz--
INFO: [2023-01-26 04:17:29] Running: 'lvchange --refresh testvg/pool2'...
PASS: lvchange --refresh testvg/pool2
INFO: [2023-01-26 04:17:30] Running: 'lvchange --monitor n testvg/pool2'...
PASS: lvchange --monitor n testvg/pool2
INFO: [2023-01-26 04:17:30] Running: 'lvchange --monitor y testvg/pool2'...
PASS: lvchange --monitor y testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:30] Running: 'lvchange -Cy testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -Cy testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == contiguous
INFO: [2023-01-26 04:17:31] Running: 'lvchange -Cn testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -Cn testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: testvg/pool2 lv_read_ahead == auto
INFO: [2023-01-26 04:17:32] Running: 'lvchange -r 256 testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -r 256 testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 128.00k
INFO: [2023-01-26 04:17:33] Running: 'lvchange -r none testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -r none testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 0
INFO: [2023-01-26 04:17:33] Running: 'lvchange -r auto testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -r auto testvg/pool2
PASS: testvg/pool2 lv_read_ahead == auto
INFO: [2023-01-26 04:17:34] Running: 'lvchange -Zn testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -Zn testvg/pool2
PASS: testvg/pool2 zero ==
INFO: [2023-01-26 04:17:35] Running: 'lvchange -Z y testvg/pool2'...
  Logical volume testvg/pool2 changed.
PASS: lvchange -Z y testvg/pool2
PASS: testvg/pool2 zero == zero
INFO: [2023-01-26 04:17:35] Running: 'lvchange -an testvg/lv1'...
PASS: lvchange -an testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi---tz--
INFO: [2023-01-26 04:17:41] Running: 'ls /dev/testvg/lv1'...
ls: cannot access '/dev/testvg/lv1': No such file or directory
PASS: ls /dev/testvg/lv1 [exited with error, as expected]
INFO: [2023-01-26 04:17:41] Running: 'lvchange -a y testvg/lv1'...
PASS: lvchange -a y testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
INFO: [2023-01-26 04:17:42] Running: 'ls /dev/testvg/lv1'...
/dev/testvg/lv1
PASS: ls /dev/testvg/lv1
INFO: [2023-01-26 04:17:42] Running: 'lvchange -pr testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -pr testvg/lv1
PASS: testvg/lv1 lv_attr == Vri-a-tz--
INFO: [2023-01-26 04:17:43] Running: 'lvchange -p rw testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -p rw testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
INFO: [2023-01-26 04:17:43] Running: 'lvchange --refresh testvg/lv1'...
PASS: lvchange --refresh testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:44] Running: 'lvchange -Cy testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -Cy testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == contiguous
INFO: [2023-01-26 04:17:45] Running: 'lvchange -Cn testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -Cn testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:45] Running: 'lvchange -r 256 testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -r 256 testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 128.00k
INFO: [2023-01-26 04:17:46] Running: 'lvchange -r none testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -r none testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 0
INFO: [2023-01-26 04:17:46] Running: 'lvchange -r auto testvg/lv1'...
  Logical volume testvg/lv1 changed.
PASS: lvchange -r auto testvg/lv1
PASS: testvg/lv1 lv_read_ahead == auto
INFO: [2023-01-26 04:17:47] Running: 'lvchange -an testvg/lv2'...
PASS: lvchange -an testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi---tz--
INFO: [2023-01-26 04:17:53] Running: 'ls /dev/testvg/lv2'...
ls: cannot access '/dev/testvg/lv2': No such file or directory
PASS: ls /dev/testvg/lv2 [exited with error, as expected]
INFO: [2023-01-26 04:17:53] Running: 'lvchange -a y testvg/lv2'...
PASS: lvchange -a y testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
INFO: [2023-01-26 04:17:54] Running: 'ls /dev/testvg/lv2'...
/dev/testvg/lv2
PASS: ls /dev/testvg/lv2
INFO: [2023-01-26 04:17:54] Running: 'lvchange -pr testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -pr testvg/lv2
PASS: testvg/lv2 lv_attr == Vri-a-tz--
INFO: [2023-01-26 04:17:54] Running: 'lvchange -p rw testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -p rw testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
INFO: [2023-01-26 04:17:55] Running: 'lvchange --refresh testvg/lv2'...
PASS: lvchange --refresh testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:56] Running: 'lvchange -Cy testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -Cy testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == contiguous
INFO: [2023-01-26 04:17:56] Running: 'lvchange -Cn testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -Cn testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
INFO: [2023-01-26 04:17:57] Running: 'lvchange -r 256 testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -r 256 testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 128.00k
INFO: [2023-01-26 04:17:58] Running: 'lvchange -r none testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -r none testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 0
INFO: [2023-01-26 04:17:58] Running: 'lvchange -r auto testvg/lv2'...
  Logical volume testvg/lv2 changed.
PASS: lvchange -r auto testvg/lv2
PASS: testvg/lv2 lv_read_ahead == auto
INFO: [2023-01-26 04:17:59] Running: 'lvcreate -s testvg/lv1 -n lv3'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv3" created.
  WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (280.00 MiB).
PASS: lvcreate -s testvg/lv1 -n lv3
PASS: testvg/lv3 lv_attr == Vwi---tz-k
INFO: [2023-01-26 04:18:00] Running: 'ls /dev/testvg/lv3'...
ls: cannot access '/dev/testvg/lv3': No such file or directory
PASS: ls /dev/testvg/lv3 [exited with error, as expected]
INFO: [2023-01-26 04:18:00] Running: 'lvchange -ay -K testvg/lv3'...
PASS: lvchange -ay -K testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
INFO: [2023-01-26 04:18:01] Running: 'ls /dev/testvg/lv3'...
/dev/testvg/lv3
PASS: ls /dev/testvg/lv3
INFO: [2023-01-26 04:18:01] Running: 'lvchange -pr testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -pr testvg/lv3
PASS: testvg/lv3 lv_attr == Vri-a-tz-k
INFO: [2023-01-26 04:18:02] Running: 'lvchange -p rw testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -p rw testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
INFO: [2023-01-26 04:18:02] Running: 'lvchange --refresh testvg/lv3'...
PASS: lvchange --refresh testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
INFO: [2023-01-26 04:18:03] Running: 'lvchange -Cy testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -Cy testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == contiguous
INFO: [2023-01-26 04:18:04] Running: 'lvchange -Cn testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -Cn testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
INFO: [2023-01-26 04:18:04] Running: 'lvchange -r 256 testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -r 256 testvg/lv3
PASS: testvg/lv3 lv_read_ahead == 128.00k
INFO: [2023-01-26 04:18:05] Running: 'lvchange -r none testvg/lv3'...
  Logical volume testvg/lv3 changed.
PASS: lvchange -r none testvg/lv3 PASS: testvg/lv3 lv_read_ahead == 0 INFO: [2023-01-26 04:18:06] Running: 'lvchange -r auto testvg/lv3'... Logical volume testvg/lv3 changed. PASS: lvchange -r auto testvg/lv3 PASS: testvg/lv3 lv_read_ahead == auto INFO: [2023-01-26 04:18:07] Running: 'vgremove --force testvg'... Logical volume "lv2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "lv3" successfully removed. Logical volume "pool1" successfully removed. Volume group "testvg" successfully removed INFO: [2023-01-26 04:18:08] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:18:10] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:18:10] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:18:10] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:18:12] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:18:12] Running: 'rm -f /var/tmp/loop3.img'... INFO: [2023-01-26 04:18:12] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:18:13] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:18:13] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:18:14] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:18:15] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:18:15] Running: 'rm -f /var/tmp/loop2.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:18:15] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:18:15] Running: 'cat /tmp/previous-tainted'... 
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:18:16] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:18:16] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:16] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:18:16] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:18:16] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:18:16] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:18:16] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:17] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:18:17] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:18:17] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: testvg/pool1 discards == passdown
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == passdown
PASS: lvchange --discards nopassdown testvg/pool1
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange --discards ignore testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange --discards passdown testvg/pool1
PASS: testvg/pool1 discards == passdown
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: testvg/pool1 lv_attr == twi---tz--
PASS: lvchange --discards ignore testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards nopassdown testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == nopassdown
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards ignore testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == ignore
PASS: lvchange -an testvg/pool1
PASS: lvchange -an testvg/lv1
PASS: lvchange --discards passdown testvg/pool1
PASS: lvchange -ay testvg/pool1
PASS: lvchange -ay testvg/lv1
PASS: testvg/pool1 discards == passdown
PASS: lvchange -p r testvg/pool1 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: lvchange --refresh testvg/pool1
PASS: lvchange --monitor n testvg/pool1
PASS: lvchange --monitor y testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/pool1
PASS: testvg/pool1 lv_allocation_policy == inherit
PASS: testvg/pool1 lv_read_ahead == auto
PASS: lvchange -r 256 testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/pool1
PASS: testvg/pool1 lv_read_ahead == 0
PASS: lvchange -r auto testvg/pool1
PASS: testvg/pool1 lv_read_ahead == auto
PASS: lvchange -Zn testvg/pool1
PASS: testvg/pool1 zero ==
PASS: lvchange -Z y testvg/pool1
PASS: testvg/pool1 zero == zero
PASS: testvg/pool2 discards == passdown
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == passdown
PASS: lvchange --discards nopassdown testvg/pool2
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange --discards ignore testvg/pool2 [exited with error, as expected]
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange --discards passdown testvg/pool2
PASS: testvg/pool2 discards == passdown
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: testvg/pool2 lv_attr == twi---tz--
PASS: lvchange --discards ignore testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: testvg/pool2 lv_attr == twi-aotz--
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards nopassdown testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == nopassdown
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards ignore testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == ignore
PASS: lvchange -an testvg/pool2
PASS: lvchange -an testvg/lv2
PASS: lvchange --discards passdown testvg/pool2
PASS: lvchange -ay testvg/pool2
PASS: lvchange -ay testvg/lv2
PASS: testvg/pool2 discards == passdown
PASS: lvchange -p r testvg/pool2 2>&1 | grep 'Command not permitted on LV'
PASS: testvg/pool2 lv_attr == twi-aotz--
PASS: lvchange --refresh testvg/pool2
PASS: lvchange --monitor n testvg/pool2
PASS: lvchange --monitor y testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/pool2
PASS: testvg/pool2 lv_allocation_policy == inherit
PASS: testvg/pool2 lv_read_ahead == auto
PASS: lvchange -r 256 testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/pool2
PASS: testvg/pool2 lv_read_ahead == 0
PASS: lvchange -r auto testvg/pool2
PASS: testvg/pool2 lv_read_ahead == auto
PASS: lvchange -Zn testvg/pool2
PASS: testvg/pool2 zero ==
PASS: lvchange -Z y testvg/pool2
PASS: testvg/pool2 zero == zero
PASS: lvchange -an testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi---tz--
PASS: ls /dev/testvg/lv1 [exited with error, as expected]
PASS: lvchange -a y testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: ls /dev/testvg/lv1
PASS: lvchange -pr testvg/lv1
PASS: testvg/lv1 lv_attr == Vri-a-tz--
PASS: lvchange -p rw testvg/lv1
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: lvchange --refresh testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/lv1
PASS: testvg/lv1 lv_allocation_policy == inherit
PASS: lvchange -r 256 testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/lv1
PASS: testvg/lv1 lv_read_ahead == 0
PASS: lvchange -r auto testvg/lv1
PASS: testvg/lv1 lv_read_ahead == auto
PASS: lvchange -an testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi---tz--
PASS: ls /dev/testvg/lv2 [exited with error, as expected]
PASS: lvchange -a y testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
PASS: ls /dev/testvg/lv2
PASS: lvchange -pr testvg/lv2
PASS: testvg/lv2 lv_attr == Vri-a-tz--
PASS: lvchange -p rw testvg/lv2
PASS: testvg/lv2 lv_attr == Vwi-a-tz--
PASS: lvchange --refresh testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/lv2
PASS: testvg/lv2 lv_allocation_policy == inherit
PASS: lvchange -r 256 testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/lv2
PASS: testvg/lv2 lv_read_ahead == 0
PASS: lvchange -r auto testvg/lv2
PASS: testvg/lv2 lv_read_ahead == auto
PASS: lvcreate -s testvg/lv1 -n lv3
PASS: testvg/lv3 lv_attr == Vwi---tz-k
PASS: ls /dev/testvg/lv3 [exited with error, as expected]
PASS: lvchange -ay -K testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
PASS: ls /dev/testvg/lv3
PASS: lvchange -pr testvg/lv3
PASS: testvg/lv3 lv_attr == Vri-a-tz-k
PASS: lvchange -p rw testvg/lv3
PASS: testvg/lv3 lv_attr == Vwi-a-tz-k
PASS: lvchange --refresh testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
PASS: lvchange -Cy testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == contiguous
PASS: lvchange -Cn testvg/lv3
PASS: testvg/lv3 lv_allocation_policy == inherit
PASS: lvchange -r 256 testvg/lv3
PASS: testvg/lv3 lv_read_ahead == 128.00k
PASS: lvchange -r none testvg/lv3
PASS: testvg/lv3 lv_read_ahead == 0
PASS: lvchange -r auto testvg/lv3
PASS: testvg/lv3 lv_read_ahead == auto
PASS: Search for error on the server
#############################
Total tests that passed: 181
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvconf-thinp.py'
==============================================================================================================
INFO: [2023-01-26 04:18:19] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvconf-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:18:20] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:18:20] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:18:20] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:18:20] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:20] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:18:20] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:18:21] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:18:21] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:18:21] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:21] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:18:21] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77865sec preferred_lft 77865sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591995sec preferred_lft 604795sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:18:21] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.2G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:18:21] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64)
INFO: vg_remove - testvg does not exist. Skipping...
INFO: [2023-01-26 04:18:22] Running: 'cp -f /etc/lvm/lvm.conf /etc/lvm/lvm.conf.copy'...
################################################################################
INFO: Starting Thin Provisioning lvconf test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2023-01-26 04:18:23] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2023-01-26 04:18:23] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2023-01-26 04:18:23] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2023-01-26 04:18:23] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: [2023-01-26 04:18:23] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Volume group "testvg" successfully created
INFO: [2023-01-26 04:18:24] Running: 'lvcreate -l1 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l1 -T testvg/pool
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop3.img PVID none last seen on /dev/loop3 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop2.img PVID none last seen on /dev/loop2 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop3.img PVID none last seen on /dev/loop3 not found.) does not match lvs output format
FAIL:(libsan.host.lvm) ( Devices file loop_file /var/tmp/loop2.img PVID none last seen on /dev/loop2 not found.) does not match lvs output format
PASS: tmeta an tdata are in different devices
INFO: [2023-01-26 04:18:25] Running: 'lvcreate -i3 -l1 -T testvg/pool2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 12.00 MiB (3 extents).
Number of stripes (3) must not exceed number of physical volumes (2)
PASS: lvcreate -i3 -l1 -T testvg/pool2 [exited with error, as expected]
INFO: [2023-01-26 04:18:25] Running: 'lvcreate -l1 -T testvg/pool3'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool3" created.
PASS: lvcreate -l1 -T testvg/pool3
INFO: [2023-01-26 04:18:26] Running: 'lvcreate -i2 -l1 -T testvg/pool4'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 4.00 MiB (1 extents) up to stripe boundary size 8.00 MiB (2 extents).
Logical volume "pool4" created.
PASS: lvcreate -i2 -l1 -T testvg/pool4
PASS: thin_pool_autoextend_threshold == '100'
PASS: thin_pool_autoextend_percent == '20'
PASS: thin_pool_metadata_require_separate_pvs == '0'
INFO: [2023-01-26 04:18:27] Running: 'vgremove --force testvg'...
Logical volume "pool4" successfully removed.
Logical volume "pool3" successfully removed.
Logical volume "pool" successfully removed.
Volume group "testvg" successfully removed
INFO: [2023-01-26 04:18:28] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:18:30] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:18:30] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:18:30] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:18:32] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:18:32] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:18:32] Running: 'mv -f /etc/lvm/lvm.conf.copy /etc/lvm/lvm.conf'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:18:32] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:18:32] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:18:32] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:18:32] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:32] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:18:33] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:18:33] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:18:33] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:18:33] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:33] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:18:33] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l1 -T testvg/pool
PASS: tmeta an tdata are in different devices
PASS: lvcreate -i3 -l1 -T testvg/pool2 [exited with error, as expected]
PASS: lvcreate -l1 -T testvg/pool3
PASS: lvcreate -i2 -l1 -T testvg/pool4
PASS: thin_pool_autoextend_threshold == '100'
PASS: thin_pool_autoextend_percent == '20'
PASS: thin_pool_metadata_require_separate_pvs == '0'
PASS: Search for error on the server
#############################
Total tests that passed: 9
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvconvert-thinpool.py'
==============================================================================================================
INFO: [2023-01-26 04:18:35] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvconvert-thinpool.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:18:36] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:18:36] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:18:36] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:18:36] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:37] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:18:37] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:18:37] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:18:37] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:18:37] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:37] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:18:38] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77849sec preferred_lft 77849sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591978sec preferred_lft 604778sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:18:38] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.2G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:18:38] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Pool Convert test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:18:39] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2023-01-26 04:18:39] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:18:39] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2023-01-26 04:18:39] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:18:40] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2023-01-26 04:18:40] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:18:40] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2023-01-26 04:18:40] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2023-01-26 04:18:40] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:18:40] Running: 'lvcreate -l20 -n testvg/pool'... Logical volume "pool" created. PASS: lvcreate -l20 -n testvg/pool INFO: [2023-01-26 04:18:41] Running: 'lvconvert --thinpool testvg/pool -y'... 
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Converted testvg/pool to thin pool. WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) PASS: lvconvert --thinpool testvg/pool -y PASS: testvg/pool lv_attr == twi-a-tz-- INFO: [2023-01-26 04:18:42] Running: 'lvremove -ff testvg'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:18:43] Running: 'lvcreate --zero n -an -l20 -n testvg/pool'... Logical volume "pool" created. WARNING: Logical volume testvg/pool not zeroed. PASS: lvcreate --zero n -an -l20 -n testvg/pool INFO: [2023-01-26 04:18:43] Running: 'lvconvert --thinpool testvg/pool -y'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Converted testvg/pool to thin pool. WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) PASS: lvconvert --thinpool testvg/pool -y PASS: testvg/pool lv_attr == twi---tz-- PASS: testvg/pool discards == passdown INFO: [2023-01-26 04:18:44] Running: 'lvremove -ff testvg'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:18:44] Running: 'lvcreate -l20 -n testvg/pool'... Logical volume "pool" created. PASS: lvcreate -l20 -n testvg/pool INFO: [2023-01-26 04:18:45] Running: 'lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y'... Thin pool volume with chunk size 256.00 KiB can address at most 63.50 TiB of data. Converted testvg/pool to thin pool. WARNING: Converting testvg/pool to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) 
PASS: lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y PASS: testvg/pool chunksize == 256.00k PASS: testvg/pool discards == nopassdown PASS: testvg/pool lv_metadata_size == 4.00m PASS: testvg/pool lv_size == 80.00m INFO: [2023-01-26 04:18:46] Running: 'lvremove -ff testvg'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:18:47] Running: 'lvcreate -l20 -n testvg/pool'... Logical volume "pool" created. PASS: lvcreate -l20 -n testvg/pool INFO: [2023-01-26 04:18:47] Running: 'lvcreate -l10 -n testvg/metadata'... Logical volume "metadata" created. PASS: lvcreate -l10 -n testvg/metadata INFO: [2023-01-26 04:18:47] Running: 'lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Converted testvg/pool and testvg/metadata to thin pool. WARNING: Converting testvg/pool and testvg/metadata to thin pool's data and metadata volumes with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) PASS: lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata PASS: testvg/pool lv_size == 80.00m PASS: testvg/pool lv_metadata_size == 40.00m INFO: [2023-01-26 04:18:49] Running: 'lvremove -ff testvg'... Logical volume "pool" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:18:50] Running: 'vgremove --force testvg'... Volume group "testvg" successfully removed INFO: [2023-01-26 04:18:50] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:18:51] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:18:51] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:18:52] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. 
INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:18:53] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:18:53] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:18:53] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:18:55] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:18:55] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:18:55] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:18:57] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:18:57] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:18:57] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:18:57] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:18:57] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:18:57] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:18:57] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:18:57] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:18:58] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:18:58] Running: 'mv -f /root/console.log.new /root/console.log.prev'... 
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:18:58] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:18:58] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:18:58] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi-a-tz--
PASS: lvremove -ff testvg
PASS: lvcreate --zero n -an -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -y
PASS: testvg/pool lv_attr == twi---tz--
PASS: testvg/pool discards == passdown
PASS: lvremove -ff testvg
PASS: lvcreate -l20 -n testvg/pool
PASS: lvconvert --thinpool testvg/pool -c 256 -Z y --discards nopassdown --poolmetadatasize 4M -r 16 -y
PASS: testvg/pool chunksize == 256.00k
PASS: testvg/pool discards == nopassdown
PASS: testvg/pool lv_metadata_size == 4.00m
PASS: testvg/pool lv_size == 80.00m
PASS: lvremove -ff testvg
PASS: lvcreate -l20 -n testvg/pool
PASS: lvcreate -l10 -n testvg/metadata
PASS: lvconvert -y --thinpool testvg/pool --poolmetadata testvg/metadata
PASS: testvg/pool lv_size == 80.00m
PASS: testvg/pool lv_metadata_size == 40.00m
PASS: lvremove -ff testvg
PASS: Search for error on the server
############################# Total tests that passed: 23 Total tests that failed: 0 Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvconvert-thin-lv.py'
==============================================================================================================
INFO: [2023-01-26 04:19:00] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvconvert-thin-lv.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:19:01] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:19:01] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:19:01] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:19:01] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:19:02] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:19:02] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:19:02] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:19:02] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:19:02] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:19:02] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. 
INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:19:03] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77824sec preferred_lft 77824sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591953sec preferred_lft 604753sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:19:03] Running: 'df -h'... 
Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:19:03] Running: 'rpm -q device-mapper-multipath'... package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Pool Convert test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:19:04] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2023-01-26 04:19:04] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:19:04] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2023-01-26 04:19:04] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:19:04] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2023-01-26 04:19:05] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:19:05] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2023-01-26 04:19:05] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... 
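The loop-device scratch setup the harness just performed (four 128 MiB backing files, each attached to a loop device) can be sketched as below. Only the backing-file creation is exercised for real here; `losetup` and `vgcreate` need root and real loop devices, so they are shown commented out. The temp path is an assumption, not taken from the harness.

```shell
# Sketch of the scratch-storage setup: one sparse backing file per future PV.
set -eu
img=$(mktemp /var/tmp/loopimg.XXXXXX)
fallocate -l 128M "$img"                  # same 128 MiB backing file as the log
size=$(stat -c '%s' "$img")
echo "$size"                              # prints: 134217728
# losetup /dev/loop0 "$img"               # root: attach the file to a loop device
# vgcreate --force testvg /dev/loop0 ...  # root: build the throwaway VG on top
rm -f "$img"
```

Because the file is allocated rather than written, the setup is fast and the backing files can simply be deleted during teardown, which is exactly the `rm -f /var/tmp/loopN.img` pattern seen in the cleanup sections.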
INFO: [2023-01-26 04:19:05] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:19:05] Running: 'lvcreate -l25 -n testvg/thin'... Logical volume "thin" created. PASS: lvcreate -l25 -n testvg/thin INFO: [2023-01-26 04:19:06] Running: 'mkfs.ext4 -F /dev/mapper/testvg-thin'... Discarding device blocks: 0/102400 done Creating filesystem with 102400 1k blocks and 25584 inodes Filesystem UUID: c8010932-274c-4cc6-be29-cfa4b9579766 Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729 Allocating group tables: 0/13 done Writing inode tables: 0/13 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/13 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:19:06] Running: 'mount /dev/mapper/testvg-thin /mnt/thin'... INFO: [2023-01-26 04:19:06] Running: 'dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.21118 s, 24.8 MB/s PASS: dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync INFO: [2023-01-26 04:19:07] Running: 'md5sum /mnt/thin/5m > pre_md5'... PASS: md5sum /mnt/thin/5m > pre_md5 INFO: [2023-01-26 04:19:07] Running: 'lvcreate -l50 -T -n testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. PASS: lvcreate -l50 -T -n testvg/pool test case:1 INFO: [2023-01-26 04:19:08] Running: 'lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y'... Logical volume "thin_origin" created. Converted testvg/thin to thin volume with external origin testvg/thin_origin. 
PASS: lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y 1.1 checking if the md5 checksum is not changed INFO: [2023-01-26 04:19:10] Running: 'md5sum /mnt/thin/5m > post_md5'... PASS: md5sum /mnt/thin/5m > post_md5 INFO: [2023-01-26 04:19:10] Running: 'diff pre_md5 post_md5'... PASS: diff pre_md5 post_md5 1.2 checking if the thin LV is converted PASS: testvg/thin lv_size == 100.00m PASS: testvg/thin pool_lv == pool PASS: testvg/thin lv_attr == Vwi-aotz-- PASS: testvg/thin origin == thin_origin 1.3 checking if a readonly lv is created for the pre-data PASS: testvg/thin_origin lv_attr == ori------- 1.4 checking if the new data will be stored in the pool INFO: [2023-01-26 04:19:11] Running: 'dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync'... 10+0 records in 10+0 records out 10485760 bytes (10 MB, 10 MiB) copied, 0.420827 s, 24.9 MB/s PASS: dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync PASS: Data percentage increased correctly 1.5 checking deleting the pre-data, the origin will not impact INFO: [2023-01-26 04:19:12] Running: 'rm -rf /mnt/thin/5m'... PASS: rm -rf /mnt/thin/5m INFO: [2023-01-26 04:19:13] Running: 'umount /mnt/thin'... INFO: [2023-01-26 04:19:13] Running: 'lvremove -ff testvg/thin'... Logical volume "thin" successfully removed. PASS: lvremove -ff testvg/thin INFO: [2023-01-26 04:19:14] Running: 'lvchange -ay testvg/thin_origin'... PASS: lvchange -ay testvg/thin_origin INFO: [2023-01-26 04:19:14] Running: 'mount /dev/mapper/testvg-thin_origin /mnt/thin'... mount: /mnt/thin: WARNING: source write-protected, mounted read-only. INFO: [2023-01-26 04:19:14] Running: 'md5sum /mnt/thin/5m > origin_md5'... PASS: md5sum /mnt/thin/5m > origin_md5 INFO: [2023-01-26 04:19:14] Running: 'diff pre_md5 origin_md5'... PASS: diff pre_md5 origin_md5 INFO: [2023-01-26 04:19:14] Running: 'umount /mnt/thin'... INFO: [2023-01-26 04:19:15] Running: 'vgremove --force testvg'... Logical volume "pool" successfully removed. 
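The data-integrity check used in step 1.1 above (checksum before conversion, checksum after, diff the two) is a generic pattern independent of LVM. A minimal sketch against a plain temp file, no root or thin pool required:

```shell
# Minimal sketch of the harness's md5 pre/post comparison, run against a
# temp file standing in for /mnt/thin/5m.
set -eu
d=$(mktemp -d)
head -c 5242880 /dev/urandom > "$d/5m"     # 5 MiB of random data
md5sum "$d/5m" > "$d/pre_md5"
# ... the lvconvert --thinpool/--thin step would run here ...
md5sum "$d/5m" > "$d/post_md5"
result=$(diff "$d/pre_md5" "$d/post_md5" && echo "data unchanged")
echo "$result"                             # prints: data unchanged
rm -rf "$d"
```

An empty diff (exit 0) is what turns into the `PASS: diff pre_md5 post_md5` lines in the log; any byte changed by the conversion would make `diff` exit non-zero and fail the test.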
Logical volume "thin_origin" successfully removed. Volume group "testvg" successfully removed INFO: [2023-01-26 04:19:16] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:19:17] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:19:17] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:19:17] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:19:19] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:19:19] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:19:19] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:19:21] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:19:21] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:19:21] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:19:22] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:19:23] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:19:23] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:19:23] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:19:23] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:19:23] Running: 'dmesg | grep -i ' segfault ''... 
INFO: [2023-01-26 04:19:23] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:19:23] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:19:23] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:19:24] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:19:24] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:19:24] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:19:24] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l25 -n testvg/thin
PASS: dd if=/dev/urandom of=/mnt/thin/5m bs=1M count=5;sync
PASS: md5sum /mnt/thin/5m > pre_md5
PASS: lvcreate -l50 -T -n testvg/pool
PASS: lvconvert --thinpool testvg/pool --thin testvg/thin --originname thin_origin -y
PASS: md5sum /mnt/thin/5m > post_md5
PASS: diff pre_md5 post_md5
PASS: testvg/thin lv_size == 100.00m
PASS: testvg/thin pool_lv == pool
PASS: testvg/thin lv_attr == Vwi-aotz--
PASS: testvg/thin origin == thin_origin
PASS: testvg/thin_origin lv_attr == ori-------
PASS: dd if=/dev/urandom of=/mnt/thin/10m bs=1M count=10;sync
PASS: Data percentage increased correctly
PASS: rm -rf /mnt/thin/5m
PASS: lvremove -ff testvg/thin
PASS: lvchange -ay testvg/thin_origin
PASS: md5sum /mnt/thin/5m > origin_md5
PASS: diff pre_md5 origin_md5
PASS: Search for error on the server
############################# Total tests that passed: 20 Total tests that failed: 0 Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvcreate-poolmetadataspare.py'
==============================================================================================================
INFO: [2023-01-26 04:19:26] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvcreate-poolmetadataspare.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:19:27] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:19:27] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:19:27] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:19:27] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:19:27] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:19:28] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:19:28] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:19:28] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:19:28] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:19:28] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:19:28] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77798sec preferred_lft 77798sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591928sec preferred_lft 604728sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:19:28] Running: 'df -h'... 
Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:19:28] Running: 'rpm -q device-mapper-multipath'... package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thinp Metadata Spare ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:19:29] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2023-01-26 04:19:30] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:19:30] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2023-01-26 04:19:30] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:19:30] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2023-01-26 04:19:30] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:19:31] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2023-01-26 04:19:31] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... 
INFO: [2023-01-26 04:19:31] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:19:31] Running: 'lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool0" created. WARNING: recovery of pools without pool metadata spare LV is not automated. PASS: lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n PASS: lvol0_pmspare does not exist INFO: [2023-01-26 04:19:32] Running: 'lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m PASS: testvg/lvol0_pmspare lv_size == 4.00m INFO: [2023-01-26 04:19:33] Running: 'lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m PASS: testvg/lvol0_pmspare lvsize == 8.00m INFO: [2023-01-26 04:19:34] Running: 'lvremove -ff testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. Logical volume "pool0" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:19:41] Running: 'lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. WARNING: recovery of pools without pool metadata spare LV is not automated. 
PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n PASS: lvol0_pmspare does not exist INFO: [2023-01-26 04:19:42] Running: 'lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y PASS: testvg/lvol0_pmspare lv_size == 4.00m INFO: [2023-01-26 04:19:43] Running: 'vgremove --force testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. Volume group "testvg" successfully removed INFO: [2023-01-26 04:19:44] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:19:45] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:19:45] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:19:46] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:19:47] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:19:47] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:19:47] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:19:49] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:19:49] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:19:49] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:19:51] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:19:51] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:19:51] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! 
INFO: [2023-01-26 04:19:51] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:19:51] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:19:51] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:19:51] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:19:51] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:19:52] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:19:52] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:19:52] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:19:52] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2023-01-26 04:19:52] Running: 'modprobe -r dm_thin_pool'... 
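The spare sizing observed above (lvol0_pmspare at 4.00m after pool1, then 8.00m after pool2) reflects LVM keeping the pool metadata spare at least as large as the largest pool metadata LV in the VG. A trivial restatement of that rule as arithmetic, using the `--poolmetadatasize` values from this run:

```shell
# lvol0_pmspare tracks the largest pool metadata LV in the VG
# (4 MiB, then 8 MiB, in the lvcreate runs above).
spare_mib=0
for meta_mib in 4 8; do
  if [ "$meta_mib" -gt "$spare_mib" ]; then spare_mib=$meta_mib; fi
done
echo "lvol0_pmspare: ${spare_mib}.00m"   # prints: lvol0_pmspare: 8.00m
```

This is why creating pools with `--poolmetadataspare n` leaves no lvol0_pmspare at all, at the cost of the WARNING about non-automated recovery seen in the log.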
################################ Test Summary ##################################
PASS: lvcreate -l10 --thin testvg/pool0 --poolmetadataspare n
PASS: lvol0_pmspare does not exist
PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadatasize 4m
PASS: testvg/lvol0_pmspare lv_size == 4.00m
PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadatasize 8m
PASS: testvg/lvol0_pmspare lvsize == 8.00m
PASS: lvremove -ff testvg
PASS: lvcreate -l10 --thin testvg/pool1 --poolmetadataspare n
PASS: lvol0_pmspare does not exist
PASS: lvcreate -l10 --thin testvg/pool2 --poolmetadataspare y
PASS: testvg/lvol0_pmspare lv_size == 4.00m
PASS: Search for error on the server
############################# Total tests that passed: 12 Total tests that failed: 0 Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvcreate-mirror.py'
==============================================================================================================
INFO: [2023-01-26 04:19:54] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvcreate-mirror.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:19:55] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:19:55] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:19:55] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:19:55] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:19:55] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:19:56] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:19:56] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:19:56] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:19:56] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:19:56] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:19:56] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77770sec preferred_lft 77770sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591900sec preferred_lft 604700sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:19:57] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:19:57] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Provisioning Mirror test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:19:58] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2023-01-26 04:19:58] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:19:58] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2023-01-26 04:19:58] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:19:58] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2023-01-26 04:19:59] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:19:59] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2023-01-26 04:19:59] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2023-01-26 04:19:59] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:19:59] Running: 'lvcreate -L4M --thin testvg/pool'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool" created. 
PASS: lvcreate -L4M --thin testvg/pool PASS: testvg/[pool_tdata] stripes == 1 INFO: [2023-01-26 04:20:01] Running: 'lvchange -an testvg/pool'... PASS: lvchange -an testvg/pool INFO: [2023-01-26 04:20:02] Running: 'lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata'... testvg/pool_tdata must be active to perform this operation. PASS: lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata [exited with error, as expected] INFO: [2023-01-26 04:20:03] Running: 'lvchange -ay testvg/pool'... PASS: lvchange -ay testvg/pool INFO: [2023-01-26 04:20:03] Running: 'lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata'... Logical volume testvg/pool_tdata successfully converted. PASS: lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata INFO: [2023-01-26 04:20:09] Running: 'lvconvert --type raid1 -m 1 --yes testvg/pool_tmeta'... Logical volume testvg/pool_tmeta successfully converted. PASS: lvconvert --type raid1 -m 1 --yes testvg/pool_tmeta INFO: [2023-01-26 04:20:10] Running: 'lvchange -ay testvg/pool'... PASS: lvchange -ay testvg/pool PASS: testvg/[pool_tdata_rimage_0] stripes == 1 PASS: testvg/[pool_tdata_rimage_1] stripes == 1 PASS: testvg/[pool_tdata_rimage_2] stripes == 1 PASS: testvg/[pool_tdata_rimage_3] stripes == 1 PASS: testvg/[pool_tdata_rmeta_0] stripes == 1 PASS: testvg/[pool_tdata_rmeta_1] stripes == 1 PASS: testvg/[pool_tdata_rmeta_2] stripes == 1 PASS: testvg/[pool_tdata_rmeta_3] stripes == 1 PASS: testvg/[pool_tmeta_rimage_0] stripes == 1 PASS: testvg/[pool_tmeta_rimage_1] stripes == 1 PASS: testvg/[pool_tmeta_rmeta_0] stripes == 1 PASS: testvg/[pool_tmeta_rmeta_1] stripes == 1 INFO: [2023-01-26 04:20:13] Running: 'lvs -a testvg'... 
  LV                    VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]       testvg ewi-------  4.00m
  pool                  testvg twi-a-tz--  4.00m             0.00   10.84
  [pool_tdata]          testvg rwi-aor---  4.00m                                    100.00
  [pool_tdata_rimage_0] testvg iwi-aor---  4.00m
  [pool_tdata_rimage_1] testvg iwi-aor---  4.00m
  [pool_tdata_rimage_2] testvg iwi-aor---  4.00m
  [pool_tdata_rimage_3] testvg iwi-aor---  4.00m
  [pool_tdata_rmeta_0]  testvg ewi-aor---  4.00m
  [pool_tdata_rmeta_1]  testvg ewi-aor---  4.00m
  [pool_tdata_rmeta_2]  testvg ewi-aor---  4.00m
  [pool_tdata_rmeta_3]  testvg ewi-aor---  4.00m
  [pool_tmeta]          testvg ewi-aor---  4.00m                                    100.00
  [pool_tmeta_rimage_0] testvg iwi-aor---  4.00m
  [pool_tmeta_rimage_1] testvg iwi-aor---  4.00m
  [pool_tmeta_rmeta_0]  testvg ewi-aor---  4.00m
  [pool_tmeta_rmeta_1]  testvg ewi-aor---  4.00m
INFO: [2023-01-26 04:20:14] Running: 'lvremove -ff testvg'...
  Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2023-01-26 04:20:15] Running: 'vgremove --force testvg'...
  Volume group "testvg" successfully removed
INFO: [2023-01-26 04:20:15] Running: 'pvremove /dev/loop0'...
  Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:20:17] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:20:17] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:20:17] Running: 'pvremove /dev/loop1'...
  Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:20:19] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:20:19] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:20:19] Running: 'pvremove /dev/loop2'...
  Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:20:20] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:20:20] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2023-01-26 04:20:21] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:20:22] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:20:22] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:20:22] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:20:22] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:20:23] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:20:23] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:20:23] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:20:23] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:20:23] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:20:23] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:20:23] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:20:23] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'raid1' was loaded during the test. Unloading it... INFO: [2023-01-26 04:20:24] Running: 'modprobe -r raid1'... module 'dm_raid' was loaded during the test. Unloading it... 
INFO: [2023-01-26 04:20:24] Running: 'modprobe -r dm_raid'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:20:25] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -L4M --thin testvg/pool
PASS: testvg/[pool_tdata] stripes == 1
PASS: lvchange -an testvg/pool
PASS: lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata [exited with error, as expected]
PASS: lvchange -ay testvg/pool
PASS: lvconvert --type raid1 --mirrors 3 --yes testvg/pool_tdata
PASS: lvconvert --type raid1 -m 1 --yes testvg/pool_tmeta
PASS: lvchange -ay testvg/pool
PASS: testvg/[pool_tdata_rimage_0] stripes == 1
PASS: testvg/[pool_tdata_rimage_1] stripes == 1
PASS: testvg/[pool_tdata_rimage_2] stripes == 1
PASS: testvg/[pool_tdata_rimage_3] stripes == 1
PASS: testvg/[pool_tdata_rmeta_0] stripes == 1
PASS: testvg/[pool_tdata_rmeta_1] stripes == 1
PASS: testvg/[pool_tdata_rmeta_2] stripes == 1
PASS: testvg/[pool_tdata_rmeta_3] stripes == 1
PASS: testvg/[pool_tmeta_rimage_0] stripes == 1
PASS: testvg/[pool_tmeta_rimage_1] stripes == 1
PASS: testvg/[pool_tmeta_rmeta_0] stripes == 1
PASS: testvg/[pool_tmeta_rmeta_1] stripes == 1
PASS: lvremove -ff testvg
PASS: Search for error on the server
#############################
Total tests that passed: 22
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvextend-thinp.py'
==============================================================================================================
INFO: [2023-01-26 04:20:27] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvextend-thinp.py'...
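Editor's note: the per-sub-LV PASS checks in the mirror test above follow from how `lvconvert --type raid1 -m N` lays out a raid1 LV: N+1 data images (`rimage`) plus one small raid metadata sub-LV (`rmeta`) per image, all hidden (bracketed) under the converted LV. A minimal sketch of that arithmetic, using LVM's rimage/rmeta naming convention:

```python
def raid1_sublvs(lv: str, mirrors: int) -> list:
    """Hidden sub-LVs created by `lvconvert --type raid1 -m <mirrors>`:
    mirrors+1 data images, each paired with a raid metadata sub-LV."""
    names = []
    for i in range(mirrors + 1):
        names.append(f"[{lv}_rimage_{i}]")
        names.append(f"[{lv}_rmeta_{i}]")
    return names

# --mirrors 3 on pool_tdata -> 4 rimage + 4 rmeta sub-LVs, as checked above
tdata = raid1_sublvs("pool_tdata", 3)
# -m 1 on pool_tmeta -> 2 rimage + 2 rmeta sub-LVs
tmeta = raid1_sublvs("pool_tmeta", 1)
```

This matches the 8 tdata and 4 tmeta `stripes == 1` checks in the summary.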
################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:20:28] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:20:28] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:20:28] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:20:28] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:20:28] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:20:28] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:20:28] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:20:29] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:20:29] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:20:29] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:20:29] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77738sec preferred_lft 77738sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591867sec preferred_lft 604667sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:20:29] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:20:29] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Extend test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 256 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:20:30] Running: 'fallocate -l 256M /var/tmp/loop0.img'... INFO: [2023-01-26 04:20:30] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 256 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:20:31] Running: 'fallocate -l 256M /var/tmp/loop1.img'... INFO: [2023-01-26 04:20:31] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 256 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:20:31] Running: 'fallocate -l 256M /var/tmp/loop2.img'... INFO: [2023-01-26 04:20:31] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 256 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:20:31] Running: 'fallocate -l 256M /var/tmp/loop3.img'... INFO: [2023-01-26 04:20:32] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2023-01-26 04:20:32] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:20:32] Running: 'lvcreate -l2 -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. 
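Editor's note: the extent counts this test reports later (a 252-extent volume group, "Insufficient free space: 252 extents needed") follow from the VG geometry set up above: four 256 MiB loop-backed PVs, each losing roughly 1 MiB to the LVM metadata area (the default size, an assumption here) and rounding down to whole 4 MiB extents. A rough sketch under those assumptions:

```python
PE_MIB = 4    # default vgcreate physical extent size
MDA_MIB = 1   # per-PV LVM metadata/label area (assumed default)

def pv_extents(pv_mib: int) -> int:
    # usable PV space rounds down to whole physical extents
    return (pv_mib - MDA_MIB) // PE_MIB

vg_extents = 4 * pv_extents(256)  # four loop-backed PVs
vg_mib = vg_extents * PE_MIB
```

With these defaults each PV contributes 63 extents, giving the 252-extent (1008 MiB) VG seen in the percent-extension output below.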
PASS: lvcreate -l2 -T testvg/pool1 INFO: [2023-01-26 04:20:33] Running: 'lvcreate -i2 -l2 -T testvg/pool2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -i2 -l2 -T testvg/pool2 INFO: [2023-01-26 04:20:34] Running: 'lvextend -l+2 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 PASS: testvg/pool1 lv_size == 16.00m INFO: [2023-01-26 04:20:34] Running: 'lvextend -L+8 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L+8 -n testvg/pool1 PASS: testvg/pool1 lv_size == 24.00m INFO: [2023-01-26 04:20:35] Running: 'lvextend -L+8M -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L+8M -n testvg/pool1 PASS: testvg/pool1 lv_size == 32.00m INFO: [2023-01-26 04:20:36] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop3'... Size of logical volume testvg/pool1_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop3 PASS: testvg/pool1 lv_size == 40.00m INFO: [2023-01-26 04:20:37] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41'... Size of logical volume testvg/pool1_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41 PASS: testvg/pool1 lv_size == 48.00m INFO: [2023-01-26 04:20:37] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)''... 
testvg [pool1_tdata] /dev/loop2(40) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)' INFO: [2023-01-26 04:20:38] Running: 'lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37'... Size of logical volume testvg/pool1_tdata changed from 48.00 MiB (12 extents) to 56.00 MiB (14 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37 PASS: testvg/pool1 lv_size == 56.00m INFO: [2023-01-26 04:20:39] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)''... testvg [pool1_tdata] /dev/loop1(35) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)' INFO: [2023-01-26 04:20:39] Running: 'lvextend -l16 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 56.00 MiB (14 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -l16 -n testvg/pool1 PASS: testvg/pool1 lv_size == 64.00m INFO: [2023-01-26 04:20:40] Running: 'lvextend -L72m -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool1 successfully resized. PASS: lvextend -L72m -n testvg/pool1 PASS: testvg/pool1 lv_size == 72.00m INFO: [2023-01-26 04:20:40] Running: 'lvextend -l+100%FREE --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 988.00 MiB (247 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+100%FREE --test testvg/pool1 INFO: [2023-01-26 04:20:41] Running: 'lvextend -l+10%PVS --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. 
PASS: lvextend -l+10%PVS --test testvg/pool1 INFO: [2023-01-26 04:20:41] Running: 'lvextend -l+10%VG -t testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+10%VG -t testvg/pool1 INFO: [2023-01-26 04:20:41] Running: 'lvextend -l+100%VG -t testvg/pool1'... TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 229 available PASS: lvextend -l+100%VG -t testvg/pool1 [exited with error, as expected] INFO: [2023-01-26 04:20:41] Running: 'lvextend -l+2 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -l+2 -n testvg/pool2 PASS: testvg/pool2 lv_size == 16.00m INFO: [2023-01-26 04:20:42] Running: 'lvextend -L+8 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -L+8 -n testvg/pool2 PASS: testvg/pool2 lv_size == 24.00m INFO: [2023-01-26 04:20:43] Running: 'lvextend -L+8M -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -L+8M -n testvg/pool2 PASS: testvg/pool2 lv_size == 32.00m INFO: [2023-01-26 04:20:44] Running: 'lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). 
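Editor's note: the three 8 MiB increments exercised above are deliberately equivalent spellings of the same operation: `-l+2` adds two 4 MiB extents, `-L+8` adds 8 with lvextend's default size unit of MiB, and `-L+8M` states the unit explicitly. A toy model of that bookkeeping (not the real lvextend parser):

```python
PE_MIB = 4  # volume group physical extent size

def lv_extend(size_mib: int, arg: str) -> int:
    """Toy lvextend size math: -l counts extents, -L counts MiB
    (MiB is the default -L unit, so '+8' and '+8M' are the same)."""
    if arg.startswith("-l+"):
        return size_mib + int(arg[3:]) * PE_MIB
    if arg.startswith("-L+"):
        return size_mib + int(arg[3:].rstrip("M"))
    raise ValueError(arg)

size = 8  # pool starts at 2 extents (8 MiB)
for step in ("-l+2", "-L+8", "-L+8M"):
    size = lv_extend(size, step)
```

Each step lands on the 16/24/32 MiB sizes the lv_size checks verify.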
Logical volume testvg/pool2 successfully resized. PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2 PASS: testvg/pool2 lv_size == 40.00m INFO: [2023-01-26 04:20:45] Running: 'lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31 PASS: testvg/pool2 lv_size == 48.00m INFO: [2023-01-26 04:20:45] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)' INFO: [2023-01-26 04:20:46] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)' INFO: [2023-01-26 04:20:46] Running: 'lvextend -l16 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 48.00 MiB (12 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -l16 -n testvg/pool2 PASS: testvg/pool2 lv_size == 64.00m INFO: [2023-01-26 04:20:47] Running: 'lvextend -L72m -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool2 successfully resized. PASS: lvextend -L72m -n testvg/pool2 PASS: testvg/pool2 lv_size == 72.00m INFO: [2023-01-26 04:20:47] Running: 'lvextend -l+100%FREE --test testvg/pool2'... 
Using stripesize of last segment 64.00 KiB Rounding size (231 extents) down to stripe boundary size for segment (230 extents) Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 896.00 MiB (224 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+100%FREE --test testvg/pool2 INFO: [2023-01-26 04:20:48] Running: 'lvextend -l+10%PVS --test testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+10%PVS --test testvg/pool2 INFO: [2023-01-26 04:20:48] Running: 'lvextend -l+10%VG -t testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvextend -l+10%VG -t testvg/pool2 INFO: [2023-01-26 04:20:48] Running: 'lvextend -l+100%VG -t testvg/pool2'... Using stripesize of last segment 64.00 KiB TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 213 available PASS: lvextend -l+100%VG -t testvg/pool2 [exited with error, as expected] INFO: [2023-01-26 04:20:49] Running: 'lvremove -ff testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:20:50] Running: 'lvcreate -l10 -V8m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. 
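Editor's note: the `--test`/`-t` runs above only preview percent-based sizes; metadata is never committed. For the unstriped pool1 the previewed figures are plain extent arithmetic on the 252-extent VG, which had 229 extents free at that point (per the failing `+100%VG` message). Redoing that math:

```python
import math

PE_MIB = 4       # physical extent size in MiB
vg_extents = 252 # whole VG ("252 extents needed")
current = 18     # pool1 at 72 MiB
free = 229       # free extents when +100%VG failed for pool1

# -l+10%VG: add 10% of the whole VG, rounded up to a whole extent
plus_10pct_vg = current + math.ceil(vg_extents * 0.10)
# -l+100%FREE: absorb every remaining free extent
plus_all_free = current + free
```

These reproduce the previewed 176 MiB (44 extents) and 988 MiB (247 extents), and show why `+100%VG` must fail: the VG has fewer free extents than its total size.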
PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1 INFO: [2023-01-26 04:20:51] Running: 'lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv2" created. PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2 INFO: [2023-01-26 04:20:53] Running: 'lvextend -l4 testvg/lv1'... Size of logical volume testvg/lv1 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -l4 testvg/lv1 PASS: testvg/lv1 lv_size == 16.00m INFO: [2023-01-26 04:20:54] Running: 'lvextend -L24 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -L24 -n testvg/lv1 PASS: testvg/lv1 lv_size == 24.00m INFO: [2023-01-26 04:20:55] Running: 'lvextend -l+100%FREE --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (948.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (916.00 MiB). PASS: lvextend -l+100%FREE --test testvg/lv1 INFO: [2023-01-26 04:20:55] Running: 'lvextend -l+100%PVS --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 
Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.02 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%PVS --test testvg/lv1 INFO: [2023-01-26 04:20:55] Running: 'lvextend -l+50%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (536.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvextend -l+50%VG -t testvg/lv1 INFO: [2023-01-26 04:20:55] Running: 'lvextend -l+120%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (1.21 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+120%VG -t testvg/lv1 INFO: [2023-01-26 04:20:56] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'... 
Discarding device blocks: 0/24576 done Creating filesystem with 24576 1k blocks and 6144 inodes Filesystem UUID: 3c7d7f0d-3b76-4728-8cb4-080a8367313f Superblock backups stored on blocks: 8193 Allocating group tables: 0/3 done Writing inode tables: 0/3 done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: 0/3 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:20:56] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'... INFO: [2023-01-26 04:20:56] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.208589 s, 25.1 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 INFO: [2023-01-26 04:20:56] Running: 'lvextend -l+2 -r testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). File system ext4 found on testvg/lv1 mounted at /mnt/lv. Extending file system ext4 to 32.00 MiB (33554432 bytes) on testvg/lv1... resize2fs /dev/testvg/lv1 Filesystem at /dev/testvg/lv1 is mounted on /mnt/lv; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/lv1 is now 32768 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/lv1. Logical volume testvg/lv1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -r testvg/lv1 PASS: testvg/lv1 lv_size == 32.00m INFO: [2023-01-26 04:20:58] Running: 'lvcreate -K -s testvg/lv1 -n snap1'... Logical volume "snap1" created. PASS: lvcreate -K -s testvg/lv1 -n snap1 INFO: [2023-01-26 04:20:58] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'... 
Discarding device blocks: 0/32768 done Creating filesystem with 32768 1k blocks and 8192 inodes Filesystem UUID: f9174e17-1131-4cc0-8963-e48ce251602d Superblock backups stored on blocks: 8193, 24577 Allocating group tables: 0/4 done Writing inode tables: 0/4 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/4 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:20:59] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'... INFO: [2023-01-26 04:20:59] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.210526 s, 24.9 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 INFO: [2023-01-26 04:20:59] Running: 'lvextend -l+2 -rf testvg/snap1'... Size of logical volume testvg/snap1 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). File system ext4 found on testvg/snap1 mounted at /mnt/snap. Extending file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap1... resize2fs /dev/testvg/snap1 Filesystem at /dev/testvg/snap1 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/snap1 is now 40960 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/snap1. Logical volume testvg/snap1 successfully resized. resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m INFO: [2023-01-26 04:21:00] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 33M 5.1M 26M 17% /mnt/snap INFO: [2023-01-26 04:21:00] Running: 'lvextend -L48 -rf testvg/snap1'... Size of logical volume testvg/snap1 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). WARNING: You have not turned on protection against thin pools running out of space. 
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. File system ext4 found on testvg/snap1 mounted at /mnt/snap. Extending file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap1... resize2fs /dev/testvg/snap1 Filesystem at /dev/testvg/snap1 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/snap1 is now 49152 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/snap1. Logical volume testvg/snap1 successfully resized. WARNING: Sum of all thin volume sizes (88.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -L48 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m INFO: [2023-01-26 04:21:01] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 40M 5.1M 33M 14% /mnt/snap INFO: [2023-01-26 04:21:01] Running: 'umount /mnt/lv'... INFO: [2023-01-26 04:21:02] Running: 'umount /mnt/snap'... INFO: [2023-01-26 04:21:02] Running: 'lvextend -l4 testvg/lv2'... Size of logical volume testvg/lv2 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/lv2 successfully resized. PASS: lvextend -l4 testvg/lv2 PASS: testvg/lv2 lv_size == 16.00m INFO: [2023-01-26 04:21:03] Running: 'lvextend -L24 -n testvg/lv2'... Size of logical volume testvg/lv2 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/lv2 successfully resized. PASS: lvextend -L24 -n testvg/lv2 PASS: testvg/lv2 lv_size == 24.00m INFO: [2023-01-26 04:21:03] Running: 'lvextend -l+100%FREE --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents). WARNING: You have not turned on protection against thin pools running out of space. 
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (1020.00 MiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%FREE --test testvg/lv2 INFO: [2023-01-26 04:21:04] Running: 'lvextend -l+100%PVS --test testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.09 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%PVS --test testvg/lv2 INFO: [2023-01-26 04:21:04] Running: 'lvextend -l+50%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (608.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvextend -l+50%VG -t testvg/lv2 INFO: [2023-01-26 04:21:04] Running: 'lvextend -l+120%VG -t testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents). 
WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.29 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+120%VG -t testvg/lv2 INFO: /mnt/lv already exist INFO: [2023-01-26 04:21:05] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'... Discarding device blocks: 0/24576 done Creating filesystem with 24576 1k blocks and 6144 inodes Filesystem UUID: 13a4bbd4-0910-4d32-8171-11df1c5b2d2d Superblock backups stored on blocks: 8193 Allocating group tables: 0/3 done Writing inode tables: 0/3 done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: 0/3 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:21:05] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'... INFO: [2023-01-26 04:21:05] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.209271 s, 25.1 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5 INFO: [2023-01-26 04:21:05] Running: 'lvextend -l+2 -r testvg/lv2'... Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). File system ext4 found on testvg/lv2 mounted at /mnt/lv. Extending file system ext4 to 32.00 MiB (33554432 bytes) on testvg/lv2... resize2fs /dev/testvg/lv2 Filesystem at /dev/testvg/lv2 is mounted on /mnt/lv; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/lv2 is now 32768 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/lv2. Logical volume testvg/lv2 successfully resized. 
resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -r testvg/lv2 PASS: testvg/lv2 lv_size == 32.00m INFO: [2023-01-26 04:21:06] Running: 'lvcreate -K -s testvg/lv2 -n snap2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap2" created. WARNING: Sum of all thin volume sizes (144.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvcreate -K -s testvg/lv2 -n snap2 INFO: /mnt/snap already exist INFO: [2023-01-26 04:21:07] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'... Discarding device blocks: 0/32768 done Creating filesystem with 32768 1k blocks and 8192 inodes Filesystem UUID: 1c768a54-90eb-46ec-a5ea-0606e00088d8 Superblock backups stored on blocks: 8193, 24577 Allocating group tables: 0/4 done Writing inode tables: 0/4 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/4 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:21:07] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'... INFO: [2023-01-26 04:21:07] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.211275 s, 24.8 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5 INFO: [2023-01-26 04:21:08] Running: 'lvextend -l+2 -rf testvg/snap2'... Size of logical volume testvg/snap2 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. File system ext4 found on testvg/snap2 mounted at /mnt/snap. Extending file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap2... 
resize2fs /dev/testvg/snap2 Filesystem at /dev/testvg/snap2 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/snap2 is now 40960 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/snap2. Logical volume testvg/snap2 successfully resized. WARNING: Sum of all thin volume sizes (152.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -l+2 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 40.00m INFO: [2023-01-26 04:21:09] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 33M 5.1M 26M 17% /mnt/snap INFO: [2023-01-26 04:21:09] Running: 'lvextend -L48 -rf testvg/snap2'... Size of logical volume testvg/snap2 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. File system ext4 found on testvg/snap2 mounted at /mnt/snap. Extending file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap2... resize2fs /dev/testvg/snap2 Filesystem at /dev/testvg/snap2 is mounted on /mnt/snap; on-line resizing required old_desc_blocks = 1, new_desc_blocks = 1 The filesystem on /dev/testvg/snap2 is now 49152 (1k) blocks long. resize2fs done Extended file system ext4 on testvg/snap2. Logical volume testvg/snap2 successfully resized. WARNING: Sum of all thin volume sizes (160.00 MiB) exceeds the size of thin pools (80.00 MiB). resize2fs 1.46.5 (30-Dec-2021) PASS: lvextend -L48 -rf testvg/snap2 PASS: testvg/snap2 lv_size == 48.00m INFO: [2023-01-26 04:21:10] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap2 40M 5.1M 33M 14% /mnt/snap INFO: [2023-01-26 04:21:10] Running: 'umount /mnt/lv'... 
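Every `lv_size` assertion in this run follows from the volume group's physical extent size, which is the LVM default of 4 MiB (consistent with the log: 8 extents = 32.00 MiB, 10 = 40.00 MiB, 12 = 48.00 MiB). A quick sanity check of that arithmetic:

```shell
# lv_size(MiB) = extents * extent size; this VG uses the LVM
# default 4 MiB extent, matching the figures logged above.
extent_mib=4
for extents in 8 10 12; do
    echo "${extents} extents = $(( extents * extent_mib )) MiB"
done
```

So `lvextend -l+2` always grows the LV by exactly 8 MiB here, which is why each `-l+2` step is followed by an `lv_size` check 8 MiB larger.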
INFO: [2023-01-26 04:21:10] Running: 'umount /mnt/snap'... INFO: [2023-01-26 04:21:11] Running: 'vgremove --force testvg'... Logical volume "lv2" successfully removed. Logical volume "snap2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "snap1" successfully removed. Logical volume "pool1" successfully removed. Volume group "testvg" successfully removed INFO: [2023-01-26 04:21:13] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:21:14] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:21:14] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:21:15] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:21:16] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:21:16] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:21:16] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:21:18] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:21:18] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:21:18] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:21:19] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:21:20] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:21:20] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:21:20] Running: 'cat /tmp/previous-tainted'... 
79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:21:20] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:21:20] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:21:20] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:21:20] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:21:21] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:21:21] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:21:21] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:21:21] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2023-01-26 04:21:21] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2023-01-26 04:21:21] Running: 'modprobe -r dm_thin_pool'... 
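The "Checking for errors" phase above greps `dmesg` and the fetched console log for crash signatures. A minimal sketch of that scan, using the two grep patterns visible in the log; the helper name `scan_log` and the file path are hypothetical, not the harness's actual code:

```shell
# Sketch of the post-test error scan: return 0 when the captured
# log is free of crash signatures, non-zero when one is found.
scan_log() {
    ! grep -qiE ' segfault |Call Trace:' "$1"
}

# Usage (path hypothetical):
#   scan_log /root/console.log && echo "PASS: no errors found"
```

The harness additionally diffs the current console log against the previous fetch (`diff -N -n --unidirectional-new-file`) so each test only scans the output produced since the last check.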
################################ Test Summary ##################################
PASS: lvcreate -l2 -T testvg/pool1
PASS: lvcreate -i2 -l2 -T testvg/pool2
PASS: lvextend -l+2 -n testvg/pool1
PASS: testvg/pool1 lv_size == 16.00m
PASS: lvextend -L+8 -n testvg/pool1
PASS: testvg/pool1 lv_size == 24.00m
PASS: lvextend -L+8M -n testvg/pool1
PASS: testvg/pool1 lv_size == 32.00m
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop3
PASS: testvg/pool1 lv_size == 40.00m
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop2:40:41
PASS: testvg/pool1 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)'
PASS: lvextend -l+2 -n testvg/pool1 /dev/loop1:35:37
PASS: testvg/pool1 lv_size == 56.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)'
PASS: lvextend -l16 -n testvg/pool1
PASS: testvg/pool1 lv_size == 64.00m
PASS: lvextend -L72m -n testvg/pool1
PASS: testvg/pool1 lv_size == 72.00m
PASS: lvextend -l+100%FREE --test testvg/pool1
PASS: lvextend -l+10%PVS --test testvg/pool1
PASS: lvextend -l+10%VG -t testvg/pool1
PASS: lvextend -l+100%VG -t testvg/pool1 [exited with error, as expected]
PASS: lvextend -l+2 -n testvg/pool2
PASS: testvg/pool2 lv_size == 16.00m
PASS: lvextend -L+8 -n testvg/pool2
PASS: testvg/pool2 lv_size == 24.00m
PASS: lvextend -L+8M -n testvg/pool2
PASS: testvg/pool2 lv_size == 32.00m
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2
PASS: testvg/pool2 lv_size == 40.00m
PASS: lvextend -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31
PASS: testvg/pool2 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)'
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)'
PASS: lvextend -l16 -n testvg/pool2
PASS: testvg/pool2 lv_size == 64.00m
PASS: lvextend -L72m -n testvg/pool2
PASS: testvg/pool2 lv_size == 72.00m
PASS: lvextend -l+100%FREE --test testvg/pool2
PASS: lvextend -l+10%PVS --test testvg/pool2
PASS: lvextend -l+10%VG -t testvg/pool2
PASS: lvextend -l+100%VG -t testvg/pool2 [exited with error, as expected]
PASS: lvremove -ff testvg
PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2
PASS: lvextend -l4 testvg/lv1
PASS: testvg/lv1 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv1
PASS: testvg/lv1 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv1
PASS: lvextend -l+100%PVS --test testvg/lv1
PASS: lvextend -l+50%VG -t testvg/lv1
PASS: lvextend -l+120%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv1
PASS: testvg/lv1 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: lvextend -l4 testvg/lv2
PASS: testvg/lv2 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv2
PASS: testvg/lv2 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv2
PASS: lvextend -l+100%PVS --test testvg/lv2
PASS: lvextend -l+50%VG -t testvg/lv2
PASS: lvextend -l+120%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv2
PASS: testvg/lv2 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: Search for error on the server
#############################
Total tests that passed: 82
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvreduce-thinp.py'

============================================================================================================== INFO: [2023-01-26 04:21:24] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvreduce-thinp.py'... ################################## Test Init ################################### INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:21:24] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:21:24] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:21:25] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:21:25] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:21:25] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:21:25] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:21:25] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:21:25] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:21:25] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:21:25] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. 
INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:21:26] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77681sec preferred_lft 77681sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591949sec preferred_lft 604749sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:21:26] Running: 'df -h'... 
Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:21:26] Running: 'rpm -q device-mapper-multipath'... package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Reduce test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 256 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:21:27] Running: 'fallocate -l 256M /var/tmp/loop0.img'... INFO: [2023-01-26 04:21:27] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 256 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:21:27] Running: 'fallocate -l 256M /var/tmp/loop1.img'... INFO: [2023-01-26 04:21:28] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 256 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:21:28] Running: 'fallocate -l 256M /var/tmp/loop2.img'... INFO: [2023-01-26 04:21:28] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 256 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:21:28] Running: 'fallocate -l 256M /var/tmp/loop3.img'... INFO: [2023-01-26 04:21:28] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... 
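The loop-device scaffolding above can be reproduced by hand. A sketch using the same paths and sizes as the log; `losetup` and the subsequent `vgcreate` need root, so they are shown commented out:

```shell
# Create four sparse 256 MiB backing files, as the harness does.
for i in 0 1 2 3; do
    fallocate -l 256M "/var/tmp/loop$i.img"
done

# Attaching them as loop devices and building the VG requires root:
#   for i in 0 1 2 3; do losetup "/dev/loop$i" "/var/tmp/loop$i.img"; done
#   vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
```

Teardown mirrors this in reverse, as seen at the end of the previous test: `vgremove --force`, `pvremove`, `losetup -d`, then `rm -f` on the backing files.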
INFO: [2023-01-26 04:21:28] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:21:29] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1 INFO: [2023-01-26 04:21:31] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents). Logical volume "lv2" created. PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2 INFO: [2023-01-26 04:21:32] Running: 'lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1'... PASS: lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1 [exited with error, as expected] INFO: [2023-01-26 04:21:32] Running: 'grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err'... Thin pool volumes testvg/pool1_tdata cannot be reduced in size yet. PASS: grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err INFO: [2023-01-26 04:21:33] Running: 'lvremove -ff testvg'... Logical volume "lv2" successfully removed. Logical volume "pool2" successfully removed. Logical volume "lv1" successfully removed. Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:21:34] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. 
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1 INFO: [2023-01-26 04:21:36] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents). Logical volume "lv2" created. PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2 INFO: [2023-01-26 04:21:37] Running: 'lvreduce -f -l-2 testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents). Logical volume testvg/lv1 successfully resized. PASS: lvreduce -f -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 92.00m INFO: [2023-01-26 04:21:38] Running: 'lvreduce -f -L-8 -n testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents). Logical volume testvg/lv1 successfully resized. PASS: lvreduce -f -L-8 -n testvg/lv1 PASS: testvg/lv1 lv_size == 84.00m INFO: [2023-01-26 04:21:39] Running: 'lvreduce -f -L-8m -n testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents). Logical volume testvg/lv1 successfully resized. PASS: lvreduce -f -L-8m -n testvg/lv1 PASS: testvg/lv1 lv_size == 76.00m INFO: [2023-01-26 04:21:40] Running: 'lvreduce -f -l18 -n testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents). Logical volume testvg/lv1 successfully resized. PASS: lvreduce -f -l18 -n testvg/lv1 PASS: testvg/lv1 lv_size == 72.00m INFO: [2023-01-26 04:21:41] Running: 'lvreduce -f -L64m -n testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents). 
Logical volume testvg/lv1 successfully resized. PASS: lvreduce -f -L64m -n testvg/lv1 PASS: testvg/lv1 lv_size == 64.00m INFO: [2023-01-26 04:21:41] Running: 'lvreduce -f -l-1%FREE --test testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvreduce -f -l-1%FREE --test testvg/lv1 INFO: [2023-01-26 04:21:42] Running: 'lvreduce -f -l-1%PVS --test testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvreduce -f -l-1%PVS --test testvg/lv1 INFO: [2023-01-26 04:21:42] Running: 'lvreduce -f -l-1%VG -t testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvreduce -f -l-1%VG -t testvg/lv1 INFO: [2023-01-26 04:21:42] Running: 'lvreduce -f -l-1%VG -t testvg/lv1'... No file system found on /dev/testvg/lv1. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents). Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvreduce -f -l-1%VG -t testvg/lv1 INFO: /mnt/lv already exist INFO: [2023-01-26 04:21:43] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'... 
Discarding device blocks: 0/65536 done Creating filesystem with 65536 1k blocks and 16384 inodes Filesystem UUID: 5491c93b-731e-44c1-88b9-8cb7b431eff0 Superblock backups stored on blocks: 8193, 24577, 40961, 57345 Allocating group tables: 0/8 done Writing inode tables: 0/8 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/8 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:21:43] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'... INFO: [2023-01-26 04:21:43] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.210072 s, 25.0 MB/s PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5 INFO: [2023-01-26 04:21:43] Running: 'yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1'... File system ext4 found on testvg/lv1 mounted at /mnt/lv. File system size (64.00 MiB) is larger than the requested size (56.00 MiB). File system reduce is required using resize2fs. File system unmount is needed for reduce. File system fsck will be run before reduce. Reducing file system ext4 to 56.00 MiB (58720256 bytes) on testvg/lv1... unmount /mnt/lv unmount done e2fsck /dev/testvg/lv1 /dev/testvg/lv1: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks e2fsck done resize2fs /dev/testvg/lv1 57344k Resizing the filesystem on /dev/testvg/lv1 to 57344 (1k) blocks. The filesystem on /dev/testvg/lv1 is now 57344 (1k) blocks long. resize2fs done remount /dev/testvg/lv1 /mnt/lv remount done Reduced file system ext4 on testvg/lv1. Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents). Logical volume testvg/lv1 successfully resized. Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1 PASS: testvg/lv1 lv_size == 56.00m INFO: [2023-01-26 04:21:45] Running: 'lvcreate -K -s testvg/lv1 -n snap1'... 
WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (212.00 MiB) exceeds the size of thin pools (204.00 MiB). PASS: lvcreate -K -s testvg/lv1 -n snap1 INFO: /mnt/snap already exist INFO: [2023-01-26 04:21:45] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'... Discarding device blocks: 0/57344 done Creating filesystem with 57344 1k blocks and 14336 inodes Filesystem UUID: ab7536e3-662d-45b6-a987-57d158abf209 Superblock backups stored on blocks: 8193, 24577, 40961 Allocating group tables: 0/7 done Writing inode tables: 0/7 done Creating journal (4096 blocks): done Writing superblocks and filesystem accounting information: 0/7 done mke2fs 1.46.5 (30-Dec-2021) INFO: [2023-01-26 04:21:46] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'... INFO: [2023-01-26 04:21:46] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'... 5+0 records in 5+0 records out 5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.20905 s, 25.1 MB/s PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5 INFO: [2023-01-26 04:21:46] Running: 'yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1'... File system ext4 found on testvg/snap1 mounted at /mnt/snap. File system size (56.00 MiB) is larger than the requested size (48.00 MiB). File system reduce is required using resize2fs. File system unmount is needed for reduce. File system fsck will be run before reduce. Reducing file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap1... unmount /mnt/snap unmount done e2fsck /dev/testvg/snap1 /dev/testvg/snap1: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks e2fsck done resize2fs /dev/testvg/snap1 49152k Resizing the filesystem on /dev/testvg/snap1 to 49152 (1k) blocks. The filesystem on /dev/testvg/snap1 is now 49152 (1k) blocks long. 
resize2fs done remount /dev/testvg/snap1 /mnt/snap remount done Reduced file system ext4 on testvg/snap1. Size of logical volume testvg/snap1 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents). Logical volume testvg/snap1 successfully resized. Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 48.00m INFO: [2023-01-26 04:21:47] Running: 'df -h /mnt/snap'... Filesystem Size Used Avail Use% Mounted on /dev/mapper/testvg-snap1 40M 5.1M 32M 14% /mnt/snap INFO: [2023-01-26 04:21:48] Running: 'yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1'... File system ext4 found on testvg/snap1 mounted at /mnt/snap. File system size (48.00 MiB) is larger than the requested size (40.00 MiB). File system reduce is required using resize2fs. File system unmount is needed for reduce. File system fsck will be run before reduce. Reducing file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap1... unmount /mnt/snap unmount done e2fsck /dev/testvg/snap1 /dev/testvg/snap1: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks e2fsck done resize2fs /dev/testvg/snap1 40960k Resizing the filesystem on /dev/testvg/snap1 to 40960 (1k) blocks. The filesystem on /dev/testvg/snap1 is now 40960 (1k) blocks long. resize2fs done remount /dev/testvg/snap1 /mnt/snap remount done Reduced file system ext4 on testvg/snap1. Size of logical volume testvg/snap1 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents). Logical volume testvg/snap1 successfully resized. Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021) PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1 PASS: testvg/snap1 lv_size == 40.00m INFO: [2023-01-26 04:21:49] Running: 'df -h /mnt/snap'... 
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   33M  5.1M   25M  17% /mnt/snap
INFO: [2023-01-26 04:21:49] Running: 'umount /mnt/lv'...
INFO: [2023-01-26 04:21:49] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:21:49] Running: 'lvreduce -f -l-2 testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvreduce -f -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 92.00m
INFO: [2023-01-26 04:21:50] Running: 'lvreduce -f -L-8 -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvreduce -f -L-8 -n testvg/lv2
PASS: testvg/lv2 lv_size == 84.00m
INFO: [2023-01-26 04:21:51] Running: 'lvreduce -f -L-8m -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvreduce -f -L-8m -n testvg/lv2
PASS: testvg/lv2 lv_size == 76.00m
INFO: [2023-01-26 04:21:52] Running: 'lvreduce -f -l18 -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvreduce -f -l18 -n testvg/lv2
PASS: testvg/lv2 lv_size == 72.00m
INFO: [2023-01-26 04:21:53] Running: 'lvreduce -f -L64m -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvreduce -f -L64m -n testvg/lv2
PASS: testvg/lv2 lv_size == 64.00m
INFO: [2023-01-26 04:21:53] Running: 'lvreduce -f -l-1%FREE --test testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvreduce -f -l-1%FREE --test testvg/lv2
INFO: [2023-01-26 04:21:54] Running: 'lvreduce -f -l-1%PVS --test testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvreduce -f -l-1%PVS --test testvg/lv2
INFO: [2023-01-26 04:21:54] Running: 'lvreduce -f -l-1%VG -t testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvreduce -f -l-1%VG -t testvg/lv2
INFO: [2023-01-26 04:21:54] Running: 'lvreduce -f -l-1%VG -t testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvreduce -f -l-1%VG -t testvg/lv2
INFO: /mnt/lv already exist
INFO: [2023-01-26 04:21:55] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'...
Discarding device blocks: 0/65536 done
Creating filesystem with 65536 1k blocks and 16384 inodes
Filesystem UUID: 5c56c76c-e88b-4ebc-8c76-c1d278bccd3f
Superblock backups stored on blocks: 8193, 24577, 40961, 57345
Allocating group tables: 0/8 done
Writing inode tables: 0/8 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/8 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:21:55] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'...
INFO: [2023-01-26 04:21:55] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.208542 s, 25.1 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
INFO: [2023-01-26 04:21:55] Running: 'yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2'...
File system ext4 found on testvg/lv2 mounted at /mnt/lv.
File system size (64.00 MiB) is larger than the requested size (56.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 56.00 MiB (58720256 bytes) on testvg/lv2...
unmount /mnt/lv
unmount done
e2fsck /dev/testvg/lv2
/dev/testvg/lv2: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks
e2fsck done
resize2fs /dev/testvg/lv2 57344k
Resizing the filesystem on /dev/testvg/lv2 to 57344 (1k) blocks.
The filesystem on /dev/testvg/lv2 is now 57344 (1k) blocks long.
resize2fs done
remount /dev/testvg/lv2 /mnt/lv
remount done
Reduced file system ext4 on testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents).
Logical volume testvg/lv2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:
resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 56.00m
INFO: [2023-01-26 04:21:57] Running: 'lvcreate -K -s testvg/lv2 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (208.00 MiB) exceeds the size of thin pools (204.00 MiB).
PASS: lvcreate -K -s testvg/lv2 -n snap2
INFO: /mnt/snap already exist
INFO: [2023-01-26 04:21:57] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'...
Discarding device blocks: 0/57344 done
Creating filesystem with 57344 1k blocks and 14336 inodes
Filesystem UUID: 677ebada-bd10-4326-9069-3feeb4d11084
Superblock backups stored on blocks: 8193, 24577, 40961
Allocating group tables: 0/7 done
Writing inode tables: 0/7 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/7 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:21:57] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'...
INFO: [2023-01-26 04:21:58] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.213627 s, 24.5 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
INFO: [2023-01-26 04:21:58] Running: 'yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2'...
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
File system size (56.00 MiB) is larger than the requested size (48.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap2...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap2
/dev/testvg/snap2: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks
e2fsck done
resize2fs /dev/testvg/snap2 49152k
Resizing the filesystem on /dev/testvg/snap2 to 49152 (1k) blocks.
The filesystem on /dev/testvg/snap2 is now 49152 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap2 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap2.
Size of logical volume testvg/snap2 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents).
Logical volume testvg/snap2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:
resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
INFO: [2023-01-26 04:21:59] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   40M  5.1M   32M  14% /mnt/snap
INFO: [2023-01-26 04:21:59] Running: 'yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2'...
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
File system size (48.00 MiB) is larger than the requested size (40.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap2...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap2
/dev/testvg/snap2: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks
e2fsck done
resize2fs /dev/testvg/snap2 40960k
Resizing the filesystem on /dev/testvg/snap2 to 40960 (1k) blocks.
The filesystem on /dev/testvg/snap2 is now 40960 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap2 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap2.
Size of logical volume testvg/snap2 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents).
Logical volume testvg/snap2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:
resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
INFO: [2023-01-26 04:22:01] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   33M  5.1M   25M  17% /mnt/snap
INFO: [2023-01-26 04:22:01] Running: 'umount /mnt/lv'...
INFO: [2023-01-26 04:22:01] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:22:01] Running: 'vgremove --force testvg'...
Logical volume "lv2" successfully removed.
Logical volume "snap2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "snap1" successfully removed.
Logical volume "pool1" successfully removed.
Volume group "testvg" successfully removed
INFO: [2023-01-26 04:22:03] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:22:05] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:22:05] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:22:05] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:22:06] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:22:07] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:22:07] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:22:08] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:22:08] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2023-01-26 04:22:09] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2023-01-26 04:22:10] Running: 'losetup -d /dev/loop3'...
INFO: [2023-01-26 04:22:10] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:22:10] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:22:10] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:22:10] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:22:11] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:11] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:22:11] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:22:11] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:22:11] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:22:11] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:11] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:22:12] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:22:12] Running: 'modprobe -r dm_thin_pool'...
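The test above repeatedly exercises `lvreduce -r`, which wraps a fixed manual sequence for a mounted ext4 LV: unmount, fsck, resize2fs, reduce the LV, remount. The sketch below is a dry-run helper that only prints that sequence (names and the 49152k target are illustrative values taken from the log; the real commands need root and a live LV, so nothing is executed here):

```shell
#!/bin/sh
# Dry-run sketch of the steps `lvreduce -r` automates for an ext4 LV.
# Arguments: VG name, LV name, mount point, new size in KiB (examples only).
reduce_steps() {
    vg=$1; lv=$2; mnt=$3; new_kb=$4
    printf 'umount %s\n' "$mnt"
    printf 'e2fsck -f /dev/%s/%s\n' "$vg" "$lv"
    printf 'resize2fs /dev/%s/%s %sk\n' "$vg" "$lv" "$new_kb"
    printf 'lvreduce -f -L %sk %s/%s\n' "$new_kb" "$vg" "$lv"
    printf 'mount /dev/%s/%s %s\n' "$vg" "$lv" "$mnt"
}

# Print the sequence for the snap1 reduction seen in the log.
reduce_steps testvg snap1 /mnt/snap 49152
```

The ordering matters: the filesystem must be shrunk before the LV, or the reduced LV would truncate live filesystem blocks.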
################################ Test Summary ##################################
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
PASS: lvreduce -l-1 testvg/pool1 > /tmp/reduce_pool.err 2>&1 [exited with error, as expected]
PASS: grep -e 'Thin pool volumes .*cannot be reduced in size yet' /tmp/reduce_pool.err
PASS: lvremove -ff testvg
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
PASS: lvreduce -f -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 92.00m
PASS: lvreduce -f -L-8 -n testvg/lv1
PASS: testvg/lv1 lv_size == 84.00m
PASS: lvreduce -f -L-8m -n testvg/lv1
PASS: testvg/lv1 lv_size == 76.00m
PASS: lvreduce -f -l18 -n testvg/lv1
PASS: testvg/lv1 lv_size == 72.00m
PASS: lvreduce -f -L64m -n testvg/lv1
PASS: testvg/lv1 lv_size == 64.00m
PASS: lvreduce -f -l-1%FREE --test testvg/lv1
PASS: lvreduce -f -l-1%PVS --test testvg/lv1
PASS: lvreduce -f -l-1%VG -t testvg/lv1
PASS: lvreduce -f -l-1%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvreduce -f -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 92.00m
PASS: lvreduce -f -L-8 -n testvg/lv2
PASS: testvg/lv2 lv_size == 84.00m
PASS: lvreduce -f -L-8m -n testvg/lv2
PASS: testvg/lv2 lv_size == 76.00m
PASS: lvreduce -f -l18 -n testvg/lv2
PASS: testvg/lv2 lv_size == 72.00m
PASS: lvreduce -f -L64m -n testvg/lv2
PASS: testvg/lv2 lv_size == 64.00m
PASS: lvreduce -f -l-1%FREE --test testvg/lv2
PASS: lvreduce -f -l-1%PVS --test testvg/lv2
PASS: lvreduce -f -l-1%VG -t testvg/lv2
PASS: lvreduce -f -l-1%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -rf -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvreduce -l-2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: yes 2>/dev/null | lvreduce -L40 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: Search for error on the server
#############################
Total tests that passed: 54
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvremove-thinp.py'
==============================================================================================================
INFO: [2023-01-26 04:22:14] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvremove-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:22:15] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:22:15] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:22:15] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:22:15] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:15] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:22:15] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:22:16] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:22:16] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:22:16] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:16] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:22:16] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77630sec preferred_lft 77630sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591899sec preferred_lft 604699sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:22:16] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.2G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:22:16] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64)
################################################################################
INFO: Starting Thin Remove test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 256
INFO: Creating file /var/tmp/loop0.img
INFO: [2023-01-26 04:22:17] Running: 'fallocate -l 256M /var/tmp/loop0.img'...
INFO: [2023-01-26 04:22:18] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 256
INFO: Creating file /var/tmp/loop1.img
INFO: [2023-01-26 04:22:18] Running: 'fallocate -l 256M /var/tmp/loop1.img'...
INFO: [2023-01-26 04:22:18] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 256
INFO: Creating file /var/tmp/loop2.img
INFO: [2023-01-26 04:22:18] Running: 'fallocate -l 256M /var/tmp/loop2.img'...
INFO: [2023-01-26 04:22:18] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 256
INFO: Creating file /var/tmp/loop3.img
INFO: [2023-01-26 04:22:18] Running: 'fallocate -l 256M /var/tmp/loop3.img'...
INFO: [2023-01-26 04:22:19] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2023-01-26 04:22:19] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2023-01-26 04:22:19] Running: 'lvcreate -l20 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
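The harness builds its disposable test VG out of loop devices backed by sparse files: fallocate an image, attach it with losetup, then vgcreate over the loop devices. A dry-run sketch of that setup (paths, the 256M size, and the `testvg` name mirror the log; the helper only prints the commands, since the real ones need root):

```shell
#!/bin/sh
# Dry-run sketch of the loop-backed PV/VG setup performed by the harness.
# First argument: image size in MiB; remaining arguments: loop indices.
setup_cmds() {
    size_mb=$1; shift
    for i in "$@"; do
        printf 'fallocate -l %sM /var/tmp/loop%s.img\n' "$size_mb" "$i"
        printf 'losetup /dev/loop%s /var/tmp/loop%s.img\n' "$i" "$i"
    done
    # One vgcreate over all the loop devices, as in the log.
    printf 'vgcreate --force testvg'
    for i in "$@"; do printf ' /dev/loop%s' "$i"; done
    printf '\n'
}

setup_cmds 256 0 1 2 3
```

On a real system the loop device name should come from `losetup -f` rather than being hard-coded, since /dev/loop0..3 may already be in use.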
PASS: lvcreate -l20 -T testvg/pool
INFO: [2023-01-26 04:22:20] Running: 'yes 2>/dev/null | lvremove testvg/pool'...
Logical volume "pool" successfully removed.
Do you really want to remove active logical volume testvg/pool? [y/n]:
PASS: yes 2>/dev/null | lvremove testvg/pool
INFO: [2023-01-26 04:22:21] Running: 'lvcreate -l20 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l20 -T testvg/pool
INFO: [2023-01-26 04:22:22] Running: 'lvremove -f testvg/pool'...
Logical volume "pool" successfully removed.
PASS: lvremove -f testvg/pool
INFO: [2023-01-26 04:22:22] Running: 'lvcreate -l20 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l20 -T testvg/pool
INFO: [2023-01-26 04:22:23] Running: 'lvremove -ff testvg/pool'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg/pool
INFO: [2023-01-26 04:22:24] Running: 'lvcreate -l20 -T testvg/pool'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool" created.
PASS: lvcreate -l20 -T testvg/pool
INFO: [2023-01-26 04:22:25] Running: 'lvremove -ff testvg'...
Logical volume "pool" successfully removed.
PASS: lvremove -ff testvg
INFO: [2023-01-26 04:22:25] Running: 'lvcreate -l20 -V 100m -T testvg/pool -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv1" created.
WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -l20 -V 100m -T testvg/pool -n lv1
INFO: [2023-01-26 04:22:27] Running: 'lvremove -ff testvg/lv1'...
Logical volume "lv1" successfully removed.
PASS: lvremove -ff testvg/lv1
INFO: [2023-01-26 04:22:28] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv1" created.
WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv1
INFO: [2023-01-26 04:22:28] Running: 'lvcreate -V 100m -T testvg/pool -n lv2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv2" created.
WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv2
INFO: [2023-01-26 04:22:29] Running: 'lvcreate -V 100m -T testvg/pool -n lv3'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv3" created.
WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv3
INFO: [2023-01-26 04:22:29] Running: 'lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3'...
Logical volume "lv1" successfully removed.
Logical volume "lv2" successfully removed.
Logical volume "lv3" successfully removed.
PASS: lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3
PASS: testvg/pool data_percent == 0.00
INFO: [2023-01-26 04:22:31] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv1" created.
WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv1
INFO: [2023-01-26 04:22:31] Running: 'lvcreate -V 100m -T testvg/pool -n lv2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv2" created.
WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv2
INFO: [2023-01-26 04:22:32] Running: 'lvcreate -V 100m -T testvg/pool -n lv3'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv3" created.
WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv3
INFO: [2023-01-26 04:22:32] Running: 'lvremove -ff /dev/testvg/lv[1-3]'...
Logical volume "lv1" successfully removed.
Logical volume "lv2" successfully removed.
Logical volume "lv3" successfully removed.
PASS: lvremove -ff /dev/testvg/lv[1-3]
INFO: [2023-01-26 04:22:33] Running: 'lvcreate -V 100m -T testvg/pool -n lv1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv1" created.
WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -V 100m -T testvg/pool -n lv1
INFO: [2023-01-26 04:22:34] Running: 'lvcreate -s testvg/lv1 -n snap1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap1" created.
WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -s testvg/lv1 -n snap1
INFO: [2023-01-26 04:22:34] Running: 'lvcreate -s testvg/snap1 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (80.00 MiB).
PASS: lvcreate -s testvg/snap1 -n snap2
INFO: [2023-01-26 04:22:35] Running: 'lvremove -ff testvg/snap1 testvg/snap2'...
Logical volume "snap1" successfully removed.
Logical volume "snap2" successfully removed.
PASS: lvremove -ff testvg/snap1 testvg/snap2
INFO: [2023-01-26 04:22:35] Running: 'vgremove --force testvg'...
Logical volume "lv1" successfully removed.
Logical volume "pool" successfully removed.
Volume group "testvg" successfully removed
INFO: [2023-01-26 04:22:37] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:22:38] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:22:38] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:22:38] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:22:40] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:22:40] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:22:40] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:22:41] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:22:42] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2023-01-26 04:22:42] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2023-01-26 04:22:43] Running: 'losetup -d /dev/loop3'...
INFO: [2023-01-26 04:22:43] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:22:43] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:22:44] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:22:44] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:22:44] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:44] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:22:44] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:22:44] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:22:44] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:22:45] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:22:45] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... INFO: [2023-01-26 04:22:45] Running: 'modprobe -r dm_snapshot'... module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2023-01-26 04:22:45] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -l20 -T testvg/pool PASS: yes 2>/dev/null | lvremove testvg/pool PASS: lvcreate -l20 -T testvg/pool PASS: lvremove -f testvg/pool PASS: lvcreate -l20 -T testvg/pool PASS: lvremove -ff testvg/pool PASS: lvcreate -l20 -T testvg/pool PASS: lvremove -ff testvg PASS: lvcreate -l20 -V 100m -T testvg/pool -n lv1 PASS: lvremove -ff testvg/lv1 PASS: lvcreate -V 100m -T testvg/pool -n lv1 PASS: lvcreate -V 100m -T testvg/pool -n lv2 PASS: lvcreate -V 100m -T testvg/pool -n lv3 PASS: lvremove -ff testvg/lv1 testvg/lv2 testvg/lv3 PASS: testvg/pool data_percent == 0.00 PASS: lvcreate -V 100m -T testvg/pool -n lv1 PASS: lvcreate -V 100m -T testvg/pool -n lv2 PASS: lvcreate -V 100m -T testvg/pool -n lv3 PASS: lvremove -ff /dev/testvg/lv[1-3] PASS: lvcreate -V 100m -T testvg/pool -n lv1 PASS: lvcreate -s testvg/lv1 -n snap1 PASS: lvcreate -s testvg/snap1 -n snap2 PASS: lvremove -ff testvg/snap1 testvg/snap2 PASS: Search for error on the server ############################# Total tests that passed: 24 Total tests that failed: 0 Total tests that skipped: 0 ################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvrename-thinp.py' ============================================================================================================== INFO: [2023-01-26 04:22:47] Running: '/opt/stqe-venv/bin/python3 
/opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvrename-thinp.py'... ################################## Test Init ################################### INFO: Checking for errors on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:22:48] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:22:48] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel taint has already been handled INFO: Checking abrt for errors WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:22:48] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages have been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:22:48] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:22:49] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:22:49] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:22:49] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:22:49] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:22:49] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:22:49] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:22:49] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77597sec preferred_lft 77597sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591866sec preferred_lft 604666sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:22:50] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:22:50] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting Thin Rename test ################################################################################ INFO: Creating loop device /var/tmp/loop0.img with size 128 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:22:51] Running: 'fallocate -l 128M /var/tmp/loop0.img'... INFO: [2023-01-26 04:22:51] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 128 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:22:51] Running: 'fallocate -l 128M /var/tmp/loop1.img'... INFO: [2023-01-26 04:22:51] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 128 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:22:51] Running: 'fallocate -l 128M /var/tmp/loop2.img'... INFO: [2023-01-26 04:22:52] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 128 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:22:52] Running: 'fallocate -l 128M /var/tmp/loop3.img'... INFO: [2023-01-26 04:22:52] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... INFO: [2023-01-26 04:22:52] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created INFO: [2023-01-26 04:22:52] Running: 'lvcreate -l20 -V100M -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. 
WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv1" created. WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool1 (80.00 MiB). PASS: lvcreate -l20 -V100M -T testvg/pool1 -n lv1 INFO: [2023-01-26 04:22:54] Running: 'lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pools (160.00 MiB). PASS: lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2 INFO: [2023-01-26 04:22:56] Running: 'lvs -a testvg'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert lv1 testvg Vwi-a-tz-- 100.00m pool1 0.00 lv2 testvg Vwi-a-tz-- 100.00m pool2 0.00 [lvol0_pmspare] testvg ewi------- 4.00m pool1 testvg twi-aotz-- 80.00m 0.00 10.94 [pool1_tdata] testvg Twi-ao---- 80.00m [pool1_tmeta] testvg ewi-ao---- 4.00m pool2 testvg twi-aotz-- 80.00m 0.00 10.94 [pool2_tdata] testvg Twi-ao---- 80.00m [pool2_tmeta] testvg ewi-ao---- 4.00m PASS: lvs -a testvg INFO: [2023-01-26 04:22:56] Running: 'lvrename testvg pool1 bakpool1'... Renamed "pool1" to "bakpool1" in volume group "testvg" PASS: lvrename testvg pool1 bakpool1 INFO: [2023-01-26 04:22:57] Running: 'lvs testvg/bakpool1'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert bakpool1 testvg twi-aotz-- 80.00m 0.00 10.94 PASS: lvs testvg/bakpool1 INFO: [2023-01-26 04:22:57] Running: 'lvs testvg/pool1'... 
Failed to find logical volume "testvg/pool1" PASS: lvs testvg/pool1 [exited with error, as expected] PASS: testvg/lv1 pool_lv == bakpool1 INFO: [2023-01-26 04:22:57] Running: 'lvrename testvg lv1 baklv1'... Renamed "lv1" to "baklv1" in volume group "testvg" PASS: lvrename testvg lv1 baklv1 INFO: [2023-01-26 04:22:58] Running: 'lvs testvg/baklv1'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert baklv1 testvg Vwi-a-tz-- 100.00m bakpool1 0.00 PASS: lvs testvg/baklv1 INFO: [2023-01-26 04:22:58] Running: 'lvs testvg/lv1'... Failed to find logical volume "testvg/lv1" PASS: lvs testvg/lv1 [exited with error, as expected] PASS: testvg/baklv1 pool_lv == bakpool1 INFO: [2023-01-26 04:22:59] Running: 'lvrename testvg pool2 bakpool2'... Renamed "pool2" to "bakpool2" in volume group "testvg" PASS: lvrename testvg pool2 bakpool2 INFO: [2023-01-26 04:22:59] Running: 'lvs testvg/bakpool2'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert bakpool2 testvg twi-aotz-- 80.00m 0.00 10.94 PASS: lvs testvg/bakpool2 INFO: [2023-01-26 04:23:00] Running: 'lvs testvg/pool2'... Failed to find logical volume "testvg/pool2" PASS: lvs testvg/pool2 [exited with error, as expected] PASS: testvg/lv2 pool_lv == bakpool2 INFO: [2023-01-26 04:23:00] Running: 'lvrename testvg lv2 baklv2'... Renamed "lv2" to "baklv2" in volume group "testvg" PASS: lvrename testvg lv2 baklv2 INFO: [2023-01-26 04:23:01] Running: 'lvs testvg/baklv2'... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert baklv2 testvg Vwi-a-tz-- 100.00m bakpool2 0.00 PASS: lvs testvg/baklv2 INFO: [2023-01-26 04:23:01] Running: 'lvs testvg/lv2'... Failed to find logical volume "testvg/lv2" PASS: lvs testvg/lv2 [exited with error, as expected] PASS: testvg/baklv2 pool_lv == bakpool2 INFO: [2023-01-26 04:23:02] Running: 'vgremove --force testvg'... Logical volume "baklv2" successfully removed. Logical volume "bakpool2" successfully removed. Logical volume "baklv1" successfully removed. 
Logical volume "bakpool1" successfully removed. Volume group "testvg" successfully removed INFO: [2023-01-26 04:23:04] Running: 'pvremove /dev/loop0'... Labels on physical volume "/dev/loop0" successfully wiped. INFO: Deleting loop device /dev/loop0 INFO: [2023-01-26 04:23:05] Running: 'losetup -d /dev/loop0'... INFO: [2023-01-26 04:23:05] Running: 'rm -f /var/tmp/loop0.img'... INFO: [2023-01-26 04:23:05] Running: 'pvremove /dev/loop1'... Labels on physical volume "/dev/loop1" successfully wiped. INFO: Deleting loop device /dev/loop1 INFO: [2023-01-26 04:23:07] Running: 'losetup -d /dev/loop1'... INFO: [2023-01-26 04:23:07] Running: 'rm -f /var/tmp/loop1.img'... INFO: [2023-01-26 04:23:07] Running: 'pvremove /dev/loop2'... Labels on physical volume "/dev/loop2" successfully wiped. INFO: Deleting loop device /dev/loop2 INFO: [2023-01-26 04:23:08] Running: 'losetup -d /dev/loop2'... INFO: [2023-01-26 04:23:09] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:23:09] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:23:10] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:23:10] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for errors on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:23:10] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:23:10] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel taint has already been handled INFO: Checking abrt for errors WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:23:11] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages have been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:23:11] Running: 'dmesg | grep -i ' segfault ''... 
INFO: [2023-01-26 04:23:11] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:23:11] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:23:11] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:23:11] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:23:11] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:23:12] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_thin_pool' was loaded during the test. Unloading it... INFO: [2023-01-26 04:23:12] Running: 'modprobe -r dm_thin_pool'... ################################ Test Summary ################################## PASS: lvcreate -l20 -V100M -T testvg/pool1 -n lv1 PASS: lvcreate -i 2 -l 20 -V 100M -T testvg/pool2 -n lv2 PASS: lvs -a testvg PASS: lvrename testvg pool1 bakpool1 PASS: lvs testvg/bakpool1 PASS: lvs testvg/pool1 [exited with error, as expected] PASS: testvg/lv1 pool_lv == bakpool1 PASS: lvrename testvg lv1 baklv1 PASS: lvs testvg/baklv1 PASS: lvs testvg/lv1 [exited with error, as expected] PASS: testvg/baklv1 pool_lv == bakpool1 PASS: lvrename testvg pool2 bakpool2 PASS: lvs testvg/bakpool2 PASS: lvs testvg/pool2 [exited with error, as expected] PASS: testvg/lv2 pool_lv == bakpool2 PASS: lvrename testvg lv2 baklv2 PASS: lvs testvg/baklv2 PASS: lvs testvg/lv2 [exited with error, as expected] PASS: testvg/baklv2 pool_lv == bakpool2 PASS: Search for error on the server ############################# Total tests that passed: 20 Total tests that failed: 0 Total tests that skipped: 0 
################################################################################ PASS: test pass ============================================================================================================== Running test 'lvm/thinp/lvresize-thinp.py' ============================================================================================================== INFO: [2023-01-26 04:23:14] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvresize-thinp.py'... ################################## Test Init ################################### INFO: Checking for errors on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:23:15] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:23:15] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel taint has already been handled INFO: Checking abrt for errors WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:23:15] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages have been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:23:15] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:23:15] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:23:15] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:23:15] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:23:16] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:23:16] Running: 'cat /root/console.log | grep -i ' segfault ''... 
INFO: [2023-01-26 04:23:16] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:23:16] Running: 'ip a'... 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77571sec preferred_lft 77571sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591839sec preferred_lft 604639sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:23:16] Running: 'df -h'... 
Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:23:16] Running: 'rpm -q device-mapper-multipath'... package device-mapper-multipath is not installed ################################################################################ ################################################################################ INFO: Starting Thin Resize test ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) INFO: Creating loop device /var/tmp/loop0.img with size 256 INFO: Creating file /var/tmp/loop0.img INFO: [2023-01-26 04:23:17] Running: 'fallocate -l 256M /var/tmp/loop0.img'... INFO: [2023-01-26 04:23:18] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'... INFO: Creating loop device /var/tmp/loop1.img with size 256 INFO: Creating file /var/tmp/loop1.img INFO: [2023-01-26 04:23:18] Running: 'fallocate -l 256M /var/tmp/loop1.img'... INFO: [2023-01-26 04:23:18] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'... INFO: Creating loop device /var/tmp/loop2.img with size 256 INFO: Creating file /var/tmp/loop2.img INFO: [2023-01-26 04:23:18] Running: 'fallocate -l 256M /var/tmp/loop2.img'... INFO: [2023-01-26 04:23:18] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'... INFO: Creating loop device /var/tmp/loop3.img with size 256 INFO: Creating file /var/tmp/loop3.img INFO: [2023-01-26 04:23:18] Running: 'fallocate -l 256M /var/tmp/loop3.img'... INFO: [2023-01-26 04:23:19] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'... 
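The per-device setup the harness repeats above (backing file, loop device, then one volume group over all four devices) can be condensed into a small helper. This is a sketch, not the harness's actual code: the function name `setup_loop_pvs` is hypothetical, and because `losetup` and `vgcreate` require root, the sketch only prints the commands it would run.

```shell
# Hypothetical helper mirroring the harness's loop-backed PV setup.
# For each index: allocate a backing file and attach it to a loop
# device; finally, initialize all devices as one volume group.
# Commands are printed, not executed (losetup/vgcreate need root).
setup_loop_pvs() {
    size_mb=$1; shift
    devs=""
    for i in "$@"; do
        echo "fallocate -l ${size_mb}M /var/tmp/loop${i}.img"
        echo "losetup /dev/loop${i} /var/tmp/loop${i}.img"
        devs="${devs} /dev/loop${i}"
    done
    echo "vgcreate --force testvg${devs}"
}

# 4 devices of 256 MiB each, as in this test run.
setup_loop_pvs 256 0 1 2 3
```

With the default 4 MiB extent size this yields roughly 1008 MiB of usable VG space, which matches the "1008.00 MiB" whole-volume-group figure reported later in the run.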
INFO: [2023-01-26 04:23:19] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'... Physical volume "/dev/loop0" successfully created. Physical volume "/dev/loop1" successfully created. Physical volume "/dev/loop2" successfully created. Physical volume "/dev/loop3" successfully created. Volume group "testvg" successfully created ################################################################################ INFO: Starting extend test ################################################################################ INFO: [2023-01-26 04:23:19] Running: 'lvcreate -l2 -T testvg/pool1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool1" created. PASS: lvcreate -l2 -T testvg/pool1 INFO: [2023-01-26 04:23:20] Running: 'lvcreate -i2 -l2 -T testvg/pool2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "pool2" created. PASS: lvcreate -i2 -l2 -T testvg/pool2 INFO: [2023-01-26 04:23:21] Running: 'lvresize -l+2 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 PASS: testvg/pool1 lv_size == 16.00m INFO: [2023-01-26 04:23:22] Running: 'lvresize -L+8 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -L+8 -n testvg/pool1 PASS: testvg/pool1 lv_size == 24.00m INFO: [2023-01-26 04:23:22] Running: 'lvresize -L+8M -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool1 successfully resized. 
PASS: lvresize -L+8M -n testvg/pool1 PASS: testvg/pool1 lv_size == 32.00m INFO: [2023-01-26 04:23:23] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop3'... Size of logical volume testvg/pool1_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop3 PASS: testvg/pool1 lv_size == 40.00m INFO: [2023-01-26 04:23:24] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41'... Size of logical volume testvg/pool1_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41 PASS: testvg/pool1 lv_size == 48.00m INFO: [2023-01-26 04:23:25] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)''... testvg [pool1_tdata] /dev/loop2(40) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)' INFO: [2023-01-26 04:23:25] Running: 'lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37'... Size of logical volume testvg/pool1_tdata changed from 48.00 MiB (12 extents) to 56.00 MiB (14 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37 PASS: testvg/pool1 lv_size == 56.00m INFO: [2023-01-26 04:23:26] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)''... testvg [pool1_tdata] /dev/loop1(35) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)' INFO: [2023-01-26 04:23:26] Running: 'lvresize -l16 -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 56.00 MiB (14 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool1 successfully resized. PASS: lvresize -l16 -n testvg/pool1 PASS: testvg/pool1 lv_size == 64.00m INFO: [2023-01-26 04:23:27] Running: 'lvresize -L72m -n testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). 
Logical volume testvg/pool1 successfully resized. PASS: lvresize -L72m -n testvg/pool1 PASS: testvg/pool1 lv_size == 72.00m INFO: [2023-01-26 04:23:27] Running: 'lvresize -l+100%FREE --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 988.00 MiB (247 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+100%FREE --test testvg/pool1 INFO: [2023-01-26 04:23:28] Running: 'lvresize -l+10%PVS --test testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%PVS --test testvg/pool1 INFO: [2023-01-26 04:23:28] Running: 'lvresize -l+10%VG -t testvg/pool1'... Size of logical volume testvg/pool1_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%VG -t testvg/pool1 INFO: [2023-01-26 04:23:28] Running: 'lvresize -l+100%VG -t testvg/pool1'... TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 229 available PASS: lvresize -l+100%VG -t testvg/pool1 [exited with error, as expected] INFO: [2023-01-26 04:23:29] Running: 'lvresize -l+2 -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 PASS: testvg/pool2 lv_size == 16.00m INFO: [2023-01-26 04:23:29] Running: 'lvresize -L+8 -n testvg/pool2'... 
Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L+8 -n testvg/pool2 PASS: testvg/pool2 lv_size == 24.00m INFO: [2023-01-26 04:23:30] Running: 'lvresize -L+8M -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L+8M -n testvg/pool2 PASS: testvg/pool2 lv_size == 32.00m INFO: [2023-01-26 04:23:31] Running: 'lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2 PASS: testvg/pool2 lv_size == 40.00m INFO: [2023-01-26 04:23:32] Running: 'lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31 PASS: testvg/pool2 lv_size == 48.00m INFO: [2023-01-26 04:23:32] Running: 'pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)' INFO: [2023-01-26 04:23:33] Running: 'pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)''... testvg [pool2_tdata] /dev/loop1(30),/dev/loop2(20) PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)' INFO: [2023-01-26 04:23:33] Running: 'lvresize -l16 -n testvg/pool2'... 
Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 48.00 MiB (12 extents) to 64.00 MiB (16 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -l16 -n testvg/pool2 PASS: testvg/pool2 lv_size == 64.00m INFO: [2023-01-26 04:23:34] Running: 'lvresize -L72m -n testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 64.00 MiB (16 extents) to 72.00 MiB (18 extents). Logical volume testvg/pool2 successfully resized. PASS: lvresize -L72m -n testvg/pool2 PASS: testvg/pool2 lv_size == 72.00m INFO: [2023-01-26 04:23:35] Running: 'lvresize -l+100%FREE --test testvg/pool2'... Using stripesize of last segment 64.00 KiB Rounding size (231 extents) down to stripe boundary size for segment (230 extents) Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 896.00 MiB (224 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+100%FREE --test testvg/pool2 INFO: [2023-01-26 04:23:35] Running: 'lvresize -l+10%PVS --test testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%PVS --test testvg/pool2 INFO: [2023-01-26 04:23:35] Running: 'lvresize -l+10%VG -t testvg/pool2'... Using stripesize of last segment 64.00 KiB Size of logical volume testvg/pool2_tdata changed from 72.00 MiB (18 extents) to 176.00 MiB (44 extents). Logical volume testvg/pool2 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. PASS: lvresize -l+10%VG -t testvg/pool2 INFO: [2023-01-26 04:23:35] Running: 'lvresize -l+100%VG -t testvg/pool2'... 
Using stripesize of last segment 64.00 KiB TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. Insufficient free space: 252 extents needed, but only 213 available PASS: lvresize -l+100%VG -t testvg/pool2 [exited with error, as expected] INFO: [2023-01-26 04:23:36] Running: 'lvremove -ff testvg'... Logical volume "pool2" successfully removed. Logical volume "pool1" successfully removed. PASS: lvremove -ff testvg INFO: [2023-01-26 04:23:37] Running: 'lvcreate -l10 -V8m -T testvg/pool1 -n lv1'... Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv1" created. PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1 INFO: [2023-01-26 04:23:39] Running: 'lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2'... Using default stripesize 64.00 KiB. Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv2" created. PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2 INFO: [2023-01-26 04:23:40] Running: 'lvextend -l4 testvg/lv1'... Size of logical volume testvg/lv1 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -l4 testvg/lv1 PASS: testvg/lv1 lv_size == 16.00m INFO: [2023-01-26 04:23:41] Running: 'lvextend -L24 -n testvg/lv1'... Size of logical volume testvg/lv1 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents). Logical volume testvg/lv1 successfully resized. PASS: lvextend -L24 -n testvg/lv1 PASS: testvg/lv1 lv_size == 24.00m INFO: [2023-01-26 04:23:42] Running: 'lvextend -l+100%FREE --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. 
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (948.00 MiB) exceeds the size of thin pools and the amount of free space in volume group (916.00 MiB). PASS: lvextend -l+100%FREE --test testvg/lv1 INFO: [2023-01-26 04:23:42] Running: 'lvextend -l+100%PVS --test testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (<1.02 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB). PASS: lvextend -l+100%PVS --test testvg/lv1 INFO: [2023-01-26 04:23:42] Running: 'lvextend -l+50%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume testvg/lv1 successfully resized. TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated. WARNING: Sum of all thin volume sizes (536.00 MiB) exceeds the size of thin pools (80.00 MiB). PASS: lvextend -l+50%VG -t testvg/lv1 INFO: [2023-01-26 04:23:43] Running: 'lvextend -l+120%VG -t testvg/lv1'... Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents). WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (1.21 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+120%VG -t testvg/lv1
INFO: /mnt/lv already exists
INFO: [2023-01-26 04:23:43] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'...
Discarding device blocks: 0/24576 done
Creating filesystem with 24576 1k blocks and 6144 inodes
Filesystem UUID: 8d8c58e0-dfaa-4929-b1a4-c0d1092e02e4
Superblock backups stored on blocks: 8193
Allocating group tables: 0/3 done
Writing inode tables: 0/3 done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: 0/3 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:23:43] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'...
INFO: [2023-01-26 04:23:43] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.209542 s, 25.0 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
INFO: [2023-01-26 04:23:44] Running: 'lvextend -l+2 -r testvg/lv1'...
Size of logical volume testvg/lv1 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents).
File system ext4 found on testvg/lv1 mounted at /mnt/lv.
Extending file system ext4 to 32.00 MiB (33554432 bytes) on testvg/lv1...
resize2fs /dev/testvg/lv1
Filesystem at /dev/testvg/lv1 is mounted on /mnt/lv; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/lv1 is now 32768 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/lv1.
Logical volume testvg/lv1 successfully resized.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -r testvg/lv1
PASS: testvg/lv1 lv_size == 32.00m
INFO: [2023-01-26 04:23:45] Running: 'lvcreate -K -s testvg/lv1 -n snap1'...
Logical volume "snap1" created.
PASS: lvcreate -K -s testvg/lv1 -n snap1
INFO: /mnt/snap already exists
INFO: [2023-01-26 04:23:45] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'...
Discarding device blocks: 0/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: 6c0267be-44fa-4495-84ff-abd186800fdb
Superblock backups stored on blocks: 8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/4 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:23:46] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'...
INFO: [2023-01-26 04:23:46] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.211724 s, 24.8 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
INFO: [2023-01-26 04:23:46] Running: 'lvextend -l+2 -rf testvg/snap1'...
Size of logical volume testvg/snap1 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents).
File system ext4 found on testvg/snap1 mounted at /mnt/snap.
Extending file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap1...
resize2fs /dev/testvg/snap1
Filesystem at /dev/testvg/snap1 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/snap1 is now 40960 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/snap1.
Logical volume testvg/snap1 successfully resized.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
INFO: [2023-01-26 04:23:47] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   33M  5.1M   26M  17% /mnt/snap
INFO: [2023-01-26 04:23:47] Running: 'lvextend -L48 -rf testvg/snap1'...
Size of logical volume testvg/snap1 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
File system ext4 found on testvg/snap1 mounted at /mnt/snap.
Extending file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap1...
resize2fs /dev/testvg/snap1
Filesystem at /dev/testvg/snap1 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/snap1 is now 49152 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/snap1.
Logical volume testvg/snap1 successfully resized.
WARNING: Sum of all thin volume sizes (88.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -L48 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
INFO: [2023-01-26 04:23:48] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   40M  5.1M   33M  14% /mnt/snap
INFO: [2023-01-26 04:23:48] Running: 'umount /mnt/lv'...
INFO: [2023-01-26 04:23:48] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:23:49] Running: 'lvextend -l4 testvg/lv2'...
Size of logical volume testvg/lv2 changed from 8.00 MiB (2 extents) to 16.00 MiB (4 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvextend -l4 testvg/lv2
PASS: testvg/lv2 lv_size == 16.00m
INFO: [2023-01-26 04:23:50] Running: 'lvextend -L24 -n testvg/lv2'...
Size of logical volume testvg/lv2 changed from 16.00 MiB (4 extents) to 24.00 MiB (6 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvextend -L24 -n testvg/lv2
PASS: testvg/lv2 lv_size == 24.00m
INFO: [2023-01-26 04:23:50] Running: 'lvextend -l+100%FREE --test testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 940.00 MiB (235 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (1020.00 MiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+100%FREE --test testvg/lv2
INFO: [2023-01-26 04:23:51] Running: 'lvextend -l+100%PVS --test testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.01 GiB (258 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (<1.09 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+100%PVS --test testvg/lv2
INFO: [2023-01-26 04:23:51] Running: 'lvextend -l+50%VG -t testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 528.00 MiB (132 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (608.00 MiB) exceeds the size of thin pools (80.00 MiB).
PASS: lvextend -l+50%VG -t testvg/lv2
INFO: [2023-01-26 04:23:51] Running: 'lvextend -l+120%VG -t testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to <1.21 GiB (309 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
WARNING: Sum of all thin volume sizes (<1.29 GiB) exceeds the size of thin pools and the size of whole volume group (1008.00 MiB).
PASS: lvextend -l+120%VG -t testvg/lv2
INFO: /mnt/lv already exists
INFO: [2023-01-26 04:23:51] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'...
Discarding device blocks: 0/24576 done
Creating filesystem with 24576 1k blocks and 6144 inodes
Filesystem UUID: cf844aa2-141a-48ed-bfa7-af5fa871ec4a
Superblock backups stored on blocks: 8193
Allocating group tables: 0/3 done
Writing inode tables: 0/3 done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: 0/3 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:23:52] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'...
INFO: [2023-01-26 04:23:52] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.209216 s, 25.1 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
INFO: [2023-01-26 04:23:52] Running: 'lvextend -l+2 -r testvg/lv2'...
Size of logical volume testvg/lv2 changed from 24.00 MiB (6 extents) to 32.00 MiB (8 extents).
File system ext4 found on testvg/lv2 mounted at /mnt/lv.
Extending file system ext4 to 32.00 MiB (33554432 bytes) on testvg/lv2...
resize2fs /dev/testvg/lv2
Filesystem at /dev/testvg/lv2 is mounted on /mnt/lv; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/lv2 is now 32768 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/lv2.
Logical volume testvg/lv2 successfully resized.
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -r testvg/lv2
PASS: testvg/lv2 lv_size == 32.00m
INFO: [2023-01-26 04:23:53] Running: 'lvcreate -K -s testvg/lv2 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (144.00 MiB) exceeds the size of thin pools (80.00 MiB).
PASS: lvcreate -K -s testvg/lv2 -n snap2
INFO: /mnt/snap already exists
INFO: [2023-01-26 04:23:54] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'...
Discarding device blocks: 0/32768 done
Creating filesystem with 32768 1k blocks and 8192 inodes
Filesystem UUID: e3e2f8b7-ab92-44c1-9e18-4bc33d032375
Superblock backups stored on blocks: 8193, 24577
Allocating group tables: 0/4 done
Writing inode tables: 0/4 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/4 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:23:54] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'...
INFO: [2023-01-26 04:23:54] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.210279 s, 24.9 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
INFO: [2023-01-26 04:23:55] Running: 'lvextend -l+2 -rf testvg/snap2'...
Size of logical volume testvg/snap2 changed from 32.00 MiB (8 extents) to 40.00 MiB (10 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
Extending file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap2...
resize2fs /dev/testvg/snap2
Filesystem at /dev/testvg/snap2 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/snap2 is now 40960 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/snap2.
Logical volume testvg/snap2 successfully resized.
WARNING: Sum of all thin volume sizes (152.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -l+2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
INFO: [2023-01-26 04:23:56] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   33M  5.1M   26M  17% /mnt/snap
INFO: [2023-01-26 04:23:56] Running: 'lvextend -L48 -rf testvg/snap2'...
Size of logical volume testvg/snap2 changed from 40.00 MiB (10 extents) to 48.00 MiB (12 extents).
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
Extending file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap2...
resize2fs /dev/testvg/snap2
Filesystem at /dev/testvg/snap2 is mounted on /mnt/snap; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/testvg/snap2 is now 49152 (1k) blocks long.
resize2fs done
Extended file system ext4 on testvg/snap2.
Logical volume testvg/snap2 successfully resized.
WARNING: Sum of all thin volume sizes (160.00 MiB) exceeds the size of thin pools (80.00 MiB).
resize2fs 1.46.5 (30-Dec-2021)
PASS: lvextend -L48 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
INFO: [2023-01-26 04:23:57] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   40M  5.1M   33M  14% /mnt/snap
INFO: [2023-01-26 04:23:57] Running: 'umount /mnt/lv'...
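The extend phase above boils down to a small set of lvextend variants run against a thin volume. A minimal sketch of that sequence, assuming a VG and thin LV laid out as in the log (the helper name `extend_checks` is ours, not the harness's; the `--test`/`-t` runs are why the log shows "TEST MODE" lines):

```shell
# Sketch of the lvextend variants exercised above against a thin LV.
# Assumes the VG/LV already exist (testvg/lv1 in the log) and root privileges
# when run for real; --test/-t makes LVM simulate the resize without
# updating metadata.
extend_checks() {
    lv="$1"
    lvextend -l4 "$lv"                  # absolute size, in extents
    lvextend -L24 -n "$lv"              # absolute size in MiB; -n skips fs resize
    lvextend -l+100%FREE --test "$lv"   # grow into all free VG space (simulated)
    lvextend -l+100%PVS --test "$lv"    # relative to total PV size (simulated)
    lvextend -l+50%VG -t "$lv"          # relative to whole-VG size (simulated)
    lvextend -l+2 -r "$lv"              # -r also grows the mounted filesystem
}
```

Usage would be e.g. `extend_checks testvg/lv1`; note that percentage-based requests can exceed what the VG holds, which is why forms like `+120%VG` only "succeed" under test mode while drawing the over-provisioning warnings seen above.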
INFO: [2023-01-26 04:23:57] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:23:57] Running: 'lvremove -ff testvg'...
Logical volume "lv2" successfully removed.
Logical volume "snap2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "snap1" successfully removed.
Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
################################################################################
INFO: Starting reduce test
################################################################################
INFO: [2023-01-26 04:23:59] Running: 'lvcreate -L400M -T testvg/pool1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool1" created.
PASS: lvcreate -L400M -T testvg/pool1
INFO: [2023-01-26 04:24:00] Running: 'lvresize -l-2 -n testvg/pool1'...
Thin pool volumes testvg/pool1_tdata cannot be reduced in size yet.
PASS: lvresize -l-2 -n testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 lv_size == 400.00m
INFO: [2023-01-26 04:24:00] Running: 'lvremove -ff testvg'...
Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
INFO: [2023-01-26 04:24:01] Running: 'lvcreate -L100m -V100m -T testvg/pool1 -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "lv1" created.
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
INFO: [2023-01-26 04:24:03] Running: 'lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2'...
Using default stripesize 64.00 KiB.
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
Logical volume "lv2" created.
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
INFO: [2023-01-26 04:24:05] Running: 'lvresize -f -l-2 testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvresize -f -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 92.00m
INFO: [2023-01-26 04:24:05] Running: 'lvresize -f -L-8 -n testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvresize -f -L-8 -n testvg/lv1
PASS: testvg/lv1 lv_size == 84.00m
INFO: [2023-01-26 04:24:06] Running: 'lvresize -f -L-8m -n testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvresize -f -L-8m -n testvg/lv1
PASS: testvg/lv1 lv_size == 76.00m
INFO: [2023-01-26 04:24:07] Running: 'lvresize -f -l18 -n testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvresize -f -l18 -n testvg/lv1
PASS: testvg/lv1 lv_size == 72.00m
INFO: [2023-01-26 04:24:08] Running: 'lvresize -f -L64m -n testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents).
Logical volume testvg/lv1 successfully resized.
PASS: lvresize -f -L64m -n testvg/lv1
PASS: testvg/lv1 lv_size == 64.00m
INFO: [2023-01-26 04:24:09] Running: 'lvresize -f -l-1%FREE --test testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents).
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%FREE --test testvg/lv1
INFO: [2023-01-26 04:24:09] Running: 'lvresize -f -l-1%PVS --test testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%PVS --test testvg/lv1
INFO: [2023-01-26 04:24:09] Running: 'lvresize -f -l-1%VG -t testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%VG -t testvg/lv1
INFO: [2023-01-26 04:24:09] Running: 'lvresize -f -l-1%VG -t testvg/lv1'...
No file system found on /dev/testvg/lv1.
Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv1 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%VG -t testvg/lv1
INFO: /mnt/lv already exists
INFO: [2023-01-26 04:24:10] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv1'...
Discarding device blocks: 0/65536 done
Creating filesystem with 65536 1k blocks and 16384 inodes
Filesystem UUID: d625d8e7-445b-4317-9103-b3219f017514
Superblock backups stored on blocks: 8193, 24577, 40961, 57345
Allocating group tables: 0/8 done
Writing inode tables: 0/8 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/8 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:24:10] Running: 'mount /dev/mapper/testvg-lv1 /mnt/lv'...
INFO: [2023-01-26 04:24:10] Running: 'dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.211345 s, 24.8 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
INFO: [2023-01-26 04:24:11] Running: 'yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1'...
File system ext4 found on testvg/lv1 mounted at /mnt/lv.
File system size (64.00 MiB) is larger than the requested size (56.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 56.00 MiB (58720256 bytes) on testvg/lv1...
unmount /mnt/lv
unmount done
e2fsck /dev/testvg/lv1
/dev/testvg/lv1: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks
e2fsck done
resize2fs /dev/testvg/lv1 57344k
Resizing the filesystem on /dev/testvg/lv1 to 57344 (1k) blocks.
The filesystem on /dev/testvg/lv1 is now 57344 (1k) blocks long.
resize2fs done
remount /dev/testvg/lv1 /mnt/lv
remount done
Reduced file system ext4 on testvg/lv1.
Size of logical volume testvg/lv1 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents).
Logical volume testvg/lv1 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 56.00m
INFO: [2023-01-26 04:24:12] Running: 'lvcreate -K -s testvg/lv1 -n snap1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap1" created.
WARNING: Sum of all thin volume sizes (212.00 MiB) exceeds the size of thin pools (204.00 MiB).
PASS: lvcreate -K -s testvg/lv1 -n snap1
INFO: /mnt/snap already exists
INFO: [2023-01-26 04:24:12] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap1'...
Discarding device blocks: 0/57344 done
Creating filesystem with 57344 1k blocks and 14336 inodes
Filesystem UUID: e6c50197-1893-4a50-ad26-2029e25fae3a
Superblock backups stored on blocks: 8193, 24577, 40961
Allocating group tables: 0/7 done
Writing inode tables: 0/7 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/7 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:24:13] Running: 'mount /dev/mapper/testvg-snap1 /mnt/snap'...
INFO: [2023-01-26 04:24:13] Running: 'dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.211761 s, 24.8 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
INFO: [2023-01-26 04:24:13] Running: 'yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1'...
File system ext4 found on testvg/snap1 mounted at /mnt/snap.
File system size (56.00 MiB) is larger than the requested size (48.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap1...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap1
/dev/testvg/snap1: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks
e2fsck done
resize2fs /dev/testvg/snap1 49152k
Resizing the filesystem on /dev/testvg/snap1 to 49152 (1k) blocks.
The filesystem on /dev/testvg/snap1 is now 49152 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap1 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap1.
Size of logical volume testvg/snap1 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents).
Logical volume testvg/snap1 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
INFO: [2023-01-26 04:24:15] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   40M  5.1M   32M  14% /mnt/snap
INFO: [2023-01-26 04:24:15] Running: 'yes 2>/dev/null | lvresize -L40 -rf testvg/snap1'...
File system ext4 found on testvg/snap1 mounted at /mnt/snap.
File system size (48.00 MiB) is larger than the requested size (40.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap1...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap1
/dev/testvg/snap1: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks
e2fsck done
resize2fs /dev/testvg/snap1 40960k
Resizing the filesystem on /dev/testvg/snap1 to 40960 (1k) blocks.
The filesystem on /dev/testvg/snap1 is now 40960 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap1 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap1.
Size of logical volume testvg/snap1 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents).
Logical volume testvg/snap1 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
INFO: [2023-01-26 04:24:16] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap1   33M  5.1M   25M  17% /mnt/snap
INFO: [2023-01-26 04:24:16] Running: 'umount /mnt/lv'...
INFO: [2023-01-26 04:24:16] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:24:16] Running: 'lvresize -f -l-2 testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 100.00 MiB (25 extents) to 92.00 MiB (23 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvresize -f -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 92.00m
INFO: [2023-01-26 04:24:17] Running: 'lvresize -f -L-8 -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 92.00 MiB (23 extents) to 84.00 MiB (21 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvresize -f -L-8 -n testvg/lv2
PASS: testvg/lv2 lv_size == 84.00m
INFO: [2023-01-26 04:24:18] Running: 'lvresize -f -L-8m -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 84.00 MiB (21 extents) to 76.00 MiB (19 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvresize -f -L-8m -n testvg/lv2
PASS: testvg/lv2 lv_size == 76.00m
INFO: [2023-01-26 04:24:19] Running: 'lvresize -f -l18 -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 76.00 MiB (19 extents) to 72.00 MiB (18 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvresize -f -l18 -n testvg/lv2
PASS: testvg/lv2 lv_size == 72.00m
INFO: [2023-01-26 04:24:20] Running: 'lvresize -f -L64m -n testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 72.00 MiB (18 extents) to 64.00 MiB (16 extents).
Logical volume testvg/lv2 successfully resized.
PASS: lvresize -f -L64m -n testvg/lv2
PASS: testvg/lv2 lv_size == 64.00m
INFO: [2023-01-26 04:24:20] Running: 'lvresize -f -l-1%FREE --test testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 4.00 MiB (1 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%FREE --test testvg/lv2
INFO: [2023-01-26 04:24:21] Running: 'lvresize -f -l-1%PVS --test testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%PVS --test testvg/lv2
INFO: [2023-01-26 04:24:21] Running: 'lvresize -f -l-1%VG -t testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%VG -t testvg/lv2
INFO: [2023-01-26 04:24:21] Running: 'lvresize -f -l-1%VG -t testvg/lv2'...
No file system found on /dev/testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 8.00 MiB (2 extents).
Logical volume testvg/lv2 successfully resized.
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
PASS: lvresize -f -l-1%VG -t testvg/lv2
INFO: /mnt/lv already exists
INFO: [2023-01-26 04:24:22] Running: 'mkfs.ext4 -F /dev/mapper/testvg-lv2'...
Discarding device blocks: 0/65536 done
Creating filesystem with 65536 1k blocks and 16384 inodes
Filesystem UUID: 19295aa0-fa99-4ba5-a2fc-a4381a91843e
Superblock backups stored on blocks: 8193, 24577, 40961, 57345
Allocating group tables: 0/8 done
Writing inode tables: 0/8 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/8 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:24:22] Running: 'mount /dev/mapper/testvg-lv2 /mnt/lv'...
INFO: [2023-01-26 04:24:22] Running: 'dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.211553 s, 24.8 MB/s
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
INFO: [2023-01-26 04:24:22] Running: 'yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2'...
File system ext4 found on testvg/lv2 mounted at /mnt/lv.
File system size (64.00 MiB) is larger than the requested size (56.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 56.00 MiB (58720256 bytes) on testvg/lv2...
unmount /mnt/lv
unmount done
e2fsck /dev/testvg/lv2
/dev/testvg/lv2: 12/16384 files (8.3% non-contiguous), 14633/65536 blocks
e2fsck done
resize2fs /dev/testvg/lv2 57344k
Resizing the filesystem on /dev/testvg/lv2 to 57344 (1k) blocks.
The filesystem on /dev/testvg/lv2 is now 57344 (1k) blocks long.
resize2fs done
remount /dev/testvg/lv2 /mnt/lv
remount done
Reduced file system ext4 on testvg/lv2.
Size of logical volume testvg/lv2 changed from 64.00 MiB (16 extents) to 56.00 MiB (14 extents).
Logical volume testvg/lv2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 56.00m
INFO: [2023-01-26 04:24:24] Running: 'lvcreate -K -s testvg/lv2 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (208.00 MiB) exceeds the size of thin pools (204.00 MiB).
PASS: lvcreate -K -s testvg/lv2 -n snap2
INFO: /mnt/snap already exists
INFO: [2023-01-26 04:24:24] Running: 'mkfs.ext4 -F /dev/mapper/testvg-snap2'...
Discarding device blocks: 0/57344 done
Creating filesystem with 57344 1k blocks and 14336 inodes
Filesystem UUID: 866488f6-006d-4a38-bd70-ce6b2122dfd9
Superblock backups stored on blocks: 8193, 24577, 40961
Allocating group tables: 0/7 done
Writing inode tables: 0/7 done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: 0/7 done
mke2fs 1.46.5 (30-Dec-2021)
INFO: [2023-01-26 04:24:25] Running: 'mount /dev/mapper/testvg-snap2 /mnt/snap'...
INFO: [2023-01-26 04:24:25] Running: 'dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5'...
5+0 records in
5+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.210223 s, 24.9 MB/s
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
INFO: [2023-01-26 04:24:25] Running: 'yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2'...
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
File system size (56.00 MiB) is larger than the requested size (48.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 48.00 MiB (50331648 bytes) on testvg/snap2...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap2
/dev/testvg/snap2: 12/14336 files (8.3% non-contiguous), 13861/57344 blocks
e2fsck done
resize2fs /dev/testvg/snap2 49152k
Resizing the filesystem on /dev/testvg/snap2 to 49152 (1k) blocks.
The filesystem on /dev/testvg/snap2 is now 49152 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap2 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap2.
Size of logical volume testvg/snap2 changed from 56.00 MiB (14 extents) to 48.00 MiB (12 extents).
Logical volume testvg/snap2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
INFO: [2023-01-26 04:24:26] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   40M  5.1M   32M  14% /mnt/snap
INFO: [2023-01-26 04:24:27] Running: 'yes 2>/dev/null | lvresize -L40 -rf testvg/snap2'...
File system ext4 found on testvg/snap2 mounted at /mnt/snap.
File system size (48.00 MiB) is larger than the requested size (40.00 MiB).
File system reduce is required using resize2fs.
File system unmount is needed for reduce.
File system fsck will be run before reduce.
Reducing file system ext4 to 40.00 MiB (41943040 bytes) on testvg/snap2...
unmount /mnt/snap
unmount done
e2fsck /dev/testvg/snap2
/dev/testvg/snap2: 12/12288 files (8.3% non-contiguous), 13347/49152 blocks
e2fsck done
resize2fs /dev/testvg/snap2 40960k
Resizing the filesystem on /dev/testvg/snap2 to 40960 (1k) blocks.
The filesystem on /dev/testvg/snap2 is now 40960 (1k) blocks long.
resize2fs done
remount /dev/testvg/snap2 /mnt/snap
remount done
Reduced file system ext4 on testvg/snap2.
Size of logical volume testvg/snap2 changed from 48.00 MiB (12 extents) to 40.00 MiB (10 extents).
Logical volume testvg/snap2 successfully resized.
Continue with ext4 file system reduce steps: unmount, fsck, resize2fs? [y/n]:resize2fs 1.46.5 (30-Dec-2021)
PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
INFO: [2023-01-26 04:24:28] Running: 'df -h /mnt/snap'...
Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/testvg-snap2   33M  5.1M   25M  17% /mnt/snap
INFO: [2023-01-26 04:24:28] Running: 'umount /mnt/lv'...
INFO: [2023-01-26 04:24:28] Running: 'umount /mnt/snap'...
INFO: [2023-01-26 04:24:28] Running: 'lvremove -ff testvg'...
Logical volume "lv2" successfully removed.
Logical volume "snap2" successfully removed.
Logical volume "pool2" successfully removed.
Logical volume "lv1" successfully removed.
Logical volume "snap1" successfully removed.
Logical volume "pool1" successfully removed.
PASS: lvremove -ff testvg
INFO: [2023-01-26 04:24:30] Running: 'vgremove --force testvg'...
Volume group "testvg" successfully removed
INFO: [2023-01-26 04:24:31] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:24:32] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:24:32] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:24:32] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:24:34] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:24:34] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:24:34] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:24:36] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:24:36] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2023-01-26 04:24:36] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2023-01-26 04:24:37] Running: 'losetup -d /dev/loop3'...
INFO: [2023-01-26 04:24:37] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:24:38] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:24:38] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:24:38] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:24:38] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:24:38] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:24:38] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:24:38] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:24:39] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
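The taint check above compares `/proc/sys/kernel/tainted` against the value cached in `/tmp/previous-tainted`; the number is a bitmask, not a counter. A sketch of decoding the logged value 79872 (the bit-to-letter mapping follows the kernel's Documentation/admin-guide/tainted-kernels.rst and is illustrative, covering only the bits set here):

```python
# Decode the kernel taint bitmask seen in this log (79872).
# Letters per Documentation/admin-guide/tainted-kernels.rst; only the bits
# actually set in 79872 are mapped below (an assumption, not exhaustive).
TAINT_LETTERS = {11: "I", 12: "O", 13: "E", 16: "X"}

def taint_bits(value: int) -> list:
    """Return the set bit positions of a taint value, lowest first."""
    return [bit for bit in range(32) if value & (1 << bit)]

bits = taint_bits(79872)
print(bits)  # [11, 12, 13, 16]
print("".join(TAINT_LETTERS.get(b, "?") for b in bits))
```

Since the value matches the cached one, the harness treats the taint as pre-existing rather than caused by the test.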
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:24:39] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:24:39] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:24:39] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:24:39] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l2 -T testvg/pool1
PASS: lvcreate -i2 -l2 -T testvg/pool2
PASS: lvresize -l+2 -n testvg/pool1
PASS: testvg/pool1 lv_size == 16.00m
PASS: lvresize -L+8 -n testvg/pool1
PASS: testvg/pool1 lv_size == 24.00m
PASS: lvresize -L+8M -n testvg/pool1
PASS: testvg/pool1 lv_size == 32.00m
PASS: lvresize -l+2 -n testvg/pool1 /dev/loop3
PASS: testvg/pool1 lv_size == 40.00m
PASS: lvresize -l+2 -n testvg/pool1 /dev/loop2:40:41
PASS: testvg/pool1 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(40)'
PASS: lvresize -l+2 -n testvg/pool1 /dev/loop1:35:37
PASS: testvg/pool1 lv_size == 56.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(35)'
PASS: lvresize -l16 -n testvg/pool1
PASS: testvg/pool1 lv_size == 64.00m
PASS: lvresize -L72m -n testvg/pool1
PASS: testvg/pool1 lv_size == 72.00m
PASS: lvresize -l+100%FREE --test testvg/pool1
PASS: lvresize -l+10%PVS --test testvg/pool1
PASS: lvresize -l+10%VG -t testvg/pool1
PASS: lvresize -l+100%VG -t testvg/pool1 [exited with error, as expected]
PASS: lvresize -l+2 -n testvg/pool2
PASS: testvg/pool2 lv_size == 16.00m
PASS: lvresize -L+8 -n testvg/pool2
PASS: testvg/pool2 lv_size == 24.00m
PASS: lvresize -L+8M -n testvg/pool2
PASS: testvg/pool2 lv_size == 32.00m
PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1 /dev/loop2
PASS: testvg/pool2 lv_size == 40.00m
PASS: lvresize -l+2 -n testvg/pool2 /dev/loop1:30-41 /dev/loop2:20-31
PASS: testvg/pool2 lv_size == 48.00m
PASS: pvs -ovg_name,lv_name,devices /dev/loop1 | grep '/dev/loop1(30)'
PASS: pvs -ovg_name,lv_name,devices /dev/loop2 | grep '/dev/loop2(20)'
PASS: lvresize -l16 -n testvg/pool2
PASS: testvg/pool2 lv_size == 64.00m
PASS: lvresize -L72m -n testvg/pool2
PASS: testvg/pool2 lv_size == 72.00m
PASS: lvresize -l+100%FREE --test testvg/pool2
PASS: lvresize -l+10%PVS --test testvg/pool2
PASS: lvresize -l+10%VG -t testvg/pool2
PASS: lvresize -l+100%VG -t testvg/pool2 [exited with error, as expected]
PASS: lvremove -ff testvg
PASS: lvcreate -l10 -V8m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -l10 -V8m -T testvg/pool2 -n lv2
PASS: lvextend -l4 testvg/lv1
PASS: testvg/lv1 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv1
PASS: testvg/lv1 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv1
PASS: lvextend -l+100%PVS --test testvg/lv1
PASS: lvextend -l+50%VG -t testvg/lv1
PASS: lvextend -l+120%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv1
PASS: testvg/lv1 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: lvextend -l4 testvg/lv2
PASS: testvg/lv2 lv_size == 16.00m
PASS: lvextend -L24 -n testvg/lv2
PASS: testvg/lv2 lv_size == 24.00m
PASS: lvextend -l+100%FREE --test testvg/lv2
PASS: lvextend -l+100%PVS --test testvg/lv2
PASS: lvextend -l+50%VG -t testvg/lv2
PASS: lvextend -l+120%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: lvextend -l+2 -r testvg/lv2
PASS: testvg/lv2 lv_size == 32.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: lvextend -l+2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: lvextend -L48 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: lvremove -ff testvg
PASS: lvcreate -L400M -T testvg/pool1
PASS: lvresize -l-2 -n testvg/pool1 [exited with error, as expected]
PASS: testvg/pool1 lv_size == 400.00m
PASS: lvremove -ff testvg
PASS: lvcreate -L100m -V100m -T testvg/pool1 -n lv1
PASS: lvcreate -i2 -L100m -V100m -T testvg/pool2 -n lv2
PASS: lvresize -f -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 92.00m
PASS: lvresize -f -L-8 -n testvg/lv1
PASS: testvg/lv1 lv_size == 84.00m
PASS: lvresize -f -L-8m -n testvg/lv1
PASS: testvg/lv1 lv_size == 76.00m
PASS: lvresize -f -l18 -n testvg/lv1
PASS: testvg/lv1 lv_size == 72.00m
PASS: lvresize -f -L64m -n testvg/lv1
PASS: testvg/lv1 lv_size == 64.00m
PASS: lvresize -f -l-1%FREE --test testvg/lv1
PASS: lvresize -f -l-1%PVS --test testvg/lv1
PASS: lvresize -f -l-1%VG -t testvg/lv1
PASS: lvresize -f -l-1%VG -t testvg/lv1
PASS: dd if=/dev/urandom of=/mnt/lv/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv1
PASS: testvg/lv1 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv1 -n snap1
PASS: dd if=/dev/urandom of=/mnt/snap/lv1 bs=1M count=5
PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 48.00m
PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap1
PASS: testvg/snap1 lv_size == 40.00m
PASS: lvresize -f -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 92.00m
PASS: lvresize -f -L-8 -n testvg/lv2
PASS: testvg/lv2 lv_size == 84.00m
PASS: lvresize -f -L-8m -n testvg/lv2
PASS: testvg/lv2 lv_size == 76.00m
PASS: lvresize -f -l18 -n testvg/lv2
PASS: testvg/lv2 lv_size == 72.00m
PASS: lvresize -f -L64m -n testvg/lv2
PASS: testvg/lv2 lv_size == 64.00m
PASS: lvresize -f -l-1%FREE --test testvg/lv2
PASS: lvresize -f -l-1%PVS --test testvg/lv2
PASS: lvresize -f -l-1%VG -t testvg/lv2
PASS: lvresize -f -l-1%VG -t testvg/lv2
PASS: dd if=/dev/urandom of=/mnt/lv/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvresize -rf -l-2 testvg/lv2
PASS: testvg/lv2 lv_size == 56.00m
PASS: lvcreate -K -s testvg/lv2 -n snap2
PASS: dd if=/dev/urandom of=/mnt/snap/lv2 bs=1M count=5
PASS: yes 2>/dev/null | lvresize -l-2 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 48.00m
PASS: yes 2>/dev/null | lvresize -L40 -rf testvg/snap2
PASS: testvg/snap2 lv_size == 40.00m
PASS: lvremove -ff testvg
PASS: Search for error on the server
#############################
Total tests that passed: 136
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvscan-thinp.py'
==============================================================================================================
INFO: [2023-01-26 04:24:42] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvscan-thinp.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:24:42] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:24:42] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:24:42] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:24:43] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:24:43] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:24:43] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:24:43] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:24:43] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:24:43] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:24:43] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:24:44] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77483sec preferred_lft 77483sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591943sec preferred_lft 604743sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:24:44] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.2G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:24:44] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64)
################################################################################
INFO: Starting LV Scan Thin Provisioning test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2023-01-26 04:24:45] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2023-01-26 04:24:45] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2023-01-26 04:24:45] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2023-01-26 04:24:45] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2023-01-26 04:24:45] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2023-01-26 04:24:46] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2023-01-26 04:24:46] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2023-01-26 04:24:46] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2023-01-26 04:24:46] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2023-01-26 04:24:46] Running: 'lvcreate -V100m -l10 -T testvg/pool -n lv1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "lv1" created.
WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB).
PASS: lvcreate -V100m -l10 -T testvg/pool -n lv1
INFO: [2023-01-26 04:24:48] Running: 'lvcreate -s testvg/lv1 -n snap1'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap1" created.
WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB).
PASS: lvcreate -s testvg/lv1 -n snap1
INFO: [2023-01-26 04:24:49] Running: 'lvcreate -s testvg/snap1 -n snap2'...
WARNING: You have not turned on protection against thin pools running out of space.
WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
Logical volume "snap2" created.
WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool (40.00 MiB).
PASS: lvcreate -s testvg/snap1 -n snap2
INFO: [2023-01-26 04:24:49] Running: 'lvscan'...
  ACTIVE            '/dev/cs_rdma-qe-36/swap' [27.89 GiB] inherit
  ACTIVE            '/dev/cs_rdma-qe-36/home' [179.39 GiB] inherit
  ACTIVE            '/dev/cs_rdma-qe-36/root' [70.00 GiB] inherit
  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit
  ACTIVE            '/dev/testvg/lv1' [100.00 MiB] inherit
  inactive          '/dev/testvg/snap1' [100.00 MiB] inherit
  inactive          '/dev/testvg/snap2' [100.00 MiB] inherit
INFO: [2023-01-26 04:24:50] Running: 'lvs testvg'...
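The snapshots created above without `-K` come up `inactive` in `lvscan` because thin snapshots get the "activation skip" flag, visible as the trailing `k` in the `Vwi---tz-k` attribute string (this is why the earlier lvresize test used `lvcreate -K`, which ignores the skip flag). A sketch of reading those positions, with the position meanings taken from the lvs(8) man page:

```python
# Interpret selected positions of an lv_attr string from the lvs output above.
# 0-based positions per lvs(8): 0=volume type, 4=state, 6=target type,
# 9=activation skip.
def parse_lv_attr(attr: str) -> dict:
    return {
        "thin_volume": attr[0] == "V",      # 'V' = thin Volume
        "active": attr[4] == "a",           # 'a' = active, '-' = not active
        "thin_target": attr[6] == "t",      # 't' = thin target in use
        "activation_skip": attr[9] == "k",  # 'k' = skip-activation flag set
    }

snap = parse_lv_attr("Vwi---tz-k")  # snap1/snap2 as reported by lvs
pool = parse_lv_attr("twi-aotz--")  # the thin pool itself
print(snap["activation_skip"], snap["active"])  # True False
print(pool["thin_target"])                      # True
```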
  LV    VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1   testvg Vwi-a-tz-- 100.00m pool        0.00
  pool  testvg twi-aotz--  40.00m             0.00   10.94
  snap1 testvg Vwi---tz-k 100.00m pool lv1
  snap2 testvg Vwi---tz-k 100.00m pool snap1
INFO: [2023-01-26 04:24:50] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:50] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/lv1' [100.00 MiB] inherit
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:50] Running: 'lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"'...
  inactive          '/dev/testvg/snap1' [100.00 MiB] inherit
PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:51] Running: 'lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"'...
  inactive          '/dev/testvg/snap2' [100.00 MiB] inherit
PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:51] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"'...
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" [exited with error, as expected]
INFO: [2023-01-26 04:24:51] Running: 'lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"'...
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" [exited with error, as expected]
INFO: [2023-01-26 04:24:51] Running: 'lvscan -a'...
  ACTIVE            '/dev/cs_rdma-qe-36/swap' [27.89 GiB] inherit
  ACTIVE            '/dev/cs_rdma-qe-36/home' [179.39 GiB] inherit
  ACTIVE            '/dev/cs_rdma-qe-36/root' [70.00 GiB] inherit
  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit
  ACTIVE            '/dev/testvg/lv1' [100.00 MiB] inherit
  inactive          '/dev/testvg/snap1' [100.00 MiB] inherit
  inactive          '/dev/testvg/snap2' [100.00 MiB] inherit
  inactive          '/dev/testvg/lvol0_pmspare' [4.00 MiB] inherit
  ACTIVE            '/dev/testvg/pool_tmeta' [4.00 MiB] inherit
  ACTIVE            '/dev/testvg/pool_tdata' [40.00 MiB] inherit
INFO: [2023-01-26 04:24:52] Running: 'lvs -a testvg'...
  LV              VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1             testvg Vwi-a-tz-- 100.00m pool        0.00
  [lvol0_pmspare] testvg ewi-------   4.00m
  pool            testvg twi-aotz--  40.00m             0.00   10.94
  [pool_tdata]    testvg Twi-ao----  40.00m
  [pool_tmeta]    testvg ewi-ao----   4.00m
  snap1           testvg Vwi---tz-k 100.00m pool lv1
  snap2           testvg Vwi---tz-k 100.00m pool snap1
INFO: [2023-01-26 04:24:52] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:52] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/lv1' [100.00 MiB] inherit
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:52] Running: 'lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"'...
  inactive          '/dev/testvg/snap1' [100.00 MiB] inherit
PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:53] Running: 'lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"'...
  inactive          '/dev/testvg/snap2' [100.00 MiB] inherit
PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:53] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/pool_tdata' [40.00 MiB] inherit
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:53] Running: 'lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"'...
  ACTIVE            '/dev/testvg/pool_tmeta' [4.00 MiB] inherit
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"
INFO: [2023-01-26 04:24:54] Running: 'vgremove --force testvg'...
Logical volume "lv1" successfully removed.
Logical volume "snap1" successfully removed.
Logical volume "snap2" successfully removed.
Logical volume "pool" successfully removed.
Volume group "testvg" successfully removed
INFO: [2023-01-26 04:24:55] Running: 'pvremove /dev/loop0'...
Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:24:56] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:24:57] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:24:57] Running: 'pvremove /dev/loop1'...
Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:24:58] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:24:58] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:24:59] Running: 'pvremove /dev/loop2'...
Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:25:00] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:25:00] Running: 'rm -f /var/tmp/loop2.img'...
INFO: [2023-01-26 04:25:00] Running: 'pvremove /dev/loop3'...
Labels on physical volume "/dev/loop3" successfully wiped.
INFO: Deleting loop device /dev/loop3
INFO: [2023-01-26 04:25:02] Running: 'losetup -d /dev/loop3'...
INFO: [2023-01-26 04:25:02] Running: 'rm -f /var/tmp/loop3.img'...
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:25:02] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:25:02] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:25:02] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:25:02] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:25:02] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:25:02] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:25:03] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:25:03] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:25:03] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:25:03] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
module 'dm_snapshot' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:25:03] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:25:04] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -V100m -l10 -T testvg/pool -n lv1
PASS: lvcreate -s testvg/lv1 -n snap1
PASS: lvcreate -s testvg/snap1 -n snap2
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit" [exited with error, as expected]
PASS: lvscan | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit" [exited with error, as expected]
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/lv1'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap1'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan --all | egrep "\s+inactive\s+'/dev/testvg/snap2'\s+\[100.00 MiB\]\s+inherit"
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tdata'\s+\[40.00 MiB\]\s+inherit"
PASS: lvscan -a | egrep "\s+ACTIVE\s+'/dev/testvg/pool_tmeta'\s+\[4.00 MiB\]\s+inherit"
PASS: Search for error on the server
#############################
Total tests that passed: 16
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvs-thinp.py'
==============================================================================================================
INFO: [2023-01-26 04:25:06] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvs-thinp.py'...
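Each lvscan check in the summary above pipes `lvscan` through `egrep` with a whitespace-anchored pattern; when no line matches, egrep exits non-zero, which is how the `pool_tdata`/`pool_tmeta` checks without `-a` pass "with error, as expected". The same matching can be sketched with Python's `re` module against sample lines copied from the log (note the unescaped `.` in "40.00" matches any character, the same loose behavior the egrep patterns have):

```python
import re

# Reproduce one of the egrep checks against lines taken from the lvscan output.
pattern = r"\s+ACTIVE\s+'/dev/testvg/pool'\s+\[40.00 MiB\]\s+inherit"

pool_line = "  ACTIVE            '/dev/testvg/pool' [40.00 MiB] inherit"
snap_line = "  inactive          '/dev/testvg/snap1' [100.00 MiB] inherit"

print(bool(re.search(pattern, pool_line)))  # True  -> egrep exits 0
print(bool(re.search(pattern, snap_line)))  # False -> egrep exits 1
```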
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:25:06] Running: 'cat /proc/sys/kernel/tainted'...
79872
WARN: Kernel is tainted!
INFO: [2023-01-26 04:25:06] Running: 'cat /tmp/previous-tainted'...
79872
INFO: Kernel tainted has already been handled
INFO: Checking abrt for error
WARN: abrt tool does not seem to be installed
WARN: skipping abrt check
INFO: [2023-01-26 04:25:07] Running: 'cat /tmp/previous-dump-check'...
101000000
INFO: Checking for stack dump messages after: 101000000
PASS: No recent dump messages has been found.
INFO: Checking for errors on dmesg.
INFO: [2023-01-26 04:25:07] Running: 'dmesg | grep -i ' segfault ''...
INFO: [2023-01-26 04:25:07] Running: 'dmesg | grep -i 'Call Trace:''...
PASS: No errors on dmesg have been found.
INFO: [2023-01-26 04:25:07] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'...
INFO: [2023-01-26 04:25:07] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'...
INFO: [2023-01-26 04:25:07] Running: 'mv -f /root/console.log.new /root/console.log.prev'...
INFO: Checking for errors on /root/console.log
INFO: [2023-01-26 04:25:07] Running: 'cat /root/console.log | grep -i ' segfault ''...
INFO: [2023-01-26 04:25:07] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
### Kernel Info: ###
Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Kernel tainted: 79872
### IP settings: ###
INFO: [2023-01-26 04:25:08] Running: 'ip a'...
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1
       valid_lft 77459sec preferred_lft 77459sec
    inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute
       valid_lft 2591919sec preferred_lft 604719sec
    inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f0np0
7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff
    altname enp59s0f1np1
### File system disk space usage: ###
INFO: [2023-01-26 04:25:08] Running: 'df -h'...
Filesystem                                     Size  Used Avail Use% Mounted on
devtmpfs                                       4.0M     0  4.0M   0% /dev
tmpfs                                           26G     0   26G   0% /dev/shm
tmpfs                                           11G   38M   11G   1% /run
/dev/mapper/cs_rdma--qe--36-root                70G  5.2G   65G   8% /
/dev/sda2                                     1014M  285M  730M  29% /boot
/dev/mapper/cs_rdma--qe--36-home               180G  1.3G  179G   1% /home
/dev/sda1                                      599M  7.5M  592M   2% /boot/efi
tmpfs                                          5.2G  4.0K  5.2G   1% /run/user/0
kdump.usersys.redhat.com:/var/www/html/vmcore  493G   93G  375G  20% /var/crash
INFO: [2023-01-26 04:25:08] Running: 'rpm -q device-mapper-multipath'...
package device-mapper-multipath is not installed
################################################################################
INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64)
################################################################################
INFO: Starting lvs Thin Provisioning test
################################################################################
INFO: Creating loop device /var/tmp/loop0.img with size 128
INFO: Creating file /var/tmp/loop0.img
INFO: [2023-01-26 04:25:09] Running: 'fallocate -l 128M /var/tmp/loop0.img'...
INFO: [2023-01-26 04:25:09] Running: 'losetup /dev/loop0 /var/tmp/loop0.img'...
INFO: Creating loop device /var/tmp/loop1.img with size 128
INFO: Creating file /var/tmp/loop1.img
INFO: [2023-01-26 04:25:09] Running: 'fallocate -l 128M /var/tmp/loop1.img'...
INFO: [2023-01-26 04:25:10] Running: 'losetup /dev/loop1 /var/tmp/loop1.img'...
INFO: Creating loop device /var/tmp/loop2.img with size 128
INFO: Creating file /var/tmp/loop2.img
INFO: [2023-01-26 04:25:10] Running: 'fallocate -l 128M /var/tmp/loop2.img'...
INFO: [2023-01-26 04:25:10] Running: 'losetup /dev/loop2 /var/tmp/loop2.img'...
INFO: Creating loop device /var/tmp/loop3.img with size 128
INFO: Creating file /var/tmp/loop3.img
INFO: [2023-01-26 04:25:10] Running: 'fallocate -l 128M /var/tmp/loop3.img'...
INFO: [2023-01-26 04:25:10] Running: 'losetup /dev/loop3 /var/tmp/loop3.img'...
INFO: [2023-01-26 04:25:10] Running: 'vgcreate --force testvg /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3'...
Physical volume "/dev/loop0" successfully created.
Physical volume "/dev/loop1" successfully created.
Physical volume "/dev/loop2" successfully created.
Physical volume "/dev/loop3" successfully created.
Volume group "testvg" successfully created
INFO: [2023-01-26 04:25:11] Running: 'lvcreate -l1 -T testvg/pool1'...
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
Logical volume "pool1" created.
PASS: lvcreate -l1 -T testvg/pool1
PASS: testvg/pool1 thin_count == 0
INFO: [2023-01-26 04:25:12] Running: 'lvcreate -V100m -T testvg/pool1 -n lv1'...
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "lv1" created.
  WARNING: Sum of all thin volume sizes (100.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB).
PASS: lvcreate -V100m -T testvg/pool1 -n lv1
PASS: testvg/pool1 thin_count == 1
PASS: testvg/pool1 lv_name == pool1
PASS: testvg/pool1 lv_size == 4.00m
PASS: testvg/pool1 lv_metadata_size == 4.00m
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: testvg/pool1 modules == thin-pool
PASS: testvg/pool1 metadata_lv == [pool1_tmeta]
PASS: testvg/pool1 data_lv == [pool1_tdata]
INFO: [2023-01-26 04:25:15] Running: 'lvs testvg/pool1 -o+metadata_percent'...
  LV    VG     Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Meta%
  pool1 testvg twi-aotz-- 4.00m             0.00   10.94                            10.94
PASS: lvs testvg/pool1 -o+metadata_percent
PASS: testvg/pool1 chunksize == 64.00k
PASS: testvg/pool1 transaction_id == 1
PASS: testvg/lv1 pool_lv == pool1
PASS: testvg/lv1 lv_name == lv1
PASS: testvg/lv1 lv_size == 100.00m
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: testvg/lv1 modules == thin,thin-pool
INFO: [2023-01-26 04:25:17] Running: 'lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----''...
  [pool1_tdata] testvg Twi-ao---- 4.00m
PASS: lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----'
INFO: [2023-01-26 04:25:17] Running: 'lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----''...
  [pool1_tmeta] testvg ewi-ao---- 4.00m
PASS: lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----'
INFO: [2023-01-26 04:25:17] Running: 'lvs -a testvg | egrep 'Meta%''...
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert PASS: lvs -a testvg | egrep 'Meta%' INFO: [2023-01-26 04:25:18] Running: 'lvs -a testvg | egrep 'Data%''... LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert PASS: lvs -a testvg | egrep 'Data%' INFO: [2023-01-26 04:25:18] Running: 'lvcreate -V100m -T testvg/pool1 -n lv2'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "lv2" created. WARNING: Sum of all thin volume sizes (200.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB). PASS: lvcreate -V100m -T testvg/pool1 -n lv2 PASS: testvg/pool1 transaction_id == 2 PASS: testvg/pool1 thin_count == 2 PASS: testvg/pool1 zero == zero INFO: [2023-01-26 04:25:19] Running: 'lvcreate -s testvg/lv1 -n snap1'... WARNING: You have not turned on protection against thin pools running out of space. WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. Logical volume "snap1" created. WARNING: Sum of all thin volume sizes (300.00 MiB) exceeds the size of thin pool testvg/pool1 (4.00 MiB). PASS: lvcreate -s testvg/lv1 -n snap1 PASS: testvg/pool1 transaction_id == 3 PASS: testvg/pool1 thin_count == 3 PASS: testvg/snap1 origin == lv1 PASS: testvg/snap1 lv_name == snap1 PASS: testvg/snap1 lv_size == 100.00m PASS: testvg/snap1 lv_attr == Vwi---tz-k PASS: testvg/snap1 modules == thin,thin-pool INFO: [2023-01-26 04:25:22] Running: 'grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf'... # thin_pool_autoextend_threshold = 70 # thin_pool_autoextend_threshold = 100 # thin_pool_autoextend_percent = 20 # thin_pool_autoextend_percent = 20 PASS: grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf INFO: [2023-01-26 04:25:22] Running: 'lvcreate -l25 -V84m -T testvg/pool2 -n lv3'... 
Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data. Logical volume "lv3" created. PASS: lvcreate -l25 -V84m -T testvg/pool2 -n lv3 INFO: [2023-01-26 04:25:23] Running: 'dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84'... 84+0 records in 84+0 records out 88080384 bytes (88 MB, 84 MiB) copied, 2.85584 s, 30.8 MB/s PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84 PASS: testvg/lv3 data_percent == 100.00 PASS: testvg/pool2 data_percent == 84.00 INFO: [2023-01-26 04:25:57] Running: 'journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 84.*% full''... Jan 26 04:25:33 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool2-tpool data is now 84.00% full. PASS: journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 84.*% full' INFO: [2023-01-26 04:25:57] Running: 'lvextend -L88m testvg/lv3'... Size of logical volume testvg/lv3 changed from 84.00 MiB (21 extents) to 88.00 MiB (22 extents). Logical volume testvg/lv3 successfully resized. PASS: lvextend -L88m testvg/lv3 INFO: [2023-01-26 04:25:57] Running: 'dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88'... 88+0 records in 88+0 records out 92274688 bytes (92 MB, 88 MiB) copied, 3.62872 s, 25.4 MB/s PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88 PASS: testvg/lv3 data_percent == 100.00 PASS: testvg/pool2 data_percent == 88.00 INFO: [2023-01-26 04:26:32] Running: 'journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 88.*% full''... Jan 26 04:26:07 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool2-tpool data is now 88.00% full. PASS: journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 88.*% full' INFO: [2023-01-26 04:26:32] Running: 'lvs testvg/pool2 -o+metadata_percent'... 
  LV    VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Meta%
  pool2 testvg twi-aotz-- 100.00m             88.00  11.52                            11.52
PASS: lvs testvg/pool2 -o+metadata_percent
INFO: [2023-01-26 04:26:32] Running: 'lvcreate -L100m -V100m -T testvg/pool3 -n lv4'...
  Thin pool volume with chunk size 64.00 KiB can address at most <15.88 TiB of data.
  Logical volume "lv4" created.
PASS: lvcreate -L100m -V100m -T testvg/pool3 -n lv4
INFO: [2023-01-26 04:26:34] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81'...
81+0 records in
81+0 records out
84934656 bytes (85 MB, 81 MiB) copied, 2.49483 s, 34.0 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81
PASS: testvg/pool3 data_percent == 81.00
INFO: [2023-01-26 04:27:07] Running: 'journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 81.*% full''...
Jan 26 04:26:43 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool3-tpool data is now 81.00% full.
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 81.*% full'
INFO: [2023-01-26 04:27:07] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86'...
86+0 records in
86+0 records out
90177536 bytes (90 MB, 86 MiB) copied, 3.52465 s, 25.6 MB/s
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86
PASS: testvg/pool3 data_percent == 86.00
INFO: [2023-01-26 04:27:41] Running: 'journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 86.*% full''...
Jan 26 04:27:13 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool3-tpool data is now 86.00% full.
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 86.*% full'
INFO: [2023-01-26 04:27:41] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91'...
91+0 records in 91+0 records out 95420416 bytes (95 MB, 91 MiB) copied, 3.61768 s, 26.4 MB/s PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91 PASS: testvg/pool3 data_percent == 91.00 INFO: [2023-01-26 04:28:15] Running: 'journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 91.*% full''... Jan 26 04:27:53 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool3-tpool data is now 91.00% full. PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 91.*% full' INFO: [2023-01-26 04:28:15] Running: 'dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96'... 96+0 records in 96+0 records out 100663296 bytes (101 MB, 96 MiB) copied, 3.89033 s, 25.9 MB/s PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96 PASS: testvg/pool3 data_percent == 96.00 INFO: [2023-01-26 04:28:50] Running: 'journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 96.*% full''... Jan 26 04:28:23 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: WARNING: Thin pool testvg-pool3-tpool data is now 96.00% full. PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 96.*% full' INFO: [2023-01-26 04:28:50] Running: 'lvs -o +invalid_option testvg/lv1 2>/dev/null'... PASS: lvs -o +invalid_option testvg/lv1 2>/dev/null [exited with error, as expected] INFO: [2023-01-26 04:28:50] Running: 'lvs -a testvg'... 
  LV              VG     Attr       LSize   Pool  Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv1             testvg Vwi-a-tz-- 100.00m pool1        0.00
  lv2             testvg Vwi-a-tz-- 100.00m pool1        0.00
  lv3             testvg Vwi-a-tz--  88.00m pool2        100.00
  lv4             testvg Vwi-a-tz-- 100.00m pool3        96.00
  [lvol0_pmspare] testvg ewi-------   4.00m
  pool1           testvg twi-aotz--   4.00m              0.00   11.04
  [pool1_tdata]   testvg Twi-ao----   4.00m
  [pool1_tmeta]   testvg ewi-ao----   4.00m
  pool2           testvg twi-aotz-- 100.00m              88.00  11.52
  [pool2_tdata]   testvg Twi-ao---- 100.00m
  [pool2_tmeta]   testvg ewi-ao----   4.00m
  pool3           testvg twi-aotz-- 100.00m              96.00  11.62
  [pool3_tdata]   testvg Twi-ao---- 100.00m
  [pool3_tmeta]   testvg ewi-ao----   4.00m
  snap1           testvg Vwi---tz-k 100.00m pool1 lv1
INFO: [2023-01-26 04:28:50] Running: 'vgremove --force testvg'...
  Logical volume "lv4" successfully removed.
  Logical volume "pool3" successfully removed.
  Logical volume "lv3" successfully removed.
  Logical volume "pool2" successfully removed.
  Logical volume "lv1" successfully removed.
  Logical volume "lv2" successfully removed.
  Logical volume "snap1" successfully removed.
  Logical volume "pool1" successfully removed.
  Volume group "testvg" successfully removed
INFO: [2023-01-26 04:28:53] Running: 'pvremove /dev/loop0'...
  Labels on physical volume "/dev/loop0" successfully wiped.
INFO: Deleting loop device /dev/loop0
INFO: [2023-01-26 04:28:54] Running: 'losetup -d /dev/loop0'...
INFO: [2023-01-26 04:28:54] Running: 'rm -f /var/tmp/loop0.img'...
INFO: [2023-01-26 04:28:55] Running: 'pvremove /dev/loop1'...
  Labels on physical volume "/dev/loop1" successfully wiped.
INFO: Deleting loop device /dev/loop1
INFO: [2023-01-26 04:28:56] Running: 'losetup -d /dev/loop1'...
INFO: [2023-01-26 04:28:56] Running: 'rm -f /var/tmp/loop1.img'...
INFO: [2023-01-26 04:28:56] Running: 'pvremove /dev/loop2'...
  Labels on physical volume "/dev/loop2" successfully wiped.
INFO: Deleting loop device /dev/loop2
INFO: [2023-01-26 04:28:58] Running: 'losetup -d /dev/loop2'...
INFO: [2023-01-26 04:28:58] Running: 'rm -f /var/tmp/loop2.img'... INFO: [2023-01-26 04:28:58] Running: 'pvremove /dev/loop3'... Labels on physical volume "/dev/loop3" successfully wiped. INFO: Deleting loop device /dev/loop3 INFO: [2023-01-26 04:29:00] Running: 'losetup -d /dev/loop3'... INFO: [2023-01-26 04:29:00] Running: 'rm -f /var/tmp/loop3.img'... INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:29:00] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:29:00] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:29:00] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:29:00] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:29:00] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:29:00] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:29:01] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:29:01] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:29:01] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:29:01] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server PASS: Search for error on the server module 'dm_snapshot' was loaded during the test. Unloading it... 
INFO: [2023-01-26 04:29:01] Running: 'modprobe -r dm_snapshot'...
module 'dm_thin_pool' was loaded during the test. Unloading it...
INFO: [2023-01-26 04:29:02] Running: 'modprobe -r dm_thin_pool'...
################################ Test Summary ##################################
PASS: lvcreate -l1 -T testvg/pool1
PASS: testvg/pool1 thin_count == 0
PASS: lvcreate -V100m -T testvg/pool1 -n lv1
PASS: testvg/pool1 thin_count == 1
PASS: testvg/pool1 lv_name == pool1
PASS: testvg/pool1 lv_size == 4.00m
PASS: testvg/pool1 lv_metadata_size == 4.00m
PASS: testvg/pool1 lv_attr == twi-aotz--
PASS: testvg/pool1 modules == thin-pool
PASS: testvg/pool1 metadata_lv == [pool1_tmeta]
PASS: testvg/pool1 data_lv == [pool1_tdata]
PASS: lvs testvg/pool1 -o+metadata_percent
PASS: testvg/pool1 chunksize == 64.00k
PASS: testvg/pool1 transaction_id == 1
PASS: testvg/lv1 pool_lv == pool1
PASS: testvg/lv1 lv_name == lv1
PASS: testvg/lv1 lv_size == 100.00m
PASS: testvg/lv1 lv_attr == Vwi-a-tz--
PASS: testvg/lv1 modules == thin,thin-pool
PASS: lvs -a testvg | egrep '\[pool1_tdata\]\s+testvg\s+Twi-ao----'
PASS: lvs -a testvg | egrep '\[pool1_tmeta\]\s+testvg\s+ewi-ao----'
PASS: lvs -a testvg | egrep 'Meta%'
PASS: lvs -a testvg | egrep 'Data%'
PASS: lvcreate -V100m -T testvg/pool1 -n lv2
PASS: testvg/pool1 transaction_id == 2
PASS: testvg/pool1 thin_count == 2
PASS: testvg/pool1 zero == zero
PASS: lvcreate -s testvg/lv1 -n snap1
PASS: testvg/pool1 transaction_id == 3
PASS: testvg/pool1 thin_count == 3
PASS: testvg/snap1 origin == lv1
PASS: testvg/snap1 lv_name == snap1
PASS: testvg/snap1 lv_size == 100.00m
PASS: testvg/snap1 lv_attr == Vwi---tz-k
PASS: testvg/snap1 modules == thin,thin-pool
PASS: grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf
PASS: lvcreate -l25 -V84m -T testvg/pool2 -n lv3
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=84
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 84.00
PASS: journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 84.*% full'
PASS: lvextend -L88m testvg/lv3
PASS: dd if=/dev/zero of=/dev/testvg/lv3 bs=1M count=88
PASS: testvg/lv3 data_percent == 100.00
PASS: testvg/pool2 data_percent == 88.00
PASS: journalctl -n 200 | grep 'testvg-pool2-tpool .*is now 88.*% full'
PASS: lvs testvg/pool2 -o+metadata_percent
PASS: lvcreate -L100m -V100m -T testvg/pool3 -n lv4
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=81
PASS: testvg/pool3 data_percent == 81.00
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 81.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=86
PASS: testvg/pool3 data_percent == 86.00
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 86.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=91
PASS: testvg/pool3 data_percent == 91.00
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 91.*% full'
PASS: dd if=/dev/zero of=/dev/testvg/lv4 bs=1M count=96
PASS: testvg/pool3 data_percent == 96.00
PASS: journalctl -n 200 | grep 'testvg-pool3-tpool .*is now 96.*% full'
PASS: lvs -o +invalid_option testvg/lv1 2>/dev/null [exited with error, as expected]
PASS: Search for error on the server
#############################
Total tests that passed: 62
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Running test 'lvm/thinp/lvm-thinp-misc.py'
==============================================================================================================
INFO: [2023-01-26 04:29:04] Running: '/opt/stqe-venv/bin/python3 /opt/stqe-venv/lib64/python3.9/site-packages/stqe/tests/lvm/thinp/lvm-thinp-misc.py'...
################################## Test Init ###################################
INFO: Checking for error on the system
INFO: Checking for tainted kernel
INFO: [2023-01-26 04:29:04] Running: 'cat /proc/sys/kernel/tainted'...
79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:29:04] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:29:05] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:29:05] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:29:05] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:29:05] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:29:05] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:29:05] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:29:05] Running: 'cat /root/console.log | grep -i ' segfault ''... INFO: [2023-01-26 04:29:06] Running: 'cat /root/console.log | grep -i 'Call Trace:''... PASS: No errors on /root/console.log have been found. INFO: No kdump log found for this server ### Kernel Info: ### Kernel version: Linux rdma-qe-36.rdma.lab.eng.rdu2.redhat.com 5.14.0-244.1956_758049736.el9.x86_64+debug #1 SMP PREEMPT_DYNAMIC Thu Jan 26 05:36:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux Kernel tainted: 79872 ### IP settings: ### INFO: [2023-01-26 04:29:06] Running: 'ip a'... 
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c4 brd ff:ff:ff:ff:ff:ff altname enp24s0f0 inet 10.1.217.125/24 brd 10.1.217.255 scope global dynamic noprefixroute eno1 valid_lft 77221sec preferred_lft 77221sec inet6 2620:52:0:1d9:3673:5aff:fe9d:5cc4/64 scope global dynamic noprefixroute valid_lft 2591999sec preferred_lft 604799sec inet6 fe80::3673:5aff:fe9d:5cc4/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c5 brd ff:ff:ff:ff:ff:ff altname enp24s0f1 4: eno3: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c6 brd ff:ff:ff:ff:ff:ff altname enp25s0f0 5: eno4: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 34:73:5a:9d:5c:c7 brd ff:ff:ff:ff:ff:ff altname enp25s0f1 6: ens1f0np0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 10:70:fd:a3:7c:60 brd ff:ff:ff:ff:ff:ff altname enp59s0f0np0 7: ens1f1np1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 10:70:fd:a3:7c:61 brd ff:ff:ff:ff:ff:ff altname enp59s0f1np1 ### File system disk space usage: ### INFO: [2023-01-26 04:29:06] Running: 'df -h'... Filesystem Size Used Avail Use% Mounted on devtmpfs 4.0M 0 4.0M 0% /dev tmpfs 26G 0 26G 0% /dev/shm tmpfs 11G 38M 11G 1% /run /dev/mapper/cs_rdma--qe--36-root 70G 5.2G 65G 8% / /dev/sda2 1014M 285M 730M 29% /boot /dev/mapper/cs_rdma--qe--36-home 180G 1.3G 179G 1% /home /dev/sda1 599M 7.5M 592M 2% /boot/efi tmpfs 5.2G 4.0K 5.2G 1% /run/user/0 kdump.usersys.redhat.com:/var/www/html/vmcore 493G 93G 375G 20% /var/crash INFO: [2023-01-26 04:29:06] Running: 'rpm -q device-mapper-multipath'... 
package device-mapper-multipath is not installed ################################################################################ INFO: lvm2 is installed (lvm2-2.03.17-4.el9.x86_64) ################################################################################ INFO: Starting lvm Thin Provisioning Misc test ################################################################################ INFO: [2023-01-26 04:29:07] Running: 'lvm segtypes | grep -w "thin$"'... thin PASS: lvm segtypes | grep -w "thin$" INFO: [2023-01-26 04:29:07] Running: 'lvm segtypes | grep -w "thin-pool$"'... thin-pool PASS: lvm segtypes | grep -w "thin-pool$" INFO: Checking for error on the system INFO: Checking for tainted kernel INFO: [2023-01-26 04:29:07] Running: 'cat /proc/sys/kernel/tainted'... 79872 WARN: Kernel is tainted! INFO: [2023-01-26 04:29:07] Running: 'cat /tmp/previous-tainted'... 79872 INFO: Kernel tainted has already been handled INFO: Checking abrt for error WARN: abrt tool does not seem to be installed WARN: skipping abrt check INFO: [2023-01-26 04:29:08] Running: 'cat /tmp/previous-dump-check'... 101000000 INFO: Checking for stack dump messages after: 101000000 PASS: No recent dump messages has been found. INFO: Checking for errors on dmesg. INFO: [2023-01-26 04:29:08] Running: 'dmesg | grep -i ' segfault ''... INFO: [2023-01-26 04:29:08] Running: 'dmesg | grep -i 'Call Trace:''... PASS: No errors on dmesg have been found. INFO: [2023-01-26 04:29:08] Running: 'wget -q http://lab-02.rhts.eng.rdu.redhat.com:8000/recipes/13289334/logs/console.log -O /root/console.log.new'... INFO: [2023-01-26 04:29:08] Running: 'diff -N -n --unidirectional-new-file /root/console.log.prev /root/console.log.new > /root/console.log'... INFO: [2023-01-26 04:29:08] Running: 'mv -f /root/console.log.new /root/console.log.prev'... INFO: Checking for errors on /root/console.log INFO: [2023-01-26 04:29:08] Running: 'cat /root/console.log | grep -i ' segfault ''... 
INFO: [2023-01-26 04:29:08] Running: 'cat /root/console.log | grep -i 'Call Trace:''...
PASS: No errors on /root/console.log have been found.
INFO: No kdump log found for this server
PASS: Search for error on the server
################################ Test Summary ##################################
PASS: lvm segtypes | grep -w "thin$"
PASS: lvm segtypes | grep -w "thin-pool$"
PASS: Search for error on the server
#############################
Total tests that passed: 3
Total tests that failed: 0
Total tests that skipped: 0
################################################################################
PASS: test pass
==============================================================================================================
Generating test result report
==============================================================================================================
Test name: lvm/thinp/lvchange-thin.py                Status: PASS  Elapsed Time: 01m51s
Test name: lvm/thinp/lvconf-thinp.py                 Status: PASS  Elapsed Time: 16s
Test name: lvm/thinp/lvconvert-thinpool.py           Status: PASS  Elapsed Time: 25s
Test name: lvm/thinp/lvconvert-thin-lv.py            Status: PASS  Elapsed Time: 26s
Test name: lvm/thinp/lvcreate-poolmetadataspare.py   Status: PASS  Elapsed Time: 28s
Test name: lvm/thinp/lvcreate-mirror.py              Status: PASS  Elapsed Time: 33s
Test name: lvm/thinp/lvextend-thinp.py               Status: PASS  Elapsed Time: 57s
Test name: lvm/thinp/lvreduce-thinp.py               Status: PASS  Elapsed Time: 50s
Test name: lvm/thinp/lvremove-thinp.py               Status: PASS  Elapsed Time: 33s
Test name: lvm/thinp/lvrename-thinp.py               Status: PASS  Elapsed Time: 27s
Test name: lvm/thinp/lvresize-thinp.py               Status: PASS  Elapsed Time: 01m27s
Test name: lvm/thinp/lvscan-thinp.py                 Status: PASS  Elapsed Time: 24s
Test name: lvm/thinp/lvs-thinp.py                    Status: PASS  Elapsed Time: 03m59s
Test name: lvm/thinp/lvm-thinp-misc.py               Status: PASS  Elapsed Time: 06s
==============================================================================================================
Total - PASS: 14 FAIL: 0 SKIP: 0 WARN: 0
Total Time: 12m42s
==============================================================================================================
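Editor's note on the repeated dmeventd warnings above ("Set activation/thin_pool_autoextend_threshold below 100 ..."): they refer to the commented-out defaults that the `grep -E "^\W+thin_pool_autoextend" /etc/lvm/lvm.conf` step found. An illustrative lvm.conf fragment for enabling autoextension might look like this (values taken from the commented defaults; this is not the configuration used in this run, which deliberately left protection off):

```
# /etc/lvm/lvm.conf (illustrative fragment)
activation {
    # Auto-extend a thin pool once it reaches 70% usage...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```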
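The fill-threshold checks in the run above all grep journalctl for the dmeventd warning line. A minimal Python sketch (Python matching the harness's own test language) of extracting the pool name and fill percentage from such a line; the regex and function name are ours, inferred from the exact message format shown in this log:

```python
import re

# Matches dmeventd thin-pool fill warnings as they appear in this log, e.g.
# "... dmeventd[284501]: WARNING: Thin pool testvg-pool2-tpool data is now 84.00% full."
POOL_FULL_RE = re.compile(
    r"WARNING: Thin pool (?P<pool>\S+) data is now (?P<pct>\d+\.\d+)% full\."
)

def parse_pool_warning(line):
    """Return (pool_name, percent_full) or None if the line is not a fill warning."""
    m = POOL_FULL_RE.search(line)
    if m is None:
        return None
    return m.group("pool"), float(m.group("pct"))

line = ("Jan 26 04:25:33 rdma-qe-36.rdma.lab.eng.rdu2.redhat.com dmeventd[284501]: "
        "WARNING: Thin pool testvg-pool2-tpool data is now 84.00% full.")
print(parse_pool_warning(line))  # ('testvg-pool2-tpool', 84.0)
```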
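Several assertions in the run compare the lvs Attr column (e.g. `twi-aotz--` for a pool, `Vwi---tz-k` for the inactive, skip-activation snapshot). A small Python sketch of decoding just the flag positions seen in this log; the field meanings follow the lvs(8) legend, and the helper name is ours:

```python
# Partial decoder for the 10-character lvs "Attr" field, covering only the
# flags that occur in this log (see lvs(8) for the full legend).
VOLUME_TYPE = {
    "t": "thin pool",
    "T": "thin pool data",
    "V": "thin volume",
    "e": "metadata",
    "-": "plain volume",
}

def decode_lv_attr(attr):
    """Translate an lv_attr string such as 'twi-aotz--' into a few facts."""
    assert len(attr) == 10, "lv_attr is a fixed 10-character field"
    return {
        "type": VOLUME_TYPE.get(attr[0], "other"),  # position 1: volume type
        "writable": attr[1] == "w",                 # position 2: permissions
        "active": attr[4] == "a",                   # position 5: state
        "open": attr[5] == "o",                     # position 6: device open
        "thin_target": attr[6] == "t",              # position 7: target type
        "zeroing": attr[7] == "z",                  # position 8: zero new blocks
        "skip_activation": attr[9] == "k",          # position 10: skip activation
    }

# snap1 above: a thin snapshot created inactive and flagged to skip
# activation (the trailing 'k' in Vwi---tz-k).
print(decode_lv_attr("Vwi---tz-k"))
```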