Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Linux version 6.0.0-rc7 (root@runner-ia7yd-k9-project-18194050-concurrent-0) (gcc (GCC) 12.2.1 20220819 (Red Hat 12.2.1-2), GNU ld version 2.39-3.fc38) #1 SMP PREEMPT_DYNAMIC Mon Sep 26 18:12:54 UTC 2022
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-6.0.0-rc7 root=/dev/mapper/fedora_hpe--ml350egen8--01-root ro BOOTIF=9C-8E-99-6E-14-D8 console=tty0 console=ttyS1,115200n81 rd.lvm.lv=fedora_hpe-ml350egen8-01/root rhgb
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: signal: max sigframe size: 1776
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-provided physical RAM map:
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x0000000000099800-0x0000000000099bff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x000000000009e000-0x000000000009ffff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bddabfff] usable
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffefff] usable
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NX (Execute Disable) protection: active
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SMBIOS 2.8 present.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMI: HP ProLiant ML350e Gen8, BIOS J02 08/02/2014
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tsc: Fast TSC calibration using PIT
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tsc: Detected 2094.764 MHz processor
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: last_pfn = 0x43ffff max_arch_pfn = 0x400000000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: last_pfn = 0xbddac max_arch_pfn = 0x400000000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: found SMP MP-table at [mem 0x000f4f80-0x000f4f8f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Using GB pages for direct mapping
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAMDISK: [mem 0x35634000-0x36b11fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Early table checksum verification disabled
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP )
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20220331/tbfadt-669)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20220331/tbfadt-669)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: DSDT 0x00000000BDDAEF00 0026DC (v01 HP DSDT 00000001 INTL 20030228)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: FACS 0x00000000BDDAC140 000040
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: FACS 0x00000000BDDAC140 000040
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [81B blob data]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: FFFF 0x00000000BDDAEC00 000030 (v01 HP ProLiant 00000001 00000000)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCCT 0x00000000BDDAEC40 00006E (v01 HP Proliant 00000001 PH 0000504D)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB1600 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB1E00 000103 (v03 HP CRSPCI0 00000002 HP 00000001)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB1F40 000098 (v03 HP CRSPCI1 00000002 HP 00000001)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB2000 0007F5 (v03 HP dsmpci0 00000002 INTL 20030228)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB2800 000386 (v03 HP dsmpci1 00000002 INTL 20030228)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB2BC0 000BB9 (v01 HP pcc 00000001 INTL 20120503)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB3780 000377 (v01 HP pmab 00000001 INTL 20120503)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB3B00 005524 (v01 HP pcc2 00000001 INTL 20120503)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SSDT 0x00000000BDDB9040 004E84 (v01 INTEL PPM RCM 00000001 INTL 20061109)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FACP table memory at [mem 0xbddaee00-0xbddaeef3]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving DSDT table memory at [mem 0xbddaef00-0xbddb15db]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving MCFG table memory at [mem 0xbddac200-0xbddac23b]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving DMAR table memory at [mem 0xbddad200-0xbddad67f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving FFFF table memory at [mem 0xbddaec00-0xbddaec2f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving PCCT table memory at [mem 0xbddaec40-0xbddaecad]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb1600-0xbddb1de9]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb1e00-0xbddb1f02]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb1f40-0xbddb1fd7]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb2000-0xbddb27f4]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb2800-0xbddb2b85]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb2bc0-0xbddb3778]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb3780-0xbddb3af6]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb3b00-0xbddb9023]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Reserving SSDT table memory at [mem 0xbddb9040-0xbddbdec3]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x20 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x21 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x22 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x23 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x24 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x25 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x26 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x27 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x28 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x29 -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2a -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2b -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2c -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2d -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2e -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SRAT: PXM 1 -> APIC 0x2f -> Node 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x23fffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SRAT: Node 1 PXM 1 [mem 0x240000000-0x43fffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NODE_DATA(1) allocated [mem 0x43ffd4000-0x43fffefff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Zone ranges:
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Normal [mem 0x0000000100000000-0x000000043fffefff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Device empty
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Movable zone start for each node
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Early memory node ranges
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: node 0: [mem 0x0000000000100000-0x00000000bddabfff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: node 0: [mem 0x0000000100000000-0x000000023fffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: node 1: [mem 0x0000000240000000-0x000000043fffefff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Initmem setup node 1 [mem 0x0000000240000000-0x000000043fffefff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: On node 0, zone Normal: 8788 pages in unavailable ranges
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: On node 1, zone Normal: 1 pages in unavailable ranges
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PM-Timer IO Port: 0x908
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SPCR: SPCR table version 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SPCR: Unexpected SPCR Access Width. Defaulting to byte size
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: SPCR: console: uart,mmio,0x0,9600
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TSC deadline timer available
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot: Allowing 64 CPUs, 32 hotplug CPUs
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x00099000-0x00099fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x0009a000-0x0009dfff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x0009e000-0x0009ffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xbddac000-0xbddddfff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xbddde000-0xcfffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xd0000000-0xfebfffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfee0ffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xfee10000-0xff7fffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: hibernation: Registered nosave memory: [mem 0xff800000-0xffffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [mem 0xd0000000-0xfebfffff] available for PCI devices
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Booting paravirtualized kernel on bare hardware
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:2
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: percpu: Embedded 61 pages/cpu s212992 r8192 d28672 u262144
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pcpu-alloc: s212992 r8192 d28672 u262144 alloc=1*2097152
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 16 17 18 19 20 21 22 23
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pcpu-alloc: [0] 32 34 36 38 40 42 44 46 [0] 48 50 52 54 56 58 60 62
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pcpu-alloc: [1] 08 09 10 11 12 13 14 15 [1] 24 25 26 27 28 29 30 31
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pcpu-alloc: [1] 33 35 37 39 41 43 45 47 [1] 49 51 53 55 57 59 61 63
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Fallback order for Node 0: 0 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Fallback order for Node 1: 1 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Built 2 zonelists, mobility grouping on. Total pages: 4119860
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Policy zone: Normal
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-6.0.0-rc7 root=/dev/mapper/fedora_hpe--ml350egen8--01-root ro BOOTIF=9C-8E-99-6E-14-D8 console=tty0 console=ttyS1,115200n81 rd.lvm.lv=fedora_hpe-ml350egen8-01/root rhgb
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Unknown kernel command line parameters "rhgb BOOT_IMAGE=(hd0,gpt2)/vmlinuz-6.0.0-rc7 BOOTIF=9C-8E-99-6E-14-D8", will be passed to user space.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: log_buf_len total cpu_extra contributions: 258048 bytes
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: log_buf_len min size: 262144 bytes
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: log_buf_len: 524288 bytes
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: early log buf free: 248800(94%)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: software IO TLB: area num 64.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Memory: 16323360K/16741644K available (16393K kernel code, 3225K rwdata, 11588K rodata, 3016K init, 4720K bss, 418024K reserved, 0K cma-reserved)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=64, Nodes=2
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Kernel/User page tables isolation: enabled
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ftrace: allocating 50846 entries in 199 pages
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ftrace: allocated 199 pages with 5 groups
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Dynamic Preempt: voluntary
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=64.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Trampoline variant of Tasks RCU enabled.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Rude variant of Tasks RCU enabled.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Tracing variant of Tasks RCU enabled.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=64
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NR_IRQS: 524544, nr_irqs: 1752, preallocated irqs: 16
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Console: colour VGA+ 80x25
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: console [tty0] enabled
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: printk: console [ttyS1] enabled
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Core revision 20220331
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: APIC: Switch to symmetric I/O mode setup
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: Host address width 46
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: DRHD base: 0x000000f9efe000 flags: 0x0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: dmar0: reg_base_addr f9efe000 ver 1:0 cap d2078c106f0462 ecap f020fe
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] BIOS vendor: HP; Ver: J02; Product Version:
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR: ATSR flags: 0x0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: IOAPIC id 10 under DRHD base 0xf9efe000 IOMMU 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMAR-IR: Enabled IRQ remapping in xapic mode
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x2apic: IRQ remapping doesn't support X2APIC mode
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Switched APIC routing to physical flat.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e31de3122e, max_idle_ns: 440795224410 ns
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4189.52 BogoMIPS (lpj=2094764)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pid_max: default: 65536 minimum: 512
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: LSM: Security Framework initializing
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Yama: becoming mindful.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: Initializing.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: LSM support for eBPF active
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: landlock: Up and running.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc hugepage)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, vmalloc hugepage)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, vmalloc)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: process: using mwait in idle threads
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Last level dTLB entries: 4KB 512, 2MB 32, 4MB 32, 1GB 0
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Spectre V2 : Mitigation: Retpolines
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Speculative Store Bypass: Vulnerable
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: MMIO Stale Data: Unknown: No mitigations
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing SMP alternatives memory: 44K
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2450 0 @ 2.10GHz (family: 0x6, model: 0x2d, stepping: 0x7)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cblist_init_generic: Setting adjustable number of callback queues.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cblist_init_generic: Setting shift to 6 and lim to 1.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cblist_init_generic: Setting shift to 6 and lim to 1.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cblist_init_generic: Setting shift to 6 and lim to 1.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Performance Events: PEBS fmt1+, SandyBridge events, 16-deep LBR, full-width counters, Broken BIOS detected, complain to your hardware vendor.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Intel PMU driver.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... version: 3
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... bit width: 48
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... generic registers: 4
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... value mask: 0000ffffffffffff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... max period: 00007fffffffffff
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... fixed-purpose events: 3
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ... event mask: 000000070000000f
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1365
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: Hierarchical SRCU implementation.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: Max phase no-delay instances is 400.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smp: Bringing up secondary CPUs ...
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86: Booting SMP configuration:
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .... node #1, CPUs: #8
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot: CPU 8 Converting physical 0 to logical die 1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: #9 #10 #11 #12 #13 #14 #15
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .... node #0, CPUs: #16
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: #17 #18 #19 #20 #21 #22 #23
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .... node #1, CPUs: #24 #25 #26 #27 #28 #29 #30 #31
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smp: Brought up 2 nodes, 32 CPUs
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot: Max logical packages: 4
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot: Total of 32 processors activated (134271.74 BogoMIPS)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: devtmpfs: initialized
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/mm: Memory block size: 128MB
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex hash table entries: 16384 (order: 8, 1048576 bytes, vmalloc)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pinctrl core: initialized pinctrl subsystem
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: RTC time: 15:11:38, date: 2022-09-26
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: initializing netlink subsys (disabled)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=2000 audit(1664205087.279:1): state=initialized audit_enabled=0 res=1
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: thermal_sys: Registered thermal governor 'fair_share'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: thermal_sys: Registered thermal governor 'bang_bang'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpuidle: using governor menu
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Detected 1 PCC Subspaces
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Registering PCC driver as Mailbox controller
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Using configuration type 1 for base access
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cryptd: max_cpu_qlen set to 1000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: raid6: skipped pq benchmark and selected sse2x4
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: raid6: using ssse3x2 recovery algorithm
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Module Device)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Processor Device)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: 10 ACPI AML tables successfully acquired and loaded
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Interpreter enabled
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PM: (supports S0 S4 S5)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: Using IOAPIC for interrupt routing
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HEST: Table parsing has been initialized.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Using E820 reservations for host bridge windows
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f])
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI host bridge to bus 0000:00
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x03af window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: root bus resource [bus 00-1f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:00.0: [8086:3c00] type 00 class 0x060000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:00.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.0: [8086:3c02] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.1: [8086:3c03] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.0: [8086:3c08] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.0: enabling Extended Tags
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.1: [8086:3c09] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.1: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.2: [8086:3c0a] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.2: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.3: [8086:3c0b] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.3: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.0: [8086:3c20] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.1: [8086:3c21] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.2: [8086:3c22] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.3: [8086:3c23] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.3: reg 0x10: [mem 0xf6cc0000-0xf6cc3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.4: [8086:3c24] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.5: [8086:3c25] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.6: [8086:3c26] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.7: [8086:3c27] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:05.0: [8086:3c28] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:05.2: [8086:3c2a] type 00 class 0x088000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:05.4: [8086:3c2c] type 00 class 0x080020
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:11.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: [8086:1d18] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.6: [8086:1d1c] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.6: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503ff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: [8086:1d02] type 00 class 0x010601
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x403f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xf6c40000-0xf6c407ff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1f.2: PME# supported from D3hot
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.0: PCI bridge to [bus 02]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.1: PCI bridge to [bus 03]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.0: PCI bridge to [bus 0d]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.1: PCI bridge to [bus 18]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.2: PCI bridge to [bus 19]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.3: PCI bridge to [bus 1a]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:11.0: PCI bridge to [bus 1c]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.0: PCI bridge to [bus 0a]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: [8086:1521] type 00 class 0x020000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x1c: [mem 0xf7ef0000-0xf7ef3fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x184: [mem 0x00000000-0x00003fff 64bit pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: VF(n) BAR0 space: [mem 0x00000000-0x0001ffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: VF(n) BAR3 space: [mem 0x00000000-0x0001ffff 64bit pref] (contains BAR3 for 8 VFs)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0000:00:1c.4 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: [8086:1521] type 00 class 0x020000
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x10: [mem 0xf7d00000-0xf7dfffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x18: [io 0x5020-0x503f]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x1c: [mem 0xf7cf0000-0xf7cf3fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: PME# supported from D0 D3hot D3cold
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x184: [mem 0x00000000-0x00003fff 64bit pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: VF(n) BAR0 space: [mem 0x00000000-0x0001ffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit pref]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: VF(n) BAR3 space: [mem 0x00000000-0x0001ffff 64bit pref] (contains BAR3 for 8 VFs)
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: PCI bridge to [bus 06]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [io 0x5000-0x5fff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [mem 0xf7c00000-0xf7ffffff]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge has subordinate 06 but max busn 07
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.6: PCI bridge to [bus 07]
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x18: [mem 0xf6e00000-0xf6efffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x1c: [mem 0xf6d80000-0xf6dfffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: reg 0x30: [mem 0x00000000-0x0000ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.4: [103c:3300] type 00 class 0x0c0300 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: PCI bridge to [bus 01] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: extended config space not accessible Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: PCI bridge to [bus 1b] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] 
(subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x0000-0x03af window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: on NUMA node 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 7 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 10 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 7 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKG disabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI: Interrupt link LNKH disabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f]) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI host bridge to bus 0000:20 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: root bus resource [mem 0xf8000000-0xfbffffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: root bus resource [bus 20-3f] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:00.0: [8086:3c01] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:00.0: PME# supported from 
D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: [8086:3c02] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.1: [8086:3c03] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.1: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.0: [8086:3c08] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.0: enabling Extended Tags Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.0: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.1: [8086:3c09] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.1: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.2: [8086:3c0a] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.2: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.3: [8086:3c0b] type 01 class 0x060400 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.3: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.0: [8086:3c20] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.1: [8086:3c21] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.2: [8086:3c22] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.3: [8086:3c23] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.4: [8086:3c24] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.5: [8086:3c25] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.6: [8086:3c26] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.7: [8086:3c27] type 00 class 0x088000 Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.0: [8086:3c28] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.2: [8086:3c2a] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.4: [8086:3c2c] type 00 class 0x080020 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:00.0: PCI bridge to [bus 2f] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: [8086:1583] type 00 class 0x020000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: reg 0x10: [mem 0xfb000000-0xfb7fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: reg 0x1c: [mem 0xfaff0000-0xfaff7fff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: reg 0x184: [mem 0x00000000-0x0000ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: VF(n) BAR0 space: [mem 0x00000000-0x003fffff 64bit pref] (contains BAR0 for 64 VFs) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: reg 0x190: [mem 0x00000000-0x00003fff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: VF(n) BAR3 space: [mem 0x00000000-0x000fffff 64bit pref] (contains BAR3 for 64 VFs) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: [8086:1583] type 00 class 0x020000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: reg 0x10: [mem 0xfa000000-0xfa7fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: reg 0x1c: [mem 0xf9ff0000-0xf9ff7fff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: reg 0x30: [mem 0x00000000-0x0007ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: PME# supported from D0 D3hot D3cold Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: reg 0x184: [mem 0x00000000-0x0000ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: VF(n) BAR0 space: [mem 0x00000000-0x003fffff 64bit pref] (contains BAR0 for 64 VFs) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: reg 0x190: [mem 0x00000000-0x00003fff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: VF(n) BAR3 space: [mem 0x00000000-0x000fffff 64bit pref] (contains BAR3 for 64 VFs) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: PCI bridge to [bus 21] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: bridge window [mem 0xf9f00000-0xfb7fffff 64bit pref] Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.1: PCI bridge to [bus 27] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.0: PCI bridge to [bus 24] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.1: PCI bridge to [bus 2c] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.2: PCI bridge to [bus 2d] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.3: PCI bridge to [bus 2e] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: on NUMA node 1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: iommu: Default domain type: Translated Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SCSI subsystem initialized Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: libata version 3.00 loaded. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: bus type USB registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbfs Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbcore: registered new interface driver hub Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbcore: registered new device driver usb Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pps_core: LinuxPPS API ver. 1 registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PTP clock support registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC MC: Ver: 3.0.0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NetLabel: Initializing Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NetLabel: domain hash size = 128 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NetLabel: unlabeled traffic allowed by default Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mctp: management component transport protocol core Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_MCTP protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Using ACPI for IRQ routing Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Discovered peer bus 1f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: root bus 1f: using default resources Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Probing PCI hardware (bus 1f) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI host bridge to bus 0000:1f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: Unknown NUMA node; performance will be reduced Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:08.0: [8086:3c80] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:08.3: [8086:3c83] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:08.4: [8086:3c84] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:09.0: [8086:3c90] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:09.3: [8086:3c93] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:09.4: [8086:3c94] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0a.0: [8086:3cc0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0a.1: [8086:3cc1] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0a.2: [8086:3cc2] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0a.3: [8086:3cd0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0b.0: [8086:3ce0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0b.3: [8086:3ce3] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.0: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.1: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.2: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.3: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.6: [8086:3cf4] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0c.7: [8086:3cf6] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0d.0: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0d.1: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0d.2: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0d.3: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0d.6: [8086:3cf5] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0e.0: [8086:3ca0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0e.1: [8086:3c46] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.0: [8086:3ca8] type 00 class 0x088000 Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.1: [8086:3c71] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.2: [8086:3caa] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.3: [8086:3cab] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.4: [8086:3cac] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.5: [8086:3cad] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:0f.6: [8086:3cae] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.0: [8086:3cb0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.1: [8086:3cb1] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.2: [8086:3cb2] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.3: [8086:3cb3] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.5: [8086:3cb5] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.6: [8086:3cb6] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:10.7: [8086:3cb7] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:11.0: [8086:3cb8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:13.0: [8086:3ce4] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:13.1: [8086:3c43] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:13.4: [8086:3ce6] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:13.5: [8086:3c44] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:1f:13.6: [8086:3c45] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Discovered peer bus 3f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: root bus 3f: using default resources Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: Probing PCI hardware (bus 3f) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI host bridge to bus 0000:3f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: Unknown NUMA node; performance will be reduced Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: No busn resource found 
for root bus, will use [bus 3f-ff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:08.0: [8086:3c80] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:08.3: [8086:3c83] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:08.4: [8086:3c84] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:09.0: [8086:3c90] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:09.3: [8086:3c93] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:09.4: [8086:3c94] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0a.0: [8086:3cc0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0a.1: [8086:3cc1] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0a.2: [8086:3cc2] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0a.3: [8086:3cd0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0b.0: [8086:3ce0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0b.3: [8086:3ce3] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.0: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.1: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.2: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.3: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.6: [8086:3cf4] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0c.7: [8086:3cf6] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0d.0: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0d.1: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0d.2: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0d.3: [8086:3ce8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0d.6: [8086:3cf5] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0e.0: [8086:3ca0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0e.1: [8086:3c46] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.0: [8086:3ca8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.1: [8086:3c71] type 00 class 0x088000 Sep 26 
11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.2: [8086:3caa] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.3: [8086:3cab] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.4: [8086:3cac] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.5: [8086:3cad] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:0f.6: [8086:3cae] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.0: [8086:3cb0] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.1: [8086:3cb1] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.2: [8086:3cb2] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.3: [8086:3cb3] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.5: [8086:3cb5] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.6: [8086:3cb6] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:10.7: [8086:3cb7] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:11.0: [8086:3cb8] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:13.0: [8086:3ce4] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:13.1: [8086:3c43] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:13.4: [8086:3ce6] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:13.5: [8086:3c44] type 00 class 0x110100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:3f:13.6: [8086:3c45] type 00 class 0x088000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: pci_cache_line_size set to 64 bytes Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: e820: reserve RAM buffer [mem 0xbddac000-0xbfffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: e820: reserve RAM buffer [mem 0x43ffff000-0x43fffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: vgaarb: setting as boot VGA device Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: vgaarb: bridge control possible Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: vgaarb: loaded Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpet0: 8 comparators, 64-bit 14.318180 MHz counter Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: Switched to clocksource tsc-early Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: VFS: Disk quotas dquot_6.6.0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pnp: PnP ACPI init Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0408-0x040f] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x04d0-0x04d1] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0310-0x0315] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0316-0x0317] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0700-0x071f] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0880-0x08ff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0900-0x097f] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0cd4-0x0cd7] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0cd0-0x0cd3] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0f50-0x0f58] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0ca0-0x0ca1] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x0ca4-0x0ca5] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [io 0x02f8-0x02ff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:01: [mem 0xff800000-0xffffffff] has been reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: system 00:06: [mem 0xf9efe000-0xf9efffff] could not be reserved Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pnp: PnP ACPI: found 7 devices Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: 
acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_INET protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TCP: Hash tables configured (established 131072 bind 65536) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, vmalloc) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_XDP protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: BAR 15: assigned [mem 0xf4000000-0xf40fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.0: PCI bridge to [bus 02] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:01.1: PCI bridge to [bus 03] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.0: PCI bridge to [bus 0d] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.1: PCI bridge to [bus 18] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.2: PCI bridge to [bus 19] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:03.3: PCI bridge to [bus 1a] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:11.0: PCI bridge to [bus 1c] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.0: PCI bridge to [bus 0a] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: BAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: BAR 7: assigned [mem 0xf4000000-0xf401ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.0: BAR 10: assigned [mem 0xf4020000-0xf403ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: BAR 
7: assigned [mem 0xf4040000-0xf405ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:06:00.1: BAR 10: assigned [mem 0xf4060000-0xf407ffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: PCI bridge to [bus 06] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [io 0x5000-0x5fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [mem 0xf7c00000-0xf7ffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.4: bridge window [mem 0xf4000000-0xf40fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.6: PCI bridge to [bus 07] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: PCI bridge to [bus 01] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1e.0: PCI bridge to [bus 1b] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 10 [io 0x03c0-0x03df window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:06: resource 1 [mem 0xf7c00000-0xf7ffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:06: resource 2 [mem 0xf4000000-0xf40fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 4 [mem 0xf4000000-0xf7ffffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: 
resource 5 [io 0x1000-0x7fff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 6 [io 0x0000-0x03af window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 7 [io 0x03e0-0x0cf7 window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 8 [io 0x0d00-0x0fff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 9 [io 0x03b0-0x03bb window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 10 [io 0x03c0-0x03df window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1b: resource 11 [mem 0x000a0000-0x000bffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: BAR 14: assigned [mem 0xf8000000-0xf80fffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:00.0: PCI bridge to [bus 2f] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: BAR 6: assigned [mem 0xf8000000-0xf807ffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: BAR 6: assigned [mem 0xf8080000-0xf80fffff pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: BAR 7: assigned [mem 0xfa800000-0xfabfffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: BAR 7: no space for [mem size 0x00400000 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: BAR 7: failed to assign [mem size 0x00400000 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.0: BAR 10: assigned [mem 0xfac00000-0xfacfffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:21:00.1: BAR 10: assigned [mem 0xfad00000-0xfadfffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: PCI bridge to [bus 21] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: bridge window [mem 0xf8000000-0xf80fffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.0: bridge window [mem 0xf9f00000-0xfb7fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:01.1: PCI bridge to [bus 27] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.0: PCI bridge to [bus 24] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.1: PCI bridge to [bus 2c] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.2: PCI bridge to [bus 2d] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:03.3: PCI bridge to [bus 2e] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: Some PCI device resources are unassigned, try booting with pci=realloc Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: resource 4 [mem 0xf8000000-0xfbffffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:20: resource 5 [io 0x8000-0xffff window] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:21: resource 1 [mem 0xf8000000-0xf80fffff] Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:21: resource 2 [mem 0xf9f00000-0xfb7fffff 64bit pref] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: resource 4 [io 0x0000-0xffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:05.0: disabled boot interrupts on device [8086:3c28] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x6e0 took 11678 usecs Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x6e0 took 12053 usecs Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.0: disabled boot interrupts on device [8086:3c28] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pci 0000:20:05.0: quirk_disable_intel_boot_interrupt+0x0/0xe0 took 125285 usecs Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI: CLS 64 bytes, default 64 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Trying to unpack rootfs image as initramfs... Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: software IO TLB: mapped [mem 0x00000000b9dac000-0x00000000bddac000] (64MB) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Initialise system trusted keyrings Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type blacklist registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: zbud: loaded Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: integrity: Platform Keyring initialized Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: integrity: Machine keyring initialized Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_ALG protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: xor: automatically using best checksumming function avx Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type asymmetric registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Asymmetric key parser 'x509' registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing initrd memory: 21368K Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 245) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: io scheduler mq-deadline registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: io scheduler kyber registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: io scheduler bfq registered Sep 26 
11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Monitor-Mwait will be used to enter C-1 state Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Monitor-Mwait will be used to enter C-2 state Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: \_PR_.CP00: Found 2 idle states Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: button: Power Button [PWRF] Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: thermal: Thermal Zone [THM0] (8 C) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pstore: Registered erst as persistent store backend Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Non-volatile memory driver v1.3 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Linux agpgart interface v0.103 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ACPI: bus type drm_connector registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ahci 0000:00:1f.2: version 3.0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ahci 0000:00:1f.2: SSS flag set, parallel bus scan disabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ahci 0000:00:1f.2: AHCI 0001.0300 32 slots 6 ports 6 Gbps 0x3f impl SATA mode Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ahci 0000:00:1f.2: flags: 64bit ncq sntf ilck stag pm led clo pmp pio slum part ems apst Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tsc: Refined TSC clocksource calibration: 2094.949 MHz Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328cf0a17, max_idle_ns: 440795250041 ns Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: Switched to clocksource tsc Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host0: ahci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host1: ahci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host2: ahci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host3: ahci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host4: ahci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host5: ahci Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata1: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40100 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata2: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40180 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata3: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40200 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata4: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40280 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata5: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40300 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata6: SATA max UDMA/133 abar m2048@0xf6c40000 port 0xf6c40380 irq 35 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci: EHCI PCI platform driver Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1a.0: EHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1a.0: debug port 2 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata1.00: ATA-8: MB0500GCEHF, HPGD, max UDMA/133 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata1.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 32), AA Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata1.00: configured for UDMA/133 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 0:0:0:0: Direct-Access ATA MB0500GCEHF HPGD PQ: 0 ANSI: 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: Attached scsi generic sg0 type 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/466 GiB) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 512 bytes Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb1: Product: EHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb1: Manufacturer: Linux 6.0.0-rc7 ehci_hcd Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb1: SerialNumber: 0000:00:1a.0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 1-0:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 1-0:1.0: 2 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1d.0: EHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1d.0: debug port 2 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 6.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb2: Product: EHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb2: Manufacturer: Linux 6.0.0-rc7 ehci_hcd Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sda: sda1 sda2 sda3 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata2.00: ATA-8: MB0500GCEHF, HPGD, max UDMA/133 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata2.00: 976773168 sectors, multi 0: LBA48 NCQ (depth 32), AA Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata2.00: configured for UDMA/133 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 1:0:0:0: Direct-Access ATA MB0500GCEHF HPGD PQ: 0 ANSI: 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: Attached scsi generic sg1 type 0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/466 GiB) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 512 bytes Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb2: SerialNumber: 0000:00:1d.0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-0:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-0:1.0: 2 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver Sep 26 11:12:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ohci-pci: OHCI PCI platform driver Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd 0000:01:00.4: UHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdb: sdb1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd 0000:01:00.4: detected 8 ports Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd 0000:01:00.4: port count misdetected? forcing to 2 ports Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: uhci_hcd 0000:01:00.4: irq 36, io port 0x00003c00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 6.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb3: Product: UHCI Host Controller Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb3: Manufacturer: Linux 6.0.0-rc7 uhci_hcd Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb usb3: SerialNumber: 0000:01:00.4 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 3-0:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 3-0:1.0: 2 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbserial_generic Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbserial: USB Serial support registered for generic Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mousedev: PS/2 mouse device common for all mice Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rtc_cmos 00:03: registered as rtc0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rtc_cmos 00:03: setting system clock to 2022-09-26T15:12:00 UTC (1664205120) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rtc_cmos 00:03: alarms up to one day, 114 bytes nvram, hpet irqs Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device-mapper: uevent: version 1.0.3 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: intel_pstate: Intel P-state driver initializing Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hid: raw HID events driver (C) Jiri Kosina Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbcore: registered new interface driver usbhid Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usbhid: USB HID core driver Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: drop_monitor: Initializing network drop monitor service Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Initializing XFRM netlink socket Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_INET6 protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Segment Routing with IPv6 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RPL Segment Routing with IPv6 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 1-1: new high-speed USB device number 2 using ehci-pci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata5: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata5.00: ATAPI: hp DVD-RAM GH80N, RS03, max UDMA/100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata5.00: configured for UDMA/100 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 4:0:0:0: CD-ROM hp DVD-RAM GH80N RS03 PQ: 0 ANSI: 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: In-situ OAM (IOAM) with IPv6 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mip6: Mobile IPv6 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NET: Registered PF_PACKET protocol family Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: microcode: sig=0x206d7, pf=0x8, revision=0x710 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: microcode: Microcode Update Driver: v2.2. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IPI shorthand broadcast: enabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: AVX version of gcm_enc/dec engaged. 
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: AES CTR mode by8 optimization enabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sched_clock: Marking stable (22633885996, 8611251011)->(33757469401, -2512332394) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: registered taskstats version 1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Loading compiled-in X.509 certificates Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1: new high-speed USB device number 2 using ehci-pci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 4:0:0:0: [sr0] scsi3-mmc drive: 48x/48x writer dvd-ram cd/rw xa/form2 cdda tray Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Loaded X.509 cert 'Build time autogenerated kernel key: aac63769703cd2dc46616d1b8524ea4e51cafd64' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: zswap: loaded using pool lzo/zbud Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: page_owner is disabled Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 1-1:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type ._fscrypt registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 1-1:1.0: 6 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type .fscrypt registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type fscrypt-provisioning registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 4:0:0:0: Attached scsi CD-ROM sr0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=yes, fsverity=yes Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pstore: Using crash dump compression: deflate Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 4:0:0:0: Attached scsi generic sg2 type 5 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type big_key registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-1:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-1:1.0: 8 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Key type encrypted registered Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Loading compiled-in module X.509 certificates Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1.3: new high-speed USB device number 3 using ehci-pci Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Loaded X.509 cert 'Build time autogenerated kernel key: aac63769703cd2dc46616d1b8524ea4e51cafd64' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ima: Allocated hash algorithm: sha256 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ima: No architecture policies found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: Initialising EVM extended attributes: Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.selinux Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.SMACK64 (disabled) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.SMACK64EXEC (disabled) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.SMACK64TRANSMUTE (disabled) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.SMACK64MMAP (disabled) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.apparmor (disabled) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.ima Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: security.capability Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: evm: HMAC attrs: 0x1 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1.3: New USB device found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: usb 2-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-1.3:1.0: USB hub found Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hub 2-1.3:1.0: 2 ports detected Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: alg: No test for 842 (842-scomp) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: alg: No test for 842 (842-generic) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: PM: Magic number: 6:181:185 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: acpi device:44: hash matches Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAS: Correctable Errors collector initialized. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing unused decrypted memory: 2036K Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing unused kernel image (initmem) memory: 3016K Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Write protecting the kernel read-only data: 30720k Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing unused kernel image (text/rodata gap) memory: 2036K Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Freeing unused kernel image (rodata/data gap) memory: 700K Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found. 
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rodata_test: all tests were successful Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/mm: Checking user space page tables Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Run /init as init process Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: with arguments: Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: /init Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rhgb Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: with environment: Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: HOME=/ Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TERM=linux Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-6.0.0-rc7 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: BOOTIF=9C-8E-99-6E-14-D8 Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pstore: crypto_acomp_decompress failed, ret = -22! Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd 251.4-53.fc38 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Detected architecture x86-64. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Running in initial RAM disk. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Hostname set to . Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Failed to open libbpf, cgroup BPF features disabled: Operation not supported Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Queued start job for default target initrd.target. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target local-fs.target - Local File Systems. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target slices.target - Slice Units. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target swap.target - Swaps. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target timers.target - Timer Units. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target sockets.target - Socket Units. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: memstrack.service - Memstrack Anylazing Service was skipped because all trigger condition checks failed. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-journald.service - Journal Service... Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[452]: Journal started Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[452]: Runtime Journal (/run/log/journal/8c111416d0674a4db9fef2eb75d8b6d5) is 8.0M, max 319.3M, 311.3M free. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-modules-load[455]: Module 'msr' is built in Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-journald.service - Journal Service. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205124.883:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205124.888:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205124.900:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205125.746:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205125.748:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 26 11:12:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205125.753:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-cmdline-ask.service - dracut ask for additional cmdline parameters was skipped because all trigger condition checks failed. Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205126.155:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-cmdline[470]: dracut-38 (Rawhide Prerelease) dracut-057-3.fc38 Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-cmdline[470]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-6.0.0-rc7 root=/dev/mapper/fedora_hpe--ml350egen8--01-root ro BOOTIF=9C-8E-99-6E-14-D8 console=tty0 console=ttyS1,115200n81 rd.lvm.lv=fedora_hpe-ml350egen8-01/root rhgb Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205126.266:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664205126.316:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=6 op=LOAD Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=7 op=LOAD Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=8 op=LOAD Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205126.318:11): prog-id=6 op=LOAD Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[557]: Using default interface naming scheme 'v251'. Sep 26 11:12:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-pre-trigger.service - dracut pre-trigger hook was skipped because all trigger condition checks failed. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpwdt 0000:01:00.0: pretimeout: on. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hpwdt 0000:01:00.0: kdumptimeout: -1. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mgag200 0000:01:00.1: vgaarb: deactivate vga console Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Console: switching to colour dummy device 80x25 Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target sysinit.target - System Initialization. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: fbcon: mgag200drmfb (fb0) is primary device Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Console: switching to colour frame buffer device 128x48 Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting plymouth-start.service - Show Plymouth Boot Screen... Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Received SIGRTMIN+20 from PID 706 (plymouthd). Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started plymouth-start.service - Show Plymouth Boot Screen. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-start comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch was skipped because of a failed condition check (ConditionPathExists=!/run/plymouth/pid). Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-ask-password-plymouth.path - Forward Password Requests to Plymouth Directory Watch. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target paths.target - Path Units. Sep 26 11:12:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target basic.target - Basic System. 
Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: random: crng init done Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-initqueue[721]: Scanning devices sda3 sdb1 for LVM logical volumes fedora_hpe-ml350egen8-01/root Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-initqueue[742]: WARNING: File locking is disabled. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-initqueue[721]: fedora_hpe-ml350egen8-01/root linear Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dracut-initqueue[721]: fedora_hpe-ml350egen8-01/root linear Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Found device dev-mapper-fedora_hpe\x2d\x2dml350egen8\x2d\x2d01\x2droot.device - /dev/mapper/fedora_hpe--ml350egen8--01-root. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-pre-mount.service - dracut pre-mount hook was skipped because all trigger condition checks failed. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/mapper/fedora_hpe--ml350egen8--01-root... Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-fsck[759]: /usr/sbin/fsck.xfs: XFS file system. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/mapper/fedora_hpe--ml350egen8--01-root. Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting sysroot.mount - /sysroot... Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled Sep 26 11:12:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: XFS (dm-0): Mounting V5 Filesystem Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: XFS (dm-0): Ending clean mount Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted sysroot.mount - /sysroot. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting initrd-parse-etc.service - Reload Configuration from the Real Root... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reloading. 
Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=9 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kauditd_printk_skb: 7 callbacks suppressed Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.394:19): prog-id=9 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=10 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.394:20): prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.394:21): prog-id=10 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.394:22): prog-id=11 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=11 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.395:23): prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.395:24): prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=12 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.396:25): prog-id=12 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.396:26): prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.397:27): prog-id=13 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=13 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=14 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664205131.397:28): prog-id=14 op=LOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished initrd-parse-etc.service - Reload Configuration from the Real Root. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-mount.service - dracut mount hook was skipped because all trigger condition checks failed. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target timers.target - Timer Units. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target basic.target - Basic System. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target paths.target - Path Units. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target slices.target - Slice Units. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target sockets.target - Socket Units. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target sysinit.target - System Initialization. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target swap.target - Swaps. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting plymouth-switch-root.service - Plymouth switch root service... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-udevd.service: Consumed 1.755s CPU time. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished plymouth-switch-root.service - Plymouth switch root service. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=plymouth-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Switching root. Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[452]: Received SIGTERM from PID 1 (systemd). Sep 26 11:12:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[452]: Journal stopped Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability network_peer_controls=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability open_perms=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability extended_socket_class=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability always_check_network=0 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability cgroup_seclabel=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Successfully loaded SELinux policy in 197.478ms. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: RTC configured in localtime, applying delta of -240 minutes to system time. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 91.514ms. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd 251.4-53.fc38 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 +PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD +BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Detected architecture x86-64. 
Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: bpf-lsm: Failed to load BPF object: No such process Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: zram: Added device: zram0 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-sshd\x2dkeygen.slice - Slice /system/sshd-keygen. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-systemd\x2dzram\x2dsetup.slice - Slice /system/systemd-zram-setup. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice user.slice - User and Session Slice. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch was skipped because of a failed condition check (ConditionPathExists=!/run/plymouth/pid). Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target paths.target - Path Units. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target slices.target - Slice Units. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target time-set.target - System Time Set. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on dm-event.socket - Device-mapper event daemon FIFOs. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on lvm2-lvmpolld.socket - LVM2 poll daemon socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: auth-rpcgss-module.service - Kernel Module supporting RPCSEC_GSS was skipped because of a failed condition check (ConditionPathExists=/etc/krb5.keytab). Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: plymouth-switch-root.service: Deactivated successfully. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped plymouth-switch-root.service - Plymouth switch root service. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kauditd_printk_skb: 63 callbacks suppressed Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1131 audit(1664219537.317:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: fuse: init (API version 7.36) Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1131 audit(1664219537.346:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Stopped systemd-journald.service - Journal Service. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219537.608:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1131 audit(1664219537.608:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219537.609:96): prog-id=33 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219538.155:97): prog-id=34 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219538.159:98): prog-id=35 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219538.159:99): prog-id=0 op=UNLOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219538.159:100): prog-id=0 op=UNLOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-journald.service - Journal Service... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1305 audit(1664219538.210:101): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:syslogd_t:s0 res=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[860]: Journal started Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[860]: Runtime Journal (/run/log/journal/8c111416d0674a4db9fef2eb75d8b6d5) is 8.0M, max 319.3M, 311.3M free. 
Sep 26 11:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=15 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=16 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=17 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=18 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=19 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=20 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=21 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=22 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=23 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=24 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=25 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=26 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=27 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=28 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=29 op=LOAD Sep 26 15:12:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=30 op=LOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=31 op=LOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=32 op=LOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=33 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=34 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=35 op=LOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:syslogd_t:s0 res=1 Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[860]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffdd0e0b560 a2=4000 a3=7ffdd0e0b5ec items=0 ppid=1 pid=860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:syslogd_t:s0 key=(null) Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Queued start job for default target multi-user.target. Sep 26 15:12:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-modules-load[861]: Module 'msr' is built in Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because all trigger condition checks failed. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-journald.service - Journal Service. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 26 15:12:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 26 15:12:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-firstboot.service - First Boot Wizard was skipped because of a failed condition check (ConditionFirstBoot=yes). Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hwdb-update.service - Rebuild Hardware Database was skipped because of a failed condition check (ConditionNeedsUpdate=/etc). Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-sysusers.service - Create System Users was skipped because of a failed condition check (ConditionNeedsUpdate=/etc). Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[860]: Time spent on flushing to /var/log/journal/8c111416d0674a4db9fef2eb75d8b6d5 is 326.778ms for 1463 entries. Sep 26 15:12:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[860]: System Journal (/var/log/journal/8c111416d0674a4db9fef2eb75d8b6d5) is 13.2M, max 4.0G, 3.9G free. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-journald[860]: Received client request to flush runtime journal. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=36 op=LOAD Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=37 op=LOAD Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=38 op=LOAD Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: first-boot-complete.target - First Boot Complete was skipped because of a failed condition check (ConditionFirstBoot=yes). Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[878]: Using default interface naming scheme 'v251'. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Found device dev-zram0.device - /dev/zram0. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-zram-setup@zram0.service - Create swap on /dev/zram0... Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: zram0: detected capacity change from 0 to 16777216 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com zram-generator[924]: Setting up swapspace version 1, size = 8 GiB (8589930496 bytes) Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com zram-generator[924]: LABEL=zram0, UUID=6e9d4167-579d-4b8a-bbfe-e7bc43240e88 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-makefs[914]: /dev/zram0 successfully formatted as swap (label "zram0", uuid 6e9d4167-579d-4b8a-bbfe-e7bc43240e88) Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-zram-setup@zram0.service - Create swap on /dev/zram0. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-zram-setup@zram0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Activating swap dev-zram0.swap - Compressed Swap on /dev/zram0... 
Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dca service started, version 1.12.1 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: power_meter ACPI000D:00: Found ACPI power meter. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: power_meter ACPI000D:00: Ignoring unsafe software power cap! Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IPMI message handler: version 39.2 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi device interface Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Adding 8388604k swap on /dev/zram0. Priority:100 extents:1 across:8388604k SSDscFS Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Activated swap dev-zram0.swap - Compressed Swap on /dev/zram0. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target swap.target - Swaps. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si: IPMI System Interface driver Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2-0x0ca3] regsize 1 spacing 1 irq 0 Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 26 15:12:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Condition check resulted in dev-block-8:3.device - MB0500GCEHF 3 being skipped. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Condition check resulted in dev-block-8:17.device - MB0500GCEHF 1 being skipped. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice system-lvm2\x2dpvscan.slice - Slice /system/lvm2-pvscan. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting lvm2-pvscan@8:17.service - LVM event activation on device 8:17... Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting lvm2-pvscan@8:3.service - LVM event activation on device 8:3... 
Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com lvm[982]: pvscan[982] PV /dev/sdb1 online, VG fedora_hpe-ml350egen8-01 incomplete (need 1). Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com lvm[983]: pvscan[983] PV /dev/sda3 online, VG fedora_hpe-ml350egen8-01 is complete. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com lvm[983]: pvscan[983] VG fedora_hpe-ml350egen8-01 run autoactivation. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com lvm[983]: 1 logical volume(s) in volume group "fedora_hpe-ml350egen8-01" now active Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Condition check resulted in dev-disk-by\x2duuid-dd7f96ac\x2dc20b\x2d4dec\x2d81ea\x2d8d75c50920bb.device - MB0500GCEHF 2 being skipped. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com lvm[854]: 1 logical volume(s) in volume group "fedora_hpe-ml350egen8-01" monitored Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: input: PC Speaker as /devices/platform/pcspkr/input/input4 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished lvm2-pvscan@8:17.service - LVM event activation on device 8:17. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:17 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219542.443:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:17 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e: Intel(R) Ethernet Connection XL710 Network Driver Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e: Copyright (c) 2013 - 2019 Intel Corporation. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished lvm2-pvscan@8:3.service - LVM event activation on device 8:3. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219542.488:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.0: fw 4.40.35115 api 1.4 nvm 4.53 0x80001f5e 0.0.0 [8086:1583] [8086:0002] Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:3 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219542.502:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-pvscan@8:3 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x00000b, prod_id: 0x2000, dev_id: 0x13) Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: DCA enabled Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: added PHC on eth0 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 9c:8e:99:6e:14:d8 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: eth0: PBA No: 0960FF-0FF Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: DCA enabled Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: added PHC on eth1 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: Intel(R) Gigabit Ethernet Network Connection Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 9c:8e:99:6e:14:d9 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: eth1: PBA No: 0960FF-0FF Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.0: MAC address: 3c:fd:fe:a0:3a:10 Sep 26 15:12:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.0: PCI-Express: Speed 8.0GT/s Width x8 Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 32 RSS FD_ATR FD_SB NTUPLE VxLAN Geneve PTP VEPA Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting boot.mount - /boot... 
Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: XFS (sda2): Mounting V5 Filesystem Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAPL PMU: hw unit of domain pp0-core 2^-16 Joules Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAPL PMU: hw unit of domain package 2^-16 Joules Sep 26 15:12:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.1: fw 4.40.35115 api 1.4 nvm 4.53 0x80001f5e 0.0.0 [8086:1583] [8086:0002] Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: XFS (sda2): Ending clean mount Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted boot.mount - /boot. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target local-fs.target - Local File Systems. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: ldconfig.service - Rebuild Dynamic Linker Cache was skipped because all trigger condition checks failed. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting plymouth-read-write.service - Tell Plymouth To Write Out Runtime Data... Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: selinux-autorelabel-mark.service - Mark the need to relabel after reboot was skipped because of a failed condition check (ConditionSecurity=!selinux). Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because all trigger condition checks failed. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of a failed condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-machine-id-commit.service - Commit a transient machine-id on disk was skipped because of a failed condition check (ConditionPathIsMountPoint=/etc/machine-id). Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com bootctl[997]: Couldn't find EFI system partition, skipping. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219544.092:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.1: MAC address: 3c:fd:fe:a0:3a:11 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1130 audit(1664219544.303:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounting var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System... Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting auditd.service - Security Auditing Service... Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-journal-catalog-update.service - Rebuild Journal Catalog was skipped because of a failed condition check (ConditionNeedsUpdate=/var). Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=39 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=40 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219544.360:132): prog-id=39 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219544.360:133): prog-id=40 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=41 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1334 audit(1664219544.360:134): prog-id=41 op=LOAD Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.1: PCI-Express: Speed 8.0GT/s Width x8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com auditd[1015]: No plugins found, not dispatching events Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 32 RSS FD_ATR FD_SB NTUPLE VxLAN Geneve PTP VEPA Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: audit: type=1305 audit(1664219544.465:135): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1015]: SYSCALL arch=c000003e syscall=44 success=yes exit=60 a0=3 a1=7fffb5807cc0 a2=3c a3=0 items=0 ppid=1009 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null) Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: PROCTITLE proctitle="/sbin/auditd" Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: CONFIG_CHANGE op=set audit_pid=1015 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 Sep 26 15:12:24 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1015]: SYSCALL arch=c000003e syscall=44 success=yes exit=60 a0=3 a1=7fffb5805980 a2=3c a3=0 items=0 ppid=1009 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditd" exe="/usr/sbin/auditd" subj=system_u:system_r:auditd_t:s0 key=(null) Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: PROCTITLE proctitle="/sbin/auditd" Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com auditd[1015]: Init complete, auditd 3.0.9 listening for events (startup state enable) Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.0 ens4f0: renamed from eth2 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca0 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca0 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca0 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3ca8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3c71 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3c71 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3c71 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3caa Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3caa Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3caa Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cab Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cab Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cab Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cac Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cac Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cac Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cad Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cad Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cad Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cb8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cb8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cb8 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI 
ID 8086:3cf4 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf4 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf4 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf6 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf6 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf6 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf5 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf5 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Seeking for: PCI ID 8086:3cf5 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: iTCO_vendor_support: vendor-support=0 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC MC0: Giving out device to module sb_edac controller Sandy Bridge SrcID#0_Ha#0: DEV 0000:1f:0e.0 (INTERRUPT) Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: i40e 0000:21:00.1 ens4f1: renamed from eth3 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC MC1: Giving out device to module sb_edac controller Sandy Bridge SrcID#1_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT) Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: EDAC sbridge: Ver: 1.1.2 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.1 eno2: renamed from eth1 Sep 26 15:12:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0 eno1: renamed from eth0 Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: iTCO_wdt iTCO_wdt.1.auto: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RPC: Registered named UNIX socket transport module. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RPC: Registered udp transport module. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RPC: Registered tcp transport module. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=42 op=LOAD Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: intel_rapl_common: Found RAPL domain package Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: intel_rapl_common: Found RAPL domain core Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: intel_rapl_common: Found RAPL domain package Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: intel_rapl_common: Found RAPL domain core Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-update-done.service - Update is Completed was skipped because all trigger condition checks failed. 
Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished plymouth-read-write.service - Tell Plymouth To Write Out Runtime Data. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-read-write comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Mounted var-lib-nfs-rpc_pipefs.mount - RPC Pipe File System. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Received SIGRTMIN+20 from PID 706 (plymouthd). Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target rpc_pipefs.target. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=43 op=LOAD Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=44 op=LOAD Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=45 op=LOAD Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com augenrules[1036]: /sbin/augenrules: No change Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:unconfined_service_t:s0 op=add_rule key=(null) list=1 res=1 Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1052]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcc6675240 a2=420 a3=0 items=0 ppid=1036 pid=1052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:unconfined_service_t:s0 key=(null) Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com augenrules[1052]: No rules Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started auditd.service - Security Auditing Service. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1056]: SYSTEM_BOOT pid=1056 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: Positive Trust Anchors: Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: Using system hostname 'hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com'. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target sysinit.target - System Initialization. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started dnf-makecache.timer - dnf makecache --timer. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on pcscd.socket - PC/SC Smart Card Daemon Activation Socket. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on sssd-kcm.socket - SSSD Kerberos Cache Manager responder socket. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target sockets.target - Socket Units. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: rpmdb-migrate.service - RPM database migration to /usr was skipped because of a failed condition check (ConditionPathExists=/var/lib/rpm/.migratedb). 
Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: rpmdb-rebuild.service - RPM database rebuild was skipped because of a failed condition check (ConditionPathExists=/usr/lib/sysimage/rpm/.rebuilddb). Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target basic.target - Basic System. Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting NetworkManager.service - Network Manager... Sep 26 15:12:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=46 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: <info>  [1664219546.1070] NetworkManager (version 1.40.0-1.fc38) is starting... (boot:a47cf067-6fcb-4c73-b1e5-1566d821aee6) Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: <info>  [1664219546.1076] Read config: /etc/NetworkManager/NetworkManager.conf Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting chronyd.service - NTP client/server... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting flatpak-add-fedora-repos.service - Add Fedora flatpak repositories... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: mdmonitor.service - Software RAID monitoring and management was skipped because of a failed condition check (ConditionPathExists=/etc/mdadm.conf). Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: sshd-keygen@ecdsa.service - OpenSSH ecdsa Server Key Generation was skipped because all trigger condition checks failed. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: sshd-keygen@ed25519.service - OpenSSH ed25519 Server Key Generation was skipped because all trigger condition checks failed. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: sshd-keygen@rsa.service - OpenSSH rsa Server Key Generation was skipped because all trigger condition checks failed. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target sshd-keygen.target. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: sssd.service - System Security Services Daemon was skipped because all trigger condition checks failed. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target nss-user-lookup.target - User and Group Name Lookups. 
Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=47 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=48 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=49 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: Frequency 13.867 +/- 0.836 ppm read from /var/lib/chrony/drift Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: Using right/UTC timezone to obtain leap second data Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: Loaded seccomp filter (level 2) Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-logind.service - User Login Management... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started chronyd.service - NTP client/server. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=chronyd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dracut-shutdown comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=50 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=51 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=52 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting chrony-wait.service - Wait for chrony to synchronize system clock... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=53 op=LOAD Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-logind[1065]: New seat seat0. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dbus-broker.service - D-Bus System Message Bus... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-logind[1065]: Watching system buttons on /dev/input/event0 (Power Button) Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started dbus-broker.service - D-Bus System Message Bus. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbus-broker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dbus-broker-lau[1067]: Ready Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-logind comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219546.8624] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219546.8933] manager[0x557ffb09d030]: monitoring kernel firmware directory '/lib/firmware'. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started NetworkManager.service - Network Manager. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target network.target - Network. Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting NetworkManager-wait-online.service - Network Manager Wait Online... Sep 26 15:12:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting gssproxy.service - GSSAPI Proxy Daemon... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting sshd.service - OpenSSH server daemon... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=54 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=55 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=56 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com sshd[1079]: Server listening on 0.0.0.0 port 22. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com sshd[1079]: Server listening on :: port 22. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started sshd.service - OpenSSH server daemon. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sshd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started gssproxy.service - GSSAPI Proxy Daemon. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=gssproxy comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: rpc-gssd.service - RPC security service for NFS client and server was skipped because of a failed condition check (ConditionPathExists=/etc/krb5.keytab). Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target nfs-client.target - NFS client services. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-user-sessions comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Created slice user-0.slice - User Slice of UID 0. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting plymouth-quit-wait.service - Hold until boot process finishes up... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting plymouth-quit.service - Terminate Plymouth Boot Screen... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting user-runtime-dir@0.service - User Runtime Directory /run/user/0... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.4771] hostname: hostname: using hostnamed Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.4772] hostname: static hostname changed from (none) to "hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com" Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.4799] dns-mgr: init: dns=systemd-resolved rc-manager=unmanaged (auto), plugin=systemd-resolved Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished user-runtime-dir@0.service - User Runtime Directory /run/user/0. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user-runtime-dir@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting user@0.service - User Manager for UID 0... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1092]: USER_ACCT pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='op=PAM:accounting grantors=pam_unix acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1092]: CRED_ACQ pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='op=PAM:setcred grantors=? acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1092]: USER_ROLE_CHANGE pid=1092 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='pam: default-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 selected-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0) Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1092]: USER_START pid=1092 uid=0 auid=0 ses=1 subj=system_u:system_r:init_t:s0 msg='op=PAM:session_open grantors=pam_selinux,pam_selinux,pam_loginuid,pam_namespace,pam_systemd_home,pam_keyinit,pam_limits,pam_systemd,pam_unix acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=57 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=58 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=59 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=60 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=61 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8358] manager[0x557ffb09d030]: rfkill: Wi-Fi hardware radio set enabled Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8361] manager[0x557ffb09d030]: rfkill: WWAN hardware radio set enabled Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8632] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8633] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8634] manager: Networking is enabled by state file Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=62 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.8734] settings: Loaded settings plugin: keyfile (internal) Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Listening on systemd-rfkill.socket - Load/Save RF Kill Switch Status /dev/rfkill Watch. 
Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=63 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=64 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=65 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=66 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=67 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=68 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=69 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=70 op=LOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9012] dhcp: init: Using DHCP client 'internal' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9013] device (lo): carrier: link connected Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9019] manager: (lo): new Generic device (/org/freedesktop/NetworkManager/Devices/1) Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service... Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9046] manager: (eno1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9053] device (eno1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started NetworkManager-dispatcher.service - Network Manager Script Dispatcher Service. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9616] manager: (eno2): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219547.9622] device (eno2): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Received SIGRTMIN+21 from PID 706 (plymouthd). 
Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Received SIGRTMIN+21 from PID 706 (plymouthd). Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished plymouth-quit-wait.service - Hold until boot process finishes up. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-quit-wait comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished plymouth-quit.service - Terminate Plymouth Boot Screen. Sep 26 15:12:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=plymouth-quit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=getty@tty1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=serial-getty@ttyS1 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target getty.target - Login Prompts. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219548.0276] manager: (ens4f0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4) Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219548.0281] device (ens4f0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219548.0396] manager: (ens4f1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/5) Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219548.0403] device (ens4f1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Queued start job for default target default.target. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Created slice app.slice - User Application Slice. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: grub-boot-success.timer - Mark boot as successful after the user session has run 2 minutes was skipped because of a failed condition check (ConditionUser=!@system). Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Reached target paths.target - Paths. 
Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Reached target timers.target - Timers. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: pipewire-pulse.socket - PipeWire PulseAudio was skipped because of a failed condition check (ConditionUser=!root). Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Listening on pipewire.socket - PipeWire Multimedia System Socket. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Starting systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories... Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Reached target sockets.target - Sockets. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Finished systemd-tmpfiles-setup.service - Create User's Volatile Files and Directories. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Reached target basic.target - Basic System. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Reached target default.target - Main User Target. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Startup finished in 752ms. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started user@0.service - User Manager for UID 0. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=user@0 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com flatpak[1061]: system: Added remote fedora to oci+https://registry.fedoraproject.org Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com flatpak[1120]: system: Added remote fedora-testing to oci+https://registry.fedoraproject.org#testing Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished flatpak-add-fedora-repos.service - Add Fedora flatpak repositories. Sep 26 15:12:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=flatpak-add-fedora-repos comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: igb 0000:06:00.0 eno1: igb: eno1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5208] device (eno1): carrier: link connected Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5214] device (eno1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5238] policy: auto-activating connection 'eno1' (0a3074b3-9b5f-477f-9cf0-41b90bb84112) Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5248] device (eno1): Activation: starting connection 'eno1' (0a3074b3-9b5f-477f-9cf0-41b90bb84112) Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5250] device (eno1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5255] manager: NetworkManager state is now CONNECTING Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5259] device (eno1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5287] device (eno1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219551.5306] dhcp4 (eno1): activation: beginning transaction (timeout in 45 seconds) Sep 26 15:12:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219553.5684] policy: set 'eno1' (eno1) as default for IPv6 routing and DNS Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6174] device (eno1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6210] device (eno1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6215] device (eno1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6225] manager: NetworkManager state is now CONNECTED_SITE Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6233] device (eno1): Activation: successful, device activated. Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6247] manager: NetworkManager state is now CONNECTED_GLOBAL Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219555.6257] manager: startup complete Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished NetworkManager-wait-online.service - Network Manager Wait Online. 
Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target network-online.target - Network is Online. Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started anamon.service - Anaconda Monitoring (anamon) post-boot notification program. Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=anamon comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting rpc-statd-notify.service - Notify NFS peers of a restart... Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com sm-notify[1137]: Version 2.6.2 starting Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started rpc-statd-notify.service - Notify NFS peers of a restart. Sep 26 15:12:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=rpc-statd-notify comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219556.2815] dhcp4 (eno1): state changed new lease, address=10.16.216.161 Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com NetworkManager[1057]: [1664219556.2823] policy: set 'eno1' (eno1) as default for IPv4 routing and DNS Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: eno1: Bus client set search domain list to: hpe2.lab.eng.bos.redhat.com Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: eno1: Bus client set default route setting: yes Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-resolved[1035]: eno1: Bus client set DNS server list to: 10.2.32.1, 10.11.5.19 Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: anamon.service: Deactivated successfully. Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: anamon.service: Unit process 1138 (anamon) remains running after unit stopped. Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: anamon.service: Unit process 1140 (journalctl) remains running after unit stopped. Sep 26 15:12:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=anamon comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1169]: CRYPTO_KEY_USER pid=1169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:66:c0:7a:bb:63:76:3f:a6:0d:bb:0d:f4:d6:a5:52:34:74:0d:3a:90:1f:c5:80:af:f6:7c:b0:16:9f:3c:bc:d7 direction=? spid=1169 suid=0 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? 
res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRYPTO_SESSION pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=start direction=from-server cipher=aes256-gcm@openssh.com ksize=256 mac= pfs=curve25519-sha256 spid=1169 suid=74 rport=37956 laddr=10.16.216.161 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRYPTO_SESSION pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=start direction=from-client cipher=aes256-gcm@openssh.com ksize=256 mac= pfs=curve25519-sha256 spid=1169 suid=74 rport=37956 laddr=10.16.216.161 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_AUTH pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=pubkey_auth grantors=auth-key acct="root" exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRYPTO_KEY_USER pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=negotiate kind=auth-key fp=SHA256:2b:10:ae:1b:c0:ec:c3:5b:a5:eb:aa:46:d9:cc:91:b3:92:1d:3b:70:40:88:3e:cb:38:84:d2:3f:b9:c9:18:d9 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_ACCT pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=pam_unix,pam_localuser acct="root" exe="/usr/sbin/sshd" hostname=10.8.0.181 addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com sshd[1168]: Accepted publickey for root from 10.8.0.181 port 37956 ssh2: RSA SHA256:KxCuG8Dsw1ul66pG2cyRs5IdO3BAiD7LOITSP7nJGNk Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRYPTO_KEY_USER pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=session fp=? direction=both spid=1169 suid=74 rport=37956 laddr=10.16.216.161 lport=22 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRED_ACQ pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/sbin/sshd" hostname=10.8.0.181 addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_ROLE_CHANGE pid=1168 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='pam: default-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 selected-context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 exe="/usr/sbin/sshd" hostname=10.8.0.181 addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-logind[1065]: New session 2 of user root. Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started session-2.scope - Session 2 of User root. 
Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com sshd[1168]: pam_unix(sshd:session): session opened for user root(uid=0) by (uid=0) Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_START pid=1168 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:session_open grantors=pam_selinux,pam_loginuid,pam_selinux,pam_namespace,pam_keyinit,pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_umask,pam_lastlog acct="root" exe="/usr/sbin/sshd" hostname=10.8.0.181 addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1170]: CRYPTO_KEY_USER pid=1170 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:66:c0:7a:bb:63:76:3f:a6:0d:bb:0d:f4:d6:a5:52:34:74:0d:3a:90:1f:c5:80:af:f6:7c:b0:16:9f:3c:bc:d7 direction=? spid=1170 suid=0 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1170]: CRED_ACQ pid=1170 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=PAM:setcred grantors=pam_env,pam_localuser,pam_unix acct="root" exe="/usr/sbin/sshd" hostname=10.8.0.181 addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_LOGIN pid=1168 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=0 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: USER_START pid=1168 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login id=0 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=ssh res=success' Sep 26 15:12:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1168]: CRYPTO_KEY_USER pid=1168 uid=0 auid=0 ses=2 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:66:c0:7a:bb:63:76:3f:a6:0d:bb:0d:f4:d6:a5:52:34:74:0d:3a:90:1f:c5:80:af:f6:7c:b0:16:9f:3c:bc:d7 direction=? spid=1171 suid=0 exe="/usr/sbin/sshd" hostname=? addr=10.8.0.181 terminal=? res=success' Sep 26 15:12:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: Running test [R:12663582 T:4 - Boot test - Kernel: 6.0.0-rc7] Sep 26 15:12:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: Selected source 10.13.199.1 Sep 26 15:12:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com chronyd[1064]: System clock TAI offset set to 37 seconds Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished chrony-wait.service - Wait for chrony to synchronize system clock. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=chrony-wait comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target time-sync.target - System Time Synchronized. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started fstrim.timer - Discard unused blocks once a week. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started raid-check.timer - Weekly RAID setup health check. 
Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started unbound-anchor.timer - daily update of the root trust anchor for DNSSEC. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target timers.target - Timer Units. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting restraintd.service - The restraint harness.... Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started restraintd.service - The restraint harness.. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=restraintd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reached target multi-user.target - Multi-User System. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: Listening on http://localhost:8081 Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1318]: SYSTEM_RUNLEVEL pid=1318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='old-level=N new-level=3 comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Startup finished in 24.385s (kernel) + 9.130s (initrd) + 30.520s (userspace) = 1min 4.037s. Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp-runlevel comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-update-utmp-runlevel comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: * Fetching recipe: http://lab-02.rhts.eng.bos.redhat.com:8000//recipes/12663582/ Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: Ignoring Server Running state Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: * Parsing recipe Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: * Running recipe Sep 26 15:12:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: ** Continuing task: 150692600 [/mnt/tests/github.com/beaker-project/beaker-core-tasks/archive/master.tar.gz/reservesys] Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: ** Preparing metadata Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: ** Refreshing peer role hostnames: Retries 0 Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: ** Updating env vars Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: *** Current Time: Mon Sep 26 15:12:43 2022 Localwatchdog at: * Disabled! * Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com restraintd[1317]: ** Running task: 150692600 [/distribution/reservesys] Sep 26 15:12:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: Running test [R:12663582 T:150692600 - /distribution/reservesys - Kernel: 6.0.0-rc7] Sep 26 15:12:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Sep 26 15:12:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NetworkManager-dispatcher comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:13:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 26 15:13:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:13:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:13:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:13:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=71 op=LOAD Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=72 op=LOAD Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=73 op=LOAD Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 15:15:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 26 15:16:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started run-r0c61cb42e4424a42bcdadd135cc57b0b.service - /usr/bin/systemctl start man-db-cache-update. Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=run-r0c61cb42e4424a42bcdadd135cc57b0b comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting man-db-cache-update.service... Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Reloading. Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=74 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=75 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=76 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=77 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=78 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=79 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=80 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=81 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=82 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=83 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=84 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD 
Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=85 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=86 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=87 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=88 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=89 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=90 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=91 op=LOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:16:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Queuing reload/restart jobs for marked units… Sep 26 15:16:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=92 op=LOAD Sep 26 15:16:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=93 op=LOAD Sep 26 15:16:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=94 op=LOAD Sep 26 15:16:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 15:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 15:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: Running test [R:12663582 T:5 - Storage - blktests - blk - Kernel: 6.0.0-rc7] Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: man-db-cache-update.service: Deactivated successfully. Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished man-db-cache-update.service. Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=man-db-cache-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=man-db-cache-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: run-r0c61cb42e4424a42bcdadd135cc57b0b.service: Deactivated successfully. 
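Note: the restraint harness brackets each Beaker task with a "Running test [R:<recipe> T:<task> - <name> - Kernel: <version>]" marker, visible above for the boot test (T:4), /distribution/reservesys (T:150692600), and the blktests task (T:5). A minimal sketch for splitting a saved journal dump into per-task segments on those markers follows; the file name and the returned data layout are illustrative assumptions.

import re

# Illustrative sketch: split a plain-text journal dump into per-task chunks
# using the restraint "Running test [R:... T:...]" markers (file name assumed).
MARKER_RE = re.compile(
    r"Running test \[R:(?P<recipe>\d+) T:(?P<task>\S+) - (?P<name>.+?) - Kernel:"
)

def split_by_task(log_path="journal.txt"):
    tasks, current, lines = {}, None, []
    with open(log_path, errors="replace") as fh:
        for line in fh:
            m = MARKER_RE.search(line)
            if m:
                if current is not None:
                    tasks[current] = lines
                current, lines = (m.group("task"), m.group("name")), []
            lines.append(line)
    if current is not None:
        tasks[current] = lines
    return tasks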
Sep 26 15:16:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=run-r0c61cb42e4424a42bcdadd135cc57b0b comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/001 at 2022-09-26 15:16:35 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[11863]: run blktests block/001 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host6: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host7: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host8: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. 
poll_q/nr_hw = (0/1) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host9: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write Protect is off Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Mode Sense: 73 00 10 08 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg5 type 0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg6 type 0 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write Protect is off Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Write Protect is off Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Preferred minimum I/O size 512 bytes Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 
15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Optimal transfer size 524288 bytes Sep 26 15:16:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Synchronizing SCSI cache Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Synchronizing SCSI cache Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Synchronizing SCSI cache Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write Protect is off Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Sense not available. Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Synchronizing SCSI cache Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Sense not available. 
Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Sense not available. Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] 0-byte physical blocks Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Write Protect is off Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Mode Sense: 00 00 00 00 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Sense not available. Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write Protect is off Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Sense not available. Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Sense not available. 
Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 0-byte physical blocks Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 00 00 00 00 Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Asking for cache data failed Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Assuming drive cache: write through Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Synchronizing SCSI cache Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0 
Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write Protect is off Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Mode Sense: 73 00 10 08 Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Preferred minimum I/O size 512 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Optimal transfer size 524288 bytes Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error 
on dev sdd, logical block 0, async page read Sep 26 15:16:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdd, logical block 0, async page read Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdc, logical block 0, async page read Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdd, logical block 0, async page read Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdc, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdc, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdd, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. 
Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdc, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdd, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdc, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdd, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdc, logical block 0, async page read Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdd: unable to read partition table Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdc: unable to read partition table Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Synchronizing SCSI cache Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Sense not available. Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Sense not available. 
Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 0-byte physical blocks Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 00 00 00 00 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Asking for cache data failed Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Assuming drive cache: write through Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Sense not available. Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Sense not available. 
Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 0-byte physical blocks Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 00 00 00 00 Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write Protect is off Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Mode Sense: 73 00 10 08 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Preferred minimum I/O size 512 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Optimal transfer size 524288 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write Protect is off Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:41 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Write Protect is off Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg5 type 0 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. 
Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdf: unable to read partition table Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdd: unable to read partition table Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sde: unable to read partition table Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Synchronizing SCSI cache Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Sense not available. Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Write Protect is off Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] No Caching mode page found Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Sense not available. 
Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Assuming drive cache: write through Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Write Protect is off Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: 
[sde] Mode Sense: 00 00 00 00 Sep 26 15:16:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. 
Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdf: unable to read partition table Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Write Protect is off Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Synchronizing SCSI cache Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred 
minimum I/O size 512 bytes Sep 26 15:16:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: blk_print_req_error: 32 callbacks suppressed Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: buffer_io_error: 32 callbacks suppressed Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. 
Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Write Protect is off Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Synchronizing SCSI cache Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Sense not available. 
Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Sense not available. Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sde: unable to read partition table Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 0-byte physical blocks Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 00 00 00 00 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Asking for cache data failed Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Synchronizing SCSI cache Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. 
Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Write Protect is off Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdf, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Sense not available. 
Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Sense not available. Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 0-byte physical blocks Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 00 00 00 00 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Asking for cache data failed Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdf, logical block 0, async page read Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sdf, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdf, logical block 0, async page read Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sdf, logical block 0, async page read Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. 
Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sde: unable to read partition table Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sdf: unable to read partition table Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Sense not available. Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] 0-byte physical blocks Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Write Protect is off Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Write Protect is off Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] No Caching mode page found Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Asking for cache data failed Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Assuming drive cache: write through Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] 
Optimal transfer size 524288 bytes Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Sense not available. Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 73 00 10 08 Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Sense not available. 
Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 0-byte physical blocks Sep 26 15:16:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 00 00 00 00 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Write Protect is off Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Write Protect is off Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Mode Sense: 00 00 00 00 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Asking 
for cache data failed Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Preferred minimum I/O size 512 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Optimal transfer size 524288 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Synchronizing SCSI cache Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 00 00 00 00 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Asking for cache data failed Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Assuming drive cache: write through Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 00 00 00 00 Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Asking for cache data failed Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Assuming drive cache: write through Sep 26 15:16:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Synchronizing SCSI cache Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:50 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: Attached scsi generic sg4 type 0 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Write Protect is off Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Mode Sense: 73 00 10 08 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] No Caching mode page found Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Assuming drive cache: write through Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Preferred minimum I/O size 512 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Optimal transfer size 524288 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 8:0:0:0: [sdc] Attached SCSI disk Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Write Protect is off Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Mode Sense: 73 00 10 08 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Asking for cache data failed Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Assuming drive cache: write through Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Preferred minimum I/O size 512 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Optimal transfer size 524288 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] 16384 512-byte logical blocks: (8.39 MB/8.00 MiB) Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write Protect is off Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Mode Sense: 73 00 10 08 Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Preferred minimum I/O size 512 bytes Sep 26 15:16:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Optimal transfer size 524288 bytes Sep 26 15:16:51 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 9:0:0:0: [sdd] Attached SCSI disk Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: Direct-Access Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Attached scsi generic sg3 type 0 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: blk_print_req_error: 11 callbacks suppressed Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: buffer_io_error: 11 callbacks suppressed Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ldm_validate_partition_table(): Disk read failed. Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: device offline error, dev sde, sector 0 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev sde, logical block 0, async page read Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sde: unable to read partition table Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 7:0:0:0: [sde] Attached SCSI disk Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Read Capacity(16) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Sense not available. 
Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Read Capacity(10) failed: Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Sense not available. Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 0 512-byte logical blocks: (0 B/0 B) Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] 0-byte physical blocks Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Write Protect is off Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Mode Sense: 00 00 00 00 Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Asking for cache data failed Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Assuming drive cache: write through Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Preferred minimum I/O size 512 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Optimal transfer size 524288 bytes not a multiple of physical block size (0 bytes) Sep 26 15:16:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sd 6:0:0:0: [sdf] Attached SCSI disk Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host6: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host7: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. poll_q/nr_hw = (0/1) Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host8: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_debug:sdebug_driver_probe: scsi_debug: trim poll_queues to 0. 
poll_q/nr_hw = (0/1) Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi host9: scsi_debug: version 0191 [20210520] dev_size_mb=8, opts=0x0, submit_queues=1, statistics=0 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg5 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg6 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 
8:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive 
Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. 
Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11872]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11872]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11881]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11872]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11881]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:56 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11875]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11872]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: 
Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 
Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg5 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11873]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 
Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 
7:0:0:0: [sr4] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11873]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:16:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: 
Power-on or device reset occurred Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 26 15:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:03 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11881]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on 
or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11873]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11873]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux 
scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter1/host7/target7:0:0/7:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11872]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11875]: sr2: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter0/host6/target6:0:0/6:0:0:0/block/sr2/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 
0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 
9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr3: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr3/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11874]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:08 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 6:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 7:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Power-on or device reset occurred Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Power-on or device reset occurred Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: [sr2] scsi-1 drive Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 9:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: [sr3] scsi-1 drive Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi CD-ROM sr2 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Power-on or device reset occurred Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: [sr4] scsi-1 drive Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 6:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi CD-ROM sr3 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block device autoloading is deprecated and will be removed. Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi CD-ROM sr4 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 7:0:0:0: Attached scsi generic sg4 type 5 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi 8:0:0:0: CD-ROM Linux scsi_debug 0191 PQ: 0 ANSI: 7 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 9:0:0:0: Attached scsi generic sg5 type 5 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Power-on or device reset occurred Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: [sr1] scsi-1 drive Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi CD-ROM sr1 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sr 8:0:0:0: Attached scsi generic sg3 type 5 Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr1: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter2/host8/target8:0:0/8:0:0:0/block/sr1/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd-udevd[11869]: sr4: /usr/lib/udev/rules.d/60-block-scheduler.rules:5 Failed to write ATTR{/sys/devices/pseudo_0/adapter3/host9/target9:0:0/9:0:0:0/block/sr4/queue/scheduler}, ignoring: No such file or directory Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=95 op=LOAD Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=96 op=LOAD Sep 26 15:17:08 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=97 op=LOAD Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 15:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/006 at 2022-09-26 15:17:09 Sep 26 15:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[12877]: run blktests block/006 Sep 26 15:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/016 at 2022-09-26 15:17:24 Sep 26 15:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13134]: run blktests block/016 Sep 26 15:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Created slice background.slice - User Background Tasks Slice. Sep 26 15:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Sep 26 15:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1092]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Sep 26 15:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/017 at 2022-09-26 15:17:30 Sep 26 15:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13286]: run blktests block/017 Sep 26 15:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/018 at 2022-09-26 15:17:32 Sep 26 15:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13438]: run blktests block/018 Sep 26 15:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/021 at 2022-09-26 15:17:36 Sep 26 15:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13597]: run blktests block/021 Sep 26 15:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests block/023 at 2022-09-26 15:17:43 Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13747]: run blktests block/023 Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: disk nullb0 created Sep 26 15:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: null_blk: module loaded Sep 26 15:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests loop/001 at 2022-09-26 15:17:44 Sep 26 15:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[13900]: run blktests loop/001 Sep 26 15:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop: module loaded Sep 26 15:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: p1 p2 Sep 26 15:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests loop/003 at 2022-09-26 15:17:45 Sep 26 15:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14054]: run blktests loop/003 Sep 26 15:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests loop/005 at 2022-09-26 15:17:46 Sep 26 15:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14190]: run blktests loop/005 Sep 26 15:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2048 Sep 26 15:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2 Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/004 at 2022-09-26 15:17:46 Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14334]: run blktests nvme/004 Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. Sep 26 15:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 15:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/006 at 2022-09-26 15:17:49 Sep 26 15:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14520]: run blktests nvme/006 Sep 26 15:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/007 at 2022-09-26 15:17:50 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14678]: run blktests nvme/007 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/008 at 2022-09-26 15:17:50 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[14835]: run blktests nvme/008 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. Sep 26 15:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/009 at 2022-09-26 15:17:53 Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[15015]: run blktests nvme/009 Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. 
Sep 26 15:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/010 at 2022-09-26 15:17:55 Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[15228]: run blktests nvme/010 Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. Sep 26 15:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/011 at 2022-09-26 15:18:07 Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[15484]: run blktests nvme/011 Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. Sep 26 15:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 26 15:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:27:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Sep 26 15:27:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:27:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 15:27:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Sep 26 15:27:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting dnf-makecache.service - dnf makecache... Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com dnf[16298]: Metadata cache refreshed recently. Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: dnf-makecache.service: Deactivated successfully. Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Finished dnf-makecache.service - dnf makecache. Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:33:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dnf-makecache comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:33:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 15:33:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block nvme0n1: no available path - failing I/O Sep 26 15:33:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: block nvme0n1: no available path - failing I/O Sep 26 15:33:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Buffer I/O error on dev nvme0n1, logical block 8, async page read Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=98 op=LOAD Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=99 op=LOAD Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=100 op=LOAD Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 15:33:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/012 at 2022-09-26 15:33:53 Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[16495]: run blktests nvme/012 Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: loop0: detected capacity change from 0 to 2097152 Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. 
Sep 26 15:33:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 15:34:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 26 15:34:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 15:34:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:34:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 15:34:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: Removing ctrl: NQN "blktests-subsystem-1" Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=101 op=LOAD Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=102 op=LOAD Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=103 op=LOAD Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 16:00:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: run blktests nvme/013 at 2022-09-26 16:00:10 Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com root[17457]: run blktests nvme/013 Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: adding nsid 1 to subsystem blktests-subsystem-1 Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:37363836-3137-4d32-3232-323130315253. Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: creating 32 I/O queues. Sep 26 16:00:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nvme nvme0: new ctrl: "blktests-subsystem-1" Sep 26 16:00:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ISOFS: unsupported/invalid hardware sector size 4096 Sep 26 16:00:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 26 16:00:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 26 16:00:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 16:00:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 16:00:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=104 op=LOAD Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=105 op=LOAD Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=106 op=LOAD Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 26 16:16:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 26 16:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown: Storage - blktests - blk hit test timeout, aborting it... Sep 26 16:16:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown[18059]: List of m Tasks: Start Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysrq: Show Memory Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Mem-Info: Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: active_anon:146 inactive_anon:22289 isolated_anon:0 active_file:143497 inactive_file:49582 isolated_file:0 unevictable:768 dirty:249 writeback:0 slab_reclaimable:19630 slab_unreclaimable:34148 mapped:12125 shmem:1715 pagetables:598 bounce:0 kernel_misc_reclaimable:0 free:3708284 free_pcp:14349 free_cma:0 Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 active_anon:256kB inactive_anon:66044kB active_file:472408kB inactive_file:139940kB unevictable:3072kB isolated(anon):0kB isolated(file):0kB mapped:36344kB dirty:1012kB writeback:0kB shmem:3640kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:3912kB pagetables:1488kB all_unreclaimable? no Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 1 active_anon:328kB inactive_anon:23112kB active_file:101580kB inactive_file:58388kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:12156kB dirty:84kB writeback:0kB shmem:3220kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB kernel_stack:3032kB pagetables:904kB all_unreclaimable? 
no Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 DMA free:13312kB boost:0kB min:84kB low:104kB high:124kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15968kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: lowmem_reserve[]: 0 2936 7920 7920 7920 Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 DMA32 free:3025296kB boost:0kB min:16600kB low:20748kB high:24896kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:3094192kB managed:3028656kB mlocked:0kB bounce:0kB free_pcp:1276kB local_pcp:1276kB free_cma:0kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: lowmem_reserve[]: 0 0 4984 4984 4984 Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 Normal free:4014492kB boost:0kB min:28172kB low:35212kB high:42252kB reserved_highatomic:0KB active_anon:256kB inactive_anon:66044kB active_file:472408kB inactive_file:139940kB unevictable:3072kB writepending:1012kB present:5242880kB managed:5103812kB mlocked:0kB bounce:0kB free_pcp:19568kB local_pcp:1176kB free_cma:0kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: lowmem_reserve[]: 0 0 0 0 0 Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 1 Normal free:7780036kB boost:0kB min:45248kB low:56560kB high:67872kB reserved_highatomic:0KB active_anon:328kB inactive_anon:23112kB active_file:101580kB inactive_file:58388kB unevictable:0kB writepending:84kB present:8388604kB managed:8205000kB mlocked:0kB bounce:0kB free_pcp:36552kB local_pcp:0kB free_cma:0kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: lowmem_reserve[]: 0 0 0 0 0 Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 2*2048kB (UM) 2*4096kB (M) = 13312kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 DMA32: 2*4kB (UM) 1*8kB (M) 2*16kB (M) 3*32kB (M) 0*64kB 2*128kB (M) 2*256kB (UM) 3*512kB (UM) 2*1024kB (UM) 1*2048kB (M) 737*4096kB (M) = 3025296kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 Normal: 105*4kB (UME) 239*8kB (UME) 284*16kB (UME) 260*32kB (UME) 135*64kB (UE) 179*128kB (UME) 95*256kB (UME) 54*512kB (UME) 34*1024kB (UM) 33*2048kB (M) 931*4096kB (M) = 4014492kB Sep 26 16:16:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 1 Normal: 253*4kB (UME) 91*8kB (UM) 238*16kB (UM) 205*32kB (UME) 102*64kB (UME) 108*128kB (UME) 65*256kB (UME) 45*512kB (UME) 27*1024kB (UME) 22*2048kB (M) 1864*4096kB (M) = 7779788kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 194794 
total pagecache pages Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 0 pages in swap cache Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Free swap = 8388604kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Total swap = 8388604kB Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 4185411 pages RAM Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 0 pages HighMem/MovableOnly Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 97204 pages reserved Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 0 pages cma reserved Sep 26 16:16:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 0 pages hwpoisoned Sep 26 16:16:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com unknown[18071]: List of m Tasks: Stop Sep 26 16:16:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com logger[18073]: List of t Tasks: Start Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysrq: Show State Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd state:S stack: 0 pid: 1 ppid: 0 flags:0x00000002 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_write_lock_irq+0x19/0x40 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff3f1b2f0d6 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffdcbddabc0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000056 RCX: 00007ff3f1b2f0d6 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000056 RSI: 000056241aeb9bb0 RDI: 0000000000000004 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000212 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000293 R12: ffffffffffffffff Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000035 R14: aaaaaaaaaaaaaaab R15: 000056241ad1cba0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kthreadd state:S stack: 0 pid: 2 ppid: 0 flags:0x00004000 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kernel_thread+0x4f/0x60 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthreadd+0x2bc/0x2d0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_is_per_cpu+0x30/0x30 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_gp state:I stack: 0 pid: 3 ppid: 2 flags:0x00004000 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_par_gp state:I stack: 0 pid: 4 ppid: 2 flags:0x00004000 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:slub_flushwq state:I stack: 0 pid: 5 ppid: 2 flags:0x00004000 Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:netns state:I stack: 0 pid: 6 ppid: 2 flags:0x00004000 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/0:0H state:I stack: 0 pid: 8 ppid: 2 flags:0x00004000 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/0:1H state:I stack: 0 pid: 10 ppid: 2 flags:0x00004000 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_timeout_work+0xd9/0x110 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u128:0 state:I stack: 0 pid: 11 ppid: 2 flags:0x00004000 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_unbound) Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:mm_percpu_wq state:I stack: 0 pid: 12 ppid: 2 flags:0x00004000 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u128:1 state:I stack: 0 pid: 13 ppid: 2 flags:0x00004000 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (netns) Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_tasks_kthre state:I stack: 0 pid: 14 ppid: 2 flags:0x00004000 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? finish_task_switch.isra.0+0x9b/0x300 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_one_gp+0x15e/0x370 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_kthread+0x2e/0x40 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_tasks_rude_ state:I stack: 0 pid: 15 ppid: 2 flags:0x00004000 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? finish_task_switch.isra.0+0x9b/0x300 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_one_gp+0x15e/0x370 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_kthread+0x2e/0x40 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_tasks_trace state:I stack: 0 pid: 16 ppid: 2 flags:0x00004000 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? finish_task_switch.isra.0+0x9b/0x300 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_one_gp+0x15e/0x370 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_tasks_postscan+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_tasks_kthread+0x2e/0x40 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/0 state:S stack: 0 pid: 17 ppid: 2 flags:0x00004000 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rcu_preempt state:I stack: 0 pid: 18 ppid: 2 flags:0x00004000 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x13e/0x12a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_gp_fqs_loop+0x124/0x4a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rcu_gp_cleanup+0x450/0x450 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_gp_kthread+0xd6/0x190 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/0 state:S stack: 0 pid: 19 ppid: 2 flags:0x00004000 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/0 state:S stack: 0 pid: 21 ppid: 2 flags:0x00004000 Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/1 state:S stack: 0 pid: 22 ppid: 2 flags:0x00004000 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/1 state:S stack: 0 pid: 23 ppid: 2 flags:0x00004000 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/1 state:S stack: 0 pid: 24 ppid: 2 flags:0x00004000 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/1:0H state:I stack: 0 pid: 26 ppid: 2 flags:0x00004000 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_timeout_work+0xd9/0x110 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/2 state:S stack: 0 pid: 27 ppid: 2 flags:0x00004000 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
try_to_wake_up+0x83/0x570 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/2 state:S stack: 0 pid: 28 ppid: 2 flags:0x00004000 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/2 state:S stack: 0 pid: 29 ppid: 2 flags:0x00004000 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/2:0H state:I stack: 0 pid: 31 ppid: 2 flags:0x00004000 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/3 state:S stack: 0 pid: 32 ppid: 2 flags:0x00004000 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/3 state:S stack: 0 pid: 33 ppid: 2 flags:0x00004000 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/3 state:S stack: 0 pid: 34 ppid: 2 flags:0x00004000 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/3:0H state:I stack: 0 pid: 36 ppid: 2 flags:0x00004000 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/4 state:S stack: 0 pid: 37 ppid: 2 flags:0x00004000 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/4 state:S stack: 0 pid: 38 ppid: 2 flags:0x00004000 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/4 state:S stack: 0 pid: 39 ppid: 2 flags:0x00004000 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/4:0H state:I stack: 0 pid: 41 ppid: 2 flags:0x00004000 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/5 state:S stack: 0 pid: 42 ppid: 2 flags:0x00004000 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/5 state:S stack: 0 pid: 43 ppid: 2 flags:0x00004000 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/5 state:S stack: 0 pid: 44 ppid: 2 flags:0x00004000 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/5:0H state:I stack: 0 pid: 46 ppid: 2 flags:0x00004000 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/6 state:S stack: 0 pid: 47 ppid: 2 flags:0x00004000 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/6 state:S stack: 0 pid: 48 ppid: 2 flags:0x00004000 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/6 state:S stack: 0 pid: 49 ppid: 2 flags:0x00004000 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/6:0H state:I stack: 0 pid: 51 ppid: 2 flags:0x00004000 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/7 state:S stack: 0 pid: 52 ppid: 2 flags:0x00004000 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/7 state:S stack: 0 pid: 53 ppid: 2 flags:0x00004000 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/7 state:S stack: 0 pid: 54 ppid: 2 flags:0x00004000 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/7:0H state:I stack: 0 pid: 56 ppid: 2 flags:0x00004000 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/8 state:S stack: 0 pid: 57 ppid: 2 flags:0x00004000 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/8 state:S stack: 0 pid: 58 ppid: 2 flags:0x00004000 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/8 state:S stack: 0 pid: 59 ppid: 2 flags:0x00004000 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/8:0H state:I stack: 0 pid: 61 ppid: 2 flags:0x00004000 Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/9 state:S stack: 0 pid: 64 ppid: 2 flags:0x00004000 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/9 state:S stack: 0 pid: 65 ppid: 2 flags:0x00004000 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/9 state:S stack: 0 pid: 66 ppid: 2 flags:0x00004000 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/9:0H state:I stack: 0 pid: 68 ppid: 2 flags:0x00004000 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/10 state:S stack: 0 pid: 69 ppid: 2 flags:0x00004000 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/10 state:R running task stack: 0 pid: 70 ppid: 2 flags:0x00004000 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/10 state:R running task stack: 0 pid: 71 ppid: 2 flags:0x00004000 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/10:0H state:I stack: 0 pid: 73 ppid: 2 flags:0x00004000 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/11 state:S stack: 0 pid: 74 ppid: 2 flags:0x00004000 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/11 state:S stack: 0 pid: 75 ppid: 2 flags:0x00004000 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/11 state:S stack: 0 pid: 76 ppid: 2 flags:0x00004000 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/11:0H state:I stack: 0 pid: 78 ppid: 2 flags:0x00004000 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/12 state:S stack: 0 pid: 79 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/12 state:S stack: 0 pid: 80 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/12 state:S stack: 0 pid: 81 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/12:0H state:I stack: 0 pid: 83 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/13 state:S stack: 0 pid: 84 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/13 state:S stack: 0 pid: 85 ppid: 2 flags:0x00004000 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/13 state:S stack: 0 pid: 86 ppid: 2 flags:0x00004000 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/13:0H state:I stack: 0 pid: 88 ppid: 2 flags:0x00004000 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/14 state:S stack: 0 pid: 89 ppid: 2 flags:0x00004000 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/14 state:S stack: 0 pid: 90 ppid: 2 flags:0x00004000 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/14 state:S stack: 0 pid: 91 ppid: 2 flags:0x00004000 Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/14:0H state:I stack: 0 pid: 93 ppid: 2 flags:0x00004000 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/15 state:S stack: 0 pid: 94 ppid: 2 flags:0x00004000 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/15 state:S stack: 0 pid: 95 ppid: 2 flags:0x00004000 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/15 state:S stack: 0 pid: 96 ppid: 2 flags:0x00004000 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/15:0H state:I stack: 0 pid: 98 ppid: 2 flags:0x00004000 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/16 state:S stack: 0 pid: 99 ppid: 2 flags:0x00004000 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/16 state:S stack: 0 pid: 100 ppid: 2 flags:0x00004000 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/16 state:S stack: 0 pid: 101 ppid: 2 flags:0x00004000 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/16:0H state:I stack: 0 pid: 103 ppid: 2 flags:0x00004000 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/17 state:S stack: 0 pid: 104 ppid: 2 flags:0x00004000 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/17 state:S stack: 0 pid: 105 ppid: 2 flags:0x00004000 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/17 state:S stack: 0 pid: 106 ppid: 2 flags:0x00004000 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/17:0H state:I stack: 0 pid: 108 ppid: 2 flags:0x00004000 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness.cold+0x1a/0x1f Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/18 state:S stack: 0 pid: 109 ppid: 2 flags:0x00004000 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/18 state:S stack: 0 pid: 110 ppid: 2 flags:0x00004000 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/18 state:S stack: 0 pid: 111 ppid: 2 flags:0x00004000 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/18:0H state:I stack: 0 pid: 113 ppid: 2 flags:0x00004000 Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/19 state:S stack: 0 pid: 114 ppid: 2 flags:0x00004000 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/19 state:S stack: 0 pid: 115 ppid: 2 flags:0x00004000 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/19 state:S stack: 0 pid: 116 ppid: 2 flags:0x00004000 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/19:0H state:I stack: 0 pid: 118 ppid: 2 flags:0x00004000 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/20 state:S stack: 0 pid: 119 ppid: 2 flags:0x00004000 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/20 state:S stack: 0 pid: 120 ppid: 2 flags:0x00004000 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/20 state:S stack: 0 pid: 121 ppid: 2 flags:0x00004000 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/20:0H state:I stack: 0 pid: 123 ppid: 2 flags:0x00004000 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/21 state:S stack: 0 pid: 124 ppid: 2 flags:0x00004000 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/21 state:S stack: 0 pid: 125 ppid: 2 flags:0x00004000 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/21 state:S stack: 0 pid: 126 ppid: 2 flags:0x00004000 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/21:0H state:I stack: 0 pid: 128 ppid: 2 flags:0x00004000 Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/22 state:S stack: 0 pid: 129 ppid: 2 flags:0x00004000 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/22 state:S stack: 0 pid: 130 ppid: 2 flags:0x00004000 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/22 state:S stack: 0 pid: 131 ppid: 2 flags:0x00004000 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/22:0H state:I stack: 0 pid: 133 ppid: 2 flags:0x00004000 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/23 state:S stack: 0 pid: 134 ppid: 2 flags:0x00004000 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/23 state:S stack: 0 pid: 135 ppid: 2 flags:0x00004000 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/23 state:S stack: 0 pid: 136 ppid: 2 flags:0x00004000 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/23:0H state:I stack: 0 pid: 138 ppid: 2 flags:0x00004000 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/24 state:S stack: 0 pid: 139 ppid: 2 flags:0x00004000 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/24 state:S stack: 0 pid: 140 ppid: 2 flags:0x00004000 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/24 state:S stack: 0 pid: 141 ppid: 2 flags:0x00004000 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/24:0H state:I stack: 0 pid: 143 ppid: 2 flags:0x00004000 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/25 state:S stack: 0 pid: 144 ppid: 2 flags:0x00004000 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/25 state:S stack: 0 pid: 145 ppid: 2 flags:0x00004000 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/25 state:S stack: 0 pid: 146 ppid: 2 flags:0x00004000 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/25:0H state:I stack: 0 pid: 148 ppid: 2 flags:0x00004000 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/26 state:S stack: 0 pid: 149 ppid: 2 flags:0x00004000 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/26 state:S stack: 0 pid: 150 ppid: 2 flags:0x00004000 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/26 state:S stack: 0 pid: 151 ppid: 2 flags:0x00004000 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/26:0H state:I stack: 0 pid: 153 ppid: 2 flags:0x00004000 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/27 state:S stack: 0 pid: 154 ppid: 2 flags:0x00004000 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/27 state:S stack: 0 pid: 155 ppid: 2 flags:0x00004000 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/27 state:S stack: 0 pid: 156 ppid: 2 flags:0x00004000 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/27:0H state:I stack: 0 pid: 158 ppid: 2 flags:0x00004000 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/28 state:S stack: 0 pid: 159 ppid: 2 flags:0x00004000 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/28 state:S stack: 0 pid: 160 ppid: 2 flags:0x00004000 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/28 state:S stack: 0 pid: 161 ppid: 2 flags:0x00004000 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/28:0H state:I stack: 0 pid: 163 ppid: 2 flags:0x00004000 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/29 state:S stack: 0 pid: 164 ppid: 2 flags:0x00004000 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/29 state:S stack: 0 pid: 165 ppid: 2 flags:0x00004000 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/29 state:S stack: 0 pid: 166 ppid: 2 flags:0x00004000 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/29:0H state:I stack: 0 pid: 168 ppid: 2 flags:0x00004000 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/30 state:S stack: 0 pid: 169 ppid: 2 flags:0x00004000 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/30 state:S stack: 0 pid: 170 ppid: 2 flags:0x00004000 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/30 state:S stack: 0 pid: 171 ppid: 2 flags:0x00004000 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/30:0H state:I stack: 0 pid: 173 ppid: 2 flags:0x00004000 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cpuhp/31 state:S stack: 0 pid: 174 ppid: 2 flags:0x00004000 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:migration/31 state:S stack: 0 pid: 175 ppid: 2 flags:0x00004000 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Stopper: 0x0 <- 0x0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? complete+0x18/0x80 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksoftirqd/31 state:S stack: 0 pid: 176 ppid: 2 flags:0x00004000 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_softirq+0x1cc/0x310 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
sort_range+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: smpboot_thread_fn+0xc4/0x220 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/31:0H state:I stack: 0 pid: 178 ppid: 2 flags:0x00004000 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kdevtmpfs state:S stack: 0 pid: 179 ppid: 2 flags:0x00004000 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: devtmpfs_work_loop+0x266/0x270 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dmar_validate_one_drhd+0xa5/0xa5 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: devtmpfsd+0x22/0x2b Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:inet_frag_wq state:I stack: 0 pid: 180 ppid: 2 flags:0x00004000 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kauditd state:S stack: 0 pid: 181 ppid: 2 flags:0x00004000 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kauditd_thread+0x1c0/0x2b0 Sep 26 16:17:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? auditd_reset+0x90/0x90 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:oom_reaper state:S stack: 0 pid: 186 ppid: 2 flags:0x00004000 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: oom_reaper+0x23a/0x390 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __oom_reap_task_mm+0x140/0x140 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:writeback state:I stack: 0 pid: 187 ppid: 2 flags:0x00004000 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kcompactd0 state:S stack: 0 pid: 188 ppid: 2 flags:0x00004000 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kcompactd+0x364/0x3f0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kcompactd_do_work+0x280/0x280 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kcompactd1 state:S stack: 0 pid: 189 ppid: 2 flags:0x00004000 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kcompactd+0x364/0x3f0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kcompactd_do_work+0x280/0x280 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ksmd state:S stack: 0 pid: 190 ppid: 2 flags:0x00004000 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_merge_with_ksm_page+0xc0/0xc0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_merge_with_ksm_page+0xc0/0xc0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ksm_scan_thread+0xaa6/0x23c0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_merge_with_ksm_page+0xc0/0xc0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:khugepaged state:S stack: 0 pid: 191 ppid: 2 flags:0x00004000 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: khugepaged+0x490/0x1710 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __switch_to+0x77/0x420 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? collapse_pte_mapped_thp+0x420/0x420 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? collapse_pte_mapped_thp+0x420/0x420 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:cryptd state:I stack: 0 pid: 192 ppid: 2 flags:0x00004000 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
set_next_entity+0xda/0x150 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kintegrityd state:I stack: 0 pid: 193 ppid: 2 flags:0x00004000 Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kblockd state:I stack: 0 pid: 194 ppid: 2 flags:0x00004000 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:blkcg_punt_bio state:I stack: 0 pid: 195 ppid: 2 flags:0x00004000 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/8:1 state:I stack: 0 pid: 199 ppid: 2 flags:0x00004000 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-buf/dm-0) Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/26:1 state:I stack: 0 pid: 218 ppid: 2 flags:0x00004000 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/30:1 state:I stack: 0 pid: 222 ppid: 2 flags:0x00004000 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-reclaim/dm-0) Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:tpm_dev_wq state:I stack: 0 pid: 226 ppid: 2 flags:0x00004000 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ata_sff state:I stack: 0 pid: 227 ppid: 2 flags:0x00004000 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:md state:I stack: 0 pid: 228 ppid: 2 flags:0x00004000 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:edac-poller state:I stack: 0 pid: 229 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:watchdogd state:S stack: 0 pid: 230 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_stop+0x170/0x170 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_stop+0x170/0x170 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread_worker_fn+0x126/0x250 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/2:1H state:I stack: 0 pid: 231 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kswapd0 state:S stack: 0 pid: 232 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kswapd+0x358/0x390 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? balance_pgdat+0x720/0x720 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kswapd1 state:S stack: 0 pid: 233 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kswapd+0x358/0x390 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? balance_pgdat+0x720/0x720 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kthrotld state:I stack: 0 pid: 240 ppid: 2 flags:0x00004000 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:17:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:acpi_thermal_pm state:I stack: 0 pid: 249 ppid: 2 flags:0x00004000 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xenbus_probe state:S stack: 0 pid: 251 ppid: 2 flags:0x00004000 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xenbus_probe+0x80/0x80 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xenbus_probe+0x80/0x80 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: xenbus_probe_thread+0x50/0xa0 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_0 state:S stack: 0 pid: 252 ppid: 2 flags:0x00004000 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_0 state:I stack: 0 pid: 253 ppid: 2 flags:0x00004000 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_1 state:S stack: 0 pid: 254 ppid: 2 flags:0x00004000 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_1 state:I stack: 0 pid: 255 ppid: 2 flags:0x00004000 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_2 state:S stack: 0 pid: 256 ppid: 2 flags:0x00004000 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_2 state:I stack: 0 pid: 257 ppid: 2 flags:0x00004000 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_3 state:S stack: 0 pid: 258 ppid: 2 flags:0x00004000 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_3 state:I stack: 0 pid: 259 ppid: 2 flags:0x00004000 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_4 state:S stack: 0 pid: 260 ppid: 2 flags:0x00004000 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xa_find_after+0xbb/0x100 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_4 state:I stack: 0 pid: 261 ppid: 2 flags:0x00004000 Sep 26 16:17:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
set_next_entity+0xda/0x150 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_eh_5 state:S stack: 0 pid: 262 ppid: 2 flags:0x00004000 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: scsi_error_handler+0x1b6/0x490 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scsi_eh_get_sense+0x230/0x230 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:scsi_tmf_5 state:I stack: 0 pid: 263 ppid: 2 flags:0x00004000 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/3:1H state:I stack: 0 pid: 268 ppid: 2 flags:0x00004000 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/22:1H state:I stack: 0 pid: 269 ppid: 2 flags:0x00004000 Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/4:1H state:I stack: 0 pid: 270 ppid: 2 flags:0x00004000 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/6:1H state:I stack: 0 pid: 272 ppid: 2 flags:0x00004000 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/7:1H state:I stack: 0 pid: 274 ppid: 2 flags:0x00004000 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/16:1H state:I stack: 0 pid: 275 ppid: 2 flags:0x00004000 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:dm_bufio_cache state:I stack: 0 pid: 276 ppid: 2 flags:0x00004000 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:mld state:I stack: 0 pid: 279 ppid: 2 flags:0x00004000 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/11:1H state:I stack: 0 pid: 280 ppid: 2 flags:0x00004000 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-log/dm-0) Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ipv6_addrconf state:I stack: 0 pid: 281 ppid: 2 flags:0x00004000 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kstrp state:I stack: 0 pid: 286 ppid: 2 flags:0x00004000 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:zswap-shrink state:I stack: 0 pid: 300 ppid: 2 flags:0x00004000 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u131:0 state:I stack: 0 pid: 301 ppid: 2 flags:0x00004000 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u132:0 state:I stack: 0 pid: 302 ppid: 2 flags:0x00004000 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u133:0 state:I stack: 0 pid: 303 ppid: 2 flags:0x00004000 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/17:1H state:I stack: 0 pid: 413 ppid: 2 flags:0x00004000 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
blake2s_update+0x48/0xc0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/28:1H state:I stack: 0 pid: 451 ppid: 2 flags:0x00004000 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/29:1H state:I stack: 0 pid: 453 ppid: 2 flags:0x00004000 Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xfe/0x130 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/30:1H state:I stack: 0 pid: 454 ppid: 2 flags:0x00004000 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/11:4 state:I stack: 0 pid: 464 ppid: 2 flags:0x00004000 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/12:1H state:I stack: 0 pid: 465 ppid: 2 flags:0x00004000 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-log/dm-0) Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/1:1H state:I stack: 0 pid: 469 ppid: 2 flags:0x00004000 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/24:1H state:I stack: 0 pid: 560 ppid: 2 flags:0x00004000 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xfe/0x130 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/26:1H state:I stack: 0 pid: 561 ppid: 2 flags:0x00004000 Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/23:1H state:I stack: 0 pid: 568 ppid: 2 flags:0x00004000 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/31:1H state:I stack: 0 pid: 574 ppid: 2 flags:0x00004000 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xfe/0x130 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/21:1H state:I stack: 0 pid: 575 ppid: 2 flags:0x00004000 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_timeout_work+0xd9/0x110 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/20:1H state:I stack: 0 pid: 576 ppid: 2 flags:0x00004000 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/5:1H state:I stack: 0 pid: 577 ppid: 2 flags:0x00004000 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/18:1H state:I stack: 0 pid: 578 ppid: 2 flags:0x00004000 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:17:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/19:1H state:I stack: 0 pid: 583 ppid: 2 flags:0x00004000 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_highpri) Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blake2s_update+0x48/0xc0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mix_interrupt_randomness+0xa6/0xe0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/25:1H state:I stack: 0 pid: 592 ppid: 2 flags:0x00004000 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xfe/0x130 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/29:2 state:I stack: 0 pid: 593 ppid: 2 flags:0x00004000 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/8:1H state:I stack: 0 pid: 602 ppid: 2 flags:0x00004000 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/9:1H state:I stack: 0 pid: 629 ppid: 2 flags:0x00004000 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_timeout_work+0xd9/0x110 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/10:1H state:I stack: 0 pid: 634 ppid: 2 flags:0x00004000 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/13:1H state:I stack: 0 pid: 635 ppid: 2 flags:0x00004000 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/14:1H state:I stack: 0 pid: 637 ppid: 2 flags:0x00004000 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-log/dm-0) Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kdmflush/253:0 state:I stack: 0 pid: 745 ppid: 2 flags:0x00004000 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfsalloc state:I stack: 0 pid: 762 ppid: 2 flags:0x00004000 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs_mru_cache state:I stack: 0 pid: 763 ppid: 2 flags:0x00004000 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-buf/dm-0 state:I stack: 0 pid: 764 ppid: 2 flags:0x00004000 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-conv/dm-0 state:I stack: 0 pid: 765 ppid: 2 flags:0x00004000 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-reclaim/dm- state:I stack: 0 pid: 766 ppid: 2 flags:0x00004000 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
set_next_entity+0xda/0x150 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-blockgc/dm- state:I stack: 0 pid: 767 ppid: 2 flags:0x00004000 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-inodegc/dm- state:I stack: 0 pid: 768 ppid: 2 flags:0x00004000 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-log/dm-0 state:I stack: 0 pid: 769 ppid: 2 flags:0x00004000 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-cil/dm-0 state:I stack: 0 pid: 770 ppid: 2 flags:0x00004000 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfsaild/dm-0 state:S stack: 0 pid: 771 ppid: 2 flags:0x00004000 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__mod_timer+0x256/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: xfsaild+0xd6/0x8d0 [xfs] Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_trans_ail_cursor_first+0x80/0x80 [xfs] Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/15:1H state:I stack: 0 pid: 823 ppid: 2 flags:0x00004000 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/31:2 state:I stack: 0 pid: 859 ppid: 2 flags:0x00004000 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfs-sync/sda2) Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-journal state:R stack: 0 pid: 860 ppid: 1 flags:0x00000002 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock+0x13/0x40 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? raw_spin_rq_lock_nested+0x1e/0x70 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __update_idle_core+0x20/0xc0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x322/0x12a0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule+0x5d/0xe0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule+0x5d/0xe0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_epoll_wait+0x633/0x750 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode_prepare+0x18e/0x1c0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode_prepare+0x18e/0x1c0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-udevd state:S stack: 0 pid: 878 ppid: 1 flags:0x00000002 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
seq_printf+0x6e/0x90 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kmem_cache_free+0x32e/0x4c0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? audit_reset_context+0x270/0x320 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __seccomp_filter+0x30d/0x4c0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x124/0x1a0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f01b3d2f097 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffd1a0e8e78 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000000001f0 RCX: 00007f01b3d2f097 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000000001f0 RSI: 00005637aaf4dea0 RDI: 0000000000000009 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000000000005a R08: 00000000000001f0 R09: 0000000000000000 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000009 R14: aaaaaaaaaaaaaaab R15: 00005637aae4ac70 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/27:1H state:I stack: 0 pid: 891 ppid: 2 flags:0x00004000 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (kblockd) Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __blk_mq_sched_dispatch_requests+0xdb/0x130 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? blk_mq_sched_dispatch_requests+0x30/0x60 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/19:2 state:I stack: 0 pid: 972 ppid: 2 flags:0x00004000 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:ipmi-msghandler state:I stack: 0 pid: 976 ppid: 2 flags:0x00004000 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kipmi0 state:S stack: 0 pid: 984 ppid: 2 flags:0x00004000 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
flush_messages+0x40/0x40 [ipmi_si] Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? del_timer+0x44/0x60 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? flush_messages+0x40/0x40 [ipmi_si] Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ipmi_thread+0x18e/0x190 [ipmi_si] Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:i40e state:I stack: 0 pid: 985 ppid: 2 flags:0x00004000 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-buf/sda2 state:I stack: 0 pid: 988 ppid: 2 flags:0x00004000 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-conv/sda2 state:I stack: 0 pid: 989 ppid: 2 flags:0x00004000 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-reclaim/sda state:I stack: 0 pid: 990 ppid: 2 flags:0x00004000 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-blockgc/sda state:I stack: 0 pid: 991 ppid: 2 flags:0x00004000 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
process_one_work+0x380/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-inodegc/sda state:I stack: 0 pid: 992 ppid: 2 flags:0x00004000 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-log/sda2 state:I stack: 0 pid: 993 ppid: 2 flags:0x00004000 Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfs-cil/sda2 state:I stack: 0 pid: 994 ppid: 2 flags:0x00004000 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
set_next_entity+0xda/0x150 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xfsaild/sda2 state:S stack: 0 pid: 995 ppid: 2 flags:0x00004000 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? newidle_balance+0x2c7/0x3f0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: xfsaild+0x434/0x8d0 [xfs] Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x32a/0x12a0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_cpus_allowed_ptr_locked+0xea/0x1b0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_trans_ail_cursor_first+0x80/0x80 [xfs] Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:auditd state:S stack: 0 pid: 1015 ppid: 1 flags:0x00000002 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__hrtimer_init+0xe0/0xe0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_select+0x632/0x7e0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __filemap_get_folio+0x271/0x440 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x47/0xa0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? filemap_dirty_folio+0x67/0x70 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __skb_try_recv_datagram+0xa7/0x170 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __wake_up_common_lock+0x77/0x90 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: core_sys_select+0x18d/0x370 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __seccomp_filter+0x30d/0x4c0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? recalibrate_cpu_khz+0x10/0x10 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ktime_get_ts64+0x4c/0xf0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? put_timespec64+0x2c/0x40 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? poll_select_finish+0x1a2/0x210 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_from_user+0x43/0x60 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_pselect.constprop.0+0xb1/0x140 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_pselect6+0x65/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode_prepare+0x18e/0x1c0 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f4fa4c7da3c Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007fffb580c2f0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e Sep 26 16:18:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000040 RCX: 00007f4fa4c7da3c Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000559b4faaa7c0 RSI: 0000559b4faaa780 RDI: 0000000000000040 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007fffb580c380 R08: 00007fffb580c300 R09: 0000000000000000 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000559b4faaa780 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00007fffb580c300 R14: 0000000000000000 R15: 0000559b4faaa7c0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:auditd state:S stack: 0 pid: 1016 ppid: 1 flags:0x00000002 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? finish_task_switch.isra.0+0x9b/0x300 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __switch_to+0x2ef/0x420 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? remove_wait_queue+0x20/0x60 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xlog_wait_on_iclog+0x14f/0x160 [xfs] Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f4fa4c03f26 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f4fa45febf0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4fa4c03f26 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000559b4f9ea10c Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000559b4f9ea120 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000001 R15: 0000559b4f9ea10c Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-oomd state:S stack: 0 pid: 1023 ppid: 1 flags:0x00000002 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_write_lock_irq+0x19/0x40 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_done_scan+0xc8/0x110 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
syscall_trace_enter.constprop.0+0x124/0x1a0 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f5f4ef2f097 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffd3e04fd68 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000018 RCX: 00007f5f4ef2f097 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000018 RSI: 000055ddc14eaa40 RDI: 0000000000000005 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00000000000000a0 R08: 0000000000000018 R09: 0000000000000000 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000010 R14: aaaaaaaaaaaaaaab R15: 000055ddc14e6080 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:rpciod state:I stack: 0 pid: 1026 ppid: 2 flags:0x00004000 Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:xprtiod state:I stack: 0 pid: 1027 ppid: 2 flags:0x00004000 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-resolve state:S stack: 0 pid: 1035 ppid: 1 flags:0x00000002 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm+0x7a/0x170 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? scm_recv.constprop.0+0x122/0x150 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_obj_stock+0x105/0x1b0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_stock+0x31/0x50 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __seccomp_filter+0x30d/0x4c0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x124/0x1a0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode_prepare+0x18e/0x1c0 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__irq_exit_rcu+0x3d/0x140 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f65db72f097 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe3c4e0538 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000000000000002c RCX: 00007f65db72f097 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 000000000000002c RSI: 0000555a4b916b60 RDI: 0000000000000004 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00000000000000dc R08: 000000000000002c R09: 0000000000000000 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000016 R14: aaaaaaaaaaaaaaab R15: 0000555a4b8db910 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-userdbd state:S stack: 0 pid: 1038 ppid: 1 flags:0x00000002 Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_lruvec_page_state+0xa6/0x1a0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_write_lock_irq+0x19/0x40 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_done_scan+0xc8/0x110 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x124/0x1a0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f7f4af2f097 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffd192c3ca8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f7f4af2f097 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000008 RSI: 0000559dbb953710 RDI: 0000000000000005 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000028 R08: 0000000000000008 R09: 0000559dbb9523c8 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000004 R14: aaaaaaaaaaaaaaab R15: 0000559dbb951c10 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:NetworkManager state:S stack: 0 pid: 1057 ppid: 1 flags:0x00000002 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_sys_poll+0x357/0x5d0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ip6_datagram_recv_common_ctl+0x9b/0xd0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x82/0x110 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_poll+0x82/0x110 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? common_interrupt+0x61/0xd0 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f91a109725f Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffc448f0bb0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000557ffb06d130 RCX: 00007f91a109725f Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 000000000018826f RSI: 0000000000000009 RDI: 0000557ffb124310 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000557ffb124310 R08: 0000000000000000 R09: 00007ffc448f0a30 Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffc44968080 R11: 0000000000000293 R12: 000000000018826f Sep 26 16:18:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 0000000000000009 R15: 00007f91a12450a9 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gmain state:S stack: 0 pid: 1058 ppid: 1 flags:0x00000002 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm_noaudit+0x79/0xe0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? selinux_inode_permission+0xeb/0x170 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
filename_lookup+0xbd/0x1a0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __check_object_size+0x1f4/0x250 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? strncpy_from_user+0x3f/0x130 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? user_path_at_empty+0x40/0x50 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? inotify_find_inode+0x24/0x80 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_inotify_add_watch+0xef/0x140 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? recalibrate_cpu_khz+0x10/0x10 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ktime_get_ts64+0x4c/0xf0 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x82/0x110 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f91a109725f Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f919fda7000 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000557ffb06d290 RCX: 00007f91a109725f Sep 26 16:18:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000f9c RSI: 0000000000000002 RDI: 0000557ffb06c440 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000557ffb06c440 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffc44968080 R11: 0000000000000293 R12: 0000000000000f9c Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 0000000000000002 R15: 00007f91a12450a9 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gdbus state:S stack: 0 pid: 1070 ppid: 1 flags:0x00000002 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? unix_stream_read_generic+0x1ea/0xa60 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__pollwait+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x90/0x640 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sched_clock_cpu+0xb/0xc0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __smp_call_single_queue+0x23/0x40 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ttwu_queue_wakelist+0xbf/0x110 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x90/0x640 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wake_up_q+0x90/0x90 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? eventfd_read+0xf3/0x2c0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wake_up_q+0x90/0x90 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? vfs_read+0x1d2/0x2a0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x29/0x110 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ksys_read+0xac/0xd0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ksys_write+0xac/0xd0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f91a109725f Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f919f5a6000 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000557ffb081c30 RCX: 00007f91a109725f Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000ffffffff RSI: 0000000000000002 RDI: 0000557ffb084bc0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000557ffb084bc0 R08: 0000000000000000 R09: 00007f919f5a5e80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffc44968080 R11: 0000000000000293 R12: 00000000ffffffff Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 0000000000000002 R15: 00007f91a12450a9 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:chronyd state:S stack: 0 pid: 1064 ppid: 1 flags:0x00000002 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_select+0x632/0x7e0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_buf_rele+0x58/0x480 [xfs] Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xlog_cil_commit+0x5dd/0xc30 [xfs] Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? free_unref_page_commit+0x6d/0x160 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? free_unref_page_list+0x1c9/0x3e0 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xas_load+0x5/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __filemap_get_folio+0x36b/0x440 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xas_load+0x5/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xas_find+0x14d/0x1d0 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? find_get_entries+0x111/0x170 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: core_sys_select+0x18d/0x370 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_delayed_work_on+0x49/0x70 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? call_rcu+0xfd/0x690 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_memcg_lruvec_state+0xc7/0x110 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_from_user+0x43/0x60 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_pselect.constprop.0+0xb1/0x140 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_pselect6+0x65/0x80 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f8ee71249a4 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe757b3ef0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000008 RCX: 00007f8ee71249a4 Sep 26 16:18:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffe757b4110 RDI: 0000000000000008 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007ffe757b3ff0 R08: 00007ffe757b3f00 R09: 0000000000000000 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffe757b4110 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00007ffe757b3f00 R14: 0000000000000009 R15: 0000000000000000 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-logind state:S stack: 0 pid: 1065 ppid: 1 flags:0x00000002 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __import_iovec+0x42/0x150 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_write_lock_irq+0x19/0x40 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_done_scan+0xc8/0x110 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x98/0x1a0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x98/0x1a0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f337972f097 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffc40a9e5e8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000000000000001c RCX: 00007f337972f097 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 000000000000001c RSI: 0000558d4f9efb60 RDI: 0000000000000004 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00000000000000a0 R08: 000000000000001c R09: c1d99bf8f4390fdf Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000010 R14: aaaaaaaaaaaaaaab R15: 0000558d4f9eefa0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:dbus-broker-lau state:S stack: 0 pid: 1067 ppid: 1 flags:0x00000002 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_write_lock_irq+0x19/0x40 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_done_scan+0xc8/0x110 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x98/0x1a0 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f829cfdf097 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe533839d8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000016 RCX: 00007f829cfdf097 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000016 RSI: 0000556cb249ff30 RDI: 0000000000000005 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000000000006e R08: 0000000000000016 R09: 0000000000000000 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 000000000000000b R14: aaaaaaaaaaaaaaab R15: 0000556cb2440380 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:dbus-broker state:S stack: 0 pid: 1069 ppid: 1067 flags:0x00000002 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? finish_task_switch.isra.0+0x9b/0x300 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __seccomp_filter+0x30d/0x4c0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_epoll_wait+0x65e/0x750 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_trace_enter.constprop.0+0x98/0x1a0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? fpregs_restore_userregs+0x12/0xe0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __irq_exit_rcu+0x3d/0x140 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fa522d32097 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffcde168f98 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000055be3addee38 RCX: 00007fa522d32097 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 000000000000000a RSI: 00007ffcde168fa0 RDI: 0000000000000005 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007ffcde169060 R08: 000055be3ae5b0f0 R09: ffffffffffffffff Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: 0000000000000000 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 000000000000000a R14: 00007ffcde168fa0 R15: 00000000ffffffff Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1078 ppid: 1 flags:0x00000002 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443eaf0d6 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffcc2ecf110 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443eaf0d6 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000040 RSI: 00005610b353f410 RDI: 0000000000000005 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 000000000000e95f R11: 0000000000000293 R12: 00007ff4437ff060 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000008 R14: 0000000000000000 R15: 00007ff4437ff060 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1080 ppid: 1 flags:0x00000002 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_stock+0x31/0x50 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_charge_memcg+0x72c/0x7b0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_node_page_state+0x72/0xc0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rseq_handle_notify_resume+0x99/0x460 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __handle_mm_fault+0xc3d/0xe40 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443e2af26 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ff4433fe7e0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443e2af26 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005610b353f158 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005610b353f108 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00005610b353f158 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1081 ppid: 1 flags:0x00000002 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_counter_uncharge+0x2f/0x70 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_memcg_lruvec_state+0x93/0x110 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_lruvec_page_state+0xa6/0x1a0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sigprocmask+0x85/0xb0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443e2af26 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ff442bfd7e0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443e2af26 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005610b353ce18 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005610b353cdc8 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00005610b353ce18 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1082 ppid: 1 flags:0x00000002 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_lruvec_page_state+0xa6/0x1a0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? folio_add_lru+0x92/0x100 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443e2af26 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ff4423fc7e0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443e2af26 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005610b353cff8 Sep 26 16:18:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005610b353cfa8 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00005610b353cff8 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1083 ppid: 1 flags:0x00000002 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_page_from_freelist+0x81f/0x16a0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_charge_memcg+0x401/0x7b0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_node_page_state+0x72/0xc0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? handle_mm_fault+0xae/0x290 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443e2af26 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ff441bfb7e0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443e2af26 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005610b353d088 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005610b353d038 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00005610b353d088 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gssproxy state:S stack: 0 pid: 1084 ppid: 1 flags:0x00000002 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? plist_add+0xba/0xf0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_memcg_lruvec_state+0x93/0x110 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff443e2af26 Sep 26 16:18:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ff4413fa7e0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff443e2af26 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005610b35436e8 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00005610b3543698 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 00005610b35436e8 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:sshd state:S stack: 0 pid: 1079 ppid: 1 flags:0x00000002 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? tcp_poll+0x29/0x370 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sock_poll+0x51/0xf0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? handle_mm_fault+0xae/0x290 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_charge_memcg+0x401/0x7b0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x90/0x640 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_memcg_lruvec_state+0x93/0x110 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? current_time+0x1b/0xe0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm+0x7a/0x170 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __refill_stock+0x1a/0x90 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_stock+0x31/0x50 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_obj_stock+0x105/0x1b0 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __dentry_kill+0x135/0x170 Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_ppoll+0xad/0x130
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sigprocmask+0x85/0xb0
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sigprocmask+0x85/0xb0
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f0faa4452fc
Sep 26 16:18:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe8d280c70 EFLAGS: 00000202 ORIG_RAX: 000000000000010f
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007f0faa4452fc
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000055dad0cb09a0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000002 R08: 0000000000000008 R09: 00007ffe8d281ec6
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffe8d280ef0 R11: 0000000000000202 R12: 0000000000000000
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000064 R14: 000055dad0cb09a0 R15: 000055dacf4877c0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd state:S stack: 0 pid: 1092 ppid: 1 flags:0x00000002
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __dentry_kill+0x135/0x170
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_epoll_wait+0x633/0x750
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ep_eventpoll_poll+0x10/0x10
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_epoll_wait+0x4d/0xd0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fdabdb2f097
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffccbaa8d58 EFLAGS: 00000202 ORIG_RAX: 00000000000000e8
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000028 RCX: 00007fdabdb2f097
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000028 RSI: 000055fd21e7b180 RDI: 0000000000000004
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00000000000000e6 R08: 0000000000000028 R09: 0000000000000000
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000ffffffff R11: 0000000000000202 R12: ffffffffffffffff
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000017 R14: aaaaaaaaaaaaaaab R15: 000055fd21e39ef0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:(sd-pam) state:S stack: 0 pid: 1094 ppid: 1092 flags:0x00000002
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel:
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? filemap_map_pages+0x1b3/0x880
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_signal+0x56/0x1c0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sigtimedwait+0x15d/0x200
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_rt_sigtimedwait+0x76/0xf0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __task_pid_nr_ns+0x94/0xa0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ksys_write+0x53/0xd0
Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ?
syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff3f1a5ed53 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffdcbdda120 EFLAGS: 00000202 ORIG_RAX: 0000000000000080 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00007ffdcbdda150 RCX: 00007ff3f1a5ed53 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffdcbdda150 RDI: 00007ffdcbdda2e0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007ffdcbdda150 R08: 0000000000000000 R09: 0000000000008000 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000008 R11: 0000000000000202 R12: 00007ffdcbdda238 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000444 R15: 00007ffdcbdda240 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:agetty state:S stack: 0 pid: 1107 ppid: 1 flags:0x00000002 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __flush_work.isra.0+0x174/0x250 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ldsem_down_read+0x25/0x280 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock+0x13/0x40 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_select+0x632/0x7e0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mls_level_isvalid+0x3e/0x70 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock+0x13/0x40 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? simple_xattr_list_add+0x2e/0x50 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm_noaudit+0x79/0xe0 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
selinux_inode_permission+0xeb/0x170 Sep 26 16:18:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? bpf_lsm_inode_follow_link+0x10/0x10 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? security_inode_permission+0x3a/0x60 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_add_file_rmap+0x97/0x380 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: core_sys_select+0x18d/0x370 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_iunlock+0xa6/0x100 [xfs] Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_filemap_map_pages+0x53/0x60 [xfs] Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_fault+0x1c8/0x440 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __handle_mm_fault+0x64d/0xe40 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_pselect.constprop.0+0xb1/0x140 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_pselect6+0x65/0x80 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fa8c52989a4 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffdec3cd5c0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007fa8c52989a4 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffdec3cd6a0 RDI: 0000000000000005 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffdec3cd6a0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 000055788aab53d7 R15: 0000000000000000 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:agetty state:S stack: 0 pid: 1111 ppid: 1 flags:0x00000002 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? post_alloc_hook+0xfe/0x160 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __flush_work.isra.0+0x174/0x250 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ldsem_down_read+0x25/0x280 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock+0x13/0x40 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_select+0x632/0x7e0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm_noaudit+0x79/0xe0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? selinux_inode_permission+0xeb/0x170 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? bpf_lsm_inode_follow_link+0x10/0x10 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? security_inode_permission+0x3a/0x60 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __legitimize_path+0x27/0x60 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_add_file_rmap+0x97/0x380 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_set_pte+0x189/0x1d0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? radix_tree_node_alloc.constprop.0+0x93/0xd0 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: core_sys_select+0x18d/0x370 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_fault+0x1c8/0x440 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __handle_mm_fault+0x64d/0xe40 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rseq_handle_notify_resume+0x99/0x460 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_pselect.constprop.0+0xb1/0x140 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_pselect6+0x65/0x80 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff1faf8b9a4 Sep 26 16:18:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007fffbe289de0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007ff1faf8b9a4 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007fffbe289ec0 RDI: 0000000000000005 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007fffbe289ec0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000001002 R15: 0000000000000000 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:sshd state:S stack: 0 pid: 1168 ppid: 1079 flags:0x00000002 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? unix_poll+0x23/0x100 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sock_poll+0x51/0xf0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? bpf_lsm_socket_shutdown+0x10/0x10 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? security_sock_rcv_skb+0x34/0x50 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? skb_queue_tail+0x1b/0x50 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sock_def_readable+0xe/0x80 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __check_object_size+0x1f4/0x250 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x18e/0x640 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __check_object_size+0x1f4/0x250 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __skb_datagram_iter+0x78/0x2f0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __wake_up_common_lock+0x77/0x90 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_obj_stock+0x105/0x1b0 Sep 26 16:18:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __dentry_kill+0x135/0x170 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rseq_handle_notify_resume+0x99/0x460 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? proc_nr_files+0x30/0x30 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? call_rcu+0xfd/0x690 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x29/0x110 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_ioctl+0x6e/0xd0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fd726922224 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe1774ea18 EFLAGS: 00000202 ORIG_RAX: 0000000000000007 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00007ffe1774ea36 RCX: 00007fd726922224 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007ffe1774ea30 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000055e2e93600c0 R08: 0000000000000000 R09: 0000000000000001 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000040 R11: 0000000000000202 R12: 00007ffe1774ea30 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 000055e2eb0b9f50 R15: 000055e2e9312c68 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:sshd state:S stack: 0 pid: 1170 ppid: 1168 flags:0x00000002 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
dev_hard_start_xmit+0x77/0x1e0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? pipe_poll+0x9d/0x150 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? tcp_update_skb_after_send+0x69/0xd0 Sep 26 16:18:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __local_bh_enable_ip+0x37/0x90 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? tcp_sendmsg+0x31/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sock_sendmsg+0x58/0x70 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sock_write_iter+0x89/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? vfs_write+0x312/0x3a0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_ppoll+0xad/0x130 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
ksys_read+0x8f/0xd0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __irq_exit_rcu+0x3d/0x140 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fd7269222fc Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe1774e700 EFLAGS: 00000202 ORIG_RAX: 000000000000010f Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fd7269222fc Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000004 RDI: 000055e2eb10b7b0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000004 R08: 0000000000000008 R09: 000000000000000e Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffe1774e870 R11: 0000000000000202 R12: 0000000000000000 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 000055e2eb0c63b0 R14: 0000000000000000 R15: 000055e2eb10b7b0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:restraintd state:S stack: 0 pid: 1171 ppid: 1170 flags:0x00000002 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? pipe_poll+0x9d/0x150 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? tcp_ack_update_rtt+0x10a/0x3f0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? tcp_ack+0xcc1/0x13f0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? eventfd_read+0xf3/0x2c0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: State 'stop-sigterm' timed out. Killing. 
Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Killing process 17976 (systemd-hostnam) with signal SIGKILL. Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wake_up_q+0x90/0x90 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? vfs_read+0x1d2/0x2a0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x29/0x110 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f9e100ab25f Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffeb0a222e0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000558513354750 RCX: 00007f9e100ab25f Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000ffffffff RSI: 0000000000000004 RDI: 0000558513354750 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000007fffffff R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000030 R11: 0000000000000293 R12: 000055851116f530 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000004 R14: 00000000ffffffff R15: 00005585132f0090 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gmain state:S stack: 0 pid: 1189 ppid: 1170 flags:0x00000002 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __switch_to+0x2ef/0x420 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wp_page_reuse+0x60/0xa0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wp_page_reuse+0x60/0xa0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? select_task_rq_fair+0x15d/0x1640 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? cgroup_rstat_updated+0x42/0xc0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
pollwake+0x66/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wake_up_q+0x90/0x90 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __wake_up_common+0x76/0x180 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? eventfd_write+0x220/0x2f0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wake_up_q+0x90/0x90 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? vfs_write+0xb7/0x3a0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wait_consider_task+0x4d5/0xa70 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __fget_light+0x94/0x100 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ksys_write+0xac/0xd0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x29/0x110 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? fpregs_restore_userregs+0x53/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f9e100ab25f Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f9e0ffaad60 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00005585132fed00 RCX: 00007f9e100ab25f Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00005585132fed00 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000007fffffff R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000293 R12: 000055851116f530 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 00000000ffffffff R15: 00005585132fed90 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:restraintd state:S stack: 0 pid: 1317 ppid: 1 flags:0x00000002 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
timerqueue_add+0x62/0xb0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0x9f/0xf0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm_noaudit+0x79/0xe0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm+0x7a/0x170 Sep 26 16:18:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? avc_has_perm_noaudit+0x79/0xe0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? unmap_page_range+0xac1/0x12e0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? refill_obj_stock+0x105/0x1b0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_iunlock+0x94/0x100 [xfs] Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? xfs_can_free_eofblocks+0xe7/0x120 [xfs] Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? proc_nr_files+0x30/0x30 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rseq_handle_notify_resume+0x99/0x460 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? recalibrate_cpu_khz+0x10/0x10 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ktime_get_ts64+0x4c/0xf0 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x82/0x110 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f8f26b1025f Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe3766b820 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00007f8f18005b10 RCX: 00007f8f26b1025f Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000003990 RSI: 0000000000000004 RDI: 00007f8f18005b10 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000007fffffff R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffe376a7080 R11: 0000000000000293 R12: 000055f9c6c86530 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000004 R14: 0000000000003990 R15: 000055f9c7368e00 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:gmain state:S stack: 0 pid: 1319 ppid: 1 flags:0x00000002 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __switch_to+0x2ef/0x420 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_sys_poll+0x4d6/0x5d0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? mod_objcg_state+0xc6/0x2e0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_charge_memcg+0x401/0x7b0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_node_page_state+0x72/0xc0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_memcg_lruvec_state+0x93/0x110 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x90/0x640 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_iter+0x90/0x640 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? folio_add_lru+0x92/0x100 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? eventfd_read+0xf3/0x2c0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_poll+0x29/0x110 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? ksys_read+0xac/0xd0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f8f26b1025f Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f8f26a0fd60 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000055f9c7383480 RCX: 00007f8f26b1025f Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 000055f9c7383480 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000000007fffffff R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000022 R11: 0000000000000293 R12: 000055f9c6c86530 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 00000000ffffffff R15: 000055f9c7384520 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:pool-restraintd state:S stack: 0 pid: 1900 ppid: 1 flags:0x00000002 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait_queue+0x70/0xd0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: futex_wait+0x15a/0x220 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_futex+0x106/0x1b0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_futex+0x63/0x190 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f8f26b1596d Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007f8f2620ed88 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000055f9c73987f0 RCX: 00007f8f26b1596d Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000106 RSI: 0000000000000080 RDI: 000055f9c7398800 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00000000eff07acd R08: 0000000000000faa R09: 0000000000000faa Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007f8f2620eda0 R11: 0000000000000246 R12: 0000000000000fb9 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 000000001eadb6c8 R14: 00007f8f2620ed90 R15: 0000000000000106 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:10_bash_login state:S stack: 0 pid: 1324 ppid: 1317 flags:0x00004002 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_lruvec_page_state+0xa6/0x1a0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_remove_rmap+0xf5/0x550 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_rt_sigaction+0x5e/0xb0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sigprocmask+0x85/0xb0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f5453f7d7c7 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffcbf9b5198 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000547 RCX: 00007f5453f7d7c7 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffcbf9b51c0 RDI: 00000000ffffffff Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 000055c9efe20130 R09: 0000000000000001 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffcbf9b51c0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:runtest.sh state:S stack: 0 pid: 1351 ppid: 1324 flags:0x00000002 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lru_add_drain_cpu+0x87/0x130 Sep 26 16:18:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_wp_page+0x110/0x410 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f64046237c7 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007fff5f5b8be8 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000000046c4 RCX: 00007f64046237c7 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007fff5f5b8c10 RDI: 00000000ffffffff Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 000055654509e2e0 R09: 0000000000000001 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007fff5f5b8c10 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000001 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:bash state:S stack: 0 pid:11599 ppid: 1 flags:0x00000002 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_wp_page+0x110/0x410 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f33b923c7c7 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffd26880358 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000000043f5 RCX: 00007f33b923c7c7 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffd26880380 RDI: 00000000ffffffff Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 000055e861b09370 R09: 0000000000000001 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffd26880380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:tls-strp state:I stack: 0 pid:11713 ppid: 2 flags:0x00004000 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvme-wq state:I stack: 0 pid:14337 ppid: 2 flags:0x00004000 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvme-reset-wq state:I stack: 0 pid:14338 ppid: 2 flags:0x00004000 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvme-delete-wq state:I stack: 0 pid:14339 ppid: 2 flags:0x00004000 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/20:0 state:I stack: 0 pid:14842 ppid: 2 flags:0x00004000 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:dio/dm-0 state:I stack: 0 pid:15550 ppid: 2 flags:0x00004000 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/6:4 state:I stack: 0 pid:15573 ppid: 2 flags:0x00004000 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/4:6 state:I stack: 0 pid:15580 ppid: 2 flags:0x00004000 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_power_efficient) Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/7:3 state:I stack: 0 pid:15583 ppid: 2 flags:0x00004000 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/19:5 state:I stack: 0 pid:15596 ppid: 2 flags:0x00004000 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/21:2 state:I stack: 0 pid:15599 ppid: 2 flags:0x00004000 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_gp) Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/22:4 state:I stack: 0 pid:15615 ppid: 2 flags:0x00004000 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/0:9 state:I stack: 0 pid:15626 ppid: 2 flags:0x00004000 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_par_gp) Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rcu_report_exp_rnp+0x8f/0xc0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/3:12 state:I stack: 0 pid:15639 ppid: 2 flags:0x00004000 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (ipv6_addrconf) Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? netdev_run_todo+0x52/0x4f0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/20:7 state:I stack: 0 pid:15666 ppid: 2 flags:0x00004000 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/7:8 state:I stack: 0 pid:15672 ppid: 2 flags:0x00004000 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/17:9 state:I stack: 0 pid:15696 ppid: 2 flags:0x00004000 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_avgs_work+0x66/0xa0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/23:7 state:I stack: 0 pid:15719 ppid: 2 flags:0x00004000 Sep 26 16:18:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/14:4 state:I stack: 0 pid:15738 ppid: 2 flags:0x00004000 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/27:2 state:I stack: 0 pid:15757 ppid: 2 flags:0x00004000 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/28:3 state:I stack: 0 pid:15760 ppid: 2 flags:0x00004000 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/31:4 state:I stack: 0 pid:15775 ppid: 2 flags:0x00004000 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/9:3 state:I stack: 0 pid:15780 ppid: 2 flags:0x00004000 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_timer_on+0xcc/0xf0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/10:4 state:I stack: 0 pid:15794 ppid: 2 flags:0x00004000 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? aio_complete_rw+0x16b/0x1f0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/12:0 state:I stack: 0 pid:15797 ppid: 2 flags:0x00004000 Sep 26 16:18:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/25:4 state:I stack: 0 pid:15807 ppid: 2 flags:0x00004000 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/27:8 state:I stack: 0 pid:15818 ppid: 2 flags:0x00004000 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/24:7 state:I stack: 0 pid:15830 ppid: 2 flags:0x00004000 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (xfsalloc) Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/16:0 state:I stack: 0 pid:15982 ppid: 2 flags:0x00004000 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (cgroup_destroy) Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? css_free_rwork_fn+0x135/0x3d0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/5:12 state:I stack: 0 pid:15996 ppid: 2 flags:0x00004000 Sep 26 16:18:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_par_gp) Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rcu_report_exp_rnp+0x8f/0xc0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/1:14 state:I stack: 0 pid:16063 ppid: 2 flags:0x00004000 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_gp) Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/4:7 state:I stack: 0 pid:16070 ppid: 2 flags:0x00004000 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __cond_resched+0x1c/0x30 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/3:13 state:I stack: 0 pid:16071 ppid: 2 flags:0x00004000 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_par_gp) Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rcu_report_exp_rnp+0x8f/0xc0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/22:5 state:I stack: 0 pid:16145 ppid: 2 flags:0x00004000 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/6:12 state:I stack: 0 pid:16165 ppid: 2 flags:0x00004000 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_avgs_work+0x93/0xa0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/23:2 state:I stack: 0 pid:16166 ppid: 2 flags:0x00004000 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_avgs_work+0x66/0xa0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/17:8 state:I stack: 0 pid:16189 ppid: 2 flags:0x00004000 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/18:0 state:I stack: 0 pid:16196 ppid: 2 flags:0x00004000 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (nvmet-wq) Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nvmet_file_execute_io+0x1a4/0x240 [nvmet] Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/18:1 state:I stack: 0 pid:16197 ppid: 2 flags:0x00004000 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/8:0 state:I stack: 0 pid:16502 ppid: 2 flags:0x00004000 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (i40e) Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_avgs_work+0x93/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/14:0 state:I stack: 0 pid:16581 ppid: 2 flags:0x00004000 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/15:1 state:I stack: 0 pid:16584 ppid: 2 flags:0x00004000 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_power_efficient) Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/24:0 state:I stack: 0 pid:16587 ppid: 2 flags:0x00004000 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
preempt_count_add+0x6a/0xa0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/25:0 state:I stack: 0 pid:16602 ppid: 2 flags:0x00004000 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/26:0 state:I stack: 0 pid:16605 ppid: 2 flags:0x00004000 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_avgs_work+0x93/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
_raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/29:0 state:I stack: 0 pid:16610 ppid: 2 flags:0x00004000 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/30:0 state:I stack: 0 pid:16611 ppid: 2 flags:0x00004000 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/28:0 state:I stack: 0 pid:16612 ppid: 2 flags:0x00004000 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/11:0 state:I stack: 0 pid:16638 ppid: 2 flags:0x00004000 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_power_efficient) Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? neigh_managed_work+0x92/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/10:1 state:R running task stack: 0 pid:16797 ppid: 2 flags:0x00004000 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_timer_on+0xcc/0xf0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/12:1 state:I stack: 0 pid:16978 ppid: 2 flags:0x00004000 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/15:2 state:I stack: 0 pid:16982 ppid: 2 flags:0x00004000 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? aio_complete_rw+0x16b/0x1f0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/13:0 state:I stack: 0 pid:17113 ppid: 2 flags:0x00004000 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u130:2 state:I stack: 0 pid:17252 ppid: 2 flags:0x00004000 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (flush-253:0) Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/5:0 state:I stack: 0 pid:17319 ppid: 2 flags:0x00004000 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_timer_on+0xcc/0xf0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/1:0 state:I stack: 0 pid:17320 ppid: 2 flags:0x00004000 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kfree_rcu_monitor+0xaf/0x140 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:bash state:S stack: 0 pid:17397 ppid: 11599 flags:0x00000002 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f33b923c7c7 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffd26880118 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000000043f6 RCX: 00007f33b923c7c7 Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffd26880140 RDI: 00000000ffffffff Sep 26 16:18:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 000055e861b088a0 R09: 0000000000000001 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffd26880140 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:check state:S stack: 0 pid:17398 ppid: 17397 flags:0x00000002 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_remove_rmap+0xf5/0x550 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_rt_sigaction+0x5e/0xb0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f3b49b917c7 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe664d5278 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000004400 RCX: 00007f3b49b917c7 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffe664d52a0 RDI: 00000000ffffffff Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 00005593d245e720 R09: 0000000000000001 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe664d52a0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:check state:S stack: 0 pid:17408 ppid: 17398 flags:0x00000002 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f3b49b917c7 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe664d48f8 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000004402 RCX: 00007f3b49b917c7 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffe664d4920 RDI: 00000000ffffffff Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 00005593d245f050 R09: 0000000000000001 Sep 26 16:18:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe664d4920 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:check state:S stack: 0 pid:17410 ppid: 17408 flags:0x00000002 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? put_page+0x6/0x60 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f3b49b917c7 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe664d07e8 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000000000000445c RCX: 00007f3b49b917c7 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffe664d0810 RDI: 00000000ffffffff Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 00005593d255d720 R09: 0000000000000001 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe664d0810 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:18:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvmet-zbd-wq state:I stack: 0 pid:17460 ppid: 2 flags:0x00004000 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvmet-buffered- state:I stack: 0 pid:17461 ppid: 2 flags:0x00004000 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:nvmet-wq state:I stack: 0 pid:17462 ppid: 2 flags:0x00004000 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? set_next_entity+0xda/0x150 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rescuer_thread+0x29a/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? process_one_work+0x380/0x380 Sep 26 16:18:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:fio state:S stack: 0 pid:17500 ppid: 17410 flags:0x00000002 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? psi_task_switch+0xa6/0x1d0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __update_idle_core+0x20/0xc0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x322/0x12a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule+0x5d/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_nanosleep+0x74/0x160 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_sys_newfstatat+0x13/0x40 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_nanosleep+0xa9/0x180 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? common_nsleep+0x3f/0x50 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_clock_nanosleep+0xaf/0x110 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_nanosleep+0xa9/0x180 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __hrtimer_init+0xe0/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? common_nsleep+0x3f/0x50 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_clock_nanosleep+0xaf/0x110 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x5b/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:fio state:S stack: 0 pid:17533 ppid: 17410 flags:0x00000002 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x322/0x12a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule+0x5d/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule_hrtimeout_range_clock+0xe2/0xf0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_wait_queue+0x64/0xa0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_select+0x632/0x7e0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __pollwait+0xe0/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? number+0x310/0x3a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? vsnprintf+0x33a/0x550 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? sprintf+0x46/0x50 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? part_stat_show+0x13b/0x1b0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? core_sys_select+0x18d/0x370 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irq+0x1b/0x35 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_timerfd_settime+0x33d/0x500 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_timerfd_settime+0x45/0x90 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_pselect.constprop.0+0xb1/0x140 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x165/0x1f0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_pselect6+0x65/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x5b/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_ioctl+0xa8/0xd0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:fio state:S stack: 0 pid:17534 ppid: 17500 flags:0x00000002 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __schedule+0x322/0x12a0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x47/0xa0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? schedule+0x5d/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_io_getevents+0x71/0xe0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_io_getevents+0x48/0xb0 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x5b/0x80 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_io_getevents+0x48/0xb0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x5b/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u129:2 state:I stack: 0 pid:17543 ppid: 2 flags:0x00004000 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_unbound) Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
rescuer_thread+0x380/0x380 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/21:1 state:I stack: 0 pid:17546 ppid: 2 flags:0x00004000 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/16:1 state:I stack: 0 pid:17549 ppid: 2 flags:0x00004000 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/0:0 state:I stack: 0 pid:17550 ppid: 2 flags:0x00004000 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (rcu_gp) Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u129:3 state:I stack: 0 pid:17573 ppid: 2 flags:0x00004000 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_unbound) Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/2:3 state:I stack: 0 pid:17600 ppid: 2 flags:0x00004000 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
psi_avgs_work+0x93/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/2:13 state:I stack: 0 pid:17608 ppid: 2 flags:0x00004000 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (dio/dm-0) Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/9:1 state:I stack: 0 pid:17644 ppid: 2 flags:0x00004000 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u130:1 state:I stack: 0 pid:17792 ppid: 2 flags:0x00004000 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (flush-253:0) Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:18:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/13:2 state:I stack: 0 pid:17831 ppid: 2 flags:0x00004000 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lock_timer_base+0x61/0x80 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u129:0 state:I stack: 0 pid:17925 ppid: 2 flags:0x00004000 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events_unbound) Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/9:0 state:I stack: 0 pid:17930 ppid: 2 flags:0x00004000 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_timer_on+0xcc/0xf0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:10_bash_login state:S stack: 0 pid:17965 ppid: 1171 flags:0x00000002 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_lruvec_page_state+0xa6/0x1a0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? page_remove_rmap+0xf5/0x550 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_rt_sigaction+0x5e/0xb0 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7eff27d747c7 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe22288398 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 000000000000464b RCX: 00007eff27d747c7 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffe222883c0 RDI: 00000000ffffffff Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 000055802dc26b90 R09: 0000000000000001 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffe222883c0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-hostnam state:D stack: 0 pid:17976 ppid: 1 flags:0x00000002 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? try_to_wake_up+0x83/0x570 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: synchronize_rcu_expedited+0x205/0x3e0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? cond_synchronize_rcu_expedited+0x30/0x30 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? dequeue_task_stop+0x70/0x70 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? umount_tree+0x227/0x340 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: namespace_unlock+0xae/0x180 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: put_mnt_ns+0x69/0x90 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: free_nsproxy+0x17/0x1b0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_exit+0x30c/0xae0 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_group_exit+0x2d/0x90 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_exit_group+0x14/0x20 Sep 26 16:19:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fc7288f9ffd Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffc3baad6a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00007fc7289f29e0 RCX: 00007fc7288f9ffd Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00000000000000e7 RSI: fffffffffffffea0 RDI: 0000000000000000 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000000 R08: 000056444ddb4fa0 R09: 00007fc7289fdac0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00000000000000e0 R11: 0000000000000246 R12: 00007fc7289f29e0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 00007fc7289f82c8 R15: 00007fc7289f82e0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/3:0 state:D stack: 0 pid:17977 ppid: 2 flags:0x00004000 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: rcu_gp wait_rcu_exp_gp Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: synchronize_rcu_expedited_wait_once+0x68/0x190 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_exp_wait_wake+0x36/0x3b0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: process_one_work+0x1c7/0x380 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0x4d/0x380 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/0:1 state:I stack: 0 pid:17978 ppid: 2 flags:0x00004000 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (events) Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:run_plugins state:S stack: 0 pid:17995 ppid: 17965 flags:0x00000002 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? wp_page_copy+0x365/0x6f0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_wait+0x160/0x2f0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kernel_wait4+0x8e/0x110 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? thread_group_exited+0x50/0x50 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _copy_to_user+0x21/0x30 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __x64_sys_rt_sigaction+0x5e/0xb0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7ff66d0e87c7 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffc04f35b78 EFLAGS: 00000202 ORIG_RAX: 000000000000003d Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000004683 RCX: 00007ff66d0e87c7 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 00007ffc04f35ba0 RDI: 00000000ffffffff Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 0000000000000001 R08: 0000558374de2280 R09: 0000000000000001 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffc04f35ba0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/13:1 state:I stack: 0 pid:18024 ppid: 2 flags:0x00004000 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (mm_percpu_wq) Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? add_timer_on+0xcc/0xf0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:20_sysinfo state:R running task stack: 0 pid:18051 ppid: 17995 flags:0x0000000a Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sched_show_task.cold+0xc7/0xdf Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: show_state_filter+0x78/0xe0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysrq_handle_showstate+0xc/0x20 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __handle_sysrq.cold+0x44/0x11c Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: write_sysrq_trigger+0x24/0x40 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: proc_reg_write+0x56/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x47/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: vfs_write+0xb7/0x3a0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ksys_write+0x53/0xd0 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? handle_mm_fault+0xae/0x290 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_user_addr_fault+0x1ef/0x690 Sep 26 16:19:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
exc_page_fault+0x70/0x170 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fafec96c2c4 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Code: 15 41 7b 0d 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 00 f3 0f 1e fa 80 3d 1d 03 0e 00 00 74 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 c3 0f 1f 00 48 83 ec 28 48 89 54 24 18 48 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffcebe0a398 EFLAGS: 00000202 ORIG_RAX: 0000000000000001 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fafec96c2c4 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000002 RSI: 000055c023862280 RDI: 0000000000000001 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 000055c023862280 R08: 0000000000000000 R09: 0000000000000073 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000002 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00007fafeca45760 R14: 0000000000000002 R15: 00007fafeca409e0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-userwor state:S stack: 0 pid:18091 ppid: 1038 flags:0x00000002 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_wait_for_more_packets+0xfb/0x150 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? skb_attempt_defer_free+0x120/0x120 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_recv_datagram+0x5a/0xa0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: skb_recv_datagram+0x2a/0x40 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: unix_accept+0x68/0x150 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_accept+0x100/0x170 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? proc_nr_files+0x30/0x30 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __rseq_handle_notify_resume+0x99/0x460 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
alloc_fd+0xd1/0x170 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __sys_accept4+0x5b/0xc0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_accept4+0x18/0x20 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f64c05311b7 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007fffe4903e28 EFLAGS: 00000202 ORIG_RAX: 0000000000000120 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000fffffff5 RCX: 00007f64c05311b7 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000003 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007fffe4903ed0 R08: 0000000000000010 R09: 0000000000000000 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000080800 R11: 0000000000000202 R12: 00000000ec27e35f Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00000000fe09865f R14: 00000000ec27e35f R15: ffffffffffffffff Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:kworker/u130:0 state:I stack: 0 pid:18094 ppid: 2 flags:0x00004000 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: 0x0 (flush-253:0) Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? queue_delayed_work_on+0x39/0x50 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irq+0x19/0x40 Sep 26 16:19:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0xaf/0x380 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
kthread_complete_and_exit+0x20/0x20 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-userwor state:S stack: 0 pid:18099 ppid: 1038 flags:0x00000002 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_wait_for_more_packets+0xfb/0x150 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? skb_attempt_defer_free+0x120/0x120 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_recv_datagram+0x5a/0xa0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: skb_recv_datagram+0x2a/0x40 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: unix_accept+0x68/0x150 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_accept+0x100/0x170 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? proto_seq_start+0x30/0x30 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? alloc_fd+0xd1/0x170 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __sys_accept4+0x5b/0xc0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_accept4+0x18/0x20 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? strscpy_pad+0xf/0x40 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __set_task_comm+0x40/0xa0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __do_sys_prctl+0x425/0x6d0 Sep 26 16:19:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exit_to_user_mode_prepare+0x18f/0x1f0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
do_syscall_64+0x67/0x80 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f461f3311b7 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007fff63a7dca8 EFLAGS: 00000202 ORIG_RAX: 0000000000000120 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000fffffff5 RCX: 00007f461f3311b7 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000003 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007fff63a7dd50 R08: 0000000000000010 R09: 0000000000000000 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000080800 R11: 0000000000000202 R12: 00000000eccbefd3 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00000000fead92d3 R14: 00000000eccbefd3 R15: ffffffffffffffff Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:systemd-userwor state:S stack: 0 pid:18100 ppid: 1038 flags:0x00000002 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? get_nohz_timer_target+0x18/0x1a0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock_irqrestore+0x23/0x40 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __mod_timer+0x256/0x380 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule_timeout+0x79/0x130 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? __bpf_trace_tick_stop+0x10/0x10 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_wait_for_more_packets+0xfb/0x150 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? skb_attempt_defer_free+0x120/0x120 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __skb_recv_datagram+0x5a/0xa0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: skb_recv_datagram+0x2a/0x40 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: unix_accept+0x68/0x150 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_accept+0x100/0x170 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? preempt_count_add+0x6a/0xa0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_lock_irqsave+0x23/0x50 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? cgroup_rstat_updated+0x42/0xc0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? _raw_spin_unlock+0x15/0x30 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
alloc_fd+0xd1/0x170 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __sys_accept4+0x5b/0xc0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_accept4+0x18/0x20 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? fpregs_restore_userregs+0x53/0xe0 Sep 26 16:19:06 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? syscall_exit_to_user_mode+0x17/0x40 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? do_syscall_64+0x67/0x80 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7f8617b311b7 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffe34e7dba8 EFLAGS: 00000202 ORIG_RAX: 0000000000000120 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00000000fffffff5 RCX: 00007f8617b311b7 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000003 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007ffe34e7dc50 R08: 0000000000000010 R09: 0000000000000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 0000000000080800 R11: 0000000000000202 R12: 00000000eccbf155 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00000000fead9455 R14: 00000000eccbf155 R15: ffffffffffffffff Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: task:sleep state:S stack: 0 pid:18130 ppid: 1351 flags:0x00000002 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __schedule+0x322/0x12a0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? timerqueue_add+0x62/0xb0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? enqueue_hrtimer+0x2f/0x80 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? hrtimer_start_range_ns+0x274/0x3a0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: schedule+0x5d/0xe0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_nanosleep+0x74/0x160 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hrtimer_nanosleep+0xa9/0x180 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? 
__hrtimer_init+0xe0/0xe0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: common_nsleep+0x3f/0x50 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __x64_sys_clock_nanosleep+0xaf/0x110 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: do_syscall_64+0x5b/0x80 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? exc_page_fault+0x70/0x170 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: entry_SYSCALL_64_after_hwframe+0x63/0xcd Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0033:0x7fd873714197 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 002b:00007ffed84bcdd8 EFLAGS: 00000202 ORIG_RAX: 00000000000000e6 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: ffffffffffffffda RBX: 00007fd8736416c0 RCX: 00007fd873714197 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: 00007ffed84bce20 RSI: 0000000000000000 RDI: 0000000000000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: 00007ffed84bce10 R08: 0000000000000000 R09: 0000000000000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: 00007ffed84bce10 R11: 0000000000000202 R12: 0000000000000005 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 00007ffed84bce20 R14: 00007ffed84bce10 R15: 00007ffed84bd557 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sched Debug Version: v0.11, 6.0.0-rc7 #1 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ktime : 4049038.868048 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sched_clk : 4060020.368944 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu_clk : 4057508.036583 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: jiffies : 4298716132 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sched_clock_stable() : 1 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysctl_sched Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_latency : 24.000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_min_granularity : 3.000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_idle_min_granularity : 0.750000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_wakeup_granularity : 4.000000 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_child_runs_first : 0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_features : 58611259 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .sysctl_sched_tunable_scaling : 1 (logarithmic) Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#0, 2094.764 MHz Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 614115 Sep 26 16:19:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967288 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 
4298.716372 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4058023.771166 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4025201.268771 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[0]: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[0]: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd 1 1465.602314 3208 120 0.000000 21885.332344 0.000000 0.000000 0 0 /autogroup-1 Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_gp 3 7.596336 2 100 0.000000 0.004811 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_par_gp 4 9.097885 2 100 0.000000 0.003150 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I slub_flushwq 5 11.099150 2 100 0.000000 0.003086 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I netns 6 13.100475 2 100 0.000000 0.003183 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/0:0H 8 17.101837 4 100 0.000000 0.009622 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/0:1H 10 47239.137577 5486 100 0.000000 77.731917 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u128:0 11 19.360426 4 120 0.000000 0.008656 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I mm_percpu_wq 12 21.354782 2 100 0.000000 0.002838 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_tasks_kthre 14 22.856773 2 120 0.000000 0.002904 0.000000 0.000000 0 0 / Sep 26 16:19:08 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_tasks_rude_ 15 24.358433 2 120 0.000000 0.003051 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_tasks_trace 16 26.360207 2 120 0.000000 0.003478 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/0 17 47049.659181 3317 120 0.000000 552.075982 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/0 19 0.000000 1063 0 0.000000 7.749970 0.000000 0.000000 0 0 / Sep 26 16:19:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/0 21 11894.008371 23 120 0.000000 5.483392 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u132:0 302 3111.780686 2 100 0.000000 0.052466 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-reclaim/dm- 766 3834.845607 2 100 0.000000 0.086079 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-log/sda2 993 10984.285323 2 100 0.000000 0.074781 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S agetty 1111 3.490491 8 120 0.000000 7.191450 0.000000 0.000000 0 0 /autogroup-79 Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/0:9 15626 46672.979184 15158 120 0.000000 477.603469 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S bash 17397 6532.212921 1 120 0.000000 0.877151 0.000000 0.000000 0 0 /autogroup-97 Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvmet-zbd-wq 17460 39914.827364 2 100 0.000000 0.015469 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvmet-buffered- 17461 39926.834382 2 100 0.000000 0.007856 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/0:0 17550 46672.947209 724 120 0.000000 17.871537 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/0:1 17978 47239.588629 297 120 0.000000 37.890630 0.000000 0.000000 0 0 / Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#1, 2094.764 MHz Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:09 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 528987 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967289 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.718451 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4059891.821877 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4050650.884608 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[1]:/autogroup-97 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:10 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 3939.577962 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -43311.559530 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 2 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4050980.873610 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 62167.470439 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 3335.415570 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 2 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[1]:/ Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 62167.470439 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : 14916.332947 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 
16:19:10 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 2 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 2 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[1]: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[1]: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/1 22 14289.403644 23 120 0.000000 0.815961 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/1 23 0.000000 1049 0 0.000000 327.002761 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/1 24 62044.124941 1435 120 0.000000 44.009339 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/1:0H 26 2610.508352 4 100 0.000000 0.064275 0.000000 0.000000 0 0 / Sep 26 16:19:11 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kthrotld 240 2195.072470 2 100 0.000000 0.009205 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/1:1H 469 62156.464658 7339 100 0.000000 92.879185 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/1:14 16063 56986.257570 49348 120 0.000000 1457.444211 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/1:0 17320 62156.964637 13553 120 0.000000 155.994979 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvmet-wq 17462 57222.648742 2 100 0.000000 0.014812 0.000000 0.000000 0 0 / Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S fio 17533 3941.557567 4776 120 0.000000 1132.140513 0.000000 0.000000 0 0 /autogroup-97 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-userwor 18100 12.750006 5 120 0.000000 6.251568 0.000000 0.000000 0 0 /autogroup-53 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#2, 2094.764 MHz Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:11 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 1362312 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967260 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.720460 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4062118.825631 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4028183.682822 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[2]: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[2]: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
------------------------------------------------------------------------------------------------------------- Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/2 27 10967.226035 23 120 0.000000 0.720779 0.000000 0.000000 0 0 / Sep 26 16:19:12 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/2 28 65.984448 1050 0 0.000000 328.702854 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/2 29 61631.659090 17327 120 0.000000 671.138214 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/2:0H 31 47.959928 4 100 0.000000 0.039907 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksmd 190 22.959257 2 125 0.000000 0.003220 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/2:1H 231 61631.659735 198983 100 0.000000 2645.416577 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_4 260 4433.955972 68 120 0.000000 1.320099 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gdbus 1070 13.934726 243 120 0.000000 31.860459 0.000000 0.000000 0 0 /autogroup-55 Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S (sd-pam) 1094 11.481820 1 120 0.000000 0.530405 0.000000 0.000000 0 0 /autogroup-72 Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S check 17410 2273.035014 38 120 0.000000 27.105449 0.000000 0.000000 0 0 /autogroup-97 Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/2:3 17600 61631.664718 131024 120 0.000000 6280.970061 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/2:13 17608 57364.704194 13 120 0.000000 0.218681 0.000000 0.000000 0 0 / Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#3, 2094.764 MHz Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 1 Sep 26 16:19:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 962401 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967256 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.722436 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4063925.881458 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4057526.271463 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[3]:/autogroup-159 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 8319.794481 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.spread : 0.000000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -38931.343011 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 15 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 14 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 15 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 7 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 7 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4058049.166586 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 50510.822334 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 8337.310778 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 1048576 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 8 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 8 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 8 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[3]:/ Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 50511.006703 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : 3259.869211 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.load : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 10 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 8 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 8 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:14 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[3]: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[3]: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/3 32 10909.543380 23 120 0.000000 0.928922 0.000000 0.000000 0 0 / Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/3 33 93.984488 1043 0 0.000000 332.006817 0.000000 0.000000 0 0 / Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/3 34 46957.366676 10498 120 0.000000 368.917617 0.000000 0.000000 0 0 / Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/3:0H 36 7.655562 4 100 0.000000 0.027990 0.000000 0.000000 0 0 / Sep 26 16:19:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S khugepaged 191 10.951418 2 139 0.000000 0.000000 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_2 257 19.655378 2 100 0.000000 0.043541 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/3:1H 268 
50529.875608 10001 100 0.000000 105.084143 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I ipv6_addrconf 281 43.732003 2 100 0.000000 0.081394 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/3:12 15639 50073.868281 31799 120 0.000000 1016.045331 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/3:13 16071 46894.658740 3321 120 0.000000 108.797107 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S fio 17534 8382.835975 353651 120 0.000000 36911.681354 0.000000 0.000000 0 0 /autogroup-159 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: D systemd-hostnam 17976 75.152957 17 120 0.000000 77.190534 0.000000 0.000000 0 0 /autogroup-160 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: D kworker/3:0 17977 50356.647642 32 120 0.000000 0.221277 0.000000 0.000000 0 0 / Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#4, 2094.764 MHz Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 310797 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967278 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.724881 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4066536.926768 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4063435.821639 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[4]: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[4]: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
------------------------------------------------------------------------------------------------------------- Sep 26 16:19:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/4 37 9919.653245 23 120 0.000000 0.889966 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/4 38 124.984611 1037 0 0.000000 333.649216 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/4 39 41740.372485 1415 120 0.000000 607.235994 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/4:0H 41 485.816574 4 100 0.000000 0.024089 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/4:1H 270 41843.517287 8130 100 0.000000 80.270332 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/4:6 15580 40050.557636 7395 120 0.000000 317.225713 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/4:7 16070 41855.545229 14863 120 0.000000 876.366168 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u129:2 17543 41741.022560 4735 120 0.000000 34.652235 0.000000 0.000000 0 0 / Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-userwor 18091 93.951470 4 120 0.000000 6.957867 0.000000 0.000000 0 0 /autogroup-53 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#5, 2094.764 MHz Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 355655 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967294 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.725887 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4067263.930702 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4058254.790361 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:17 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[5]:/ Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 25182.682156 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -22068.455336 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.idle_nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[5]: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[5]: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/5 42 3426.386645 23 120 0.000000 0.903506 0.000000 0.000000 0 0 / Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/5 43 166.984647 1040 0 0.000000 334.915933 0.000000 0.000000 0 0 / Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/5 44 25170.682156 604 120 0.000000 275.529289 0.000000 0.000000 0 0 / Sep 26 16:19:18 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/5:0H 46 635.517402 4 100 0.000000 0.030234 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/5:1H 577 25170.682844 11098 100 0.000000 633.016573 0.000000 0.000000 0 0 / Sep 26 16:19:19 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-buf/sda2 988 2447.114882 2 100 0.000000 0.070156 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-conv/sda2 989 2459.130923 2 100 0.000000 0.017914 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-reclaim/sda 990 2471.142114 2 100 0.000000 0.013077 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-blockgc/sda 991 2483.152817 2 100 0.000000 0.012430 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-inodegc/sda 992 2495.163087 2 100 0.000000 0.011921 0.000000 0.000000 0 0 / Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S dbus-broker-lau 1067 -7.307459 208 120 0.000000 54.736285 0.000000 0.000000 0 0 /autogroup-62 Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd 1092 771.423897 487 120 0.000000 1223.328282 0.000000 0.000000 0 0 /autogroup-72 Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S runtest.sh 1351 1243.333446 1642 120 0.000000 728.156538 0.000000 0.000000 0 0 /autogroup-88 Sep 26 16:19:19 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/5:12 15996 23978.885973 3977 120 0.000000 195.390648 0.000000 0.000000 0 0 / Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/5:0 17319 25170.919045 595 120 0.000000 6.978824 0.000000 0.000000 0 0 / Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#6, 2094.764 MHz Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 272748 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967278 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.728648 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4070027.979231 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4064305.857063 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[6]: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[6]: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.dl_bw->bw : 996147 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/6 47 11542.378178 23 120 0.000000 0.893117 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/6 48 208.984681 1064 0 0.000000 338.431564 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/6 49 40290.140991 5633 120 0.000000 542.135428 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/6:0H 51 1969.482155 4 100 0.000000 0.028265 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/6:1H 272 40293.287062 7910 100 0.000000 59.387092 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-buf/dm-0 764 2840.478014 2 100 0.000000 0.054441 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S dbus-broker 1069 1.015335 5070 120 0.000000 322.757584 0.000000 0.000000 0 0 /autogroup-62 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S bash 11599 3015.738908 267 120 0.000000 190.041382 0.000000 0.000000 0 0 /autogroup-97 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/6:4 15573 39426.235962 3304 120 0.000000 201.278243 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/6:12 16165 40293.335657 3974 120 0.000000 4992.342777 0.000000 0.000000 0 0 / Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S sleep 18134 7453.607087 1 120 0.000000 1.874496 0.000000 0.000000 0 0 /autogroup-88 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#7, 2094.764 MHz Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 317508 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967270 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.729755 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4071376.985235 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4065475.171877 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[7]:/autogroup-88 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:21 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 1720.542606 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -45530.594886 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:21 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4065126.064270 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 38516.933925 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 179.592340 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 1041456 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[7]:/ Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 38516.933925 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -8734.203567 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 
16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[7]: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[7]: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/7 52 13285.679248 23 120 0.000000 0.902426 0.000000 0.000000 0 0 / Sep 26 16:19:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/7 53 250.984804 1062 0 0.000000 340.366091 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/7 54 38503.103700 19629 120 0.000000 833.901837 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/7:0H 56 1642.516078 4 100 0.000000 0.034743 0.000000 0.000000 0 0 / Sep 26 16:19:23 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kauditd 181 38485.674634 251 120 0.000000 1170.716146 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/7:1H 274 38504.934041 6429 100 0.000000 58.519781 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-conv/dm-0 765 2543.394756 2 100 0.000000 0.053822 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/7:3 15583 36016.154907 3779 120 0.000000 249.317791 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/7:8 15672 38504.960232 4161 120 0.000000 309.017724 0.000000 0.000000 0 0 / Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S sleep 18136 1720.542606 1 120 0.000000 1.861436 0.000000 0.000000 0 0 /autogroup-88 Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#8, 2094.764 MHz Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 596536 Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 44 Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.732287 Sep 26 16:19:23 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4073723.495257 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4062319.714808 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[8]:/autogroup-97 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 4593.975593 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -42657.161899 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 1 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 1 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 1 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:24 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 1 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 1 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:24 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4063022.902253 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 37425.099463 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 3418.947253 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 524288 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 1 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 1 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[8]:/ Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 37425.680069 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -9825.457423 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 2 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 1 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 1 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:25 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[8]: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[8]: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/8 57 4848.679416 24 120 0.000000 8.390407 0.000000 0.000000 1 0 / Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/8 58 294.984840 1045 0 0.000000 8.171878 0.000000 0.000000 1 0 / Sep 26 16:19:25 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/8 59 37337.295350 4239 120 0.000000 34.120795 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/8:0H 61 1087.882207 4 100 0.000000 0.025467 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/8:1 199 28090.442663 74729 120 0.000000 3214.327854 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_1 255 9.339340 2 100 0.000000 0.077962 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_2 256 54.073559 25 120 0.000000 0.326999 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_3 258 382.793004 25 120 0.000000 329.040825 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_3 259 33.544027 2 100 0.000000 0.065026 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_4 261 49.684476 2 100 0.000000 0.063496 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_5 262 1073.938425 25 120 0.000000 0.212361 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_5 263 65.820990 2 100 0.000000 0.070832 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/8:1H 602 37382.829542 4995 100 0.000000 88.338069 0.000000 0.000000 1 0 / Sep 26 16:19:26 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
kernel: S sshd 1170 4.837008 1562 120 0.000000 169.060804 0.000000 0.000000 1 0 /autogroup-82 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/8:0 16502 37417.355223 17972 120 0.000000 736.593744 0.000000 0.000000 1 0 / Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S check 17398 2399.799014 12 120 0.000000 11.227790 0.000000 0.000000 1 0 /autogroup-97 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S fio 17500 4600.956077 114092 120 0.000000 5112.025227 0.000000 0.000000 1 0 /autogroup-97 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#9, 2094.764 MHz Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 488738 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 47 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.735486 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4076864.051385 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4067648.381148 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[9]: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[9]: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/9 64 11257.310749 23 120 0.000000 0.958028 0.000000 0.000000 1 0 / Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/9 65 423.647355 1036 0 0.000000 408.707205 0.000000 0.000000 1 0 / Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/9 66 37264.930792 46148 
120 0.000000 715.086587 0.000000 0.000000 1 0 / Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/9:0H 68 119.895523 4 100 0.000000 0.126897 0.000000 0.000000 1 0 / Sep 26 16:19:27 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/9:1H 629 37426.609328 5174 100 0.000000 92.125618 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kipmi0 984 2060.199700 501 139 0.000000 16.306316 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S restraintd 1317 66.368987 589 120 0.000000 177.546963 0.000000 0.000000 1 0 /autogroup-86 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gmain 1319 11.096049 2 120 0.000000 0.144634 0.000000 0.000000 1 0 /autogroup-86 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/9:3 15780 37426.845924 14222 120 0.000000 545.569618 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/9:1 17644 36927.148703 10 120 0.000000 0.228703 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u130:1 17792 37270.952972 836 120 0.000000 34.321759 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/9:0 17930 37417.766838 9 120 0.000000 0.128710 0.000000 0.000000 1 0 / Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#10, 2094.764 MHz Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 4 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 303660 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 22 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.597909 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 18051 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4078313.411358 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4076454.429359 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[10]:/autogroup-161 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 148016.856352 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : 100765.718860 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 1 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 1 Sep 26 16:19:28 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:29 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 1048576 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 1024 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 1023 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 1023 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 1010 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 1010 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4077281.271153 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 181875.025546 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 148804.811359 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 1048576 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 1024 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 1024 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 1024 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[10]:/ Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 33137.405759 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 33149.405759 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 33137.405759 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -14101.731733 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 3 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 3 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 3145728 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 3072 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 3072 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 1024 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 83 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.removed.load_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[10]: Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[10]: Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:29 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/10 69 10636.453134 23 120 0.000000 1.296872 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R migration/10 70 479.647344 1000 0 0.000000 411.490746 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R ksoftirqd/10 71 33137.405759 791 120 0.000000 469.767654 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/10:0H 73 171.819492 4 100 0.000000 0.085865 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/10:1H 634 33135.608412 2546 100 0.000000 495.482162 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/10:4 15794 31137.147800 598 120 0.000000 18.535898 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R kworker/10:1 16797 33137.405759 7336 120 0.000000 311.492133 0.000000 0.000000 1 0 / Sep 26 16:19:30 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S run_plugins 17995 34.438306 16 120 0.000000 31.338088 0.000000 0.000000 1 0 /autogroup-161 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: >R 20_sysinfo 18051 150467.046258 29 120 0.000000 150938.237710 0.000000 0.000000 1 0 /autogroup-161 Sep 
26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#11, 2094.764 MHz Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 616355 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 29 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.739456 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4080998.212269 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4080134.294816 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 936763 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[11]:/ Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 33296.658153 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -13954.479339 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:31 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[11]: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.rt_nr_migratory : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[11]: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kthreadd 2 33279.364520 971 120 0.000000 58.419624 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/11 74 11210.943144 23 120 0.000000 1.611749 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/11 75 535.647332 1041 0 0.000000 414.904742 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/11 76 33284.384153 148554 120 0.000000 631.152121 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/11:0H 78 16.482557 4 100 0.000000 0.040522 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/11:1H 280 33284.325968 2154 100 0.000000 402.821040 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I zswap-shrink 300 456.739928 2 100 0.000000 0.079774 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u131:0 301 573.004562 2 100 0.000000 0.075910 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/11:4 464 32383.413347 2131 120 0.000000 240.869700 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S 10_bash_login 1324 31.936781 23 120 0.000000 20.028849 0.000000 0.000000 1 0 /autogroup-86 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/11:0 16638 33284.658220 6185 120 0.000000 268.685976 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u130:0 18094 33296.658153 1278 120 0.000000 18.050011 0.000000 0.000000 1 0 / Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#12, 2094.764 MHz Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 136473 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 13 Sep 26 
16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.741089 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4082465.134491 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4080017.299696 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[12]:/ Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 31812.070784 Sep 26 16:19:32 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -15439.066708 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[12]: Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[12]: Sep 
26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:33 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/12 79 10260.253827 23 120 0.000000 1.380850 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/12 80 591.647319 1045 0 0.000000 417.452202 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/12 81 31158.950905 7445 120 0.000000 351.015667 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/12:0H 83 499.831325 4 100 0.000000 0.059629 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kdevtmpfs 179 31408.409697 1153 120 0.000000 40.566343 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kcompactd1 189 31800.369621 8086 120 0.000000 24.627984 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u133:0 303 27.143546 2 100 0.000000 0.075924 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/12:1H 465 31644.039195 2157 100 0.000000 35.494688 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/12:0 15797 31135.288796 119 120 0.000000 4.998441 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/12:1 16978 31800.074628 328 120 0.000000 12.827422 0.000000 0.000000 1 0 / Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#13, 2094.764 MHz Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 175566 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967287 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.743248 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4084625.443515 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4082452.127891 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[13]:/autogroup-30 Sep 26 16:19:34 
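The huge .nr_uninterruptible figure just above for cpu#13 (4294967287, with similar values on several later CPUs) is not a runaway count: the per-CPU counter is a signed quantity printed through an unsigned 32-bit field, and 4294967287 is -9 in two's complement. A per-CPU value can legitimately go negative because a task may enter uninterruptible sleep on one CPU and be woken, and accounted, on another; only the sum over all CPUs is meaningful. Converting the printed numbers back:

    def to_signed32(v: int) -> int:
        # Reinterpret an unsigned 32-bit print-out as the signed counter it wraps.
        return v - (1 << 32) if v >= (1 << 31) else v

    print(to_signed32(4294967287))   # -9  (cpu#13 above)
    print(to_signed32(29))           # 29  (cpu#11, already non-negative)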
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 546.861279 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -46704.276213 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 27 Sep 26 16:19:34 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 28 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 9 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 1 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 2 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4082975.853239 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 28085.243706 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 550.289616 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 1048576 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 12 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 12 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 15 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[13]:/ Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 28086.069428 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -19165.068064 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.nr_spread_over : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 33 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 31 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 10 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:35 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[13]: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[13]: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/13 84 5800.813973 23 120 0.000000 0.927116 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/13 85 647.647310 1052 0 0.000000 420.769709 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/13 86 27558.499295 9573 120 0.000000 187.771777 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I 
kworker/13:0H 88 165.719128 4 100 0.000000 0.060958 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/13:1H 635 27985.602356 4677 100 0.000000 92.190577 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-journal 860 554.076680 8313 120 0.000000 1482.784859 0.000000 0.000000 1 0 /autogroup-30 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/13:0 17113 27562.975567 227 120 0.000000 4.722110 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/13:2 17831 27571.179435 4 120 0.000000 0.100494 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/13:1 18024 28077.960541 142 120 0.000000 0.942634 0.000000 0.000000 1 0 / Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#14, 2094.764 MHz Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 343978 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967287 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.742639 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:36 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4086538.731612 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4084370.476962 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[14]: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[14]: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/14 89 9294.614655 23 
120 0.000000 0.959474 0.000000 0.000000 1 0 / Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/14 90 703.647298 1061 0 0.000000 423.754493 0.000000 0.000000 1 0 / Sep 26 16:19:37 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/14 91 33285.626097 45792 120 0.000000 228.774771 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/14:0H 93 164.004137 4 100 0.000000 0.069887 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/14:1H 637 33286.382790 2744 100 0.000000 32.849011 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/14:4 15738 28913.867190 153 120 0.000000 70.332311 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/14:0 16581 33270.141483 4604 120 0.000000 245.265638 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u130:2 17252 33298.855979 1613 120 0.000000 92.622432 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#15, 2094.764 MHz Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 171724 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 33 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.742639 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4087931.760896 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4086632.362618 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[15]: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[15]: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:38 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/15 94 7094.242205 23 120 0.000000 1.033446 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/15 95 759.647288 1051 0 0.000000 426.785645 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/15 96 28930.232174 24253 120 0.000000 126.317052 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/15:0H 98 972.754760 4 100 0.000000 0.048539 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_0 252 55.289292 34 120 0.000000 48.334097 0.000000 0.000000 1 0 / Sep 26 16:19:38 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S scsi_eh_1 254 55.759541 34 120 0.000000 0.491406 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-log/dm-0 769 954.918618 2 100 0.000000 0.078211 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/15:1H 823 29441.244555 5965 100 0.000000 108.443786 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-udevd 878 1421.788563 5495 120 0.000000 1091.655502 0.000000 0.000000 1 0 /autogroup-39 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S agetty 1107 3.014358 9 120 0.000000 37.492246 0.000000 0.000000 1 0 /autogroup-78 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I dio/dm-0 15550 26291.864370 2 100 0.000000 0.112238 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/15:1 16584 29441.243956 4086 120 0.000000 181.108823 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/15:2 16982 27906.763386 182 120 0.000000 8.118751 0.000000 0.000000 1 0 / Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#16, 2094.764 MHz Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 185572 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967263 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.747915 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4089294.266801 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4085556.824909 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[16]:/autogroup-55 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:39 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 53.834426 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:39 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -47197.303066 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4085992.359021 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 27072.134092 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 55.488519 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 2 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[16]:/ Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 27072.345972 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -20178.791520 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:40 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[16]: Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:40 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[16]: Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u128:1 13 26750.075403 81 120 0.000000 619.036564 0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/16 99 11420.916172 23 120 0.000000 1.153605 0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/16 100 815.647275 1035 0 0.000000 429.864122 0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/16 101 27060.140554 1894 120 0.000000 18.565797 0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/16:0H 103 109.859080 4 100 0.000000 0.033941 
0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kswapd0 232 109.875783 4 120 0.000000 0.051261 0.000000 0.000000 0 0 / Sep 26 16:19:41 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/16:1H 275 27060.875406 5058 100 0.000000 68.000123 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S xfsaild/dm-0 771 27060.993785 76694 120 0.000000 493.975006 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-cil/sda2 994 10609.021520 2 100 0.000000 0.068824 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S NetworkManager 1057 55.485820 4498 120 0.000000 848.889721 0.000000 0.000000 0 0 /autogroup-55 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/16:0 15982 26744.718224 1059 120 0.000000 44.474485 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/16:1 17549 27060.889245 602 120 0.000000 8.898312 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u129:3 17573 27061.179969 4534 120 0.000000 31.537021 0.000000 0.000000 0 0 / Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#17, 2094.764 MHz Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 305308 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967275 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.750709 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4092086.304751 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4086661.425717 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[17]:/ Sep 26 16:19:42 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 31491.371842 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -15759.765650 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:43 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[17]: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[17]: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rcu_preempt 18 31492.363973 913316 120 0.000000 9743.902659 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/17 104 13173.452743 23 120 0.000000 0.911571 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/17 105 0.000000 1040 0 0.000000 434.641344 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/17 106 31464.085066 3775 120 0.000000 35.435216 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/17:0H 108 1983.187456 4 100 0.000000 0.055370 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S watchdogd 230 0.000000 2 49 0.000000 0.004378 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I dm_bufio_cache 276 
1982.530971 2 100 0.000000 0.063633 0.000000 0.000000 0 0 / Sep 26 16:19:43 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/17:1H 413 31480.286462 4280 100 0.000000 176.268246 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S auditd 1016 0.762449 16 116 0.000000 1.064881 0.000000 0.000000 0 0 /autogroup-49 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S chronyd 1064 13.895355 186 120 0.000000 29.754554 0.000000 0.000000 0 0 /autogroup-59 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S sshd 1079 31.351138 27 120 0.000000 32.855515 0.000000 0.000000 0 0 /autogroup-66 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/17:9 15696 31464.087657 19070 120 0.000000 628.550512 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/17:8 16189 29215.233045 470 120 0.000000 16.135896 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/u129:0 17925 31464.091194 1820 120 0.000000 12.264111 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#18, 2094.764 MHz Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 202656 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967279 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.752880 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4094258.345076 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4090118.110480 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[18]: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[18]: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time 
sum-exec sum-sleep Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/18 109 14729.835937 23 120 0.000000 0.915899 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/18 110 921.659745 1039 0 0.000000 436.547897 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/18 111 33555.301618 8469 120 0.000000 58.857956 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/18:0H 113 2082.466277 4 100 0.000000 0.024649 0.000000 0.000000 0 0 / Sep 26 16:19:44 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I tpm_dev_wq 226 10.955882 2 100 0.000000 0.001246 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I edac-poller 229 22.959204 2 100 0.000000 0.003328 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I scsi_tmf_0 253 1895.125977 2 100 0.000000 0.047054 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/18:1H 578 33932.545923 2336 100 0.000000 21.737924 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-blockgc/dm- 767 2701.065305 2 100 0.000000 0.064572 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S xfsaild/sda2 995 13836.167788 2 120 0.000000 0.025081 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I rpciod 1026 14693.436938 2 100 0.000000 0.015248 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xprtiod 1027 14705.445427 2 100 0.000000 0.009671 0.000000 0.000000 0 0 / Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-resolve 1035 15.289561 323 120 0.000000 231.648839 0.000000 0.000000 0 0 /autogroup-51 Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-userdbd 1038 4.436895 43 120 0.000000 66.045717 0.000000 0.000000 0 0 /autogroup-53 Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-logind 1065 10.680742 1840 120 0.000000 235.703490 0.000000 0.000000 0 0 /autogroup-60 Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S sshd 1168 43.764250 41 120 0.000000 45.718673 0.000000 0.000000 0 0 /autogroup-82 Sep 26 16:19:45 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/18:0 16196 32740.017763 43 120 0.000000 1.818868 0.000000 0.000000 0 0 / Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/18:1 16197 33878.379286 238 120 0.000000 6.341384 0.000000 0.000000 0 0 / Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#19, 2094.764 MHz Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 315072 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967290 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.754839 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
kernel: .curr->pid : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4096219.379380 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4093616.631401 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[19]: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[19]: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:46 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/19 114 12086.672291 23 120 0.000000 0.976657 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/19 115 977.659754 1041 0 0.000000 438.724037 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/19 116 32491.309106 16293 120 0.000000 98.202997 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/19:0H 118 11.567869 4 100 0.000000 0.044569 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I inet_frag_wq 180 10.964120 2 100 0.000000 0.012702 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I ata_sff 227 22.970858 2 100 0.000000 0.005814 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/19:1H 583 35417.798088 3152 100 0.000000 33.962905 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kdmflush/253:0 745 1263.193662 2 100 0.000000 0.070634 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfsalloc 762 1275.237823 2 100 0.000000 0.046267 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-inodegc/dm- 768 1287.326264 2 100 0.000000 0.090429 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
kernel: I kworker/19:2 972 31866.718027 3534 120 0.000000 362.969197 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/19:5 15596 35417.830679 15865 120 0.000000 533.712217 0.000000 0.000000 0 0 / Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#20, 2094.764 MHz Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 265879 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967279 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.756193 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4097569.398077 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4095366.834522 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[20]:/ Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 32234.326013 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -15016.811479 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:47 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:48 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[20]: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[20]: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:48 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/20 119 11029.893754 23 120 0.000000 0.915910 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/20 120 1033.659860 1045 0 0.000000 439.532630 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/20 121 32222.272145 8676 120 0.000000 73.776042 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/20:0H 123 114.543309 4 100 0.000000 0.034327 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kcompactd0 188 32223.841171 8117 120 0.000000 27.130270 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I cryptd 192 22.953317 2 100 0.000000 0.000929 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I md 228 42.959297 2 100 0.000000 0.005310 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/20:1H 576 32222.326447 3069 100 0.000000 26.626785 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S auditd 1015 0.550700 189 116 0.000000 20.388258 0.000000 0.000000 0 0 /autogroup-49 Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/20:0 14842 32222.373357 8974 120 0.000000 278.274925 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/20:7 15666 25939.655255 2590 120 0.000000 197.531302 0.000000 0.000000 0 0 / Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-userwor 18099 17.397361 5 120 0.000000 6.445946 0.000000 0.000000 0 0 /autogroup-53 Sep 26 16:19:49 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#21, 2094.764 MHz Sep 
26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 152148 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967268 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.756717 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4099803.129042 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4099044.751236 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[21]: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[21]: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/21 124 11311.912849 23 120 0.000000 0.907285 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/21 125 1089.659871 1041 0 0.000000 441.523030 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/21 126 29520.923584 2513 120 0.000000 231.619626 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/21:0H 128 115.047296 4 100 0.000000 0.034039 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kintegrityd 193 10.953879 2 100 0.000000 0.002461 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S xenbus_probe 251 23.023080 2 120 0.000000 0.067888 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I mld 279 59.198755 2 100 0.000000 0.072444 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kstrp 286 
71.264718 2 100 0.000000 0.070768 0.000000 0.000000 0 0 / Sep 26 16:19:50 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/21:1H 575 31766.487889 1041 100 0.000000 7.018669 0.000000 0.000000 0 0 / Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs_mru_cache 763 1286.344964 2 100 0.000000 0.046977 0.000000 0.000000 0 0 / Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/21:2 15599 27867.629249 1612 120 0.000000 121.404852 0.000000 0.000000 0 0 / Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/21:1 17546 31766.612937 150 120 0.000000 1.909645 0.000000 0.000000 0 0 / Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#22, 2094.764 MHz Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 115212 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 2 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.759803 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4101180.435082 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4096902.246938 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[22]:/ Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 29863.973189 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -17387.164303 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 
26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[22]: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[22]: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:51 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/22 129 14093.655290 23 120 0.000000 1.152932 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/22 130 1145.659978 1047 0 0.000000 443.336625 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/22 131 29845.536580 11843 120 0.000000 158.081012 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/22:0H 133 1269.298833 4 100 0.000000 0.030982 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kblockd 194 10.952267 2 100 0.000000 0.000849 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I acpi_thermal_pm 249 1267.714139 2 100 0.000000 0.068967 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/22:1H 269 29851.973275 2704 100 0.000000 26.534457 0.000000 0.000000 0 0 / Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1080 11.050799 1 120 0.000000 0.099384 0.000000 0.000000 0 0 /autogroup-65 Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1082 23.081626 1 120 0.000000 0.030836 0.000000 0.000000 0 0 /autogroup-65 Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1083 35.118086 2 120 0.000000 0.036469 0.000000 0.000000 0 0 /autogroup-65 Sep 26 16:19:52 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1084 
47.149381 1 120 0.000000 0.031304 0.000000 0.000000 0 0 /autogroup-65 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/22:4 15615 28935.202792 1215 120 0.000000 81.364385 0.000000 0.000000 0 0 / Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/22:5 16145 29852.002108 756 120 0.000000 24.160560 0.000000 0.000000 0 0 / Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#23, 2094.764 MHz Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 351136 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967287 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.761767 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4103145.484277 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4090145.144733 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[23]: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[23]: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:53 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/23 134 5856.664823 23 120 0.000000 1.231496 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/23 135 1201.659986 1048 0 0.000000 444.285670 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/23 136 26934.797807 25283 120 0.000000 441.302687 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/23:0H 138 1672.842689 4 100 0.000000 
0.030236 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I blkcg_punt_bio 195 10.953364 2 100 0.000000 0.001946 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/23:1H 568 27017.326782 2614 100 0.000000 23.619061 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gmain 1058 48.195677 1015 120 0.000000 47.243261 0.000000 0.000000 0 0 /autogroup-55 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/23:7 15719 27017.352678 2219 120 0.000000 79.339749 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/23:2 16166 22474.352242 180 120 0.000000 9.649934 0.000000 0.000000 0 0 / Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S sleep 18148 1718.410524 1 120 0.000000 1.875581 0.000000 0.000000 0 0 /autogroup-88 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#24, 2094.764 MHz Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 114484 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967286 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.763111 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4104488.469377 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4102653.644979 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[24]: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[24]: Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:54 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- 
Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/24 139 14770.719178 23 120 0.000000 0.995653 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/24 140 1257.660088 1051 0 0.000000 449.461927 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/24 141 31965.767405 9940 120 0.000000 223.415982 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/24:0H 143 23.962683 4 100 0.000000 0.054999 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S oom_reaper 186 23.910305 2 120 0.000000 0.003355 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I writeback 187 35.910299 2 100 0.000000 0.000000 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/24:1H 560 31707.922768 2669 100 0.000000 1212.771950 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I xfs-cil/dm-0 770 769.345313 2 100 0.000000 0.074695 0.000000 0.000000 1 0 / Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1078 0.106549 75 120 0.000000 7.800414 0.000000 0.000000 1 0 /autogroup-65 Sep 26 16:19:55 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/24:7 15830 31282.085859 135 120 0.000000 4.628106 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/24:0 16587 31997.785784 4280 120 0.000000 185.781028 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#25, 2094.764 MHz Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 128428 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967294 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.614336 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4105815.561218 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4104621.942839 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[25]: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[25]: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:56 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/25 144 8083.535850 23 120 0.000000 0.896102 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/25 145 1313.660080 1044 0 0.000000 452.906421 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/25 146 27651.333828 7907 120 0.000000 53.026281 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/25:0H 148 182.171411 4 100 0.000000 0.035402 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/25:1H 592 27908.324455 4352 100 0.000000 75.239958 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I tls-strp 11713 12276.584025 2 100 0.000000 0.032622 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvme-wq 14337 24692.153126 2 100 0.000000 0.020436 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/25:4 15807 26325.770088 127 120 0.000000 5.438334 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/25:0 16602 28161.829525 859 120 0.000000 30.304619 0.000000 0.000000 1 0 / Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#26, 2094.764 MHz Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 488415 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 27 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.761716 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4106652.642050 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4105696.526029 Sep 26 16:19:56 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[26]:/ Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 33294.667061 Sep 26 16:19:57 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -13956.470431 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[26]: Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:57 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[26]: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/26 149 11395.391637 23 120 0.000000 0.933000 0.000000 0.000000 1 0 / Sep 26 16:19:58 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/26 150 1369.660068 1042 0 0.000000 454.340873 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/26 151 33257.971234 359 120 0.000000 5.981661 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/26:0H 153 268.982727 4 100 0.000000 0.034842 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/26:1 218 32464.326691 227056 120 0.000000 1440.504734 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/26:1H 561 33264.434399 2045 100 0.000000 37.260495 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gmain 1189 375.624644 29 120 0.000000 2.780471 0.000000 0.000000 1 0 /autogroup-83 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S pool-restraintd 1900 52.467672 536 120 0.000000 14.873964 0.000000 0.000000 1 0 /autogroup-86 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvme-reset-wq 14338 27503.858738 2 100 0.000000 0.018556 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/26:0 16605 33283.825907 2523 120 0.000000 78.656932 0.000000 0.000000 1 0 / Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S check 17408 2064.541209 3 120 0.000000 10.731184 0.000000 0.000000 1 0 /autogroup-97 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#27, 2094.764 MHz Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 291437 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 27 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.719300 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4108639.589499 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4107119.557490 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[27]:/ Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 26863.730455 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -20387.407037 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:19:58 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:19:58 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[27]: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[27]: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:19:59 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/27 154 6384.446499 23 120 0.000000 0.942379 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/27 155 1425.660060 1036 0 0.000000 458.000987 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/27 156 26851.734405 48409 120 0.000000 218.829654 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/27:0H 158 318.074449 4 100 0.000000 0.037855 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/27:1H 891 26851.733362 964 100 0.000000 15.130148 
0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I ipmi-msghandler 976 1262.210486 2 100 0.000000 0.023548 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I i40e 985 1359.121064 2 100 0.000000 0.043368 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I nvme-delete-wq 14339 21918.648812 2 100 0.000000 0.018913 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/27:2 15757 23717.426295 20 120 0.000000 1.143997 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/27:8 15818 26851.859230 1961 120 0.000000 72.908277 0.000000 0.000000 1 0 / Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#28, 2094.764 MHz Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 274980 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 35 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.768532 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4109909.554679 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4103887.513493 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[28]:/autogroup-50 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 2304.928849 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -44946.208643 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: 
.removed.util_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->exec_start : 4104303.681041 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->vruntime : 33345.137468 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->sum_exec_runtime : 2306.275326 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->load.weight : 2 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.load_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.util_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .se->avg.runnable_avg : 0 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cfs_rq[28]:/ Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .exec_clock : 0.000000 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .MIN_vruntime : 0.000001 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .min_vruntime : 33345.137468 Sep 26 16:20:00 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_vruntime : 0.000001 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread : 0.000000 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .spread0 : -13906.000024 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_spread_over : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .h_nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .idle_h_nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .load_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .runnable_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .util_est_enqueued : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.load_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.util_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .removed.runnable_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg_contrib : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .tg_load_avg : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttled : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .throttle_count : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[28]: Sep 26 16:20:01 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Processes still around after SIGKILL. Ignoring. Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[28]: Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/28 159 7228.132004 23 120 0.000000 0.915204 0.000000 0.000000 1 0 / Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/28 160 1481.660050 1062 0 0.000000 460.631119 0.000000 0.000000 1 0 / Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/28 161 31856.335940 1575 120 0.000000 13.833611 0.000000 0.000000 1 0 / Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/28:0H 163 1.231459 4 100 0.000000 0.044636 0.000000 0.000000 1 0 / Sep 26 16:20:01 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/28:1H 451 33246.529656 5229 100 0.000000 94.681842 0.000000 0.000000 1 0 / Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S systemd-oomd 1023 2305.771441 16272 120 0.000000 3489.846171 0.000000 0.000000 1 0 /autogroup-50 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/28:3 15760 30037.774414 525 120 0.000000 18.567067 0.000000 0.000000 1 0 / Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/28:0 16612 33333.138841 35081 120 0.000000 1607.964633 0.000000 0.000000 1 0 / Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#29, 2094.764 MHz Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 336415 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967295 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.769416 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4112383.645901 Sep 26 16:20:02 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4108721.871583 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[29]: Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[29]: Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:20:02 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/29 164 7538.939646 23 120 0.000000 0.926811 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/29 165 1537.660039 1056 0 0.000000 464.511578 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/29 166 34971.679627 1377 120 0.000000 196.002962 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/29:0H 168 29.972737 4 100 0.000000 0.023010 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/29:1H 453 35225.746160 4336 100 0.000000 81.236750 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/29:2 593 35464.321454 14883 120 0.000000 773.144863 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S gssproxy 1081 11.035763 1 120 0.000000 0.084348 0.000000 0.000000 1 0 /autogroup-65 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/29:0 16610 32410.862023 26917 120 0.000000 1176.683596 0.000000 0.000000 1 0 / Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#30, 2094.764 MHz Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 299462 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 4294967279 Sep 26 16:20:03 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.772352 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4113730.612838 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4109469.859872 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[30]: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[30]: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:20:03 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/30 169 6532.246630 23 120 0.000000 0.942946 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/30 170 1593.660029 1073 0 0.000000 466.811453 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/30 171 33836.327459 9660 120 0.000000 51.695176 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/30:0H 173 3.101784 4 100 0.000000 0.022538 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/30:1 222 33864.377885 24735 120 0.000000 1140.985507 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/30:1H 454 33863.545399 3458 100 0.000000 84.920912 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S restraintd 1171 405.311315 9912 120 0.000000 1672.103927 0.000000 0.000000 1 0 /autogroup-83 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/30:0 16611 30356.031133 10834 120 0.000000 372.329439 0.000000 0.000000 1 0 / Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: cpu#31, 2094.764 MHz Sep 26 16:20:04 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_running : 0 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_switches : 345091 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .nr_uninterruptible : 57 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .next_balance : 4298.609216 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .curr->pid : 0 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock : 4114273.673768 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .clock_task : 4110585.170643 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .avg_idle : 1000000 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .max_idle_balance_cost : 500000 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rt_rq[31]: Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_running : 0 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_nr_migratory : 0 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_throttled : 0 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_time : 0.000000 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .rt_runtime : 950.000000 Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dl_rq[31]: Sep 26 16:20:04 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_running : 0 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_nr_migratory : 0 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->bw : 996147 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: .dl_bw->total_bw : 0 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: runnable tasks: Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S task PID tree-key switches prio wait-time sum-exec sum-sleep Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ------------------------------------------------------------------------------------------------------------- Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S cpuhp/31 174 11327.205491 23 120 0.000000 0.915892 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S migration/31 175 1649.660022 1060 0 0.000000 469.098054 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S ksoftirqd/31 176 40406.516660 17510 120 0.000000 103.725161 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/31:0H 178 206.496084 4 100 0.000000 0.037516 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S kswapd1 233 135.030105 3 120 0.000000 0.028204 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/31:1H 574 40406.516844 4239 100 0.000000 69.359622 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/31:2 859 40568.908917 20059 120 0.000000 790.026351 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: I kworker/31:4 
15775 36165.810265 3652 120 0.000000 142.180797 0.000000 0.000000 1 0 / Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: S 10_bash_login 17965 413.392235 24 120 0.000000 20.047333 0.000000 0.000000 1 0 /autogroup-83 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Showing busy workqueues and worker pools: Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: workqueue events: flags=0x0 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pwq 20: cpus=10 node=1 flags=0x0 nice=0 active=4/256 refcnt=5 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pending: drm_fb_helper_damage_work, psi_avgs_work, psi_avgs_work, psi_avgs_work Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: workqueue events_power_efficient: flags=0x80 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pwq 20: cpus=10 node=1 flags=0x0 nice=0 active=1/256 refcnt=2 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pending: fb_flashcursor Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: workqueue rcu_gp: flags=0x8 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256 refcnt=2 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: in-flight: 17977:wait_rcu_exp_gp Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: workqueue mm_percpu_wq: flags=0x8 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pwq 20: cpus=10 node=1 flags=0x0 nice=0 active=1/256 refcnt=2 Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pending: vmstat_update Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com systemd[1]: systemd-hostnamed.service: Failed with result 'timeout'. Sep 26 16:20:05 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: pool 6: cpus=3 node=0 flags=0x0 nice=0 hung=40s workers=3 idle: 15639 16071 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: INFO: rcu_preempt self-detected stall on CPU Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu: 10-....: (8072 ticks this GP) idle=0bf4/1/0x4000000000000000 softirq=49378/49378 fqs=46293 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: (t=185331 jiffies g=1731589 q=41886 ncpus=32) Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: NMI backtrace for cpu 10 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: CPU: 10 PID: 16797 Comm: kworker/10:1 Tainted: G I 6.0.0-rc7 #1 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Hardware name: HP ProLiant ML350e Gen8, BIOS J02 08/02/2014 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Workqueue: events drm_fb_helper_damage_work Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Call Trace: Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: <IRQ> Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: dump_stack_lvl+0x44/0x5c Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nmi_cpu_backtrace.cold+0x30/0x76 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? lapic_can_unplug_cpu+0x70/0x70 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: nmi_trigger_cpumask_backtrace+0x111/0x130 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_dump_cpu_stacks+0xf8/0x130 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: rcu_sched_clock_irq.cold+0x60/0x2e1 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? perf_event_task_tick+0x64/0x3f0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? nohz_balance_exit_idle+0x16/0xc0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: update_process_times+0x62/0x90 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tick_sched_handle+0x22/0x60 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tick_sched_timer+0x6f/0x80 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ?
tick_sched_do_timer+0xa0/0xa0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __hrtimer_run_queues+0x11a/0x2a0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: hrtimer_interrupt+0xfe/0x220 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: __sysvec_apic_timer_interrupt+0x7f/0x170 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysvec_apic_timer_interrupt+0x99/0xc0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: </IRQ> Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: <TASK> Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: asm_sysvec_apic_timer_interrupt+0x16/0x20 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RIP: 0010:drm_modeset_lock+0x7/0xd0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: Code: eb be 0f 0b eb b8 0f b6 47 39 e9 67 ff ff ff 0f 0b e9 60 ff ff ff 0f 0b e9 7b ff ff ff 66 0f 1f 44 00 00 0f 1f 44 00 00 41 56 <41> 55 41 54 55 48 89 fd 53 48 85 f6 74 71 48 83 7e 18 00 0f b6 46 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RSP: 0018:ffffb8edc552bd40 EFLAGS: 00000246 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RAX: 0000000000000000 RBX: ffff9c0c904ec580 RCX: 0000000000000020 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RDX: ffff9c0b53e24a80 RSI: ffffb8edc552bda8 RDI: ffff9c0b44f3cfe0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: RBP: ffff9c0b44f3cfb8 R08: ffff9c0b48073b30 R09: ffff9c0c886ed200 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R10: ffff9c0b4494c948 R11: ffff9c0c7ffd5d80 R12: 0000000000000000 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: R13: 0000000000000000 R14: ffff9c0b44f3d3a8 R15: ffff9c0b44f3d3c8 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: drm_atomic_get_crtc_state+0x5c/0x130 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: drm_atomic_get_plane_state+0x11c/0x170 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: drm_atomic_helper_dirtyfb+0x14d/0x240 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: drm_fb_helper_damage_work+0x183/0x2c0 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: process_one_work+0x1c7/0x380 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: worker_thread+0x4d/0x380 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ? rescuer_thread+0x380/0x380 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: kthread+0xe9/0x110 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ?
kthread_complete_and_exit+0x20/0x20 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: ret_from_fork+0x22/0x30 Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: </TASK> Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: timekeeping watchdog on CPU10: Marking clocksource 'tsc' as unstable because the skew is too large: Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: 'hpet' wd_nsec: 0 wd_now: b5626e38 wd_last: 26904094 mask: ffffffff Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: 'tsc' cs_nsec: 167349626136 cs_now: c2dd0a92339 cs_last: bdc2feed5ad mask: ffffffffffffffff Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: 'tsc' is current clocksource. Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: tsc: Marking TSC unstable due to clocksource watchdog Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com audit: BPF prog-id=0 op=UNLOAD Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'. Sep 26 16:20:07 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sched_clock: Marking unstable (4108745852026, 8467528273)<-(4119728261446, -2512332394) Sep 26 16:20:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: Checking clocksource tsc synchronization from CPU 12 to CPUs 0,10,13,24. Sep 26 16:20:08 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: clocksource: Switched to clocksource hpet Sep 26 16:20:13 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com logger[18177]: List of t Tasks: Stop Sep 26 16:20:15 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com logger[18179]: List of w Tasks: Start Sep 26 16:20:16 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com kernel: sysrq: Show Blocked State Sep 26 16:20:20 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com logger[18191]: List of w Tasks: Stop Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Dumping slabinfo - Start - 2022-09-26 16:20:22 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: slabinfo - version: 2.1 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: # name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail> Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: nvmet-bvec 32 32 256 32 2 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: hfsplus_attr_cache 0 0 3840 8 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: hfsplus_icache 0 0 960 34 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: hfs_inode_cache 0 0 832 39 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fat_inode_cache 0 0 792 41 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fat_cache 0 0 40 102 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: isofs_inode_cache 0 0 688 47 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: rpc_inode_cache 46 46 704 46 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: rpc_buffers 16 16 2048 16 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: rpc_tasks 32 32 256 32 2 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kvm_async_pf 0 0 136 30 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kvm_vcpu 0 0 10368 3 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kvm_mmu_page_header 0 0 184 44 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: pte_list_desc 0 0 128 32 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: x86_emulator 0 0 2656 12 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ioat_sed_ent 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ioat 512 832 128 32 1 : tunables 0 0 0 : slabdata 26 26 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: zspage 73 73 56 73 1 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: zs_handle 512 512 8 512 1 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fuse_request 0 0 152 53 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fuse_inode 0 0 832 39 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_dqtrx 0 0 528 31 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_dquot 0 0 480 34 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_iul_item 0 0 176 46 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_attri_item 0 0 208 39 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_attrd_item 0 0 176 46 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_bui_item 312 312 208 39 2 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_bud_item 368 368 176 46 2 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_cui_item 444 444 432 37 4 : tunables 0 0 0 : slabdata 12 12 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_cud_item 552 552 176 46 2 : tunables 0 0 0 : slabdata 12 12 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_rui_item 0 0 688 47 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_rud_item 0 0 176 46 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_icr 704 
704 184 44 2 : tunables 0 0 0 : slabdata 16 16 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_ili 9918 9920 200 40 2 : tunables 0 0 0 : slabdata 248 248 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_inode 24852 24864 1024 32 8 : tunables 0 0 0 : slabdata 777 777 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_efi_item 2701 2775 432 37 4 : tunables 0 0 0 : slabdata 75 75 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_efd_item 1184 1184 440 37 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_buf_item 2130 2370 272 30 2 : tunables 0 0 0 : slabdata 79 79 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_trans 1120 1120 232 35 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_ifork 2550 2550 48 85 1 : tunables 0 0 0 : slabdata 30 30 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_da_state 1088 1088 480 34 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_attr_intent 1472 1472 88 46 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_extfree_intent 3264 3264 40 102 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_bmap_intent 512 512 64 64 1 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_refc_intent 1536 1536 32 128 1 : tunables 0 0 0 : slabdata 12 12 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_rmap_intent 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_defer_pending 2336 2336 56 73 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_refcbt_cur 480 480 200 40 2 : tunables 0 0 0 : slabdata 12 12 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_rmapbt_cur 0 0 248 33 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_bmbt_cur 1568 1568 328 49 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_inobt_cur 1280 1280 200 40 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_bnobt_cur 1184 1184 216 37 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_log_ticket 2720 2720 48 85 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfs_buf 8061 8358 384 42 4 : tunables 0 0 0 : slabdata 199 199 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-176 42 42 192 42 2 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-256 2601 2601 320 51 4 : tunables 0 0 0 : slabdata 51 51 0 Sep 26 16:20:22 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_prelim_ref 0 0 88 46 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_delayed_extent_op 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_delayed_data_ref 0 0 112 36 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_delayed_tree_ref 0 0 104 39 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_delayed_ref_head 0 0 144 28 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_inode_defrag 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_delayed_node 0 0 320 51 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_ordered_extent 0 0 440 37 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_extent_map 0 0 144 28 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_extent_state 0 0 80 51 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-352 42 42 384 42 4 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_extent_buffer 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-232 32 32 256 32 2 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_free_space_bitmap 0 0 4096 8 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_free_space 0 0 104 39 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_path 0 0 112 36 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_trans_handle 0 0 104 39 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btrfs_inode 0 0 1200 27 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fsverity_info 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fscrypt_info 0 0 136 30 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: zswap_entry 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: MPTCPv6 0 0 2048 16 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip6-frags 0 0 184 44 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fib6_nodes 288 288 128 32 1 : tunables 0 0 0 : slabdata 9 9 0 Sep 26 16:20:22 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip6_dst_cache 960 960 256 32 2 : tunables 0 0 0 : slabdata 30 30 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip6_mrt_cache 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: PINGv6 0 0 1216 26 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: RAWv6 234 234 1216 26 8 : tunables 0 0 0 : slabdata 9 9 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UDPLITEv6 0 0 1344 24 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UDPv6 288 288 1344 24 8 : tunables 0 0 0 : slabdata 12 12 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: tw_sock_TCPv6 924 924 248 33 2 : tunables 0 0 0 : slabdata 28 28 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: request_sock_TCPv6 1352 1352 312 52 4 : tunables 0 0 0 : slabdata 26 26 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: TCPv6 351 351 2432 13 8 : tunables 0 0 0 : slabdata 27 27 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dm_snap_pending_exception 0 0 128 32 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dm_exception 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kcopyd_job 0 0 3240 10 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: io 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dm_uevent 0 0 2888 11 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: uhci_urb_priv 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-136 1344 1344 192 42 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: scsi_sense_cache 4416 4576 128 32 1 : tunables 0 0 0 : slabdata 143 143 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sd_ext_cdb 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sgpool-128 176 176 4096 8 8 : tunables 0 0 0 : slabdata 22 22 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sgpool-64 448 448 2048 16 8 : tunables 0 0 0 : slabdata 28 28 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sgpool-32 960 960 1024 32 8 : tunables 0 0 0 : slabdata 30 30 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sgpool-16 1024 1024 512 32 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sgpool-8 1024 1024 256 32 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: btree_node 0 0 128 32 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
SlabCacheInfo[18198]: io_kiocb 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bfq_io_cq 1120 1120 232 35 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bfq_queue 1288 1288 576 28 4 : tunables 0 0 0 : slabdata 46 46 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: mqueue_inode_cache 34 34 960 34 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_transaction_s 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_inode 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_journal_handle 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_journal_head 0 0 120 34 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_revoke_table_s 0 0 16 256 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: jbd2_revoke_record_s 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_fc_dentry_update 0 0 96 42 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_inode_cache 0 0 1192 27 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_free_data 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_allocation_context 0 0 136 30 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_prealloc_space 0 0 104 39 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_system_zone 0 0 40 102 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_io_end_vec 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_io_end 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_bio_post_read_ctx 170 170 48 85 1 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_pending_reservation 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ext4_extent_status 0 0 40 102 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: mbcache 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kioctx 672 672 576 28 4 : tunables 0 0 0 : slabdata 24 24 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: aio_kiocb 1344 1344 192 42 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: userfaultfd_ctx_cache 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fanotify_perm_event 0 0 72 56 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fanotify_path_event 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fanotify_fid_event 0 0 72 56 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fsnotify_mark 0 0 72 56 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dnotify_mark 0 0 80 51 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dnotify_struct 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dio 0 0 640 51 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fasync_cache 0 0 48 85 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: audit_tree_mark 0 0 80 51 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: pid_namespace 0 0 136 30 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: posix_timers_cache 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UNIX-STREAM 1024 1024 1024 32 8 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UNIX 992 992 1024 32 8 : tunables 0 0 0 : slabdata 31 31 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip4-frags 0 0 200 40 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip_mrt_cache 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UDP-Lite 0 0 1152 28 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: MPTCP 0 0 1920 17 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: request_sock_subflow 0 0 384 42 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: tcp_bind_bucket 864 864 128 32 1 : tunables 0 0 0 : slabdata 27 27 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: inet_peer_cache 84 84 192 42 2 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfrm_dst_cache 0 0 320 51 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: xfrm_state 0 0 768 42 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip_fib_trie 680 680 48 85 1 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
SlabCacheInfo[18198]: ip_fib_alias 584 584 56 73 1 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ip_dst_cache 883 1134 192 42 2 : tunables 0 0 0 : slabdata 27 27 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: PING 0 0 1024 32 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: RAW 128 128 1024 32 8 : tunables 0 0 0 : slabdata 4 4 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: UDP 560 560 1152 28 8 : tunables 0 0 0 : slabdata 20 20 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: tw_sock_TCP 132 132 248 33 2 : tunables 0 0 0 : slabdata 4 4 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: request_sock_TCP 52 52 312 52 4 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: TCP 112 112 2240 14 8 : tunables 0 0 0 : slabdata 8 8 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: hugetlbfs_inode_cache 98 98 664 49 8 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dquot 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-264 2193 2193 320 51 4 : tunables 0 0 0 : slabdata 43 43 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ep_head 8192 8192 16 256 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: eventpoll_pwq 2112 2112 64 64 1 : tunables 0 0 0 : slabdata 33 33 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: eventpoll_epi 1440 1440 128 32 1 : tunables 0 0 0 : slabdata 45 45 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: inotify_inode_mark 1479 1479 80 51 1 : tunables 0 0 0 : slabdata 29 29 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dax_cache 78 78 832 39 8 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio_crypt_ctx 204 204 40 102 1 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: request_queue_srcu 24 24 1336 24 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: request_queue 1088 1088 944 34 8 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: blkdev_ioc 1472 1472 88 46 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio-200 1952 1952 256 32 2 : tunables 0 0 0 : slabdata 61 61 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: biovec-max 464 520 4096 8 8 : tunables 0 0 0 : slabdata 65 65 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: biovec-128 80 80 2048 16 8 : tunables 0 0 0 : slabdata 5 5 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: biovec-64 992 992 1024 32 8 : tunables 0 0 0 : slabdata 31 31 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com 
SlabCacheInfo[18198]: biovec-16 1024 1024 256 32 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bio_integrity_payload 84 84 192 42 2 : tunables 0 0 0 : slabdata 2 2 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: khugepaged_mm_slot 0 0 112 36 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ksm_mm_slot 0 0 48 85 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ksm_stable_node 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ksm_rmap_item 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: user_namespace 52 52 624 52 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: uid_cache 252 252 192 42 2 : tunables 0 0 0 : slabdata 6 6 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: iommu_iova 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dmaengine-unmap-256 15 15 2112 15 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dmaengine-unmap-128 30 30 1088 30 8 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dmaengine-unmap-16 42 42 192 42 2 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dmaengine-unmap-2 64 64 64 64 1 : tunables 0 0 0 : slabdata 1 1 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: audit_buffer 3060 3060 24 170 1 : tunables 0 0 0 : slabdata 18 18 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sock_inode_cache 1560 1560 832 39 8 : tunables 0 0 0 : slabdata 40 40 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: skbuff_ext_cache 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: skbuff_fclone_cache 1024 1024 512 32 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: skbuff_head_cache 2273 2304 256 32 2 : tunables 0 0 0 : slabdata 72 72 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: configfs_dir_cache 1058 1058 88 46 1 : tunables 0 0 0 : slabdata 23 23 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: file_lock_cache 1184 1184 216 37 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: file_lock_ctx 2044 2044 56 73 1 : tunables 0 0 0 : slabdata 28 28 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fsnotify_mark_connector 3584 3584 32 128 1 : tunables 0 0 0 : slabdata 28 28 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: buffer_head 1053 1053 104 39 1 : tunables 0 0 0 : slabdata 27 27 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: task_delay_info 0 0 128 32 1 : tunables 0 0 0 : 
slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: taskstats 1209 1209 416 39 4 : tunables 0 0 0 : slabdata 31 31 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: proc_dir_entry 2352 2352 192 42 2 : tunables 0 0 0 : slabdata 56 56 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: pde_opener 3264 3264 40 102 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: proc_inode_cache 4001 4048 712 46 8 : tunables 0 0 0 : slabdata 88 88 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: seq_file 1088 1088 120 34 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sigqueue 1632 1632 80 51 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: bdev_cache 640 640 1600 20 8 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: shmem_inode_cache 3157 3157 784 41 8 : tunables 0 0 0 : slabdata 77 77 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kernfs_iattrs_cache 1932 1932 88 46 1 : tunables 0 0 0 : slabdata 42 42 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kernfs_node_cache 54656 54656 128 32 1 : tunables 0 0 0 : slabdata 1708 1708 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: mnt_cache 1989 1989 320 51 4 : tunables 0 0 0 : slabdata 39 39 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: filp 4305 5248 256 32 2 : tunables 0 0 0 : slabdata 164 164 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: inode_cache 35123 35343 640 51 8 : tunables 0 0 0 : slabdata 693 693 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dentry 70691 71064 192 42 2 : tunables 0 0 0 : slabdata 1692 1692 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: names_cache 256 256 4096 8 8 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: net_namespace 42 42 4416 7 8 : tunables 0 0 0 : slabdata 6 6 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: hashtab_node 17850 17850 24 170 1 : tunables 0 0 0 : slabdata 105 105 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ebitmap_node 46016 46016 64 64 1 : tunables 0 0 0 : slabdata 719 719 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avtab_extended_perms 0 0 40 102 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avtab_node 91800 91800 24 170 1 : tunables 0 0 0 : slabdata 540 540 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avc_xperms_data 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avc_xperms_decision_node 0 0 48 85 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avc_xperms_node 0 0 56 73 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: avc_node 3416 3416 72 56 1 : tunables 0 0 0 : slabdata 61 61 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: iint_cache 0 0 120 34 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: lsm_inode_cache 64400 64400 72 56 1 : tunables 0 0 0 : slabdata 1150 1150 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: lsm_file_cache 8448 8448 16 256 1 : tunables 0 0 0 : slabdata 33 33 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: key_jar 960 960 256 32 2 : tunables 0 0 0 : slabdata 30 30 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: uts_namespace 259 259 432 37 4 : tunables 0 0 0 : slabdata 7 7 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: nsproxy 840 840 72 56 1 : tunables 0 0 0 : slabdata 15 15 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: vm_area_struct 8717 9000 200 40 2 : tunables 0 0 0 : slabdata 225 225 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: mm_struct 1080 1080 1088 30 8 : tunables 0 0 0 : slabdata 36 36 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: fs_cache 2176 2176 64 64 1 : tunables 0 0 0 : slabdata 34 34 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: files_cache 1564 1564 704 46 8 : tunables 0 0 0 : slabdata 34 34 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: signal_cache 1702 1736 1152 28 8 : tunables 0 0 0 : slabdata 62 62 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: sighand_cache 1298 1320 2112 15 8 : tunables 0 0 0 : slabdata 88 88 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: task_struct 663 702 10432 3 8 : tunables 0 0 0 : slabdata 234 234 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: cred_jar 2772 2772 192 42 2 : tunables 0 0 0 : slabdata 66 66 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: anon_vma_chain 7786 7936 64 64 1 : tunables 0 0 0 : slabdata 124 124 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: anon_vma 6474 6474 104 39 1 : tunables 0 0 0 : slabdata 166 166 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: pid 1952 1952 128 32 1 : tunables 0 0 0 : slabdata 61 61 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Acpi-Operand 3696 3696 72 56 1 : tunables 0 0 0 : slabdata 66 66 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Acpi-ParseExt 561 561 80 51 1 : tunables 0 0 0 : slabdata 11 11 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Acpi-Parse 803 803 56 73 1 : tunables 0 0 0 : slabdata 11 11 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Acpi-State 561 561 80 51 1 : tunables 0 0 0 : slabdata 11 11 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Acpi-Namespace 1530 1530 48 85 1 : tunables 0 0 0 : slabdata 18 18 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: shared_policy_node 0 0 48 85 1 : 
tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: numa_policy 90 90 272 30 2 : tunables 0 0 0 : slabdata 3 3 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: perf_event 806 806 1216 26 8 : tunables 0 0 0 : slabdata 31 31 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: trace_event_file 3174 3174 88 46 1 : tunables 0 0 0 : slabdata 69 69 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: ftrace_event_field 6970 6970 48 85 1 : tunables 0 0 0 : slabdata 82 82 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: pool_workqueue 552 576 256 32 2 : tunables 0 0 0 : slabdata 18 18 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: radix_tree_node 12573 12600 584 28 4 : tunables 0 0 0 : slabdata 450 450 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: task_group 1632 1632 640 51 8 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: vmap_area 6450 6976 64 64 1 : tunables 0 0 0 : slabdata 109 109 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-8k 0 0 8192 4 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-4k 0 0 4096 8 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-2k 0 0 2048 16 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-1k 0 0 1024 32 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-512 0 0 512 32 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-256 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-192 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-128 0 0 128 32 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-96 0 0 96 42 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-64 0 0 64 64 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-32 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-16 0 0 16 256 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: dma-kmalloc-8 0 0 8 512 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-8k 0 0 8192 4 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-4k 0 0 4096 8 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-2k 0 0 2048 16 8 
: tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-1k 0 0 1024 32 8 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-512 0 0 512 32 4 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-256 0 0 256 32 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-192 0 0 192 42 2 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-128 352 352 128 32 1 : tunables 0 0 0 : slabdata 11 11 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-96 1260 1260 96 42 1 : tunables 0 0 0 : slabdata 30 30 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-64 4487 4864 64 64 1 : tunables 0 0 0 : slabdata 76 76 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-32 0 0 32 128 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-16 0 0 16 256 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-rcl-8 0 0 8 512 1 : tunables 0 0 0 : slabdata 0 0 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-8k 12 12 8192 4 8 : tunables 0 0 0 : slabdata 3 3 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-4k 328 328 4096 8 8 : tunables 0 0 0 : slabdata 41 41 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-2k 528 528 2048 16 8 : tunables 0 0 0 : slabdata 33 33 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-1k 1312 1312 1024 32 8 : tunables 0 0 0 : slabdata 41 41 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-512 1024 1024 512 32 4 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-256 544 544 256 32 2 : tunables 0 0 0 : slabdata 17 17 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-192 1344 1344 192 42 2 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-128 544 544 128 32 1 : tunables 0 0 0 : slabdata 17 17 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-96 1344 1344 96 42 1 : tunables 0 0 0 : slabdata 32 32 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-64 2112 2112 64 64 1 : tunables 0 0 0 : slabdata 33 33 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-32 4736 4736 32 128 1 : tunables 0 0 0 : slabdata 37 37 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-16 5888 5888 16 256 1 : tunables 0 0 0 : slabdata 23 23 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-cg-8 12800 12800 8 512 1 : tunables 0 0 0 : slabdata 25 25 0 Sep 26 16:20:22 
hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-8k 420 468 8192 4 8 : tunables 0 0 0 : slabdata 117 117 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-4k 993 1056 4096 8 8 : tunables 0 0 0 : slabdata 132 132 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-2k 1593 1776 2048 16 8 : tunables 0 0 0 : slabdata 111 111 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-1k 7150 8128 1024 32 8 : tunables 0 0 0 : slabdata 254 254 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-512 10482 11712 512 32 4 : tunables 0 0 0 : slabdata 366 366 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-256 7091 8160 256 32 2 : tunables 0 0 0 : slabdata 255 255 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-192 3618 3822 192 42 2 : tunables 0 0 0 : slabdata 91 91 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-128 3923 4032 128 32 1 : tunables 0 0 0 : slabdata 126 126 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-96 5372 5502 96 42 1 : tunables 0 0 0 : slabdata 131 131 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-64 43278 43712 64 64 1 : tunables 0 0 0 : slabdata 683 683 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-32 43177 43520 32 128 1 : tunables 0 0 0 : slabdata 340 340 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-16 48380 48384 16 256 1 : tunables 0 0 0 : slabdata 189 189 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmalloc-8 32065 35840 8 512 1 : tunables 0 0 0 : slabdata 70 70 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmem_cache_node 1397 1408 64 64 1 : tunables 0 0 0 : slabdata 22 22 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: kmem_cache 640 640 256 32 2 : tunables 0 0 0 : slabdata 20 20 0 Sep 26 16:20:22 hpe-ml350egen8-01.hpe2.lab.eng.bos.redhat.com SlabCacheInfo[18198]: Dumping slabinfo - Stop - 2022-09-26 16:20:22