Linux version 5.14.0-256.2009_766119311.el9.x86_64+debug [ 0.000000] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. [ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. [ 0.000000] signal: max sigframe size: 1776 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009c7ff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009c800-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bddabfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000083fffefff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 2094.797 MHz processor [ 0.001566] last_pfn = 0x83ffff max_arch_pfn = 0x400000000 [ 0.002396] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.008432] last_pfn = 0xbddac max_arch_pfn = 0x400000000 [ 0.014931] found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] [ 0.014988] Using GB pages for direct mapping [ 0.016784] RAMDISK: [mem 0x33a57000-0x35d23fff] [ 0.016798] ACPI: Early table checksum verification disabled [ 0.016810] ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP ) [ 0.016827] ACPI: XSDT 0x00000000BDDAED00 0000E4 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.016849] ACPI: FACP 0x00000000BDDAEE40 0000F4 (v03 HP ProLiant 00000002 ? 0000162E) [ 0.016868] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20211217/tbfadt-669) [ 0.016882] ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20211217/tbfadt-669) [ 0.016896] ACPI: DSDT 0x00000000BDDAEF40 0026DC (v01 HP DSDT 00000001 INTL 200 [ 0.016911] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016924] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016937] ACPI: SPCR 0x00000000BDDAC180 000050 (v01 HP SPCRRBSU 00000001 ? 0000162E) [ 0.016952] ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000) [ 0.016966] ACPI: HPET 0x00000000BDDAC240 000038 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.016980] ACPI: FFFF 0x00000000BDDAC280 000064 (v02 HP ProLiant 00000002 ? 0000162E) [ 0.016994] ACPI: SPMI 0x00000000BDDAC300 000040 (v05 HP ProLiant 00000001 ? 0000162E) [ 0.017008] ACPI: ERST 0x00000000BDDAC340 000230 (v01 HP ProLiant 00000001 ? 
0000162E) [ 0.017022] ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000) [ 0.017036] ACPI: SRAT 0x00000000BDDAC800 000750 (v01 HP Proliant 00000001 ? 0000162E) [ 0.017050] ACPI: FFFF 0x00000000BDDACF80 000176 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017064] ACPI: BERT 0x00000000BDDAD100 000030 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017078] ACPI: HEST 0x00000000BDDAD140 0000BC (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017092] ACPI: DMAR 0x00000000BDDAD200 00051C (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017106] ACPI: FFFF 0x00000000BDDAEC40 000030 (v01 HP ProLiant 00000001 00000000) [ 0.017120] ACPI: PCCT 0x00000000BDDAEC80 00006E (v01 HP 1 PH 0000504D) [ 0.017135] ACPI: SSDT 0x00000000BDDB1640 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503) [ 0.017149] ACPI: SSDT 0x00000000BDDB1E40 000103 (v03 HP CRSPCI0 00000002 HP 00000001) [ 0.017163] ACPI: SSDT 0x00000000BDDB1F80 000098 (v03 HP CRSPCI1 00000002 HP 00000001) [ 0.017177] ACPI: SSDT 0x00000000BDDB2040 00038A (v02 HP riser0 00000002 INTL 20030228) [ 0.017191] ACPI: SSDT 0x00000000BDDB2400 000385 (v03 HP riser1a 00000002 INTL 20030228) [ 0.017206] ACPI: SSDT 0x00000000BDDB27C0 000BB9 (v01 HP pcc 00000001 INTL 20120503) [ 0.017220] ACPI: SSDT 0x00000000BDDB3380 000377 (v01 HP pmab 00000001 INTL 20120503) [ 0.017234] ACPI: SSDT 0x00000000BDDB3700 005524 (v01 HP pcc2 00000001 INTL 20120503) [ 0.017248] ACPI: SSDT 0x00000000BDDB8C40 003AEC (v01 INTEL PPM RCM 00000001 INTL 20061109) [ 0.017261] ACPI: Reserving FACP table memory at [mem 0xbddaee40-0xbddaef33] [ 0.017266] ACPI: Reserving DSDT table memory at [mem 0xbddaef40-0xbddb161b] [ 0.017271] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017275] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017279] ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf] [ 0.017284] ACPI: Reserory at [mem 0xbddac200-0xbddac23b] [ 0.017288] ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277] [ 0.017293] ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3] [ 0.017297] ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f] [ 0.017302] ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f] [ 0.017306] ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9] [ 0.017311] ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f] [ 0.017315] ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5] [ 0.017319] ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f] [ 0.017324] ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb] [ 0.017328] ACPI: Reserving DMAR table memory at [mem 0xbddad200-0xbddad71b] [ 0.017333] ACPI: Reserving FFFF table memory at [mem 0xbddaec40-0xbddaec6f] [ 0.017337] ACPI: Reserving PCCT table memory at [mem 0xbddaec80-0xbddaeced] [ 0.017342] ACPI: Reserving SSDT table memory at [mem 0xbddb1640-0xbddb1e29] [ 0.017346] ACPI: Reserving SSDT table memory at [mem 0xbddb1e40-0xbddb1f42] ] ACPI: Reserving SSDT table memory at [mem 0xbddb1f80-0xbddb2017] [ 0.017356] ACPI: Reserving SSDT table memory at [mem 0xbddb2040-0xbddb23c9] [ 0.0173 ACPI: Reserving SSDT table memory at [mem 0xbddb2400-0xbddb2784] [ 0.017365] ACPI: Reserving SSDT table memory at [mem 0xbddb27c0-0xbddb3378] [ 0.017369] ACPI: Reserving SSDT table memory at [mem 0xbddb3380-0xbddb36f6] [ 0.017374] ACPI: Reserving SSDT table memory at [mem 0xbddb3700-0xbddb8c23] [ 0.017379] ACPI: Reserving SSDT table memory at [mem 0xbddb8c40-0xbddbc72b] [ 0.017472] SRAT: PXM 0 -> 
APIC 0x00 -> Node 0 [ 0.017479] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.017483] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.017487] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 0.017490] SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.017494] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.017498] SRAT: PXM 0 -> APIC 0x06 -> Node 0 [ 0.017501] SRAT: PXM 0 -> APIC 0x07 -> Node 0 [ 0.017505] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.017509] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.017513] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 0.017516] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.017520] SRAT: PXM 1 -> APIC 0x20 -> Node 1 [ 0.017524] SRAT: PXM 1 -> APIC 0x21 -> Node 1 [ 0.017528] SRAT: PXM 1 -> APIC 0x22 -> Node 1 [ 0.017532] SRAT: PXM 1 -> APIC 0x23 -> Node 1 [ 0.017536] SRAT: PXM 1 -> APIC 0x24 -> Node 1 [ 0.017539] SRAT: PXM 1 -> APIC 0x25 -> Node 1 [ 0.017543] SRAT: PXM 1 -> APIC 0x26 -> Node 1 [ 0.017547] SRAT: PXM 1 -> APIC 0x27 -> Node 1 [ 0.017551] SRAT: PXM 1 -> APIC 0x28 -> Node 1 [ 0.017554] SRAT: PXM 1 -> APIC 0x29 -> Node 1 [ 0.017558] SRAT: PXM 1 -> APIC 0x2a -> Node 1 [ 0.017562] SRAT: PXM 1 -> APIC 0x2b -> Node 1 [ 0.017573] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x43fffffff] [ 0.017580] ACPI: SRAT: Node 1 PXM 1 [mem 0x440000000-0x83fffffff] [ 0.017617] NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff] [ 0.017661] NODE_DATA(1) allocated [mem 0x83ffd4000-0x83fffefff] [ 0.018165] Reserving 256MB of memory at 2768MB for crashkernel (System RAM: 32733MB) [ 0.116247] Zone ranges: [ 0.116262] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.116277] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.116284] Normal [mem 0x0000000100000000-0x000000083fffefff] [ 0.116292] Device empty [ 0.116298] Movable zone start for each node [ 0.116304] Early memory node ranges [ 0.116307] node 0: [mem 0x0000000000001000-0x000000000009bfff] [ 0.116312] node 0: [mem 0x0000000000100000-0x00000000bddabfff] [ 0.116318] node 0: [mem 0x0000000100000000-0x000000043fffffff] [ 0.116324] node 1: [mem 0x0000000440000000-0x000000083fffefff] [ 0.116333] Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff] [ 0.116350] Initmem setup node 1 [mem 0x0000000440000000-0x000000083fffefff] [ 0.116373] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.116616] On node 0, zone DMA: 100 pages in unavailable ranges [ 0.164485] On node 0, zone Normal: 8788 pages in unavailable ranges [ 0.166549] On node 1, zone Normal: 1 pages in unavailable ranges [ 0.899428] kasan: KernelAddressSanitizer initialized [ 0.899755] ACPI: PM-Timer IO Port: 0x908 [ 0.899795] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.899856] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23 [ 0.899871] IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47 [ 0.899882] IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71 [ 0.899892] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) [ 0.899901] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.899918] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.899924] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.899939] ACPI: SPCR: SPCR table version 1 [ 0.899943] ACPI: SPCR: Unexpected SPCR Access Width. 
Defaulting to byte size [ 0.899949] ACPI: SPCR: console: uart,mmio,0x0,9600 [ 0.899957] TSC deadline timer available [ 0.899963] smpboot: Allowing 64 CPUs, 40 hotplug CPUs [ 0.900051] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.900062] PM: hibernation: Registered nosave memory: [mem 0x0009c000-0x0009cfff] [ 0.900066] PM: hibernation: Registered nosave memory: [mem 0x0009d000-0x0009ffff] [ 0.900071] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.900075] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.900084] PM: hibernation: Registered nosave memory: [mem 0xbddac000-0xbddddfff] [ 0.900089] PM: hibernation: Registered nosave memory: [mem 0xbddde000-0xcfffffff] [ 0.900093] PM: hibernation: Registered nosave memory: [mem 0xd0000000-0xfebfffff] [ 0.900097] PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfee0ffff] [ 0.900101] PM: hibernation: Registered nosave memory: [mem 0xfee10000-0xff7fffff] [ 0.900105] PM: hibernation: Registered nosave memory: [mem 0xff800000-0xffffffff] [ 0xd0000000-0xfebfffff] available for PCI devices [ 0.900124] Booting paravirtualized kernel on bare hardware [ 0.900141] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [ 0.921521] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:2 [ 1.005021] percpu: Embedded 515 pages/cpu s2072576 r8192 d28672 u4194304 [ 1.005816] Fallback order for Node 0: 0 1 [ 1.005844] Fallback order for Node 1: 1 0 [ 1.005886] Built 2 zonelists, mobility grouping on. Total pages: 8248628 [ 1.005891] Policy zone: Normal [ 1.005913] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 [ 1.006118] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug", will be passed to user space. [ 1.007815] mem auto-init: stack:off, heap alloc:off, heap free:off [ 1.007822] Stack Depot early init allocating hash table with memblock_alloc, 8388608 bytes [ 1.009718] software IO TLB: area n111] Memory: 3008124K/33518872K available (38920K kernel code, 13007K rwdata, 14984K rodata, 5300K init, 42020K bss, 5601796K reserved, 0K cma-reserved) [ 3.359152] random: get_random_u64 called from kmem_cache_open+0x22/0x380 with crng_init=0 [ 3.380658] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=64, Nodes=2 [ 3.380666] kmemleak: Kernel memory leak detector disabled [ 3.384979] Kernel/User page tables isolation: enabled [ 3.385584] ftrace: allocating 45745 entries in 179 pages [ 3.427225] ftrace: allocated 179 pages with 5 groups [ 3.433135] Dynamic Preempt: voluntary [ 3.437638] Running RCU self tests [ 3.439117] rcu: Preemptible hierarchical RCU implementation. [ 3.439121] rcu: RCU lockdep checking is enabled. [ 3.439125] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=64. [ 3.439131] rcu: RCU callback double-/use-after-free debug is enabled. [ 3.439135] Trampoline variant of Tasks RCU enabled. [ 3.439138] Rude variant of Tasks RCU enabled. [ 3.439141] Tracing variant of Tasks RCU enabled. [ 3.439146] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
[ 3.439150] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=64 [ 3.461411] NR_IRQ 1752, preallocated irqs: 16 [ 3.462348] rcu: srcu_init: Setting srcu_struct sizes based on contention. [ 3.462446] random: crng init done (trusting CPU's manufacturer) [ 3.469247] Console: colour VGA+ 80x25 [ 8.724945] printk: console [ttyS1] enabled [ 8.726381] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar [ 8.729011] ... MAX_LOCKDEP_SUBCLASSES: 8 [ 8.730406] ... MAX_LOCK_DEPTH: 48 [ 8.731851] ... MAX_LOCKDEP_KEYS: 8192 [ 8.733369] ... CLASSHASH_SIZE: 4096 [ 8.734911] ... MAX_LOCKDEP_ENTRIES: 65536 [ 8.736422] ... MAX_LOCKDEP_CHAINS: 131072 [ 8.738052] ... CHAINHASH_SIZE: 65536 [ 8.739602] memory used by lock dependency info: 11641 kB [ 8.741472] memory used for stack traces: 4224 kB [ 8.743151] per task-struct memory footprint: 2688 bytes [ 8.745445] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl [ 8.749322] ACPI: Core revision 20211217 [ 8.752267] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns [ : Switch to symmetric I/O mode setup [ 9.157395] DMAR: Host address width 46 [ 9.158792] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0 [ 9.160934] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.163752] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1 [ 9.165630] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.168373] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff [ 9.170551] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff [ 9.172713] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff [ 9.174932] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff [ 9.177082] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff [ 9.179241] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff [ 9.181395] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff [ 9.183574] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff [ 9.185761] DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes [ 9.190330] DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] [ 9.190330] BIOS ve Product Version: [ 9.695151] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff [ 9.697325] DMAR: ATSR flags: 0x0 [ 9.698514] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0 [ 9.700714] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.702891] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.705135] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000 [ 9.706994] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 9.706998] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. [ 9.713238] DMAR-IR: Enabled IRQ remapping in xapic mode [ 9.715168] x2apic: IRQ remapping doesn't support X2APIC mode [ 9.717145] Switched APIC routing to physical flat. [ 9.721109] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 9.727667] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e31fd5adcb, max_idle_ns: 440795240010 ns [ 9.731307] Calibrating delay loop (skipped), value calculated using timer frequency.. 4189.59 BogoMIPS (lpj=2094797) [ 9.732304] pid_max: default: 65536 minimum: 512 [ 9.735033] LSM: Security Framework initializing [ 9.735439] Yama: becoming mindful. [ 9.736407] SELinux: Initializing. 
[ 9.737830] LF active [ 9.753029] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, vmalloc hugepage) [ 9.761001] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc hugepage) [ 9.762687] Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.763616] Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.769330] CPU0: Thermal monitoring enabled (TM1) [ 9.770430] process: using mwait in idle threads [ 9.771318] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 [ 9.772300] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4 [ 9.773328] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 9.774303] Spectre V2 : Mitigation: Retpolines [ 9.775300] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 9.776300] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT [ 9.777300] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 9.778308] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 9.779301] Spectre V2 : User space: Mitigation: STIBP via prctl [ 9.780302] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [ 9.781316] MDS: Mitigation: Clear CPU buffers [ 9.782300] MMIO Stale Data: Unknown: No mitigations [ 9.822386] Freeing SMP alternatives memory: 32K [ 9.826499] smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1170 [ 9.827344] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (family: 0x6, model: 0x3e, stepping: 0x4) [ 9.832382] cblist_init_generic: Setting adjustable number of callback queues. [ 9.833301] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.834936] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.835958] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.836543] Running RCU-tasks wait API self tests [ 9.945726] Performance Events: PEBS fmt1+, IvyBridge events, 16-deep LBR, full-width counters, Broken BIOS detected, complain to your hardware vendor. [ 9.946308] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330) [ 9.947307] Intel PMU driver. [ 9.948330] ... version: 3 [ 9.949308] ... bit width: 48 [ 9.950305] ... generic registers: 4 [ 9.951306] ... value mask: 0000ffffffffffff [ 9.952301] ... max period: 00007fffffffffff [ 9.953301] ... fixed-purpose events: 3 [ 9.954301] ... event mask: 000000070000000f [ 9.956948] rcu: Hierarchical SRCU implementation. [ 9.957303] rcu: Max phase no-delay instances is 400. [ 9.961378] Callback from call_rcu_tasks_trace() invoked. [ 9.977452] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 9.991738] smp: Bringing up secondary CPUs ... [ 9.994523] x86: Booting SMP configuration: [ 9.995305] .... node #0, CPUs: #1 [ 10.005515] #2 [ 10.011702] #3 [ 10.017745] #4 [ 10.023735] #5 [ 10.029633] [ 10.030236] .... node #1, CPUs: #6 [ 6.235899] smpboot: CPU 6 Converting physical 0 to logical die 1 [ 10.105076] Callback from call_rcu_tasks_rude() invoked. [ 10.108472] #7 [ 10.118103] #8 [ 10.126953] #9 [ 10.136110] #10 [ 10.145073] #11 [ 10.154059] [ 10.154312] .... node #0, CPUs: #12 [ 10.159558] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. [ 10.164075] #13 [ 10.172427] #14 [ 10.180484] #15 [ 10.188127] #16 [ 10.195294] #17 [ 10.201660] Callback from call_rcu_tasks() invoked. [ 10.204191] [ 10.204311] .... 
node #1, CPUs: #18 [ 10.211786] #19 [ 10.218847] #20 [ 10.225819] #21 [ 10.232887] #22 [ 10.239671] #23 [ 10.243726] smp: Brought up 2 nodes, 24 CPUs [ 10.244310] smpboot: Max logical packages: 6 [ 10.245315] smpboot: Total of 24 processors activated (101429.37 BogoMIPS) [ 10.866730] node 0 deferred pages initialised in 605ms [ 10.868840] pgdatinit0 (143) used greatest stack depth: 28672 bytes left [ 11.235483] node 1 deferred pages initialised in 973ms [ 11.249484] devtmpfs: initialized [ 11.253242] x86/mm: Memory block size: 128MB [ 11.442082] DMA-API: preallocated 65536 debug entries [ 11.442310] DMA-API: debugging enabled by kernel config [ 11.443312] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [ 11.446237] futex hash table entries: 16384 (order: 9, 2097152 bytes, vmalloc) [ 11.455173] prandom: seed boundary self test passed [ 11.457902] prandom: 100 self tests passed [ 11.466824] prandom32: self test passed (less than 6 bits correlated) [ 11.467327] pinctrl core: initialized pinctrl subsystem [ 11.470604] [ 11.471249] ************************************************************* [ 11.471311] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.472307] ** ** [ 11.473308] ** IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL ** [ 11.474328] ** ** [ 11.475309] ** This means that this kernel is built to expose internal ** [ 11.476306] ** IOMMU data structures, which may compromise security on ** [ 11.477308] ** your system. ** [ 11.478307] ** ** [ 11.479307] ** If you see this message and you are not debugging the ** [ 11.480307] ** kernel, report this immediately to your vendor! ** [ 11.481308] ** ** [ 11.482306] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.483308] ************************************************************* [ 11.484432] PM: RTC time: 00:32:33, date: 2023-02-03 [ 11.501652] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 11.508973] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations [ 11.509603] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [ 11.510599] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [ 11.511747] audit: initializing netlink subsys (disabled) [ 11.512936] audit: type=2000 audit(1675384344.815:1): state=initialized audit_enabled=0 res=1 [ 11.515294] thermal_sys: Registered thermal governor 'fair_share' [ 11.515294] thermal_sys: Registered thermal governor 'step_wise' [ 11.517321] thermal_sys: Registered thermal governor 'user_space' [ 11.519947] cpuidle: using governor menu [ 11.524623] Detected 1 PCC Subspaces [ 11.526313] Registering PCC driver as Mailbox controller [ 11.529387] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB [ 11.531369] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 11.534316] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 11.539311] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000) [ 11.543330] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820 [ 11.649448] PCI: Using configuration type 1 for base access [ 11.651358] PCI: HP ProLiant DL360 detected, enabling pci=bfsort. [ 11.653516] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 11.675794] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 11.926172] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
[ 11.930561] HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB [ 11.932337] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 11.935307] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 11.945921] cryptd: max_cpu_qlen set to 1000 [ 11.960104] ACPI: Added _OSI(Module Device) [ 11.961319] ACPI: Added _OSI(Processor Device) [ 11.963316] ACPI: Added _OSI(3.0 _SCP Extensions) [ 11.965314] ACPI: Added _OSI(Processor Aggregator Device) [ 11.966342] ACPI: Added _OSI(Linux-Dell-Video) [ 11.968337] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 11.970337] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 12.431440] ACPI: 10 ACPI AML tables successfully acquired and loaded [ 12.830116] ACPI: Interpreter enabled [ 12.831585] ACPI: PM: (supports S0 S4 S5) [ 12.833333] ACPI: Using IOAPIC for interrupt routing [ 12.835929] HEST: Table parsing has been initialized. [ 12.837309] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 12.841307] PCI: Using E820 reservations for host bridge windows [ 13.207307] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f]) [ 13.210361] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 13.216838] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 13.220311] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 13.233592] PCI host bridge to bus 0000:00 [ 13.234318] pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window] [ 13.237324] pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window] [ 13.239314] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 13.242315] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 13.244313] pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window] [ 13.246313] pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window] [ 13.248314] pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window] [ 13.251315] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 13.253319] pci_bus 0000:00: root bus resource [bus 00-1f] [ 13.255739] pci 0000:00:00.0: [8086:0e00] type 00 class 0x060000 [ 13.258723] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold [ 13.261676] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400 [ 13.264644] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold [ 13.270672] pci 0000:00:01.1: [8086:0e03] type 01 class 0x060400 [ 13.273655] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold [ 13.278523] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400 [ 13.281711] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold [ 13.290216] pci 0000:00:02.1: [8086:0e05] type 01 class 0x060400 [ 13.292865] pci 0000:00:02.1: PME# supported from D0 D3hot D3cold [ 13.301229] pci 0000:00:02.2: [8086:0e06] type 01 class 0x060400 [ 13.303864] pci 0000:00:02.2: PME# supported from D0 D3hot D3cold [ 13.310887] pci 0000:00:02.3: [8086:0e07] type 01 class 0x060400 [ 13.313855] pci 0000:00:02.3: PME# supported from D0 D3hot D3cold [ 13.321853] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400 [ 13.324448] pci 0000:00:03.0: enabling Extended Tags [ 13.326762] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold [ 13.335306] pci 0000:00:03.1: [8086:0e09] type 01 class 0x060400 [ 13.338267] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold [ 13.345877] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400 [ 13.348850] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold [ 
13.356901] pci 0000:00:03.3: [8086:0e0b] type 01 class 0x060400 [ 13.358870] pci 0000:00:03.3: PME# supported from D0 D3hot D3cold [ 13.366776] pci 0000:00:04.0: [8086:0e20] type 00 class 0x088000 [ 13.369386] pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit] [ 13.373458] pci 0000:00:04.1: [8086:0e21] type 00 class 0x088000 [ 13.376376] pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit] [ 13.380107] pci 0000:00:04.2: [8086:0e22] type 00 class 0x088000 [ 13.382355] pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit] [ 13.386615] pci 0000:00:04.3: [8086:0e23] type 00 class 0x088000 [ 13.388345] pci 0000:00:04.3: reg 0x10: [mem 0xf6cc0000-0xf6cc3fff 64bit] [ 13.393639] pci 0000:00:04.4: [8086:0e24] type 00 class 0x088000 [ 13.395397] pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit] [ 13.400435] pci 0000:00:04.5: [8086:0e25] type 00 class 0x088000 [ 13.402371] pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit] [ 13.406383] pci 0000:00:04.6: [8086:0e26] type 00 class 0x088000 [ 13.000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit] [ 13.804986] pci 0000:00:04.7: [8086:0e27] type 00 class 0x088000 [ 13.807377] pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit] [ 13.811426] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000 [ 13.815378] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000 [ 13.820383] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020 [ 13.822357] pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff] [ 13.826555] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400 [ 13.828854] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold [ 13.836822] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320 [ 13.839368] pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff] [ 13.841762] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold [ 13.846256] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400 [ 13.849026] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold [ 13.857850] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400 [ 13.859809] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold [ 13.868626] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320 [ 13.870366] pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503f1] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold [ 14.364984] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401 [ 14.369412] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100 [ 14.376896] pci 0000:00:1f.2: [8086:1d00] type 00 class 0x01018f [ 14.379365] pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007] [ 14.381336] pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b] [ 14.384340] pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017] [ 14.386334] pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b] [ 14.388335] pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x402f] [ 14.390336] pci 0000:00:1f.2: reg 0x24: [io 0x4030-0x403f] [ 14.429937] pci 0000:04:00.0: [103c:323b] type 00 class 0x010400 [ 14.432373] pci 0000:04:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff 64bit] [ 14.434345] pci 0000:04:00.0: reg 0x18: [mem 0xf7ef0000-0xf7ef03ff 64bit] [ 14.437337] pci 0000:04:00.0: reg 0x20: [io 0x6000-0x60ff] [ 14.439352] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 14.441333] pci 0000:04:00.0: enabling Extended Tags [ 14.444016] pci 0000:04:00.0: PME# supported from D0 D1 D3hot [ 14.460922] pci 0000:00:01.0: PCI bridge to [bus 04] [ 14.462326] pci 0000:00:01.0: bridge window [io f] [ 14.855824] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 14.858895] pci 0000:00:01.1: PCI bridge to 
[bus 11] [ 14.863460] pci 0000:03:00.0: [14e4:1657] type 00 class 0x020000 [ 14.865342] pci 0000:03:00.0: reg 0x10: [mem 0xf6bf0000-0xf6bfffff 64bit pref] [ 14.868342] pci 0000:03:00.0: reg 0x18: [mem 0xf6be0000-0xf6beffff 64bit pref] [ 14.870327] pci 0000:03:00.0: reg 0x20: [mem 0xf6bd0000-0xf6bdffff 64bit pref] [ 14.873320] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 14.875657] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold [ 14.885137] pci 0000:03:00.1: [14e4:1657] type 00 class 0x020000 [ 14.887345] pci 0000:03:00.1: reg 0x10: [mem 0xf6bc0000-0xf6bcffff 64bit pref] [ 14.890328] pci 0000:03:00.1: reg 0x18: [mem 0xf6bb0000-0xf6bbffff 64bit pref] [ 14.892326] pci 0000:03:00.1: reg 0x20: [mem 0xf6ba0000-0xf6baffff 64bit pref] [ 14.895320] pci 0000:03:00.1: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 14.897657] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold [ 14.907273] pci 0000:03:00.2: [14e4:1657] type 00 class 0x020000 [ 14.909345] pci 0000:03:00.2: reg 0x10: [mem 0xf6b90000-0xf6b9ffff 64bit pref] [ 14.912328] pci 0000:03:00.2: reg 0x18: [mem 0xf6b80000-0xf6b8ffff 64bit pref] [ 14.914326] pci 0000:03:00.2: reg 0x20: [mem 0xf6b70000-0xf6b7ffff 64bit pref] [ 14.917320] pci 0000:03:00.2: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 14.919577] pci 0000:03:00.2: PME# supported from D0 D3hot D3cold [ 14.936245] pci 0000:03:00.3: [14e4:1657] type 00 class 0x020059695] pci 0000:03:00.3: reg 0x10: [mem 0xf6b60000-0xf6b6ffff 64bit pref] [ 15.331695] pci 0000:03:00.3: reg 0x18: [mem 0xf6b50000-0xf6b5ffff 64bit pref] [ 15.334328] pci 0000:03:00.3: reg 0x20: [mem 0xf6b40000-0xf6b4ffff 64bit pref] [ 15.336320] pci 0000:03:00.3: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 15.339552] pci 0000:03:00.3: PME# supported from D0 D3hot D3cold [ 15.355657] pci 0000:00:02.0: PCI bridge to [bus 03] [ 15.357344] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 15.361227] pci 0000:00:02.1: PCI bridge to [bus 12] [ 15.364473] pci 0000:02:00.0: [103c:323b] type 00 class 0x010400 [ 15.366370] pci 0000:02:00.0: reg 0x10: [mem 0xf7d00000-0xf7dfffff 64bit] [ 15.369368] pci 0000:02:00.0: reg 0x18: [mem 0xf7cf0000-0xf7cf03ff 64bit] [ 15.372336] pci 0000:02:00.0: reg 0x20: [io 0x5000-0x50ff] [ 15.373350] pci 0000:02:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 15.376336] pci 0000:02:00.0: enabling Extended Tags [ 15.378823] pci 0000:02:00.0: PME# supported from D0 D1 D3hot [ 15.382518] pci 0000:00:02.2: PCI bridge to [bus 02] [ 15.384324] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 15.387323] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 15.390251] pci 0000:00:02.3: PCI bridge to [bus 13] [ 15.428418] pci 0000:00:03.0: PCI bridge to [bus 07] [ 15.430:00:03.1: PCI bridge to [bus 14] [ 15.824294] pci 0000:00:03.2: PCI bridge to [bus 15] [ 15.826862] pci 0000:00:03.3: PCI bridge to [bus 16] [ 15.828879] pci 0000:00:11.0: PCI bridge to [bus 18] [ 15.831872] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 15.834334] pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 [ 15.836348] pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] [ 15.838326] pci 0000:01:00.0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] [ 15.841327] pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] [ 15.845294] pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 [ 15.848358] pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] [ 15.850326] pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] [ 15.852325] pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] [ 15.855744] pci 
0000:01:00.1: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] [ 15.859474] pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 [ 15.861348] pci 0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] [ 15.863325] pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] [ 15.865325] pci 0000:01:00.2: reg 0x18: [meefffff] [ 16.355982] pci 0000:01:00.2: reg 0x1c: [mem 0xf6d80000-0xf6dfffff] [ 16.358325] pci 0000:01:00.2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] [ 16.360326] pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] [ 16.362326] pci 0000:01:00.2: reg 0x30: [mem 0x00000000-0x0000ffff pref] [ 16.365792] pci 0000:01:00.2: PME# supported from D0 D3hot D3cold [ 16.370287] pci 0000:01:00.4: [103c:3300] type 00 class 0x0c0300 [ 16.372479] pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] [ 16.376778] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 16.378324] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 16.381323] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 16.383325] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 16.386495] pci_bus 0000:17: extended config space not accessible [ 16.389212] pci 0000:00:1e.0: PCI bridge to [bus 17] (subtractive decode) [ 16.392368] pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) [ 16.395332] pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] (subtractive decode) [ 16.398331] pci 0000:00:1e.0: bridge windo-0x03af window] (subtractive decode) [ 16.792847] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) [ 16.795315] pci 0000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) [ 16.798315] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) [ 16.801315] pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive decode) [ 16.804315] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) [ 16.820872] ACPI: PCI: Interrupt link LNKA configured for IRQ 5 [ 16.825653] ACPI: PCI: Interrupt link LNKB configured for IRQ 7 [ 16.829527] ACPI: PCI: Interrupt link LNKC configured for IRQ 10 [ 16.834549] ACPI: PCI: Interrupt link LNKD configured for IRQ 10 [ 16.838553] ACPI: PCI: Interrupt link LNKE configured for IRQ 5 [ 16.842567] ACPI: PCI: Interrupt link LNKF configured for IRQ 7 [ 16.847533] ACPI: PCI: Interrupt link LNKG configured for IRQ 0 [ 16.849305] ACPI: PCI: Interrupt link LNKG disabled [ 16.852294] ACPI: PCI: Interrupt link LNKH configured for IRQ 0 [ 16.855316] ACPI: PCI: Interrupt link LNKH disabled [ 16.857017] ACPI: PCI Root Bridge [PCI1] (domain 000[ 17.303379] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 17.356268] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 17.360322] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 17.372081] PCI host bridge to bus 0000:20 [ 17.373331] pci_bus 0000:20: root bus resource [mem 0xfb000000-0xfbffffff window] [ 17.376315] pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] [ 17.378315] pci_bus 0000:20: root bus resource [bus 20-3f] [ 17.380776] pci 0000:20:00.0: [8086:0e01] type 01 class 0x060400 [ 17.383830] pci 0000:20:00.0: PME# supported from D0 D3hot D3cold [ 17.388844] pci 0000:20:01.0: [8086:0e02] type 01 class 0x060400 [ 17.391864] pci 0000:20:01.0: PME# supported from D0 D3hot D3cold [ 17.399294] pci 0000:20:01.1: [8086:0e03] type 01 class 0x060400 [ 
17.401848] pci 0000:20:01.1: PME# supported from D0 D3hot D3cold [ 17.410254] pci 0000:20:02.0: [8086:0e04] type 01 class 0x060400 [ 17.412861] pci 0000:20:02.0: PME# supported from D0 D3hot D3cold [ 17.421572] pci 0000:20:02.1: [8086:0e05]s 0x060400 [ 17.814148] pci 0000:20:02.1: PME# supported from D0 D3hot D3cold [ 17.819483] pci 0000:20:02.2: [8086:0e06] type 01 class 0x060400 [ 17.821627] pci 0000:20:02.2: PME# supported from D0 D3hot D3cold [ 17.827498] pci 0000:20:02.3: [8086:0e07] type 01 class 0x060400 [ 17.829640] pci 0000:20:02.3: PME# supported from D0 D3hot D3cold [ 17.835621] pci 0000:20:03.0: [8086:0e08] type 01 class 0x060400 [ 17.837391] pci 0000:20:03.0: enabling Extended Tags [ 17.839569] pci 0000:20:03.0: PME# supported from D0 D3hot D3cold [ 17.844524] pci 0000:20:03.1: [8086:0e09] type 01 class 0x060400 [ 17.846643] pci 0000:20:03.1: PME# supported from D0 D3hot D3cold [ 17.852507] pci 0000:20:03.2: [8086:0e0a] type 01 class 0x060400 [ 17.854626] pci 0000:20:03.2: PME# supported from D0 D3hot D3cold [ 17.860477] pci 0000:20:03.3: [8086:0e0b] type 01 class 0x060400 [ 17.862749] pci 0000:20:03.3: PME# supported from D0 D3hot D3cold [ 17.867436] pci 0000:20:04.0: [8086:0e20] type 00 class 0x088000 [ 17.870349] pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] [ 17.873522] pci 0000:20:04.1: [8086:0e21] type 00 class 0x088000 [ 17.876342] pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] [ 17.879527] pci 0000:20:04.2: [8086:0e22] type 00 class 0x0.075600] pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] [ 18.276317] pci 0000:20:04.3: [8086:0e23] type 00 class 0x088000 [ 18.279375] pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] [ 18.284397] pci 0000:20:04.4: [8086:0e24] type 00 class 0x088000 [ 18.286371] pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] [ 18.291463] pci 0000:20:04.5: [8086:0e25] type 00 class 0x088000 [ 18.293372] pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] [ 18.297408] pci 0000:20:04.6: [8086:0e26] type 00 class 0x088000 [ 18.300374] pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] [ 18.304402] pci 0000:20:04.7: [8086:0e27] type 00 class 0x088000 [ 18.307374] pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] [ 18.311354] pci 0000:20:05.0: [8086:0e28] type 00 class 0x088000 [ 18.315366] pci 0000:20:05.2: [8086:0e2a] type 00 class 0x088000 [ 18.320568] pci 0000:20:05.4: [8086:0e2c] type 00 class 0x080020 [ 18.322358] pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] [ 18.328206] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 18.330291] pci 0000:20:01.0: PCI bridge to [bus 21] [ 18.333286] pci 0000:20:01.1: PCI bridge to [bus 22] [ 18.336226] pci 0000:20:02.0: PCI bridge to [bus 23] [ 18.339277] pPCI bridge to [bus 24] [ 18.830522] pci 0000:20:02.2: PCI bridge to [bus 25] [ 18.833308] pci 0000:20:02.3: PCI bridge to [bus 26] [ 18.835289] pci 0000:20:03.0: PCI bridge to [bus 27] [ 18.838230] pci 0000:20:03.1: PCI bridge to [bus 28] [ 18.841467] pci 0000:20:03.2: PCI bridge to [bus 29] [ 18.844264] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 18.860500] iommu: Default domain type: Translated [ 18.862314] iommu: DMA domain TLB invalidation policy: lazy mode [ 18.869784] SCSI subsystem initialized [ 18.872375] ACPI: bus type USB registered [ 18.874178] usbcore: registered new interface driver usbfs [ 18.876749] usbcore: registered new interface driver hub [ 18.880993] usbcore: registered new device driver usb [ 18.883844] pps_core: LinuxPPS API ver. 
1 registered [ 18.886329] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 18.889446] PTP clock support registered [ 18.894245] EDAC MC: Ver: 3.0.0 [ 18.905922] NetLabel: Initializing [ 18.907313] NetLabel: domain hash size = 128 [ 18.909310] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 18.911888] NetLabel: unlabeled traffic allowed by default [ 18.913314] PCI: Using ACPI for IRQ routing [ 1 PCI: Discovered peer bus 1f [ 19.309294] PCI host bridge to bus 0000:1f [ 19.311307] pci_bus 0000:1f: Unknown NUMA node; performance will be reduced [ 19.313313] pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] [ 19.315311] pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 19.318308] pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] [ 19.320306] pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 19.324360] pci 0000:1f:08.0: [8086:0e80] type 00 class 0x088000 [ 19.327158] pci 0000:1f:09.0: [8086:0e90] type 00 class 0x088000 [ 19.330132] pci 0000:1f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 19.333116] pci 0000:1f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 19.336093] pci 0000:1f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 19.339107] pci 0000:1f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 19.342099] pci 0000:1f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 19.345110] pci 0000:1f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 19.348110] pci 0000:1f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 19.351186] pci 0000:1f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 19.353093] pci 0000:1f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 19.747518] pci 0000:1f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 19.751755] pci 0000:1f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 19.754848] pci 0000:1f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 19.758703] pci 0000:1f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 19.761690] pci 0000:1f:0e.1: [8086:0e30] type 00 class 0x110100 [ 19.765743] pci 0000:1f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 19.769880] pci 0000:1f:0f.1: [8086:0e71] type 00 class 0x088000 [ 19.772928] pci 0000:1f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 19.776895] pci 0000:1f:0f.3: [8086:0eab] type 00 class 0x088000 [ 19.780938] pci 0000:1f:0f.4: [8086:0eac] type 00 class 0x088000 [ 19.784907] pci 0000:1f:0f.5: [8086:0ead] type 00 class 0x088000 [ 19.789063] pci 0000:1f:10.0: [8086:0eb0] type 00 class 0x088000 [ 19.792888] pci 0000:1f:10.1: [8086:0eb1] type 00 class 0x088000 [ 19.795951] pci 0000:1f:10.2: [8086:0eb2] type 00 class 0x088000 [ 19.799895] pci 0000:1f:10.3: [8086:0eb3] type 00 class 0x088000 [ 19.803891] pci 0000:1f:10.4: [8086:0eb4] type 00 class 0x088000 [ 19.807894] pci 0000:1f:10.5: [8086:0eb5] type 00 class 0x088000 [ 19.811933] pci 0000:1f:10.6: [8086:0eb6] type 00 class 0x088000 [ 20.297294] pci 0000:1f:10.7: [8086:0eb7] type 00 class 0x088000 [ 20.305316] pci 0000:1f:13.0: [8086:0e1d] type 00 class 0x088000 [ 20.308093] pci 0000:1f:13.1: [8086:0e34] type 00 class 0x110100 [ 20.311105] pci 0000:1f:13.4: [8086:0e81] type 00 class 0x088000 [ 20.314094] pci 0000:1f:13.5: [8086:0e36] type 00 class 0x110100 [ 20.316090] pci 0000:1f:16.0: [8086:0ec8] type 00 class 0x088000 [ 20.319075] pci 0000:1f:16.1: [8086:0ec9] type 00 class 0x088000 [ 20.322118] pci 0000:1f:16.2: [8086:0eca] type 00 class 0x088000 [ 20.325126] pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f [ 20.327308] pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 20.332427] 
PCI: Discovered peer bus 3f [ 20.335014] PCI host bridge to bus 0000:3f [ 20.336305] pci_bus 0000:3f: Unknown NUMA node; performance will be reduced [ 20.338312] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] [ 20.340311] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 20.343307] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff] [ 20.833960] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 20.838388] pci 0000:3f:08.0: [8086:0e80] type 00 class 0x088000 [ 20.841677] pci 0000:3f:09.0: [8086:0e90] type 00 class 0x088000 [ 20.844725] pci 0000:3f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 20.848693] pci 0000:3f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 20.851654] pci 0000:3f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 20.855740] pci 0000:3f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 20.858681] pci 0000:3f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 20.863005] pci 0000:3f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 20.866696] pci 0000:3f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 20.870695] pci 0000:3f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 20.873697] pci 0000:3f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 20.877686] pci 0000:3f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 20.880694] pci 0000:3f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 20.884685] pci 0000:3f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 20.887696] pci 0000:3f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 21.281294] pci 0000:3f:0e.1: [8086:0e30] type 00 class 0x110100 [ 21.284185] pci 0000:3f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 21.287224] pci 0000:3f:0f.1: [8086:0e71] type 00 class 0x088000 [ 21.290230] pci 0000:3f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 21.293217] pci 0000:3f:0f.3: [8086:0eab] type 00 class 0x088000 [ 21.296250] pci 0000:3f:0f.4: [8086:0eac] type 00 class 0x088000 [ 21.299226] pci 0000:3f:0f.5: [8086:0ead] type 00 class 0x088000 [ 21.302207] pci 0000:3f:10.0: [8086:0eb0] type 00 class 0x088000 [ 21.305215] pci 0000:3f:10.1: [8086:0eb1] type 00 class 0x088000 [ 21.308288] pci 0000:3f:10.2: [8086:0eb2] type 00 class 0x088000 [ 21.311222] pci 0000:3f:10.3: [8086:0eb3] type 00 class 0x088000 [ 21.314214] pci 0000:3f:10.4: [8086:0eb4] type 00 class 0x088000 [ 21.317232] pci 0000:3f:10.5: [8086:0eb5] type 00 class 0x088000 [ 21.321200] pci 0000:3f:10.6: [8086:0eb6] type 00 class 0x088000 [ 21.324228] pci 0000:3f:10.7: [8086:0eb7] type 00 class 0x088000 [ 21.327206] pci 0000:3f:13.0: [8086:0e1d] type 00 class 0x088000 [ 21.330088] pci 0000:3f:13.1: [8086:0e34] type 00 class 0x110100 [ 21.333093] pci 0000:3f:13.4: [8086:0e81] type 00 class 0x088000 [ 21.336184] pci 0000:3f:13.5: [8086:0e36] type 00 class 0x110100 [ 21.826294] pci 0000:3f:16.0: [8086:0ec8] type 00 class 0x088000 [ 21.830653] pci 0000:3f:16.1: [8086:0ec9] type 00 class 0x088000 [ 21.833663] pci 0000:3f:16.2: [8086:0eca] type 00 class 0x088000 [ 21.837756] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f [ 21.840321] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 21.866884] pci 0000:01:00.1: vgaarb: setting as boot VGA device [ 21.867294] pci 0000:01:00.1: vgaarb: bridge control possible [ 21.867294] pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [ 21.874727] vgaarb: loaded [ 21.876760] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 [ 21.879314] hpet0: 8 comparators, 64-bit 14.318180 MHz counter [ 21.886871] clocksource: Switched to clocksource tsc-early [ 
22.424990] VFS: Disk quotas dquot_6.6.0 [ 22.426677] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 22.431111] pnp: PnP ACPI init [ 22.437986] system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved [ 22.445384] system 00:01: [io 0x0408-0x040f] has been reserved [ 22.447532] system 00:01: [io 0x04d0-0x04d1] has been reserved [ 22.449639] system 00:01: [io 0x0310-0x0315] has been reserved [ 22.451734] system 00:01: [io 0x0316-0x0317] has been reserved [ 22.453892] system 00:01: [io 0x0700-0x071f] has been reserved [ 22.456196] system 00:01: [io 0x0880-0x08ff] has been reserved [ 22.458410] system 00:01: [io 0x0900-0x097f] has been reserved [ 22.460552] system 00:01: [io 0x0cd4-0x0cd7] has been reserved [ 22.462692] system 00:01: [io 0x0cd0-0x0cd3] has been reserved [ 22.464834] system 00:01: [io 0x0f50-0x0f58] has been reserved [ 22.467165] system 00:01: [io 0x0ca0-0x0ca1] has been reserved [ 22.469410] system 00:01: [io 0x0ca4-0x0ca5] has been reserved [ 22.471563] system 00:01: [io 0x02f8-0x02ff] has been reserved [ 22.473704] system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved [ 22.476093] system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved [ 22.478524] system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved [ 22.480891] system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 22.483265] system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved [ 22.485635] system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved [ 22.488083] system 00:01: [mem 0xff800000-0xffffffff] has been reserved [ 22.808249] system 00:06: [mem 0xfbefe000-0xfbefffff] could not be reserved [ 22.812407] pnp: PnP ACPI: found 7 devices [ 22.865376] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 22.870047] NET: Registered PF_INET protocol family [ 22.874876] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 22.894593] tcp_listen_portaddr_hash hash table entries: 16384 (order: 8, 1310720 bytes, vmalloc) [ 22.899888] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, vmalloc) [ 22.904806] TCP established hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 22.911587] TCP bind hash table entries: 65536 (order: 10, 5242880 bytes, vmalloc hugepage) [ 22.921501] TCP: Hash tables configured (established 262144 bind 65536) [ 22.939581] MPTCP token hash table entries: 32768 (order: 9, 3145728 bytes, vmalloc) [ 22.949271] UDP hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 22.958384] UDP-Lite hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 22.971992] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 22.974185] NET: Registered PF_XDP protocol family [ 22.976123] pci 0000:00:02.0: BAR 14: assigned [mem 0xf4000000-0xf40fffff] [ 22.978749] pci 0000:04:00.0: BAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] [ 22.981351] pci 0000:00:01.0: PCI bridge to [bus 04] [ 22.983178] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 23.285391] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 23.287824] pci 0000:00:01.1: PCI bridge to [bus 11] [ 23.289636] pci 0000:03:00.0: BAR 6: assigned [mem 0xf4000000-0xf403ffff pref] [ 23.292153] pci 0000:03:00.1: BAR 6: assigned [mem 0xf4040000-0xf407ffff pref] [ 23.294616] pci 0000:03:00.2: BAR 6: assigned [mem 0xf4080000-0xf40bffff pref] [ 23.297086] pci 0000:03:00.3: BAR 6: assigned [mem 0xf40c0000-0xf40fffff pref] [ 23.299645] pci 0000:00:02.0: PCI bridge to [bus 03] [ 23.301402] pci 0000:00:02.0: bridge window [mem 
0xf4000000-0xf40fffff] [ 23.303732] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 23.306785] pci 0000:00:02.1: PCI bridge to [bus 12] [ 23.308646] pci 0000:02:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] [ 23.311138] pci 0000:00:02.2: PCI bridge to [bus 02] [ 23.312856] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 23.314969] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 23.317323] pci 0000:00:02.3: PCI bridge to [bus 13] [ 23.819471] pci 0000:00:03.0: PCI bridge to [bus 07] [ 23.821284] pci 0000:00:03.1: PCI bridge to [bus 14] [ 23.823031] pci 0000:00:03.2: PCI bridge to [bus 15] [ 23.824797] pci 0000:00:03.3: PCI bridge to [bus 16] [ 23.826575] pci 0000:00:11.0: PCI bridge to [bus 18] [ 23.828400] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 23.830208] pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] [ 23.832737] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 23.834465] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 23.836550] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 23.838949] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 23.841603] pci 0000:00:1e.0: PCI bridge to [bus 17] [ 23.843333] pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 23.845685] pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] [ 23.847836] pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] [ 23.849980] pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] [ 23.852087] pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] [ 23.854210] pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] [ 23.856338] pci_bus 0000:00: resource 10 [io 0x03c0-0x03df window] [ 23.858731] pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bffff window] [ 24.361101] pci_bus 0000:04: resource 0 [io 0x6000-0x6fff] [ 24.363026] pci_bus 0000:04: resource 1 [mem 0xf7e00000-0xf7ffffff] [ 24.365185] pci_bus 0000:03: resource 1 [mem 0xf4000000-0xf40fffff] [ 24.367322] pci_bus 0000:03: resource 2 [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 24.369836] pci_bus 0000:02: resource 0 [io 0x5000-0x5fff] [ 24.371752] pci_bus 0000:02: resource 1 [mem 0xf7c00000-0xf7dfffff] [ 24.373890] pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] [ 24.375763] pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] [ 24.377931] pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] [ 24.380433] pci_bus 0000:17: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 24.382762] pci_bus 0000:17: resource 5 [io 0x1000-0x7fff window] [ 24.384859] pci_bus 0000:17: resource 6 [io 0x0000-0x03af window] [ 24.386959] pci_bus 0000:17: resource 7 [io 0x03e0-0x0cf7 window] [ 24.389134] pci_bus 0000:17: resource 8 [io 0x0d00-0x0fff window] [ 24.391250] pci_bus 0000:17: resource 9 [io 0x03b0-0x03bb window] [ 24.393354] pci_bus 0000:17: resource 10 [io 0x03c0-0x03df window] [ 24.395475] pci_bus 0000:17: resource 11 [mem 0x000a0000-0x000bffff window] [ 24.400190] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 24.801904] pci 0000:20:01.0: PCI bridge to [bus 21] [ 24.803735] pci 0000:20:01.1: PCI bridge to [bus 22] [ 24.805950] pci 0000:20:02.0: PCI bridge to [bus 23] [ 24.807706] pci 0000:20:02.1: PCI bridge to [bus 24] [ 24.809496] pci 0000:20:02.2: PCI bridge to [bus 25] [ 24.811246] pci 0000:20:02.3: PCI bridge to [bus 26] [ 24.812969] pci 0000:20:03.0: PCI bridge to [bus 27] [ 24.814700] pci 0000:20:03.1: PCI bridge to [bus 28] [ 24.816430] pci 0000:20:03.2: PCI bridge to [bus 29] [ 24.818259] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 24.819991] pci_bus 0000:20: resource 4 [mem 
0xfb000000-0xfbffffff window] [ 24.822366] pci_bus 0000:20: resource 5 [io 0x8000-0xffff window] [ 24.824760] pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] [ 24.826739] pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 24.829143] pci_bus 0000:3f: resource 4 [io 0x0000-0xffff] [ 24.831060] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 24.833522] pci 0000:00:05.0: disabled boot interrupts on device [8086:0e28] [ 24.862893] pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x290 took 26245 usecs [ 24.901864] pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x290 took 35459 usecs [ 25.317133] pci 0000:01:00.4: quirk_usb_early_handoff+0x0/0x290 took 12350 usecs [ 25.320693] pci 0000:20:05.0: disabled boot interrupts on device [8086:0e28] [ 25.323526] PCI: CLS 64 bytes, default 64 [ 25.326866] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 25.326952] Trying to unpack rootfs image as initramfs... [ 25.329255] software IO TLB: mapped [mem 0x00000000a9000000-0x00000000ad000000] (64MB) [ 25.333938] ACPI: bus type thunderbolt registered [ 25.447792] Initialise system trusted keyrings [ 25.449713] Key type blacklist registered [ 25.452721] workingset: timestamp_bits=36 max_order=23 bucket_order=0 [ 25.571588] zbud: loaded [ 25.585379] integrity: Platform Keyring initialized [ 25.601332] NET: Registered PF_ALG protocol family [ 25.603116] xor: automatically using best checksumming function avx [ 25.605587] Key type asymmetric registered [ 25.607031] Asymmetric key parser 'x509' registered [ 25.608874] Running certificate verification selftests [ 25.719885] cryptomgr_test (209) used greatest stack depth: 28528 bytes left [ 25.781022] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [ 25.788082] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [ 25.891914] io scheduler mq-deadline registered [ 25.893527] io scheduler kyber registered [ 25.896060] io scheduler bfq registered [ 25.905833] atomic64_test: passed for x86-64 platform with CX8 and with SSE [ 26.168681] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 26.176719] ACPI: \_PR_.CP00: Found 2 idle states [ 26.230775] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 26.236939] ACPI: button: Power Button [PWRF] [ 26.291419] thermal LNXTHERM:00: registered as thermal_zone0 [ 26.293461] ACPI: thermal: Thermal Zone [THM0] (8 C) [ 26.296705] ERST: Error Record Serialization Table (ERST) support is initialized. [ 26.299611] pstore: Registered erst as persistent store backend [ 26.306786] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
[ 26.313441] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 26.316955] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 26.323521] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A [ 26.340462] Non-volatile memory driver v1.3 [ 26.369637] tsc: Refined TSC clocksource calibration: 2094.949 MHz [ 26.372087] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328cf0a17, max_idle_ns: 440795250041 ns [ 26.376197] clocksource: Switched to clocksource tsc [ 26.403284] rdac: device handler registered [ 26.405928] hp_sw: device handler registered [ 26.407584] emc: device handler registered [ 26.410811] alua: device handler registered [ 26.417647] libphy: Fixed MDIO Bus: probed [ 26.420935] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 26.423460] ehci-pci: EHCI PCI platform driver [ 26.436369] ehci-pci 0000:00:1a.0: EHCI Host Controller [ 26.440334] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 [ 26.443076] ehci-pci 0000:00:1a.0: debug port 2 [ 26.449488] ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 [ 26.458425] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 [ 26.462249] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 26.465147] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 26.467623] usb usb1: Product: EHCI Host Controller [ 26.469400] usb usb1: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 26.872201] usb usb1: SerialNumber: 0000:00:1a.0 [ 26.877763] hub 1-0:1.0: USB hub found [ 26.879502] hub 1-0:1.0: 2 ports detected [ 26.893608] ehci-pci 0000:00:1d.0: EHCI Host Controller [ 26.896401] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 26.899097] ehci-pci 0000:00:1d.0: debug port 2 [ 26.905050] ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 [ 26.913375] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 [ 26.916474] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 26.919621] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 26.922110] usb usb2: Product: EHCI Host Controller [ 26.923816] usb usb2: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 26.926770] usb usb2: SerialNumber: 0000:00:1d.0 [ 26.931500] hub 2-0:1.0: USB hub found [ 26.933076] hub 2-0:1.0: 2 ports detected [ 26.938716] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 26.941087] ohci-pci: OHCI PCI platform driver [ 26.943461] uhci_hcd: USB Universal Host Controller Interface driver [ 26.949998] uhci_hcd 0000:01:00.4: UHCI Host Controller [ 26.953393] uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 [ 27.356390] uhci_hcd 0000:01:00.4: detected 8 ports [ 27.358200] uhci_hcd 0000:01:00.4: port count misdetected?
forcing to 2 ports [ 27.361126] uhci_hcd 0000:01:00.4: irq 47, io port 0x00003c00 [ 27.364779] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [ 27.368024] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 27.370677] usb usb3: Product: UHCI Host Controller [ 27.372510] usb usb3: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug uhci_hcd [ 27.375589] usb usb3: SerialNumber: 0000:01:00.4 [ 27.381734] hub 3-0:1.0: USB hub found [ 27.383615] hub 3-0:1.0: 2 ports detected [ 27.391441] usbcore: registered new interface driver usbserial_generic [ 27.394485] usbserial: USB Serial support registered for generic [ 27.397968] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 [ 27.403483] Freeing initrd memory: 35636K [ 27.404526] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 27.406934] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 27.414556] mousedev: PS/2 mouse device common for all 479] usb 1-1: new high-speed USB device number 2 using ehci-pci [ 27.598437] rtc_cmos 00:03: RTC can wake from S4 [ 27.924724] rtc_cmos 00:03: registered as rtc0 [ 27.926529] rtc_cmos 00:03: setting system clock to 2023-02-03T00:32:50 UTC (1675384370) [ 27.929909] rtc_cmos 00:03: alarms up to one day, 114 bytes nvram, hpet irqs [ 27.942542] intel_pstate: Intel P-state driver initializing [ 27.970429] usb 2-1: new high-speed USB device number 2 using ehci-pci [ 27.979889] hid: raw HID events driver (C) Jiri Kosina [ 27.982724] usbcore: registered new interface driver usbhid [ 27.984735] usbhid: USB HID core driver [ 27.987067] drop_monitor: Initializing network drop monitor service [ 28.020355] Initializing XFRM netlink socket [ 28.025771] NET: Registered PF_INET6 protocol family [ 28.040756] Segment Routing with IPv6 [ 28.042219] NET: Registered PF_PACKET protocol family [ 28.044669] mpls_gso: MPLS GSO support [ 28.049128] usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 28.052063] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 28.060242] hub 1-1:1.0: USB hub found [ 28.062625] hub 1-1:1.0: 6 ports detected [ 28.086687] microcode: sig=0x306e4, pf=0x1, revision=0x42e [ 28.092246] microcode: Microcode Update Driver: v2.2. [ 28.092280] IPI shorthand broadcast: enabled [ 28.095928]on of gcm_enc/dec engaged. 
[ 28.104872] usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 28.400483] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 28.403112] AES CTR mode by8 optimization enabled [ 28.406219] hub 2-1:1.0: USB hub found [ 28.408046] hub 2-1:1.0: 8 ports detected [ 28.410030] sched_clock: Marking stable (22174913660, 6234899205)->(30327011892, -1917199027) [ 28.450167] registered taskstats version 1 [ 28.458524] Loading compiled-in X.509 certificates [ 28.466358] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 28.472663] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [ 28.478877] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [ 28.545953] cryptomgr_test (242) used greatest stack depth: 27672 bytes left [ 28.572027] zswap: loaded using pool lzo/zbud [ 28.581018] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table helpers [ 28.690525] usb 2-1.3: new high-speed USB device number 3 using ehci-pci [ 28.773037] usb 2-1.3: New USB device found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 [ 276076] usb 2-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 28.881874] hub 2-1.3:1.0: USB hub found [ 28.883708] hub 2-1.3:1.0: 2 ports detected [ 29.548169] page_owner is disabled [ 29.552044] pstore: Using crash dump compression: deflate [ 29.555717] Key type big_key registered [ 29.589888] modprobe (246) used greatest stack depth: 27512 bytes left [ 29.628029] Key type encrypted registered [ 29.629808] ima: No TPM chip found, activating TPM-bypass! [ 29.631908] Loading compiled-in module X.509 certificates [ 29.636288] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 29.640427] ima: Allocated hash algorithm: sha256 [ 29.642435] ima: No architecture policies found [ 29.644733] evm: Initialising EVM extended attributes: [ 29.646596] evm: security.selinux [ 29.647818] evm: security.SMACK64 (disabled) [ 29.649427] evm: security.SMACK64EXEC (disabled) [ 29.651034] evm: security.SMACK64TRANSMUTE (disabled) [ 29.652815] evm: security.SMACK64MMAP (disabled) [ 29.654447] evm: security.apparmor (disabled) [ 29.655964] evm: security.ima [ 29.657032] evm: security.capability [ 29.658391] evm: HMAC attrs: 0x1 [ 29.815853] cryptomgr_test (256) used greatest stack depth: 27296 bytes left [ 30.525087] cryptomgr_test (353) used greatest stack depth: 27032 bytes left [ 30.878822] PM: Magic number: 11:672:505 [ 30.880473] i8042 i8042: hash matches [ 30.928234] Freeing unused decrypted memory: 2036K [ 30.937452] Freeing unused kernel image (initmem) memory: 5300K [ 30.938944] Write protecting the kernel read-only data: 57344k [ 30.947508] Freeing unused kernel image (text/rodata gap) memory: 2036K [ 30.951764] Freeing unused kernel image (rodata/data gap) memory: 1400K [ 31.059284] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 31.059672] x86/mm: Checking user space page tables [ 31.143662] x86/mm: Checked W+X mappings: passed, no W+X pages found. 
[ 31.144082] Run /init as init process [ 31.292675] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 31.310174] systemd[1]: Detected architecture x86-64. [ 31.310651] systemd[1]: Running in initrd. Welcome to CentOS Stream 9 dracut-057-20.git20221213.el9 (Initramfs) ! [ 31.317548] systemd[1]: Hostname set to . [ 32.473237] systemd[1]: Queued start job for default target Initrd Default Target. [ 32.493832] systemd[1]: Created slice Slice /system/systemd-hibernate-resume. [ OK ] Created slice Slice /system/systemd-hibernate-resume . [ 32.501211] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 32.505794] systemd[1]: Reached target Initrd /usr File System. [ OK ] Reached target Initrd /usr File System . [ 32.510672] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 32.514673] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 32.519670] systemd[1]: Reached target Swaps. [ OK ] Reached target Swaps . [ 32.524696] systemd[1]: Reached target Timer Units. [ OK ] Reached target Timer Units . [ 32.528852] systemd[1]: Listening on D-Bus System Message Bus Socket. [ OK ] Listening on D-Bus System Message Bus Socket . [ 32.535175] systemd[1]: Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket (/dev/log) . [ 32.542081] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket . [ 32.548196] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 32.554399] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 32.558720] systemd[1]: Reached target Socket Units. [ OK ] Reached target Socket Units . [ 32.589217] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 32.645832] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 32.654187] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 32.683745] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 32.715953] systemd[1]: Starting Create System Users... Starting Create System Users ... [ 32.749086] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console ... [ 32.789510] systemd[1]: Finished Create List of Static Device Nodes. [ OK ] Finished Create List of Static Device Nodes . [ 32.844141] systemd[1]: Finished Apply Kernel Variables. [ OK ] Finished Apply Kernel Variables . [ 33.008657] systemd[1]: Finished Create System Users. [ OK ] Finished Create System Users . [ 33.044263] systemd[1]: Starting Create Static Device Nodes in /dev... Starting Create Static Device Nodes in /dev ... [ 33.257079] systemd[1]: Finished Create Static Device Nodes in /dev. [ OK ] Finished Create Static Device Nodes in /dev . [ 33.406476] systemd[1]: Finished Setup Virtual Console. [ OK ] Finished Setup Virtual Console . [ 33.415169] systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met. [ 33.449243] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook ... 
[ 34.144831] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . Starting Create Volatile Files and Directories ... [ OK ] Finished Create Volatile Files and Directories . [ OK ] Finished dracut cmdline hook . Starting dracut pre-udev hook ... [ 35.815375] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. [ 35.816912] device-mapper: uevent: version 1.0.3 [ 35.820087] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com [ OK ] Finished dracut pre-udev hook . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . Starting Coldplug All udev Devices ... [ * ] (1 of 3) A start job is running for…g All udev Devices (6s / no limit) M [ * * ] (1 of 3) A start job is running for…g All udev Devices (6s / no limit) M [ * * * ] (1 of 3) A start job is running for…g All udev Devices (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-root (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-root (8s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-root (8s / no limit) M [ OK ] Finished Coldplug All udev Devices . [ OK ] Reached target Network . Starting dracut initqueue hook ... [ 41.444328] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized [ 41.515103] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 [ 41.515587] hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) [ 41.516410] hpwdt 0000:01:00.0: pretimeout: on. [ 41.517251] hpwdt 0000:01:00.0: kdumptimeout: -1. [ 41.576976] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:02:00.0 [ 41.577510] HP HPSA Driver (v 3.4.20-200) [ 41.577852] hpsa 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control [ 41.600121] tg3 0000:03:00.0 eth0: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c4 [ 41.600904] tg3 0000:03:00.0 eth0: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 41.601946] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 41.602467] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 41.678543] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 41.694229] tg3 0000:03:00.1 eth1: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c5 [ 41.694903] tg3 0000:03:00.1 eth1: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 41.695865] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 41.696339] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 41.778176] hpsa 0000:02:00.0: Logical aborts not supported [ 41.778570] hpsa 0000:02:00.0: HP SSD Smart Path aborts not supported [ 41.797447] tg3 0000:03:00.2 eth2: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c6 [ 41.798110] tg3 0000:03:00.2 eth2: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 41.799219] tg3 0000:03:00.2 eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 41.799727] tg3 0000:03:00.2 eth2: dma_rwctrl[00000001] dma_mask[64-bit] [ 41.860674] scsi host0: hpsa [ 41.885567] hpsa can't handle SMP requests [ 41.887723] scsi host1: ata_piix [ 41.890421] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:04:00.0 [ 41.891407] hpsa 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control [ 41.898257] tg3 0000:03:00.3 
eth3: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c7 [ 41.898982] tg3 0000:03:00.3 eth3: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 41.899990] tg3 0000:03:00.3 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 41.900445] tg3 0000:03:00.3 eth3: dma_rwctrl[00000001] dma_mask[64-bit] [ 41.902892] hpsa 0000:02:00.0: scsi 0:0:0:0: added RAID HP P420i controller SSDSmartPathCap- En- Exp=1 [ 41.903689] hpsa 0000:02:00.0: scsi 0:0:1:0: masked Direct-Access ATA MM0500GBKAK PHYS DRV SSDSmartPathCap- En- Exp=0 [ 41.904777] hpsa 0000:02:00.0: scsi 0:1:0:0: added Direct-Access HP LOGICAL VOLUME RAID-0 SSDSmartPathCap- En- Exp=1 [ 41.909977] hpsa can't handle SMP requests [ 41.913356] scsi 0:0:0:0: RAID HP P420i 8.32 PQ: 0 ANSI: 5 [ 41.915082] scsi host2: ata_piix [ 41.918731] ata1: SATA max UDMA/133 cmd 0x4000 ctl 0x4008 bmdma 0x4020 irq 17 [ 41.919628] ata2: SATA max UDMA/133 cmd 0x4010 ctl 0x4018 bmdma 0x4028 irq 17 [ 41.921066] scsi 0:1:0:0: Direct-Access HP LOGICAL VOLUME 8.32 PQ: 0 ANSI: 5 [ 41.942365] hpsa 0000:04:00.0: Logical aborts not supported [ 41.942737] hpsa 0000:04:00.0: HP SSD Smart Path aborts not supported [ 41.963581] tg3 0000:03:00.1 eno2: renamed from eth1 [ 41.984653] tg3 0000:03:00.2 eno3: renamed from eth2 [ 42.001992] tg3 0000:03:00.0 eno1: renamed from eth0 [ 42.013997] scsi host3: hpsa [ 42.022527] hpsa can't handle SMP requests [ 42.027495] tg3 0000:03:00.3 eno4: renamed from eth3 [ 42.035076] hpsa 0000:04:00.0: scsi 3:0:0:0: added RAID HP P421 controller SSDSmartPathCap- En- Exp=1 [ 42.035818] hpsa 0000:04:00.0: scsi 3:0:1:0: masked Enclosure PMCSIERA SRCv8x6G enclosure SSDSmartPathCap- En- Exp=0 [ 42.040143] hpsa can't handle SMP requests [ 42.042727] scsi 3:0:0:0: RAID HP P421 8.32 PQ: 0 ANSI: 5 [ 42.156117] scsi 0:0:0:0: Attached scsi generic sg0 type 12 [ 42.157974] scsi 0:1:0:0: Attached scsi generic sg1 type 0 [ 42.159814] scsi 3:0:0:0: Attached scsi generic sg2 type 12 [ 42.273227] sd 0:1:0:0: [sda] 976707632 512-byte logical blocks: (500 GB/466 GiB) [ 42.274762] sd 0:1:0:0: [sda] Write Protect is off [ 42.276193] sd 0:1:0:0: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA [ 42.276873] sd 0:1:0:0: [sda] Preferred minimum I/O size 262144 bytes [ 42.277355] sd 0:1:0:0: [sda] Optimal transfer size 262144 bytes [ 42.332029] sda: sda1 sda2 [ 42.336173] sd 0:1:0:0: [sda] Attached SCSI disk [ 42.961477] ata2.00: failed to resume link (SControl 0) [ 43.273471] ata1.01: failed to resume link (SControl 0) [ 43.285440] ata1.00: SATA link down (SStatus 0 SControl 300) [ 43.286374] ata1.01: SATA link down (SStatus 4 SControl 0) [ 44.001464] ata2.01: failed to resume link (SControl 0) [ 44.013407] ata2.00: SATA link down (SStatus 4 SControl 0) [ 44.013764] ata2.01: SATA link down (SStatus 4 SControl 0) [ 45.886123] cp (709) used greatest stack depth: 26472 bytes left [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-root . [ OK ] Reached target Initrd Root Device . [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-swap . Starting Resume from hiber…cs_hpe--dl360pgen8--08-swap ... [ OK ] Finished Resume from hiber…r/cs_hpe--dl360pgen8--08-swap . [ OK ] Reached target Preparation for Local File Systems . [ OK ] Reached target Local File Systems . [ OK ] Reached target System Initialization . [ OK ] Reached target Basic System . [ OK ] Finished dracut initqueue hook . [ OK ] Reached target Preparation for Remote File Systems . 
[ OK ] Reached target Remote File Systems . Starting File System Check…cs_hpe--dl360pgen8--08-root ... [ 48.371700] fsck (745) used greatest stack depth: 26136 bytes left [ OK ] Finished File System Check…r/cs_hpe--dl360pgen8--08-root . Mounting /sysroot ... [ 49.773685] SGI XFS with ACLs, security attributes, scrub, verbose warnings, quota, no debug enabled [ 49.844664] XFS (dm-0): Mounting V5 Filesystem [ 50.496479] XFS (dm-0): Ending clean mount [ 50.548592] mount (747) used greatest stack depth: 24656 bytes left [ OK ] Mounted /sysroot . [ OK ] Reached target Initrd Root File System . Starting Mountpoints Configured in the Real Root ... [ 50.712853] systemd-fstab-g (759) used greatest stack depth: 23880 bytes left [ OK ] Finished Mountpoints Configured in the Real Root . [ OK ] Reached target Initrd File Systems . [ OK ] Reached target Initrd Default Target . Starting dracut pre-pivot and cleanup hook ... [ OK ] Finished dracut pre-pivot and cleanup hook . Starting Cleaning Up and Shutting Down Daemons ... [ OK ] Stopped target Network . [ OK ] Stopped target Timer Units . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped dracut pre-pivot and cleanup hook . [ OK ] Stopped target Initrd Default Target . [ OK ] Stopped target Basic System . [ OK ] Stopped target Initrd Root Device . [ OK ] Stopped target Initrd /usr File System . [ OK ] Stopped target Path Units . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped target Remote File Systems . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target Slice Units . [ OK ] Stopped target Socket Units . [ OK ] Stopped target System Initialization . [ OK ] Stopped target Local File Systems . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Stopped target Swaps . [ OK ] Stopped dracut initqueue hook . [ OK ] Stopped Apply Kernel Variables . [ OK ] Stopped Create Volatile Files and Directories . [ OK ] Stopped Coldplug All udev Devices . Stopping Rule-based Manage…for Device Events and Files ... [ OK ] Stopped Setup Virtual Console . [ OK ] Finished Cleaning Up and Shutting Down Daemons . [ OK ] Stopped Rule-based Manager for Device Events and Files . [ OK ] Closed udev Control Socket . [ OK ] Closed udev Kernel Socket . [ OK ] Stopped dracut pre-udev hook . [ OK ] Stopped dracut cmdline hook . Starting Cleanup udev Database ... [ OK ] Stopped Create Static Device Nodes in /dev . [ OK ] Stopped Create List of Static Device Nodes . [ OK ] Stopped Create System Users . [ OK ] Finished Cleanup udev Database . [ OK ] Reached target Switch Root . Starting Switch Root ... [ 52.802546] systemd-journald[403]: Received SIGTERM from PID 1 (systemd). [ 56.415341] SELinux: policy capability network_peer_controls=1 [ 56.416398] SELinux: policy capability open_perms=1 [ 56.416722] SELinux: policy capability extended_socket_class=1 [ 56.417671] SELinux: policy capability always_check_network=0 [ 56.418538] SELinux: policy capability cgroup_seclabel=1 [ 56.418929] SELinux: policy capability nnp_nosuid_transition=1 [ 56.419669] SELinux: policy capability genfs_seclabel_symlinks=1 [ 56.978551] audit: type=1403 audit(1675384399.550:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 [ 57.007887] systemd[1]: Successfully loaded SELinux policy in 2.412850s. [ 57.200512] systemd[1]: RTC configured in localtime, applying delta of -300 minutes to system time. [ 57.611184] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 307.103ms. 
[ 57.686122] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 57.705138] systemd[1]: Detected architecture x86-64. Welcome to CentOS Stream 9 ! [ 58.287264] systemd-rc-local-generator[804]: /etc/rc.d/rc.local is not marked executable, skipping. [ 60.167612] systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. [ 60.829988] systemd[1]: initrd-switch-root.service: Deactivated successfully. [ 60.837535] systemd[1]: Stopped Switch Root. [ OK ] Stopped Switch Root . [ 60.852494] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [ 60.866622] systemd[1]: Created slice Slice /system/getty. [ OK ] Created slice Slice /system/getty . [ 60.884909] systemd[1]: Created slice Slice /system/modprobe. [ OK ] Created slice Slice /system/modprobe . [ 60.901400] systemd[1]: Created slice Slice /system/serial-getty. [ OK ] Created slice Slice /system/serial-getty . [ 60.915743] systemd[1]: Created slice Slice /system/sshd-keygen. [ OK ] Created slice Slice /system/sshd-keygen . [ 60.938974] systemd[1]: Created slice User and Session Slice. [ OK ] Created slice User and Session Slice . [ 60.947571] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 60.956053] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [ OK ] Started Forward Password R…uests to Wall Directory Watch . [ 60.966716] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [ OK ] Set up automount Arbitrary…s File System Automount Point . [ 60.968958] systemd[1]: Reached target Local Encrypted Volumes. [ OK ] Reached target Local Encrypted Volumes . [ 60.971693] systemd[1]: Stopped target Switch Root. [ OK ] Stopped target Switch Root . [ 60.976879] systemd[1]: Stopped target Initrd File Systems. [ OK ] Stopped target Initrd File Systems . [ 60.981906] systemd[1]: Stopped target Initrd Root File System. [ OK ] Stopped target Initrd Root File System . [ 60.986896] systemd[1]: Reached target Local Integrity Protected Volumes. [ OK ] Reached target Local Integrity Protected Volumes . [ 60.991960] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 60.994257] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 60.999936] systemd[1]: Reached target System Time Set. [ OK ] Reached target System Time Set . [ 61.004898] systemd[1]: Reached target Local Verity Protected Volumes. [ OK ] Reached target Local Verity Protected Volumes . [ 61.015184] systemd[1]: Listening on Device-mapper event daemon FIFOs. [ OK ] Listening on Device-mapper event daemon FIFOs . [ 61.052734] systemd[1]: Listening on LVM2 poll daemon socket. [ OK ] Listening on LVM2 poll daemon socket . [ 61.231435] systemd[1]: Listening on RPCbind Server Activation Socket. [ OK ] Listening on RPCbind Server Activation Socket . [ 61.236925] systemd[1]: Reached target RPC Port Mapper. [ OK ] Reached target RPC Port Mapper . [ 61.266037] systemd[1]: Listening on Process Core Dump Socket. 
[ OK ] Listening on Process Core Dump Socket . [ 61.271980] systemd[1]: Listening on initctl Compatibility Named Pipe. [ OK ] Listening on initctl Compatibility Named Pipe . [ 61.303443] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 61.314248] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 61.354830] systemd[1]: Activating swap /dev/mapper/cs_hpe--dl360pgen8--08-swap... Activating swap /dev/mappe…cs_hpe--dl360pgen8--08-swap ... [ 61.405639] systemd[1]: Mounting Huge Pages File System... Mounting Huge Pages File System ... [ 61.453286] systemd[1]: Mounting POSIX Message Queue File System... Mounting POSIX Message Queue File System ... [ 61.502838] systemd[1]: Mounting Kernel Debug File System... Mounting Kernel Debug File System ... [ 61.523351] Adding 16502780k swap on /dev/mapper/cs_hpe--dl360pgen8--08-swap. Priority:-2 extents:1 across:16502780k FS [ 61.549202] systemd[1]: Mounting Kernel Trace File System... Mounting Kernel Trace File System ... [ 61.552881] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [ 61.617232] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 61.656360] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Starting Monitoring of LVM…meventd or progress polling ... [ 61.701632] systemd[1]: Starting Load Kernel Module configfs... Starting Load Kernel Module configfs ... [ 61.742167] systemd[1]: Starting Load Kernel Module drm... Starting Load Kernel Module drm ... [ 61.779929] systemd[1]: Starting Load Kernel Module fuse... Starting Load Kernel Module fuse ... [ 61.851220] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Read and set NIS …from /etc/sysconfig/network ... [ 61.857203] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [ 61.860082] systemd[1]: Stopped File System Check on Root Device. [ OK ] Stopped File System Check on Root Device . [ 61.864455] systemd[1]: Stopped Journal Service. [ OK ] Stopped Journal Service . [ 61.867042] systemd[1]: systemd-journald.service: Consumed 1.807s CPU time. [ 61.926114] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 61.948484] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 61.982695] systemd[1]: Starting Generate network units from Kernel command line... Starting Generate network …ts from Kernel command line ... [ 62.024982] systemd[1]: Starting Remount Root and Kernel File Systems... Starting Remount Root and Kernel File Systems ... [ 62.032378] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [ 62.070498] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 62.110927] systemd[1]: Starting Coldplug All udev Devices... Starting Coldplug All udev Devices ... [ 62.179104] systemd[1]: Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap. [ OK ] Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ 62.218700] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . [ OK ] Mounted Huge Pages File System . [ OK ] Mounted POSIX Message Queue File System . [ OK ] Mounted Kernel Debug File System . [ OK ] Mounted Kernel Trace File System . [ OK ] Finished Create List of Static Device Nodes . [ OK ] Finished Load Kernel Module configfs . 
[ OK ] Finished Read and set NIS …e from /etc/sysconfig/network . [ OK ] Finished Generate network units from Kernel command line . [ OK ] Finished Apply Kernel Variables . [ OK ] Reached target Preparation for Network . [ OK ] Reached target Swaps . Mounting Kernel Configuration File System ... [ OK ] Mounted Kernel Configuration File System . [ OK ] Finished Monitoring of LVM… dmeventd or progress polling . [ 62.496575] fuse: init (API version 7.36) [ OK ] Finished Load Kernel Module fuse . Mounting FUSE Control File System ... [ OK ] Finished Remount Root and Kernel File Systems . [ 62.588750] ACPI: bus type drm_connector registered Starting Flush Journal to Persistent Storage ... Starting Load/Save Random Seed ... Starting Create Static Device Nodes in /dev ... [ OK ] Finished Load Kernel Module drm . [ OK ] Mounted FUSE Control File System . [ 62.770878] systemd-journald[829]: Received client request to flush runtime journal. [ OK ] Finished Flush Journal to Persistent Storage . [ OK ] Finished Load/Save Random Seed . [ OK ] Finished Create Static Device Nodes in /dev . [ OK ] Reached target Preparation for Local File Systems . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . [ * ] (1 of 4) A start job is running for…l360pgen8--08-home (5s / no limit) M [ * * ] (1 of 4) A start job is running for…l360pgen8--08-home (5s / no limit) M [ * * * ] (1 of 4) A start job is running for…l360pgen8--08-home (6s / no limit) M Starting Load Kernel Module configfs ... [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Coldplug All udev Devices . [ 67.777435] power_meter ACPI000D:00: Found ACPI power meter. [ 67.782411] power_meter ACPI000D:00: Ignoring unsafe software power cap! [ 67.783196] power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). [ 67.949818] IPMI message handler: version 39.2 [ 68.001851] ipmi device interface [ 68.015365] dca service started, version 1.12.1 [ 68.088029] ipmi_si: IPMI System Interface driver [ 68.089085] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 68.089560] ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 [ 68.091082] ipmi_si: Adding SMBIOS-specified kcs state machine [ 68.095679] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 68.097912] ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2-0x0ca3] regsize 1 spacing 1 irq 0 [ 68.123980] ioatdma: Intel(R) QuickData Technology Driver 5.00 [ 68.224661] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 68.225686] ipmi_si: Adding ACPI-specified kcs state machine [ 68.229036] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 [ 68.321415] ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. [ 68.406344] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x00000b, prod_id: 0x2000, dev_id: 0x13) Mounting /boot ... [ 68.568266] ipmi_si IPI0001:00: IPMI kcs interface initialized [ 68.602011] ipmi_ssif: IPMI SSIF Interface driver [ OK ] Started /usr/sbin/lvm vgch…on event cs_hpe-dl360pgen8-08 . 
[ 68.626109] XFS (sda1): Mounting V5 Filesystem [ 68.669683] mgag200 0000:01:00.1: vgaarb: deactivate vga console [ 69.185574] Console: switching to colour dummy device 80x25 [ 69.192623] input: PC Speaker as /devices/platform/pcspkr/input/input4 [ 69.223008] [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 [ 69.301267] XFS (sda1): Ending clean mount [ OK ] Mounted /boot . [ 69.631845] fbcon: mgag200drmfb (fb0) is primary device [ 69.696728] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer [ 69.696744] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules [ 69.696749] RAPL PMU: hw unit of domain package 2^-16 Joules [ 70.360763] Console: switching to colour frame buffer device 128x48 [ 70.402342] mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device [ * * * ] A start job is running for /dev/map…360pgen8--08-home (10s / no limit) M [ * * * ] A start job is running for /dev/map…360pgen8--08-home (11s / no limit) [ 71.792472] iTCO_vendor_support: vendor-support=0 [ 71.816170] iTCO_wdt iTCO_wdt.1.auto: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS M [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-home . Mounting /home ... [ 72.356556] EDAC MC0: Giving out device to module sb_edac controller Ivy Bridge SrcID#0_Ha#0: DEV 0000:1f:0e.0 (INTERRUPT) [ 72.360461] EDAC MC1: Giving out device to module sb_edac controller Ivy Bridge SrcID#1_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT) [ 72.361187] EDAC sbridge: Ver: 1.1.2 [ 72.383941] XFS (dm-2): Mounting V5 Filesystem [ 72.687488] intel_rapl_common: Found RAPL domain package [ 72.687827] intel_rapl_common: Found RAPL domain core [ 72.693419] intel_rapl_common: Found RAPL domain package [ 72.693770] intel_rapl_common: Found RAPL domain core [ 73.016563] XFS (dm-2): Ending clean mount [ OK ] Mounted /home . [ OK ] Reached target Local File Systems . Starting Automatic Boot Loader Update ... Starting Create Volatile Files and Directories ... [ OK ] Finished Automatic Boot Loader Update . [ OK ] Finished Create Volatile Files and Directories . Mounting RPC Pipe File System ... Starting Security Auditing Service ... Starting RPC Bind ... [ OK ] Started RPC Bind . [ 75.096917] RPC: Registered named UNIX socket transport module. [ 75.097892] RPC: Registered udp transport module. [ 75.098783] RPC: Registered tcp transport module. [ 75.099508] RPC: Registered tcp NFSv4.1 backchannel transport module. [ OK ] Mounted RPC Pipe File System . [ OK ] Reached target rpc_pipefs.target . [ 75.248508] mktemp (1020) used greatest stack depth: 23176 bytes left [ OK ] Started Security Auditing Service . Starting Record System Boot/Shutdown in UTMP ... [ OK ] Finished Record System Boot/Shutdown in UTMP . [ OK ] Reached target System Initialization . [ OK ] Started dnf makecache --timer . [ OK ] Started Daily Cleanup of Temporary Directories . [ OK ] Listening on D-Bus System Message Bus Socket . [ OK ] Listening on SSSD Kerberos…ache Manager responder socket . [ OK ] Reached target Socket Units . [ OK ] Reached target Basic System . Starting Network Manager ... Starting NTP client/server ... Starting Restore /run/initramfs on shutdown ... [ OK ] Started irqbalance daemon . Starting Load CPU microcode update ... Starting System Logging Service ... [ OK ] Reached target sshd-keygen.target . [ OK ] Reached target User and Group Name Lookups . Starting User Login Management ... [ OK ] Started System Logging Service . [ OK ] Finished Restore /run/initramfs on shutdown . Starting D-Bus System Message Bus ... 
[ OK ] Started NTP client/server . Starting Wait for chrony to synchronize system clock ... [ 77.795793] reload_microcod (1063) used greatest stack depth: 21048 bytes left [ OK ] Finished Load CPU microcode update . [ OK ] Started D-Bus System Message Bus . [ OK ] Started User Login Management . [ OK ] Started Network Manager . [ OK ] Created slice User Slice of UID 0 . [ OK ] Reached target Network . Starting Network Manager Wait Online ... Starting GSSAPI Proxy Daemon ... Starting OpenSSH server daemon ... Starting User Runtime Directory /run/user/0 ... Starting Hostname Service ... [ OK ] Finished User Runtime Directory /run/user/0 . Starting User Manager for UID 0 ... [ OK ] Started OpenSSH server daemon . [ OK ] Started GSSAPI Proxy Daemon . [ OK ] Reached target NFS client services . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Reached target Remote File Systems . Starting Permit User Sessions ... [ OK ] Finished Permit User Sessions . [ OK ] Started Getty on tty1 . [ OK ] Started Serial Getty on ttyS1 . [ OK ] Reached target Login Prompts . [ OK ] Started Hostname Service . [ OK ] Listening on Load/Save RF …itch Status /dev/rfkill Watch . Starting Network Manager Script Dispatcher Service ... [ OK ] Started Network Manager Script Dispatcher Service . [ OK ] Started User Manager for UID 0 . [ 82.496467] tg3 0000:03:00.0 eno1: Link is up at 1000 Mbps, full duplex [ 82.497048] tg3 0000:03:00.0 eno1: Flow control is off for TX and off for RX [ 82.497872] tg3 0000:03:00.0 eno1: EEE is disabled [ 82.498945] IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready CentOS Stream 9 Kernel 5.14.0-256.2009_766119311.el9.x86_64+debug on an x86_64 hpe-dl360pgen8-08 login: [ 91.500440] restraintd[1516]: * Fetching recipe: http://lab-02.hosts.prod.psi.bos.redhat.com:8000//recipes/13330040/ [ 91.647684] restraintd[1516]: * Parsing recipe [ 91.697957] restraintd[1516]: * Running recipe [ 91.700216] restraintd[1516]: ** Continuing task: 155735207 [/mnt/tests/github.com/beaker-project/beaker-core-tasks/archive/master.tar.gz/reservesys] [ 91.748469] restraintd[1516]: ** Preparing metadata [ 91.870575] restraintd[1516]: ** Refreshing peer role hostnames: Retries 0 [ 92.008613] restraintd[1516]: ** Updating env vars [ 92.009586] restraintd[1516]: *** Current Time: Fri Feb 03 00:34:00 2023 Localwatchdog at: * Disabled! * [ 92.115781] restraintd[1516]: ** Running task: 155735207 [/distribution/reservesys] [ 103.512111] Running test [R:13330040 T:155735207 - /distribution/reservesys - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 134.351179] dracut-install (2477) used greatest stack depth: 20920 bytes left [ 155.245440] Running test [R:13330040 T:5 - Boot test - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [-- MARK -- Fri Feb 3 05:35:00 2023] [ 273.750948] PKCS7: Message signed outside of X.509 validity window [ 387.579006] Running test [R:13330040 T:6 - /kernel/kdump/setup-nfsdump - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] Stopping Session 2 of User root ... [ OK ] Removed slice Slice /system/modprobe . [ OK ] Removed slice Slice /system/sshd-keygen . [ OK ] Removed slice Slice /system/systemd-hibernate-resume . [ OK ] Stopped target Multi-User System . [ OK ] Stopped target Login Prompts . [ OK ] Stopped target rpc_pipefs.target . [ OK ] Stopped target RPC Port Mapper . [ OK ] Stopped target Timer Units . [ OK ] Stopped dnf makecache --timer . [ OK ] Stopped Daily rotation of log files . [ OK ] Stopped Daily Cleanup of Temporary Directories . 
[ OK ] Closed LVM2 poll daemon socket . [ OK ] Closed Process Core Dump Socket . [ OK ] Closed Load/Save RF Kill Switch Status /dev/rfkill Watch . Unmounting RPC Pipe File System ... Stopping Command Scheduler ... Stopping Restore /run/initramfs on shutdown ... Stopping Getty on tty1 ... Stopping irqbalance daemon ... Stopping The restraint harness. ... Stopping System Logging Service ... Stopping Serial Getty on ttyS1 ... Stopping OpenSSH server daemon ... Stopping Hostname Service ... Stopping Load/Save Random Seed ... [ OK ] Stopped irqbalance daemon . [ OK ] Stopped OpenSSH server daemon . [ 415.468821] sda1: Can't mount, would change RO state [ OK ] Stopped Getty on tty1 . [ OK ] Stopped Serial Getty on ttyS1 . [ OK ] Stopped Command Scheduler . [ OK ] Stopped The restraint harness. . [ OK ] Stopped System Logging Service . [ OK ] Stopped Hostname Service . [ OK ] Unmounted RPC Pipe File System . [ OK ] Stopped Load/Save Random Seed . [ OK ] Stopped Session 2 of User root . [ OK ] Removed slice Slice /system/getty . [ OK ] Removed slice Slice /system/serial-getty . [ OK ] Stopped target Network is Online . [ OK ] Stopped target sshd-keygen.target . [ OK ] Stopped target System Time Synchronized . [ OK ] Stopped target System Time Set . [ OK ] Stopped Network Manager Wait Online . [ OK ] Stopped Wait for chrony to synchronize system clock . Stopping NTP client/server ... Stopping User Login Management ... Stopping Permit User Sessions ... Stopping User Manager for UID 0 ... [ OK ] Stopped NTP client/server . [ OK ] Stopped User Manager for UID 0 . [ OK ] Stopped User Login Management . [ OK ] Stopped Permit User Sessions . [ OK ] Stopped target User and Group Name Lookups . [ OK ] Stopped target Remote File Systems . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target NFS client services . Stopping GSSAPI Proxy Daemon ... Stopping User Runtime Directory /run/user/0 ... [ OK ] Stopped GSSAPI Proxy Daemon . [ OK ] Stopped target Network . Stopping Network Manager ... [ OK ] Unmounted /run/user/0 . [ OK ] Stopped User Runtime Directory /run/user/0 . [ OK ] Removed slice User Slice of UID 0 . [ OK ] Stopped Network Manager . [ OK ] Stopped target Preparation for Network . [ OK ] Stopped Generate network units from Kernel command line . [ * * * ] A stop job is running for Restore /…tramfs on shutdown (3s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (4s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (4s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (5s / no limit) M [ * * ] A stop job is running for Restore /…tramfs on shutdown (5s / no limit) M [ * ] A stop job is running for Restore /…tramfs on shutdown (6s / no limit) M [ * * ] A stop job is running for Restore /…tramfs on shutdown (6s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (7s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (7s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (8s / no limit) M [ * * * ] A stop job is running for Restore /…tramfs on shutdown (8s / no limit) M [ * * ] A stop job is running for Restore /…tramfs on shutdown (9s / no limit) M [ * ] A stop job is running for Restore /…tramfs on shutdown (9s / no limit) M [ OK ] Stopped Restore /run/initramfs on shutdown . [ OK ] Stopped target Basic System . [ OK ] Stopped target Path Units . [ OK ] Stopped target Slice Units . 
[ OK ] Removed slice User and Session Slice . [ OK ] Stopped target Socket Units . [ OK ] Closed SSSD Kerberos Cache Manager responder socket . Stopping D-Bus System Message Bus ... [ OK ] Stopped D-Bus System Message Bus . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped target System Initialization . [ OK ] Unset automount Arbitrary …s File System Automount Point . [ OK ] Stopped target Local Encrypted Volumes . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped Forward Password R…uests to Wall Directory Watch . [ OK ] Stopped target Local Integrity Protected Volumes . [ OK ] Stopped target Swaps . [ OK ] Stopped target Local Verity Protected Volumes . Deactivating swap /dev/cs_hpe-dl360pgen8-08/swap ... [ OK ] Stopped Read and set NIS d…e from /etc/sysconfig/network . [ OK ] Stopped Automatic Boot Loader Update . [ OK ] Stopped Apply Kernel Variables . Stopping Record System Boot/Shutdown in UTMP ... [ OK ] Unmounted /run/credentials/systemd-sysctl.service . [ OK ] Deactivated swap /dev/cs_hpe-dl360pgen8-08/swap . [ OK ] Deactivated swap /dev/disk…e-cs_hpe--dl360pgen8--08-swap . [ OK ] Deactivated swap /dev/disk…VddQjM1zfKyIkLdc2MXslMhJMCGs5 . [ OK ] Deactivated swap /dev/disk…9-6e40-4c96-b71b-8f5ccefa4a5f . [ OK ] Deactivated swap /dev/dm-1 . [ OK ] Deactivated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ OK ] Stopped Record System Boot/Shutdown in UTMP . Stopping Security Auditing Service ... [ 425.051771] audit: type=1305 audit(1675402767.154:126): op=set audit_pid=0 old=1012 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 [ OK ] Stopped Security Auditing Service . [ 425.099857] audit: type=1131 audit(1675402767.203:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Create Volatile Files and Directories . [ 425.115153] audit: type=1131 audit(1675402767.218:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped target Local File Systems . Unmounting /boot ... Unmounting /home ... Unmounting /run/credential…temd-tmpfiles-setup.service ... [ 425.215867] XFS (dm-2): Unmounting Filesystem Unmounting /run/credential…-tmpfiles-setup-dev.service ... [ OK ] Unmounted /run/credentials…ystemd-tmpfiles-setup.service . [ OK ] Unmounted /run/credentials…md-tmpfiles-setup-dev.service . [ OK ] Unmounted /home . [ 425.406520] XFS (sda1): Unmounting Filesystem [ OK ] Unmounted /boot . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Reached target Unmount All Filesystems . Stopping Monitoring of LVM…meventd or progress polling ... [ OK ] Stopped Remount Root and Kernel File Systems . [ 425.966761] audit: type=1131 audit(1675402768.070:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Create Static Device Nodes in /dev . [ 425.971971] audit: type=1131 audit(1675402768.075:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Monitoring of LVM2… dmeventd or progress polling . 
[ 426.307228] audit: type=1131 audit(1675402768.410:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Reached target System Shutdown . [ OK ] Reached target Late Shutdown Services . [ OK ] Finished System Reboot . [ 426.330190] audit: type=1130 audit(1675402768.433:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ 426.331541] audit: type=1131 audit(1675402768.433:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Reached target System Reboot . [ 426.350127] audit: type=1334 audit(1675402768.453:134): prog-id=0 op=UNLOAD [ 426.350674] audit: type=1334 audit(1675402768.454:135): prog-id=0 op=UNLOAD [ 426.499244] watchdog: watchdog0: watchdog did not stop! [ 426.632640] systemd-shutdown[1]: Using hardware watchdog 'HPE iLO2+ HW Watchdog Timer', version 0, device /dev/watchdog0 [ 426.633511] systemd-shutdown[1]: Watchdog running with a timeout of 10min. [ 426.722562] systemd-shutdown[1]: Syncing filesystems and block devices. [ 426.761859] systemd-shutdown[1]: Sending SIGTERM to remaining processes... [ 426.942842] systemd-journald[829]: Received SIGTERM from PID 1 (systemd-shutdow). [ 427.036629] systemd-shutdown[1]: Sending SIGKILL to remaining processes... [ 427.214476] systemd-shutdown[1]: Unmounting file systems. [ 427.231595] [7532]: Remounting '/' read-only with options 'seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota'. [ 427.628820] systemd-shutdown[1]: All filesystems unmounted. [ 427.629211] systemd-shutdown[1]: Deactivating swaps. [ 427.630153] systemd-shutdown[1]: All swaps deactivated. [ 427.631063] systemd-shutdown[1]: Detaching loop devices. [ 427.633062] systemd-shutdown[1]: All loop devices detached. [ 427.633533] systemd-shutdown[1]: Stopping MD devices. [ 427.635228] systemd-shutdown[1]: All MD devices stopped. [ 427.635695] systemd-shutdown[1]: Detaching DM devices. [ 427.659217] systemd-shutdown[1]: Detaching DM /dev/dm-2 (253:2). [ 427.710301] systemd-shutdown[1]: Detaching DM /dev/dm-1 (253:1). [ 427.737694] systemd-shutdown[1]: Not all DM devices detached, 1 left. [ 427.739027] systemd-shutdown[1]: Detaching DM devices. [ 427.747198] systemd-shutdown[1]: Not all DM devices detached, 1 left. [ 427.748005] systemd-shutdown[1]: Cannot finalize remaining DM devices, continuing. [ 427.749060] watchdog: watchdog0: watchdog did not stop! [ 427.792505] systemd-shutdown[1]: Successfully changed into root pivot. [ 427.792965] systemd-shutdown[1]: Returning to initrd... [ 428.729752] dracut Warning: Killing all remaining processes dracut Warning: Killing all remaining processes [ 430.148024] XFS (dm-0): Unmounting Filesystem [ 430.643993] dracut Warning: Unmounted /oldroot. dracut Warning: Unmounted /oldroot. [ 430.822033] dracut: Disassembling device-mapper devices Rebooting. [ 431.355532] kvm: exiting hardware virtualization [ 432.785333] reboot: Restarting system [ 432.785964] reboot: machine restart [-- MARK -- Fri Feb 3 05:40:00 2023] [7l [7l [7l ProLiant System BIOS - P71 (05/21/2018) Copyright 1982, 2018 Hewlett-Packard Development Company, L.P. 
32 GB Installed 2 Processor(s) detected, 12 total cores enabled, Hyperthreading is enabled Proc 1: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz Proc 2: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz QPI Speed: 7.2 GT/s HP Power Profile Mode: Balanced Power and Performance Power Regulator Mode: Dynamic Power Savings Redundant ROM Detected - This system contains a valid backup System ROM. Inlet Ambient Temperature: 19C/66F Advanced Memory Protection Mode: Advanced ECC Support HP SmartMemory authenticated in all populated DIMM slots. SATA Option ROM ver 2.00.C02 Copyright 1982, 2011. Hewlett-Packard Development Company, L.P. iLO 4 Advanced press [F8] to configure iLO 4 v2.80 Jan 25 2022 10.16.216.85 Slot 0 HP Smart Array P420i Controller Initializing... (0 MB, v8.32) 1 Logical Drive Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Slot 1 HP Smart Array P421 Controller Initializing... (1 GB, v8.32) 0 Logical Drives 1785-Slot 1 Drive Array Not Configured No Drives Detected Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Broadcom NetXtreme Ethernet Boot Agent Copyright (C) 2000-2017 Broadcom Corporation All rights reserved. Press Ctrl-S to enter Configuration Menu 
Press "F9" key for ROM-Based Setup Utility Press "F10" key for Intelligent Provisioning Press "F11" key for Default Boot Override Options Press "F12" key for Network Boot For access via BIOS Serial Console Press "ESC+9" for ROM-Based Setup Utility Press "ESC+0" for Intelligent Provisioning Press "ESC+!" for Default Boot Override Options Press "ESC+@" for Network Boot Attempting Boot From NIC Broadcom UNDI PXE-2.1 v20.6.50 Copyright (C) 2000-2017 Broadcom Corporation Copyright (C) 1997-2000 Intel Corporation All rights reserved. CLIENT MAC ADDR: 2C 44 FD 84 51 C4. GUID: 30343536-3138-5355-4534-303452355454 DHCP. CLIENT IP: 10.16.216.84 MASK: 255.255.254.0 DHCP IP: 10.19.43.29 GATEWAY IP: 10.16.217.254 TFTP. TFTP. 
PXELINUX 4.05 2011-12-09 Copyright (C) 1994-2011 H. Peter Anvin et al !PXE entry point found (we hope) at 95A1:00D6 via plan A UNDI code segment at 95A1 len 6B70 UNDI data segment at 91EA len 3B70 Getting cached packet 01 02 03 My IP address seems to be 0A10D854 10.16.216.84 ip=10.16.216.84:10.19.165.164:10.16.217.254:255.255.254.0 BOOTIF=01-2c-44-fd-84-51-c4 SYSUUID=36353430-3831-5553-4534-303452355454 TFTP prefix: Trying to load: pxelinux.cfg/36353430-3831-5553-4534-303452355454 Trying to load: pxelinux.cfg/01-2c-44-fd-84-51-c4 Trying to load: pxelinux.cfg/0A10D854 Trying to load: pxelinux.cfg/0A10D85 Trying to load: pxelinux.cfg/0A10D8 Trying to load: pxelinux.cfg/0A10D Trying to load: pxelinux.cfg/0A10 Trying to load: pxelinux.cfg/0A1 Trying to load: pxelinux.cfg/0A Trying to load: pxelinux.cfg/0 Trying to load: pxelinux.cfg/default ok ********************************************* Red Hat Engineering Labs Network Boot Press ENTER to boot from local disk Type "menu" at boot prompt to view install menu ********************************************* boot: Booting... 
Use the ^ and v keys to change the selection. Press 'e' to edit the selected item, or 'c' for a command prompt. CentOS Stream (5.14.0-256.2009_766119311.el9.x86_64+debug) 9 with debugg> CentOS Stream (5.14.0-247.el9.x86_64) 9 CentOS Stream (0-rescue-99e1b32cbaf74173bd2789197e86723f) 9 The selected entry will be started automatically in 5s. The selected entry will be started automatically in 4s. The selected entry will be started automatically in 3s. The selected entry will be started automatically in 2s. The selected entry will be started automatically in 1s. The selected entry will be started automatically in 0s. Probing EDD (edd=off to disable)... ok [ 0.000000] microcode: microcode updated early to revision 0x42e, date = 2019-03-14 [ 0.000000] [ 0.000000] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. 
[ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. [ 0.000000] signal: max sigframe size: 1776 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009c7ff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009c800-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bddabfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved [ 000000] BIOS-e820: [mem 0x0000000100000000-0x000000083fffefff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 2095.096 MHz processor [ 0.001550] last_pfn = 0x83ffff max_arch_pfn = 0x400000000 [ 0.002388] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.008449] last_pfn = 0xbddac max_arch_pfn = 0x400000000 [ 0.014947] found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] [ 0.015005] Using GB pages for direct mapping [ 0.016827] RAMDISK: [mem 0x33a57000-0x35d23fff] [ 0.016841] ACPI: Early table checksum verification disabled [ 0.016854] ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP ) [ 0.016871] ACPI: XSDT 0x00000000BDDAED00 0000E4 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.016893] ACPI: FACP 0x00000000BDDAEE40 0000F4 (v03 HP ProLiant 00000002 ? 0000162E) [ 0.016913] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20211217/tbfadt-669) [ 0.016925] ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, usin211217/tbfadt-669) [ 0.016940] ACPI: DSDT 0x00000000BDDAEF40 0026DC (v01 HP DSDT 00000001 INTL 20030228) [ 0.016955] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016968] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016981] ACPI: SPCR 0x00000000BDDAC180 000050 (v01 HP SPCRRBSU 00000001 ? 0000162E) [ 0.016994] ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000) [ 0.017008] ACPI: HPET 0x00000000BDDAC240 000038 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.017022] ACPI: FFFF 0x00000000BDDAC280 000064 (v02 HP ProLiant 00000002 ? 0000162E) [ 0.017036] ACPI: SPMI 0x00000000BDDAC300 000040 (v05 HP ProLiant 00000001 ? 0000162E) [ 0.017050] ACPI: ERST 0x00000000BDDAC340 000230 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017065] ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000) [ 0.017079] ACPI: SRAT 0x00000000BDDAC800 000750 (v01 HP Proliant 00000001 ? 
0000162E) [ 0.017093] ACPI: FFFF 0x00000000BDDACF80 000176 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017107] ACPI: BERT 0x00000000BDDAD100 000030 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017121] ACPI: HEST 0x00000000BDDAD140 0000BC (v01 HP ProLiant 02E) [ 0.017135] ACPI: DMAR 0x00000000BDDAD200 00051C (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017149] ACPI: FFFF 0x00000000BDDAEC40 000030 (v01 HP ProLiant 00000001 00000000) [ 0.017163] ACPI: PCCT 0x00000000BDDAEC80 00006E (v01 HP Proliant 00000001 PH 0000504D) [ 0.017177] ACPI: SSDT 0x00000000BDDB1640 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503) [ 0.017191] ACPI: SSDT 0x00000000BDDB1E40 000103 (v03 HP CRSPCI0 00000002 HP 00000001) [ 0.017205] ACPI: SSDT 0x00000000BDDB1F80 000098 (v03 HP CRSPCI1 00000002 HP 00000001) [ 0.017219] ACPI: SSDT 0x00000000BDDB2040 00038A (v02 HP riser0 00000002 INTL 20030228) [ 0.017233] ACPI: SSDT 0x00000000BDDB2400 000385 (v03 HP riser1a 00000002 INTL 20030228) [ 0.017247] ACPI: SSDT 0x00000000BDDB27C0 000BB9 (v01 HP pcc 00000001 INTL 20120503) [ 0.017261] ACPI: SSDT 0x00000000BDDB3380 000377 (v01 HP pmab 00000001 INTL 20120503) [ 0.017275] ACPI: SSDT 0x00000000BDDB3700 005524 (v01 HP pcc2 00000001 INTL 20120503) [ 0.017289] ACPI: SSDT 0x00000000BDDB8C40 003AEC (v01 INTEL PPM RCM 00000001 INTL 20061109) [ 0.017302] ACPI: Reserving FACP table memory at [mem 0xbddaee40-0.017307] ACPI: Reserving DSDT table memory at [mem 0xbddaef40-0xbddb161b] [ 0.017312] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017316] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017321] ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf] [ 0.017325] ACPI: Reserving MCFG table memory at [mem 0xbddac200-0xbddac23b] [ 0.017329] ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277] [ 0.017333] ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3] [ 0.017338] ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f] [ 0.017342] ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f] [ 0.017347] ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9] [ 0.017352] ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f] [ 0.017356] ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5] [ 0.017361] ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f] [ 0.017365] ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb] [ 0.017370] ACPI: Reserving DMAR table memory at [mem 0xbddad200-0 0.017374] ACPI: Reserving FFFF table memory at [mem 0xbddaec40-0xbddaec6f] [ 0.017378] ACPI: Reserving PCCT table memory at [mem 0xbddaec80-0xbddaeced] [ 0.017383] ACPI: Reserving SSDT table memory at [mem 0xbddb1640-0xbddb1e29] [ 0.017387] ACPI: Reserving SSDT table memory at [mem 0xbddb1e40-0xbddb1f42] [ 0.017392] ACPI: Reserving SSDT table memory at [mem 0xbddb1f80-0xbddb2017] [ 0.017397] ACPI: Reserving SSDT table memory at [mem 0xbddb2040-0xbddb23c9] [ 0.017401] ACPI: Reserving SSDT table memory at [mem 0xbddb2400-0xbddb2784] [ 0.017406] ACPI: Reserving SSDT table memory at [mem 0xbddb27c0-0xbddb3378] [ 0.017410] ACPI: Reserving SSDT table memory at [mem 0xbddb3380-0xbddb36f6] [ 0.017415] ACPI: Reserving SSDT table memory at [mem 0xbddb3700-0xbddb8c23] [ 0.017420] ACPI: Reserving SSDT table memory at [mem 0xbddb8c40-0xbddbc72b] [ 0.017514] SRAT: PXM 0 -> APIC 0x00 -> Node 0 [ 0.017521] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.017525] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.017529] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 
0.017532] SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.017537] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.017540] SRAT: PXM 0 -> APIC 0x06 -> Node 0 [ XM 0 -> APIC 0x07 -> Node 0 [ 0.017548] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.017552] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.017556] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 0.017560] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.017564] SRAT: PXM 1 -> APIC 0x20 -> Node 1 [ 0.017568] SRAT: PXM 1 -> APIC 0x21 -> Node 1 [ 0.017572] SRAT: PXM 1 -> APIC 0x22 -> Node 1 [ 0.017576] SRAT: PXM 1 -> APIC 0x23 -> Node 1 [ 0.017580] SRAT: PXM 1 -> APIC 0x24 -> Node 1 [ 0.017584] SRAT: PXM 1 -> APIC 0x25 -> Node 1 [ 0.017588] SRAT: PXM 1 -> APIC 0x26 -> Node 1 [ 0.017591] SRAT: PXM 1 -> APIC 0x27 -> Node 1 [ 0.017595] SRAT: PXM 1 -> APIC 0x28 -> Node 1 [ 0.017599] SRAT: PXM 1 -> APIC 0x29 -> Node 1 [ 0.017602] SRAT: PXM 1 -> APIC 0x2a -> Node 1 [ 0.017606] SRAT: PXM 1 -> APIC 0x2b -> Node 1 [ 0.017617] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x43fffffff] [ 0.017624] ACPI: SRAT: Node 1 PXM 1 [mem 0x440000000-0x83fffffff] [ 0.017661] NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff] [ 0.017705] NODE_DATA(1) allocated [mem 0x83ffd4000-0x83fffefff] [ 0.018211] Reserving 2048MB of memory at 9l (System RAM: 32733MB) [ 0.116268] Zone ranges: [ 0.116284] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.116297] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.116305] Normal [mem 0x0000000100000000-0x000000083fffefff] [ 0.116312] Device empty [ 0.116318] Movable zone start for each node [ 0.116324] Early memory node ranges [ 0.116328] node 0: [mem 0x0000000000001000-0x000000000009bfff] [ 0.116335] node 0: [mem 0x0000000000100000-0x00000000bddabfff] [ 0.116340] node 0: [mem 0x0000000100000000-0x000000043fffffff] [ 0.116346] node 1: [mem 0x0000000440000000-0x000000083fffefff] [ 0.116358] Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff] [ 0.116374] Initmem setup node 1 [mem 0x0000000440000000-0x000000083fffefff] [ 0.116398] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.116637] On node 0, zone DMA: 100 pages in unavailable ranges [ 0.164451] On node 0, zone Normal: 8788 pages in unavailable ranges [ 0.166513] On node 1, zone Normal: 1 pages in unavailable ranges [ 0.898912] kasan: KernelAddressSanitizer initialized [ 0.899239] ACPI: PM-Timer IO Port: 0x908 [ 0.899278] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.899340] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, 0.899355] IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47 [ 0.899366] IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71 [ 0.899375] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) [ 0.899385] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.899402] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.899407] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.899422] ACPI: SPCR: SPCR table version 1 [ 0.899426] ACPI: SPCR: Unexpected SPCR Access Width. 
Defaulting to byte size [ 0.899433] ACPI: SPCR: console: u,mmio,0x0,9600 [ 0.899441] TSC deadline timer available [ 0.899446] smpboot: Allowing 64 CPUs, 40 hotplug CPUs [ 0.899533] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.899543] PM: hibernation: Registered nosave memory: [mem 0x0009c000-0x0009cfff] [ 0.899548] PM: hibernation: Registered nosave memory: [mem 0x0009d000-0x0009ffff] [ 0.899552] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.899557] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.899566] PM: hibernation: Registered nosave memory: [mem 0xbddac000-0xbddddfff] [ 0.899571] PM: hibernation: Registered nosave memory: [mem 0xbddde000-0xcfffffff] [ 0.899575] PM: hibernation: Registered nosave memory: [mem 0xd0000 [ 0.899579] PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfee0ffff] [ 0.899583] PM: hibernation: Registered nosave memory: [mem 0xfee10000-0xff7fffff] [ 0.899587] PM: hibernation: Registered nosave memory: [mem 0xff800000-0xffffffff] [ 0.899601] [mem 0xd0000000-0xfebfffff] available for PCI devices [ 0.899606] Booting paravirtualized kernel on bare hardware [ 0.899622] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [ 0.920959] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:2 [ 1.004360] percpu: Embedded 515 pages/cpu s2072576 r8192 d28672 u4194304 [ 1.005161] Fallback order for Node 0: 0 1 [ 1.005189] Fallback order for Node 1: 1 0 [ 1.005231] Built 2 zonelists, mobility grouping on. Total pages: 8248628 [ 1.005237] Policy zone: Normal [ 1.005258] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16-128G:2G,128G-:4G [ 1.005463] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug", will be passed to user space. [ 1.007161] mem auto-init: stack:off, heap alloc:off, heap free:off [ 1.007168] Stack Depot early init allocating hash table with memblock_alloc, 8388608 bytes [ 1.009061] software IO TLB: area num 64. [ 3.359317] Memory: 1173116K/33518872K available (38920K kernel code, 13007K rwdata, 14984K rodata, 5300K init, 42020K bss, 7436796K reserved, 0K cma-reserved) [ 3.359357] random: get_random_u64 called from kmem_cache_open+0x22/0x380 with crng_init=0 [ 3.380919] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=64, Nodes=2 [ 3.380928] kmemleak: Kernel memory leak detector disabled [ 3.385218] Kernel/User page tables isolation: enabled [ 3.385819] ftrace: allocating 45745 entries in 179 pages [ 3.427445] ftrace: allocated 179 pages with 5 groups [ 3.433316] Dynamic Preempt: voluntary [ 3.437825] Running RCU self tests [ 3.439302] rcu: Preemptible hierarchical RCU implementation. [ 3.439306] rcu: RCU lockdep checking is enabled. [ 3.439310] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=64. [ 3.439316] rcu: RCU callback double-/use-after-free debug is enabled. [ 3.439320] Trampoline variant of Tasks RCU enabled. [ 3.439323] Rude variant of Tasks RCU enabled.Tracing variant of Tasks RCU enabled. [ 3.439332] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
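The crashkernel= option on the command line above uses range syntax: each "range:size" entry means "if installed RAM falls in this range, reserve this much memory for the crash kernel". With roughly 32 GB present ("System RAM: 32733MB" earlier in the log), the 16G-64G:2G entry applies, which matches the 2048MB reservation reported during early boot. A small sketch of that selection, with helper names and simplified unit handling assumed for illustration:

```python
# Sketch: pick the crashkernel reservation for a given RAM size from the
# range syntax used on the kernel command line above.
UNITS = {"M": 1 << 20, "G": 1 << 30}

def size(s: str) -> int:
    return 0 if s == "" else int(s[:-1]) * UNITS[s[-1]]

def crashkernel_reservation(spec: str, ram_bytes: int) -> int:
    for entry in spec.split(","):
        rng, reserve = entry.split(":")
        lo, hi = rng.split("-")
        # An empty upper bound (e.g. "128G-") means "no upper limit".
        if size(lo) <= ram_bytes and (hi == "" or ram_bytes < size(hi)):
            return size(reserve)
    return 0

spec = "1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G"
ram = 32733 * (1 << 20)          # "System RAM: 32733MB" from the log
print(crashkernel_reservation(spec, ram) // (1 << 20), "MB")   # -> 2048 MB
```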
[ 3.439336] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=64 [ 3.461609] NR_IRQS: 524544, nr_irqs: 1752, preallocated irqs: 16 [ 3.462538] rcu: srcu_init: Setting srcu_struct sizes based on contention. [ 3.462638] random: crng init done (trusting CPU's manufacturer) [ 3.469835] Console: colour VGA+ 80x25 [ 8.732720] printk: console [ttyS1] enabled [ 8.734148] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar [ 8.736767] ... MAX_LOCKDEP_SUBCLASSES: 8 [ 8.738163] ... MAX_LOCK_DEPTH: 48 [ 8.739630] ... MAX_LOCKDEP_KEYS: 8192 [ 8.741265] ... CLASSHASH_SIZE: 4096 [ 8.742862] ... MAX_LOCKDEP_ENTRIES: 65536 [ 8.744381] ... MAX_LOCKDEP_CHAINS: 131072 [ 8.745945] ... CHAINHASH_SIZE: 65536 [ 8.747453] memory used by lock dependency info: 11641 kB [ 8.749305] memory used for stack traces: 4224 kB [ 8.750941] per task-struct memory footprint: 2688 bytes [ 8.753226] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balan 9.089885] ACPI: Core revision 20211217 [ 9.159823] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns [ 9.163198] APIC: Switch to symmetric I/O mode setup [ 9.165154] DMAR: Host address width 46 [ 9.166564] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0 [ 9.168529] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.171815] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1 [ 9.173896] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.176635] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff [ 9.178811] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff [ 9.181052] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff [ 9.183280] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff [ 9.185435] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff [ 9.187590] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff [ 9.189760] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff [ 9.191975] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff [ 9.194130] DMAR: [Firmware Bug]: No firmware reserved region can 00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes [ 9.698640] DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] [ 9.698640] BIOS vendor: HP; Ver: P71; Product Version: [ 9.703723] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff [ 9.706015] DMAR: ATSR flags: 0x0 [ 9.707217] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0 [ 9.709594] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.711863] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.714400] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000 [ 9.716277] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 9.716282] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. [ 9.722544] DMAR-IR: Enabled IRQ remapping in xapic mode [ 9.724444] x2apic: IRQ remapping doesn't support X2APIC mode [ 9.726444] Switched APIC routing to physical flat. [ 9.730119] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 9.737141] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e3317deb62, max_idle_ns: 440795270104 ns [ 9.740812] Calibrating delay loop (skipped), value calculated using time.19 BogoMIPS (lpj=2095096) [ 9.741809] pid_max: default: 65536 minimum: 512 [ 9.744427] LSM: Security Framework initializing [ 9.744943] Yama: becoming mindful. [ 9.745911] SELinux: Initializing. 
[ 9.747334] LSM support for eBPF active [ 9.761708] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, vmalloc hugepage) [ 9.768681] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc hugepage) [ 9.769339] Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.770124] Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.776096] CPU0: Thermal monitoring enabled (TM1) [ 9.776937] process: using mwait in idle threads [ 9.777823] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 [ 9.778805] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4 [ 9.779832] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 9.780809] Spectre V2 : Mitigation: Retpolines [ 9.781805] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 9.782805] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT [ 9.783805] Spectre V2 : Enabling Restricn for firmware calls [ 9.784815] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 9.785806] Spectre V2 : User space: Mitigation: STIBP via prctl [ 9.786807] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [ 9.787821] MDS: Mitigation: Clear CPU buffers [ 9.788805] MMIO Stale Data: Unknown: No mitigations [ 9.828816] Freeing SMP alternatives memory: 32K [ 9.833043] smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1170 [ 9.833850] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (family: 0x6, model: 0x3e, stepping: 0x4) [ 9.838887] cblist_init_generic: Setting adjustable number of callback queues. [ 9.839806] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.841435] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.842459] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.843047] Running RCU-tasks wait API self tests [ 9.947156] Performance Events: PEBS fmt1+, IvyBridge events, 16-deep LBR, full-width counters, Broken BIOS detected, complain to your hardware vendor. [ 9.947812] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330) [ 9.948809] Intel PMU driver. [ 9.949831] ... version: 3 [ 9.950811] ... 48 [ 9.951806] ... generic registers: 4 [ 9.952806] ... value mask: 0000ffffffffffff [ 9.953806] ... max period: 00007fffffffffff [ 9.954806] ... fixed-purpose events: 3 [ 9.955806] ... event mask: 000000070000000f [ 9.958453] rcu: Hierarchical SRCU implementation. [ 9.958807] rcu: Max phase no-delay instances is 400. [ 9.962879] Callback from call_rcu_tasks_trace() invoked. [ 9.978988] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 9.993237] smp: Bringing up secondary CPUs ... [ 9.996030] x86: Booting SMP configuration: [ 9.996810] .... node #0, CPUs: #1 [ 10.007010] #2 [ 10.013211] #3 [ 10.019249] #4 [ 10.025246] #5 [ 10.032363] [ 10.032808] .... node #1, CPUs: #6 [ 6.245214] smpboot: CPU 6 Converting physical 0 to logical die 1 [ 10.107271] Callback from call_rcu_tasks_rude() invoked. [ 10.110914] #7 [ 10.120603] #8 [ 10.130460] #9 [ 10.139496] #10 [ 10.148533] #11 [ 10.156757] [ 10.156813] .... node #0, CPUs: #12 [ 10.161949] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/adminvuln/mds.html for more details. [ 10.165072] #13 [ 10.171150] #14 [ 10.176292] #15 [ 10.182266] #16 [ 10.188076] #17 [ 10.193310] [ 10.193823] .... 
node #1, CPUs: #18 [ 10.200177] #19 [ 10.206212] #20 [ 10.211279] Callback from call_rcu_tasks() invoked. [ 10.213199] #21 [ 10.219238] #22 [ 10.225218] #23 [ 10.229241] smp: Brought up 2 nodes, 24 CPUs [ 10.229827] smpboot: Max logical packages: 6 [ 10.230813] smpboot: Total of 24 processors activated (101703.26 BogoMIPS) [ 10.793799] node 0 deferred pages initialised in 553ms [ 10.797294] pgdatinit0 (143) used greatest stack depth: 29008 bytes left [ 11.240799] node 1 deferred pages initialised in 999ms [ 11.254356] devtmpfs: initialized [ 11.257372] x86/mm: Memory block size: 128MB [ 11.445529] DMA-API: preallocated 65536 debug entries [ 11.445809] DMA-API: debugging enabled by kernel config [ 11.446811] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [ 11.448984] futex hash table entries: 16384 (order: 9, 2097152 bytes, vmalloc) [ 11.455075] prandom: seed boundary self test passed [ 11.456959] prandom: 100 self tests passed [ 11.462559] prandom32: self test passed (less than 6 bits correlated) [ 11.462817] pinctrl core: initialized pinctrl subsystem [ 11.465147] [ 11.465757] ************************************************************* [ 11.465809] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.466907] ** ** [ 11.467810] ** IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL ** [ 11.468807] ** ** [ 11.469807] ** This means that this kernel is built to expose internal ** [ 11.470807] ** IOMMU data structures, which may compromise security on ** [ 11.471807] ** your system. ** [ 11.472807] ** ** [ 11.473808] ** If you see this message and you are not debugging the ** [ 11.474807] ** kernel, report this immediately to your vendor! ** [ 11.475807] ** ** [ 11.476807] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.477807] ************************************************************* [ 11.479080] PM: RTC time: 00:44:36, date: 2023-02-03 [ 11.494809] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 11.499829] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations [ 11.501007] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [ 11.502005] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [ 11.503078] audit: initializing netlink subsys (disabled) [ 11.504303] audit: type=2000 audit(1675385067.797:1): state=initialized audit_enabled=0 res=1 [ 11.507716] thermal_sys: Registered thermal governor 'fair_share' [ 11.507729] thermal_sys: Registered thermal governor 'step_wise' [ 11.507813] thermal_sys: Registered thermal governor 'user_space' [ 11.509285] cpuidle: using governor menu [ 11.511560] Detected 1 PCC Subspaces [ 11.511810] Registering PCC driver as Mailbox controller [ 11.513475] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB [ 11.513844] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 11.514810] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 11.517668] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000) [ 11.517819] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820 [ 11.587370] PCI: Using configuration type 1 for base access [ 11.587843] PCI: HP ProLiant DL360 detected, enabling pci=bfsort. [ 11.588941] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 11.601066] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 11.756156] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
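The audit record above carries a Unix epoch timestamp, audit(1675385067.797:1); decoding it gives 2023-02-03 00:44 UTC, consistent with the RTC date and time the kernel logs a moment earlier. A one-off sketch of the conversion (not tooling taken from the log):

```python
from datetime import datetime, timezone

# Epoch seconds from "audit(1675385067.797:1)" in the log above.
print(datetime.fromtimestamp(1675385067.797, tz=timezone.utc).isoformat())
# -> 2023-02-03T00:44:27.797000+00:00
```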
[ 11.761897] HugeTLB: cmize 7 vmemmap pages for hugepages-2048kB [ 11.762835] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 11.763809] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 11.771110] cryptd: max_cpu_qlen set to 1000 [ 11.779294] ACPI: Added _OSI(Module Device) [ 11.779812] ACPI: Added _OSI(Processor Device) [ 11.780811] ACPI: Added _OSI(3.0 _SCP Extensions) [ 11.781810] ACPI: Added _OSI(Processor Aggregator Device) [ 11.782827] ACPI: Added _OSI(Linux-Dell-Video) [ 11.783822] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 11.784823] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 12.164568] ACPI: 10 ACPI AML tables successfully acquired and loaded [ 12.439203] ACPI: Interpreter enabled [ 12.440103] ACPI: PM: (supports S0 S4 S5) [ 12.440839] ACPI: Using IOAPIC for interrupt routing [ 12.442399] HEST: Table parsing has been initialized. [ 12.442812] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 12.443808] PCI: Using E820 reservations for host bridge windows [ 12.694569] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f]) [ 12.694867] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 12.699276] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 12.699811] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 12.711461] PCI host bridge to bus 0000:00 [ 12.711826] pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window] [ 12.712819] pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window] [ 12.713818] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 12.714817] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 12.715818] pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window] [ 12.716816] pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window] [ 12.717830] pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window] [ 12.718821] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 12.719823] pci_bus 0000:00: root bus resource [bus 00-1f] [ 12.721251] pci 0000:00:00.0: [8086:0e00] type 00 class060000 [ 12.722206] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold [ 12.724190] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400 [ 12.725134] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold [ 12.729729] pci 0000:00:01.1: [8086:0e03] type 01 class 0x060400 [ 12.730134] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold [ 12.734040] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400 [ 12.735166] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold [ 12.739296] pci 0000:00:02.1: [8086:0e05] type 01 class 0x060400 [ 12.740133] pci 0000:00:02.1: PME# supported from D0 D3hot D3cold [ 12.744084] pci 0000:00:02.2: [8086:0e06] type 01 class 0x060400 [ 12.745132] pci 0000:00:02.2: PME# supported from D0 D3hot D3cold [ 12.748977] pci 0000:00:02.3: [8086:0e07] type 01 class 0x060400 [ 12.750132] pci 0000:00:02.3: PME# supported from D0 D3hot D3cold [ 12.754030] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400 [ 12.754891] pci 0000:00:03.0: enabling Extended Tags [ 12.756296] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold [ 12.760208] pci 0000:00:03.1: [8086:0e09] type 01 class 0x060400 [ 12.761133] pci 0000:00:03.1: Pfrom D0 D3hot D3cold [ 12.764971] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400 [ 12.766145] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold [ 12.769941] pci 0000:00:03.3: 
[8086:0e0b] type 01 class 0x060400 [ 12.771131] pci 0000:00:03.3: PME# supported from D0 D3hot D3cold [ 12.775123] pci 0000:00:04.0: [8086:0e20] type 00 class 0x088000 [ 12.775852] pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit] [ 12.778015] pci 0000:00:04.1: [8086:0e21] type 00 class 0x088000 [ 12.778847] pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit] [ 12.781021] pci 0000:00:04.2: [8086:0e22] type 00 class 0x088000 [ 12.781846] pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit] [ 12.784023] pci 0000:00:04.3: [8086:0e23] type 00 class 0x088000 [ 12.784847] pci 0000:00:04.3: reg 0x10: [mem 0xf6cc0000-0xf6cc3fff 64bit] [ 12.787000] pci 0000:00:04.4: [8086:0e24] type 00 class 0x088000 [ 12.787846] pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit] [ 12.790058] pci 0000:00:04.5: [8086:0e25] type 00 class 0x088000 [ 12.790846] pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit] [ 12.793002] pci 0000:00:04.6: [8086:0e26] type 00 c [ 12.793844] pci 0000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit] [ 12.795886] pci 0000:00:04.7: [8086:0e27] type 00 class 0x088000 [ 12.796841] pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit] [ 12.798948] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000 [ 12.800842] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000 [ 12.802832] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020 [ 12.803831] pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff] [ 12.805934] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400 [ 12.807105] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold [ 12.810485] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320 [ 12.810839] pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff] [ 12.812052] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold [ 12.813751] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400 [ 12.814079] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold [ 12.818145] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400 [ 12.819070] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold [ 12.822913] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320 [ 12.823838] pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503ff] [ 12.825051] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold [ 12.826780] pci 0000:00:type 01 class 0x060401 [ 12.827807] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100 [ 12.832001] pci 0000:00:1f.2: [8086:1d00] type 00 class 0x01018f [ 12.832837] pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007] [ 12.833821] pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b] [ 12.834821] pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017] [ 12.835821] pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b] [ 12.836821] pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x402f] [ 12.837820] pci 0000:00:1f.2: reg 0x24: [io 0x4030-0x403f] [ 12.857513] pci 0000:04:00.0: [103c:323b] type 00 class 0x010400 [ 12.857839] pci 0000:04:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff 64bit] [ 12.858825] pci 0000:04:00.0: reg 0x18: [mem 0xf7ef0000-0xf7ef03ff 64bit] [ 12.859819] pci 0000:04:00.0: reg 0x20: [io 0x6000-0x60ff] [ 12.860829] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 12.861819] pci 0000:04:00.0: enabling Extended Tags [ 12.863077] pci 0000:04:00.0: PME# supported from D0 D1 D3hot [ 12.871121] pci 0000:00:01.0: PCI bridge to [bus 04] [ 12.871815] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 12.0:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 12.874282] pci 0000:00:01.1: PCI bridge to [bus 11] [ 12.877635] pci 0000:03:00.0: 
[14e4:1657] type 00 class 0x020000 [ 12.877843] pci 0000:03:00.0: reg 0x10: [mem 0xf6bf0000-0xf6bfffff 64bit pref] [ 12.878827] pci 0000:03:00.0: reg 0x18: [mem 0xf6be0000-0xf6beffff 64bit pref] [ 12.879826] pci 0000:03:00.0: reg 0x20: [mem 0xf6bd0000-0xf6bdffff 64bit pref] [ 12.880820] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 12.882117] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold [ 12.889551] pci 0000:03:00.1: [14e4:1657] type 00 class 0x020000 [ 12.889843] pci 0000:03:00.1: reg 0x10: [mem 0xf6bc0000-0xf6bcffff 64bit pref] [ 12.890827] pci 0000:03:00.1: reg 0x18: [mem 0xf6bb0000-0xf6bbffff 64bit pref] [ 12.891826] pci 0000:03:00.1: reg 0x20: [mem 0xf6ba0000-0xf6baffff 64bit pref] [ 12.892820] pci 0000:03:00.1: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 12.894115] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold [ 12.901531] pci 0000:03:00.2: [14e4:1657] type 00 class 0x020000 [ 12.901842] pci 0000:03:00.2: reg 0x10: [mem 0xf6b90000-0xf6b9ffff 64bit pref] [ 12.902827] pci 0000:03:00.2: reg 0x18: [mem 0xf6b80000-0xf6b8ffff 64bit pref] [ i 0000:03:00.2: reg 0x20: [mem 0xf6b70000-0xf6b7ffff 64bit pref] [ 12.904822] pci 0000:03:00.2: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 12.906115] pci 0000:03:00.2: PME# supported from D0 D3hot D3cold [ 12.913504] pci 0000:03:00.3: [14e4:1657] type 00 class 0x020000 [ 12.913843] pci 0000:03:00.3: reg 0x10: [mem 0xf6b60000-0xf6b6ffff 64bit pref] [ 12.914827] pci 0000:03:00.3: reg 0x18: [mem 0xf6b50000-0xf6b5ffff 64bit pref] [ 12.915826] pci 0000:03:00.3: reg 0x20: [mem 0xf6b40000-0xf6b4ffff 64bit pref] [ 12.916820] pci 0000:03:00.3: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 12.918238] pci 0000:03:00.3: PME# supported from D0 D3hot D3cold [ 12.925548] pci 0000:00:02.0: PCI bridge to [bus 03] [ 12.925824] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 12.927278] pci 0000:00:02.1: PCI bridge to [bus 12] [ 12.928352] pci 0000:02:00.0: [103c:323b] type 00 class 0x010400 [ 12.928839] pci 0000:02:00.0: reg 0x10: [mem 0xf7d00000-0xf7dfffff 64bit] [ 12.929824] pci 0000:02:00.0: reg 0x18: [mem 0xf7cf0000-0xf7cf03ff 64bit] [ 12.930818] pci 0000:02:00.0: reg 0x20: [io 0x5000-0x50ff] [ 12.931829] pci 0000:02:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 000:02:00.0: enabling Extended Tags [ 12.934066] pci 0000:02:00.0: PME# supported from D0 D1 D3hot [ 12.935909] pci 0000:00:02.2: PCI bridge to [bus 02] [ 12.936813] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 12.937811] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 12.939308] pci 0000:00:02.3: PCI bridge to [bus 13] [ 12.957159] pci 0000:00:03.0: PCI bridge to [bus 07] [ 12.958292] pci 0000:00:03.1: PCI bridge to [bus 14] [ 12.959380] pci 0000:00:03.2: PCI bridge to [bus 15] [ 12.960291] pci 0000:00:03.3: PCI bridge to [bus 16] [ 12.961309] pci 0000:00:11.0: PCI bridge to [bus 18] [ 12.962317] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 12.963715] pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 [ 12.963848] pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] [ 12.964826] pci 0000:01:00.0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] [ 12.965827] pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] [ 12.969686] pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 [ 12.969858] pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] [ 12.970828] pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] [ 12.971827] pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] [ 12.973082] pci 0000:01:00.1: Video device with shadowed ROM at 
[mem 0x000c0000-0x000dfff5142] pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 [ 12.975848] pci 0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] [ 12.976827] pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] [ 12.977826] pci 0000:01:00.2: reg 0x18: [mem 0xf6e00000-0xf6efffff] [ 12.978826] pci 0000:01:00.2: reg 0x1c: [mem 0xf6d80000-0xf6dfffff] [ 12.979826] pci 0000:01:00.2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] [ 12.980826] pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] [ 12.981826] pci 0000:01:00.2: reg 0x30: [mem 0x00000000-0x0000ffff pref] [ 12.983199] pci 0000:01:00.2: PME# supported from D0 D3hot D3cold [ 12.984799] pci 0000:01:00.4: [103c:3300] type 00 class 0x0c0300 [ 12.985919] pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] [ 12.990928] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 12.991814] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 12.992818] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 12.993815] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 12.994858] pci_bus 0000:17: extended config space not accessible [ 12.996262] pci 0000:00:1e.0: PCI bridbtractive decode) [ 12.996838] pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) [ 12.997816] pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] (subtractive decode) [ 12.998815] pci 0000:00:1e.0: bridge window [io 0x0000-0x03af window] (subtractive decode) [ 12.999815] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) [ 13.000815] pci 0000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) [ 13.001815] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) [ 13.002814] pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive decode) [ 13.003815] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) [ 13.016595] ACPI: PCI: Interrupt link LNKA configured for IRQ 5 [ 13.018811] ACPI: PCI: Interrupt link LNKB configured for IRQ 7 [ 13.021756] ACPI: PCI: Interrupt link LNKC configured for IRQ 10 [ 13.023779] ACPI: PCI: Interrupt link LNKD configured for IRQ 10 [ 13.025736] ACPI: PCI: Interrupt link LNKE configured for IRQ 5 [ 13.027777] ACPI: PCI: Interrupt link LNKF configured for IRQ 7 [ 13.029719] ACPI: link LNKG configured for IRQ 0 [ 13.029809] ACPI: PCI: Interrupt link LNKG disabled [ 13.032714] ACPI: PCI: Interrupt link LNKH configured for IRQ 0 [ 13.032809] ACPI: PCI: Interrupt link LNKH disabled [ 13.034416] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f]) [ 13.034846] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 13.038676] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 13.038810] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 13.044267] PCI host bridge to bus 0000:20 [ 13.044819] pci_bus 0000:20: root bus resource [mem 0xfb000000-0xfbffffff window] [ 13.045815] pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] [ 13.046809] pci_bus 0000:20: root bus resource [bus 20-3f] [ 13.048035] pci 0000:20:00.0: [8086:0e01] type 01 class 0x060400 [ 13.049091] pci 0000:20:00.0: PME# supported from D0 D3hot D3cold [ 13.050983] pci 0000:20:01.0: [8086:0e02] type 01 class 0x060400 [ 13.052121] pci 0000:20:01.0: PME# supported from D0 D3hot D3cold [ 13.055578] pci 0000:20:01.1: [8086:0e03] type 01 class 0x060400 [ 13.056111] pci 0000:20:01.1: PME# supported ld [ 
13.059620] pci 0000:20:02.0: [8086:0e04] type 01 class 0x060400 [ 13.060109] pci 0000:20:02.0: PME# supported from D0 D3hot D3cold [ 13.063653] pci 0000:20:02.1: [8086:0e05] type 01 class 0x060400 [ 13.064108] pci 0000:20:02.1: PME# supported from D0 D3hot D3cold [ 13.067622] pci 0000:20:02.2: [8086:0e06] type 01 class 0x060400 [ 13.068108] pci 0000:20:02.2: PME# supported from D0 D3hot D3cold [ 13.071783] pci 0000:20:02.3: [8086:0e07] type 01 class 0x060400 [ 13.072108] pci 0000:20:02.3: PME# supported from D0 D3hot D3cold [ 13.075658] pci 0000:20:03.0: [8086:0e08] type 01 class 0x060400 [ 13.075886] pci 0000:20:03.0: enabling Extended Tags [ 13.077056] pci 0000:20:03.0: PME# supported from D0 D3hot D3cold [ 13.080587] pci 0000:20:03.1: [8086:0e09] type 01 class 0x060400 [ 13.081121] pci 0000:20:03.1: PME# supported from D0 D3hot D3cold [ 13.084576] pci 0000:20:03.2: [8086:0e0a] type 01 class 0x060400 [ 13.085108] pci 0000:20:03.2: PME# supported from D0 D3hot D3cold [ 13.088562] pci 0000:20:03.3: [8086:0e0b] type 01 class 0x060400 [ 13.089204] pci 0000:20:03.3: PME# supported from D0 D3hot D3cold [ 13.092541] pci 0000:20:04.0: [8086:0e20] type 00 class 0x088000 [ 13.092843] pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] [ 13.09420:04.1: [8086:0e21] type 00 class 0x088000 [ 13.095844] pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] [ 13.097865] pci 0000:20:04.2: [8086:0e22] type 00 class 0x088000 [ 13.098842] pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] [ 13.100856] pci 0000:20:04.3: [8086:0e23] type 00 class 0x088000 [ 13.101842] pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] [ 13.103851] pci 0000:20:04.4: [8086:0e24] type 00 class 0x088000 [ 13.104842] pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] [ 13.106864] pci 0000:20:04.5: [8086:0e25] type 00 class 0x088000 [ 13.107842] pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] [ 13.109928] pci 0000:20:04.6: [8086:0e26] type 00 class 0x088000 [ 13.110842] pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] [ 13.112952] pci 0000:20:04.7: [8086:0e27] type 00 class 0x088000 [ 13.113843] pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] [ 13.115868] pci 0000:20:05.0: [8086:0e28] type 00 class 0x088000 [ 13.117852] pci 0000:20:05.2: [8086:0e2a] type 00 class 0x088000 [ 13.119859] pci 0000:20:05.4: [8086:0e2c] type 00 class 0x080020 [ 13.120834] pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] [ 13.123301] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 13.124308] pci 0000:20:01.0: PCI bridge to [bus 21] [ 13.1252.1: PCI bridge to [bus 22] [ 13.126278] pci 0000:20:02.0: PCI bridge to [bus 23] [ 13.127408] pci 0000:20:02.1: PCI bridge to [bus 24] [ 13.128318] pci 0000:20:02.2: PCI bridge to [bus 25] [ 13.129281] pci 0000:20:02.3: PCI bridge to [bus 26] [ 13.130293] pci 0000:20:03.0: PCI bridge to [bus 27] [ 13.131270] pci 0000:20:03.1: PCI bridge to [bus 28] [ 13.132294] pci 0000:20:03.2: PCI bridge to [bus 29] [ 13.133270] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 13.141147] iommu: Default domain type: Translated [ 13.141807] iommu: DMA domain TLB invalidation policy: lazy mode [ 13.145632] SCSI subsystem initialized [ 13.146384] ACPI: bus type USB registered [ 13.147264] usbcore: registered new interface driver usbfs [ 13.148017] usbcore: registered new interface driver hub [ 13.149389] usbcore: registered new device driver usb [ 13.150603] pps_core: LinuxPPS API ver. 1 registered [ 13.150808] pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 13.151873] PTP clock support registered [ 13.154912] EDAC MC: Ver: 3.0.0 [ 13.161308] NetLabel: Initializing [ 13.161808] NetLabel: domain ha [ 13.162813] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 13.164282] NetLabel: unlabeled traffic allowed by default [ 13.164816] PCI: Using ACPI for IRQ routing [ 13.167138] PCI: Discovered peer bus 1f [ 13.168787] PCI host bridge to bus 0000:1f [ 13.168817] pci_bus 0000:1f: Unknown NUMA node; performance will be reduced [ 13.169825] pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] [ 13.170822] pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 13.171817] pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] [ 13.172815] pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 13.173915] pci 0000:1f:08.0: [8086:0e80] type 00 class 0x088000 [ 13.176042] pci 0000:1f:09.0: [8086:0e90] type 00 class 0x088000 [ 13.178065] pci 0000:1f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 13.179982] pci 0000:1f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 13.181478] pci 0000:1f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 13.182479] pci 0000:1f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 13.183509] pci 0000:1f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 13.184501] pci 0000:1f:0b.3: [8086:0e0x088000 [ 13.185504] pci 0000:1f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 13.186467] pci 0000:1f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 13.187485] pci 0000:1f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 13.188480] pci 0000:1f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 13.189572] pci 0000:1f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 13.190478] pci 0000:1f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 13.191484] pci 0000:1f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 13.192487] pci 0000:1f:0e.1: [8086:0e30] type 00 class 0x110100 [ 13.193530] pci 0000:1f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 13.194579] pci 0000:1f:0f.1: [8086:0e71] type 00 class 0x088000 [ 13.195589] pci 0000:1f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 13.196579] pci 0000:1f:0f.3: [8086:0eab] type 00 class 0x088000 [ 13.197677] pci 0000:1f:0f.4: [8086:0eac] type 00 class 0x088000 [ 13.198590] pci 0000:1f:0f.5: [8086:0ead] type 00 class 0x088000 [ 13.199596] pci 0000:1f:10.0: [8086:0eb0] type 00 class 0x088000 [ 13.200590] pci 0000:1f:10.1: [8086:0eb1] type 00 class 0x088000 [ 13.201591] pci 0000:1f:10.2: [8086:0eb2] type 00 class 0x088000 [ 13.202599] pci 0000:1f:10.3: [8086:0eb3] type 00 class 0x088000 [ 13.203591] pci 0000:1f:10.4: [8086:0eb4] type 00 class 0x088000 [ 13.204592] pci 0000:1f:10.5: [8080 class 0x088000 [ 13.205598] pci 0000:1f:10.6: [8086:0eb6] type 00 class 0x088000 [ 13.206643] pci 0000:1f:10.7: [8086:0eb7] type 00 class 0x088000 [ 13.207588] pci 0000:1f:13.0: [8086:0e1d] type 00 class 0x088000 [ 13.208511] pci 0000:1f:13.1: [8086:0e34] type 00 class 0x110100 [ 13.209482] pci 0000:1f:13.4: [8086:0e81] type 00 class 0x088000 [ 13.210517] pci 0000:1f:13.5: [8086:0e36] type 00 class 0x110100 [ 13.211489] pci 0000:1f:16.0: [8086:0ec8] type 00 class 0x088000 [ 13.212474] pci 0000:1f:16.1: [8086:0ec9] type 00 class 0x088000 [ 13.213481] pci 0000:1f:16.2: [8086:0eca] type 00 class 0x088000 [ 13.214634] pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f [ 13.214811] pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 13.216830] PCI: Discovered peer bus 3f [ 13.218344] PCI host 
bridge to bus 0000:3f [ 13.218809] pci_bus 0000:3f: Unknown NUMA node; performance will be reduced [ 13.219813] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] [ 13.220813] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] [bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff] [ 13.222810] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 13.223851] pci 0000:3f:08.0: [8086:0e80] type 00 class 0x088000 [ 13.225512] pci 0000:3f:09.0: [8086:0e90] type 00 class 0x088000 [ 13.226513] pci 0000:3f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 13.227487] pci 0000:3f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 13.228486] pci 0000:3f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 13.229504] pci 0000:3f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 13.230485] pci 0000:3f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 13.231567] pci 0000:3f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 13.232499] pci 0000:3f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 13.233480] pci 0000:3f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 13.234482] pci 0000:3f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 13.235492] pci 0000:3f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 13.236477] pci 0000:3f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 13.237498] pci 0000:3f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 13.238485] pci 0000:3f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 13.23957:0e.1: [8086:0e30] type 00 class 0x110100 [ 13.240505] pci 0000:3f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 13.241607] pci 0000:3f:0f.1: [8086:0e71] type 00 class 0x088000 [ 13.242596] pci 0000:3f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 13.243594] pci 0000:3f:0f.3: [8086:0eab] type 00 class 0x088000 [ 13.244629] pci 0000:3f:0f.4: [8086:0eac] type 00 class 0x088000 [ 13.245604] pci 0000:3f:0f.5: [8086:0ead] type 00 class 0x088000 [ 13.246594] pci 0000:3f:10.0: [8086:0eb0] type 00 class 0x088000 [ 13.247614] pci 0000:3f:10.1: [8086:0eb1] type 00 class 0x088000 [ 13.248661] pci 0000:3f:10.2: [8086:0eb2] type 00 class 0x088000 [ 13.249600] pci 0000:3f:10.3: [8086:0eb3] type 00 class 0x088000 [ 13.250608] pci 0000:3f:10.4: [8086:0eb4] type 00 class 0x088000 [ 13.251634] pci 0000:3f:10.5: [8086:0eb5] type 00 class 0x088000 [ 13.252592] pci 0000:3f:10.6: [8086:0eb6] type 00 class 0x088000 [ 13.253606] pci 0000:3f:10.7: [8086:0eb7] type 00 class 0x088000 [ 13.254611] pci 0000:3f:13.0: [8086:0e1d] type 00 class 0x088000 [ 13.255506] pci 0000:3f:13.1: [8086:0e34] type 00 class 0x110100 [ 13.256562] pci 0000:3f:13.4: [8086:0e81] type 00 class 0x088000 [ 13.257491] pci 0000:3f:13.5: [8086:0e36] type 00 class 0x110100 [ 13.258488] pci 0000:3f:16.0: [8086:0ec8] type 00 class 0x088000 [ 00:3f:16.1: [8086:0ec9] type 00 class 0x088000 [ 13.260480] pci 0000:3f:16.2: [8086:0eca] type 00 class 0x088000 [ 13.261522] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f [ 13.261810] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 13.276036] pci 0000:01:00.1: vgaarb: setting as boot VGA device [ 13.276799] pci 0000:01:00.1: vgaarb: bridge control possible [ 13.276799] pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [ 13.277003] vgaarb: loaded [ 13.278216] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 [ 13.278809] hpet0: 8 comparators, 64-bit 14.318180 MHz counter [ 13.284650] clocksource: Switched to clocksource tsc-early [ 13.783237] VFS: Disk quotas dquot_6.6.0 [ 13.785037] 
VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 13.789052] pnp: PnP ACPI init [ 13.796124] system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved [ 13.803519] system 00:01: [io 0x0408-0x040f] has been reserved [ 13.805638] system 00:01: [io 0x04d0-0x04d1] has been reserved [ 13.807704] system 00:01: [io 0x0310-0x0315] has been reserved [ 13.809879] system 00:01: [io 0x0316-0x0317] has been reserved [ 13.812013] system 00:01: [io 0x0700-0x071f] has been reserved [ 13.814217] system 00:01: [io 0x0880-0x08ff] has been reserved [ 13.816376] system 00:01: [io 0x0900-0x097f] has been reserved [ 13.818513] system 00:01: [io 0x0cd4-0x0cd7] has been reserved [ 13.820650] system 00:01: [io 0x0cd0-0x0cd3] has been reserved [ 13.822789] system 00:01: [io 0x0f50-0x0f58] has been reserved [ 13.825052] system 00:01: [io 0x0ca0-0x0ca1] has been reserved [ 13.827200] system 00:01: [io 0x0ca4-0x0ca5] has been reserved [ 13.829341] system 00:01: [io 0x02f8-0x02ff] has been reserved [ 13.831493] system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved [ 13.833970] system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved [ 13.836345] system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved [ 13.838709] system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 13.841455] system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved [ 13.843946] system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved [ 13.846310] system 00:01: [mem 0xff800000-0xffffffff] has been reserved [ 13.880010] system 00:06: [mem 0xfbefe000-0xfbefffff] could not be reserved [ 13.885399] pnp: PnP ACPI: found 7 devices [ 13.9602ource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 14.264201] NET: Registered PF_INET protocol family [ 14.268197] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 14.280749] tcp_listen_portaddr_hash hash table entries: 16384 (order: 8, 1310720 bytes, vmalloc) [ 14.285037] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, vmalloc) [ 14.289148] TCP established hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 14.294383] TCP bind hash table entries: 65536 (order: 10, 5242880 bytes, vmalloc hugepage) [ 14.301294] TCP: Hash tables configured (established 262144 bind 65536) [ 14.312999] MPTCP token hash table entries: 32768 (order: 9, 3145728 bytes, vmalloc) [ 14.319663] UDP hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 14.326119] UDP-Lite hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 14.335111] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 14.337196] NET: Registered PF_XDP protocol family [ 14.339020] pci 0000:00:02.0: BAR 14: assigned [mem 0xf4000000-0xf40fffff] [ 14.341839] pci 0000:04:00.0: BAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] [ 14.344428] pci 0000:00:01.0: PCI bridge to [bus 04] [ 14.346182] p0: bridge window [io 0x6000-0x6fff] [ 14.748241] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 14.750640] pci 0000:00:01.1: PCI bridge to [bus 11] [ 14.752612] pci 0000:03:00.0: BAR 6: assigned [mem 0xf4000000-0xf403ffff pref] [ 14.755328] pci 0000:03:00.1: BAR 6: assigned [mem 0xf4040000-0xf407ffff pref] [ 14.757920] pci 0000:03:00.2: BAR 6: assigned [mem 0xf4080000-0xf40bffff pref] [ 14.760466] pci 0000:03:00.3: BAR 6: assigned [mem 0xf40c0000-0xf40fffff pref] [ 14.763069] pci 0000:00:02.0: PCI bridge to [bus 03] [ 14.764941] pci 0000:00:02.0: bridge window [mem 0xf4000000-0xf40fffff] [ 14.767309] pci 0000:00:02.0: bridge window [mem 
0xf6b00000-0xf6bfffff 64bit pref] [ 14.770051] pci 0000:00:02.1: PCI bridge to [bus 12] [ 14.771884] pci 0000:02:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] [ 14.774498] pci 0000:00:02.2: PCI bridge to [bus 02] [ 14.776302] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 14.778465] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 14.780881] pci 0000:00:02.3: PCI bridge to [bus 13] [ 14.782675] pci 0000:00:03.0: PCI bridge to [bus 07] [ 14.784544] pci 0000:00:03.1: PCI bridge 15.194908] pci 0000:00:03.2: PCI bridge to [bus 15] [ 15.287924] pci 0000:00:03.3: PCI bridge to [bus 16] [ 15.289721] pci 0000:00:11.0: PCI bridge to [bus 18] [ 15.291466] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 15.293274] pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] [ 15.295877] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 15.297588] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 15.299693] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 15.302187] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 15.304940] pci 0000:00:1e.0: PCI bridge to [bus 17] [ 15.306662] pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 15.309179] pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] [ 15.311317] pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] [ 15.313476] pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] [ 15.315636] pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] [ 15.317734] pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] [ 15.319848] pci_bus 0000:00: resourcex03df window] [ 15.821886] pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bffff window] [ 15.824403] pci_bus 0000:04: resource 0 [io 0x6000-0x6fff] [ 15.826402] pci_bus 0000:04: resource 1 [mem 0xf7e00000-0xf7ffffff] [ 15.828722] pci_bus 0000:03: resource 1 [mem 0xf4000000-0xf40fffff] [ 15.830846] pci_bus 0000:03: resource 2 [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 15.833317] pci_bus 0000:02: resource 0 [io 0x5000-0x5fff] [ 15.835272] pci_bus 0000:02: resource 1 [mem 0xf7c00000-0xf7dfffff] [ 15.837415] pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] [ 15.839324] pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] [ 15.841828] pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] [ 15.844345] pci_bus 0000:17: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 15.846721] pci_bus 0000:17: resource 5 [io 0x1000-0x7fff window] [ 15.848823] pci_bus 0000:17: resource 6 [io 0x0000-0x03af window] [ 15.850930] pci_bus 0000:17: resource 7 [io 0x03e0-0x0cf7 window] [ 15.853050] pci_bus 0000:17: resource 8 [io 0x0d00-0x0fff window] [ 15.855243] pci_bus 0000:17: resource 9 [io 0x03b0-0x03bb window] [ 15.857351] pci_bus 0000:17: resource 10 [io 0x03c0-0x [ 16.256165] pci_bus 0000:17: resource 11 [mem 0x000a0000-0x000bffff window] [ 16.264178] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 16.266298] pci 0000:20:01.0: PCI bridge to [bus 21] [ 16.268138] pci 0000:20:01.1: PCI bridge to [bus 22] [ 16.269941] pci 0000:20:02.0: PCI bridge to [bus 23] [ 16.271774] pci 0000:20:02.1: PCI bridge to [bus 24] [ 16.273576] pci 0000:20:02.2: PCI bridge to [bus 25] [ 16.275355] pci 0000:20:02.3: PCI bridge to [bus 26] [ 16.277165] pci 0000:20:03.0: PCI bridge to [bus 27] [ 16.279041] pci 0000:20:03.1: PCI bridge to [bus 28] [ 16.280866] pci 0000:20:03.2: PCI bridge to [bus 29] [ 16.282868] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 16.284847] pci_bus 0000:20: resource 4 [mem 0xfb000000-0xfbffffff window] [ 16.287274] pci_bus 0000:20: resource 5 [io 
0x8000-0xffff window] [ 16.289925] pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] [ 16.291928] pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 16.294352] pci_bus 0000:3f: resource 4 [io 0x0000-0xffff] [ 16.296504] pci_bus 0000:em 0x00000000-0x3fffffffffff] [ 16.798965] pci 0000:00:05.0: disabled boot interrupts on device [8086:0e28] [ 16.828438] pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x290 took 26314 usecs [ 16.857230] pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x290 took 25494 usecs [ 16.872781] pci 0000:01:00.4: quirk_usb_early_handoff+0x0/0x290 took 12498 usecs [ 16.875987] pci 0000:20:05.0: disabled boot interrupts on device [8086:0e28] [ 16.878791] PCI: CLS 64 bytes, default 64 [ 16.882790] Trying to unpack rootfs image as initramfs... [ 16.882863] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 16.887411] software IO TLB: mapped [mem 0x0000000039000000-0x000000003d000000] (64MB) [ 16.890474] ACPI: bus type thunderbolt registered [ 17.006006] Initialise system trusted keyrings [ 17.008100] Key type blacklist registered [ 17.011392] workingset: timestamp_bits=36 max_order=23 bucket_order=0 [ 17.119585] zbud: loaded [ 17.133405] integrity: Platform Keyring initialized [ 17.149116] NET: Registered PF_ALG protocol family [ 17.150953] xor: automatically using best checksumming function avx [ 17.153471] Key type asymmetric registered [ 17.154983] Asymmetric key parser 'x509' registered [ 17.772] Running certificate verification selftests [ 17.367714] cryptomgr_test (209) used greatest stack depth: 28528 bytes left [ 17.476207] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [ 17.485011] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [ 17.489118] io scheduler mq-deadline registered [ 17.490850] io scheduler kyber registered [ 17.494430] io scheduler bfq registered [ 17.506262] atomic64_test: passed for x86-64 platform with CX8 and with SSE [ 17.767468] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 17.775625] ACPI: \_PR_.CP00: Found 2 idle states [ 17.828403] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 17.834646] ACPI: button: Power Button [PWRF] [ 17.890612] thermal LNXTHERM:00: registered as thermal_zone0 [ 17.892717] ACPI: thermal: Thermal Zone [THM0] (8 C) [ 17.896102] ERST: Error Record Serialization Table (ERST) support is initialized. [ 17.899036] pstore: Registered erst as persistent store backend [ 17.905851] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
[ 17.912577] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 17.916205] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 17.923075] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A [ 17.931175] tsc: Refined TSC clocksource calibration: 2094.951 MHz [ 17.933738] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328ef1914, max_idle_ns: 440795263413 ns [ 17.938017] clocksource: Switched to clocksource tsc [ 17.941341] Non-volatile memory driver v1.3 [ 18.007520] rdac: device handler registered [ 18.010587] hp_sw: device handler registered [ 18.012222] emc: device handler registered [ 18.015597] alua: device handler registered [ 18.022405] libphy: Fixed MDIO Bus: probed [ 18.025967] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 18.028443] ehci-pci: EHCI PCI platform driver [ 18.045429] ehci-pci 0000:00:1a.0: EHCI Host Controller [ 18.050277] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 [ 18.053126] ehci-pci 0000:00:1a.0: debug port 2 [ 1ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 [ 18.368949] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 [ 18.372770] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 18.375836] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.378362] usb usb1: Product: EHCI Host Controller [ 18.380095] usb usb1: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 18.382970] usb usb1: SerialNumber: 0000:00:1a.0 [ 18.388642] hub 1-0:1.0: USB hub found [ 18.390427] hub 1-0:1.0: 2 ports detected [ 18.404872] ehci-pci 0000:00:1d.0: EHCI Host Controller [ 18.408065] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 18.410783] ehci-pci 0000:00:1d.0: debug port 2 [ 18.416643] ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 [ 18.424880] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 [ 18.427973] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 18.430856] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.433343] usb usb2: Product: EHCI Host Controller [ 18.435076] usb usb2: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 18.838146] usb usb2: SerialNumber: 0000:00:1d.0 [ 18.843614] hub 2-0:1.0: USB hub found [ 18.845323] hub 2-0:1.0: 2 ports detected [ 18.851053] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 18.853458] ohci-pci: OHCI PCI platform driver [ 18.855887] uhci_hcd: USB Universal Host Controller Interface driver [ 18.862633] uhci_hcd 0000:01:00.4: UHCI Host Controller [ 18.866306] uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 [ 18.869055] uhci_hcd 0000:01:00.4: detected 8 ports [ 18.870834] uhci_hcd 0000:01:00.4: port count misdetected? 
forcing to 2 ports [ 18.873639] uhci_hcd 0000:01:00.4: irq 47, io port 0x00003c00 [ 18.877196] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [ 18.880371] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.883006] usb usb3: Product: UHCI Host Controller [ 18.884922] usb usb3: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug uhci_hcd [ 18.888020] usb usb3: SerialNumber: 0000:01:00.4 [ 18.893246] hub 3-0:1.0: USB hub found [ 18.895110] hub 3-0:1.0: 2 ports detected [ 18.902848] usbcore: registered new interface drl_generic [ 18.904949] Freeing initrd memory: 35636K [ 18.955918] usb 1-1: new high-speed USB device number 2 using ehci-pci [ 19.086229] usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 19.086261] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 19.090003] hub 1-1:1.0: USB hub found [ 19.090514] hub 1-1:1.0: 6 ports detected [ 19.108900] usb 2-1: new high-speed USB device number 2 using ehci-pci [ 19.237988] usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 19.238001] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 19.240316] hub 2-1:1.0: USB hub found [ 19.240930] hub 2-1:1.0: 8 ports detected [ 19.305477] usbserial: USB Serial support registered for generic [ 19.330202] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 [ 19.336179] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 19.338142] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 19.343688] mousedev: PS/2 mouse device common for all mice [ 19.348139] rtc_cmos 00:03: RTC can wake from S4 [ 19.353627] rtc_cmos 00:03: registered as rtc0 [ 19.355633] rtc_cmos 00:03: setting system clock to 2023-02-03T00:4093) [ 19.859488] rtc_cmos 00:03: alarms up to one day, 114 bytes nvram, hpet irqs [ 19.872579] intel_pstate: Intel P-state driver initializing [ 19.908654] hid: raw HID events driver (C) Jiri Kosina [ 19.911704] usbcore: registered new interface driver usbhid [ 19.913784] usbhid: USB HID core driver [ 19.916197] drop_monitor: Initializing network drop monitor service [ 19.949726] Initializing XFRM netlink socket [ 19.955343] NET: Registered PF_INET6 protocol family [ 19.974862] Segment Routing with IPv6 [ 19.976499] NET: Registered PF_PACKET protocol family [ 19.979684] mpls_gso: MPLS GSO support [ 20.021986] microcode: sig=0x306e4, pf=0x1, revision=0x42e [ 20.026923] microcode: Microcode Update Driver: v2.2. [ 20.026949] IPI shorthand broadcast: enabled [ 20.030486] usb 2-1.3: new high-speed USB device number 3 using ehci-pci [ 20.033181] AVX version of gcm_enc/dec engaged. 
[ 20.035373] AES CTR mode by8 optimization enabled [ 20.042602] sched_clock: Marking stable (13797993232, 6244214049)->(30789325050, -10747117769) [ 20.105121] registered taskstats version 1 [ 20.114056] usb 2-1.3: New USB device found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 [ 20.116591] Loading compiled-in X.509 certificates [ 20.117075] usb 2-1.3: New USB devicegs: Mfr=0, Product=0, SerialNumber=0 [ 20.124901] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 20.325381] hub 2-1.3:1.0: USB hub found [ 20.327610] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [ 20.327722] hub 2-1.3:1.0: 2 ports detected [ 20.333738] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [ 20.404309] cryptomgr_test (245) used greatest stack depth: 27672 bytes left [ 20.419250] zswap: loaded using pool lzo/zbud [ 20.426035] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table helpers [ 21.472619] page_owner is disabled [ 21.476689] pstore: Using crash dump compression: deflate [ 21.480071] Key type big_key registered [ 21.552527] Key type encrypted registered [ 21.554198] ima: No TPM chip found, activating TPM-bypass! [ 21.556245] Loading compiled-in module X.509 certificates [ 21.559646] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 21.563602] ima: Allocated hash algorithm: sha256 [ 21.565427] ima: No architecture policies found [ 21.567483] evm: Initialising EVM extended attributes: [ 21.569334] evm: security.selinux [ 21.570501] evm: security.SMACK64 (disabled) [ 21.572003] evm: security.SMACK64EXEC (disabled) [ 21.573673] evm: security.SMACK64TRANSMUTE (disabled) [ 21.575408] evm: security.SMACK64MMAP (disabled) [ 21.577010] evm: security.apparmor (disabled) [ 21.578499] evm: security.ima [ 21.579537] evm: security.capability [ 21.580783] evm: HMAC attrs: 0x1 [ 21.687694] modprobe (259) used greatest stack depth: 27464 bytes left [ 22.447593] cryptomgr_test (355) used greatest stack depth: 27032 bytes left [ 22.795645] PM: Magic number: 11:878:707 [ 22.873687] Freeing unused decrypted memory: 2036K [ 22.887430] Freeing unused kernel image (initmem) memory: 5300K [ 22.889268] Write protecting the kernel read-only data: 57344k [ 22.900685] Freeing unused kernel image (text/rodata gap) memory: 2036K [ 22.906001] Freeing unused kernel image (rodata/data gap) memory: 1400K [ 23.024143] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 23.024639] x86/mm: Checking user space page tables [ 23.109324] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 23.109789] Run /init as init process [ 23.253242] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 23.271513] systemd[1]: Detected architecture x86-64. [ 23.271944] systemd[1]: Running in initrd. Welcome to CentOS Stream 9 dracut-057-20.git20221213.el9 (Initramfs) ! [ 23.277889] systemd[1]: Hostname set to . [ 23.861139] cat (392) used greatest stack depth: 27000 bytes left [ 24.427659] systemd[1]: Queued start job for default target Initrd Default Target. 
[ 24.448136] systemd[1]: Created slice Slice /system/systemd-hibernate-resume. [ OK ] Created slice Slice /system/systemd-hibernate-resume . [ 24.455690] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 24.460311] systemd[1]: Reached target Initrd /usr File System. [ OK ] Reached target Initrd /usr File System . [ 24.465089] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 24.467206] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 24.471092] systemd[1]: Reached target Swaps. [ OK ] Reached target Swaps . [ 24.476185] systemd[1]: Reached target Timer Units. [ OK ] Reached target Timer Units . [ 24.483486] systemd[1]: Listening on D-Bus System Message Bus Socket. [ OK ] Listening on D-Bus System Message Bus Socket . [ 24.489568] systemd[1]: Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket (/dev/log) . [ 24.496565] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket . [ 24.502755] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 24.508912] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 24.513171] systemd[1]: Reached target Socket Units. [ OK ] Reached target Socket Units . [ 24.535387] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 24.568726] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 24.577002] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 24.608365] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 24.640199] systemd[1]: Starting Create System Users... Starting Create System Users ... [ 24.675577] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console ... [ 24.715628] systemd[1]: Finished Create List of Static Device Nodes. [ OK ] Finished Create List of Static Device Nodes . [ 24.790920] systemd[1]: Finished Apply Kernel Variables. [ OK ] Finished Apply Kernel Variables . [ 24.903624] systemd[1]: Finished Create System Users. [ OK ] Finished Create System Users . [ 24.927317] systemd[1]: Starting Create Static Device Nodes in /dev... Starting Create Static Device Nodes in /dev ... [ 25.097726] systemd[1]: Finished Create Static Device Nodes in /dev. [ OK ] Finished Create Static Device Nodes in /dev . [ 25.394961] systemd[1]: Finished Setup Virtual Console. [ OK ] Finished Setup Virtual Console . [ 25.401129] systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met. [ 25.422296] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook ... [ 26.058528] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . Starting Create Volatile Files and Directories ... [ OK ] Finished Create Volatile Files and Directories .[-- MARK -- Fri Feb 3 05:45:00 2023] [ OK ] Finished dracut cmdline hook . Starting dracut pre-udev hook ... [ 27.799935] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. [ 27.801986] device-mapper: uevent: version 1.0.3 [ 27.807139] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com [ OK ] Finished dracut pre-udev hook . Starting Rule-based Manage…for Device Events and Files ... 
[ OK ] Started Rule-based Manager for Device Events and Files . Starting Coldplug All udev Devices ... [ * ] (1 of 3) A start job is running for…g All udev Devices (6s / no limit) M [ * * ] (1 of 3) A start job is running for…g All udev Devices (6s / no limit) M [ * * * ] (1 of 3) A start job is running for…g All udev Devices (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-swap (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-swap (8s / no limit) M [ * * * ] (2 of 3) A start job is running for…l360pgen8--08-swap (8s / no limit) M [ OK ] Finished Coldplug All udev Devices . [ OK ] Reached target Network . Starting dracut initqueue hook ... [ 33.298758] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized [ 33.345686] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 [ 33.346159] hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) [ 33.346933] hpwdt 0000:01:00.0: pretimeout: on. [ 33.347645] hpwdt 0000:01:00.0: kdumptimeout: -1. [ 33.369891] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:02:00.0 [ 33.370377] HP HPSA Driver (v 3.4.20-200) [ 33.370677] hpsa 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control [ 33.391109] tg3 0000:03:00.0 eth0: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c4 [ 33.391772] tg3 0000:03:00.0 eth0: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 33.392748] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 33.393224] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 33.460800] tg3 0000:03:00.1 eth1: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c5 [ 33.461511] tg3 0000:03:00.1 eth1: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 33.462520] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 33.463012] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 33.536269] tg3 0000:03:00.2 eth2: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c6 [ 33.536978] tg3 0000:03:00.2 eth2: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 33.537963] tg3 0000:03:00.2 eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 33.538421] tg3 0000:03:00.2 eth2: dma_rwctrl[00000001] dma_mask[64-bit] [ 33.559789] hpsa 0000:02:00.0: Logical aborts not supported [ 33.560178] hpsa 0000:02:00.0: HP SSD Smart Path aborts not supported [ 33.604040] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 33.606716] tg3 0000:03:00.3 eth3: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c7 [ 33.607405] tg3 0000:03:00.3 eth3: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 33.608442] tg3 0000:03:00.3 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 33.608940] tg3 0000:03:00.3 eth3: dma_rwctrl[00000001] dma_mask[64-bit] [ 33.664054] scsi host0: hpsa [ 33.769667] hpsa can't handle SMP requests [ 33.783186] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:04:00.0 [ 33.783793] hpsa 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control [ 33.792918] hpsa 0000:02:00.0: scsi 0:0:0:0: added RAID HP P420i controller SSDSmartPathCap- En- Exp=1 [ 33.793732] hpsa 0000:02:00.0: scsi 0:0:1:0: masked Direct-Access ATA MM0500GBKAK PHYS DRV SSDSmartPathCap- En- Exp=0 [ 33.794773] hpsa 0000:02:00.0: scsi 0:1:0:0: added Direct-Access HP LOGICAL VOLUME RAID-0 
SSDSmartPathCap- En- Exp=1 [ 33.855467] scsi host1: ata_piix [ 33.864708] hpsa can't handle SMP requests [ 33.870525] scsi 0:0:0:0: RAID HP P420i 8.32 PQ: 0 ANSI: 5 [ 33.871670] scsi host2: ata_piix [ 33.874566] ata1: SATA max UDMA/133 cmd 0x4000 ctl 0x4008 bmdma 0x4020 irq 17 [ 33.875718] ata2: SATA max UDMA/133 cmd 0x4010 ctl 0x4018 bmdma 0x4028 irq 17 [ 33.876125] scsi 0:1:0:0: Direct-Access HP LOGICAL VOLUME 8.32 PQ: 0 ANSI: 5 [ 33.912705] tg3 0000:03:00.2 eno3: renamed from eth2 [ 33.928919] tg3 0000:03:00.3 eno4: renamed from eth3 [ 33.941527] hpsa 0000:04:00.0: Logical aborts not supported [ 33.941961] hpsa 0000:04:00.0: HP SSD Smart Path aborts not supported [ 33.948003] tg3 0000:03:00.1 eno2: renamed from eth1 [ 33.968793] tg3 0000:03:00.0 eno1: renamed from eth0 [ 34.052529] scsi host3: hpsa [ 34.065076] hpsa can't handle SMP requests [ 34.078641] hpsa 0000:04:00.0: scsi 3:0:0:0: added RAID HP P421 controller SSDSmartPathCap- En- Exp=1 [ 34.079392] hpsa 0000:04:00.0: scsi 3:0:1:0: masked Enclosure PMCSIERA SRCv8x6G enclosure SSDSmartPathCap- En- Exp=0 [ 34.084074] hpsa can't handle SMP requests [ 34.086592] scsi 3:0:0:0: RAID HP P421 8.32 PQ: 0 ANSI: 5 [ 34.181482] scsi 0:0:0:0: Attached scsi generic sg0 type 12 [ 34.185279] scsi 0:1:0:0: Attached scsi generic sg1 type 0 [ 34.186978] scsi 3:0:0:0: Attached scsi generic sg2 type 12 [ 34.197721] modprobe (650) used greatest stack depth: 26984 bytes left [ 34.278383] sd 0:1:0:0: [sda] 976707632 512-byte logical blocks: (500 GB/466 GiB) [ 34.280576] sd 0:1:0:0: [sda] Write Protect is off [ 34.282634] sd 0:1:0:0: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA [ 34.283311] sd 0:1:0:0: [sda] Preferred minimum I/O size 262144 bytes [ 34.283844] sd 0:1:0:0: [sda] Optimal transfer size 262144 bytes [ 34.341893] sda: sda1 sda2 [ 34.344797] sd 0:1:0:0: [sda] Attached SCSI disk [ 34.914977] ata2.00: failed to resume link (SControl 0) [ 35.226986] ata1.01: failed to resume link (SControl 0) [ 35.238558] ata1.00: SATA link down (SStatus 0 SControl 300) [ 35.239396] ata1.01: SATA link down (SStatus 4 SControl 0) [ 35.954977] ata2.01: failed to resume link (SControl 0) [ 35.967217] ata2.00: SATA link down (SStatus 4 SControl 0) [ 35.967590] ata2.01: SATA link down (SStatus 4 SControl 0) [ 37.820862] cp (707) used greatest stack depth: 26456 bytes left [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-root . [ OK ] Reached target Initrd Root Device . [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-swap . Starting Resume from hiber…cs_hpe--dl360pgen8--08-swap ... [ OK ] Finished Resume from hiber…r/cs_hpe--dl360pgen8--08-swap . [ OK ] Reached target Preparation for Local File Systems . [ OK ] Reached target Local File Systems . [ OK ] Reached target System Initialization . [ OK ] Reached target Basic System . [ OK ] Finished dracut initqueue hook . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Reached target Remote File Systems . Starting File System Check…cs_hpe--dl360pgen8--08-root ... [ 40.315866] fsck (743) used greatest stack depth: 25784 bytes left [ OK ] Finished File System Check…r/cs_hpe--dl360pgen8--08-root . Mounting /sysroot ... [ 41.769078] SGI XFS with ACLs, security attributes, scrub, verbose warnings, quota, no debug enabled [ 41.838439] XFS (dm-0): Mounting V5 Filesystem [ 42.503356] XFS (dm-0): Ending clean mount [ * ] A start job is running for /sysroot (18s / no limit) [ 42.562741] mount (745) used greatest stack depth: 24832 bytes left M [ OK ] Mounted /sysroot . 
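Note (annotation, not part of the captured console output): device-mapper node names such as /dev/mapper/cs_hpe--dl360pgen8--08-root above follow the LVM convention of joining the volume group and logical volume names with a single '-' and escaping any literal hyphen inside either name by doubling it. A minimal Python sketch of that decoding, assuming only that convention (split_dm_name is an illustrative helper, not an existing tool):

    # Illustrative only: split an LVM-created /dev/mapper name into
    # (volume group, logical volume); "--" is an escaped literal hyphen,
    # a lone "-" separates the two names.
    def split_dm_name(name: str) -> tuple[str, str]:
        vg = []
        i = 0
        while i < len(name):
            if name[i] == "-":
                if name[i + 1 : i + 2] == "-":
                    vg.append("-")          # escaped hyphen inside the VG name
                    i += 2
                    continue
                return "".join(vg), name[i + 1 :].replace("--", "-")
            vg.append(name[i])
            i += 1
        return "".join(vg), ""

    # prints ('cs_hpe-dl360pgen8-08', 'root')
    print(split_dm_name("cs_hpe--dl360pgen8--08-root"))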
[ OK ] Reached target Initrd Root File System . Starting Mountpoints Configured in the Real Root ... [ 42.729642] systemd-fstab-g (759) used greatest stack depth: 23944 bytes left [ OK ] Finished Mountpoints Configured in the Real Root . [ OK ] Reached target Initrd File Systems . [ OK ] Reached target Initrd Default Target . Starting dracut pre-pivot and cleanup hook ... [ OK ] Finished dracut pre-pivot and cleanup hook . Starting Cleaning Up and Shutting Down Daemons ... [ OK ] Stopped target Network . [ OK ] Stopped target Timer Units . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped dracut pre-pivot and cleanup hook . [ OK ] Stopped target Initrd Default Target . [ OK ] Stopped target Basic System . [ OK ] Stopped target Initrd Root Device . [ OK ] Stopped target Initrd /usr File System . [ OK ] Stopped target Path Units . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped target Remote File Systems . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target Slice Units . [ OK ] Stopped target Socket Units . [ OK ] Stopped target System Initialization . [ OK ] Stopped target Local File Systems . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Stopped target Swaps . [ OK ] Stopped dracut initqueue hook . [ OK ] Stopped Apply Kernel Variables . [ OK ] Stopped Create Volatile Files and Directories . [ OK ] Stopped Coldplug All udev Devices . Stopping Rule-based Manage…for Device Events and Files ... [ OK ] Stopped Setup Virtual Console . [ OK ] Finished Cleaning Up and Shutting Down Daemons . [ OK ] Stopped Rule-based Manager for Device Events and Files . [ OK ] Closed udev Control Socket . [ OK ] Closed udev Kernel Socket . [ OK ] Stopped dracut pre-udev hook . [ OK ] Stopped dracut cmdline hook . Starting Cleanup udev Database ... [ OK ] Stopped Create Static Device Nodes in /dev . [ OK ] Stopped Create List of Static Device Nodes . [ OK ] Stopped Create System Users . [ OK ] Finished Cleanup udev Database . [ OK ] Reached target Switch Root . Starting Switch Root ... [ 44.867925] systemd-journald[403]: Received SIGTERM from PID 1 (systemd). [ 48.341329] SELinux: policy capability network_peer_controls=1 [ 48.342338] SELinux: policy capability open_perms=1 [ 48.342668] SELinux: policy capability extended_socket_class=1 [ 48.343746] SELinux: policy capability always_check_network=0 [ 48.344825] SELinux: policy capability cgroup_seclabel=1 [ 48.345160] SELinux: policy capability nnp_nosuid_transition=1 [ 48.345917] SELinux: policy capability genfs_seclabel_symlinks=1 [ 48.875537] audit: type=1403 audit(1675385123.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 [ 48.891169] systemd[1]: Successfully loaded SELinux policy in 2.321252s. [ 49.057314] systemd[1]: RTC configured in localtime, applying delta of -300 minutes to system time. [ 49.488496] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 338.580ms. [ 49.549888] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 49.566431] systemd[1]: Detected architecture x86-64. Welcome to CentOS Stream 9 ! [ 50.136511] systemd-rc-local-generator[805]: /etc/rc.d/rc.local is not marked executable, skipping. 
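Note (annotation, not from the console output): systemd-rc-local-generator only generates an rc-local.service unit when /etc/rc.d/rc.local exists and is executable, which is why the generator reports it as skipped above. If rc.local compatibility were actually wanted on this host, adding the execute bits would be enough; a small sketch (requires root, path exactly as logged):

    import os
    import stat

    RC_LOCAL = "/etc/rc.d/rc.local"   # path as reported by systemd-rc-local-generator

    # Add execute permission so the generator creates rc-local.service on the
    # next boot (or after a systemctl daemon-reload).
    mode = os.stat(RC_LOCAL).st_mode
    os.chmod(RC_LOCAL, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)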
[ 50.808474] kdump-dep-gener (792) used greatest stack depth: 22808 bytes left [ 52.033011] systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. [ 52.721067] systemd[1]: initrd-switch-root.service: Deactivated successfully. [ 52.729921] systemd[1]: Stopped Switch Root. [ OK ] Stopped Switch Root . [ 52.744449] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [ 52.758418] systemd[1]: Created slice Slice /system/getty. [ OK ] Created slice Slice /system/getty . [ 52.778139] systemd[1]: Created slice Slice /system/modprobe. [ OK ] Created slice Slice /system/modprobe . [ 52.796661] systemd[1]: Created slice Slice /system/serial-getty. [ OK ] Created slice Slice /system/serial-getty . [ 52.811476] systemd[1]: Created slice Slice /system/sshd-keygen. [ OK ] Created slice Slice /system/sshd-keygen . [ 52.832230] systemd[1]: Created slice User and Session Slice. [ OK ] Created slice User and Session Slice . [ 52.842912] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 52.848089] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [ OK ] Started Forward Password R…uests to Wall Directory Watch . [ 52.861289] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [ OK ] Set up automount Arbitrary…s File System Automount Point . [ 52.863532] systemd[1]: Reached target Local Encrypted Volumes. [ OK ] Reached target Local Encrypted Volumes . [ 52.866149] systemd[1]: Stopped target Switch Root. [ OK ] Stopped target Switch Root . [ 52.871263] systemd[1]: Stopped target Initrd File Systems. [ OK ] Stopped target Initrd File Systems . [ 52.875373] systemd[1]: Stopped target Initrd Root File System. [ OK ] Stopped target Initrd Root File System . [ 52.880391] systemd[1]: Reached target Local Integrity Protected Volumes. [ OK ] Reached target Local Integrity Protected Volumes . [ 52.885427] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 52.890420] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 52.892786] systemd[1]: Reached target System Time Set. [ OK ] Reached target System Time Set . [ 52.897422] systemd[1]: Reached target Local Verity Protected Volumes. [ OK ] Reached target Local Verity Protected Volumes . [ 52.907668] systemd[1]: Listening on Device-mapper event daemon FIFOs. [ OK ] Listening on Device-mapper event daemon FIFOs . [ 52.932847] systemd[1]: Listening on LVM2 poll daemon socket. [ OK ] Listening on LVM2 poll daemon socket . [ 53.115057] systemd[1]: Listening on RPCbind Server Activation Socket. [ OK ] Listening on RPCbind Server Activation Socket . [ 53.120549] systemd[1]: Reached target RPC Port Mapper. [ OK ] Reached target RPC Port Mapper . [ 53.148312] systemd[1]: Listening on Process Core Dump Socket. [ OK ] Listening on Process Core Dump Socket . [ 53.154161] systemd[1]: Listening on initctl Compatibility Named Pipe. [ OK ] Listening on initctl Compatibility Named Pipe . [ 53.185442] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 53.195934] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 53.232626] systemd[1]: Activating swap /dev/mapper/cs_hpe--dl360pgen8--08-swap... 
Activating swap /dev/mappe…cs_hpe--dl360pgen8--08-swap ... [ 53.283012] systemd[1]: Mounting Huge Pages File System... Mounting Huge Pages File System ... [ 53.327277] systemd[1]: Mounting POSIX [ 53.404428] Adding 16502780k swap on /dev/mapper/cs_hpe--dl360pgen8--08-swap. Priority:-2 extents:1 across:16502780k FS Mounting POSIX Message Queue File System ... [ 53.455190] systemd[1]: Mounting Kernel Debug File System... Mounting Kernel Debug File System ... [ 53.493125] systemd[1]: Mounting Kernel Trace File System... Mounting Kernel Trace File System ... [ 53.500281] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [ 53.603451] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 53.643430] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Starting Monitoring of LVM…meventd or progress polling ... [ 53.684588] systemd[1]: Starting Load Kernel Module configfs... Starting Load Kernel Module configfs ... [ 53.719865] systemd[1]: Starting Load Kernel Module drm... Starting Load Kernel Module drm ... [ 53.745295] systemd[1]: Starting Load Kernel Module fuse... Starting Load Kernel Module fuse ... [ 53.890153] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Read and set NIS …from /etc/sysconfig/network ... [ 53.952140] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [ 53.954740] systemd[1]: Stopped File System Check on Root Device. [ OK ] Stopped File System Check on Root Device . [ 53.958524] systemd[1]: Stopped Journal Service. [ OK ] Stopped Journal Service . [ 53.961145] systemd[1]: systemd-journald.service: Consumed 1.842s CPU time. [ 54.020043] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 54.055419] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 54.089303] systemd[1]: Starting Generate network units from Kernel command line... Starting Generate network …ts from Kernel command line ... [ 54.130775] systemd[1]: Starting Remount Root and Kernel File Systems... Starting Remount Root and Kernel File Systems ... [ 54.137961] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [ 54.175576] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 54.255172] [ 54.454871] fuse: init (API version 7.36) Starting Coldplug All udev Devices ... [ 54.525897] systemd[1]: Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap. [ OK ] Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ 54.552615] ACPI: bus type drm_connector registered [ 54.563313] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . [ OK ] Mounted Huge Pages File System . [ OK ] Mounted POSIX Message Queue File System . [ OK ] Mounted Kernel Debug File System . [ OK ] Mounted Kernel Trace File System . [ OK ] Finished Create List of Static Device Nodes . [ OK ] Finished Monitoring of LVM… dmeventd or progress polling . [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Load Kernel Module drm . [ OK ] Finished Load Kernel Module fuse . [ OK ] Finished Read and set NIS …e from /etc/sysconfig/network . [ OK ] Finished Generate network units from Kernel command line . [ OK ] Finished Remount Root and Kernel File Systems . [ OK ] Finished Apply Kernel Variables . [ OK ] Reached target Preparation for Network . 
[ OK ] Reached target Swaps . Mounting FUSE Control File System ... Mounting Kernel Configuration File System ... Starting Flush Journal to Persistent Storage ... Starting Load/Save Random Seed ... Starting Create Static Device Nodes in /dev ... [ OK ] Mounted FUSE Control File System . [ OK ] Mounted Kernel Configuration File System . [ 55.366707] systemd-journald[830]: Received client request to flush runtime journal. [ OK ] Finished Flush Journal to Persistent Storage . [ OK ] Finished Load/Save Random Seed . [ OK ] Finished Create Static Device Nodes in /dev . [ OK ] Reached target Preparation for Local File Systems . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . [ * ] (1 of 4) A start job is running for…l360pgen8--08-home (6s / no limit) M Starting Load Kernel Module configfs ... [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Coldplug All udev Devices . [ 59.966671] power_meter ACPI000D:00: Found ACPI power meter. [ 59.969244] power_meter ACPI000D:00: Ignoring unsafe software power cap! [ 59.969787] power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). [ 60.132705] IPMI message handler: version 39.2 [ 60.174315] ipmi device interface [ 60.234412] dca service started, version 1.12.1 [ 60.269422] ipmi_si: IPMI System Interface driver [ 60.270411] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 60.271236] ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 [ 60.272518] ipmi_si: Adding SMBIOS-specified kcs state machine [ 60.275902] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 60.277557] ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2-0x0ca3] regsize 1 spacing 1 irq 0 [ 60.338743] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 60.339911] ipmi_si: Adding ACPI-specified kcs state machine [ 60.343459] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 [ 60.378135] ioatdma: Intel(R) QuickData Technology Driver 5.00 [ 60.429993] ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. [ 60.503242] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x00000b, prod_id: 0x2000, dev_id: 0x13) [ 60.669533] ipmi_si IPI0001:00: IPMI kcs interface initialized Mounting /boot ... [ 60.737457] ipmi_ssif: IPMI SSIF Interface driver [ 60.798721] XFS (sda1): Mounting V5 Filesystem [ OK ] Started /usr/sbin/lvm vgch…on event cs_hpe-dl360pgen8-08 . [ 61.308615] input: PC Speaker as /devices/platform/pcspkr/input/input4 [ 61.316257] mgag200 0000:01:00.1: vgaarb: deactivate vga console [ 61.327664] Console: switching to colour dummy device 80x25 [ 61.416318] XFS (sda1): Ending clean mount [ 62.198546] [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 [ 62.209328] fbcon: mgag200drmfb (fb0) is primary device [ 62.744588] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer [ 62.744604] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules [ 62.744613] RAPL PMU: hw unit of domain package 2^-16 Joules [ 62.764433] Console: switching to colour frame buffer device 128x48 [ 62.810256] mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device [ OK ] Mounted /boot . 
[ 63.374605] iTCO_vendor_support: vendor-support=0 [ 63.420956] iTCO_wdt iTCO_wdt.1.auto: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-home . Mounting /home ... [ 63.857499] XFS (dm-2): Mounting V5 Filesystem [ 64.474102] XFS (dm-2): Ending clean mount [ OK ] Mounted /home . [ OK ] Reached target Local File Systems . Starting Automatic Boot Loader Update ... Starting Create Volatile Files and Directories ... [ OK ] Finished Automatic Boot Loader Update . [ 65.305127] EDAC MC0: Giving out device to module sb_edac controller Ivy Bridge SrcID#0_Ha#0: DEV 0000:1f:0e.0 (INTERRUPT) [ 65.309088] EDAC MC1: Giving out device to module sb_edac controller Ivy Bridge SrcID#1_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT) [ 65.309692] EDAC sbridge: Ver: 1.1.2 [ 65.614663] intel_rapl_common: Found RAPL domain package [ 65.615177] intel_rapl_common: Found RAPL domain core [ 65.618998] intel_rapl_common: Found RAPL domain package [ 65.619429] intel_rapl_common: Found RAPL domain core [ OK ] Finished Create Volatile Files and Directories . Mounting RPC Pipe File System ... Starting Security Auditing Service ... Starting RPC Bind ... [ OK ] Started RPC Bind . [ 66.696547] RPC: Registered named UNIX socket transport module. [ 66.697452] RPC: Registered udp transport module. [ 66.698668] RPC: Registered tcp transport module. [ 66.699473] RPC: Registered tcp NFSv4.1 backchannel transport module. [ OK ] Mounted RPC Pipe File System . [ OK ] Reached target rpc_pipefs.target . [ OK ] Started Security Auditing Service . Starting Record System Boot/Shutdown in UTMP ... [ OK ] Finished Record System Boot/Shutdown in UTMP . [ OK ] Reached target System Initialization . [ OK ] Started dnf makecache --timer . [ OK ] Started Daily Cleanup of Temporary Directories . [ OK ] Listening on D-Bus System Message Bus Socket . [ OK ] Listening on SSSD Kerberos…ache Manager responder socket . [ OK ] Reached target Socket Units . [ OK ] Reached target Basic System . Starting Network Manager ... Starting NTP client/server ... Starting Restore /run/initramfs on shutdown ... [ OK ] Started irqbalance daemon . Starting Load CPU microcode update ... Starting System Logging Service ... [ OK ] Reached target sshd-keygen.target . [ OK ] Reached target User and Group Name Lookups . Starting User Login Management ... [ OK ] Finished Restore /run/initramfs on shutdown . [ OK ] Started System Logging Service . Starting D-Bus System Message Bus ... [ OK ] Started NTP client/server . Starting Wait for chrony to synchronize system clock ... [ 69.786635] reload_microcod (1067) used greatest stack depth: 22360 bytes left [ OK ] Finished Load CPU microcode update . [ OK ] Started D-Bus System Message Bus . [ OK ] Started User Login Management . [ OK ] Started Network Manager . [ OK ] Created slice User Slice of UID 0 . [ OK ] Reached target Network . Starting Network Manager Wait Online ... Starting GSSAPI Proxy Daemon ... Starting OpenSSH server daemon ... Starting User Runtime Directory /run/user/0 ... Starting Hostname Service ... [ OK ] Finished User Runtime Directory /run/user/0 . Starting User Manager for UID 0 ... [ OK ] Started OpenSSH server daemon . [ OK ] Started GSSAPI Proxy Daemon . [ OK ] Reached target NFS client services . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Reached target Remote File Systems . Starting Permit User Sessions ... [ OK ] Started Hostname Service . [ OK ] Finished Permit User Sessions . [ OK ] Started Getty on tty1 . 
[ OK ] Started Serial Getty on ttyS1 . [ OK ] Reached target Login Prompts . [ OK ] Listening on Load/Save RF …itch Status /dev/rfkill Watch . Starting Network Manager Script Dispatcher Service ... [ OK ] Started Network Manager Script Dispatcher Service . [ OK ] Started User Manager for UID 0 . [ 74.300556] tg3 0000:03:00.0 eno1: Link is up at 1000 Mbps, full duplex [ 74.301284] tg3 0000:03:00.0 eno1: Flow control is off for TX and off for RX [ 74.302208] tg3 0000:03:00.0 eno1: EEE is disabled [ 74.303639] IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready CentOS Stream 9 Kernel 5.14.0-256.2009_766119311.el9.x86_64+debug on an x86_64 hpe-dl360pgen8-08 login: [ 82.944196] restraintd[1494]: * Fetching recipe: http://lab-02.hosts.prod.psi.bos.redhat.com:8000//recipes/13330040/ [ 83.117368] restraintd[1494]: * Parsing recipe [ 83.268307] restraintd[1494]: * Running recipe [ 83.272432] restraintd[1494]: ** Continuing task: 155735207 [/mnt/tests/github.com/beaker-project/beaker-core-tasks/archive/master.tar.gz/reservesys] [ 83.423436] restraintd[1494]: ** Preparing metadata [ 83.547880] restraintd[1494]: ** Refreshing peer role hostnames: Retries 0 [ 83.693082] restraintd[1494]: ** Updating env vars [ 83.694043] restraintd[1494]: *** Current Time: Fri Feb 03 00:46:04 2023 Localwatchdog at: * Disabled! * [ 83.808726] restraintd[1494]: ** Running task: 155735207 [/distribution/reservesys] [ 95.350647] Running test [R:13330040 T:155735207 - /distribution/reservesys - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 95.749685] ln (1711) used greatest stack depth: 22272 bytes left [ 98.536872] PKCS7: Message signed outside of X.509 validity window [ 135.546189] Running test [R:13330040 T:6 - /kernel/kdump/setup-nfsdump - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 154.430407] systemd-rc-local-generator[2572]: /etc/rc.d/rc.local is not marked executable, skipping. [ 156.175235] FS-Cache: Loaded [ 156.702188] Key type dns_resolver registered [ 157.385628] NFS: Registering the id_resolver key type [ 157.386075] Key type id_resolver registered [ 157.386327] Key type id_legacy registered [ 189.347150] dracut-install (3292) used greatest stack depth: 20808 bytes left [-- MARK -- Fri Feb 3 05:50:00 2023] [ 332.012034] PKCS7: Message signed outside of X.509 validity window [ 554.036196] systemd-rc-local-generator[7425]: /etc/rc.d/rc.local is not marked executable, skipping. [-- MARK -- Fri Feb 3 05:55:00 2023] [ 720.972966] systemd-rc-local-generator[14168]: /etc/rc.d/rc.local is not marked executable, skipping. [ 792.841094] Running test [R:13330040 T:7 - LTP lite - bare_metal - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [-- MARK -- Fri Feb 3 06:00:00 2023] [ 1196.723802] perf: interrupt took too long (2505 > 2500), lowering kernel.perf_event_max_sample_rate to 79000 [-- MARK -- Fri Feb 3 06:05:00 2023] [-- MARK -- Fri Feb 3 06:10:00 2023] [ 1539.896374] perf: interrupt took too long (3201 > 3131), lowering kernel.perf_event_max_sample_rate to 62000 [-- MARK -- Fri Feb 3 06:15:00 2023] [ 1974.166029] perf: interrupt took too long (4003 > 4001), lowering kernel.perf_event_max_sample_rate to 49000 [-- MARK -- Fri Feb 3 06:20:00 2023] [ 2218.843977] bash (17693): /proc/1/oom_adj is deprecated, please use /proc/1/oom_score_adj instead. 
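Note (annotation, not from the console output): the deprecation warning just above is printed when something writes to /proc/<pid>/oom_adj, the legacy interface with a -17..15 range; the current interface is /proc/<pid>/oom_score_adj with a -1000..1000 range, where -1000 exempts the process from the OOM killer. A minimal sketch of the non-deprecated write (set_oom_score_adj is an illustrative helper name):

    # Illustrative: adjust the OOM-killer score via /proc/<pid>/oom_score_adj
    # instead of the deprecated oom_adj file.
    def set_oom_score_adj(value: int, pid: str = "self") -> None:
        if not -1000 <= value <= 1000:
            raise ValueError("oom_score_adj must be in -1000..1000")
        with open(f"/proc/{pid}/oom_score_adj", "w") as f:
            f.write(str(value))

    set_oom_score_adj(500)   # make the current process a likelier OOM-kill target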
[ 2225.044078] LTP: starting kmsg01 [ 2225.124997] LTP kmsg01 TEST MESSAGE 917044861 prio: 0, facility: 0 [ 2225.149919] LTP kmsg01 TEST MESSAGE 1087125093 prio: 1, facility: 0 [ 2225.174169] LTP kmsg01 TEST MESSAGE 2033078291 prio: 2, facility: 0 [ 2225.198647] LTP kmsg01 TEST MESSAGE 29881992 prio: 3, facility: 0 [ 2225.289156] LTP kmsg01 TEST MESSAGE 762503792 prio: 0, facility: 1 [ 2225.303124] LTP kmsg01 TEST MESSAGE 425231809 prio: 1, facility: 1 [ 2225.317119] LTP kmsg01 TEST MESSAGE 1921767619 prio: 2, facility: 1 [ 2225.331191] LTP kmsg01 TEST MESSAGE 1181387060 prio: 3, facility: 1 [ 2225.398348] LTP kmsg01 TEST MESSAGE 552529397 prio: 0, facility: 2 [ 2225.412488] LTP kmsg01 TEST MESSAGE 1327546821 prio: 1, facility: 2 [ 2225.426638] LTP kmsg01 TEST MESSAGE 1650399543 prio: 2, facility: 2 [ 2225.440714] LTP kmsg01 TEST MESSAGE 1397892562 prio: 3, facility: 2 [ 2225.508589] LTP kmsg01 TEST MESSAGE 668653620 prio: 0, facility: 3 [ 2225.522747] LTP kmsg01 TEST MESSAGE 379050740 prio: 1, facility: 3 [ 2225.536976] LTP kmsg01 TEST MESSAGE 6641946 prio: 2, facility: 3 [ 2225.551079] LTP kmsg01 TEST MESSAGE 2140391649 prio: 3, facility: 3 [ 2225.618722] LTP kmsg01 TEST MESSAGE 125035395 prio: 0, facility: 4 [ 2225.633359] LTP kmsg01 TEST MESSAGE 1092162987 prio: 1, facility: 4 [ 2225.647536] LTP kmsg01 TEST MESSAGE 1803783776 prio: 2, facility: 4 [ 2225.661655] LTP kmsg01 TEST MESSAGE 1161292013 prio: 3, facility: 4 [ 2225.729547] LTP kmsg01 TEST MESSAGE 1986522304 prio: 0, facility: 5 [ 2225.744268] LTP kmsg01 TEST MESSAGE 1999149270 prio: 1, facility: 5 [ 2225.758907] LTP kmsg01 TEST MESSAGE 1244068127 prio: 2, facility: 5 [ 2225.773178] LTP kmsg01 TEST MESSAGE 978149709 prio: 3, facility: 5 [ 2225.841304] LTP kmsg01 TEST MESSAGE 670766550 prio: 0, facility: 6 [ 2225.855543] LTP kmsg01 TEST MESSAGE 538000716 prio: 1, facility: 6 [ 2225.869805] LTP kmsg01 TEST MESSAGE 1710670775 prio: 2, facility: 6 [ 2225.884508] LTP kmsg01 TEST MESSAGE 266114252 prio: 3, facility: 6 [ 2225.953047] LTP kmsg01 TEST MESSAGE 1815278171 prio: 0, facility: 7 [ 2225.967455] LTP kmsg01 TEST MESSAGE 228162846 prio: 1, facility: 7 [ 2225.982146] LTP kmsg01 TEST MESSAGE 858747538 prio: 2, facility: 7 [ 2225.996463] LTP kmsg01 TEST MESSAGE 524651447 prio: 3, facility: 7 [ 2226.066042] LTP kmsg01 TEST MESSAGE 1009995220 prio: 0, facility: 8 [ 2226.080467] LTP kmsg01 TEST MESSAGE 1954853360 prio: 1, facility: 8 [ 2226.094825] LTP kmsg01 TEST MESSAGE 552400558 prio: 2, facility: 8 [ 2226.109252] LTP kmsg01 TEST MESSAGE 310172496 prio: 3, facility: 8 [ 2226.178941] LTP kmsg01 TEST MESSAGE 481447830 prio: 0, facility: 9 [ 2226.193344] LTP kmsg01 TEST MESSAGE 1616921691 prio: 1, facility: 9 [ 2226.207708] LTP kmsg01 TEST MESSAGE 38364924 prio: 2, facility: 9 [ 2226.222252] LTP kmsg01 TEST MESSAGE 241696646 prio: 3, facility: 9 [ 2226.291963] LTP kmsg01 TEST MESSAGE 1611450461 prio: 0, facility: 10 [ 2226.306145] LTP kmsg01 TEST MESSAGE 117661985 prio: 1, facility: 10 [ 2226.320703] LTP kmsg01 TEST MESSAGE 1897022223 prio: 2, facility: 10 [ 2226.334917] LTP kmsg01 TEST MESSAGE 1808636377 prio: 3, facility: 10 [ 2226.404606] LTP kmsg01 TEST MESSAGE 199222322 prio: 0, facility: 11 [ 2226.419248] LTP kmsg01 TEST MESSAGE 1385739804 prio: 1, facility: 11 [ 2226.433457] LTP kmsg01 TEST MESSAGE 1746335387 prio: 2, facility: 11 [ 2226.447610] LTP kmsg01 TEST MESSAGE 1612779119 prio: 3, facility: 11 [ 2226.517834] LTP kmsg01 TEST MESSAGE 1704774683 prio: 0, facility: 12 [ 2226.532117] LTP kmsg01 TEST MESSAGE 1556288221 prio: 
1, facility: 12 [ 2226.546355] LTP kmsg01 TEST MESSAGE 476256106 prio: 2, facility: 12 [ 2226.561028] LTP kmsg01 TEST MESSAGE 925951242 prio: 3, facility: 12 [ 2226.620232] LTP kmsg01 TEST MESSAGE 318547712 prio: 0, facility: 13 [ 2226.632706] LTP kmsg01 TEST MESSAGE 24531381 prio: 1, facility: 13 [ 2226.644717] LTP kmsg01 TEST MESSAGE 761731258 prio: 2, facility: 13 [ 2226.656729] LTP kmsg01 TEST MESSAGE 1278689132 prio: 3, facility: 13 [ 2226.713219] LTP kmsg01 TEST MESSAGE 1433868102 prio: 0, facility: 14 [ 2226.724830] LTP kmsg01 TEST MESSAGE 511652030 prio: 1, facility: 14 [ 2226.736863] LTP kmsg01 TEST MESSAGE 223100668 prio: 2, facility: 14 [ 2226.748880] LTP kmsg01 TEST MESSAGE 840273871 prio: 3, facility: 14 [ 2226.806276] LTP kmsg01 TEST MESSAGE 2135832738 prio: 0, facility: 15 [ 2226.817987] LTP kmsg01 TEST MESSAGE 1660809551 prio: 1, facility: 15 [ 2226.829711] LTP kmsg01 TEST MESSAGE 147229298 prio: 2, facility: 15 [ 2226.841737] LTP kmsg01 TEST MESSAGE 1291921128 prio: 3, facility: 15 [ 2230.233240] LTP: starting rtc02 [ 2230.325355] LTP: starting umip_basic_test [ 2230.456188] LTP: starting abs01 [ 2230.538767] LTP: starting atof01 [ 2230.615283] LTP: starting float_bessel (float_bessel -v) [ 2236.099167] LTP: starting float_exp_log (float_exp_log -v) [ 2242.787761] LTP: starting float_iperb (float_iperb -v) [ 2245.467706] LTP: starting float_power (float_power -v) [ 2252.111854] LTP: starting float_trigo (float_trigo -v) [ 2258.046617] LTP: starting fptest01 [ 2258.117261] LTP: starting fptest02 [ 2258.196477] LTP: starting nextafter01 [ 2258.274793] LTP: starting fsx-linux (export TCbin=$LTPROOT/testcases/bin;fsxtest02 10000) [ 2279.762090] LTP: starting pipeio_1 (pipeio -T pipeio_1 -c 5 -s 4090 -i 100 -b -f x80) [ 2279.890752] LTP: starting pipeio_3 (pipeio -T pipeio_3 -c 5 -s 4090 -i 100 -u -b -f x80) [ 2279.962750] LTP: starting pipeio_4 (pipeio -T pipeio_4 -c 5 -s 4090 -i 100 -u -f x80) [ 2280.030782] LTP: starting pipeio_5 (pipeio -T pipeio_5 -c 5 -s 5000 -i 10 -b -f x80) [ 2280.085390] LTP: starting pipeio_6 (pipeio -T pipeio_6 -c 5 -s 5000 -i 10 -b -u -f x80) [ 2280.138330] LTP: starting pipeio_8 (pipeio -T pipeio_8 -c 5 -s 5000 -i 10 -u -f x80) [ 2280.191995] LTP: starting sem01 [ 2280.287149] LTP: starting sem02 [ 2300.384146] LTP: starting abort01 [ 2300.528119] LTP: starting accept01 [ 2300.633673] LTP: starting accept02 [ 2300.760245] LTP: starting accept4_01 [ 2300.899668] LTP: starting access01 [ 2302.037676] LTP: starting access02 [ 2302.544319] LTP: starting access03 [ 2302.668556] LTP: starting access04 [ 2302.846421] LTP: starting acct01 [ 2302.951813] Process accounting resumed [ 2302.990943] LTP: starting acct02 [ 2304.093760] Process accounting resumed [ 2304.110218] LTP: starting add_key01 [ 2304.238131] LTP: starting add_key02 [ 2304.306709] LTP: starting add_key03 [ 2304.439423] LTP: starting add_key04 [ 2304.517419] LTP: starting add_key05 [ 2310.108996] LTP: starting adjtimex01 [ 2310.209276] LTP: starting adjtimex02 [ 2310.323397] LTP: starting adjtimex03 [ 2310.408430] LTP: starting alarm02 [ 2310.485934] LTP: starting alarm03 [ 2310.564309] LTP: starting alarm05 [ 2312.635276] LTP: starting alarm06 [ 2315.698933] LTP: starting alarm07 [ 2318.768713] LTP: starting bind01 [ 2318.851994] LTP: starting bind02 [ 2318.929970] LTP: starting bind03 [ 2319.002026] LTP: starting bind04 [ 2319.190999] LTP: starting bind05 [ 2319.317748] LTP: starting bind06 [-- MARK -- Fri Feb 3 06:25:00 2023] [ 2497.206598] LTP: starting bpf_map01 [ 2497.323635] LTP: 
starting bpf_prog01 [ 2497.404265] LTP: starting bpf_prog02 [ 2497.486751] LTP: starting bpf_prog03 [ 2497.544727] LTP: starting bpf_prog04 [ 2497.628086] LTP: starting bpf_prog05 [ 2499.807082] LTP: starting bpf_prog06 [ 2502.014202] LTP: starting bpf_prog07 [ 2504.200714] LTP: starting brk01 [ 2504.305361] LTP: starting brk02 [ 2504.393229] LTP: starting capget01 [ 2504.471327] capability: warning: `capget01' uses 32-bit capabilities (legacy support in use) [ 2504.472902] capability: warning: `capget01' uses deprecated v2 capabilities in a way that may be insecure [ 2504.484723] LTP: starting capget02 [ 2504.573113] LTP: starting capset01 [ 2504.658533] LTP: starting capset02 [ 2504.739216] LTP: starting capset03 [ 2504.817965] LTP: starting capset04 [ 2504.900767] LTP: starting cacheflush01 [ 2504.962508] LTP: starting chdir01 [ 2505.333064] loop: module loaded [ 2505.391707] loop0: detected capacity change from 0 to 614400 [ 2506.071974] /dev/zero: Can't open blockdev [ 2506.188183] /dev/zero: Can't open blockdev [ 2506.263006] /dev/zero: Can't open blockdev [ 2506.340420] /dev/zero: Can't open blockdev [ 2506.776328] /dev/zero: Can't open blockdev [ 2507.532182] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2507.573801] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2507.684357] EXT4-fs (loop0): unmounting filesystem. [ 2509.845409] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2509.926358] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2510.039473] EXT4-fs (loop0): unmounting filesystem. [ 2510.757523] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2510.856450] EXT4-fs (loop0): unmounting filesystem. [ 2511.328032] XFS (loop0): Mounting V5 Filesystem [ 2511.393672] XFS (loop0): Ending clean mount [ 2511.560006] XFS (loop0): Unmounting Filesystem [ 2511.971816] LTP: starting chdir01A (symlink01 -T chdir01) [ 2512.076929] LTP: starting chdir04 [ 2512.158809] LTP: starting chmod01 [ 2512.262507] LTP: starting chmod01A (symlink01 -T chmod01) [ 2512.313386] LTP: starting chmod03 [ 2512.403911] LTP: starting chmod05 [ 2512.548378] LTP: starting chmod06 [ 2512.674749] LTP: starting chmod07 [ 2512.772972] LTP: starting chown01 [ 2512.847670] LTP: starting chown01_16 [ 2512.919420] LTP: starting chown02 [ 2512.998268] LTP: starting chown02_16 [ 2513.071717] LTP: starting chown03 [ 2513.154404] LTP: starting chown03_16 [ 2513.231708] LTP: starting chown04 [ 2513.343850] LTP: starting chown04_16 [ 2513.441511] LTP: starting chown05 [ 2513.538297] LTP: starting chown05_16 [ 2513.623828] LTP: starting chroot01 [ 2513.709092] LTP: starting chroot02 [ 2513.786288] LTP: starting chroot03 [ 2513.865303] LTP: starting chroot04 [ 2513.950375] LTP: starting clock_adjtime01 [ 2514.034245] LTP: starting clock_adjtime02 [ 2514.117181] LTP: starting clock_getres01 [ 2514.222065] LTP: starting clock_nanosleep01 [ 2515.830337] LTP: starting clock_nanosleep02 [ 2524.506019] LTP: starting clock_nanosleep03 [ 2524.795985] LTP: starting clock_nanosleep04 [ 2524.912302] LTP: starting clock_gettime01 [ 2525.640130] LTP: starting clock_gettime02 [ 2525.712184] LTP: starting clock_gettime03 [ 2526.157127] LTP: starting clock_gettime04 [ 2526.637126] LTP: starting leapsec01 [ 2529.708591] Clock: inserting leap second 23:59:60 UTC [ 2535.229658] LTP: starting clock_settime01 [ 2535.231023] systemd-journald[830]: Time jumped backwards, rotating. 
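Note (annotation, not from the console output): the leapsec01 and clock_settime tests above step CLOCK_REALTIME, which is why systemd-journald reports "Time jumped backwards, rotating." A minimal sketch of the same kind of clock step from Python (requires root / CAP_SYS_TIME; the 10-second offset is arbitrary):

    import time

    # Step the realtime clock backwards and then roughly restore it; this is
    # the kind of clock_settime(CLOCK_REALTIME, ...) call the LTP
    # clock_settime tests exercise.
    saved = time.clock_gettime(time.CLOCK_REALTIME)
    time.clock_settime(time.CLOCK_REALTIME, saved - 10)   # jump 10 s into the past
    time.clock_settime(time.CLOCK_REALTIME, saved)        # approximately restore it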
[ 2535.336896] LTP: starting clock_settime02 [ 2535.420680] LTP: starting clock_settime03 [ 2535.845007] systemd-journald[830]: Oldest entry in /run/log/journal/99e1b32cbaf74173bd2789197e86723f/system.journal is older than the configured file retention duration (1month), suggesting rotation. [ 2535.846185] systemd-journald[830]: /run/log/journal/99e1b32cbaf74173bd2789197e86723f/system.journal: Journal header limits reached or header out-of-date, rotating. [ 2538.497639] LTP: starting clone01 [ 2538.499012] systemd-journald[830]: Time jumped backwards, rotating. [ 2538.607822] LTP: starting clone02 [ 2538.680299] LTP: starting clone03 [ 2538.749706] LTP: starting clone04 [ 2538.819568] LTP: starting clone05 [ 2539.002678] LTP: starting clone06 [ 2539.076422] LTP: starting clone07 [ 2539.146925] LTP: starting clone08 [ 2539.233970] LTP: starting clone09 [ 2539.333455] LTP: starting clone301 [ 2539.451233] LTP: starting clone302 [ 2539.523601] LTP: starting close01 [ 2539.593141] LTP: starting close02 [ 2539.660458] LTP: starting close_range01 [ 2539.720721] loop0: detected capacity change from 0 to 614400 [ 2539.744514] /dev/zero: Can't open blockdev [ 2539.805520] /dev/zero: Can't open blockdev [ 2539.862985] /dev/zero: Can't open blockdev [ 2539.934235] /dev/zero: Can't open blockdev [ 2540.069141] /dev/zero: Can't open blockdev [ 2540.797534] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2540.839306] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2540.927673] EXT4-fs (loop0): unmounting filesystem. [ 2542.955544] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2543.038025] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2543.159723] EXT4-fs (loop0): unmounting filesystem. [ 2543.821664] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2543.932347] EXT4-fs (loop0): unmounting filesystem. 
[ 2545.128643] XFS (loop0): Mounting V5 Filesystem [ 2545.199201] XFS (loop0): Ending clean mount [ 2545.374302] XFS (loop0): Unmounting Filesystem [ 2545.779619] LTP: starting close_range02 [ 2545.883057] LTP: starting confstr01 [ 2545.974788] LTP: starting connect01 [ 2546.041331] LTP: starting connect02 [ 2551.464760] LTP: starting creat01 [ 2551.589521] LTP: starting creat03 [ 2551.694971] LTP: starting creat04 [ 2551.825789] LTP: starting creat05 [ 2553.763392] LTP: starting creat06 [ 2553.889299] LTP: starting creat07 [ 2554.092447] LTP: starting creat08 [ 2554.272429] LTP: starting dup01 [ 2554.344728] LTP: starting dup02 [ 2554.414764] LTP: starting dup03 [ 2554.504295] LTP: starting dup04 [ 2554.577388] LTP: starting dup05 [ 2554.644535] LTP: starting dup06 [ 2554.713873] LTP: starting dup07 [ 2554.775772] LTP: starting dup201 [ 2554.851118] LTP: starting dup202 [ 2554.944036] LTP: starting dup203 [ 2555.021092] LTP: starting dup204 [ 2555.090146] LTP: starting dup205 [ 2555.171045] LTP: starting dup206 [ 2555.238348] LTP: starting dup207 [ 2555.312262] LTP: starting dup3_01 [ 2555.382623] LTP: starting dup3_02 [ 2555.449988] LTP: starting epoll_create01 [ 2555.527621] LTP: starting epoll_create02 [ 2555.619026] LTP: starting epoll_create1_01 [ 2555.689165] LTP: starting epoll_create1_02 [ 2555.758091] LTP: starting epoll01 (epoll-ltp) [ 2650.191963] LTP: starting epoll_ctl01 [ 2650.288897] LTP: starting epoll_ctl02 [ 2650.356955] LTP: starting epoll_ctl03 [ 2650.429380] LTP: starting epoll_ctl04 [ 2650.494135] LTP: starting epoll_ctl05 [ 2650.557739] LTP: starting epoll_wait01 [ 2650.644362] LTP: starting epoll_wait03 [ 2650.726866] LTP: starting epoll_wait04 [ 2650.789261] LTP: starting epoll_pwait01 [ 2650.946172] LTP: starting epoll_pwait02 [ 2651.037591] LTP: starting epoll_pwait03 [ 2668.160154] LTP: starting epoll_pwait04 [ 2668.210211] LTP: starting epoll_pwait05 [ 2668.291708] LTP: starting eventfd01 [ 2668.434574] LTP: starting eventfd2_01 [ 2668.496073] LTP: starting eventfd2_02 [ 2668.544546] LTP: starting eventfd2_03 [ 2668.605138] LTP: starting execl01 [ 2668.708456] LTP: starting execle01 [ 2668.807754] LTP: starting execlp01 [ 2668.923689] LTP: starting execv01 [ 2669.019856] LTP: starting execve01 [ 2669.133866] LTP: starting execve02 [ 2669.298325] LTP: starting execve03 [ 2669.423058] LTP: starting execve04 [ 2669.582113] LTP: starting execve05 (execve05 -i 5 -n 32) [ 2670.366772] LTP: starting execve06 [ 2670.458608] process 'execve06' launched '/mnt/testarea/ltp/testcases/bin/execve06_child' with NULL argv: empty string added [ 2670.512253] LTP: starting execvp01 [ 2670.640499] LTP: starting execveat01 [ 2670.952622] LTP: starting execveat02 [ 2671.212233] LTP: starting execveat03 [ 2671.279450] loop0: detected capacity change from 0 to 614400 [ 2671.854416] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2671.899651] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2672.523069] EXT4-fs (loop0): unmounting filesystem. 
[ 2672.652149] LTP: starting exit01 [ 2672.730189] LTP: starting exit02 [ 2672.818009] LTP: starting exit_group01 [ 2672.893009] LTP: starting faccessat01 [ 2672.992211] LTP: starting fallocate01 [ 2673.077079] LTP: starting fallocate02 [ 2673.155865] LTP: starting fallocate03 [ 2673.214538] LTP: starting fallocate04 [ 2673.261508] loop0: detected capacity change from 0 to 614400 [ 2673.273391] /dev/zero: Can't open blockdev [ 2673.347340] /dev/zero: Can't open blockdev [ 2673.424268] /dev/zero: Can't open blockdev [ 2673.498632] /dev/zero: Can't open blockdev [ 2673.629763] /dev/zero: Can't open blockdev [ 2674.520992] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2674.560961] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2674.614770] EXT4-fs (loop0): unmounting filesystem. [ 2676.101675] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2676.159032] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2676.236150] EXT4-fs (loop0): unmounting filesystem. [ 2676.866719] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2677.366237] EXT4-fs (loop0): unmounting filesystem. [ 2677.858642] XFS (loop0): Mounting V5 Filesystem [ 2677.927101] XFS (loop0): Ending clean mount [ 2678.316540] XFS (loop0): Unmounting Filesystem [ 2678.690776] LTP: starting fallocate05 [ 2678.752668] loop0: detected capacity change from 0 to 614400 [ 2678.767037] /dev/zero: Can't open blockdev [ 2678.843243] /dev/zero: Can't open blockdev [ 2678.918169] /dev/zero: Can't open blockdev [ 2678.992334] /dev/zero: Can't open blockdev [ 2679.123979] /dev/zero: Can't open blockdev [ 2679.827401] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2679.866215] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2679.936162] EXT4-fs (loop0): unmounting filesystem. [ 2681.956771] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2682.015830] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2682.096622] EXT4-fs (loop0): unmounting filesystem. [ 2682.765994] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2708.553940] EXT4-fs (loop0): unmounting filesystem. [ 2709.646414] XFS (loop0): Mounting V5 Filesystem [ 2709.710983] XFS (loop0): Ending clean mount [ 2722.083725] XFS (loop0): Unmounting Filesystem [ 2723.329510] LTP: starting fallocate06 [ 2723.422026] loop0: detected capacity change from 0 to 614400 [ 2723.435180] /dev/zero: Can't open blockdev [ 2723.512481] /dev/zero: Can't open blockdev [ 2723.591612] /dev/zero: Can't open blockdev [ 2723.666746] /dev/zero: Can't open blockdev [ 2723.801197] /dev/zero: Can't open blockdev [ 2724.528707] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2724.567948] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2724.659572] EXT4-fs (loop0): unmounting filesystem. [-- MARK -- Fri Feb 3 06:30:00 2023] [ 2726.693690] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2726.753747] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2726.869184] EXT4-fs (loop0): unmounting filesystem. [ 2727.543861] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2754.692969] EXT4-fs (loop0): unmounting filesystem. 
[ 2755.805539] XFS (loop0): Mounting V5 Filesystem [ 2755.872676] XFS (loop0): Ending clean mount [ 2770.822328] XFS (loop0): Unmounting Filesystem [ 2772.081659] LTP: starting fsetxattr01 [ 2772.176258] loop0: detected capacity change from 0 to 614400 [ 2772.192467] /dev/zero: Can't open blockdev [ 2772.265371] /dev/zero: Can't open blockdev [ 2772.343161] /dev/zero: Can't open blockdev [ 2772.419315] /dev/zero: Can't open blockdev [ 2772.556392] /dev/zero: Can't open blockdev [ 2773.625105] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2773.673825] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2773.748402] EXT4-fs (loop0): unmounting filesystem. [ 2775.952714] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2776.031886] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2776.117543] EXT4-fs (loop0): unmounting filesystem. [ 2776.745562] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2776.825801] EXT4-fs (loop0): unmounting filesystem. [ 2777.283959] XFS (loop0): Mounting V5 Filesystem [ 2777.357416] XFS (loop0): Ending clean mount [ 2777.484121] XFS (loop0): Unmounting Filesystem [ 2777.835047] LTP: starting fsetxattr02 [ 2778.104872] brd: module loaded [ 2778.116422] block device autoloading is deprecated and will be removed. [ 2778.147158] LTP: starting posix_fadvise01 [ 2778.224423] LTP: starting posix_fadvise01_64 [ 2778.304983] LTP: starting posix_fadvise02 [ 2778.359085] LTP: starting posix_fadvise02_64 [ 2778.404080] LTP: starting posix_fadvise03 [ 2778.464822] LTP: starting posix_fadvise03_64 [ 2778.524733] LTP: starting posix_fadvise04 [ 2778.589432] LTP: starting posix_fadvise04_64 [ 2778.651919] LTP: starting fchdir01 [ 2778.738056] LTP: starting fchdir02 [ 2778.805050] LTP: starting fchdir03 [ 2778.886734] LTP: starting fchmod01 [ 2778.957775] LTP: starting fchmod02 [ 2779.040718] LTP: starting fchmod03 [ 2779.132063] LTP: starting fchmod04 [ 2779.212451] LTP: starting fchmod05 [ 2779.350154] LTP: starting fchmod06 [ 2779.459877] LTP: starting fchmodat01 [ 2779.568037] LTP: starting fchown01 [ 2779.638226] LTP: starting fchown01_16 [ 2779.713910] LTP: starting fchown02 [ 2779.784146] LTP: starting fchown02_16 [ 2779.853749] LTP: starting fchown03 [ 2779.939294] LTP: starting fchown03_16 [ 2780.019971] LTP: starting fchown04 [ 2780.111110] LTP: starting fchown04_16 [ 2780.220174] LTP: starting fchown05 [ 2780.293426] LTP: starting fchown05_16 [ 2780.361268] LTP: starting fchownat01 [ 2780.424613] LTP: starting fchownat02 [ 2780.494315] LTP: starting fcntl01 [ 2780.566702] LTP: starting fcntl01_64 [ 2780.638858] LTP: starting fcntl02 [ 2780.722823] LTP: starting fcntl02_64 [ 2780.794081] LTP: starting fcntl03 [ 2780.866924] LTP: starting fcntl03_64 [ 2780.942149] LTP: starting fcntl04 [ 2781.010911] LTP: starting fcntl04_64 [ 2781.077729] LTP: starting fcntl05 [ 2781.143842] LTP: starting fcntl05_64 [ 2781.211703] LTP: starting fcntl06 [ 2781.262880] LTP: starting fcntl06_64 [ 2781.314392] LTP: starting fcntl07 [ 2781.483748] LTP: starting fcntl07_64 [ 2781.660151] LTP: starting fcntl08 [ 2781.726143] LTP: starting fcntl08_64 [ 2781.788388] LTP: starting fcntl09 [ 2781.844491] LTP: starting fcntl09_64 [ 2781.902008] LTP: starting fcntl10 [ 2781.954378] LTP: starting fcntl10_64 [ 2782.015099] LTP: starting fcntl11 [ 2782.086142] LTP: starting fcntl11_64 [ 2782.180173] LTP: starting fcntl12 [ 2782.421765] LTP: starting fcntl12_64 [ 2782.674866] 
LTP: starting fcntl13 [ 2782.748467] LTP: starting fcntl13_64 [ 2782.831072] LTP: starting fcntl14 [ 2790.072005] LTP: starting fcntl14_64 [ 2797.337757] LTP: starting fcntl15 [ 2797.495193] LTP: starting fcntl15_64 [ 2797.664019] LTP: starting fcntl16 [ 2798.010745] LTP: starting fcntl16_64 [ 2798.327045] LTP: starting fcntl17 [ 2798.412176] LTP: starting fcntl17_64 [ 2798.513714] LTP: starting fcntl18 [ 2798.598513] LTP: starting fcntl18_64 [ 2798.671437] LTP: starting fcntl19 [ 2798.755471] LTP: starting fcntl19_64 [ 2798.828563] LTP: starting fcntl20 [ 2798.902068] LTP: starting fcntl20_64 [ 2798.980055] LTP: starting fcntl21 [ 2799.050908] LTP: starting fcntl21_64 [ 2799.129065] LTP: starting fcntl22 [ 2799.193885] LTP: starting fcntl22_64 [ 2799.263548] LTP: starting fcntl23 [ 2799.331664] LTP: starting fcntl23_64 [ 2799.398195] LTP: starting fcntl24 [ 2799.471730] LTP: starting fcntl24_64 [ 2799.523303] LTP: starting fcntl25 [ 2799.576997] LTP: starting fcntl25_64 [ 2799.631334] LTP: starting fcntl26 [ 2799.682281] LTP: starting fcntl26_64 [ 2799.747864] LTP: starting fcntl27 [ 2799.815930] LTP: starting fcntl27_64 [ 2799.866116] LTP: starting fcntl28 [ 2799.918667] LTP: starting fcntl28_64 [ 2799.978137] LTP: starting fcntl29 [ 2800.037594] LTP: starting fcntl29_64 [ 2800.094212] LTP: starting fcntl30 [ 2800.144393] LTP: starting fcntl30_64 [ 2800.195312] LTP: starting fcntl31 [ 2800.289870] LTP: starting fcntl31_64 [ 2800.393022] LTP: starting fcntl32 [ 2800.454624] LTP: starting fcntl32_64 [ 2800.509518] LTP: starting fcntl33 [ 2800.643755] LTP: starting fcntl33_64 [ 2800.784113] LTP: starting fcntl34 [ 2802.248769] LTP: starting fcntl34_64 [ 2803.728520] LTP: starting fcntl35 [ 2803.853741] LTP: starting fcntl35_64 [ 2803.950272] LTP: starting fcntl36 [ 2811.911035] LTP: starting fcntl36_64 [ 2813.734669] perf: interrupt took too long (5054 > 5003), lowering kernel.perf_event_max_sample_rate to 39000 [ 2819.893916] LTP: starting fcntl37 [ 2819.985549] LTP: starting fcntl37_64 [ 2820.058370] LTP: starting fcntl38 [ 2820.142205] LTP: starting fcntl38_64 [ 2820.228911] LTP: starting fcntl39 [ 2820.335047] LTP: starting fcntl39_64 [ 2820.409359] LTP: starting fdatasync01 [ 2820.505455] LTP: starting fdatasync02 [ 2820.552901] LTP: starting fdatasync03 [ 2820.620199] loop0: detected capacity change from 0 to 614400 [ 2820.636442] /dev/zero: Can't open blockdev [ 2820.710924] /dev/zero: Can't open blockdev [ 2820.790707] /dev/zero: Can't open blockdev [ 2820.864414] /dev/zero: Can't open blockdev [ 2820.999829] /dev/zero: Can't open blockdev [ 2821.884421] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2821.927641] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2826.540269] EXT4-fs (loop0): unmounting filesystem. [ 2828.614449] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2828.671957] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2833.701861] EXT4-fs (loop0): unmounting filesystem. [ 2834.602500] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2838.057700] EXT4-fs (loop0): unmounting filesystem. 
[ 2838.641068] XFS (loop0): Mounting V5 Filesystem [ 2838.707536] XFS (loop0): Ending clean mount [ 2840.596333] XFS (loop0): Unmounting Filesystem [ 2841.021632] LTP: starting fgetxattr01 [ 2841.086748] loop0: detected capacity change from 0 to 614400 [ 2841.106967] /dev/zero: Can't open blockdev [ 2841.192485] /dev/zero: Can't open blockdev [ 2841.266562] /dev/zero: Can't open blockdev [ 2841.341018] /dev/zero: Can't open blockdev [ 2841.471964] /dev/zero: Can't open blockdev [ 2842.145636] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2842.184536] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2842.250976] EXT4-fs (loop0): unmounting filesystem. [ 2844.207767] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2844.268956] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2844.356461] EXT4-fs (loop0): unmounting filesystem. [ 2844.995041] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2845.065648] EXT4-fs (loop0): unmounting filesystem. [ 2845.528790] XFS (loop0): Mounting V5 Filesystem [ 2845.593584] XFS (loop0): Ending clean mount [ 2845.734822] XFS (loop0): Unmounting Filesystem [ 2846.143672] LTP: starting fgetxattr02 [ 2846.292847] block device autoloading is deprecated and will be removed. [ 2846.326714] LTP: starting fgetxattr03 [ 2846.397542] LTP: starting finit_module01 [ 2846.468132] finit_module: loading out-of-tree module taints kernel. [ 2846.469428] finit_module: module verification failed: signature and/or required key missing - tainting kernel [ 2846.645390] LTP: starting finit_module02 [ 2846.900706] LTP: starting flistxattr01 [ 2846.981214] LTP: starting flistxattr02 [ 2847.059429] LTP: starting flistxattr03 [ 2847.133365] LTP: starting flock01 [ 2847.209103] LTP: starting flock02 [ 2847.283425] LTP: starting flock03 [ 2847.366131] LTP: starting flock04 [ 2847.467932] LTP: starting flock06 [ 2847.546661] LTP: starting fmtmsg01 [ 2847.627689] LTP: starting fork01 [ 2847.696071] LTP: starting fork02 [ 2847.749321] LTP: starting fork03 [ 2847.810553] LTP: starting fork04 [ 2847.883443] LTP: starting fork05 [ 2847.940567] LTP: starting fork06 [ 2853.342570] LTP: starting fork07 [ 2853.661747] LTP: starting fork08 [ 2853.738815] LTP: starting fork09 [ 2855.638043] LTP: starting fork10 [ 2855.729030] LTP: starting fork11 [ 2856.398705] LTP: starting fpathconf01 [ 2856.475417] LTP: starting fremovexattr01 [ 2856.537307] loop0: detected capacity change from 0 to 614400 [ 2856.554866] /dev/zero: Can't open blockdev [ 2856.634617] /dev/zero: Can't open blockdev [ 2856.712653] /dev/zero: Can't open blockdev [ 2856.787354] /dev/zero: Can't open blockdev [ 2856.917650] /dev/zero: Can't open blockdev [ 2857.626625] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2857.667141] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2857.740635] EXT4-fs (loop0): unmounting filesystem. [ 2859.725705] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2859.783938] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2859.885185] EXT4-fs (loop0): unmounting filesystem. [ 2860.518753] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2860.597154] EXT4-fs (loop0): unmounting filesystem. 
[ 2861.041595] XFS (loop0): Mounting V5 Filesystem [ 2861.110080] XFS (loop0): Ending clean mount [ 2861.255111] XFS (loop0): Unmounting Filesystem [ 2861.643831] LTP: starting fremovexattr02 [ 2861.728542] loop0: detected capacity change from 0 to 614400 [ 2861.743810] /dev/zero: Can't open blockdev [ 2861.820556] /dev/zero: Can't open blockdev [ 2861.897977] /dev/zero: Can't open blockdev [ 2861.971257] /dev/zero: Can't open blockdev [ 2862.105743] /dev/zero: Can't open blockdev [ 2862.827401] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2862.867588] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2862.931019] EXT4-fs (loop0): unmounting filesystem. [ 2864.489758] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2864.566686] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2864.657039] EXT4-fs (loop0): unmounting filesystem. [ 2865.322651] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2865.401993] EXT4-fs (loop0): unmounting filesystem. [ 2865.921440] XFS (loop0): Mounting V5 Filesystem [ 2865.991456] XFS (loop0): Ending clean mount [ 2866.144513] XFS (loop0): Unmounting Filesystem [ 2866.534615] LTP: starting fsconfig01 [ 2866.597922] loop0: detected capacity change from 0 to 614400 [ 2866.611046] /dev/zero: Can't open blockdev [ 2866.685575] /dev/zero: Can't open blockdev [ 2866.763436] /dev/zero: Can't open blockdev [ 2866.841983] /dev/zero: Can't open blockdev [ 2866.981294] /dev/zero: Can't open blockdev [ 2867.840978] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2867.877241] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2867.903161] EXT4-fs (loop0): unmounting filesystem. [ 2870.152995] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2870.214199] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2870.236991] EXT4-fs (loop0): unmounting filesystem. [ 2870.798909] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2870.821953] EXT4-fs (loop0): unmounting filesystem. [ 2871.980917] XFS (loop0): Mounting V5 Filesystem [ 2872.050852] XFS (loop0): Ending clean mount [ 2872.077659] XFS (loop0): Unmounting Filesystem [ 2872.415934] LTP: starting fsconfig02 [ 2872.492880] loop0: detected capacity change from 0 to 614400 [ 2872.589695] LTP: starting fsmount01 [ 2872.654032] loop0: detected capacity change from 0 to 614400 [ 2872.667871] /dev/zero: Can't open blockdev [ 2872.743356] /dev/zero: Can't open blockdev [ 2872.821059] /dev/zero: Can't open blockdev [ 2872.898835] /dev/zero: Can't open blockdev [ 2873.037759] /dev/zero: Can't open blockdev [ 2873.729224] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2873.772091] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2873.790627] EXT4-fs (loop0): unmounting filesystem. [ 2873.830654] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2873.867107] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2873.880044] EXT4-fs (loop0): unmounting filesystem. [ 2873.909006] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2873.941777] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2873.949694] EXT4-fs (loop0): unmounting filesystem. 
[ 2873.983325] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.024527] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.032290] EXT4-fs (loop0): unmounting filesystem. [ 2874.074515] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.115753] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.123614] EXT4-fs (loop0): unmounting filesystem. [ 2874.157543] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.198524] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.211208] EXT4-fs (loop0): unmounting filesystem. [ 2874.240006] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.281442] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.291962] EXT4-fs (loop0): unmounting filesystem. [ 2874.323660] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.364298] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.373762] EXT4-fs (loop0): unmounting filesystem. [ 2874.405810] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.446858] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.454655] EXT4-fs (loop0): unmounting filesystem. [ 2874.480316] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.521699] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.529531] EXT4-fs (loop0): unmounting filesystem. [ 2874.555827] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.595956] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.603898] EXT4-fs (loop0): unmounting filesystem. [ 2874.646099] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.686987] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.694850] EXT4-fs (loop0): unmounting filesystem. [ 2874.737278] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.778200] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.786075] EXT4-fs (loop0): unmounting filesystem. [ 2874.828242] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.869457] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.877284] EXT4-fs (loop0): unmounting filesystem. [ 2874.919600] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2874.960728] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2874.968913] EXT4-fs (loop0): unmounting filesystem. [ 2875.005557] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2875.043421] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2875.051152] EXT4-fs (loop0): unmounting filesystem. [ 2877.051502] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.113014] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.131726] EXT4-fs (loop0): unmounting filesystem. [ 2877.181799] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.240164] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.257906] EXT4-fs (loop0): unmounting filesystem. [ 2877.290555] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.351745] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. 
[ 2877.369784] EXT4-fs (loop0): unmounting filesystem. [ 2877.410293] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.464379] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.483999] EXT4-fs (loop0): unmounting filesystem. [ 2877.517634] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.579609] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.596162] EXT4-fs (loop0): unmounting filesystem. [ 2877.637505] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.708217] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.724393] EXT4-fs (loop0): unmounting filesystem. [ 2877.758390] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.821606] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2877.837796] EXT4-fs (loop0): unmounting filesystem. [ 2877.870832] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2877.991765] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.279162] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.400762] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.520852] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.641732] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.762776] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.874264] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2878.986815] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2881.191950] XFS (loop0): Mounting V5 Filesystem [ 2881.262081] XFS (loop0): Ending clean mount [ 2881.274297] XFS (loop0): Unmounting Filesystem [ 2881.558531] XFS (loop0): Mounting V5 Filesystem [ 2881.609515] XFS (loop0): Ending clean mount [ 2881.625867] XFS (loop0): Unmounting Filesystem [ 2881.893273] XFS (loop0): Mounting V5 Filesystem [ 2881.945389] XFS (loop0): Ending clean mount [ 2881.957539] XFS (loop0): Unmounting Filesystem [ 2882.245191] XFS (loop0): Mounting V5 Filesystem [ 2882.298402] XFS (loop0): Ending clean mount [ 2882.312139] XFS (loop0): Unmounting Filesystem [ 2882.577053] XFS (loop0): Mounting V5 Filesystem [ 2882.629625] XFS (loop0): Ending clean mount [ 2882.642915] XFS (loop0): Unmounting Filesystem [ 2882.911287] XFS (loop0): Mounting V5 Filesystem [ 2882.963576] XFS (loop0): Ending clean mount [ 2882.975789] XFS (loop0): Unmounting Filesystem [ 2883.234001] XFS (loop0): Mounting V5 Filesystem [ 2883.287692] XFS (loop0): Ending clean mount [ 2883.300820] XFS (loop0): Unmounting Filesystem [ 2883.553440] XFS (loop0): Mounting V5 Filesystem [ 2883.606528] XFS (loop0): Ending clean mount [ 2883.619379] XFS (loop0): Unmounting Filesystem [ 2883.873248] XFS (loop0): Mounting V5 Filesystem [ 2883.927044] XFS (loop0): Ending clean mount [ 2883.939348] XFS (loop0): Unmounting Filesystem [ 2884.232569] XFS (loop0): Mounting V5 Filesystem [ 2884.285708] XFS (loop0): Ending clean mount [ 2884.301065] XFS (loop0): Unmounting Filesystem [ 2884.552351] XFS (loop0): Mounting V5 Filesystem [ 2884.606727] XFS (loop0): Ending clean mount [ 2884.619506] XFS (loop0): Unmounting Filesystem [ 2884.899904] XFS (loop0): Mounting V5 Filesystem [ 2884.953650] XFS (loop0): Ending clean mount [ 2884.965606] XFS (loop0): Unmounting Filesystem [ 2885.249665] XFS (loop0): Mounting V5 Filesystem [ 
2885.307101] XFS (loop0): Ending clean mount [ 2885.321188] XFS (loop0): Unmounting Filesystem [ 2885.589013] XFS (loop0): Mounting V5 Filesystem [ 2885.643319] XFS (loop0): Ending clean mount [ 2885.655874] XFS (loop0): Unmounting Filesystem [ 2885.931036] XFS (loop0): Mounting V5 Filesystem [ 2885.984780] XFS (loop0): Ending clean mount [ 2885.997759] XFS (loop0): Unmounting Filesystem [ 2886.245251] XFS (loop0): Mounting V5 Filesystem [ 2886.299709] XFS (loop0): Ending clean mount [ 2886.312074] XFS (loop0): Unmounting Filesystem [ 2886.754365] LTP: starting fsmount02 [ 2886.823997] loop0: detected capacity change from 0 to 614400 [ 2886.837320] /dev/zero: Can't open blockdev [ 2886.914371] /dev/zero: Can't open blockdev [ 2886.988832] /dev/zero: Can't open blockdev [ 2887.066916] /dev/zero: Can't open blockdev [ 2887.205089] /dev/zero: Can't open blockdev [ 2887.915475] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2887.955267] EXT4-fs mount: 50 callbacks suppressed [ 2887.955285] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2887.970081] EXT4-fs (loop0): unmounting filesystem. [ 2889.993753] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2890.052849] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2890.065149] EXT4-fs (loop0): unmounting filesystem. [ 2890.595308] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2890.610386] EXT4-fs (loop0): unmounting filesystem. [ 2890.950479] XFS (loop0): Mounting V5 Filesystem [ 2891.024903] XFS (loop0): Ending clean mount [ 2891.043658] XFS (loop0): Unmounting Filesystem [ 2891.428595] LTP: starting fsopen01 [ 2891.496612] loop0: detected capacity change from 0 to 614400 [ 2891.508327] /dev/zero: Can't open blockdev [ 2891.589365] /dev/zero: Can't open blockdev [ 2891.666789] /dev/zero: Can't open blockdev [ 2891.743385] /dev/zero: Can't open blockdev [ 2891.883265] /dev/zero: Can't open blockdev [ 2892.577142] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2892.610787] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2892.629217] EXT4-fs (loop0): unmounting filesystem. [ 2892.656828] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2892.697459] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2892.714873] EXT4-fs (loop0): unmounting filesystem. [ 2894.722848] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2894.778618] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2894.802248] EXT4-fs (loop0): unmounting filesystem. [ 2894.834153] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2894.898025] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2894.915529] EXT4-fs (loop0): unmounting filesystem. [ 2895.492612] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2895.511744] EXT4-fs (loop0): unmounting filesystem. [ 2895.591922] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2895.599733] EXT4-fs (loop0): unmounting filesystem. 
[ 2896.548445] XFS (loop0): Mounting V5 Filesystem [ 2896.617694] XFS (loop0): Ending clean mount [ 2896.631246] XFS (loop0): Unmounting Filesystem [ 2896.885946] XFS (loop0): Mounting V5 Filesystem [ 2896.937465] XFS (loop0): Ending clean mount [ 2896.957229] XFS (loop0): Unmounting Filesystem [ 2897.316533] LTP: starting fsopen02 [ 2897.374541] loop0: detected capacity change from 0 to 614400 [ 2897.608838] LTP: starting fspick01 [ 2897.681920] loop0: detected capacity change from 0 to 614400 [ 2897.692283] /dev/zero: Can't open blockdev [ 2897.767682] /dev/zero: Can't open blockdev [ 2897.844395] /dev/zero: Can't open blockdev [ 2897.918996] /dev/zero: Can't open blockdev [ 2898.059821] /dev/zero: Can't open blockdev [ 2898.724687] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2898.762846] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2898.804241] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2898.813994] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2898.815091] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2898.816259] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2898.825412] EXT4-fs (loop0): unmounting filesystem. [ 2900.670096] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2900.732530] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2900.781018] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2900.792799] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2900.793904] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2900.795238] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2900.804397] EXT4-fs (loop0): unmounting filesystem. [ 2901.350236] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2901.387757] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2901.391970] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2901.393095] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2901.394532] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 2901.407831] EXT4-fs (loop0): unmounting filesystem. [ 2901.777812] XFS (loop0): Mounting V5 Filesystem [ 2901.844328] XFS (loop0): Ending clean mount [ 2902.047280] XFS (loop0): Unmounting Filesystem [ 2902.405977] LTP: starting fspick02 [ 2902.489828] loop0: detected capacity change from 0 to 614400 [ 2902.510689] /dev/zero: Can't open blockdev [ 2902.584927] /dev/zero: Can't open blockdev [ 2902.662951] /dev/zero: Can't open blockdev [ 2902.741019] /dev/zero: Can't open blockdev [ 2902.882088] /dev/zero: Can't open blockdev [ 2903.782027] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2903.822563] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2903.851825] EXT4-fs (loop0): unmounting filesystem. [ 2906.061122] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2906.119596] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2906.146042] EXT4-fs (loop0): unmounting filesystem. [ 2906.657187] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2906.689016] EXT4-fs (loop0): unmounting filesystem. 
[ 2907.445843] XFS (loop0): Mounting V5 Filesystem [ 2907.515294] XFS (loop0): Ending clean mount [ 2907.546326] XFS (loop0): Unmounting Filesystem [ 2907.894847] LTP: starting fstat02 [ 2907.987856] LTP: starting fstat02_64 [ 2908.088824] LTP: starting fstat03 [ 2908.186893] LTP: starting fstat03_64 [ 2908.286793] LTP: starting fstatat01 [ 2908.354175] LTP: starting fstatfs01 [ 2908.416323] LTP: starting fstatfs01_64 [ 2908.470965] LTP: starting fstatfs02 [ 2908.560437] LTP: starting fstatfs02_64 [ 2908.663061] LTP: starting fsync01 [ 2908.726000] loop0: detected capacity change from 0 to 614400 [ 2908.736162] /dev/zero: Can't open blockdev [ 2908.812727] /dev/zero: Can't open blockdev [ 2908.897799] /dev/zero: Can't open blockdev [ 2908.970843] /dev/zero: Can't open blockdev [ 2909.101537] /dev/zero: Can't open blockdev [ 2909.766296] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2909.811274] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2910.235892] EXT4-fs (loop0): unmounting filesystem. [ 2912.195301] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2912.256855] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2912.938619] EXT4-fs (loop0): unmounting filesystem. [ 2913.602410] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2914.325273] EXT4-fs (loop0): unmounting filesystem. [ 2915.676160] XFS (loop0): Mounting V5 Filesystem [ 2915.743387] XFS (loop0): Ending clean mount [ 2916.351209] XFS (loop0): Unmounting Filesystem [ 2916.768967] LTP: starting fsync02 [ 2922.665140] LTP: starting fsync03 [ 2922.751335] LTP: starting fsync04 [ 2922.827407] loop0: detected capacity change from 0 to 614400 [ 2922.845593] /dev/zero: Can't open blockdev [ 2922.920651] /dev/zero: Can't open blockdev [ 2922.996096] /dev/zero: Can't open blockdev [ 2923.072291] /dev/zero: Can't open blockdev [ 2923.204527] /dev/zero: Can't open blockdev [ 2924.154127] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 2924.200345] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 2928.836448] EXT4-fs (loop0): unmounting filesystem. [ 2931.128464] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 2931.193913] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2936.311847] EXT4-fs (loop0): unmounting filesystem. [ 2937.094060] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 2940.351153] EXT4-fs (loop0): unmounting filesystem. 
[ 2940.925038] XFS (loop0): Mounting V5 Filesystem [ 2940.994295] XFS (loop0): Ending clean mount [ 2942.799891] XFS (loop0): Unmounting Filesystem [ 2943.228293] LTP: starting ftruncate01 [ 2943.341266] LTP: starting ftruncate01_64 [ 2943.434524] LTP: starting ftruncate03 [ 2943.511494] LTP: starting ftruncate03_64 [ 2943.587167] LTP: starting ftruncate04 [ 2943.639823] LTP: starting ftruncate04_64 [ 2943.713451] LTP: starting futimesat01 [ 2943.785264] LTP: starting getcontext01 [ 2943.851334] LTP: starting getcpu01 [ 2943.926028] LTP: starting getcwd01 [ 2943.990754] LTP: starting getcwd02 [ 2944.053998] LTP: starting getcwd03 [ 2944.141314] LTP: starting getcwd04 [ 2949.222660] LTP: starting getdents01 [ 2949.340613] LTP: starting getdents02 [ 2949.472416] LTP: starting getdomainname01 [ 2949.553365] LTP: starting getdtablesize01 [ 2949.784088] LTP: starting getegid01 [ 2949.849252] LTP: starting getegid01_16 [ 2949.905347] LTP: starting getegid02 [ 2949.966627] LTP: starting getegid02_16 [ 2950.014831] LTP: starting geteuid01 [ 2950.077407] LTP: starting geteuid01_16 [ 2950.127801] LTP: starting geteuid02 [ 2950.190603] LTP: starting geteuid02_16 [ 2950.238623] LTP: starting getgid01 [ 2950.313277] LTP: starting getgid01_16 [ 2950.379258] LTP: starting getgid03 [ 2950.445420] LTP: starting getgid03_16 [ 2950.506280] LTP: starting getgroups01 [ 2950.562175] LTP: starting getgroups01_16 [ 2950.614139] LTP: starting getgroups03 [ 2950.711386] LTP: starting getgroups03_16 [ 2950.792549] LTP: starting gethostbyname_r01 [ 2950.832267] LTP: starting gethostid01 [ 2950.953795] LTP: starting gethostname01 [ 2951.004393] LTP: starting getitimer01 [ 2951.061117] LTP: starting getitimer02 [ 2951.127965] LTP: starting getitimer03 [ 2951.177220] LTP: starting getpagesize01 [ 2951.246851] LTP: starting getpeername01 [ 2951.308358] LTP: starting getpgid01 [ 2951.374650] LTP: starting getpgid02 [ 2951.427398] LTP: starting getpgrp01 [ 2951.481474] LTP: starting getpid01 [ 2952.327108] LTP: starting getpid02 [ 2952.404636] LTP: starting getppid01 [ 2952.494311] LTP: starting getppid02 [ 2952.573596] LTP: starting getpriority01 [ 2952.647900] LTP: starting getpriority02 [ 2952.708501] LTP: starting getrandom01 [ 2952.776334] LTP: starting getrandom02 [ 2952.846336] LTP: starting getrandom03 [ 2952.908507] LTP: starting getrandom04 [ 2952.976338] LTP: starting getresgid01 [ 2953.033881] LTP: starting getresgid01_16 [ 2953.087898] LTP: starting getresgid02 [ 2953.144314] LTP: starting getresgid02_16 [ 2953.198800] LTP: starting getresgid03 [ 2953.252973] LTP: starting getresgid03_16 [ 2953.318957] LTP: starting getresuid01 [ 2953.373641] LTP: starting getresuid01_16 [ 2953.426071] LTP: starting getresuid02 [ 2953.491552] LTP: starting getresuid02_16 [ 2953.545973] LTP: starting getresuid03 [ 2953.601220] LTP: starting getresuid03_16 [ 2953.676684] LTP: starting getrlimit01 [ 2953.749346] LTP: starting getrlimit02 [ 2953.814116] LTP: starting getrlimit03 [ 2953.876398] LTP: starting get_mempolicy01 [ 2953.960409] LTP: starting get_mempolicy02 [ 2954.036760] LTP: starting get_robust_list01 [ 2954.090866] LTP: starting getrusage01 [ 2954.159154] LTP: starting getrusage02 [ 2954.237138] LTP: starting getrusage03 [ 2956.917956] LTP: starting getrusage04 (sh -c "getrusage04 || true") [ 2957.497347] LTP: starting getsid01 [ 2957.575172] LTP: starting getsid02 [ 2957.636602] LTP: starting getsockname01 [ 2957.691271] LTP: starting getsockopt01 [ 2957.750001] LTP: starting getsockopt02 [ 2957.827893] LTP: starting 
gettid01 [ 2957.879276] LTP: starting gettimeofday01 [ 2957.938893] LTP: starting gettimeofday02 [ 3007.996920] LTP: starting getuid01 [ 3008.055995] LTP: starting getuid01_16 [ 3008.142481] LTP: starting getuid03 [ 3008.205535] LTP: starting getuid03_16 [ 3008.279017] LTP: starting getxattr01 [ 3008.337591] LTP: starting getxattr02 [ 3008.401974] LTP: starting getxattr03 [ 3008.467258] LTP: starting getxattr04 [ 3008.532512] loop0: detected capacity change from 0 to 614400 [ 3008.861996] XFS (loop0): Mounting V5 Filesystem [ 3008.930733] XFS (loop0): Ending clean mount [ 3009.507118] XFS (loop0): Unmounting Filesystem [ 3009.850186] LTP: starting init_module01 [ 3010.155828] LTP: starting init_module02 [ 3010.382595] LTP: starting ioctl03 [ 3010.547378] tun: Universal TUN/TAP device driver, 1.6 [ 3010.572654] LTP: starting ioctl04 [ 3010.627833] loop0: detected capacity change from 0 to 614400 [ 3011.365919] /dev/loop0: Can't open blockdev [ 3011.367726] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3011.383240] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3011.391220] EXT4-fs (loop0): unmounting filesystem. [ 3011.544105] LTP: starting ioctl05 [ 3011.605361] loop0: detected capacity change from 0 to 614400 [ 3011.701053] LTP: starting ioctl06 [ 3011.771239] loop0: detected capacity change from 0 to 614400 [ 3011.870790] LTP: starting ioctl07 [ 3011.948812] LTP: starting ioctl08 [ 3012.014510] LTP: starting ioctl_loop01 [ 3012.687342] loop0: detected capacity change from 0 to 20480 [ 3012.710605] loop0: p1 [ 3012.955055] LTP: starting ioctl_loop02 [ 3013.032221] loop0: detected capacity change from 0 to 20 [ 3013.103326] loop0: detected capacity change from 0 to 20 [ 3013.197141] LTP: starting ioctl_loop03 [ 3013.254514] loop0: detected capacity change from 0 to 20 [ 3013.340291] LTP: starting ioctl_loop04 [ 3013.399778] loop0: detected capacity change from 0 to 20 [ 3013.434055] loop0: detected capacity change from 20 to 10 [ 3013.564789] LTP: starting ioctl_loop05 [ 3013.678804] loop0: detected capacity change from 0 to 2048 [ 3013.813703] loop0: detected capacity change from 2048 to 2047 [ 3013.987895] LTP: starting ioctl_loop06 [ 3014.087638] loop0: detected capacity change from 0 to 2048 [ 3014.181141] LTP: starting ioctl_loop07 [ 3014.259171] loop0: detected capacity change from 0 to 2048 [ 3014.327334] loop0: detected capacity change from 0 to 2048 [ 3014.364640] loop0: detected capacity change from 2048 to 1024 [ 3014.380701] loop0: detected capacity change from 1024 to 2048 [ 3014.590664] loop0: detected capacity change from 0 to 2048 [ 3014.654182] loop0: detected capacity change from 0 to 1024 [ 3014.734681] LTP: starting ioctl_ns01 [ 3014.806876] LTP: starting ioctl_ns02 [ 3014.898729] LTP: starting ioctl_ns03 [ 3014.961896] LTP: starting ioctl_ns04 [ 3015.132707] LTP: starting ioctl_ns05 [ 3015.225053] LTP: starting ioctl_ns06 [ 3015.336844] LTP: starting ioctl_ns07 [ 3015.404790] LTP: starting inotify_init1_01 [ 3015.469189] LTP: starting inotify_init1_02 [ 3015.534573] LTP: starting inotify01 [ 3015.609872] LTP: starting inotify02 [ 3015.713012] LTP: starting inotify03 [ 3015.773415] loop0: detected capacity change from 0 to 614400 [ 3016.499535] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3016.539155] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3016.630440] EXT4-fs (loop0): unmounting filesystem. 
[ 3016.676918] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3016.717475] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3016.744665] EXT4-fs (loop0): unmounting filesystem. [ 3016.856240] LTP: starting inotify04 [ 3016.940182] LTP: starting inotify05 [ 3018.364857] LTP: starting inotify06 [ 3019.448762] LTP: starting inotify07 [ 3019.508629] loop0: detected capacity change from 0 to 614400 [ 3020.046946] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3020.095844] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [-- MARK -- Fri Feb 3 06:35:00 2023] [ 3032.453271] inotify07 (78722): drop_caches: 2 [ 3032.587314] EXT4-fs (loop0): unmounting filesystem. [ 3032.740270] LTP: starting inotify08 [ 3032.908376] loop0: detected capacity change from 0 to 614400 [ 3033.515051] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3033.556324] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3034.135372] inotify08 (78735): drop_caches: 2 [ 3034.243215] EXT4-fs (loop0): unmounting filesystem. [ 3034.380330] LTP: starting inotify09 [-- MARK -- Fri Feb 3 06:40:00 2023] [-- MARK -- Fri Feb 3 06:45:00 2023] [ 3784.706695] LTP: starting inotify10 [ 3784.850005] LTP: starting fanotify01 [ 3784.976755] loop0: detected capacity change from 0 to 614400 [ 3785.748823] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3785.787688] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3785.902974] EXT4-fs (loop0): unmounting filesystem. [ 3786.020353] LTP: starting fanotify02 [ 3786.140847] LTP: starting fanotify03 [ 3786.247964] loop0: detected capacity change from 0 to 614400 [ 3786.953938] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3786.994193] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3787.436705] EXT4-fs (loop0): unmounting filesystem. [ 3787.573236] LTP: starting fanotify04 [ 3787.703897] LTP: starting fanotify05 [ 3787.808266] loop0: detected capacity change from 0 to 614400 [ 3788.324433] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3788.364115] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3810.402538] EXT4-fs (loop0): unmounting filesystem. [ 3810.666824] LTP: starting fanotify06 [ 3810.779565] loop0: detected capacity change from 0 to 614400 [ 3811.272610] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3811.321588] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3811.474180] EXT4-fs (loop0): unmounting filesystem. [ 3811.595356] LTP: starting fanotify07 [ 3811.764781] LTP: starting fanotify08 [ 3811.830012] LTP: starting fanotify09 [ 3811.910898] loop0: detected capacity change from 0 to 614400 [ 3812.442332] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3812.481815] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3812.632505] EXT4-fs (loop0): unmounting filesystem. [ 3812.752040] LTP: starting fanotify10 [ 3812.844154] loop0: detected capacity change from 0 to 614400 [ 3813.336174] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3813.368074] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3813.711249] EXT4-fs (loop0): unmounting filesystem. 
[ 3813.747502] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3813.787850] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3813.861896] EXT4-fs (loop0): unmounting filesystem. [ 3813.892273] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3813.932410] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.015791] EXT4-fs (loop0): unmounting filesystem. [ 3814.044749] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.077481] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.152245] EXT4-fs (loop0): unmounting filesystem. [ 3814.182074] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.222730] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.305618] EXT4-fs (loop0): unmounting filesystem. [ 3814.334952] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.367526] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.424487] EXT4-fs (loop0): unmounting filesystem. [ 3814.445430] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.476365] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.527776] EXT4-fs (loop0): unmounting filesystem. [ 3814.558216] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.599468] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.667722] EXT4-fs (loop0): unmounting filesystem. [ 3814.695631] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.736201] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.811013] EXT4-fs (loop0): unmounting filesystem. [ 3814.839831] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3814.881483] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3814.947444] EXT4-fs (loop0): unmounting filesystem. [ 3814.986035] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.018934] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.092523] EXT4-fs (loop0): unmounting filesystem. [ 3815.121818] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.163223] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.229393] EXT4-fs (loop0): unmounting filesystem. [ 3815.259109] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.291446] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.366046] EXT4-fs (loop0): unmounting filesystem. [ 3815.403672] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.444747] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.502841] EXT4-fs (loop0): unmounting filesystem. [ 3815.533133] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.573113] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.640334] EXT4-fs (loop0): unmounting filesystem. [ 3815.668666] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.701721] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3815.769095] EXT4-fs (loop0): unmounting filesystem. [ 3815.797905] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.838279] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. 
[ 3815.906106] EXT4-fs (loop0): unmounting filesystem. [ 3815.942422] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3815.975257] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.041398] EXT4-fs (loop0): unmounting filesystem. [ 3816.070392] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.103381] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.161046] EXT4-fs (loop0): unmounting filesystem. [ 3816.191271] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.231595] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.298102] EXT4-fs (loop0): unmounting filesystem. [ 3816.336337] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.368658] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.443020] EXT4-fs (loop0): unmounting filesystem. [ 3816.473134] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.514012] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.579737] EXT4-fs (loop0): unmounting filesystem. [ 3816.697461] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.733607] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.808153] EXT4-fs (loop0): unmounting filesystem. [ 3816.829512] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3816.870356] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3816.937069] EXT4-fs (loop0): unmounting filesystem. [ 3816.970037] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3817.002354] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3817.077329] EXT4-fs (loop0): unmounting filesystem. [ 3817.107246] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3817.147909] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3817.450075] EXT4-fs (loop0): unmounting filesystem. [ 3817.491392] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3817.527203] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3817.593267] EXT4-fs (loop0): unmounting filesystem. [ 3817.705657] LTP: starting fanotify11 [ 3817.814855] LTP: starting fanotify12 [ 3818.122469] LTP: starting fanotify13 [ 3818.205240] loop0: detected capacity change from 0 to 614400 [ 3818.218588] /dev/zero: Can't open blockdev [ 3818.292550] /dev/zero: Can't open blockdev [ 3818.366586] /dev/zero: Can't open blockdev [ 3818.444768] /dev/zero: Can't open blockdev [ 3818.648257] /dev/zero: Can't open blockdev [ 3819.509313] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3819.555861] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3819.672178] EXT4-fs (loop0): unmounting filesystem. [ 3821.525095] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3821.582887] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3821.732388] EXT4-fs (loop0): unmounting filesystem. [ 3822.380470] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3822.519793] EXT4-fs (loop0): unmounting filesystem. 
[ 3824.077464] XFS (loop0): Mounting V5 Filesystem [ 3824.141812] XFS (loop0): Ending clean mount [ 3824.342230] XFS (loop0): Unmounting Filesystem [ 3824.782947] LTP: starting fanotify14 [ 3824.863494] loop0: detected capacity change from 0 to 614400 [ 3824.875307] /dev/zero: Can't open blockdev [ 3824.949929] /dev/zero: Can't open blockdev [ 3825.023096] /dev/zero: Can't open blockdev [ 3825.103202] /dev/zero: Can't open blockdev [ 3825.243376] /dev/zero: Can't open blockdev [ 3825.897896] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3825.945031] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3826.018197] EXT4-fs (loop0): unmounting filesystem. [ 3828.037103] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3828.088177] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3828.198216] EXT4-fs (loop0): unmounting filesystem. [ 3828.787453] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3828.874359] EXT4-fs (loop0): unmounting filesystem. [ 3829.425559] XFS (loop0): Mounting V5 Filesystem [ 3829.493944] XFS (loop0): Ending clean mount [ 3829.647175] XFS (loop0): Unmounting Filesystem [ 3830.017541] LTP: starting fanotify15 [ 3830.100490] loop0: detected capacity change from 0 to 614400 [ 3830.113652] /dev/zero: Can't open blockdev [ 3830.188653] /dev/zero: Can't open blockdev [ 3830.262700] /dev/zero: Can't open blockdev [ 3830.338880] /dev/zero: Can't open blockdev [ 3830.470918] /dev/zero: Can't open blockdev [ 3831.175314] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3831.221673] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3831.345185] EXT4-fs (loop0): unmounting filesystem. [ 3833.287308] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3833.356165] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3833.460773] EXT4-fs (loop0): unmounting filesystem. [ 3834.124888] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3834.229285] EXT4-fs (loop0): unmounting filesystem. [ 3834.696187] XFS (loop0): Mounting V5 Filesystem [ 3834.760738] XFS (loop0): Ending clean mount [ 3834.937652] XFS (loop0): Unmounting Filesystem [ 3835.351552] LTP: starting fanotify16 [ 3835.411751] loop0: detected capacity change from 0 to 614400 [ 3835.424625] /dev/zero: Can't open blockdev [ 3835.497649] /dev/zero: Can't open blockdev [ 3835.576468] /dev/zero: Can't open blockdev [ 3835.652470] /dev/zero: Can't open blockdev [ 3835.784709] /dev/zero: Can't open blockdev [ 3836.716990] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3836.756187] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3836.971494] EXT4-fs (loop0): unmounting filesystem. [ 3838.943012] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3839.004321] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3839.225705] EXT4-fs (loop0): unmounting filesystem. [ 3839.880488] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3840.077599] EXT4-fs (loop0): unmounting filesystem. 
[ 3841.073031] XFS (loop0): Mounting V5 Filesystem [ 3841.138450] XFS (loop0): Ending clean mount [ 3841.396550] XFS (loop0): Unmounting Filesystem [ 3841.853477] LTP: starting fanotify17 [ 3841.931547] loop0: detected capacity change from 0 to 614400 [ 3842.509603] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3842.545742] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3842.844452] EXT4-fs (loop0): unmounting filesystem. [ 3842.966320] LTP: starting fanotify18 [ 3843.044983] loop0: detected capacity change from 0 to 614400 [ 3843.576773] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3843.615871] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3843.681862] EXT4-fs (loop0): unmounting filesystem. [ 3843.800307] LTP: starting fanotify19 [ 3843.886661] loop0: detected capacity change from 0 to 614400 [ 3844.387402] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3844.418771] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3844.535279] EXT4-fs (loop0): unmounting filesystem. [ 3844.654752] LTP: starting fanotify20 [ 3844.725322] loop0: detected capacity change from 0 to 614400 [ 3844.735456] /dev/zero: Can't open blockdev [ 3844.811512] /dev/zero: Can't open blockdev [ 3844.886851] /dev/zero: Can't open blockdev [ 3844.962284] /dev/zero: Can't open blockdev [ 3845.094539] /dev/zero: Can't open blockdev [ 3845.376641] LTP: starting fanotify21 [ 3845.459381] loop0: detected capacity change from 0 to 614400 [ 3845.470501] /dev/zero: Can't open blockdev [ 3845.545068] /dev/zero: Can't open blockdev [ 3845.620272] /dev/zero: Can't open blockdev [ 3845.700322] /dev/zero: Can't open blockdev [ 3845.833481] /dev/zero: Can't open blockdev [ 3846.116581] LTP: starting fanotify22 [ 3846.205459] loop0: detected capacity change from 0 to 614400 [ 3846.635542] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3846.680349] EXT4-fs (loop0): unmounting filesystem. 
[ 3846.797044] LTP: starting ioperm01 [ 3846.894338] LTP: starting ioperm02 [ 3846.971611] LTP: starting iopl01 [ 3847.043027] LTP: starting iopl02 [ 3847.111162] LTP: starting ioprio_get01 [ 3847.184876] LTP: starting ioprio_set01 [ 3847.285533] LTP: starting ioprio_set02 [ 3847.364391] LTP: starting ioprio_set03 [ 3847.438224] LTP: starting io_cancel01 [ 3847.549972] LTP: starting io_cancel02 [ 3847.634548] LTP: starting io_destroy01 [ 3847.706068] LTP: starting io_destroy02 [ 3847.769713] LTP: starting io_getevents01 [ 3847.857233] LTP: starting io_getevents02 [ 3847.949466] LTP: starting io_pgetevents01 [ 3848.039592] LTP: starting io_pgetevents02 [ 3848.129667] LTP: starting io_setup01 [ 3848.217722] LTP: starting io_setup02 [ 3848.294504] LTP: starting io_submit01 [ 3848.396778] LTP: starting io_submit02 [ 3848.489473] LTP: starting io_submit03 [ 3848.585055] LTP: starting keyctl01 [ 3848.663419] LTP: starting keyctl02 [ 3881.254370] perf: interrupt took too long (6340 > 6317), lowering kernel.perf_event_max_sample_rate to 31000 [ 3888.635202] LTP: starting keyctl03 [ 3888.753441] LTP: starting keyctl04 [ 3888.841938] LTP: starting keyctl05 [ 3889.862649] LTP: starting keyctl06 [ 3889.928853] LTP: starting keyctl07 [ 3890.101008] LTP: starting keyctl08 [ 3890.174996] LTP: starting keyctl09 [ 3890.234634] LTP: starting kcmp01 [ 3890.355633] LTP: starting kcmp02 [ 3890.434057] LTP: starting kcmp03 [ 3890.515215] LTP: starting kill02 [ 3900.585469] LTP: starting kill03 [ 3900.668481] LTP: starting kill05 [ 3900.749132] LTP: starting kill06 [ 3900.834728] LTP: starting kill07 [ 3901.890797] LTP: starting kill08 [ 3901.983085] LTP: starting kill09 [ 3902.043093] LTP: starting kill10 [ 3903.154712] LTP: starting kill11 [ 3903.498171] LTP: starting kill12 [ 3903.648531] LTP: starting kill13 [ 3903.704556] LTP: starting lchown01 [ 3903.774133] LTP: starting lchown01_16 [ 3903.845122] LTP: starting lchown02 [ 3903.920766] LTP: starting lchown03 [ 3903.979289] loop0: detected capacity change from 0 to 614400 [ 3904.552066] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3904.567134] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3904.583110] EXT4-fs (loop0): unmounting filesystem. [ 3904.785459] LTP: starting lchown02_16 [ 3904.872066] LTP: starting lchown03_16 [ 3904.935935] loop0: detected capacity change from 0 to 614400 [ 3905.510846] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3905.526247] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3905.537272] EXT4-fs (loop0): unmounting filesystem. [ 3905.730197] LTP: starting lgetxattr01 [ 3905.812618] LTP: starting lgetxattr02 [ 3905.881120] LTP: starting link01 (symlink01 -T link01) [ 3905.932951] LTP: starting link02 [ 3906.117723] LTP: starting link03 [ 3906.202699] LTP: starting link04 [ 3906.317850] LTP: starting link05 [ 3907.058340] LTP: starting link08 [ 3907.180463] LTP: starting linkat01 [ 3907.322922] LTP: starting linkat02 [ 3907.377080] loop0: detected capacity change from 0 to 614400 [ 3907.958828] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3908.000039] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [-- MARK -- Fri Feb 3 06:50:00 2023] [ 3939.258825] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 3939.279352] EXT4-fs (loop0): unmounting filesystem. 
[ 3942.396527] LTP: starting listen01 [ 3942.470027] LTP: starting listxattr01 [ 3942.562859] LTP: starting listxattr02 [ 3942.638965] LTP: starting listxattr03 [ 3942.716806] LTP: starting llistxattr01 [ 3942.796922] LTP: starting llistxattr02 [ 3942.878422] LTP: starting llistxattr03 [ 3942.950629] LTP: starting llseek01 [ 3943.030365] LTP: starting llseek02 [ 3943.100829] LTP: starting llseek03 [ 3943.174787] LTP: starting lremovexattr01 [ 3943.247522] loop0: detected capacity change from 0 to 614400 [ 3943.257639] /dev/zero: Can't open blockdev [ 3943.330617] /dev/zero: Can't open blockdev [ 3943.403583] /dev/zero: Can't open blockdev [ 3943.479623] /dev/zero: Can't open blockdev [ 3943.611254] /dev/zero: Can't open blockdev [ 3944.554662] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3944.594133] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3944.665131] EXT4-fs (loop0): unmounting filesystem. [ 3946.680149] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3946.741478] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3946.830412] EXT4-fs (loop0): unmounting filesystem. [ 3947.430781] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3947.511147] EXT4-fs (loop0): unmounting filesystem. [ 3947.936461] XFS (loop0): Mounting V5 Filesystem [ 3948.001156] XFS (loop0): Ending clean mount [ 3948.150870] XFS (loop0): Unmounting Filesystem [ 3948.552606] LTP: starting lseek01 [ 3948.640240] LTP: starting lseek02 [ 3948.704959] LTP: starting lseek07 [ 3948.752925] LTP: starting lseek11 [ 3949.217909] LTP: starting lstat01A (symlink01 -T lstat01) [ 3949.262544] LTP: starting lstat01A_64 (symlink01 -T lstat01_64) [ 3949.310136] LTP: starting lstat01 [ 3949.388180] LTP: starting lstat01_64 [ 3949.460291] LTP: starting lstat02 [ 3949.551377] LTP: starting lstat02_64 [ 3949.647347] LTP: starting mallinfo02 [ 3949.715106] LTP: starting mallinfo2_01 [ 3949.783036] LTP: starting mallopt01 [ 3949.855039] LTP: starting mbind01 [ 3951.030085] LTP: starting mbind02 [ 3951.131921] LTP: starting mbind03 [ 3951.214284] LTP: starting mbind04 [ 3951.420913] LTP: starting memset01 [ 3951.527317] LTP: starting memcmp01 [ 3951.626275] LTP: starting memcpy01 [ 3951.700118] LTP: starting migrate_pages01 [ 3952.105924] LTP: starting migrate_pages02 [ 3953.983961] LTP: starting migrate_pages03 [ 3956.177283] LTP: starting mlockall01 [ 3956.258518] LTP: starting mlockall02 [ 3956.319191] LTP: starting mlockall03 [ 3956.376880] LTP: starting mkdir02 [ 3956.537422] LTP: starting mkdir03 [ 3956.621935] LTP: starting mkdir04 [ 3956.704314] LTP: starting mkdir05 [ 3956.779276] LTP: starting mkdir05A (symlink01 -T mkdir05) [ 3956.817595] LTP: starting mkdir09 [ 3956.876553] loop0: detected capacity change from 0 to 614400 [ 3956.896346] /dev/zero: Can't open blockdev [ 3956.970168] /dev/zero: Can't open blockdev [ 3957.042404] /dev/zero: Can't open blockdev [ 3957.117189] /dev/zero: Can't open blockdev [ 3957.248512] /dev/zero: Can't open blockdev [ 3957.965191] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3958.004462] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3959.121670] EXT4-fs (loop0): unmounting filesystem. [ 3961.131917] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3961.192402] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3962.358525] EXT4-fs (loop0): unmounting filesystem. 
[ 3963.063472] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3964.187574] EXT4-fs (loop0): unmounting filesystem. [ 3964.660670] XFS (loop0): Mounting V5 Filesystem [ 3964.726226] XFS (loop0): Ending clean mount [ 3965.901602] XFS (loop0): Unmounting Filesystem [ 3967.296033] LTP: starting mkdirat01 [ 3967.384259] LTP: starting mkdirat02 [ 3967.477419] LTP: starting mknod01 [ 3967.556546] LTP: starting mknod02 [ 3967.631013] LTP: starting mknod03 [ 3967.695725] LTP: starting mknod04 [ 3967.769156] LTP: starting mknod05 [ 3967.849836] LTP: starting mknod06 [ 3967.913489] LTP: starting mknod07 [ 3967.961082] loop0: detected capacity change from 0 to 614400 [ 3968.769302] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3968.785446] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3968.809812] EXT4-fs (loop0): unmounting filesystem. [ 3968.956486] LTP: starting mknod08 [ 3969.049212] LTP: starting mknod09 [ 3969.108700] LTP: starting mknodat01 [ 3969.180483] LTP: starting mknodat02 [ 3969.248076] loop0: detected capacity change from 0 to 614400 [ 3969.774359] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3969.790404] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3969.819206] EXT4-fs (loop0): unmounting filesystem. [ 3969.964528] LTP: starting mlock01 [ 3970.046611] LTP: starting mlock02 [ 3970.109723] LTP: starting mlock03 (mlock03 -i 20) [ 3970.269766] LTP: starting mlock04 [ 3970.325382] LTP: starting mlock201 [ 3970.393377] LTP: starting mlock202 [ 3970.462807] LTP: starting mlock203 [ 3970.527329] LTP: starting qmm01 (mmap001 -m 1) [ 3970.620612] LTP: starting mmap01 [ 3970.792064] LTP: starting mmap02 [ 3970.860880] LTP: starting mmap03 [ 3970.917259] LTP: starting mmap04 [ 3970.976812] LTP: starting mmap05 [ 3971.031167] LTP: starting mmap06 [ 3971.084324] LTP: starting mmap07 [ 3971.137160] LTP: starting mmap08 [ 3971.196828] LTP: starting mmap09 [ 3971.288600] LTP: starting mmap12 [ 3971.382366] LTP: starting mmap13 [ 3971.435457] LTP: starting mmap14 [ 3971.506805] LTP: starting mmap15 [ 3971.568038] LTP: starting mmap16 [ 3971.628035] loop0: detected capacity change from 0 to 614400 [ 3972.005051] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3989.092188] EXT4-fs (loop0): unmounting filesystem. 
[ 3989.317501] LTP: starting mmap17 [ 3989.395101] LTP: starting mmap18 [ 3989.491376] mmap18[159993]: segfault at 7f81cc7b6ff8 ip 0000000000404e06 sp 00007f81cc7b7000 error 6 in mmap18[404000+1a000] [ 3989.493003] Code: 83 c4 18 c3 48 89 0d 29 ae 02 00 e8 d4 ff ff ff eb ed 66 90 48 83 ec 18 49 89 f9 4c 8d 44 24 08 48 89 7c 24 08 4c 39 c7 7f 0a e5 ff ff ff 48 83 c4 18 c3 31 c0 b9 10 e0 41 00 ba 10 00 00 00 [ 3989.505347] mmap18[159995]: segfault at 7f81cc7aaff8 ip 0000000000404e06 sp 00007f81cc7ab000 error 6 in mmap18[404000+1a000] [ 3989.506556] Code: 83 c4 18 c3 48 89 0d 29 ae 02 00 e8 d4 ff ff ff eb ed 66 90 48 83 ec 18 49 89 f9 4c 8d 44 24 08 48 89 7c 24 08 4c 39 c7 7f 0a e5 ff ff ff 48 83 c4 18 c3 31 c0 b9 10 e0 41 00 ba 10 00 00 00 [ 3989.523260] LTP: starting mmap19 [ 3989.618634] LTP: starting modify_ldt01 [ 3989.672331] LTP: starting modify_ldt02 [ 3989.719739] LTP: starting modify_ldt03 [ 3989.775691] LTP: starting mount01 [ 3989.826343] loop0: detected capacity change from 0 to 614400 [ 3990.344505] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3990.383647] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3990.402456] EXT4-fs (loop0): unmounting filesystem. [ 3990.556794] LTP: starting mount02 [ 3990.616939] loop0: detected capacity change from 0 to 614400 [ 3991.428600] char_device: Can't open blockdev [ 3991.432602] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3991.472644] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3991.491540] EXT4-fs (loop0): unmounting filesystem. [ 3991.529016] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3991.567511] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3991.610309] EXT4-fs (loop0): unmounting filesystem. [ 3991.636816] No source specified [ 3991.639458] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3991.679260] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3991.685004] EXT4-fs (loop0): unmounting filesystem. [ 3991.799474] LTP: starting mount03 [ 3991.854555] loop0: detected capacity change from 0 to 614400 [ 3992.389621] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3992.435925] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3992.594410] EXT4-fs (loop0): unmounting filesystem. [ 3992.633882] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3992.649273] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3992.660327] EXT4-fs (loop0): unmounting filesystem. [ 3992.670038] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3992.706186] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3992.755930] EXT4-fs (loop0): unmounting filesystem. [ 3992.784924] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3992.826047] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3992.867238] EXT4-fs (loop0): unmounting filesystem. [ 3992.904308] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3992.946170] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3993.032933] EXT4-fs (loop0): unmounting filesystem. [ 3993.066943] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3993.083456] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3993.112933] EXT4-fs (loop0): re-mounted. Quota mode: none. 
[ 3993.152760] EXT4-fs (loop0): unmounting filesystem. [ 3993.181483] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3993.223301] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3993.289803] EXT4-fs (loop0): unmounting filesystem. [ 3993.327190] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3993.368296] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3994.434427] EXT4-fs (loop0): unmounting filesystem. [ 3994.564478] LTP: starting mount04 [ 3994.618272] loop0: detected capacity change from 0 to 614400 [ 3995.232390] LTP: starting mount05 [ 3995.315200] LTP: starting mount06 [ 3995.380151] loop0: detected capacity change from 0 to 614400 [ 3995.910324] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3995.952861] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3995.970708] EXT4-fs (loop0): unmounting filesystem. [ 3996.130621] LTP: starting mount_setattr01 [ 3996.185824] loop0: detected capacity change from 0 to 614400 [ 3996.198635] /dev/zero: Can't open blockdev [ 3996.274304] /dev/zero: Can't open blockdev [ 3996.352665] /dev/zero: Can't open blockdev [ 3996.428450] /dev/zero: Can't open blockdev [ 3996.563780] /dev/zero: Can't open blockdev [ 3997.208550] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 3997.248689] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 3997.305285] EXT4-fs (loop0): unmounting filesystem. [ 3999.283171] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 3999.342402] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 3999.394840] EXT4-fs (loop0): unmounting filesystem. [ 4000.014228] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4000.064484] EXT4-fs (loop0): unmounting filesystem. [ 4000.412549] XFS (loop0): Mounting V5 Filesystem [ 4000.476965] XFS (loop0): Ending clean mount [ 4000.520691] XFS (loop0): Unmounting Filesystem [ 4000.883211] LTP: starting move_mount01 [ 4000.970988] loop0: detected capacity change from 0 to 614400 [ 4000.982818] /dev/zero: Can't open blockdev [ 4001.060324] /dev/zero: Can't open blockdev [ 4001.144330] /dev/zero: Can't open blockdev [ 4001.224857] /dev/zero: Can't open blockdev [ 4001.361027] /dev/zero: Can't open blockdev [ 4002.038871] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.079510] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.096515] EXT4-fs (loop0): unmounting filesystem. [ 4002.136508] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.174278] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.184811] EXT4-fs (loop0): unmounting filesystem. [ 4002.217113] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.257612] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.265944] EXT4-fs (loop0): unmounting filesystem. [ 4002.299917] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.341014] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.348850] EXT4-fs (loop0): unmounting filesystem. [ 4002.382803] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.424375] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.432334] EXT4-fs (loop0): unmounting filesystem. 
[ 4002.465846] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4002.507499] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4002.517951] EXT4-fs (loop0): unmounting filesystem. [ 4004.511567] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4004.577087] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4004.598124] EXT4-fs (loop0): unmounting filesystem. [ 4004.635535] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4004.694632] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4004.712946] EXT4-fs (loop0): unmounting filesystem. [ 4004.752041] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4004.814802] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4004.833248] EXT4-fs (loop0): unmounting filesystem. [ 4004.878432] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4004.934638] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4004.952231] EXT4-fs (loop0): unmounting filesystem. [ 4004.985312] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4005.047772] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4005.063814] EXT4-fs (loop0): unmounting filesystem. [ 4005.106361] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4005.169292] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4005.184968] EXT4-fs (loop0): unmounting filesystem. [ 4005.756539] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4005.766654] EXT4-fs (loop0): unmounting filesystem. [ 4005.847881] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4005.864130] EXT4-fs (loop0): unmounting filesystem. [ 4005.951487] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4005.959143] EXT4-fs (loop0): unmounting filesystem. [ 4006.035263] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4006.042646] EXT4-fs (loop0): unmounting filesystem. [ 4006.224961] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4006.233005] EXT4-fs (loop0): unmounting filesystem. [ 4006.307727] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4006.315056] EXT4-fs (loop0): unmounting filesystem. 
[ 4007.300459] XFS (loop0): Mounting V5 Filesystem [ 4007.371591] XFS (loop0): Ending clean mount [ 4007.394415] XFS (loop0): Unmounting Filesystem [ 4007.645015] XFS (loop0): Mounting V5 Filesystem [ 4007.696293] XFS (loop0): Ending clean mount [ 4007.709976] XFS (loop0): Unmounting Filesystem [ 4007.969349] XFS (loop0): Mounting V5 Filesystem [ 4008.019930] XFS (loop0): Ending clean mount [ 4008.032415] XFS (loop0): Unmounting Filesystem [ 4008.300737] XFS (loop0): Mounting V5 Filesystem [ 4008.353809] XFS (loop0): Ending clean mount [ 4008.366519] XFS (loop0): Unmounting Filesystem [ 4008.637908] XFS (loop0): Mounting V5 Filesystem [ 4008.689256] XFS (loop0): Ending clean mount [ 4008.702032] XFS (loop0): Unmounting Filesystem [ 4008.968007] XFS (loop0): Mounting V5 Filesystem [ 4009.019572] XFS (loop0): Ending clean mount [ 4009.032177] XFS (loop0): Unmounting Filesystem [ 4009.418592] LTP: starting move_mount02 [ 4009.472536] loop0: detected capacity change from 0 to 614400 [ 4009.485916] /dev/zero: Can't open blockdev [ 4009.560286] /dev/zero: Can't open blockdev [ 4009.636314] /dev/zero: Can't open blockdev [ 4009.709792] /dev/zero: Can't open blockdev [ 4009.843798] /dev/zero: Can't open blockdev [ 4010.754106] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4010.794322] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4010.809410] EXT4-fs (loop0): unmounting filesystem. [ 4010.848337] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4010.888424] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4010.904400] EXT4-fs (loop0): unmounting filesystem. [ 4010.937115] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4010.975087] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4010.980142] EXT4-fs (loop0): unmounting filesystem. [ 4011.008633] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4011.083155] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4011.088321] EXT4-fs (loop0): unmounting filesystem. [ 4011.150520] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4011.190816] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4011.195793] EXT4-fs (loop0): unmounting filesystem. [ 4013.434707] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4013.496014] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4013.512271] EXT4-fs (loop0): unmounting filesystem. [ 4013.546094] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4013.599202] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4013.615437] EXT4-fs (loop0): unmounting filesystem. [ 4013.654584] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4013.710583] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4013.727480] EXT4-fs (loop0): unmounting filesystem. [ 4013.760916] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4013.822542] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4013.839126] EXT4-fs (loop0): unmounting filesystem. [ 4013.880916] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4013.942541] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4013.959113] EXT4-fs (loop0): unmounting filesystem. 
[ 4014.505853] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4014.523140] EXT4-fs (loop0): unmounting filesystem. [ 4014.596628] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4014.610531] EXT4-fs (loop0): unmounting filesystem. [ 4014.690902] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4014.695581] EXT4-fs (loop0): unmounting filesystem. [ 4015.242799] XFS (loop0): Mounting V5 Filesystem [ 4015.310966] XFS (loop0): Ending clean mount [ 4015.320328] XFS (loop0): Unmounting Filesystem [ 4015.595805] XFS (loop0): Mounting V5 Filesystem [ 4015.646475] XFS (loop0): Ending clean mount [ 4015.655673] XFS (loop0): Unmounting Filesystem [ 4015.933304] XFS (loop0): Mounting V5 Filesystem [ 4015.984855] XFS (loop0): Ending clean mount [ 4015.994737] XFS (loop0): Unmounting Filesystem [ 4016.272875] XFS (loop0): Mounting V5 Filesystem [ 4016.323846] XFS (loop0): Ending clean mount [ 4016.334198] XFS (loop0): Unmounting Filesystem [ 4016.625824] XFS (loop0): Mounting V5 Filesystem [ 4016.678093] XFS (loop0): Ending clean mount [ 4016.687320] XFS (loop0): Unmounting Filesystem [ 4017.041762] LTP: starting move_pages01 [ 4017.399140] LTP: starting move_pages02 [ 4017.677696] LTP: starting move_pages03 [ 4017.953749] LTP: starting move_pages04 [ 4018.214020] LTP: starting move_pages05 [ 4018.481573] LTP: starting move_pages06 [ 4018.771928] LTP: starting move_pages07 [ 4019.060891] LTP: starting move_pages10 [ 4019.321402] LTP: starting move_pages11 [ 4019.591699] LTP: starting move_pages12 [ 4020.878979] Soft offlining pfn 0x17ba00 at process virtual address 0x7f3f8e400000 [ 4020.895025] Soft offlining pfn 0x16b600 at process virtual address 0x7f3f8e600000 [ 4020.903252] Soft offlining pfn 0x15d000 at process virtual address 0x7f3f8e400000 [ 4020.907911] Soft offlining pfn 0x17b400 at process virtual address 0x7f3f8e600000 [ 4020.918442] Soft offlining pfn 0x4d1e00 at process virtual address 0x7f3f8e400000 [ 4020.934191] Soft offlining pfn 0x4e1400 at process virtual address 0x7f3f8e600000 [ 4020.943001] Soft offlining pfn 0x13ce00 at process virtual address 0x7f3f8e400000 [ 4020.944447] soft offline: 0x13ce00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4020.950900] Soft offlining pfn 0x4e9200 at process virtual address 0x7f3f8e400000 [ 4020.955812] Soft offlining pfn 0x4e1600 at process virtual address 0x7f3f8e600000 [ 4020.961527] Soft offlining pfn 0x13ce00 at process virtual address 0x7f3f8e400000 [ 4020.965595] Soft offlining pfn 0x500400 at process virtual address 0x7f3f8e600000 [ 4020.971061] Soft offlining pfn 0x4d2000 at process virtual address 0x7f3f8e400000 [ 4020.975462] Soft offlining pfn 0x500600 at process virtual address 0x7f3f8e600000 [ 4020.980459] Soft offlining pfn 0x12fc00 at process virtual address 0x7f3f8e400000 [ 4020.984208] Soft offlining pfn 0x4d2200 at process virtual address 0x7f3f8e600000 [ 4020.990888] Soft offlining pfn 0x4ebe00 at process virtual address 0x7f3f8e400000 [ 4020.992350] soft offline: 0x4ebe00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4020.996562] Soft offlining pfn 0x12fe00 at process virtual address 0x7f3f8e400000 [ 4021.000955] Soft offlining pfn 0x1d0400 at process virtual address 0x7f3f8e600000 [ 4021.008060] Soft offlining pfn 0x1f0000 at process 
virtual address 0x7f3f8e400000 [ 4021.011574] Soft offlining pfn 0x4ebc00 at process virtual address 0x7f3f8e600000 [ 4021.022485] Soft offlining pfn 0x1f0200 at process virtual address 0x7f3f8e400000 [ 4021.026571] Soft offlining pfn 0x4ebe00 at process virtual address 0x7f3f8e600000 [ 4021.032229] Soft offlining pfn 0x1e0400 at process virtual address 0x7f3f8e400000 [ 4021.037068] Soft offlining pfn 0x1d0600 at process virtual address 0x7f3f8e600000 [ 4021.045254] Soft offlining pfn 0x1ce400 at process virtual address 0x7f3f8e400000 [ 4021.049430] Soft offlining pfn 0x501000 at process virtual address 0x7f3f8e600000 [ 4021.055629] Soft offlining pfn 0x1e0600 at process virtual address 0x7f3f8e400000 [ 4021.059969] Soft offlining pfn 0x1ce600 at process virtual address 0x7f3f8e600000 [ 4021.066139] Soft offlining pfn 0x17ee00 at process virtual address 0x7f3f8e400000 [ 4021.070680] Soft offlining pfn 0x501200 at process virtual address 0x7f3f8e600000 [ 4021.077081] Soft offlining pfn 0x16e800 at process virtual address 0x7f3f8e400000 [ 4021.081421] Soft offlining pfn 0x4db400 at process virtual address 0x7f3f8e600000 [ 4021.083947] soft offline: 0x4db400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4021.090356] Soft offlining pfn 0x16ea00 at process virtual address 0x7f3f8e400000 [ 4021.091971] soft offline: 0x16ea00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4021.097018] Soft offlining pfn 0x17ec00 at process virtual address 0x7f3f8e400000 [ 4021.100746] Soft offlining pfn 0x4db400 at process virtual address 0x7f3f8e600000 [ 4021.106683] Soft offlining pfn 0x16ea00 at process virtual address 0x7f3f8e400000 [ 4021.117722] Soft offlining pfn 0x4db600 at process virtual address 0x7f3f8e600000 [ 4021.124399] Soft offlining pfn 0x4c3e00 at process virtual address 0x7f3f8e400000 [ 4021.129080] Soft offlining pfn 0x17e800 at process virtual address 0x7f3f8e600000 [ 4021.135857] Soft offlining pfn 0x4c3c00 at process virtual address 0x7f3f8e400000 [ 4021.150812] Soft offlining pfn 0x4d2400 at process virtual address 0x7f3f8e600000 [ 4021.157722] soft offline: 0x4d2400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4021.163588] Soft offlining pfn 0x14f800 at process virtual address 0x7f3f8e400000 [ 4021.169357] Soft offlining pfn 0x17ea00 at process virtual address 0x7f3f8e600000 [ 4021.176365] Soft offlining pfn 0x1ce000 at process virtual address 0x7f3f8e400000 [ 4021.181205] Soft offlining pfn 0x14fa00 at process virtual address 0x7f3f8e600000 [ 4021.188071] Soft offlining pfn 0x1e0000 at process virtual address 0x7f3f8e400000 [ 4021.189908] soft offline: 0x1e0000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4021.197046] Soft offlining pfn 0x1ce200 at process virtual address 0x7f3f8e400000 [ 4021.201449] Soft offlining pfn 0x4d2400 at process virtual address 0x7f3f8e600000 [ 4021.208259] Soft offlining pfn 0x1e0000 at process virtual address 0x7f3f8e400000 [ 4021.213168] Soft offlining pfn 0x4d2600 at process virtual address 0x7f3f8e600000 [ 4021.220589] Soft offlining pfn 0x4f8e00 at process vlastcpupid=0x1fffff) [ 4021.624881] Soft offlining pfn 0x1e0200 at process virtual address 0x7f3f8e400000 [ 
4021.631245] Soft offlining pfn 0x13fc00 at process virtual address 0x7f3f8e600000 [ 4021.635781] Soft offlining pfn 0x1d0000 at process virtual address 0x7f3f8e400000 [ 4021.639012] Soft offlining pfn 0x13fe00 at process virtual address 0x7f3f8e600000 [ 4021.643398] Soft offlining pfn 0x12f800 at process virtual address 0x7f3f8e400000 [ 4021.646791] Soft offlining pfn 0x4b2c00 at process virtual address 0x7f3f8e600000 [ 4021.651748] Soft offlining pfn 0x12fa00 at process virtual address 0x7f3f8e400000 [ 4021.654997] Soft offlining pfn 0x17e600 at process virtual address 0x7f3f8e600000 [ 4021.660906] Soft offlining pfn 0x17e400 at process virtual address 0x7f3f8e400000 [ 4021.664676] Soft offlining pfn 0x4b2e00 at process virtual address 0x7f3f8e600000 [ 4021.669497] Soft offlining pfn 0x1cdc00 at process virtual address 0x7f3f8e400000 [ 4021.672796] Soft offlining pfn 0x1d0200 at process virtual address 0x7f3f8e600000 [ 4021.677546] Soft offlining pfn 0x14f400 at process virtual address 0x7f3f8e4000oft offlining pfn 0x14f600 at process virtual address 0x7f3f8e600000 [ 4022.182847] Soft offlining pfn 0x16e400 at process virtual address 0x7f3f8e400000 [ 4022.190399] Soft offlining pfn 0x16e400 at process virtual address 0x7f3f8e400000 [ 4022.193630] Soft offlining pfn 0x16e600 at process virtual address 0x7f3f8e600000 [ 4022.198050] Soft offlining pfn 0x11fe00 at process virtual address 0x7f3f8e400000 [ 4022.201470] Soft offlining pfn 0x11fc00 at process virtual address 0x7f3f8e600000 [ 4022.205711] Soft offlining pfn 0x13fa00 at process virtual address 0x7f3f8e400000 [ 4022.207449] soft offline: 0x13fa00: hugepage isolation failed, page count 2, type 0x17ffffc003000f(locked|referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1 [ 4022.211349] Soft offlining pfn 0x13f800 at process virtual address 0x7f3f8e400000 [ 4022.217624] Soft offlining pfn 0x4d3c00 at process virtual address 0x7f3f8e600000 [ 4022.219657] soft offline: 0x4d3c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4022.225439] Soft offlining pfn 0x15fc00 at process virtual address 0x7f3f8e400000 [ 4022.230379] Soft of[ 4022.635980] Soft offlining pfn 0x4f8c00 at process virtual address 0x7f3f8e600000 [ 4022.644806] Soft offlining pfn 0x13fa00 at process virtual address 0x7f3f8e400000 [ 4022.648473] Soft offlining pfn 0x4d1400 at process virtual address 0x7f3f8e600000 [ 4022.664476] Soft offlining pfn 0x15fe00 at process virtual address 0x7f3f8e400000 [ 4022.669678] Soft offlining pfn 0x4d1600 at process virtual address 0x7f3f8e600000 [ 4022.676556] Soft offlining pfn 0x1cda00 at process virtual address 0x7f3f8e400000 [ 4022.680918] Soft offlining pfn 0x502c00 at process virtual address 0x7f3f8e600000 [ 4022.683439] soft offline: 0x502c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4022.690974] Soft offlining pfn 0x12f400 at process virtual address 0x7f3f8e400000 [ 4022.693210] soft offline: 0x12f400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4022.699153] Soft offlining pfn 0x1cd800 at process virtual address 0x7f3f8e400000 [ 4022.703626] Soft offlining pfn 0x502e00 at process virtual address 0x7f3f8e|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4023.207674] Soft offlining pfn 0x4af800 at 
process virtual address 0x7f3f8e400000 [ 4023.211344] Soft offlining pfn 0x502c00 at process virtual address 0x7f3f8e600000 [ 4023.217594] Soft offlining pfn 0x12f600 at process virtual address 0x7f3f8e400000 [ 4023.221655] Soft offlining pfn 0x17e200 at process virtual address 0x7f3f8e600000 [ 4023.226358] Soft offlining pfn 0x17e000 at process virtual address 0x7f3f8e400000 [ 4023.229710] Soft offlining pfn 0x12f400 at process virtual address 0x7f3f8e600000 [ 4023.230942] soft offline: 0x12f400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4023.236852] Soft offlining pfn 0x14f000 at process virtual address 0x7f3f8e400000 [ 4023.240631] Soft offlining pfn 0x4ec200 at process virtual address 0x7f3f8e600000 [ 4023.245116] Soft offlining pfn 0x12f400 at process virtual address 0x7f3f8e400000 [ 4023.246485] soft offline: 0x12f400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4023.251870] Soft off[ 4023.634907] Soft offlining pfn 0x16e000 at process virtual address 0x7f3f8e400000 [ 4023.656068] Soft offlining pfn 0x16e200 at process virtual address 0x7f3f8e600000 [ 4023.661598] Soft offlining pfn 0x12f400 at process virtual address 0x7f3f8e400000 [ 4023.664793] Soft offlining pfn 0x4e4600 at process virtual address 0x7f3f8e600000 [ 4023.669357] Soft offlining pfn 0x11f800 at process virtual address 0x7f3f8e400000 [ 4023.673967] Soft offlining pfn 0x13f400 at process virtual address 0x7f3f8e600000 [ 4023.678587] Soft offlining pfn 0x4e4400 at process virtual address 0x7f3f8e400000 [ 4023.682353] Soft offlining pfn 0x14f200 at process virtual address 0x7f3f8e600000 [ 4023.684414] soft offline: 0x14f200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4023.688810] Soft offlining pfn 0x4ac200 at process virtual address 0x7f3f8e400000 [ 4023.690083] soft offline: 0x4ac200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff)[ 4024.162940] Soft offlining pfn 0x4d9800 at process virtual address 0x7f3f8e400000 [ 4024.192535] soft offline: 0x4d9800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4024.197810] Soft offlining pfn 0x14f200 at process virtual address 0x7f3f8e400000 [ 4024.204746] Soft offlining pfn 0x1cd400 at process virtual address 0x7f3f8e600000 [ 4024.211417] Soft offlining pfn 0x15f800 at process virtual address 0x7f3f8e400000 [ 4024.216130] Soft offlining pfn 0x1cd600 at process virtual address 0x7f3f8e600000 [ 4024.223143] Soft offlining pfn 0x12f000 at process virtual address 0x7f3f8e400000 [ 4024.225510] soft offline: 0x12f000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4024.233949] Soft offlining pfn 0x4d9a00 at process virtual address 0x7f3f8e400000 [ 4024.238064] Soft offlining pfn 0x4ac000 at process virtual address 0x7f3f8e600000 [ 4024.246697] Soft offlining pfn 0x15fa00 at process virtual address 0x7f3f8e400000 [ 4024.251417] Soft offlining pfn 0x17dc00 at process virtual address 0x7f3f8e600000 [ 4024.259221] Soft offlinoft offlining pfn 0x12f200 at process virtual address 0x7f3f8e400000 [ 4024.662772] 
Soft offlining pfn 0x12f000 at process virtual address 0x7f3f8e600000 [ 4024.667421] Soft offlining pfn 0x14ee00 at process virtual address 0x7f3f8e400000 [ 4024.668761] soft offline: 0x14ee00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4024.674770] Soft offlining pfn 0x14ec00 at process virtual address 0x7f3f8e400000 [ 4024.678559] Soft offlining pfn 0x4ac200 at process virtual address 0x7f3f8e600000 [ 4024.683234] Soft offlining pfn 0x14ee00 at process virtual address 0x7f3f8e400000 [ 4024.686649] Soft offlining pfn 0x4ca000 at process virtual address 0x7f3f8e600000 [ 4024.691287] Soft offlining pfn 0x16dc00 at process virtual address 0x7f3f8e400000 [ 4024.694654] Soft offlining pfn 0x4ca200 at process virtual address 0x7f3f8e600000 [ 4024.700076] Soft offlining pfn 0x4d3600 at process virtual address 0x7f3f8e400000 [ 4024.703695] Soft offlining pfn 0x4d3400 at process virtual address 0x7f3f8e600000 [ 4024.708707] Soft offlining pfn 0x4b1e00 at process virtual address 0x7f3f8e400000 [ 4024.709962] soft offline: 0x4b1e00: hugepage isolation failed, page count 2,oft offline: 0x16de00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4025.215027] Soft offlining pfn 0x4b1e00 at process virtual address 0x7f3f8e400000 [ 4025.219044] Soft offlining pfn 0x4b1c00 at process virtual address 0x7f3f8e600000 [ 4025.225493] Soft offlining pfn 0x11f400 at process virtual address 0x7f3f8e400000 [ 4025.229672] Soft offlining pfn 0x13f000 at process virtual address 0x7f3f8e600000 [ 4025.235835] Soft offlining pfn 0x11f600 at process virtual address 0x7f3f8e400000 [ 4025.238793] Soft offlining pfn 0x502200 at process virtual address 0x7f3f8e600000 [ 4025.243241] Soft offlining pfn 0x16de00 at process virtual address 0x7f3f8e400000 [ 4025.246620] Soft offlining pfn 0x502000 at process virtual address 0x7f3f8e600000 [ 4025.251436] Soft offlining pfn 0x13f200 at process virtual address 0x7f3f8e400000 [ 4025.255859] Soft offlining pfn 0x1cd200 at process virtual address 0x7f3f8e600000 [ 4025.260690] Soft offlining pfn 0x4c3800 at process virtual address 0x7f3f8e400000 [ 4025.264248] Soft offlining pfn 0x1cd000 at process virtual adn 0x1cd000 at process virtual address 0x7f3f8e400000 [ 4025.768232] Soft offlining pfn 0x15f400 at process virtual address 0x7f3f8e600000 [ 4025.773073] Soft offlining pfn 0x12ec00 at process virtual address 0x7f3f8e400000 [ 4025.776066] Soft offlining pfn 0x4c3a00 at process virtual address 0x7f3f8e600000 [ 4025.782422] Soft offlining pfn 0x12ee00 at process virtual address 0x7f3f8e400000 [ 4025.783743] soft offline: 0x12ee00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4025.787963] Soft offlining pfn 0x15f600 at process virtual address 0x7f3f8e400000 [ 4025.790951] Soft offlining pfn 0x4ad400 at process virtual address 0x7f3f8e600000 [ 4025.795569] Soft offlining pfn 0x12ee00 at process virtual address 0x7f3f8e400000 [ 4025.798738] Soft offlining pfn 0x4ad600 at process virtual address 0x7f3f8e600000 [ 4025.803586] Soft offlining pfn 0x17d800 at process virtual address 0x7f3f8e400000 [ 4025.806816] Soft offlining pfn 0x4d2c00 at process virtual address 0x7f3f8e600000 [ 4025.808852] soft offline: 0x4d2c00: hugepage isolation failed, page count 2, type 
0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1f, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4026.213646] Soft offlining pfn 0x17da00 at process virtual address 0x7f3f8e400000 [ 4026.216785] Soft offlining pfn 0x4d2c00 at process virtual address 0x7f3f8e600000 [ 4026.225034] Soft offlining pfn 0x4d2e00 at process virtual address 0x7f3f8e400000 [ 4026.228516] Soft offlining pfn 0x14e800 at process virtual address 0x7f3f8e600000 [ 4026.229767] soft offline: 0x14e800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4026.235763] Soft offlining pfn 0x4eba00 at process virtual address 0x7f3f8e400000 [ 4026.237425] soft offline: 0x4eba00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4026.241926] Soft offlining pfn 0x14e800 at process virtual address 0x7f3f8e400000 [ 4026.245197] Soft offlining pfn 0x14ea00 at process virtual address 0x7f3f8e600000 [ 4026.250202] Soft offlining pfn 0x16da00 at process virtual address 0x7f3f8e400000 [ 4026.253231] Soft offlining pfn 0x4eb800 at process virtual address 0x7f3f8e600000 [ 4026.258008] Soft offlining pfn 0x16d800 at process virtual address 0x7f3f8e40oft offlining pfn 0x1cce00 at process virtual address 0x7f3f8e600000 [ 4026.763966] Soft offlining pfn 0x13ec00 at process virtual address 0x7f3f8e400000 [ 4026.767310] Soft offlining pfn 0x4eba00 at process virtual address 0x7f3f8e600000 [ 4026.772100] Soft offlining pfn 0x11f200 at process virtual address 0x7f3f8e400000 [ 4026.786488] Soft offlining pfn 0x4cb400 at process virtual address 0x7f3f8e600000 [ 4026.791590] Soft offlining pfn 0x15f000 at process virtual address 0x7f3f8e400000 [ 4026.795022] Soft offlining pfn 0x4cb600 at process virtual address 0x7f3f8e600000 [ 4026.797232] soft offline: 0x4cb600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4026.802432] Soft offlining pfn 0x15f200 at process virtual address 0x7f3f8e400000 [ 4026.810135] Soft offlining pfn 0x4b3800 at process virtual address 0x7f3f8e600000 [ 4026.815391] Soft offlining pfn 0x12e800 at process virtual address 0x7f3f8e400000 [ 4026.818925] Soft offlining pfn 0x13ee00 at process virtual address 0x7f3f8e600000 [ 4026.824371] Soft offlining pfn 0x17d400 at process virtual address 0x7f3f8e400000 [ 4026.827618] Soft offlining pfn 0x4b3a00 at 2ea00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4027.233362] Soft offlining pfn 0x4c3400 at process virtual address 0x7f3f8e400000 [ 4027.236876] Soft offlining pfn 0x4cb600 at process virtual address 0x7f3f8e600000 [ 4027.241401] Soft offlining pfn 0x12ea00 at process virtual address 0x7f3f8e400000 [ 4027.252465] Soft offlining pfn 0x4c3600 at process virtual address 0x7f3f8e600000 [ 4027.257430] Soft offlining pfn 0x17d600 at process virtual address 0x7f3f8e400000 [ 4027.259084] soft offline: 0x17d600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4027.264279] Soft offlining pfn 0x4d3200 at process virtual address 0x7f3f8e400000 [ 4027.273436] Soft offlining pfn 0x14e400 at process virtual address 
0x7f3f8e600000 [ 4027.280516] soft offline: 0x14e400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4027.285022] Soft offlining pfn 0x4d3000 at process virtual address 0x7f3f8e400000 [ 4027.286529] soft offline: 0x4d3000: hugepage isolation failed, page count 2, tn 0x14e600 at process virtual address 0x7f3f8e600000 [ 4027.794695] Soft offlining pfn 0x16d400 at process virtual address 0x7f3f8e400000 [ 4027.799034] Soft offlining pfn 0x4ea400 at process virtual address 0x7f3f8e600000 [ 4027.806321] Soft offlining pfn 0x17d600 at process virtual address 0x7f3f8e400000 [ 4027.811028] Soft offlining pfn 0x4d3000 at process virtual address 0x7f3f8e600000 [ 4027.817863] Soft offlining pfn 0x16d600 at process virtual address 0x7f3f8e400000 [ 4027.822684] Soft offlining pfn 0x4ea600 at process virtual address 0x7f3f8e600000 [ 4027.829673] Soft offlining pfn 0x1cc800 at process virtual address 0x7f3f8e400000 [ 4027.831312] soft offline: 0x1cc800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4027.834514] Soft offlining pfn 0x1cc800 at process virtual address 0x7f3f8e400000 [ 4027.839461] Soft offlining pfn 0x4e2800 at process virtual address 0x7f3f8e600000 [ 4027.846273] Soft offlining pfn 0x1cca00 at process virtual address 0x7f3f8e400000 [ 4027.850856] Soft offlining pfn 0x4e2a00 at process virtual address 0x7f3f8e600000 [ 4027.853958] soft offline: 0x4e2a00: hugepage isolation faildress 0x7f3f8e400000 [ 4028.255474] soft offline: 0x11ee00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4028.261485] Soft offlining pfn 0x11ec00 at process virtual address 0x7f3f8e400000 [ 4028.264987] Soft offlining pfn 0x4e9800 at process virtual address 0x7f3f8e600000 [ 4028.269517] Soft offlining pfn 0x11ee00 at process virtual address 0x7f3f8e400000 [ 4028.273094] Soft offlining pfn 0x4e2a00 at process virtual address 0x7f3f8e600000 [ 4028.277935] Soft offlining pfn 0x13e800 at process virtual address 0x7f3f8e400000 [ 4028.281232] Soft offlining pfn 0x4e9a00 at process virtual address 0x7f3f8e600000 [ 4028.288868] Soft offlining pfn 0x4b2a00 at process virtual address 0x7f3f8e400000 [ 4028.293033] Soft offlining pfn 0x4b2800 at process virtual address 0x7f3f8e600000 [ 4028.298850] Soft offlining pfn 0x13ea00 at process virtual address 0x7f3f8e400000 [ 4028.303317] Soft offlining pfn 0x4da400 at process virtual address 0x7f3f8e600000 [ 4028.310657] Soft offlining pfn 0x4ea800 at process virtual address 0x7f3f8e400000 [ 4028.315633] Soft offlining pfn 0x4da600 at process virtual address 0x7f3f8e600000 [ 4028.344973dress 0x7f3f8e400000 [ 4028.817765] soft offline: 0x15ec00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4028.823078] Soft offlining pfn 0x502800 at process virtual address 0x7f3f8e400000 [ 4028.826544] Soft offlining pfn 0x4eaa00 at process virtual address 0x7f3f8e600000 [ 4028.830644] Soft offlining pfn 0x15ec00 at process virtual address 0x7f3f8e400000 [ 4028.833845] Soft offlining pfn 0x4da600 at process virtual address 0x7f3f8e600000 [ 4028.838701] Soft offlining pfn 0x15ee00 at process virtual address 0x7f3f8e400000 [ 4028.844481] Soft offlining pfn 0x12e400 at process virtual address 0x7f3f8e400000 [ 4028.845736] 
soft offline: 0x12e400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4028.850966] Soft offlining pfn 0x4fa000 at process virtual address 0x7f3f8e400000 [ 4028.854370] Soft offlining pfn 0x502a00 at process virtual address 0x7f3f8e600000 [ 4028.858494] Soft offlining pfn 0x12e400 at process virtual address 0x7f3f8e400000 [ 4028.861619] Soft offlining pfn 0x4fa200 at process virtual address 0x7f3f8e600000 [ 4028.866277] Soft offlining pfn 0x15ee00 at process virtual address 0x7f3f8e400000 [ 4028.869712] Soft offlinhead|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4029.273392] Soft offlining pfn 0x14e200 at process virtual address 0x7f3f8e400000 [ 4029.282674] Soft offlining pfn 0x4e9e00 at process virtual address 0x7f3f8e600000 [ 4029.288953] Soft offlining pfn 0x14e000 at process virtual address 0x7f3f8e400000 [ 4029.293543] Soft offlining pfn 0x17d200 at process virtual address 0x7f3f8e600000 [ 4029.299191] Soft offlining pfn 0x17d000 at process virtual address 0x7f3f8e400000 [ 4029.303549] Soft offlining pfn 0x4e9c00 at process virtual address 0x7f3f8e600000 [ 4029.309540] Soft offlining pfn 0x1cc400 at process virtual address 0x7f3f8e400000 [ 4029.312984] Soft offlining pfn 0x4f0c00 at process virtual address 0x7f3f8e600000 [ 4029.315044] soft offline: 0x4f0c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4029.319702] Soft offlining pfn 0x1cc600 at process virtual address 0x7f3f8e400000 [ 4029.323378] Soft offlining pfn 0x12e600 at process virtual address 0x7f3f8e600000 [ 4029.329415] Soft offlining pfn 0x16d200 at process virtual address 0x7foft offlining pfn 0x16d000 at process virtual address 0x7f3f8e400000 [ 4029.832968] Soft offlining pfn 0x4f0e00 at process virtual address 0x7f3f8e600000 [ 4029.834406] soft offline: 0x4f0e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4029.838219] Soft offlining pfn 0x11e800 at process virtual address 0x7f3f8e400000 [ 4029.841501] Soft offlining pfn 0x16d200 at process virtual address 0x7f3f8e600000 [ 4029.845887] Soft offlining pfn 0x13e400 at process virtual address 0x7f3f8e400000 [ 4029.849180] Soft offlining pfn 0x11ea00 at process virtual address 0x7f3f8e600000 [ 4029.853581] Soft offlining pfn 0x220c00 at process virtual address 0x7f3f8e400000 [ 4029.856853] Soft offlining pfn 0x4f0c00 at process virtual address 0x7f3f8e600000 [ 4029.861763] Soft offlining pfn 0x220e00 at process virtual address 0x7f3f8e400000 [ 4029.865055] Soft offlining pfn 0x201200 at process virtual address 0x7f3f8e600000 [ 4029.869796] Soft offlining pfn 0x4f0e00 at process virtual address 0x7f3f8e400000 [ 4029.873329] Soft offlining pfn 0x13e600 at process virtual address 0x7f3f8e600000 [ 4029.874575] soft offline: 0x13e600: hugepage isolation failed, page count 2,0000 [ 4030.278122] Soft offlining pfn 0x201000 at process virtual address 0x7f3f8e600000 [ 4030.282973] Soft offlining pfn 0x1cfa00 at process virtual address 0x7f3f8e400000 [ 4030.286340] Soft offlining pfn 0x4abc00 at process virtual address 0x7f3f8e600000 [ 4030.295539] soft offline: 0x4abc00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4030.301901] Soft offlining pfn 0x16fc00 at process virtual address 
0x7f3f8e400000 [ 4030.305145] Soft offlining pfn 0x1cf800 at process virtual address 0x7f3f8e600000 [ 4030.309611] Soft offlining pfn 0x1d1800 at process virtual address 0x7f3f8e400000 [ 4030.312868] Soft offlining pfn 0x16fe00 at process virtual address 0x7f3f8e600000 [ 4030.320132] Soft offlining pfn 0x211000 at process virtual address 0x7f3f8e400000 [ 4030.329566] Soft offlining pfn 0x4abe00 at process virtual address 0x7f3f8e600000 [ 4030.334484] Soft offlining pfn 0x211200 at process virtual address 0x7f3f8e400000 [ 4030.338533] Soft offlining pfn 0x1e1800 at process virtual address 0x7f3f8e600000 [ 4030.343717] Soft offlining pfn 0x4abc00 at process virtual address 0x7f3f8e400000 [ 4030.347124] Soft offlining [ 4030.850590] Soft offlining pfn 0x501a00 at process virtual address 0x7f3f8e400000 [ 4030.857551] Soft offlining pfn 0x1d1a00 at process virtual address 0x7f3f8e600000 [ 4030.863045] Soft offlining pfn 0x1e1a00 at process virtual address 0x7f3f8e400000 [ 4030.866282] Soft offlining pfn 0x4e4000 at process virtual address 0x7f3f8e600000 [ 4030.870913] Soft offlining pfn 0x1f1400 at process virtual address 0x7f3f8e400000 [ 4030.874824] Soft offlining pfn 0x501800 at process virtual address 0x7f3f8e600000 [ 4030.881214] Soft offlining pfn 0x1f1600 at process virtual address 0x7f3f8e400000 [ 4030.885523] Soft offlining pfn 0x4e4200 at process virtual address 0x7f3f8e600000 [ 4030.892061] Soft offlining pfn 0x230000 at process virtual address 0x7f3f8e400000 [ 4030.893489] soft offline: 0x230000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4030.900504] Soft offlining pfn 0x4bca00 at process virtual address 0x7f3f8e400000 [ 4030.904816] Soft offlining pfn 0x4bc800 at poft offline: 0x230000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4031.310752] Soft offlining pfn 0x4b9c00 at process virtual address 0x7f3f8e400000 [ 4031.314526] Soft offlining pfn 0x4b9e00 at process virtual address 0x7f3f8e600000 [ 4031.318882] Soft offlining pfn 0x230200 at process virtual address 0x7f3f8e400000 [ 4031.322115] Soft offlining pfn 0x4f2800 at process virtual address 0x7f3f8e600000 [ 4031.326665] Soft offlining pfn 0x230000 at process virtual address 0x7f3f8e400000 [ 4031.329982] Soft offlining pfn 0x4f2a00 at process virtual address 0x7f3f8e600000 [ 4031.331244] soft offline: 0x4f2a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4031.336511] Soft offlining pfn 0x221200 at process virtual address 0x7f3f8e400000 [ 4031.350575] Soft offlining pfn 0x221000 at process virtual address 0x7f3f8e600000 [ 4031.356797] Soft offlining pfn 0x1cfe00 at process virtual address 0x7f3f8e400000 [ 4031.367343] Soft offlining pfn 0x1cfe00 at process virtual address 0x7f3f8e400000 [ 4031.370508] Soft offlining pfn 0x1cfc00 at process virtual address oft offlining pfn 0x1cfc00 at process virtual address 0x7f3f8e400000 [ 4031.874403] Soft offlining pfn 0x201400 at process virtual address 0x7f3f8e600000 [ 4031.882692] Soft offlining pfn 0x240000 at process virtual address 0x7f3f8e400000 [ 4031.885758] Soft offlining pfn 0x4b4c00 at process virtual address 0x7f3f8e600000 [ 4031.894188] Soft offlining pfn 0x240200 at process virtual address 0x7f3f8e400000 [ 4031.895870] soft offline: 0x240200: hugepage isolation failed, page count 
2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4031.900417] Soft offlining pfn 0x201600 at process virtual address 0x7f3f8e400000 [ 4031.903184] Soft offlining pfn 0x4f2a00 at process virtual address 0x7f3f8e600000 [ 4031.907537] Soft offlining pfn 0x240200 at process virtual address 0x7f3f8e400000 [ 4031.912551] Soft offlining pfn 0x211400 at process virtual address 0x7f3f8e600000 [ 4031.919256] Soft offlining pfn 0x211600 at process virtual address 0x7f3f8e400000 [ 4031.921592] soft offline: 0x211600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4031.926683] Soft offlining pfn 0x1d1c00oft offlining pfn 0x4b4e00 at process virtual address 0x7f3f8e600000 [ 4032.332013] soft offline: 0x4b4e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4032.336001] Soft offlining pfn 0x1d1e00 at process virtual address 0x7f3f8e400000 [ 4032.338441] soft offline: 0x1d1e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4032.342297] Soft offlining pfn 0x211600 at process virtual address 0x7f3f8e400000 [ 4032.348163] Soft offlining pfn 0x4b4e00 at process virtual address 0x7f3f8e600000 [ 4032.349781] soft offline: 0x4b4e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4032.353648] Soft offlining pfn 0x1e1c00 at process virtual address 0x7f3f8e400000 [ 4032.356993] Soft offlining pfn 0x1d1e00 at process virtual address 0x7f3f8e600000 [ 4032.361505] Soft offlining pfn 0x1f1800 at process virtual address 0x7f3f8e400000 [ 4032.364702] Soft offlining pfn 0x1e1e00 at process virtual add[ 4032.839833] soft offline: 0x4cc800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4032.869171] Soft offlining pfn 0x250200 at process virtual address 0x7f3f8e400000 [ 4032.870933] soft offline: 0x250200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4032.874710] Soft offlining pfn 0x1f1a00 at process virtual address 0x7f3f8e400000 [ 4032.877528] Soft offlining pfn 0x4cc800 at process virtual address 0x7f3f8e600000 [ 4032.881899] Soft offlining pfn 0x250200 at process virtual address 0x7f3f8e400000 [ 4032.885011] Soft offlining pfn 0x4b4e00 at process virtual address 0x7f3f8e600000 [ 4032.889845] Soft offlining pfn 0x230400 at process virtual address 0x7f3f8e400000 [ 4032.893853] Soft offlining pfn 0x260000 at process virtual address 0x7f3f8e600000 [ 4032.898557] Soft offlining pfn 0x4cca00 at process virtual address 0x7f3f8e400000 [ 4032.901887] Soft offlining pfn 0x23[ 4033.396328] Soft offlining pfn 0x4f2e00 at process virtual address 0x7f3f8e400000 [ 4033.403733] soft offline: 0x4f2e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4033.407701] Soft offlining pfn 0x230600 at process virtual address 0x7f3f8e400000 [ 4033.411189] Soft offlining pfn 0x260200 at process virtual address 0x7f3f8e600000 [ 4033.418757] Soft offlining pfn 0x221600 at process virtual address 0x7f3f8e400000 [ 4033.421703] 
Soft offlining pfn 0x4c4400 at process virtual address 0x7f3f8e600000 [ 4033.426299] Soft offlining pfn 0x221400 at process virtual address 0x7f3f8e400000 [ 4033.429588] Soft offlining pfn 0x4c4600 at process virtual address 0x7f3f8e600000 [ 4033.434305] Soft offlining pfn 0x201800 at process virtual address 0x7f3f8e400000 [ 4033.437625] Soft offlining pfn 0x4f2c00 at process virtual address 0x7f3f8e600000 [ 4033.439575] soft offline: 0x4f2c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4033.443481] Soft offlining pfn 0x1d2000 at process virtual address 0x7f3f8e400000 [ 4033.446590] Soft offlining pfn 0x201a00 at processlastcpupid=0x1fffff) [ 4033.850291] Soft offlining pfn 0x1d2200 at process virtual address 0x7f3f8e400000 [ 4033.853332] Soft offlining pfn 0x4f2c00 at process virtual address 0x7f3f8e600000 [ 4033.855327] soft offline: 0x4f2c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4033.859163] Soft offlining pfn 0x240400 at process virtual address 0x7f3f8e400000 [ 4033.860426] soft offline: 0x240400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4033.864169] Soft offlining pfn 0x201a00 at process virtual address 0x7f3f8e400000 [ 4033.866898] Soft offlining pfn 0x4f2c00 at process virtual address 0x7f3f8e600000 [ 4033.871302] Soft offlining pfn 0x240400 at process virtual address 0x7f3f8e400000 [ 4033.874583] Soft offlining pfn 0x4f2e00 at process virtual address 0x7f3f8e600000 [ 4033.879204] Soft offlining pfn 0x240600 at process virtual address 0x7f3f8e400000 [ 4033.882623] Soft offlining pfn 0x4dbc00 at process virtual address 0x7f3f8e600000 [ 4033.913ocess virtual address 0x7f3f8e400000 [ 4034.385082] soft offline: 0x211a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4034.391603] Soft offlining pfn 0x211a00 at process virtual address 0x7f3f8e400000 [ 4034.396708] Soft offlining pfn 0x211800 at process virtual address 0x7f3f8e600000 [ 4034.405683] Soft offlining pfn 0x1e2000 at process virtual address 0x7f3f8e400000 [ 4034.411089] Soft offlining pfn 0x1e2200 at process virtual address 0x7f3f8e600000 [ 4034.418588] Soft offlining pfn 0x4dbc00 at process virtual address 0x7f3f8e400000 [ 4034.423268] Soft offlining pfn 0x1f1c00 at process virtual address 0x7f3f8e600000 [ 4034.429910] Soft offlining pfn 0x4dbe00 at process virtual address 0x7f3f8e400000 [ 4034.431797] soft offline: 0x4dbe00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4034.437729] Soft offlining pfn 0x250400 at process virtual address 0x7f3f8e400000 [ 4034.442296] Soft offlining pfn 0x1f1e00 at process virtual address 0x7f3f8e600000 [ 4034.449081] Soft offlining pfn 0x260400 at process virtual address 0x7f3f8e400000 [ 4034.453486] Soft offlining pfn 0x508000 at process virtual addresisolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4034.859372] Soft offlining pfn 0x4dbe00 at process virtual address 0x7f3f8e400000 [ 4034.863692] Soft offlining pfn 0x508200 at process virtual address 0x7f3f8e600000 [ 4034.869289] Soft offlining pfn 0x260600 at 
process virtual address 0x7f3f8e400000 [ 4034.872912] Soft offlining pfn 0x4b5800 at process virtual address 0x7f3f8e600000 [ 4034.878206] Soft offlining pfn 0x250600 at process virtual address 0x7f3f8e400000 [ 4034.881995] Soft offlining pfn 0x4b5a00 at process virtual address 0x7f3f8e600000 [ 4034.883265] soft offline: 0x4b5a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4034.888899] Soft offlining pfn 0x230a00 at process virtual address 0x7f3f8e400000 [ 4034.890287] soft offline: 0x230a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4034.894349] Soft offlining pfn 0x230800 at process virtual address 0x7f3f8e400000 [ 4034.9277k|node=1|zone=2|lastcpupid=0x1fffff) [ 4035.398649] Soft offlining pfn 0x221800 at process virtual address 0x7f3f8e400000 [ 4035.399884] soft offline: 0x221800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4035.403675] Soft offlining pfn 0x230a00 at process virtual address 0x7f3f8e400000 [ 4035.406573] Soft offlining pfn 0x4b5a00 at process virtual address 0x7f3f8e600000 [ 4035.411469] Soft offlining pfn 0x221800 at process virtual address 0x7f3f8e400000 [ 4035.414827] Soft offlining pfn 0x4d5000 at process virtual address 0x7f3f8e600000 [ 4035.419454] Soft offlining pfn 0x221a00 at process virtual address 0x7f3f8e400000 [ 4035.422810] Soft offlining pfn 0x4d5200 at process virtual address 0x7f3f8e600000 [ 4035.424051] soft offline: 0x4d5200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4035.429267] Soft offlining pfn 0x201e00 at process virtual address 0x7f3f8e400000 [ 4035.430678] soft offline: 0x201e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|oft offline: 0x4d5200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4035.935576] Soft offlining pfn 0x1d2400 at process virtual address 0x7f3f8e400000 [ 4035.936824] soft offline: 0x1d2400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4035.940606] Soft offlining pfn 0x201e00 at process virtual address 0x7f3f8e400000 [ 4035.943272] Soft offlining pfn 0x4d5200 at process virtual address 0x7f3f8e600000 [ 4035.947836] Soft offlining pfn 0x1d2400 at process virtual address 0x7f3f8e400000 [ 4035.951009] Soft offlining pfn 0x503400 at process virtual address 0x7f3f8e600000 [ 4035.955780] Soft offlining pfn 0x1d2600 at process virtual address 0x7f3f8e400000 [ 4035.958949] Soft offlining pfn 0x503600 at process virtual address 0x7f3f8e600000 [ 4035.960858] soft offline: 0x503600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4035.964709] Soft offlining pfn 0x240a00 at process virtual address 0x7f3f8e400000 [ 4035.967741] Soft offlining pfn 0x240800 at process virtual address 0x7f3f8e600000 [ 4035.972191] Soft offlihead|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4036.376102] Soft offlining pfn 0x260800 at process virtual address 0x7f3f8e400000 [ 4036.379458] Soft 
offlining pfn 0x503600 at process virtual address 0x7f3f8e600000 [ 4036.384738] Soft offlining pfn 0x260a00 at process virtual address 0x7f3f8e400000 [ 4036.386347] soft offline: 0x260a00: hugepage isolation failed, page count 2, type 0x17ffffc003000f(locked|referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4036.390001] Soft offlining pfn 0x4eca00 at process virtual address 0x7f3f8e400000 [ 4036.394826] Soft offlining pfn 0x4ec800 at process virtual address 0x7f3f8e600000 [ 4036.396262] soft offline: 0x4ec800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4036.400070] Soft offlining pfn 0x260a00 at process virtual address 0x7f3f8e400000 [ 4036.401311] soft offline: 0x260a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4036.404999] Soft offlining pfn 0x4ec800 at process virtual address 0x7f3f8e400000 [ 4036.408460] Soft offlining pfn 0x4e4c00 at process virtual address 0x7f3f8e600000 [ 4036.413054] Soft offlining pfn 0x260a00 at process virtual address 0x7f3f8e400000 [ 4036.416243] Soft offlining pfn 0x4e4e00 at process vir|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4036.920121] Soft offlining pfn 0x211c00 at process virtual address 0x7f3f8e400000 [ 4036.923050] Soft offlining pfn 0x4bd800 at process virtual address 0x7f3f8e600000 [ 4036.929252] Soft offlining pfn 0x1e2400 at process virtual address 0x7f3f8e400000 [ 4036.932637] Soft offlining pfn 0x211e00 at process virtual address 0x7f3f8e600000 [ 4036.937224] Soft offlining pfn 0x1f2000 at process virtual address 0x7f3f8e400000 [ 4036.940582] Soft offlining pfn 0x1e2600 at process virtual address 0x7f3f8e600000 [ 4036.944858] Soft offlining pfn 0x250800 at process virtual address 0x7f3f8e400000 [ 4036.948193] Soft offlining pfn 0x4fb800 at process virtual address 0x7f3f8e600000 [ 4036.953016] Soft offlining pfn 0x250a00 at process virtual address 0x7f3f8e400000 [ 4036.956346] Soft offlining pfn 0x230e00 at process virtual address 0x7f3f8e600000 [ 4036.961162] Soft offlining pfn 0x4fba00 at process virtual address 0x7f3f8e400000 [ 4036.964796] Soft offlining pfn 0x1f2200 at process virtual address 0x7f3f8e600000 [ 4036.966570] soft offline: 0x1f2200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4036.970522] Soft offlining pfn 0x4dd000 at[ 4037.464873] Soft offlining pfn 0x1f2200 at process virtual address 0x7f3f8e400000 [ 4037.474391] Soft offlining pfn 0x230c00 at process virtual address 0x7f3f8e600000 [ 4037.481880] Soft offlining pfn 0x221e00 at process virtual address 0x7f3f8e400000 [ 4037.484867] Soft offlining pfn 0x4dd200 at process virtual address 0x7f3f8e600000 [ 4037.491162] Soft offlining pfn 0x221c00 at process virtual address 0x7f3f8e400000 [ 4037.492706] soft offline: 0x221c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4037.498114] Soft offlining pfn 0x4dd000 at process virtual address 0x7f3f8e400000 [ 4037.502858] Soft offlining pfn 0x4bda00 at process virtual address 0x7f3f8e600000 [ 4037.507246] Soft offlining pfn 0x221c00 at process virtual address 0x7f3f8e400000 [ 4037.510619] Soft offlining pfn 0x4c5800 at process virtual address 0x7f3f8e600000 [ 4037.515312] Soft 
offlining pfn 0x1d2800 at process virtual address 0x7f3f8e400000 [ 4037.524724] Soft offlining pfn 0x1d2a00 at process virtual address 0x7f3f8e600000 [ 4037.529398] Soft offlining pfn 0x202200 at process virtual address 0x7f3f8e400000 [ 4037.532577] Soft offlining pfn 060c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4037.936746] Soft offlining pfn 0x202000 at process virtual address 0x7f3f8e400000 [ 4037.939745] Soft offlining pfn 0x4c5a00 at process virtual address 0x7f3f8e600000 [ 4037.944346] Soft offlining pfn 0x260c00 at process virtual address 0x7f3f8e400000 [ 4037.947665] Soft offlining pfn 0x4f3e00 at process virtual address 0x7f3f8e600000 [ 4037.952348] Soft offlining pfn 0x260e00 at process virtual address 0x7f3f8e400000 [ 4037.955726] Soft offlining pfn 0x508400 at process virtual address 0x7f3f8e600000 [ 4037.957006] soft offline: 0x508400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4037.962292] Soft offlining pfn 0x240e00 at process virtual address 0x7f3f8e400000 [ 4037.963614] soft offline: 0x240e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4037.968865] Soft offlining pfn 0x240c00 at process virtual address 0x7f3f8e400000 [ 4037.972048] Soft offlining pfn 0x508600 at process virtual address 0x7f3f8e600000 [ 4037.976064] Soft offlining pfn 0x240e00 at process virtual address 0x7f3f8e400000 [ 4037.977401] soft offline: 0x240e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodis3000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4038.482295] Soft offlining pfn 0x240e00 at process virtual address 0x7f3f8e400000 [ 4038.485731] Soft offlining pfn 0x212000 at process virtual address 0x7f3f8e600000 [ 4038.495168] Soft offlining pfn 0x1e2800 at process virtual address 0x7f3f8e400000 [ 4038.499053] Soft offlining pfn 0x4cda00 at process virtual address 0x7f3f8e600000 [ 4038.505512] Soft offlining pfn 0x212200 at process virtual address 0x7f3f8e400000 [ 4038.509859] Soft offlining pfn 0x508400 at process virtual address 0x7f3f8e600000 [ 4038.516589] Soft offlining pfn 0x1e2a00 at process virtual address 0x7f3f8e400000 [ 4038.521187] Soft offlining pfn 0x4cd800 at process virtual address 0x7f3f8e600000 [ 4038.523272] soft offline: 0x4cd800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4038.528393] Soft offlining pfn 0x1f2600 at process virtual address 0x7f3f8e400000 [ 4038.532614] Soft offlining pfn 0x1f2400 at process virtual address 0x7f3f8e600000 [ 4038.539935] Soft offlining pfn 0x250e00 at process virtual address 0x7f3f8e400000 [ 4038.544851] Soft offlining pfn 0xoft offlining pfn 0x4cd800 at process virtual address 0x7f3f8e600000 [ 4038.947584] soft offline: 0x4cd800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4038.953156] Soft offlining pfn 0x222000 at process virtual address 0x7f3f8e400000 [ 4038.954541] soft offline: 0x222000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4038.958490] Soft 
offlining pfn 0x231000 at process virtual address 0x7f3f8e400000 [ 4038.961561] Soft offlining pfn 0x4d5400 at process virtual address 0x7f3f8e600000 [ 4038.965533] Soft offlining pfn 0x222000 at process virtual address 0x7f3f8e400000 [ 4038.968843] Soft offlining pfn 0x222200 at process virtual address 0x7f3f8e600000 [ 4038.974901] Soft offlining pfn 0x261200 at process virtual address 0x7f3f8e400000 [ 4038.977885] Soft offlining pfn 0x4cd800 at process virtual address 0x7f3f8e600000 [ 4038.984201] Soft offlining pfn 0x1d2c00 at process virtual address 0x7f3f8e400000 [ 4038.987659] Soft offlining pfn 0x261000 at process virtual address 0x7f3f8e600000 [ 4038.992017] Soft offlining pfn 0x202400 at process virtual addressoft offlining pfn 0x4d5600 at process virtual address 0x7f3f8e600000 [ 4039.494706] soft offline: 0x4d5600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4039.499120] Soft offlining pfn 0x241200 at process virtual address 0x7f3f8e400000 [ 4039.503349] Soft offlining pfn 0x202600 at process virtual address 0x7f3f8e600000 [ 4039.507837] Soft offlining pfn 0x1e2e00 at process virtual address 0x7f3f8e400000 [ 4039.514721] Soft offlining pfn 0x1e2c00 at process virtual address 0x7f3f8e600000 [ 4039.521616] soft offline: 0x1e2c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4039.525810] Soft offlining pfn 0x4b5e00 at process virtual address 0x7f3f8e400000 [ 4039.529125] Soft offlining pfn 0x4b5c00 at process virtual address 0x7f3f8e600000 [ 4039.533865] Soft offlining pfn 0x503800 at process virtual address 0x7f3f8e400000 [ 4039.537723] Soft offlining pfn 0x4d5600 at process virtual address 0x7f3f8e600000 [ 4039.542502] Soft offlining pfn 0x4ecc00 at process virtual address 0x7f3f8e400000 [ 4039.546119] Soft offlining pfn 0x1e2c00n 0x4ece00 at process virtual address 0x7f3f8e600000 [ 4039.948753] soft offline: 0x4ece00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 40oft offlining pfn 0x1f2800 at process virtual address 0x7f3f8e400000 [ 4040.050854] soft offline: 0x1f2800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4040.054741] Soft offlining pfn 0x212600 at process virtual address 0x7f3f8e400000 [ 4040.057676] Soft offlining pfn 0x4ece00 at process virtual address 0x7f3f8e600000 [ 4040.062076] Soft offlining pfn 0x1f2800 at process virtual address 0x7f3f8e400000 [ 4040.065336] Soft offlining pfn 0x503a00 at process virtual address 0x7f3f8e600000 [ 4040.070023] Soft offlining pfn 0x1f2a00 at process virtual address 0x7f3f8e400000 [ 4040.077728] Soft offlining pfn 0x1f2a00 at process virtual address 0x7f3f8e400000 [ 4040.080815] Soft offlining pfn 0x231400 at process virtual address 0x7f3f8e600000 [ 4040.085129] Soft offlining pfn 0x251000 at process virtual address 0x7f3f8e400000 [ 4040.088560] Soft offlining pfn 0x231600 at process virtual address 0x7f3f8e600000 [ 4040.093739] Soft offlining pfn 0x4e5200 at process virtual aocess virtual address 0x7f3f8e400000 [ 4040.497638] Soft offlining pfn 0x222600 at process virtual address 0x7f3f8e600000 [ 4040.504855] Soft offlining pfn 0x261400 at process virtual address 0x7f3f8e400000 [ 4040.507844] Soft offlining pfn 0x4e5000 at process virtual 
address 0x7f3f8e600000 [ 4040.512229] Soft offlining pfn 0x251200 at process virtual address 0x7f3f8e400000 [ 4040.515358] Soft offlining pfn 0x4bde00 at process virtual address 0x7f3f8e600000 [ 4040.519924] Soft offlining pfn 0x261600 at process virtual address 0x7f3f8e400000 [ 4040.527749] Soft offlining pfn 0x261600 at process virtual address 0x7f3f8e400000 [ 4040.528976] soft offline: 0x261600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4040.532759] Soft offlining pfn 0x1d3000 at process virtual address 0x7f3f8e400000 [ 4040.536080] Soft offlining pfn 0x4fbc00 at process virtual address 0x7f3f8e600000 [ 4040.540470] Soft offlining pfn 0x261600 at process virtual address 0x7f3f8e400000 [ 4040.543756] Soft offlining pfn 0x4fbe00 at process virtual address 0x7f3f8e600000 [ 4040.548360] Soft offlining pfn 0x1d3200 at process virtual address 0x7f3f8e400000 [ 4040.551794] Soft offlining pfn 0x4dd400 at process virtual address 0x7f3f8e600000 [ 4040.553040] soft offline: 0x4dd400: hugepage isolation failed, page count 2, type 0x57ffffc003000isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4041.058708] Soft offlining pfn 0x202800 at process virtual address 0x7f3f8e400000 [ 4041.062969] Soft offlining pfn 0x4dd600 at process virtual address 0x7f3f8e600000 [ 4041.068892] Soft offlining pfn 0x202a00 at process virtual address 0x7f3f8e400000 [ 4041.073321] Soft offlining pfn 0x1e3000 at process virtual address 0x7f3f8e600000 [ 4041.080684] Soft offlining pfn 0x4dd400 at process virtual address 0x7f3f8e400000 [ 4041.085633] Soft offlining pfn 0x241400 at process virtual address 0x7f3f8e600000 [ 4041.091896] Soft offlining pfn 0x1e3200 at process virtual address 0x7f3f8e400000 [ 4041.096675] Soft offlining pfn 0x4c5c00 at process virtual address 0x7f3f8e600000 [ 4041.099106] soft offline: 0x4c5c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4041.104029] Soft offlining pfn 0x212800 at process virtual address 0x7f3f8e400000 [ 4041.108837] Soft offlining pfn 0x241600 at process virtual address 0x7f3f8e6fff) [ 4041.512906] Soft offlining pfn 0x212a00 at process virtual address 0x7f3f8e400000 [ 4041.521663] Soft offlining pfn 0x212a00 at process virtual address 0x7f3f8e400000 [ 4041.524988] Soft offlining pfn 0x4c5e00 at process virtual address 0x7f3f8e600000 [ 4041.529720] Soft offlining pfn 0x1f2c00 at process virtual address 0x7f3f8e400000 [ 4041.532901] Soft offlining pfn 0x231800 at process virtual address 0x7f3f8e600000 [ 4041.538071] Soft offlining pfn 0x4c5c00 at process virtual address 0x7f3f8e400000 [ 4041.541670] Soft offlining pfn 0x241600 at process virtual address 0x7f3f8e600000 [ 4041.543718] soft offline: 0x241600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4041.549220] Soft offlining pfn 0x508a00 at process virtual address 0x7f3f8e400000 [ 4041.550672] soft offline: 0x508a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4041.554558] Soft offlining pfn 0x241600 at process virtual address 0x7f3f8e400000 [ 4041.557921] Soft offlining pfn 0x1f2e00 at process virtual address 0x7f3f8e600000 [ 4041.564068] Softdress 
0x7f3f8e400000 [ 4042.067267] Soft offlining pfn 0x508800 at process virtual address 0x7f3f8e600000 [ 4042.068783] soft offline: 0x508800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4042.074097] Soft offlining pfn 0x251600 at process virtual address 0x7f3f8e400000 [ 4042.075429] soft offline: 0x251600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4042.079354] Soft offlining pfn 0x261a00 at process virtual address 0x7f3f8e400000 [ 4042.082308] Soft offlining pfn 0x508800 at process virtual address 0x7f3f8e600000 [ 4042.086861] Soft offlining pfn 0x251600 at process virtual address 0x7f3f8e400000 [ 4042.090063] Soft offlining pfn 0x508a00 at process virtual address 0x7f3f8e600000 [ 4042.094819] Soft offlining pfn 0x222800 at process virtual address 0x7f3f8e400000 [ 4042.099037] Soft offlining pfn 0x1d3400 at process virtual address 0x7f3f8e600000 [ 4042.103943] Soft offlining pfn 0x4f4000 at process virtual address 0x7f3f8e400000 [ 4042.107611] Soft offlining pfn 0x222a00 at process virtual address 0x7f3f8e600000 [ 4042.109605] soft offline: 0x2ocess virtual address 0x7f3f8e400000 [ 4042.511924] soft offline: 0x4cdc00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4042.515910] Soft offlining pfn 0x1d3600 at process virtual address 0x7f3f8e400000 [ 4042.519419] Soft offlining pfn 0x270000 at process virtual address 0x7f3f8e600000 [ 4042.525704] Soft offlining pfn 0x270200 at process virtual address 0x7f3f8e400000 [ 4042.529138] Soft offlining pfn 0x4f4200 at process virtual address 0x7f3f8e600000 [ 4042.535622] Soft offlining pfn 0x202c00 at process virtual address 0x7f3f8e400000 [ 4042.539599] Soft offlining pfn 0x222a00 at process virtual address 0x7f3f8e600000 [ 4042.540880] soft offline: 0x222a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4042.545031] Soft offlining pfn 0x202e00 at process virtual address 0x7f3f8e400000 [ 4042.548132] Soft offlining pfn 0x4cde00 at process virtual address 0x7f3f8e600000 [ 4042.553060] Soft offlining pfn 0x222a00 at process virtual address 0x7f3f8e400000 [ 4042.556333] Soft offlining pfn 0x4cdc00 at process virtual address 0x7f3f8e600000 [ 4042.561348] Soft offlining pfn 0x1e3400 at process virtual address 0x7f3f8e400000 [ 4042.565073] Soft offlining pfn 0x1e3600 at process virtual[ 4043.066506] soft offline: 0x4d5800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4043.070343] Soft offlining pfn 0x261c00 at process virtual address 0x7f3f8e400000 [ 4043.072073] soft offline: 0x261c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4043.075929] Soft offlining pfn 0x241800 at process virtual address 0x7f3f8e400000 [ 4043.078801] Soft offlining pfn 0x4d5800 at process virtual address 0x7f3f8e600000 [ 4043.083147] Soft offlining pfn 0x261c00 at process virtual address 0x7f3f8e400000 [ 4043.086144] Soft offlining pfn 0x4d5a00 at process virtual address 0x7f3f8e600000 [ 4043.090851] Soft offlining pfn 0x261e00 at process virtual address 0x7f3f8e400000 [ 4043.093904] Soft 
offlining pfn 0x4b6000 at process virtual address 0x7f3f8e600000 [ 4043.095735] soft offline: 0x4b6000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4043.099630] Soft offlining pfn 0x1f3200 at process virtual address 0x7f3f8e400000 [ 4043.127676] dress 0x7f3f8e400000 [ 4043.603717] Soft offlining pfn 0x4b6000 at process virtual address 0x7f3f8e600000 [ 4043.610878] Soft offlining pfn 0x1f3200 at process virtual address 0x7f3f8e400000 [ 4043.617764] Soft offlining pfn 0x4b6200 at process virtual address 0x7f3f8e600000 [ 4043.622431] Soft offlining pfn 0x212c00 at process virtual address 0x7f3f8e400000 [ 4043.624645] soft offline: 0x212c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4043.629951] Soft offlining pfn 0x503e00 at process virtual address 0x7f3f8e400000 [ 4043.633269] Soft offlining pfn 0x503c00 at process virtual address 0x7f3f8e600000 [ 4043.637555] Soft offlining pfn 0x212c00 at process virtual address 0x7f3f8e400000 [ 4043.640823] Soft offlining pfn 0x4ed000 at process virtual address 0x7f3f8e600000 [ 4043.645425] Soft offlining pfn 0x212e00 at process virtual address 0x7f3f8e400000 [ 4043.649885] Soft offlining pfn 0x231e00 at process virtual address 0x7f3f8e600000 [ 4043.654865] Soft offlining pfn 0x4ed200 at process virtual address 0x7f3f8e400000 [ 4043.658332] Soft offlining pfn 0x231c00 at process virtual address 0x7f3f8e600000 [ 4043.659927] soft offline: 0x231c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4043.665419] Soft head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4044.068947] Soft offlining pfn 0x231c00 at process virtual address 0x7f3f8e400000 [ 4044.072072] Soft offlining pfn 0x251800 at process virtual address 0x7f3f8e600000 [ 4044.078022] Soft offlining pfn 0x222c00 at process virtual address 0x7f3f8e400000 [ 4044.081019] Soft offlining pfn 0x4e5400 at process virtual address 0x7f3f8e600000 [ 4044.085559] Soft offlining pfn 0x251a00 at process virtual address 0x7f3f8e400000 [ 4044.088877] Soft offlining pfn 0x4e5600 at process virtual address 0x7f3f8e600000 [ 4044.095564] Soft offlining pfn 0x222e00 at process virtual address 0x7f3f8e400000 [ 4044.099596] Soft offlining pfn 0x270400 at process virtual address 0x7f3f8e600000 [ 4044.104167] Soft offlining pfn 0x270600 at process virtual address 0x7f3f8e400000 [ 4044.107356] Soft offlining pfn 0x4be000 at process virtual address 0x7f3f8e600000 [ 4044.112077] Soft offlining pfn 0x1d3a00 at process virtual address 0x7f3f8e400000 [ 4044.115407] Soft offlining pfn 0x1d3800 at process virtual address 0x7f3f8e600000 [ 4044.119987] Soft offlining pfn 0x203200 at process virtual address 0x7f3f8e400000 [ 4044.122939] Soft offlining pfn 0x4be200 at process virtual address 0x7f3f8head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4044.627095] Soft offlining pfn 0x508e00 at process virtual address 0x7f3f8e400000 [ 4044.630760] Soft offlining pfn 0x508c00 at process virtual address 0x7f3f8e600000 [ 4044.635735] Soft offlining pfn 0x262000 at process virtual address 0x7f3f8e400000 [ 4044.638953] Soft offlining pfn 0x262200 at process virtual address 0x7f3f8e600000 [ 4044.644864] Soft offlining pfn 0x1e3800 at process virtual address 0x7f3f8e400000 [ 4044.647722] Soft offlining pfn 0x4fc200 at process virtual address 
0x7f3f8e600000 [ 4044.652148] Soft offlining pfn 0x203000 at process virtual address 0x7f3f8e400000 [ 4044.660119] Soft offlining pfn 0x4fc000 at process virtual address 0x7f3f8e600000 [ 4044.665159] Soft offlining pfn 0x1e3a00 at process virtual address 0x7f3f8e400000 [ 4044.668340] Soft offlining pfn 0x4dd800 at process virtual address 0x7f3f8e600000 [ 4044.672930] Soft offlining pfn 0x241c00 at process virtual address 0x7f3f8e400000 [ 4044.676226] Soft offlining pfn 0x4dda00 at process virtual address 0x7f3f8e600000 [ 4044.677445] soft offline: 0x4dda00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4044.7093000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4045.082263] Soft offlining pfn 0x241e00 at process virtual address 0x7f3f8e400000 [ 4045.088746] Soft offlining pfn 0x4dda00 at process virtual address 0x7f3f8e600000 [ 4045.089971] soft offline: 0x4dda00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4045.093823] Soft offlining pfn 0x280200 at process virtual address 0x7f3f8e400000 [ 4045.095028] soft offline: 0x280200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4045.098796] Soft offlining pfn 0x280000 at process virtual address 0x7f3f8e400000 [ 4045.101514] Soft offlining pfn 0x4dda00 at process virtual address 0x7f3f8e600000 [ 4045.107538] Soft offlining pfn 0x1f3400 at process virtual address 0x7f3f8e400000 [ 4045.111058] Soft offlining pfn 0x280200 at process virtual address 0x7f3f8e600000 [ 4045.115620] Soft offlining pfn 0x232000 at process virtual address 0x7f3f8e400000 [ 4045.118829] Soft offlining p0000 [ 4045.620829] soft offline: 0x4c6000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4045.626206] Soft offlining pfn 0x213200 at process virtual address 0x7f3f8e400000 [ 4045.627531] soft offline: 0x213200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4045.631409] Soft offlining pfn 0x232200 at process virtual address 0x7f3f8e400000 [ 4045.634346] Soft offlining pfn 0x4c6000 at process virtual address 0x7f3f8e600000 [ 4045.640929] Soft offlining pfn 0x223000 at process virtual address 0x7f3f8e400000 [ 4045.644280] Soft offlining pfn 0x213200 at process virtual address 0x7f3f8e600000 [ 4045.648814] Soft offlining pfn 0x251c00 at process virtual address 0x7f3f8e400000 [ 4045.651995] Soft offlining pfn 0x223200 at process virtual address 0x7f3f8e600000 [ 4045.656335] Soft offlining pfn 0x1d3c00 at process virtual address 0x7f3f8e400000 [ 4045.660159] Soft offlining pfn 0x4c6200 at process virtual address 0x7f3f8e600000 [ 4045.665071] Soft offlining pfn 0x1d3e00 at process virtual address 0x7f3f8e400000 [ 4045.668247] Soft oft offline: 0x4ce000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4046.072919] Soft offlining pfn 0x270800 at process virtual address 0x7f3f8e400000 [ 4046.077318] Soft offlining pfn 0x251e00 at process virtual address 0x7f3f8e600000 [ 4046.085000] Soft offlining pfn 0x262600 at process virtual address 0x7f3f8e400000 [ 4046.089143] Soft 
offlining pfn 0x4f4600 at process virtual address 0x7f3f8e600000 [ 4046.095692] Soft offlining pfn 0x262400 at process virtual address 0x7f3f8e400000 [ 4046.099874] Soft offlining pfn 0x4f4400 at process virtual address 0x7f3f8e600000 [ 4046.105761] Soft offlining pfn 0x4ce200 at process virtual address 0x7f3f8e400000 [ 4046.107000] soft offline: 0x4ce200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4046.112227] Soft offlining pfn 0x203600 at process virtual address 0x7f3f8e400000 [ 4046.115692] Soft offlining pfn 0x1e3c00 at process virtual address 0x7f3f8e600000 [ 4046.121514] Soft offlining pfn 0x1e3e00 at process virtual address 0x7f3f8e400000 [ 4046.124366] Soft offlining pfn 0x4ce000 at process virtual address 0x7f3f8e600000 [ 4046.128872] Soft offlining pfn 0x203400 at process virtual a[ 4046.628626] Soft offlining pfn 0x4d5c00 at process virtual address 0x7f3f8e600000 [ 4046.631439] soft offline: 0x4d5c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4046.635333] Soft offlining pfn 0x280600 at process virtual address 0x7f3f8e400000 [ 4046.638674] Soft offlining pfn 0x242200 at process virtual address 0x7f3f8e600000 [ 4046.643184] Soft offlining pfn 0x232600 at process virtual address 0x7f3f8e400000 [ 4046.646754] Soft offlining pfn 0x232400 at process virtual address 0x7f3f8e600000 [ 4046.652131] Soft offlining pfn 0x4d5c00 at process virtual address 0x7f3f8e400000 [ 4046.655919] Soft offlining pfn 0x1f3800 at process virtual address 0x7f3f8e600000 [ 4046.658246] soft offline: 0x1f3800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4046.662370] Soft offlining pfn 0x4ce200 at process virtual address 0x7f3f8e400000 [ 4046.666109] Soft offlining pfn 0x4d5e00 at process virtual address 0x7f3f8e600000 [ 4046.667634] soft offline: 0x4d5e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4046.698lastcpupid=0x1fffff) [ 4047.172277] Soft offlining pfn 0x4d5e00 at process virtual address 0x7f3f8e400000 [ 4047.175971] Soft offlining pfn 0x4b6400 at process virtual address 0x7f3f8e600000 [ 4047.181086] Soft offlining pfn 0x1f3a00 at process virtual address 0x7f3f8e400000 [ 4047.184343] Soft offlining pfn 0x4b6600 at process virtual address 0x7f3f8e600000 [ 4047.188979] Soft offlining pfn 0x1f3800 at process virtual address 0x7f3f8e400000 [ 4047.195174] Soft offlining pfn 0x1f3800 at process virtual address 0x7f3f8e400000 [ 4047.198357] Soft offlining pfn 0x213400 at process virtual address 0x7f3f8e600000 [ 4047.202896] Soft offlining pfn 0x223400 at process virtual address 0x7f3f8e400000 [ 4047.206189] Soft offlining pfn 0x213600 at process virtual address 0x7f3f8e600000 [ 4047.210614] Soft offlining pfn 0x252000 at process virtual address 0x7f3f8e400000 [ 4047.213991] Soft offlining pfn 0x504200 at process virtual address 0x7f3f8e600000 [ 4047.218907] Soft offlining pfn 0x252200 at process virtual address 0x7f3f8e400000 [ 4047.222267] Soft offlining pfn 0x1d4200 at process virtual address 0x7f3f8e600000 [ 4047.227066] Soft offlining pfn 0x504000 at process virtual address 0x7f3f8e400000 [ 4047.230652] Soft offlining pfn 0x223600 at process virtual address 0x7f3f8e600000 [ 4047.232334] son 0x4ed600 at 
process virtual address 0x7f3f8e400000 [ 4047.634326] soft offline: 0x4ed600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4047.638180] Soft offlining pfn 0x1d4000 at process virtual address 0x7f3f8e400000 [ 4047.641210] Soft offlining pfn 0x262800 at process virtual address 0x7f3f8e600000 [ 4047.647228] Soft offlining pfn 0x262a00 at process virtual address 0x7f3f8e400000 [ 4047.650217] Soft offlining pfn 0x4ed400 at process virtual address 0x7f3f8e600000 [ 4047.657949] Soft offlining pfn 0x223600 at process virtual address 0x7f3f8e400000 [ 4047.660324] soft offline: 0x223600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4047.665613] Soft offlining pfn 0x4e5800 at process virtual address 0x7f3f8e400000 [ 4047.669157] Soft offlining pfn 0x4ed600 at process virtual address 0x7f3f8e600000 [ 4047.673922] Soft offlining pfn 0x270c00 at process virtual address 0x7f3f8e400000 [ 4047.677007] Soft offlining pfn 0x4e5a00 at process virtual address 0x7f3f8e600000 [ 4047.681538] Soft offlining pfn 0x223600 at process virtual address 0x7f3f8e400000 head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4048.185460] Soft offlining pfn 0x1e4000 at process virtual address 0x7f3f8e400000 [ 4048.194450] Soft offlining pfn 0x509200 at process virtual address 0x7f3f8e600000 [ 4048.199961] soft offline: 0x509200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4048.203846] Soft offlining pfn 0x270e00 at process virtual address 0x7f3f8e400000 [ 4048.205107] soft offline: 0x270e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4048.208851] Soft offlining pfn 0x1e4200 at process virtual address 0x7f3f8e400000 [ 4048.211816] Soft offlining pfn 0x509200 at process virtual address 0x7f3f8e600000 [ 4048.216081] Soft offlining pfn 0x270e00 at process virtual address 0x7f3f8e400000 [ 4048.219168] Soft offlining pfn 0x509000 at process virtual address 0x7f3f8e600000 [ 4048.223912] Soft offlining pfn 0x203800 at process virtual address 0x7f3f8e400000 [ 4048.227208] Soft offlining pfn 0x4be400 at process virtual address 0x7f3f8e600000 [ 4048.229090] soft offline: 0x4be400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mapp80800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4048.633755] Soft offlining pfn 0x203a00 at process virtual address 0x7f3f8e400000 [ 4048.636766] Soft offlining pfn 0x4be400 at process virtual address 0x7f3f8e600000 [ 4048.641138] Soft offlining pfn 0x280800 at process virtual address 0x7f3f8e400000 [ 4048.644363] Soft offlining pfn 0x4be600 at process virtual address 0x7f3f8e600000 [ 4048.649180] Soft offlining pfn 0x280a00 at process virtual address 0x7f3f8e400000 [ 4048.655140] Soft offlining pfn 0x280a00 at process virtual address 0x7f3f8e400000 [ 4048.658338] Soft offlining pfn 0x242400 at process virtual address 0x7f3f8e600000 [ 4048.663086] Soft offlining pfn 0x232800 at process virtual address 0x7f3f8e400000 [ 4048.666412] Soft offlining pfn 0x242600 at process virtual address 0x7f3f8e600000 [ 4048.672010] Soft offlining pfn 0x4fc600 at process virtual address 
0x7f3f8e400000 [ 4048.675627] Soft offlining pfn 0x232a00 at process virtual address 0x7f3f8e600000 [ 4048.677695] soft offline: 0x232a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4049.182042] Soft offlining pfn 0x1f3c00 at process virtual address 0x7f3f8e400000 [ 4049.185192] Soft offlining pfn 0x1f3e00 at process virtual address 0x7f3f8e600000 [ 4049.191235] Soft offlining pfn 0x213800 at process virtual address 0x7f3f8e400000 [ 4049.194130] Soft offlining pfn 0x4fc400 at process virtual address 0x7f3f8e600000 [ 4049.200346] Soft offlining pfn 0x213a00 at process virtual address 0x7f3f8e400000 [ 4049.203738] Soft offlining pfn 0x232a00 at process virtual address 0x7f3f8e600000 [ 4049.208116] Soft offlining pfn 0x223a00 at process virtual address 0x7f3f8e400000 [ 4049.211402] Soft offlining pfn 0x223800 at process virtual address 0x7f3f8e600000 [ 4049.215898] Soft offlining pfn 0x252600 at process virtual address 0x7f3f8e400000 [ 4049.219347] Soft offlining pfn 0x4ddc00 at process virtual address 0x7f3f8e600000 [ 4049.224122] Soft offlining pfn 0x1d4400 at process virtual address 0x7f3f8e400000 [ 4049.227455] Soft offlining pfn 0x252400 at process virtual address 0x7f3f8e600000 [ 4049.233383] Soft offlining pfn 0x262c00 at process virtual address 0x7f3f8e400000 [ 4049.2643473000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4049.638103] Soft offlining pfn 0x262e00 at process virtual address 0x7f3f8e400000 [ 4049.639791] soft offline: 0x262e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4049.643670] Soft offlining pfn 0x1d4600 at process virtual address 0x7f3f8e400000 [ 4049.646547] Soft offlining pfn 0x4dde00 at process virtual address 0x7f3f8e600000 [ 4049.651029] Soft offlining pfn 0x262e00 at process virtual address 0x7f3f8e400000 [ 4049.652763] soft offline: 0x262e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4049.658214] Soft offlining pfn 0x4c6600 at process virtual address 0x7f3f8e400000 [ 4049.662197] Soft offlining pfn 0x4c6400 at process virtual address 0x7f3f8e600000 [ 4049.666274] Soft offlining pfn 0x262e00 at process virtual address 0x7f3f8e400000 [ 4049.669626] Soft offlining pfn 0x4f4800 at process virtual address 0x7f3f8e600000 [ 4049.674586] Soft offlining pfn 0x271000 at process virtual address 0x7f3f8e400000 [ 4049.678450] Soft offlin0000 [ 4050.180250] soft offline: 0x4f4a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4050.184042] Soft offlining pfn 0x1e4400 at process virtual address 0x7f3f8e400000 [ 4050.187301] Soft offlining pfn 0x271200 at process virtual address 0x7f3f8e600000 [ 4050.191959] Soft offlining pfn 0x280c00 at process virtual address 0x7f3f8e400000 [ 4050.195313] Soft offlining pfn 0x1e4600 at process virtual address 0x7f3f8e600000 [ 4050.199965] Soft offlining pfn 0x242800 at process virtual address 0x7f3f8e400000 [ 4050.203471] Soft offlining pfn 0x4f4a00 at process virtual address 0x7f3f8e600000 [ 4050.208172] Soft offlining pfn 0x242a00 at process virtual address 0x7f3f8e400000 [ 4050.211845] Soft offlining pfn 0x232e00 at process virtual address 0x7f3f8e600000 [ 
4050.216733] Soft offlining pfn 0x4ce400 at process virtual address 0x7f3f8e400000 [ 4050.220359] Soft offlining pfn 0x280e00 at process virtual address 0x7f3f8e600000 [ 4050.221704] soft offline: 0x280e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4050.227546] Soft offlining pfn 0x232c00 at process virtual address 0x7f3f8e400000 [ 4050.230870] Soft offlining pfn 0x4ce60[ 4050.727482] Soft offlining pfn 0x1f4000 at process virtual address 0x7f3f8e400000 [ 4050.736100] Soft offlining pfn 0x4ce600 at process virtual address 0x7f3f8e600000 [ 4050.742089] Soft offlining pfn 0x4d6200 at process virtual address 0x7f3f8e400000 [ 4050.745727] Soft offlining pfn 0x4d6000 at process virtual address 0x7f3f8e600000 [ 4050.750663] Soft offlining pfn 0x4b6a00 at process virtual address 0x7f3f8e400000 [ 4050.754281] Soft offlining pfn 0x1f4200 at process virtual address 0x7f3f8e600000 [ 4050.759445] Soft offlining pfn 0x4b6800 at process virtual address 0x7f3f8e400000 [ 4050.763742] Soft offlining pfn 0x504400 at process virtual address 0x7f3f8e600000 [ 4050.769356] Soft offlining pfn 0x280e00 at process virtual address 0x7f3f8e400000 [ 4050.773192] Soft offlining pfn 0x504600 at process virtual address 0x7f3f8e600000 [ 4050.775074] soft offline: 0x504600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4050.779010] Soft offlining pfn 0x223e00 at process virtual address 0x7f3f8e400000 [ 4050.782422] Soft offlining pfn 0x223c00 at process virtual address 0x7f3f8e600000 [ 4050.789177] Soft offlining pfn 0x213e00 at process virtual address 0x7f3f8e400000 [ 4050.794116] Soft offlining pfn 0x213c00 at process virtual address 0x7f3f8e600000 [ 4050.800862] Soft offl|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4051.203123] Soft offlining pfn 0x263000 at process virtual address 0x7f3f8e400000 [ 4051.206400] Soft offlining pfn 0x263200 at process virtual address 0x7f3f8e600000 [ 4051.212200] Soft offlining pfn 0x1d4a00 at process virtual address 0x7f3f8e400000 [ 4051.217143] Soft offlining pfn 0x4ed800 at process virtual address 0x7f3f8e600000 [ 4051.222414] Soft offlining pfn 0x1d4800 at process virtual address 0x7f3f8e400000 [ 4051.224284] soft offline: 0x1d4800: hugepage isolation failed, page count 2, type 0x17ffffc003000f(locked|referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4051.227845] Soft offlining pfn 0x504600 at process virtual address 0x7f3f8e400000 [ 4051.231304] Soft offlining pfn 0x4eda00 at process virtual address 0x7f3f8e600000 [ 4051.235560] Soft offlining pfn 0x1d4800 at process virtual address 0x7f3f8e400000 [ 4051.238895] Soft offlining pfn 0x509400 at process virtual address 0x7f3f8e600000 [ 4051.243448] Soft offlining pfn 0x252800 at process vi|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4051.747288] Soft offlining pfn 0x271400 at process virtual address 0x7f3f8e400000 [ 4051.750326] Soft offlining pfn 0x252a00 at process virtual address 0x7f3f8e600000 [ 4051.754628] Soft offlining pfn 0x204000 at process virtual address 0x7f3f8e400000 [ 4051.759165] Soft offlining pfn 0x271600 at process virtual address 0x7f3f8e600000 [ 4051.765228] Soft offlining pfn 0x1e4800 at process virtual address 0x7f3f8e400000 [ 4051.767329] soft offline: 0x1e4800: hugepage isolation failed, page count 2, type 
0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4051.771875] Soft offlining pfn 0x204200 at process virtual address 0x7f3f8e400000 [ 4051.775784] Soft offlining pfn 0x4e5c00 at process virtual address 0x7f3f8e600000 [ 4051.781821] Soft offlining pfn 0x1e4800 at process virtual address 0x7f3f8e400000 [ 4051.786616] Soft offlining pfn 0x281200 at process virtual address 0x7f3f8e600000 [ 4051.793300] Soft offlining pfn 0x509600 at process virtual address 0x7f3f8e400000 [ 4051.796865] Soft offlining pfn 0x1e4a00 at process virtual address 0x7f3f8e600000 [ 4051.798785] soft offline: 0x1e4a00: hugepage isolation failed, page count 2, [ 4052.199976] soft offline: 0x4be800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4052.203853] Soft offlining pfn 0x281000 at process virtual address 0x7f3f8e400000 [ 4052.207084] Soft offlining pfn 0x1e4a00 at process virtual address 0x7f3f8e600000 [ 4052.213017] Soft offlining pfn 0x233200 at process virtual address 0x7f3f8e400000 [ 4052.216127] Soft offlining pfn 0x4e5e00 at process virtual address 0x7f3f8e600000 [ 4052.220566] Soft offlining pfn 0x233000 at process virtual address 0x7f3f8e400000 [ 4052.223855] Soft offlining pfn 0x4be800 at process virtual address 0x7f3f8e600000 [ 4052.228499] Soft offlining pfn 0x242c00 at process virtual address 0x7f3f8e400000 [ 4052.232885] Soft offlining pfn 0x1f4400 at process virtual address 0x7f3f8e600000 [ 4052.237580] Soft offlining pfn 0x4bea00 at process virtual address 0x7f3f8e400000 [ 4052.240941] Soft offlining pfn 0x242e00 at process virtual address 0x7f3f8e600000 [ 4052.242147] soft offline: 0x242e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4052.247615] Soft offli[ 4052.750742] Soft offlining pfn 0x242e00 at process virtual address 0x7f3f8e400000 [ 4052.753981] Soft offlining pfn 0x1f4600 at process virtual address 0x7f3f8e600000 [ 4052.761473] Soft offlining pfn 0x224200 at process virtual address 0x7f3f8e400000 [ 4052.764503] Soft offlining pfn 0x4de000 at process virtual address 0x7f3f8e600000 [ 4052.769916] Soft offlining pfn 0x224000 at process virtual address 0x7f3f8e400000 [ 4052.771256] soft offline: 0x224000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4052.775179] Soft offlining pfn 0x4fc800 at process virtual address 0x7f3f8e400000 [ 4052.778719] Soft offlining pfn 0x4de200 at process virtual address 0x7f3f8e600000 [ 4052.783690] Soft offlining pfn 0x224000 at process virtual address 0x7f3f8e400000 [ 4052.787107] Soft offlining pfn 0x4c6800 at process virtual address 0x7f3f8e600000 [ 4052.792516] Soft offlining pfn 0x4fca00 at process virtual address 0x7f3f8e400000 [ 4052.796071] Soft offlining pfn 0x4c6a00 at process virtual address 0x7f3f8e600000 [ 4052.800681] Soft offlining pfn 0x4f4e00 at process virtual address 0x7f3f8e400000 [ 4052.804085] Soft offlining pfn 0x4f4c00 at process virtual address 0x7f3f8e600000 [ 4052.808887] Soft offlining pfn 0x4cea00 at process virtual address 0x7f3f8e400000 [ 4052.810627] soft offline: 0x4cea00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(r[ 4053.197480] Soft offlining pfn 0x214000 at process virtual address 0x7f3f8e600000 [ 4053.215981] Soft offlining pfn 0x263400 at process virtual address 
0x7f3f8e400000 [ 4053.218984] Soft offlining pfn 0x509800 at process virtual address 0x7f3f8e600000 [ 4053.225328] Soft offlining pfn 0x1d4c00 at process virtual address 0x7f3f8e400000 [ 4053.226965] soft offline: 0x1d4c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4053.230903] Soft offlining pfn 0x214200 at process virtual address 0x7f3f8e400000 [ 4053.233843] Soft offlining pfn 0x509a00 at process virtual address 0x7f3f8e600000 [ 4053.238228] Soft offlining pfn 0x1d4c00 at process virtual address 0x7f3f8e400000 [ 4053.241753] Soft offlining pfn 0x4ce800 at process virtual address 0x7f3f8e600000 [ 4053.246616] Soft offlining pfn 0x1d4e00 at process virtual address 0x7f3f8e400000 [ 4053.250615] Soft offlining pfn 0x271800 at process virtual address 0x7f3f8e600000 [ 4053.255267] Soft offlining pfn 0x4cea00 at process virtual address 0x7f3f8e400000 [ 4053.258640] Soft offlining pfn 0x252c00 at process virtual address 0x7f3f8e600000 [ 4053.260506] soft offline: 0x252c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4053.264520] Soft offlining pfn 0x4b6e00[ 4053.757756] Soft offlining pfn 0x252c00 at process virtual address 0x7f3f8e400000 [ 4053.768431] Soft offlining pfn 0x271a00 at process virtual address 0x7f3f8e600000 [ 4053.776020] Soft offlining pfn 0x1e4c00 at process virtual address 0x7f3f8e400000 [ 4053.778988] Soft offlining pfn 0x4d6400 at process virtual address 0x7f3f8e600000 [ 4053.783976] Soft offlining pfn 0x252e00 at process virtual address 0x7f3f8e400000 [ 4053.785742] soft offline: 0x252e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4053.791023] Soft offlining pfn 0x4b6c00 at process virtual address 0x7f3f8e400000 [ 4053.794388] Soft offlining pfn 0x4d6600 at process virtual address 0x7f3f8e600000 [ 4053.798521] Soft offlining pfn 0x252e00 at process virtual address 0x7f3f8e400000 [ 4053.801940] Soft offlining pfn 0x4b6e00 at process virtual address 0x7f3f8e600000 [ 4053.806518] Soft offlining pfn 0x1e4e00 at process virtual address 0x7f3f8e400000 [ 4053.809850] Soft offlining pfn 0x504800 at process virtual address 0x7f3f8e600000 [ 4053.812043] soft offline: 0x504800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4053.816043] Softhead|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4054.220051] Soft offlining pfn 0x281400 at process virtual address 0x7f3f8e400000 [ 4054.223190] Soft offlining pfn 0x504800 at process virtual address 0x7f3f8e600000 [ 4054.229618] Soft offlining pfn 0x204400 at process virtual address 0x7f3f8e400000 [ 4054.232910] Soft offlining pfn 0x281600 at process virtual address 0x7f3f8e600000 [ 4054.237454] Soft offlining pfn 0x243000 at process virtual address 0x7f3f8e400000 [ 4054.240863] Soft offlining pfn 0x204600 at process virtual address 0x7f3f8e600000 [ 4054.246346] Soft offlining pfn 0x4edc00 at process virtual address 0x7f3f8e400000 [ 4054.250009] Soft offlining pfn 0x243200 at process virtual address 0x7f3f8e600000 [ 4054.252006] soft offline: 0x243200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4054.257400] Soft offlining pfn 0x504a00 at process 
virtual address 0x7f3f8e400000 [ 4054.258842] soft offline: 0x504a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4054oft offlining pfn 0x1f4800 at process virtual address 0x7f3f8e400000 [ 4054.762960] Soft offlining pfn 0x4e6000 at process virtual address 0x7f3f8e600000 [ 4054.764531] soft offline: 0x4e6000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4054.769904] Soft offlining pfn 0x1f4a00 at process virtual address 0x7f3f8e400000 [ 4054.771182] soft offline: 0x1f4a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4054.775056] Soft offlining pfn 0x243200 at process virtual address 0x7f3f8e400000 [ 4054.778017] Soft offlining pfn 0x4ede00 at process virtual address 0x7f3f8e600000 [ 4054.782486] Soft offlining pfn 0x1f4a00 at process virtual address 0x7f3f8e400000 [ 4054.786276] Soft offlining pfn 0x263800 at process virtual address 0x7f3f8e600000 [ 4054.792181] Soft offlining pfn 0x224400 at process virtual address 0x7f3f8e400000 [ 4054.795217] Soft offlining pfn 0x504a00 at process virtual address 0x7f3f8e600000 [ 4054.802831] Soft offlining pfn 0x224600 at process virtual address 0x7f3f8e400000 [ 4054.804528] soft offline: 0x224600: hu[ 4055.305931] soft offline: 0x224600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4055.311917] Soft offlining pfn 0x263a00 at process virtual address 0x7f3f8e400000 [ 4055.316459] Soft offlining pfn 0x4e6200 at process virtual address 0x7f3f8e600000 [ 4055.323358] Soft offlining pfn 0x224600 at process virtual address 0x7f3f8e400000 [ 4055.328685] Soft offlining pfn 0x4e6000 at process virtual address 0x7f3f8e600000 [ 4055.337143] Soft offlining pfn 0x4bee00 at process virtual address 0x7f3f8e400000 [ 4055.342408] Soft offlining pfn 0x4bec00 at process virtual address 0x7f3f8e600000 [ 4055.349789] Soft offlining pfn 0x4fce00 at process virtual address 0x7f3f8e400000 [ 4055.355812] Soft offlining pfn 0x4fcc00 at process virtual address 0x7f3f8e600000 [ 4055.363164] Soft offlining pfn 0x4de600 at process virtual address 0x7f3f8e400000 [ 4055.365498] soft offline: 0x4de600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4055.370478] Soft offlining pfn 0x214400 at process virtual address 0x7f3f8e400000 [ 4055.375445] Soft offlining pfn 0x1d5200 at process virtual address 0x7f3f8e600000 [ 4055.383322] Soft [ 4055.757273] Soft offlining pfn 0x1d5000 at process virtual address 0x7f3f8e400000 [ 4055.785126] soft offline: 0x1d5000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4055.789284] Soft offlining pfn 0x4c6c00 at process virtual address 0x7f3f8e400000 [ 4055.792820] Soft offlining pfn 0x4c6e00 at process virtual address 0x7f3f8e600000 [ 4055.798007] Soft offlining pfn 0x1d5000 at process virtual address 0x7f3f8e400000 [ 4055.801357] Soft offlining pfn 0x509e00 at process virtual address 0x7f3f8e600000 [ 4055.805866] Soft offlining pfn 0x214600 at process virtual address 0x7f3f8e400000 [ 4055.811862] Soft offlining pfn 0x271c00 at process virtual address 0x7f3f8e400000 [ 
4055.813164] soft offline: 0x271c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4055.818922] Soft offlining pfn 0x4f5000 at process virtual address 0x7f3f8e400000 [ 4055.822251] Soft offlining pfn 0x509c00 at process virtual address 0x7f3f8e600000 [ 4055.826481] Soft offlining pfn 0x271c00 at process virtual address 0x7f3f8e400000 [ 4055.829476] Soft offlining pfn 0x4f5200 at process|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4056.333474] Soft offlining pfn 0x4cee00 at process virtual address 0x7f3f8e400000 [ 4056.337024] Soft offlining pfn 0x4cec00 at process virtual address 0x7f3f8e600000 [ 4056.341998] Soft offlining pfn 0x271e00 at process virtual address 0x7f3f8e400000 [ 4056.345372] Soft offlining pfn 0x253000 at process virtual address 0x7f3f8e600000 [ 4056.349895] Soft offlining pfn 0x253200 at process virtual address 0x7f3f8e400000 [ 4056.352824] Soft offlining pfn 0x4b7200 at process virtual address 0x7f3f8e600000 [ 4056.357263] Soft offlining pfn 0x214600 at process virtual address 0x7f3f8e400000 [ 4056.360497] Soft offlining pfn 0x4b7000 at process virtual address 0x7f3f8e600000 [ 4056.365248] Soft offlining pfn 0x1e5000 at process virtual address 0x7f3f8e400000 [ 4056.368472] Soft offlining pfn 0x4d6800 at process virtual address 0x7f3f8e600000 [ 4056.370532] soft offline: 0x4d6800: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4056.375935] Soft offlining pfn 0x281800 at process virtual address 0x7f3f8e400000 [ 4056.377222] soft offline: 0x281800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|headoft offlining pfn 0x4d6a00 at process virtual address 0x7f3f8e600000 [ 4056.780366] soft offline: 0x4d6a00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4056.785311] Soft offlining pfn 0x281a00 at process virtual address 0x7f3f8e400000 [ 4056.788491] Soft offlining pfn 0x281800 at process virtual address 0x7f3f8e600000 [ 4056.792919] Soft offlining pfn 0x204a00 at process virtual address 0x7f3f8e400000 [ 4056.796345] Soft offlining pfn 0x204800 at process virtual address 0x7f3f8e600000 [ 4056.801886] Soft offlining pfn 0x4d6a00 at process virtual address 0x7f3f8e400000 [ 4056.805467] Soft offlining pfn 0x243400 at process virtual address 0x7f3f8e600000 [ 4056.807349] soft offline: 0x243400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4056.812793] Soft offlining pfn 0x504c00 at process virtual address 0x7f3f8e400000 [ 4056.814123] soft offline: 0x504c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4056.817926] Soft offlining p0000 [ 4057.320836] Soft offlining pfn 0x504e00 at process virtual address 0x7f3f8e600000 [ 4057.322190] soft offline: 0x504e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4057.327550] Soft offlining pfn 0x263c00 at process virtual address 0x7f3f8e400000 [ 4057.329139] soft offline: 0x263c00: hugepage isolation failed, page count 2, type 
0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4057.333221] Soft offlining pfn 0x233800 at process virtual address 0x7f3f8e400000 [ 4057.336208] Soft offlining pfn 0x504e00 at process virtual address 0x7f3f8e600000 [ 4057.348459] Soft offlining pfn 0x263e00 at process virtual address 0x7f3f8e400000 [ 4057.351417] Soft offlining pfn 0x4d6800 at process virtual address 0x7f3f8e600000 [ 4057.357468] Soft offlining pfn 0x263c00 at process virtual address 0x7f3f8e400000 [ 4057.358824] soft offline: 0x263c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4057.364153] Soft offlining pfn 0x4ee000 at process virtual address 0x7f3f8e400000 [ 4057.367789] Soft offlining pfn 0x504c00 at process virtual address 0x7f3f8e600000 [ 4057.372496] Soft offlining pftype 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4057.877241] Soft offlining pfn 0x1f4c00 at process virtual address 0x7f3f8e400000 [ 4057.878675] soft offline: 0x1f4c00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4057.882422] Soft offlining pfn 0x4ee200 at process virtual address 0x7f3f8e400000 [ 4057.886006] Soft offlining pfn 0x4e6600 at process virtual address 0x7f3f8e600000 [ 4057.890219] Soft offlining pfn 0x1f4c00 at process virtual address 0x7f3f8e400000 [ 4057.893706] Soft offlining pfn 0x4bf000 at process virtual address 0x7f3f8e600000 [ 4057.898429] Soft offlining pfn 0x263c00 at process virtual address 0x7f3f8e400000 [ 4057.901774] Soft offlining pfn 0x4bf200 at process virtual address 0x7f3f8e600000 [ 4057.903060] soft offline: 0x4bf200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4057.908447] Soft offlining pfn 0x224800 at process virtual address 0x7f3f8e400000 [ 4057.910229] soft offline: 0x224800: hugepage isolation failed, page count 2, type 0x17ffffc0ocess virtual address 0x7f3f8e600000 [ 4058.412751] soft offline: 0x4fd000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4058.416621] Soft offlining pfn 0x224a00 at process virtual address 0x7f3f8e400000 [ 4058.419763] Soft offlining pfn 0x224800 at process virtual address 0x7f3f8e600000 [ 4058.424190] Soft offlining pfn 0x1d5600 at process virtual address 0x7f3f8e400000 [ 4058.427505] Soft offlining pfn 0x1d5400 at process virtual address 0x7f3f8e600000 [ 4058.432029] Soft offlining pfn 0x214a00 at process virtual address 0x7f3f8e400000 [ 4058.435352] Soft offlining pfn 0x4fd000 at process virtual address 0x7f3f8e600000 [ 4058.440253] Soft offlining pfn 0x272000 at process virtual address 0x7f3f8e400000 [ 4058.443544] Soft offlining pfn 0x253400 at process virtual address 0x7f3f8e600000 [ 4058.448417] Soft offlining pfn 0x4fd200 at process virtual address 0x7f3f8e400000 [ 4058.451985] Soft offlining pfn 0x214800 at process virtual address 0x7f3f8e600000 [ 4058.453205] soft offline: 0x214800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4058.458652] Soft offlining pfn 0x50a000 at process virtual addrefff) [ 4058.862438] Soft offlining pfn 0x214800 at process virtual address 0x7f3f8e400000 [ 
4058.865776] Soft offlining pfn 0x272200 at process virtual address 0x7f3f8e600000 [ 4058.872445] Soft offlining pfn 0x1e5400 at process virtual address 0x7f3f8e400000 [ 4058.875599] Soft offlining pfn 0x4bf200 at process virtual address 0x7f3f8e600000 [ 4058.880424] Soft offlining pfn 0x253600 at process virtual address 0x7f3f8e400000 [ 4058.882258] soft offline: 0x253600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4058.886293] Soft offlining pfn 0x50a200 at process virtual address 0x7f3f8e400000 [ 4058.889773] Soft offlining pfn 0x50a000 at process virtual address 0x7f3f8e600000 [ 4058.895732] Soft offlining pfn 0x1e5600 at process virtual address 0x7f3f8e400000 [ 4058.899388] Soft offlining pfn 0x281c00 at process virtual address 0x7f3f8e600000 [ 4058.904119] Soft offlining pfn 0x281e00 at process virtual address 0x7f3f8e400000 [ 4058.907058] Soft offlining pfn 0x4de800 at process virtual address 0x7f3f8e600000 [ 4058.912132] Soft offlining pfn 0x253600 at process virtual address 0x7f3f8e400000 [ 4058.913965] soft offline: 0x253600: hugepage isolation [ 4059.387763] soft offline: 0x4c7000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4059.418244] Soft offlining pfn 0x253600 at process virtual address 0x7f3f8e400000 [ 4059.421425] Soft offlining pfn 0x264000 at process virtual address 0x7f3f8e600000 [ 4059.427658] Soft offlining pfn 0x204c00 at process virtual address 0x7f3f8e400000 [ 4059.430905] Soft offlining pfn 0x4dea00 at process virtual address 0x7f3f8e600000 [ 4059.435437] Soft offlining pfn 0x264200 at process virtual address 0x7f3f8e400000 [ 4059.436774] soft offline: 0x264200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4059.440779] Soft offlining pfn 0x4c7200 at process virtual address 0x7f3f8e400000 [ 4059.444539] Soft offlining pfn 0x4c7000 at process virtual address 0x7f3f8e600000 [ 4059.453319] Soft offlining pfn 0x243800 at process virtual address 0x7f3f8e400000 [ 4059.455111] soft offline: 0x243800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptooft offlining pfn 0x4f5400 at process virtual address 0x7f3f8e600000 [ 4059.857444] soft offline: 0x4f5400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4059.862148] Soft offlining pfn 0x243800 at process virtual address 0x7f3f8e400000 [ 4059.865497] Soft offlining pfn 0x264200 at process virtual address 0x7f3f8e600000 [ 4059.869862] Soft offlining pfn 0x233c00 at process virtual address 0x7f3f8e400000 [ 4059.873320] Soft offlining pfn 0x243a00 at process virtual address 0x7f3f8e600000 [ 4059.878756] Soft offlining pfn 0x4f5600 at process virtual address 0x7f3f8e400000 [ 4059.882273] Soft offlining pfn 0x233e00 at process virtual address 0x7f3f8e600000 [ 4059.883565] soft offline: 0x233e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4059.890001] Soft offlining pfn 0x4cf000 at process virtual address 0x7f3f8e400000 [ 4059.891412] soft offline: 0x4cf000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|la[ 4060.379158] Soft offlining pfn 
0x224c00 at process virtual address 0x7f3f8e400000 [ 4060.395433] Soft offlining pfn 0x4f5400 at process virtual address 0x7f3f8e600000 [ 4060.397029] soft offline: 0x4f5400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4060.400917] Soft offlining pfn 0x224e00 at process virtual address 0x7f3f8e400000 [ 4060.404138] Soft offlining pfn 0x1f5200 at process virtual address 0x7f3f8e600000 [ 4060.408635] Soft offlining pfn 0x1d5a00 at process virtual address 0x7f3f8e400000 [ 4060.417993] Soft offlining pfn 0x1d5800 at process virtual address 0x7f3f8e600000 [ 4060.424363] Soft offlining pfn 0x214c00 at process virtual address 0x7f3f8e400000 [ 4060.428505] Soft offlining pfn 0x4cf000 at process virtual address 0x7f3f8e600000 [ 4060.434655] Soft offlining pfn 0x214e00 at process virtual address 0x7f3f8e400000 [ 4060.436284] soft offline: 0x214e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4060.440804] Soft offlining p[ 4060.944063] Soft offlining pfn 0x214e00 at process virtual address 0x7f3f8e400000 [ 4060.948210] Soft offlining pfn 0x272400 at process virtual address 0x7f3f8e600000 [ 4060.958985] Soft offlining pfn 0x4cf200 at process virtual address 0x7f3f8e400000 [ 4060.963999] Soft offlining pfn 0x4f5400 at process virtual address 0x7f3f8e600000 [ 4060.971649] Soft offlining pfn 0x4b7600 at process virtual address 0x7f3f8e400000 [ 4060.976533] Soft offlining pfn 0x253800 at process virtual address 0x7f3f8e600000 [ 4060.984265] Soft offlining pfn 0x253a00 at process virtual address 0x7f3f8e400000 [ 4060.989634] Soft offlining pfn 0x272600 at process virtual address 0x7f3f8e600000 [ 4060.997518] Soft offlining pfn 0x282200 at process virtual address 0x7f3f8e400000 [ 4061.003210] Soft offlining pfn 0x4b7400 at process virtual address 0x7f3f8e600000 [ 4061.010225] Soft offlining pfn 0x4d6c00 at process virtual address 0x7f3f8e400000 [ 4061.015177] Soft offlining pfn 0x264400 at process virtual address 0x7f3f8e600000 [ 4061.024164] Soft offlining pfn 0x282000 at process virtual address 0x7f3f8e400000 [ 4061.028815] Soft offlining pfn 0x4d6e00 at process virtual address 0x7f3f8e600000 [ 4061.040259] Soft offlining pfn 0x1e5800 at k|node=0|zone=2|lastcpupid=0x1fffff) [ 4061.443901] Soft offlining pfn 0x264600 at process virtual address 0x7f3f8e400000 [ 4061.446803] Soft offlining pfn 0x4ee400 at process virtual address 0x7f3f8e600000 [ 4061.452951] Soft offlining pfn 0x1e5a00 at process virtual address 0x7f3f8e400000 [ 4061.456471] Soft offlining pfn 0x1e5800 at process virtual address 0x7f3f8e600000 [ 4061.457740] soft offline: 0x1e5800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4061.462895] Soft offlining pfn 0x205000 at process virtual address 0x7f3f8e400000 [ 4061.466441] Soft offlining pfn 0x205200 at process virtual address 0x7f3f8e600000 [ 4061.471160] Soft offlining pfn 0x505000 at process virtual address 0x7f3f8e400000 [ 4061.474499] Soft offlining pfn 0x1e5800 at process virtual address 0x7f3f8e600000 [ 4061.476244] soft offline: 0x1e5800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4061.481224] Soft offlining pfn 0x4ee600 at process virtual address 0x7f3f8e400000 [ 4061.482771] soft offline: 
0x4ee600: hugepage isolation failed, page count 2, type 0x57ffffc003000e[ 4061.874414] Soft offlining pfn 0x4ee600 at process virtual address 0x7f3f8e600000 [ 4061.888662] Soft offlining pfn 0x243c00 at process virtual address 0x7f3f8e400000 [ 4061.892008] Soft offlining pfn 0x4e6800 at process virtual address 0x7f3f8e600000 [ 4061.896958] Soft offlining pfn 0x243e00 at process virtual address 0x7f3f8e400000 [ 4061.902854] Soft offlining pfn 0x243e00 at process virtual address 0x7f3f8e400000 [ 4061.904360] soft offline: 0x243e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4061.908249] Soft offlining pfn 0x234000 at process virtual address 0x7f3f8e400000 [ 4061.911571] Soft offlining pfn 0x4e6a00 at process virtual address 0x7f3f8e600000 [ 4061.916633] Soft offlining pfn 0x243e00 at process virtual address 0x7f3f8e400000 [ 4061.918323] soft offline: 0x243e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4061.922180] Soft offlining pfn 0x4bf400 at process virtual address 0x7f3f8e400000 [ 4061.925639] Soft offlining pfntype 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4062.429557] Soft offlining pfn 0x50a600 at process virtual address 0x7f3f8e400000 [ 4062.433070] Soft offlining pfn 0x50a400 at process virtual address 0x7f3f8e600000 [ 4062.437949] Soft offlining pfn 0x243e00 at process virtual address 0x7f3f8e400000 [ 4062.441221] Soft offlining pfn 0x4fd400 at process virtual address 0x7f3f8e600000 [ 4062.446716] Soft offlining pfn 0x4bf600 at process virtual address 0x7f3f8e400000 [ 4062.448421] soft offline: 0x4bf600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4062.452329] Soft offlining pfn 0x1f5400 at process virtual address 0x7f3f8e400000 [ 4062.455306] Soft offlining pfn 0x225000 at process virtual address 0x7f3f8e600000 [ 4062.459964] Soft offlining pfn 0x4fd600 at process virtual address 0x7f3f8e400000 [ 4062.463211] Soft offlining pfn 0x234200 at process virtual address 0x7f3f8e600000 [ 4062.465309] soft offline: 0x234200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappe3000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4062.969716] Soft offlining pfn 0x1f5600 at process virtual address 0x7f3f8e400000 [ 4062.972955] Soft offlining pfn 0x225200 at process virtual address 0x7f3f8e600000 [ 4062.979043] Soft offlining pfn 0x1d5c00 at process virtual address 0x7f3f8e400000 [ 4062.982038] Soft offlining pfn 0x4bf600 at process virtual address 0x7f3f8e600000 [ 4062.988315] Soft offlining pfn 0x1d5e00 at process virtual address 0x7f3f8e400000 [ 4062.991773] Soft offlining pfn 0x234200 at process virtual address 0x7f3f8e600000 [ 4062.993125] soft offline: 0x234200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4062.997221] Soft offlining pfn 0x272800 at process virtual address 0x7f3f8e400000 [ 4063.000317] Soft offlining pfn 0x4dec00 at process virtual address 0x7f3f8e600000 [ 4063.002300] soft offline: 0x4dec00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 
4063.006252] Soft offlining pfn 0x272a00 at process virtual address 0x7f3f8e400000 [ 4063.007766] soft offlinen 0x234200 at process virtual address 0x7f3f8e400000 [ 4063.411167] Soft offlining pfn 0x4dec00 at process virtual address 0x7f3f8e600000 [ 4063.412873] soft offline: 0x4dec00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4063.417701] Soft offlining pfn 0x215000 at process virtual address 0x7f3f8e400000 [ 4063.419192] soft offline: 0x215000: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4063.423081] Soft offlining pfn 0x272a00 at process virtual address 0x7f3f8e400000 [ 4063.426144] Soft offlining pfn 0x4dec00 at process virtual address 0x7f3f8e600000 [ 4063.434132] Soft offlining pfn 0x215200 at process virtual address 0x7f3f8e400000 [ 4063.435408] soft offline: 0x215200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4063.439205] Soft offlining pfn 0x215000 at process virtual address 0x7f3f8e400000 [ 4063.442147] Soft offlining pfn 0x4dee00 at process virtual address 0x7f3f8e600000 [ 4063.446497] Soft offlining pfn 0x215200 at process virtual address 0x7f3f8e400000 [ 4063.449826] Soft offlining pfn 0x4c740[ 4063.948291] Soft offlining pfn 0x253e00 at process virtual address 0x7f3f8e400000 [ 4063.951558] soft offline: 0x253e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4063.955397] Soft offlining pfn 0x253c00 at process virtual address 0x7f3f8e400000 [ 4063.958371] Soft offlining pfn 0x4c7400 at process virtual address 0x7f3f8e600000 [ 4063.964215] soft offline: 0x4c7400: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4063.968168] Soft offlining pfn 0x264800 at process virtual address 0x7f3f8e400000 [ 4063.969428] soft offline: 0x264800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4063.973229] Soft offlining pfn 0x253e00 at process virtual address 0x7f3f8e400000 [ 4063.976023] Soft offlining pfn 0x4c7400 at process virtual address 0x7f3f8e600000 [ 4063.982213] Soft offlining pfn 0x264800 at process virtual address 0x7f3f8e400000 [ 4063.986652] Soft offlining pfn 0x4c7600 at process virtual address 0x7f3f8e600000 [ 4063.994097] Soft offlining pfn 0x4f5a00 a[ 4064.497199] Soft offlining pfn 0x264a00 at process virtual address 0x7f3f8e400000 [ 4064.500409] Soft offlining pfn 0x282400 at process virtual address 0x7f3f8e600000 [ 4064.506147] Soft offlining pfn 0x1e5c00 at process virtual address 0x7f3f8e400000 [ 4064.509145] Soft offlining pfn 0x4f5800 at process virtual address 0x7f3f8e600000 [ 4064.515327] Soft offlining pfn 0x1e5e00 at process virtual address 0x7f3f8e400000 [ 4064.516578] soft offline: 0x1e5e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4064.520361] Soft offlining pfn 0x282600 at process virtual address 0x7f3f8e400000 [ 4064.523479] Soft offlining pfn 0x4f5a00 at process virtual address 0x7f3f8e600000 [ 4064.527942] Soft offlining pfn 0x1e5e00 at process virtual address 0x7f3f8e400000 
[ 4064.531105] Soft offlining pfn 0x4cf400 at process virtual address 0x7f3f8e600000 [ 4064.536177] Soft offlining pfn 0x234400 at process virtual address 0x7f3f8e400000 [ 4064.539323] Soft offlining pfn 0x4cf600 at process virtual address 0x7f3f8e600000 [ 4064.541145] soft offline: 0x4cf600: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodateoft offline: 0x205400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4064.945939] Soft offlining pfn 0x234600 at process virtual address 0x7f3f8e400000 [ 4064.948955] Soft offlining pfn 0x4cf600 at process virtual address 0x7f3f8e600000 [ 4064.953590] Soft offlining pfn 0x205400 at process virtual address 0x7f3f8e400000 [ 4064.955488] soft offline: 0x205400: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4064.960784] Soft offlining pfn 0x4b7a00 at process virtual address 0x7f3f8e400000 [ 4064.964488] Soft offlining pfn 0x4b7800 at process virtual address 0x7f3f8e600000 [ 4064.969304] Soft offlining pfn 0x4eea00 at process virtual address 0x7f3f8e400000 [ 4064.972798] Soft offlining pfn 0x4ee800 at process virtual address 0x7f3f8e600000 [ 4064.977306] Soft offlining pfn 0x4d7200 at process virtual address 0x7f3f8e400000 [ 4064.980970] Soft offlining pfn 0x205400 at process virtual address 0x7f3f8e600000 [ 4064.985592] Soft offlining pfn 0x4d7000 at process virtual address 0x7f3f8e400000 [ 4064.989638] Soft offlining pfn 0x505400 at process virtual address 0x7f3f8e600000 [ 4064.995049] Soft offlining pfn 0x4e6c00 at process virtual adoft offlining pfn 0x4e6e00 at process virtual address 0x7f3f8e600000 [ 4065.497490] soft offline: 0x4e6e00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4065.501403] Soft offlining pfn 0x1f5800 at process virtual address 0x7f3f8e400000 [ 4065.504506] Soft offlining pfn 0x244200 at process virtual address 0x7f3f8e600000 [ 4065.506330] soft offline: 0x244200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4065.510208] Soft offlining pfn 0x1f5a00 at process virtual address 0x7f3f8e400000 [ 4065.512917] Soft offlining pfn 0x4e6e00 at process virtual address 0x7f3f8e600000 [ 4065.517358] Soft offlining pfn 0x244200 at process virtual address 0x7f3f8e400000 [ 4065.518603] soft offline: 0x244200: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4065.523947] Soft offlining pfn 0x4bf800 at process virtual address 0x7f3f8e400000 [ 4065.527348] Soft offlining pfn 0x505600 at process virtual address 0x7f3f8e600000 [ 4065.531451] Soft offlining p[ 4066.035040] Soft offlining pfn 0x50a800 at process virtual address 0x7f3f8e400000 [ 4066.039193] Soft offlining pfn 0x4bfa00 at process virtual address 0x7f3f8e600000 [ 4066.045227] Soft offlining pfn 0x225400 at process virtual address 0x7f3f8e400000 [ 4066.049249] Soft offlining pfn 0x225600 at process virtual address 0x7f3f8e600000 [ 4066.056408] Soft offlining pfn 0x1d6000 at process virtual address 0x7f3f8e400000 [ 4066.060037] Soft offlining pfn 0x4fd800 at process virtual address 0x7f3f8e600000 [ 4066.065775] Soft offlining pfn 0x244200 at process virtual address 0x7f3f8e400000 
[ 4066.070160] Soft offlining pfn 0x4fda00 at process virtual address 0x7f3f8e600000 [ 4066.075941] Soft offlining pfn 0x1d6200 at process virtual address 0x7f3f8e400000 [ 4066.079950] Soft offlining pfn 0x50aa00 at process virtual address 0x7f3f8e600000 [ 4066.082012] soft offline: 0x50aa00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4066.088168] Soft offlining pfn 0x272e00 at process virtual address 0x7f3f8e400000 [ 4066.089555] soft offline: 0x272e00: hugocess virtual address 0x7f3f8e400000 [ 4066.493040] Soft offlining pfn 0x4df000 at process virtual address 0x7f3f8e600000 [ 4066.494482] soft offline: 0x4df000: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4066.498336] Soft offlining pfn 0x215400 at process virtual address 0x7f3f8e400000 [ 4066.501481] Soft offlining pfn 0x272e00 at process virtual address 0x7f3f8e600000 [ 4066.506000] Soft offlining pfn 0x264c00 at process virtual address 0x7f3f8e400000 [ 4066.509395] Soft offlining pfn 0x215600 at process virtual address 0x7f3f8e600000 [ 4066.514974] Soft offlining pfn 0x4df000 at process virtual address 0x7f3f8e400000 [ 4066.518464] Soft offlining pfn 0x264e00 at process virtual address 0x7f3f8e600000 [ 4066.520681] soft offline: 0x264e00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4066.526129] Soft offlining pfn 0x4df200 at process virtual address 0x7f3f8e400000 [ 4066.527393] soft offline: 0x4df200: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4066.531282] Soft offlining pfn 0x264e00 at process virtual address 0x7f3f8e400000 [ 4066.534390] Soft offlining pfn 0x254000 at process virtual address 0x7f3f8e6oft offline: 0x50aa00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4067.039371] Soft offlining pfn 0x282a00 at process virtual address 0x7f3f8e400000 [ 4067.042477] Soft offlining pfn 0x254200 at process virtual address 0x7f3f8e600000 [ 4067.048484] Soft offlining pfn 0x1e6200 at process virtual address 0x7f3f8e400000 [ 4067.052295] Soft offlining pfn 0x1e6000 at process virtual address 0x7f3f8e600000 [ 4067.056864] Soft offlining pfn 0x234a00 at process virtual address 0x7f3f8e400000 [ 4067.059809] Soft offlining pfn 0x4df200 at process virtual address 0x7f3f8e600000 [ 4067.064817] Soft offlining pfn 0x234800 at process virtual address 0x7f3f8e400000 [ 4067.066061] soft offline: 0x234800: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4067.069926] Soft offlining pfn 0x4c7800 at process virtual address 0x7f3f8e400000 [ 4067.073226] Soft offlining pfn 0x50aa00 at process virtual address 0x7f3f8e600000 [ 4067.078160] Soft offlining pfn 0x205800 at process virtual address 0x7f3f8e400000 [ 4067.081450] Soft offlining pfn 0x205a00n 0x4f5c00 at process virtual address 0x7f3f8e600000 [ 4067.484195] soft offline: 0x4f5c00: hugepage isolation failed, page count 2, type 0x57ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=1|zone=2|lastcpupid=0x1fffff) [ 4067.489733] Soft offlining pfn 0x244600 at process virtual address 0x7f3f8e400000 [ 
4067.491385] soft offline: 0x244600: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4067.495410] Soft offlining pfn 0x234800 at process virtual address 0x7f3f8e400000 [ 4067.498256] Soft offlining pfn 0x4f5e00 at process virtual address 0x7f3f8e600000 [ 4067.502229] Soft offlining pfn 0x244600 at process virtual address 0x7f3f8e400000 [ 4067.505505] Soft offlining pfn 0x1f5c00 at process virtual address 0x7f3f8e600000 [ 4067.511504] Soft offlining pfn 0x1f5e00 at process virtual address 0x7f3f8e400000 [ 4067.514480] Soft offlining pfn 0x4f5c00 at process virtual address 0x7f3f8e600000 [ 4067.520711] Soft offlining pfn 0x225a00 at process virtual address 0x7f3f8e400000 [ 4067.522377] soft offline: 0x225a00: hugepage isolation failed, page count 2, type 0x17ffffc003000e(referenced|uptodate|dirty|head|mappedtodisk|node=0|zone=2|lastcpupid=0x1fffff) [ 4067.526166] Soft offlining pfn 0x22580[ 4068.106230] LTP: starting mprotect02 [ 4068.211237] LTP: starting mprotect03 [ 4068.288431] LTP: starting mprotect04 [ 4068.366055] LTP: starting pkey01 [ 4077.618611] pkey01 (160325): drop_caches: 3 [ 4077.816240] LTP: starting mq_notify01 [ 4078.037831] LTP: starting mq_notify02 [ 4078.113151] LTP: starting mq_open01 [ 4078.305897] LTP: starting mq_timedreceive01 [ 4078.558433] LTP: starting mq_timedsend01 [ 4078.829963] LTP: starting mq_unlink01 [ 4078.933808] LTP: starting mremap01 [ 4079.424141] LTP: starting mremap02 [ 4079.497832] LTP: starting mremap03 [ 4079.574585] LTP: starting mremap04 [ 4079.665502] LTP: starting mremap05 [ 4079.739686] LTP: starting msgctl01 [ 4079.866546] LTP: starting msgctl02 [ 4079.983290] LTP: starting msgctl03 [ 4080.083495] LTP: starting msgctl04 [ 4080.209286] LTP: starting msgctl05 [ 4080.278471] LTP: starting msgctl06 [ 4080.368565] LTP: starting msgstress01 [ 4086.849712] LTP: starting msgstress02 [ 4088.029567] LTP: starting msgctl12 [ 4088.110694] LTP: starting msgget01 [ 4088.249109] LTP: starting msgget02 [ 4088.396521] LTP: starting msgget03 [ 4091.512609] LTP: starting msgget04 [ 4091.654952] LTP: starting msgget05 [ 4091.744213] LTP: starting msgrcv01 [ 4091.832362] LTP: starting msgrcv02 [ 4092.016920] LTP: starting msgrcv03 [ 4092.117545] LTP: starting msgrcv05 [ 4092.237993] LTP: starting msgrcv06 [ 4092.337912] LTP: starting msgrcv07 [ 4092.429350] LTP: starting msgrcv08 [ 4092.518603] LTP: starting msgsnd01 [ 4092.609134] LTP: starting msgsnd02 [ 4092.711496] LTP: starting msgsnd05 [ 4092.810972] LTP: starting msgsnd06 [ 4092.909336] LTP: starting msync01 [ 4093.010928] LTP: starting msync02 [ 4093.096038] LTP: starting msync03 [ 4093.188955] LTP: starting msync04 [ 4093.272107] loop0: detected capacity change from 0 to 614400 [ 4093.285929] /dev/zero: Can't open blockdev [ 4093.578066] /dev/zero: Can't open blockdev [ 4093.652944] /dev/zero: Can't open blockdev [ 4093.726401] /dev/zero: Can't open blockdev [ 4094.034616] /dev/zero: Can't open blockdev [ 4094.952352] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4094.993662] EXT4-fs mount: 4 callbacks suppressed [ 4094.993680] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4095.104030] EXT4-fs (loop0): unmounting filesystem. [ 4097.050476] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4097.109962] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4097.261540] EXT4-fs (loop0): unmounting filesystem. 
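The long run of "Soft offlining pfn 0x... at process virtual address 0x..." and "soft offline: 0x...: hugepage isolation failed, page count 2" messages above comes from the kernel's soft-offline injection path: a privileged process asks for a mapped hugepage to be migrated away and retired, and the attempt fails (and is retried) whenever the hugepage still holds an extra reference. A minimal sketch of the madvise(MADV_SOFT_OFFLINE) call that produces such lines is below; the anonymous 2 MiB hugepage and the mapping details are illustrative assumptions, not values taken from this log.

```c
/* Hedged sketch: soft-offline a hugepage-backed mapping via madvise().
 * Requires CAP_SYS_ADMIN and CONFIG_MEMORY_FAILURE; the 2 MiB hugepage
 * size and MAP_HUGETLB usage are assumptions for illustration only. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101   /* value from the kernel UAPI headers */
#endif

int main(void)
{
    size_t len = 2 * 1024 * 1024;   /* one 2 MiB hugepage (assumed size) */

    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return EXIT_FAILURE;
    }

    *(volatile char *)p = 1;        /* fault the hugepage in */

    /* Ask the kernel to migrate the contents away and retire the backing
     * page; dmesg then reports
     * "Soft offlining pfn 0x... at process virtual address 0x...". */
    if (madvise(p, len, MADV_SOFT_OFFLINE) != 0)
        perror("madvise(MADV_SOFT_OFFLINE)");

    munmap(p, len);
    return 0;
}
```

The two alternating virtual addresses in the log (0x7f3f8e400000 and 0x7f3f8e600000) suggest the test walks a two-hugepage mapping and soft-offlines each half in turn, though the log itself does not name the test doing so.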
[ 4097.835799] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4097.967188] EXT4-fs (loop0): unmounting filesystem. [ 4098.515631] XFS (loop0): Mounting V5 Filesystem [ 4098.581679] XFS (loop0): Ending clean mount [ 4098.720571] XFS (loop0): Unmounting Filesystem [ 4099.068743] LTP: starting munlock01 [ 4099.167389] LTP: starting munlock02 [ 4099.236955] LTP: starting munlockall01 [ 4099.293602] LTP: starting munmap01 [ 4099.362973] LTP: starting munmap02 [ 4099.424958] LTP: starting munmap03 [ 4099.482649] LTP: starting nanosleep01 [ 4108.377768] LTP: starting nanosleep02 [ 4109.424002] LTP: starting nanosleep04 [ 4109.520244] LTP: starting name_to_handle_at01 [ 4109.645610] LTP: starting name_to_handle_at02 [ 4109.733672] LTP: starting nftw01 [ 4110.049336] LTP: starting nftw6401 [ 4110.322147] LTP: starting nice01 [ 4110.413124] LTP: starting nice02 [ 4110.501145] LTP: starting nice03 [ 4110.588951] LTP: starting nice04 [ 4110.674056] LTP: starting open01 [ 4110.777719] LTP: starting open01A (symlink01 -T open01) [ 4110.878455] LTP: starting open02 [ 4110.974690] LTP: starting open03 [ 4111.055685] LTP: starting open04 [ 4112.968976] LTP: starting open06 [ 4113.051700] LTP: starting open07 [ 4113.149430] LTP: starting open08 [ 4113.262447] LTP: starting open09 [ 4113.354433] LTP: starting open10 [ 4113.556493] LTP: starting open11 [ 4113.708729] LTP: starting open12 [ 4113.794887] loop0: detected capacity change from 0 to 614400 [ 4114.600466] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4114.647855] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4115.754607] EXT4-fs (loop0): unmounting filesystem. [ 4115.872773] LTP: starting open13 [ 4115.946441] LTP: starting open14 [ 4116.866254] LTP: starting openat01 [ 4116.966795] LTP: starting openat02 [ 4117.104830] LTP: starting openat03 [ 4117.906445] LTP: starting openat201 [ 4118.023605] LTP: starting openat202 [ 4118.105738] LTP: starting openat203 [ 4118.184236] LTP: starting open_by_handle_at01 [ 4118.279563] LTP: starting open_by_handle_at02 [ 4118.366807] LTP: starting open_tree01 [ 4118.441876] loop0: detected capacity change from 0 to 614400 [ 4118.453577] /dev/zero: Can't open blockdev [ 4118.528989] /dev/zero: Can't open blockdev [ 4118.603018] /dev/zero: Can't open blockdev [ 4118.678307] /dev/zero: Can't open blockdev [ 4118.813126] /dev/zero: Can't open blockdev [ 4119.754352] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4119.794462] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4119.835256] EXT4-fs (loop0): unmounting filesystem. [ 4121.838273] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4121.898635] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4121.942729] EXT4-fs (loop0): unmounting filesystem. [ 4122.509375] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4122.557947] EXT4-fs (loop0): unmounting filesystem. 
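The recurring "EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem" and "mounting ext3 file system using the ext4 subsystem" lines appear because this kernel services ext2 and ext3 mounts with the ext4 driver; the tests that exercise multiple filesystems cycle a loop-device image through ext2, ext3, ext4 and XFS, producing the mount/unmount pairs seen throughout this section. A hedged sketch of the mount(2) call behind one such pair; the device and mount-point paths are placeholders, not values from the log.

```c
/* Hedged sketch: mounting an ext2-formatted loop device on a kernel where
 * the ext4 driver also handles the ext2/ext3 types. Paths are placeholders. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* With ext4 registered for the "ext2" type, dmesg reports:
     * "EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem" */
    if (mount("/dev/loop0", "/mnt/test", "ext2", 0, NULL) != 0) {
        perror("mount");
        return 1;
    }

    /* Unmounting is logged as "EXT4-fs (loop0): unmounting filesystem." */
    if (umount("/mnt/test") != 0)
        perror("umount");
    return 0;
}
```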
[ 4123.711598] XFS (loop0): Mounting V5 Filesystem [ 4123.789334] XFS (loop0): Ending clean mount [ 4123.820493] XFS (loop0): Unmounting Filesystem [ 4124.191924] LTP: starting open_tree02 [ 4124.263682] loop0: detected capacity change from 0 to 614400 [ 4124.273593] /dev/zero: Can't open blockdev [ 4124.347862] /dev/zero: Can't open blockdev [ 4124.425333] /dev/zero: Can't open blockdev [ 4124.498381] /dev/zero: Can't open blockdev [ 4124.630485] /dev/zero: Can't open blockdev [ 4125.524264] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4125.577946] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4125.609370] EXT4-fs (loop0): unmounting filesystem. [ 4127.826053] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4127.895801] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4127.921837] EXT4-fs (loop0): unmounting filesystem. [ 4128.476909] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4128.502337] EXT4-fs (loop0): unmounting filesystem. [ 4128.898846] XFS (loop0): Mounting V5 Filesystem [ 4128.963041] XFS (loop0): Ending clean mount [ 4128.997264] XFS (loop0): Unmounting Filesystem [ 4129.357922] LTP: starting mincore01 [ 4129.467271] LTP: starting mincore02 [ 4129.539964] LTP: starting mincore03 [ 4129.617934] LTP: starting mincore04 [ 4129.724033] LTP: starting madvise01 [ 4129.830913] Injecting memory failure for pfn 0x18a3dc at process virtual address 0x7fe95562c000 [ 4129.835973] Memory failure: 0x18a3dc: recovery action for clean LRU page: Recovered [ 4129.837088] Injecting memory failure for pfn 0x1842b9 at process virtual address 0x7fe95562d000 [ 4129.838329] Memory failure: 0x1842b9: recovery action for clean LRU page: Recovered [ 4129.839439] Injecting memory failure for pfn 0x1892e1 at process virtual address 0x7fe95562e000 [ 4129.840602] Memory failure: 0x1892e1: recovery action for clean LRU page: Recovered [ 4129.841618] Injecting memory failure for pfn 0x190b13 at process virtual address 0x7fe95562f000 [ 4129.842717] Memory failure: 0x190b13: recovery action for clean LRU page: Recovered [ 4129.843775] Injecting memory failure for pfn 0x18b33f at process virtual address 0x7fe955630000 [ 4129.844897] Memory failure: 0x18b33f: recovery action for clean LRU page: Recovered [ 4129.845854] Injecting memory failure for pfn 0x252801 at process virtual address 0x7fe955631000 [ 4129.846995] Memory failure: 0x252801: recovery action for clean LRU page: Recovered [ 4129.848369] Injecting memory failure for pfn 0x1bfc23 at process virtual address 0x7fe955632000 [ 4129.849593] Memory failure: 0x1bfc23: recovery action for clean LRU page: Recovered [ 4129.850818] Injecting memory failure for pfn 0x19211f at process virtual address 0x7fe955633000 [ 4129.851996] Memory failure: 0x19211f: recovery action for clean LRU page: Recovered [ 4129.852962] Injecting memory failure for pfn 0x1b6ae0 at process virtual address 0x7fe955634000 [ 4129.854075] Memory failure: 0x1b6ae0: recovery action for clean LRU page: Recovered [ 4129.855075] Injecting memory failure for pfn 0x19e2e2 at process virtual address 0x7fe955635000 [ 4129.856231] Memory failure: 0x19e2e2: recovery action for clean LRU page: Recovered [ 4129.876348] LTP: starting madvise02 [ 4130.033668] LTP: starting madvise05 [ 4130.107998] LTP: starting madvise07 [ 4130.169795] Injecting memory failure for pfn 0x4a0341 at process virtual address 0x7f02eae67000 [ 4130.172389] Memory failure: 0x4a0341: recovery 
action for dirty LRU page: Recovered [ 4130.173259] MCE: Killing madvise07:161025 due to hardware memory corruption fault at 7f02eae67000 [ 4130.219271] LTP: starting madvise08 [ 4130.354765] LTP: starting madvise09 [ 4130.431211] LTP: starting madvise10 [ 4130.543911] LTP: starting newuname01 [ 4130.610346] LTP: starting pathconf01 [ 4130.678477] LTP: starting pause01 [ 4130.780493] LTP: starting pause02 [ 4130.842306] LTP: starting pause03 [ 4130.901041] LTP: starting personality01 [ 4131.116932] LTP: starting personality02 [ 4131.184249] LTP: starting pidfd_getfd01 [ 4131.279883] LTP: starting pidfd_getfd02 [ 4131.379635] LTP: starting pidfd_open01 [ 4131.461574] LTP: starting pidfd_open02 [ 4131.534405] LTP: starting pidfd_open03 [ 4131.614467] LTP: starting pidfd_open04 [ 4131.695439] LTP: starting pidfd_send_signal01 [ 4131.780797] LTP: starting pidfd_send_signal02 [ 4131.867865] LTP: starting pidfd_send_signal03 [ 4131.964325] LTP: starting pipe01 [ 4132.041008] LTP: starting pipe02 [ 4132.095463] LTP: starting pipe03 [ 4132.156620] LTP: starting pipe04 [ 4132.223056] LTP: starting pipe05 [ 4132.289529] LTP: starting pipe06 [ 4132.491416] LTP: starting pipe07 [ 4132.802382] LTP: starting pipe08 [ 4132.864027] LTP: starting pipe09 [ 4132.928724] LTP: starting pipe10 [ 4132.988742] LTP: starting pipe11 [ 4133.276132] LTP: starting pipe12 [ 4133.356485] LTP: starting pipe13 [ 4134.271640] LTP: starting pipe2_01 [ 4134.366370] LTP: starting pipe2_02 [ 4134.619849] LTP: starting pipe2_04 [ 4134.717497] LTP: starting pivot_root01 [ 4134.891479] LTP: starting poll01 [ 4134.970915] LTP: starting poll02 [ 4143.617842] LTP: starting ppoll01 [ 4144.003923] LTP: starting prctl01 [ 4144.083674] LTP: starting prctl02 [ 4144.166894] LTP: starting prctl03 [ 4144.264994] LTP: starting prctl04 [ 4144.451113] LTP: starting prctl05 [ 4144.526116] LTP: starting prctl06 [ 4144.611327] loop0: detected capacity change from 0 to 614400 [ 4145.138058] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4145.179026] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4145.693454] EXT4-fs (loop0): unmounting filesystem. [ 4145.837862] LTP: starting prctl07 [ 4145.947860] LTP: starting prctl08 [ 4146.033023] LTP: starting prctl09 [ 4154.847562] LTP: starting pread01 [ 4154.902111] LTP: starting pread01_64 [ 4154.969461] LTP: starting pread02 [ 4155.056178] LTP: starting pread02_64 [ 4155.144557] LTP: starting preadv01 [ 4155.261221] LTP: starting preadv01_64 [ 4155.350173] LTP: starting preadv02 [ 4155.425642] LTP: starting preadv02_64 [ 4155.526541] LTP: starting preadv03 [ 4155.579663] loop0: detected capacity change from 0 to 614400 [ 4155.591834] /dev/zero: Can't open blockdev [ 4155.666701] /dev/zero: Can't open blockdev [ 4155.739600] /dev/zero: Can't open blockdev [ 4155.818060] /dev/zero: Can't open blockdev [ 4155.963009] /dev/zero: Can't open blockdev [ 4156.638575] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4156.686418] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4156.753896] EXT4-fs (loop0): unmounting filesystem. [ 4158.728834] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4158.789088] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4158.895442] EXT4-fs (loop0): unmounting filesystem. [ 4159.526516] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4159.623504] EXT4-fs (loop0): unmounting filesystem. 
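The "Injecting memory failure for pfn 0x... at process virtual address 0x..." / "Memory failure: 0x...: recovery action for clean LRU page: Recovered" pairs during madvise01, and the "recovery action for dirty LRU page" plus "MCE: Killing madvise07:161025 due to hardware memory corruption fault" lines during madvise07, correspond to poison injection with madvise(MADV_HWPOISON): the target page is handled as if an uncorrected memory error had been reported against it, and a process that still needs the poisoned data is killed with SIGBUS. A minimal sketch, assuming CAP_SYS_ADMIN and CONFIG_MEMORY_FAILURE; the page size and the SIGBUS expectation are assumptions, not measurements from this machine.

```c
/* Hedged sketch: inject a hardware-poison event on one page of an
 * anonymous mapping via madvise(MADV_HWPOISON). Illustrative only. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_HWPOISON
#define MADV_HWPOISON 100   /* value from the kernel UAPI headers */
#endif

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char *p = mmap(NULL, (size_t)page, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(p, 0xaa, (size_t)page);      /* dirty the page first */

    /* dmesg: "Injecting memory failure for pfn 0x... at process virtual
     * address 0x..." followed by the memory-failure recovery action. */
    if (madvise(p, (size_t)page, MADV_HWPOISON) != 0) {
        perror("madvise(MADV_HWPOISON)");
        return 1;
    }

    /* Reading p[0] now would normally be answered with SIGBUS, which is
     * how madvise07 in the log above ends up being killed. */
    return 0;
}
```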
[ 4160.058895] XFS (loop0): Mounting V5 Filesystem [ 4160.126764] XFS (loop0): Ending clean mount [ 4160.273266] XFS (loop0): Unmounting Filesystem [ 4160.606719] LTP: starting preadv03_64 [ 4160.701179] loop0: detected capacity change from 0 to 614400 [ 4160.710966] /dev/zero: Can't open blockdev [ 4160.791784] /dev/zero: Can't open blockdev [ 4160.866920] /dev/zero: Can't open blockdev [ 4160.946537] /dev/zero: Can't open blockdev [ 4161.095821] /dev/zero: Can't open blockdev [ 4161.752245] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4161.807334] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4161.882914] EXT4-fs (loop0): unmounting filesystem. [ 4163.373010] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4163.443018] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4163.557024] EXT4-fs (loop0): unmounting filesystem. [ 4164.170192] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4164.274858] EXT4-fs (loop0): unmounting filesystem. [ 4165.108106] XFS (loop0): Mounting V5 Filesystem [ 4165.173348] XFS (loop0): Ending clean mount [ 4165.356146] XFS (loop0): Unmounting Filesystem [ 4165.733146] LTP: starting preadv203 [ 4165.822924] loop0: detected capacity change from 0 to 614400 [ 4165.834222] /dev/zero: Can't open blockdev [ 4165.916724] /dev/zero: Can't open blockdev [ 4166.001761] /dev/zero: Can't open blockdev [ 4166.089519] /dev/zero: Can't open blockdev [ 4166.222097] /dev/zero: Can't open blockdev [ 4167.163457] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4167.202136] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4177.194472] preadv203 (161533): drop_caches: 3 [ 4177.469956] preadv203 (161538): drop_caches: 3 [ 4177.635216] preadv203 (161538): drop_caches: 3 [ 4177.793381] preadv203 (161538): drop_caches: 3 [ 4177.951715] preadv203 (161538): drop_caches: 3 [ 4178.113452] preadv203 (161538): drop_caches: 3 [ 4178.277773] preadv203 (161538): drop_caches: 3 [ 4178.441345] preadv203 (161538): drop_caches: 3 [ 4194.203166] EXT4-fs (loop0): unmounting filesystem. [ 4197.404357] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4197.465016] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4209.965358] preadv203 (161556): drop_caches: 3 [ 4210.431458] preadv203 (161563): drop_caches: 3 [ 4218.492560] EXT4-fs (loop0): unmounting filesystem. [ 4220.224484] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. 
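The repeated "preadv203 (161533): drop_caches: 3" lines above are the kernel noting that the named process wrote 3 to /proc/sys/vm/drop_caches, discarding the clean page cache plus reclaimable dentry and inode caches between I/O passes (the later readahead02 entries write 1, which drops the page cache only). A minimal sketch of the write that generates one such line:

```c
/* Hedged sketch: what produces the "<comm> (<pid>): drop_caches: 3" lines.
 * Writing "3" to /proc/sys/vm/drop_caches frees clean page cache plus
 * reclaimable slab objects (dentries, inodes); requires root. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
    if (fd < 0) {
        perror("open /proc/sys/vm/drop_caches");
        return 1;
    }

    /* The kernel logs the writer's comm and pid, e.g.
     * "preadv203 (161603): drop_caches: 3". */
    if (write(fd, "3", 1) != 1)
        perror("write");

    close(fd);
    return 0;
}
```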
[ 4225.585722] preadv203 (161578): drop_caches: 3 [ 4225.845363] preadv203 (161581): drop_caches: 3 [ 4226.005177] preadv203 (161581): drop_caches: 3 [ 4226.158096] preadv203 (161581): drop_caches: 3 [ 4226.310657] preadv203 (161581): drop_caches: [-- MARK -- Fri Feb 3 06:55:00 2023] 3 [ 4226.462440] preadv203 (161581): drop_caches: 3 [ 4226.614812] preadv203 (161581): drop_caches: 3 [ 4226.766948] preadv203 (161581): drop_caches: 3 [ 4226.921556] preadv203 (161581): drop_caches: 3 [ 4227.074353] preadv203 (161581): drop_caches: 3 [ 4227.226083] preadv203 (161581): drop_caches: 3 [ 4227.379543] preadv203 (161581): drop_caches: 3 [ 4227.532435] preadv203 (161581): drop_caches: 3 [ 4227.685339] preadv203 (161581): drop_caches: 3 [ 4227.837754] preadv203 (161581): drop_caches: 3 [ 4227.990354] preadv203 (161581): drop_caches: 3 [ 4228.143011] preadv203 (161581): drop_caches: 3 [ 4228.297352] preadv203 (161581): drop_caches: 3 [ 4228.446256] preadv203 (161581): drop_caches: 3 [ 4228.703840] preadv203 (161581): drop_caches: 3 [ 4228.873674] preadv203 (161581): drop_caches: 3 [ 4229.029735] preadv203 (161581): drop_caches: 3 [ 4229.193728] preadv203 (161581): drop_caches: 3 [ 4237.062783] EXT4-fs (loop0): unmounting filesystem. [ 4238.389224] XFS (loop0): Mounting V5 Filesystem [ 4238.455440] XFS (loop0): Ending clean mount [ 4240.409696] preadv203 (161600): drop_caches: 3 [ 4240.677800] preadv203 (161603): drop_caches: 3 [ 4240.834866] preadv203 (161603): drop_caches: 3 [ 4240.989860] preadv203 (161603): drop_caches: 3 [ 4241.144605] preadv203 (161603): drop_caches: 3 [ 4241.299641] preadv203 (161603): drop_caches: 3 [ 4241.454814] preadv203 (161603): drop_caches: 3 [ 4241.609837] preadv203 (161603): drop_caches: 3 [ 4241.746847] preadv203 (161603): drop_caches: 3 [ 4241.904583] preadv203 (161603): drop_caches: 3 [ 4242.062320] preadv203 (161603): drop_caches: 3 [ 4242.219532] preadv203 (161603): drop_caches: 3 [ 4242.375741] preadv203 (161603): drop_caches: 3 [ 4242.531163] preadv203 (161603): drop_caches: 3 [ 4242.687025] preadv203 (161603): drop_caches: 3 [ 4242.842846] preadv203 (161603): drop_caches: 3 [ 4242.997984] preadv203 (161603): drop_caches: 3 [ 4243.153438] preadv203 (161603): drop_caches: 3 [ 4243.307613] preadv203 (161603): drop_caches: 3 [ 4243.461879] preadv203 (161603): drop_caches: 3 [ 4243.616915] preadv203 (161603): drop_caches: 3 [ 4243.771090] preadv203 (161603): drop_caches: 3 [ 4243.925801] preadv203 (161603): drop_caches: 3 [ 4244.079998] preadv203 (161603): drop_caches: 3 [ 4244.234654] preadv203 (161603): drop_caches: 3 [ 4244.388629] preadv203 (161603): drop_caches: 3 [ 4244.543309] preadv203 (161603): drop_caches: 3 [ 4244.699369] preadv203 (161603): drop_caches: 3 [ 4244.853786] preadv203 (161603): drop_caches: 3 [ 4245.007393] preadv203 (161603): drop_caches: 3 [ 4245.166399] preadv203 (161603): drop_caches: 3 [ 4245.321558] preadv203 (161603): drop_caches: 3 [ 4245.475139] preadv203 (161603): drop_caches: 3 [ 4245.628582] preadv203 (161603): drop_caches: 3 [ 4245.781877] preadv203 (161603): drop_caches: 3 [ 4245.935920] preadv203 (161603): drop_caches: 3 [ 4246.089096] preadv203 (161603): drop_caches: 3 [ 4246.242957] preadv203 (161603): drop_caches: 3 [ 4246.396959] preadv203 (161603): drop_caches: 3 [ 4246.551013] preadv203 (161603): drop_caches: 3 [ 4246.704661] preadv203 (161603): drop_caches: 3 [ 4246.858492] preadv203 (161603): drop_caches: 3 [ 4247.012595] preadv203 (161603): drop_caches: 3 [ 4247.166704] preadv203 (161603): drop_caches: 3 [ 4247.320092] 
preadv203 (161603): drop_caches: 3 [ 4247.473507] preadv203 (161603): drop_caches: 3 [ 4247.627057] preadv203 (161603): drop_caches: 3 [ 4247.780473] preadv203 (161603): drop_caches: 3 [ 4247.934082] preadv203 (161603): drop_caches: 3 [ 4248.087164] preadv203 (161603): drop_caches: 3 [ 4248.240823] preadv203 (161603): drop_caches: 3 [ 4248.396126] preadv203 (161603): drop_caches: 3 [ 4248.550578] preadv203 (161603): drop_caches: 3 [ 4248.704578] preadv203 (161603): drop_caches: 3 [ 4248.858211] preadv203 (161603): drop_caches: 3 [ 4249.011628] preadv203 (161603): drop_caches: 3 [ 4249.165244] preadv203 (161603): drop_caches: 3 [ 4249.318607] preadv203 (161603): drop_caches: 3 [ 4249.471968] preadv203 (161603): drop_caches: 3 [ 4249.626338] preadv203 (161603): drop_caches: 3 [ 4249.780773] preadv203 (161603): drop_caches: 3 [ 4249.934621] preadv203 (161603): drop_caches: 3 [ 4250.088091] preadv203 (161603): drop_caches: 3 [ 4250.243882] preadv203 (161603): drop_caches: 3 [ 4250.399340] preadv203 (161603): drop_caches: 3 [ 4250.553606] preadv203 (161603): drop_caches: 3 [ 4250.707256] preadv203 (161603): drop_caches: 3 [ 4250.863905] preadv203 (161603): drop_caches: 3 [ 4251.018879] preadv203 (161603): drop_caches: 3 [ 4251.173228] preadv203 (161603): drop_caches: 3 [ 4251.326497] preadv203 (161603): drop_caches: 3 [ 4251.480402] preadv203 (161603): drop_caches: 3 [ 4251.633913] preadv203 (161603): drop_caches: 3 [ 4251.787368] preadv203 (161603): drop_caches: 3 [ 4251.940789] preadv203 (161603): drop_caches: 3 [ 4252.094351] preadv203 (161603): drop_caches: 3 [ 4252.247755] preadv203 (161603): drop_caches: 3 [ 4252.400839] preadv203 (161603): drop_caches: 3 [ 4252.560590] preadv203 (161603): drop_caches: 3 [ 4252.727601] preadv203 (161603): drop_caches: 3 [ 4252.915656] preadv203 (161603): drop_caches: 3 [ 4253.072777] preadv203 (161603): drop_caches: 3 [ 4253.247194] preadv203 (161603): drop_caches: 3 [ 4253.405309] preadv203 (161603): drop_caches: 3 [ 4259.244423] XFS (loop0): Unmounting Filesystem [ 4259.973964] LTP: starting preadv203_64 [ 4260.149640] loop0: detected capacity change from 0 to 614400 [ 4260.170338] /dev/zero: Can't open blockdev [ 4260.531355] /dev/zero: Can't open blockdev [ 4260.623767] /dev/zero: Can't open blockdev [ 4260.725298] /dev/zero: Can't open blockdev [ 4260.978501] /dev/zero: Can't open blockdev [ 4262.024124] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4262.064676] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. 
[ 4271.273150] preadv203_64 (161637): drop_caches: 3 [ 4271.425306] preadv203_64 (161642): drop_caches: 3 [ 4271.561147] preadv203_64 (161642): drop_caches: 3 [ 4271.693767] preadv203_64 (161642): drop_caches: 3 [ 4271.825895] preadv203_64 (161642): drop_caches: 3 [ 4271.957863] preadv203_64 (161642): drop_caches: 3 [ 4272.090565] preadv203_64 (161642): drop_caches: 3 [ 4272.223332] preadv203_64 (161642): drop_caches: 3 [ 4272.355634] preadv203_64 (161642): drop_caches: 3 [ 4272.519354] preadv203_64 (161642): drop_caches: 3 [ 4272.694851] preadv203_64 (161642): drop_caches: 3 [ 4272.846596] preadv203_64 (161642): drop_caches: 3 [ 4272.997643] preadv203_64 (161642): drop_caches: 3 [ 4273.148510] preadv203_64 (161642): drop_caches: 3 [ 4273.299449] preadv203_64 (161642): drop_caches: 3 [ 4273.450799] preadv203_64 (161642): drop_caches: 3 [ 4273.602876] preadv203_64 (161642): drop_caches: 3 [ 4273.753629] preadv203_64 (161642): drop_caches: 3 [ 4273.904586] preadv203_64 (161642): drop_caches: 3 [ 4274.055767] preadv203_64 (161642): drop_caches: 3 [ 4274.206624] preadv203_64 (161642): drop_caches: 3 [ 4274.357279] preadv203_64 (161642): drop_caches: 3 [ 4274.507511] preadv203_64 (161642): drop_caches: 3 [ 4274.658834] preadv203_64 (161642): drop_caches: 3 [ 4274.810585] preadv203_64 (161642): drop_caches: 3 [ 4274.961340] preadv203_64 (161642): drop_caches: 3 [ 4275.113927] preadv203_64 (161642): drop_caches: 3 [ 4275.269463] preadv203_64 (161642): drop_caches: 3 [ 4275.423642] preadv203_64 (161642): drop_caches: 3 [ 4275.584329] preadv203_64 (161642): drop_caches: 3 [ 4275.741237] preadv203_64 (161642): drop_caches: 3 [ 4275.896195] preadv203_64 (161642): drop_caches: 3 [ 4276.051369] preadv203_64 (161642): drop_caches: 3 [ 4276.206319] preadv203_64 (161642): drop_caches: 3 [ 4276.361974] preadv203_64 (161642): drop_caches: 3 [ 4276.516990] preadv203_64 (161642): drop_caches: 3 [ 4276.660652] preadv203_64 (161642): drop_caches: 3 [ 4276.793796] preadv203_64 (161642): drop_caches: 3 [ 4276.925909] preadv203_64 (161642): drop_caches: 3 [ 4277.057745] preadv203_64 (161642): drop_caches: 3 [ 4277.222460] preadv203_64 (161642): drop_caches: 3 [ 4277.389825] preadv203_64 (161642): drop_caches: 3 [ 4293.877170] EXT4-fs (loop0): unmounting filesystem. [ 4296.996401] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4297.057409] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4309.480452] preadv203_64 (161661): drop_caches: 3 [ 4309.931665] preadv203_64 (161667): drop_caches: 3 [ 4320.962045] EXT4-fs (loop0): unmounting filesystem. [ 4322.744247] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. 
[ 4328.556689] preadv203_64 (161685): drop_caches: 3 [ 4328.808351] preadv203_64 (161688): drop_caches: 3 [ 4329.003364] preadv203_64 (161688): drop_caches: 3 [ 4329.155492] preadv203_64 (161688): drop_caches: 3 [ 4329.308268] preadv203_64 (161688): drop_caches: 3 [ 4329.462260] preadv203_64 (161688): drop_caches: 3 [ 4329.614481] preadv203_64 (161688): drop_caches: 3 [ 4329.769106] preadv203_64 (161688): drop_caches: 3 [ 4329.924073] preadv203_64 (161688): drop_caches: 3 [ 4330.076469] preadv203_64 (161688): drop_caches: 3 [ 4330.228771] preadv203_64 (161688): drop_caches: 3 [ 4330.383373] preadv203_64 (161688): drop_caches: 3 [ 4330.535621] preadv203_64 (161688): drop_caches: 3 [ 4330.694422] preadv203_64 (161688): drop_caches: 3 [ 4330.847844] preadv203_64 (161688): drop_caches: 3 [ 4331.001912] preadv203_64 (161688): drop_caches: 3 [ 4331.155646] preadv203_64 (161688): drop_caches: 3 [ 4331.307814] preadv203_64 (161688): drop_caches: 3 [ 4331.460831] preadv203_64 (161688): drop_caches: 3 [ 4331.618052] preadv203_64 (161688): drop_caches: 3 [ 4331.770568] preadv203_64 (161688): drop_caches: 3 [ 4331.922811] preadv203_64 (161688): drop_caches: 3 [ 4332.074494] preadv203_64 (161688): drop_caches: 3 [ 4332.226155] preadv203_64 (161688): drop_caches: 3 [ 4332.378926] preadv203_64 (161688): drop_caches: 3 [ 4332.530772] preadv203_64 (161688): drop_caches: 3 [ 4332.683315] preadv203_64 (161688): drop_caches: 3 [ 4332.844770] preadv203_64 (161688): drop_caches: 3 [ 4332.997545] preadv203_64 (161688): drop_caches: 3 [ 4333.151330] preadv203_64 (161688): drop_caches: 3 [ 4333.303683] preadv203_64 (161688): drop_caches: 3 [ 4333.455936] preadv203_64 (161688): drop_caches: 3 [ 4333.610102] preadv203_64 (161688): drop_caches: 3 [ 4333.762467] preadv203_64 (161688): drop_caches: 3 [ 4333.915143] preadv203_64 (161688): drop_caches: 3 [ 4334.067734] preadv203_64 (161688): drop_caches: 3 [ 4334.219989] preadv203_64 (161688): drop_caches: 3 [ 4334.373437] preadv203_64 (161688): drop_caches: 3 [ 4334.525448] preadv203_64 (161688): drop_caches: 3 [ 4334.678265] preadv203_64 (161688): drop_caches: 3 [ 4334.830876] preadv203_64 (161688): drop_caches: 3 [ 4334.982967] preadv203_64 (161688): drop_caches: 3 [ 4335.136429] preadv203_64 (161688): drop_caches: 3 [ 4335.288816] preadv203_64 (161688): drop_caches: 3 [ 4335.440449] preadv203_64 (161688): drop_caches: 3 [ 4335.591981] preadv203_64 (161688): drop_caches: 3 [ 4335.744284] preadv203_64 (161688): drop_caches: 3 [ 4335.897516] preadv203_64 (161688): drop_caches: 3 [ 4336.107135] preadv203_64 (161688): drop_caches: 3 [ 4336.265302] preadv203_64 (161688): drop_caches: 3 [ 4344.529810] EXT4-fs (loop0): unmounting filesystem. 
[ 4345.798664] XFS (loop0): Mounting V5 Filesystem [ 4345.864533] XFS (loop0): Ending clean mount [ 4347.934327] preadv203_64 (161708): drop_caches: 3 [ 4348.208182] preadv203_64 (161711): drop_caches: 3 [ 4348.462067] preadv203_64 (161711): drop_caches: 3 [ 4348.620873] preadv203_64 (161711): drop_caches: 3 [ 4348.779513] preadv203_64 (161711): drop_caches: 3 [ 4348.939753] preadv203_64 (161711): drop_caches: 3 [ 4349.098956] preadv203_64 (161711): drop_caches: 3 [ 4349.260472] preadv203_64 (161711): drop_caches: 3 [ 4349.421163] preadv203_64 (161711): drop_caches: 3 [ 4349.580055] preadv203_64 (161711): drop_caches: 3 [ 4349.740371] preadv203_64 (161711): drop_caches: 3 [ 4349.900056] preadv203_64 (161711): drop_caches: 3 [ 4350.058363] preadv203_64 (161711): drop_caches: 3 [ 4350.216332] preadv203_64 (161711): drop_caches: 3 [ 4350.374596] preadv203_64 (161711): drop_caches: 3 [ 4350.533277] preadv203_64 (161711): drop_caches: 3 [ 4350.692036] preadv203_64 (161711): drop_caches: 3 [ 4350.854169] preadv203_64 (161711): drop_caches: 3 [ 4351.014638] preadv203_64 (161711): drop_caches: 3 [ 4351.175189] preadv203_64 (161711): drop_caches: 3 [ 4351.334099] preadv203_64 (161711): drop_caches: 3 [ 4351.492396] preadv203_64 (161711): drop_caches: 3 [ 4351.651320] preadv203_64 (161711): drop_caches: 3 [ 4351.810598] preadv203_64 (161711): drop_caches: 3 [ 4351.974426] preadv203_64 (161711): drop_caches: 3 [ 4352.133805] preadv203_64 (161711): drop_caches: 3 [ 4352.292840] preadv203_64 (161711): drop_caches: 3 [ 4352.451457] preadv203_64 (161711): drop_caches: 3 [ 4352.609741] preadv203_64 (161711): drop_caches: 3 [ 4352.768444] preadv203_64 (161711): drop_caches: 3 [ 4352.927713] preadv203_64 (161711): drop_caches: 3 [ 4353.086509] preadv203_64 (161711): drop_caches: 3 [ 4353.245238] preadv203_64 (161711): drop_caches: 3 [ 4353.461623] preadv203_64 (161711): drop_caches: 3 [ 4353.619688] preadv203_64 (161711): drop_caches: 3 [ 4353.777944] preadv203_64 (161711): drop_caches: 3 [ 4353.947618] preadv203_64 (161711): drop_caches: 3 [ 4354.140419] preadv203_64 (161711): drop_caches: 3 [ 4354.299063] preadv203_64 (161711): drop_caches: 3 [ 4354.457858] preadv203_64 (161711): drop_caches: 3 [ 4354.616844] preadv203_64 (161711): drop_caches: 3 [ 4354.775457] preadv203_64 (161711): drop_caches: 3 [ 4354.933297] preadv203_64 (161711): drop_caches: 3 [ 4355.091515] preadv203_64 (161711): drop_caches: 3 [ 4355.249469] preadv203_64 (161711): drop_caches: 3 [ 4355.407687] preadv203_64 (161711): drop_caches: 3 [ 4355.566057] preadv203_64 (161711): drop_caches: 3 [ 4355.724355] preadv203_64 (161711): drop_caches: 3 [ 4355.882331] preadv203_64 (161711): drop_caches: 3 [ 4356.041417] preadv203_64 (161711): drop_caches: 3 [ 4356.201126] preadv203_64 (161711): drop_caches: 3 [ 4356.359940] preadv203_64 (161711): drop_caches: 3 [ 4356.518557] preadv203_64 (161711): drop_caches: 3 [ 4356.678247] preadv203_64 (161711): drop_caches: 3 [ 4356.838781] preadv203_64 (161711): drop_caches: 3 [ 4357.001457] preadv203_64 (161711): drop_caches: 3 [ 4357.161647] preadv203_64 (161711): drop_caches: 3 [ 4357.320291] preadv203_64 (161711): drop_caches: 3 [ 4357.478428] preadv203_64 (161711): drop_caches: 3 [ 4357.637146] preadv203_64 (161711): drop_caches: 3 [ 4357.796092] preadv203_64 (161711): drop_caches: 3 [ 4357.953467] preadv203_64 (161711): drop_caches: 3 [ 4358.111736] preadv203_64 (161711): drop_caches: 3 [ 4358.269659] preadv203_64 (161711): drop_caches: 3 [ 4358.427530] preadv203_64 (161711): drop_caches: 3 [ 
4358.586028] preadv203_64 (161711): drop_caches: 3 [ 4358.744491] preadv203_64 (161711): drop_caches: 3 [ 4358.903035] preadv203_64 (161711): drop_caches: 3 [ 4359.061428] preadv203_64 (161711): drop_caches: 3 [ 4359.219630] preadv203_64 (161711): drop_caches: 3 [ 4359.378686] preadv203_64 (161711): drop_caches: 3 [ 4359.537257] preadv203_64 (161711): drop_caches: 3 [ 4359.696090] preadv203_64 (161711): drop_caches: 3 [ 4359.854995] preadv203_64 (161711): drop_caches: 3 [ 4360.012542] preadv203_64 (161711): drop_caches: 3 [ 4360.180697] preadv203_64 (161711): drop_caches: 3 [ 4360.342793] preadv203_64 (161711): drop_caches: 3 [ 4360.506721] preadv203_64 (161711): drop_caches: 3 [ 4360.673367] preadv203_64 (161711): drop_caches: 3 [ 4360.842434] preadv203_64 (161711): drop_caches: 3 [ 4361.006317] preadv203_64 (161711): drop_caches: 3 [ 4361.170312] preadv203_64 (161711): drop_caches: 3 [ 4361.330832] preadv203_64 (161711): drop_caches: 3 [ 4361.491647] preadv203_64 (161711): drop_caches: 3 [ 4361.655768] preadv203_64 (161711): drop_caches: 3 [ 4361.825601] preadv203_64 (161711): drop_caches: 3 [ 4361.995118] preadv203_64 (161711): drop_caches: 3 [ 4362.157869] preadv203_64 (161711): drop_caches: 3 [ 4362.320150] preadv203_64 (161711): drop_caches: 3 [ 4362.483210] preadv203_64 (161711): drop_caches: 3 [ 4362.645417] preadv203_64 (161711): drop_caches: 3 [ 4362.804888] preadv203_64 (161711): drop_caches: 3 [ 4362.963650] preadv203_64 (161711): drop_caches: 3 [ 4363.122578] preadv203_64 (161711): drop_caches: 3 [ 4363.281441] preadv203_64 (161711): drop_caches: 3 [ 4363.441253] preadv203_64 (161711): drop_caches: 3 [ 4363.655254] preadv203_64 (161711): drop_caches: 3 [ 4363.818456] preadv203_64 (161711): drop_caches: 3 [ 4363.978580] preadv203_64 (161711): drop_caches: 3 [ 4364.137380] preadv203_64 (161711): drop_caches: 3 [ 4364.297836] preadv203_64 (161711): drop_caches: 3 [ 4364.457735] preadv203_64 (161711): drop_caches: 3 [ 4364.616931] preadv203_64 (161711): drop_caches: 3 [ 4364.778649] preadv203_64 (161711): drop_caches: 3 [ 4364.938496] preadv203_64 (161711): drop_caches: 3 [ 4365.096571] preadv203_64 (161711): drop_caches: 3 [ 4365.271228] preadv203_64 (161711): drop_caches: 3 [ 4365.460807] preadv203_64 (161711): drop_caches: 3 [ 4365.650108] preadv203_64 (161711): drop_caches: 3 [ 4374.948793] XFS (loop0): Unmounting Filesystem [ 4375.729971] LTP: starting profil01 [ 4380.907654] LTP: starting process_vm_readv01 (process_vm01 -r) [ 4381.104639] LTP: starting process_vm_readv02 [ 4381.230940] LTP: starting process_vm_readv03 [ 4381.663396] LTP: starting process_vm_writev01 (process_vm01 -w) [ 4381.719623] LTP: starting process_vm_writev02 [ 4381.861203] LTP: starting prot_hsymlinks [ 4387.742731] LTP: starting dirtyc0w [ 4388.999909] LTP: starting dirtypipe [ 4389.100162] LTP: starting pselect01 (sh -c "pselect01 || true") [ 4397.928278] LTP: starting pselect01_64 (sh -c "pselect01_64 || true") [ 4406.589385] LTP: starting pselect02 [ 4406.630909] LTP: starting pselect02_64 [ 4406.749369] LTP: starting pselect03 [ 4406.847573] LTP: starting pselect03_64 [ 4406.959995] LTP: starting ptrace01 [ 4407.106873] LTP: starting ptrace02 [ 4407.207869] LTP: starting ptrace03 [ 4407.315263] LTP: starting ptrace04 [ 4407.377839] LTP: starting ptrace05 [ 4407.792017] LTP: starting ptrace07 [ 4409.941949] LTP: starting ptrace08 [ 4410.075796] LTP: starting ptrace09 [ 4410.161362] LTP: starting ptrace10 [ 4410.261340] LTP: starting ptrace11 [ 4410.351547] LTP: starting pwrite01 [ 4410.444131] 
LTP: starting pwrite02 [ 4410.537117] LTP: starting pwrite03 [ 4410.629865] LTP: starting pwrite04 [ 4410.729443] LTP: starting pwrite01_64 [ 4410.827149] LTP: starting pwrite02_64 [ 4410.928997] LTP: starting pwrite03_64 [ 4411.024982] LTP: starting pwrite04_64 [ 4411.110752] LTP: starting pwritev01 [ 4411.204291] LTP: starting pwritev01_64 [ 4411.292211] LTP: starting pwritev02 [ 4411.377436] LTP: starting pwritev02_64 [ 4411.465664] LTP: starting pwritev03 [ 4411.546529] loop0: detected capacity change from 0 to 614400 [ 4411.558858] /dev/zero: Can't open blockdev [ 4411.795172] /dev/zero: Can't open blockdev [ 4411.870254] /dev/zero: Can't open blockdev [ 4411.945239] /dev/zero: Can't open blockdev [ 4412.152936] /dev/zero: Can't open blockdev [ 4413.035391] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4413.073928] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4413.155438] EXT4-fs (loop0): unmounting filesystem. [ 4415.123769] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4415.184040] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4415.306319] EXT4-fs (loop0): unmounting filesystem. [ 4416.000285] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4416.110271] EXT4-fs (loop0): unmounting filesystem. [ 4416.841457] XFS (loop0): Mounting V5 Filesystem [ 4416.905611] XFS (loop0): Ending clean mount [ 4417.051839] XFS (loop0): Unmounting Filesystem [ 4417.429612] LTP: starting pwritev03_64 [ 4417.511146] loop0: detected capacity change from 0 to 614400 [ 4417.523862] /dev/zero: Can't open blockdev [ 4417.601141] /dev/zero: Can't open blockdev [ 4417.679578] /dev/zero: Can't open blockdev [ 4417.755801] /dev/zero: Can't open blockdev [ 4417.892061] /dev/zero: Can't open blockdev [ 4418.532652] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4418.570433] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4418.650309] EXT4-fs (loop0): unmounting filesystem. [ 4419.852936] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4419.926270] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4420.028744] EXT4-fs (loop0): unmounting filesystem. [ 4420.659884] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4420.754936] EXT4-fs (loop0): unmounting filesystem. [ 4422.104694] XFS (loop0): Mounting V5 Filesystem [ 4422.169951] XFS (loop0): Ending clean mount [ 4422.319285] XFS (loop0): Unmounting Filesystem [ 4422.649450] LTP: starting pwritev201 [ 4422.759358] LTP: starting pwritev201_64 [ 4422.866043] LTP: starting pwritev202 [ 4422.954184] LTP: starting pwritev202_64 [ 4423.049999] LTP: starting quotactl01 [ 4423.152081] loop0: detected capacity change from 0 to 614400 [ 4423.590225] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: writeback. [ 4423.781825] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4423.827560] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4424.593211] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4424.655387] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4425.206937] EXT4-fs (loop0): unmounting filesystem. [ 4425.437321] LTP: starting quotactl02 [ 4425.550932] LTP: starting quotactl03 [ 4425.607839] LTP: starting quotactl04 [ 4425.742063] loop0: detected capacity change from 0 to 614400 [ 4426.171346] EXT4-fs (loop0): mounted filesystem with ordered data mode. 
Quota mode: journalled. [ 4426.243689] EXT4-fs (loop0): unmounting filesystem. [ 4426.796021] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: journalled. [ 4426.872077] EXT4-fs (loop0): unmounting filesystem. [ 4427.075708] LTP: starting quotactl05 [ 4427.154966] LTP: starting quotactl06 [ 4427.230277] loop0: detected capacity change from 0 to 614400 [ 4427.608913] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: writeback. [ 4427.734118] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4427.762957] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4428.524879] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4428.554114] EXT4-fs (loop0): re-mounted. Quota mode: writeback. [ 4429.198544] EXT4-fs (loop0): unmounting filesystem. [ 4429.386606] LTP: starting quotactl07 [ 4429.530153] LTP: starting quotactl08 [ 4429.683684] loop0: detected capacity change from 0 to 614400 [ 4430.092055] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: journalled. [ 4430.176650] EXT4-fs (loop0): unmounting filesystem. [ 4430.772125] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: journalled. [ 4430.857209] EXT4-fs (loop0): unmounting filesystem. [ 4431.061609] LTP: starting quotactl09 [ 4431.212901] loop0: detected capacity change from 0 to 614400 [ 4431.635878] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: journalled. [ 4431.727893] EXT4-fs (loop0): unmounting filesystem. [ 4431.869593] LTP: starting read01 [ 4431.974322] LTP: starting read02 [ 4432.093837] LTP: starting read03 [ 4432.181803] LTP: starting read04 [ 4432.263369] LTP: starting readahead01 [ 4432.355380] LTP: starting readahead02 [ 4432.421989] loop0: detected capacity change from 0 to 614400 [ 4432.993003] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4433.031098] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4442.149455] readahead02 (162135): drop_caches: 1 [ 4445.173225] readahead02 (162135): drop_caches: 1 [ 4457.623772] readahead02 (162135): drop_caches: 1 [ 4460.629670] readahead02 (162135): drop_caches: 1 [ 4481.094271] readahead02 (162135): drop_caches: 1 [ 4484.058127] readahead02 (162135): drop_caches: 1 [ 4506.359962] readahead02 (162135): drop_caches: 1 [ 4509.421005] readahead02 (162135): drop_caches: 1 [ 4511.509445] EXT4-fs (loop0): unmounting filesystem. [ 4511.826939] LTP: starting readdir01 [ 4512.036780] LTP: starting readdir21 [ 4512.137413] LTP: starting readlink01A (symlink01 -T readlink01) [ 4512.259195] LTP: starting readlink01 [ 4512.424979] LTP: starting readlink03 [ 4512.560198] LTP: starting readlinkat01 [ 4512.662800] LTP: starting readlinkat02 [ 4512.764496] LTP: starting readv01 [ 4512.882734] LTP: starting readv02 [ 4512.986009] LTP: starting realpath01 [ 4513.150702] LTP: starting reboot01 [ 4513.247884] LTP: starting reboot02 [ 4513.336980] LTP: starting recv01 [ 4513.428482] LTP: starting recvfrom01 [ 4513.520188] LTP: starting recvmsg01 [ 4513.639017] LTP: starting recvmsg02 [ 4513.722277] LTP: starting recvmsg03 [ 4514.046925] LTP: starting recvmmsg01 [ 4514.155747] LTP: starting remap_file_pages01 [ 4514.268017] mmap: remap_file_page (162208) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.rst. 
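The "mmap: remap_file_page (162208) uses deprecated remap_file_pages() syscall" warning above is logged when a process calls remap_file_pages(); since Linux 4.0 the syscall is only emulated on top of mmap(), so it still succeeds but new code should simply create separate mmap() views of the file. A hedged sketch of the call pattern that triggers the warning; the file name and sizes are placeholders.

```c
/* Hedged sketch: the call pattern behind the
 * "uses deprecated remap_file_pages() syscall" warning. Placeholder file. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pg = sysconf(_SC_PAGESIZE);
    int fd = open("datafile", O_RDWR);          /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 2 * (size_t)pg, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Re-point the first page of the mapping at file page 1 instead of 0;
     * calling the deprecated syscall logs the warning seen in the log. */
    if (remap_file_pages(p, (size_t)pg, 0, 1, 0) != 0)
        perror("remap_file_pages");

    munmap(p, 2 * (size_t)pg);
    close(fd);
    return 0;
}
```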
[ 4514.368784] LTP: starting remap_file_pages02 [ 4514.453937] LTP: starting removexattr01 [ 4514.530959] LTP: starting removexattr02 [ 4514.607605] LTP: starting rename01 [ 4514.690859] loop0: detected capacity change from 0 to 614400 [ 4514.711675] /dev/zero: Can't open blockdev [ 4514.884218] /dev/zero: Can't open blockdev [ 4514.958244] /dev/zero: Can't open blockdev [ 4515.033909] /dev/zero: Can't open blockdev [ 4515.219715] /dev/zero: Can't open blockdev [ 4516.137677] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4516.178019] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4516.268707] EXT4-fs (loop0): unmounting filesystem. [ 4518.299821] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4518.350906] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4518.449896] EXT4-fs (loop0): unmounting filesystem. [ 4519.068575] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4519.149533] EXT4-fs (loop0): unmounting filesystem. [ 4520.433577] XFS (loop0): Mounting V5 Filesystem [ 4520.501217] XFS (loop0): Ending clean mount [ 4520.648010] XFS (loop0): Unmounting Filesystem [ 4521.030747] LTP: starting rename01A (symlink01 -T rename01) [ 4521.077096] LTP: starting rename03 [ 4521.144018] loop0: detected capacity change from 0 to 614400 [ 4521.163750] /dev/zero: Can't open blockdev [ 4521.240880] /dev/zero: Can't open blockdev [ 4521.313961] /dev/zero: Can't open blockdev [ 4521.392683] /dev/zero: Can't open blockdev [ 4521.525774] /dev/zero: Can't open blockdev [ 4522.228051] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4522.266549] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4522.365574] EXT4-fs (loop0): unmounting filesystem. [ 4524.354285] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4524.415650] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4524.517546] EXT4-fs (loop0): unmounting filesystem. [ 4525.207577] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4525.296660] EXT4-fs (loop0): unmounting filesystem. [ 4525.858459] XFS (loop0): Mounting V5 Filesystem [ 4525.925107] XFS (loop0): Ending clean mount [ 4526.064950] XFS (loop0): Unmounting Filesystem [-- MARK -- Fri Feb 3 07:00:00 2023] [ 4526.433318] LTP: starting rename04 [ 4526.523172] loop0: detected capacity change from 0 to 614400 [ 4526.538933] /dev/zero: Can't open blockdev [ 4526.613407] /dev/zero: Can't open blockdev [ 4526.690356] /dev/zero: Can't open blockdev [ 4526.763360] /dev/zero: Can't open blockdev [ 4526.896455] /dev/zero: Can't open blockdev [ 4527.567645] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4527.612809] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4527.688702] EXT4-fs (loop0): unmounting filesystem. [ 4529.624866] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4529.685053] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4529.779584] EXT4-fs (loop0): unmounting filesystem. [ 4530.466786] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4530.555695] EXT4-fs (loop0): unmounting filesystem. 
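
The "readahead02 (162135): drop_caches: 1" lines a little earlier are what the kernel logs whenever a process writes to /proc/sys/vm/drop_caches; the readahead test drops the page cache between passes so its reads cannot be satisfied from cached data. A rough sketch of that write, assuming root privileges (this is not the test's own code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Flush dirty data first; drop_caches only discards clean cache. */
        sync();

        int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* "1" drops the page cache; "3" also drops dentries and inodes. */
        if (write(fd, "1", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }
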
[ 4531.563698] XFS (loop0): Mounting V5 Filesystem [ 4531.629164] XFS (loop0): Ending clean mount [ 4531.769698] XFS (loop0): Unmounting Filesystem [ 4532.158879] LTP: starting rename05 [ 4532.236696] loop0: detected capacity change from 0 to 614400 [ 4532.250053] /dev/zero: Can't open blockdev [ 4532.326971] /dev/zero: Can't open blockdev [ 4532.401791] /dev/zero: Can't open blockdev [ 4532.476605] /dev/zero: Can't open blockdev [ 4532.605028] /dev/zero: Can't open blockdev [ 4533.496706] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4533.536731] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4533.617564] EXT4-fs (loop0): unmounting filesystem. [ 4535.817424] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4535.868187] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4535.951010] EXT4-fs (loop0): unmounting filesystem. [ 4536.601171] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4536.688514] EXT4-fs (loop0): unmounting filesystem. [ 4537.342035] XFS (loop0): Mounting V5 Filesystem [ 4537.409013] XFS (loop0): Ending clean mount [ 4537.549061] XFS (loop0): Unmounting Filesystem [ 4537.946827] LTP: starting rename06 [ 4538.030826] loop0: detected capacity change from 0 to 614400 [ 4538.041860] /dev/zero: Can't open blockdev [ 4538.116458] /dev/zero: Can't open blockdev [ 4538.193887] /dev/zero: Can't open blockdev [ 4538.266606] /dev/zero: Can't open blockdev [ 4538.399934] /dev/zero: Can't open blockdev [ 4539.078195] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4539.125435] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4539.200142] EXT4-fs (loop0): unmounting filesystem. [ 4541.150282] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4541.221569] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4541.301582] EXT4-fs (loop0): unmounting filesystem. [ 4541.936901] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4542.013896] EXT4-fs (loop0): unmounting filesystem. [ 4542.449106] XFS (loop0): Mounting V5 Filesystem [ 4542.513126] XFS (loop0): Ending clean mount [ 4542.663438] XFS (loop0): Unmounting Filesystem [ 4543.046428] LTP: starting rename07 [ 4543.134810] loop0: detected capacity change from 0 to 614400 [ 4543.144854] /dev/zero: Can't open blockdev [ 4543.220142] /dev/zero: Can't open blockdev [ 4543.301071] /dev/zero: Can't open blockdev [ 4543.373742] /dev/zero: Can't open blockdev [ 4543.503847] /dev/zero: Can't open blockdev [ 4544.193078] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4544.230842] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4544.305906] EXT4-fs (loop0): unmounting filesystem. [ 4546.307960] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4546.369285] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4546.465102] EXT4-fs (loop0): unmounting filesystem. [ 4547.136985] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4547.216686] EXT4-fs (loop0): unmounting filesystem. 
[ 4548.773693] XFS (loop0): Mounting V5 Filesystem [ 4548.845949] XFS (loop0): Ending clean mount [ 4548.980147] XFS (loop0): Unmounting Filesystem [ 4549.362319] LTP: starting rename08 [ 4549.443807] loop0: detected capacity change from 0 to 614400 [ 4549.456003] /dev/zero: Can't open blockdev [ 4549.530240] /dev/zero: Can't open blockdev [ 4549.603463] /dev/zero: Can't open blockdev [ 4549.678892] /dev/zero: Can't open blockdev [ 4549.810341] /dev/zero: Can't open blockdev [ 4550.665361] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4550.704590] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4550.756858] EXT4-fs (loop0): unmounting filesystem. [ 4552.952905] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4553.004331] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4553.069884] EXT4-fs (loop0): unmounting filesystem. [ 4553.678965] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4553.734733] EXT4-fs (loop0): unmounting filesystem. [ 4555.049627] XFS (loop0): Mounting V5 Filesystem [ 4555.117847] XFS (loop0): Ending clean mount [ 4555.230546] XFS (loop0): Unmounting Filesystem [ 4555.579109] LTP: starting rename09 [ 4555.761545] LTP: starting rename10 [ 4555.853365] loop0: detected capacity change from 0 to 614400 [ 4555.873144] /dev/zero: Can't open blockdev [ 4555.954008] /dev/zero: Can't open blockdev [ 4556.032030] /dev/zero: Can't open blockdev [ 4556.106045] /dev/zero: Can't open blockdev [ 4556.238913] /dev/zero: Can't open blockdev [ 4556.937358] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4556.975046] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4557.032847] EXT4-fs (loop0): unmounting filesystem. [ 4558.963143] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4559.026245] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4559.102834] EXT4-fs (loop0): unmounting filesystem. [ 4559.676189] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4559.755924] EXT4-fs (loop0): unmounting filesystem. [ 4560.165303] XFS (loop0): Mounting V5 Filesystem [ 4560.241923] XFS (loop0): Ending clean mount [ 4560.372198] XFS (loop0): Unmounting Filesystem [ 4560.780906] LTP: starting rename11 [ 4560.851704] loop0: detected capacity change from 0 to 614400 [ 4561.376902] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4561.415656] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4561.499802] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 4561.537350] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 4561.542638] EXT4-fs (loop0): unmounting filesystem. [ 4561.665709] LTP: starting rename12 [ 4561.753740] loop0: detected capacity change from 0 to 614400 [ 4561.772157] /dev/zero: Can't open blockdev [ 4561.856223] /dev/zero: Can't open blockdev [ 4561.932867] /dev/zero: Can't open blockdev [ 4562.007986] /dev/zero: Can't open blockdev [ 4562.622651] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4562.661440] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4562.735248] EXT4-fs (loop0): unmounting filesystem. [ 4564.679151] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4564.744470] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4564.836692] EXT4-fs (loop0): unmounting filesystem. 
[ 4565.443556] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4565.532940] EXT4-fs (loop0): unmounting filesystem. [ 4566.099324] XFS (loop0): Mounting V5 Filesystem [ 4566.165784] XFS (loop0): Ending clean mount [ 4566.306739] XFS (loop0): Unmounting Filesystem [ 4566.717867] LTP: starting rename13 [ 4566.853307] loop0: detected capacity change from 0 to 614400 [ 4566.864300] /dev/zero: Can't open blockdev [ 4566.943146] /dev/zero: Can't open blockdev [ 4567.019890] /dev/zero: Can't open blockdev [ 4567.093011] /dev/zero: Can't open blockdev [ 4568.028978] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4568.068609] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4568.128913] EXT4-fs (loop0): unmounting filesystem. [ 4570.112163] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4570.171180] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4570.245849] EXT4-fs (loop0): unmounting filesystem. [ 4570.882845] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4570.978925] EXT4-fs (loop0): unmounting filesystem. [ 4571.537307] XFS (loop0): Mounting V5 Filesystem [ 4571.603396] XFS (loop0): Ending clean mount [ 4571.727201] XFS (loop0): Unmounting Filesystem [ 4572.122153] LTP: starting rename14 [ 4577.225156] LTP: starting renameat01 [ 4577.325369] loop0: detected capacity change from 0 to 614400 [ 4577.868856] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4577.916824] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4577.991365] EXT4-fs (loop0): re-mounted. Quota mode: none. [ 4578.024049] EXT4-fs (loop0): unmounting filesystem. [ 4578.140336] LTP: starting renameat201 [ 4578.264638] LTP: starting renameat202 (renameat202 -i 10) [ 4578.362636] LTP: starting request_key01 [ 4578.444482] LTP: starting request_key02 [ 4580.529239] LTP: starting request_key03 [ 4580.599135] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.604811] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.607064] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.608268] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.609425] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.610511] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.611852] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.612834] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.614037] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.615947] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.617170] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.618294] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.620297] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.621929] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.623935] trusted_key: 
encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.625491] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.626870] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.627899] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.629814] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.630859] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.632898] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.634708] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.636133] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.637782] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.639782] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.641720] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.643002] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.644323] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.645969] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4580.647587] trusted_key: encrypted_key: keyword 'u[ 4581.254448] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.256348] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.257475] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.258730] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.260065] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.261152] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.262219] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.263446] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.265096] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.266709] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4581.267718] trusted_key: encrypted_key: keyword 'update' not allowed when cal[ 4582.233739] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.235623] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.237076] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.239080] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.240602] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.241899] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.242949] trusted_key: encrypted_key: keyword 'update' not 
allowed when called from .instantiate method [ 4582.244423] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.246395] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.248158] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.249946] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.251100] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4582.252686] trusted_key: encrypted_key: keyword 'update' not allowed when called from .instantiate method [ 4620.399185] LTP: starting request_key04 [ 4620.511161] LTP: starting rmdir01 [ 4620.619042] LTP: starting rmdir02 [ 4620.749113] LTP: starting rmdir03 [ 4620.858107] LTP: starting rmdir03A (symlink01 -T rmdir03) [ 4620.899791] LTP: starting rt_sigaction01 [ 4621.001900] LTP: starting rt_sigaction02 [ 4621.080427] LTP: starting rt_sigaction03 [ 4621.155157] LTP: starting rt_sigprocmask01 [ 4621.222350] LTP: starting rt_sigprocmask02 [ 4621.295404] LTP: starting rt_sigqueueinfo01 [ 4621.386927] LTP: starting rt_sigsuspend01 [ 4622.474545] LTP: starting rt_sigtimedwait01 [ 4624.954283] LTP: starting rt_tgsigqueueinfo01 [ 4625.078870] LTP: starting sbrk01 [ 4625.145499] LTP: starting sbrk02 [ 4625.243551] LTP: starting sbrk03 [ 4625.317795] LTP: starting sched_get_priority_max01 [ 4625.392097] LTP: starting sched_get_priority_max02 [ 4625.471620] LTP: starting sched_get_priority_min01 [ 4625.548615] LTP: starting sched_get_priority_min02 [ 4625.624808] LTP: starting sched_getparam01 [ 4625.744201] LTP: starting sched_getparam03 [ 4625.828836] LTP: starting sched_rr_get_interval01 [ 4625.933245] LTP: starting sched_rr_get_interval02 [ 4626.013162] LTP: starting sched_rr_get_interval03 [ 4626.097223] LTP: starting sched_setparam01 [ 4626.189227] LTP: starting sched_setparam02 [ 4626.281614] LTP: starting sched_setparam03 [ 4626.380413] LTP: starting sched_setparam04 [ 4626.462186] LTP: starting sched_setparam05 [ 4626.583481] LTP: starting sched_getscheduler01 [ 4626.685378] LTP: starting sched_getscheduler02 [ 4626.770553] LTP: starting sched_setscheduler01 [ 4626.862488] LTP: starting sched_setscheduler02 [ 4626.984613] LTP: starting sched_setscheduler03 [ 4627.136093] LTP: starting sched_yield01 [ 4627.197903] LTP: starting sched_setaffinity01 [ 4627.301061] LTP: starting sched_getaffinity01 [ 4627.383391] LTP: starting sched_setattr01 [ 4627.466703] LTP: starting sched_getattr01 [ 4627.538717] LTP: starting sched_getattr02 [ 4627.596206] LTP: starting select01 [ 4627.760435] LTP: starting select02 [ 4653.708132] LTP: starting select03 [ 4653.821388] select03[165002]: segfault at 7f85a8690000 ip 00007f85a854500a sp 00007ffd7dc96650 error 4 in libc.so.6[7f85a8428000+175000] [ 4653.822291] Code: 41 54 41 89 fc 55 48 89 f5 53 4c 89 c3 48 83 ec 38 64 48 8b 04 25 28 00 00 00 48 89 44 24 28 31 c0 4d 85 c0 0f 84 2e 01 00 00 <49> 8b 30 49 8b 50 08 48 85 f6 0f 88 7a 01 00 00 85 d2 0f 88 72 01 [ 4654.203917] LTP: starting select04 [ 4655.241459] LTP: starting semctl01 [ 4655.420754] LTP: starting semctl02 [ 4655.521514] LTP: starting semctl03 [ 4655.602280] LTP: starting semctl04 [ 4655.692533] LTP: starting semctl05 [ 4655.765087] LTP: starting semctl06 [ 4655.902825] LTP: starting semctl07 [ 4655.989667] LTP: starting semctl08 [ 4656.039803] LTP: starting semctl09 [ 4656.138087] LTP: starting semget01 [ 
4656.203475] LTP: starting semget02 [ 4656.286910] LTP: starting semget03 [ 4656.352794] LTP: starting semget05 [ 4658.795903] LTP: starting semget06 [ 4658.970957] LTP: starting semop01 [ 4659.091733] LTP: starting semop02 [ 4659.197977] LTP: starting semop03 [ 4659.371938] LTP: starting send01 [ 4659.442623] LTP: starting send02 [ 4665.935453] LTP: starting sendfile02 [ 4666.049523] LTP: starting sendfile02_64 [ 4666.128091] LTP: starting sendfile03 [ 4666.201976] LTP: starting sendfile03_64 [ 4666.281526] LTP: starting sendfile04 [ 4666.357964] LTP: starting sendfile04_64 [ 4666.464041] LTP: starting sendfile05 [ 4666.559808] LTP: starting sendfile05_64 [ 4666.640892] LTP: starting sendfile06 [ 4666.733783] LTP: starting sendfile06_64 [ 4666.820279] LTP: starting sendfile07 [ 4666.918458] LTP: starting sendfile07_64 [ 4667.031234] LTP: starting sendfile08 [ 4667.119961] LTP: starting sendfile08_64 [ 4667.199211] LTP: starting sendfile09 [ 4691.166142] LTP: starting sendfile09_64 [ 4713.181385] LTP: starting sendmsg01 [ 4713.429975] LTP: starting sendmsg02 [ 4730.129966] LTP: starting sendmsg03 [ 4730.227172] raw_sendmsg: sendmsg03 forgot to set AF_INET. Fix it! [ 4771.700155] LTP: starting sendmmsg01 [ 4771.813851] LTP: starting sendmmsg02 [ 4771.922845] LTP: starting sendto01 [ 4772.019993] LTP: starting sendto02 [ 4772.183979] LTP: starting sendto03 [ 4772.439719] LTP: starting set_mempolicy01 [ 4772.633472] LTP: starting set_mempolicy02 [ 4772.746985] LTP: starting set_mempolicy03 [ 4772.829742] loop0: detected capacity change from 0 to 614400 [ 4772.842781] /dev/zero: Can't open blockdev [ 4772.899857] /dev/zero: Can't open blockdev [ 4772.942697] /dev/zero: Can't open blockdev [ 4772.985844] /dev/zero: Can't open blockdev [ 4773.100570] /dev/zero: Can't open blockdev [ 4773.500094] LTP: starting set_mempolicy04 [ 4773.594645] loop0: detected capacity change from 0 to 614400 [ 4773.607791] /dev/zero: Can't open blockdev [ 4773.682428] /dev/zero: Can't open blockdev [ 4773.758606] /dev/zero: Can't open blockdev [ 4773.833421] /dev/zero: Can't open blockdev [ 4773.965240] /dev/zero: Can't open blockdev [ 4775.250949] LTP: starting set_robust_list01 [ 4775.326272] LTP: starting set_thread_area01 [ 4775.389879] LTP: starting set_tid_address01 [ 4775.451754] LTP: starting setdomainname01 [ 4775.536418] LTP: starting setdomainname02 [ 4775.617223] LTP: starting setdomainname03 [ 4775.715508] LTP: starting setfsgid01 [ 4775.784250] LTP: starting setfsgid01_16 [ 4775.835878] LTP: starting setfsgid02 [ 4775.916617] LTP: starting setfsgid02_16 [ 4775.998186] LTP: starting setfsgid03 [ 4776.060567] LTP: starting setfsgid03_16 [ 4776.131234] LTP: starting setfsuid01 [ 4776.188170] LTP: starting setfsuid01_16 [ 4776.239860] LTP: starting setfsuid02 [ 4776.363630] LTP: starting setfsuid02_16 [ 4776.486260] LTP: starting setfsuid03 [ 4776.551592] LTP: starting setfsuid03_16 [ 4776.612346] LTP: starting setfsuid04 [ 4776.703136] LTP: starting setfsuid04_16 [ 4776.776136] LTP: starting setgid01 [ 4776.848769] LTP: starting setgid01_16 [ 4776.911878] LTP: starting setgid02 [ 4776.990422] LTP: starting setgid02_16 [ 4777.069455] LTP: starting setgid03 [ 4777.140950] LTP: starting setgid03_16 [ 4777.216389] LTP: starting setegid01 [ 4777.298239] LTP: starting setegid02 [ 4777.393882] LTP: starting sgetmask01 [ 4777.453813] LTP: starting setgroups01 [ 4777.509799] LTP: starting setgroups01_16 [ 4777.565068] LTP: starting setgroups02 [ 4777.622375] LTP: starting setgroups02_16 [ 4777.685479] LTP: 
starting setgroups03 [ 4777.750997] LTP: starting setgroups03_16 [ 4777.801686] LTP: starting setgroups04 [ 4777.852798] LTP: starting setgroups04_16 [ 4777.911930] LTP: starting sethostname01 [ 4777.991229] LTP: starting sethostname02 [ 4778.072356] LTP: starting sethostname03 [ 4778.162854] LTP: starting setitimer01 [ 4778.242358] LTP: starting setitimer02 [ 4778.316226] LTP: starting setitimer03 [ 4778.370785] LTP: starting setns01 [ 4778.466792] LTP: starting setns02 [ 4778.567441] LTP: starting setpgid01 [ 4778.638439] LTP: starting setpgid02 [ 4778.689133] LTP: starting setpgid03 [ 4778.837854] LTP: starting setpgrp01 [ 4778.903253] LTP: starting setpgrp02 [ 4778.962678] LTP: starting setpriority01 [ 4782.248004] LTP: starting setpriority02 [ 4782.370038] LTP: starting setregid01 [ 4782.447800] LTP: starting setregid01_16 [ 4782.536169] LTP: starting setregid02 [ 4782.618087] LTP: starting setregid02_16 [ 4782.696471] LTP: starting setregid03 [ 4782.776995] LTP: starting setregid03_16 [ 4782.861422] LTP: starting setregid04 [ 4782.951265] LTP: starting setregid04_16 [ 4783.030177] LTP: starting setresgid01 [ 4783.101825] LTP: starting setresgid01_16 [ 4783.162853] LTP: starting setresgid02 [ 4783.235500] LTP: starting setresgid02_16 [ 4783.307564] LTP: starting setresgid03 [ 4783.384891] LTP: starting setresgid03_16 [ 4783.464104] LTP: starting setresgid04 [ 4783.532965] LTP: starting setresgid04_16 [ 4783.591452] LTP: starting setresuid01 [ 4783.657866] LTP: starting setresuid01_16 [ 4783.731350] LTP: starting setresuid02 [ 4783.807007] LTP: starting setresuid02_16 [ 4783.877969] LTP: starting setresuid03 [ 4783.948800] LTP: starting setresuid03_16 [ 4784.013160] LTP: starting setresuid04 [ 4784.101926] LTP: starting setresuid04_16 [ 4784.191455] LTP: starting setresuid05 [ 4784.257949] LTP: starting setresuid05_16 [ 4784.342917] LTP: starting setreuid01 [ 4784.420509] LTP: starting setreuid01_16 [ 4784.480540] LTP: starting setreuid02 [ 4784.557852] LTP: starting setreuid02_16 [ 4784.636729] LTP: starting setreuid03 [ 4784.723645] LTP: starting setreuid03_16 [ 4784.794432] LTP: starting setreuid04 [ 4784.882058] LTP: starting setreuid04_16 [ 4784.967190] LTP: starting setreuid05 [ 4785.062603] LTP: starting setreuid05_16 [ 4785.143497] LTP: starting setreuid06 [ 4785.197545] LTP: starting setreuid06_16 [ 4785.255058] LTP: starting setreuid07 [ 4785.332296] LTP: starting setreuid07_16 [ 4785.410939] LTP: starting setrlimit01 [ 4785.536365] LTP: starting setrlimit02 [ 4785.631130] LTP: starting setrlimit03 [ 4785.701263] LTP: starting setrlimit04 [ 4785.801309] LTP: starting setrlimit05 [ 4785.875014] LTP: starting setrlimit06 [ 4787.994678] LTP: starting setsid01 [ 4789.062094] LTP: starting setsockopt01 [ 4789.153559] LTP: starting setsockopt02 [ 4789.251775] LTP: starting setsockopt03 [ 4789.319914] LTP: starting setsockopt04 [ 4789.383322] LTP: starting setsockopt05 [ 4791.197239] LTP: starting setsockopt07 [ 4801.114802] LTP: starting setsockopt08 [ 4801.256198] LTP: starting setsockopt09 [ 4802.512909] LTP: starting settimeofday01 [ 4802.604304] LTP: starting settimeofday02 [ 4802.690139] LTP: starting setuid01 [ 4802.758266] LTP: starting setuid01_16 [ 4802.847911] LTP: starting setuid03 [ 4802.921783] LTP: starting setuid03_16 [ 4803.013516] LTP: starting setuid04 [ 4803.096171] LTP: starting setuid04_16 [ 4803.169631] LTP: starting setxattr01 [ 4803.247260] loop0: detected capacity change from 0 to 614400 [ 4803.260252] /dev/zero: Can't open blockdev [ 4803.334890] 
/dev/zero: Can't open blockdev [ 4803.409764] /dev/zero: Can't open blockdev [ 4803.481687] /dev/zero: Can't open blockdev [ 4803.613419] /dev/zero: Can't open blockdev [ 4804.325335] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4804.371346] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4804.445329] EXT4-fs (loop0): unmounting filesystem. [ 4806.414144] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4806.492905] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4806.596114] EXT4-fs (loop0): unmounting filesystem. [ 4807.219696] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4807.308301] EXT4-fs (loop0): unmounting filesystem. [ 4808.680260] XFS (loop0): Mounting V5 Filesystem [ 4808.746303] XFS (loop0): Ending clean mount [ 4808.895628] XFS (loop0): Unmounting Filesystem [ 4809.335732] LTP: starting setxattr02 [ 4809.447854] LTP: starting setxattr03 [ 4809.542584] LTP: starting shmat01 [ 4809.640265] shmat01[165797]: segfault at 7f859d492000 ip 00000000004050b2 sp 00007ffdbc8d2350 error 6 in shmat01[404000+1a000] [ 4809.641531] Code: 7c 00 00 00 4c 8b 04 c5 70 e1 41 00 bf 10 e0 41 00 31 c0 e8 80 88 00 00 e9 b6 fe ff ff 48 6b db 18 83 bb 6c e1 41 00 0b 74 0f <41> c7 04 24 0a 00 00 00 31 ff e8 1f fb ff ff bf 01 00 00 00 e8 e5 [ 4809.660039] LTP: starting shmat02 [ 4809.763943] LTP: starting shmat03 [ 4809.844662] LTP: starting shmctl01 [ 4810.118754] LTP: starting shmctl02 [ 4810.238495] LTP: starting shmctl03 [ 4810.323636] LTP: starting shmctl04 [ 4810.411708] LTP: starting shmctl05 [-- MARK -- Fri Feb 3 07:05:00 2023] [ 4860.513039] LTP: starting shmctl06 [ 4860.569724] LTP: starting shmctl07 [ 4860.679544] LTP: starting shmctl08 [ 4861.753368] LTP: starting shmdt01 [ 4861.858541] LTP: starting shmdt02 [ 4861.921722] LTP: starting shmget02 [ 4862.033989] LTP: starting shmget03 [ 4863.673089] LTP: starting shmget04 [ 4863.750671] LTP: starting shmget05 [ 4863.849568] LTP: starting shmget06 [ 4863.919450] LTP: starting sigaction01 [ 4863.971891] LTP: starting sigaction02 [ 4864.089578] LTP: starting sigaltstack01 [ 4864.147528] LTP: starting sigaltstack02 [ 4864.196387] LTP: starting sighold02 [ 4864.286357] LTP: starting signal01 [ 4864.401739] LTP: starting signal02 [ 4864.478427] LTP: starting signal03 [ 4864.531184] LTP: starting signal04 [ 4864.578419] LTP: starting signal05 [ 4864.634001] LTP: starting signal06 [ 4878.634182] LTP: starting signalfd01 [ 4878.727443] LTP: starting signalfd4_01 [ 4878.802082] LTP: starting signalfd4_02 [ 4878.861289] LTP: starting sigpending02 [ 4878.960575] LTP: starting sigprocmask01 [ 4879.026629] LTP: starting sigrelse01 [ 4879.098852] LTP: starting sigsuspend01 [ 4880.174441] LTP: starting sigtimedwait01 [ 4881.618051] LTP: starting sigwait01 [ 4881.824786] LTP: starting sigwaitinfo01 [ 4882.245305] LTP: starting socket01 [ 4882.665444] LTP: starting socket02 [ 4882.751754] LTP: starting socketcall01 [ 4882.826136] LTP: starting socketcall02 [ 4882.912764] LTP: starting socketcall03 [ 4882.984663] LTP: starting socketpair01 [ 4883.314520] LTP: starting socketpair02 [ 4883.387203] LTP: starting sockioctl01 [ 4883.474084] LTP: starting splice01 [ 4883.563385] LTP: starting splice02 [ 4883.768250] LTP: starting splice03 [ 4883.866070] LTP: starting splice04 [ 4883.937297] LTP: starting splice05 [ 4884.008890] LTP: starting tee01 [ 4884.106677] LTP: starting tee02 [ 4884.182142] LTP: starting ssetmask01 [ 4884.245523] 
LTP: starting stat01 [ 4884.329637] LTP: starting stat01_64 [ 4884.420286] LTP: starting stat02 [ 4884.500267] LTP: starting stat02_64 [ 4884.593942] LTP: starting stat03 [ 4884.701326] LTP: starting stat03_64 [ 4884.794157] LTP: starting stat04 (symlink01 -T stat04) [ 4884.843526] LTP: starting stat04_64 (symlink01 -T stat04_64) [ 4884.894291] LTP: starting statfs01 [ 4884.956300] LTP: starting statfs01_64 [ 4885.015251] LTP: starting statfs02 [ 4885.139805] LTP: starting statfs02_64 [ 4885.272377] LTP: starting statfs03 [ 4885.353177] LTP: starting statfs03_64 [ 4885.432782] LTP: starting statvfs01 [ 4885.485907] LTP: starting statvfs02 [ 4885.551064] LTP: starting stime01 [ 4885.648924] LTP: starting stime02 [ 4885.758351] LTP: starting string01 [ 4885.820695] LTP: starting swapoff01 [ 4886.000390] Adding 36k swap on ./tstswap. Priority:-3 extents:1 across:36k FS [ 4889.114150] Adding 65532k swap on ./swapfile01. Priority:-3 extents:1 across:65532k FS [ 4889.310528] LTP: starting swapoff02 [ 4889.448234] Adding 36k swap on ./tstswap. Priority:-3 extents:1 across:36k FS [ 4889.485250] LTP: starting swapon01 [ 4889.625118] Adding 36k swap on ./tstswap. Priority:-3 extents:1 across:36k FS [ 4889.712001] Adding 36k swap on ./swapfile01. Priority:-3 extents:1 across:36k FS [ 4889.744203] LTP: starting swapon02 [ 4889.883131] Adding 36k swap on ./tstswap. Priority:-3 extents:1 across:36k FS [ 4890.057166] Adding 36k swap on alreadyused. Priority:-3 extents:1 across:36k FS [ 4890.059240] Unable to find swap-space signature [ 4890.097461] LTP: starting swapon03 [ 4890.244870] Adding 36k swap on ./tstswap. Priority:-3 extents:1 across:36k FS [ 4890.339680] Adding 36k swap on swapfile02. Priority:-3 extents:1 across:36k FS [ 4890.410277] Adding 36k swap on swapfile03. Priority:-4 extents:1 across:36k FS [ 4890.489303] Adding 36k swap on swapfile04. Priority:-5 extents:1 across:36k FS [ 4890.567914] Adding 36k swap on swapfile05. Priority:-6 extents:1 across:36k FS [ 4890.638302] Adding 36k swap on swapfile06. Priority:-7 extents:1 across:36k FS [ 4890.717462] Adding 36k swap on swapfile07. Priority:-8 extents:1 across:36k FS [ 4890.796616] Adding 36k swap on swapfile08. Priority:-9 extents:1 across:36k FS [ 4890.875351] Adding 36k swap on swapfile09. Priority:-10 extents:1 across:36k FS [ 4890.953947] Adding 36k swap on swapfile10. Priority:-11 extents:1 across:36k FS [ 4891.033103] Adding 36k swap on swapfile11. Priority:-12 extents:1 across:36k FS [ 4891.111790] Adding 36k swap on swapfile12. Priority:-13 extents:1 across:36k FS [ 4891.190622] Adding 36k swap on swapfile13. Priority:-14 extents:1 across:36k FS [ 4891.261364] Adding 36k swap on swapfile14. Priority:-15 extents:1 across:36k FS [ 4891.339992] Adding 36k swap on swapfile15. Priority:-16 extents:1 across:36k FS [ 4891.420172] Adding 36k swap on swapfile16. Priority:-17 extents:1 across:36k FS [ 4891.497921] Adding 36k swap on swapfile17. Priority:-18 extents:1 across:36k FS [ 4891.576985] Adding 36k swap on swapfile18. Priority:-19 extents:1 across:36k FS [ 4891.655731] Adding 36k swap on swapfile19. Priority:-20 extents:1 across:36k FS [ 4891.734813] Adding 36k swap on swapfile20. Priority:-21 extents:1 across:36k FS [ 4891.805451] Adding 36k swap on swapfile21. Priority:-22 extents:1 across:36k FS [ 4891.884809] Adding 36k swap on swapfile22. Priority:-23 extents:1 across:36k FS [ 4891.954733] Adding 36k swap on swapfile23. Priority:-24 extents:1 across:36k FS [ 4892.025126] Adding 36k swap on swapfile24. 
Priority:-25 extents:1 across:36k FS [ 4895.889786] LTP: starting switch01 (endian_switch01) [ 4895.967503] LTP: starting symlink01 [ 4896.023172] LTP: starting symlink02 [ 4896.093228] LTP: starting symlink03 [ 4896.169962] LTP: starting symlink04 [ 4896.225871] LTP: starting symlink05 [ 4896.282572] LTP: starting symlinkat01 [ 4896.402373] LTP: starting sync01 [ 4896.471656] loop0: detected capacity change from 0 to 614400 [ 4896.487162] /dev/zero: Can't open blockdev [ 4896.562039] /dev/zero: Can't open blockdev [ 4896.633960] /dev/zero: Can't open blockdev [ 4896.708982] /dev/zero: Can't open blockdev [ 4896.847147] /dev/zero: Can't open blockdev [ 4897.527972] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4897.567390] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4902.124452] EXT4-fs (loop0): unmounting filesystem. [ 4904.189266] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4904.255072] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4909.351117] EXT4-fs (loop0): unmounting filesystem. [ 4910.059502] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4913.457288] EXT4-fs (loop0): unmounting filesystem. [ 4914.053416] XFS (loop0): Mounting V5 Filesystem [ 4914.120871] XFS (loop0): Ending clean mount [ 4916.149854] XFS (loop0): Unmounting Filesystem [ 4916.593569] LTP: starting syncfs01 [ 4916.710326] loop0: detected capacity change from 0 to 614400 [ 4916.723007] /dev/zero: Can't open blockdev [ 4916.798642] /dev/zero: Can't open blockdev [ 4916.877321] /dev/zero: Can't open blockdev [ 4916.950903] /dev/zero: Can't open blockdev [ 4917.086090] /dev/zero: Can't open blockdev [ 4917.718145] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4917.758886] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4922.325843] EXT4-fs (loop0): unmounting filesystem. [ 4924.347382] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4924.411416] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4929.369063] EXT4-fs (loop0): unmounting filesystem. [ 4930.054089] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4933.460228] EXT4-fs (loop0): unmounting filesystem. [ 4934.086717] XFS (loop0): Mounting V5 Filesystem [ 4934.157205] XFS (loop0): Ending clean mount [ 4935.890973] XFS (loop0): Unmounting Filesystem [ 4936.334086] LTP: starting sync_file_range01 [ 4936.432583] LTP: starting sync_file_range02 [ 4936.487631] loop0: detected capacity change from 0 to 614400 [ 4936.500199] /dev/zero: Can't open blockdev [ 4936.573090] /dev/zero: Can't open blockdev [ 4936.647697] /dev/zero: Can't open blockdev [ 4936.726873] /dev/zero: Can't open blockdev [ 4936.865067] /dev/zero: Can't open blockdev [ 4937.666704] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 4937.707794] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 4948.256671] EXT4-fs (loop0): unmounting filesystem. [ 4950.681437] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 4950.743641] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4962.866865] EXT4-fs (loop0): unmounting filesystem. [ 4963.735907] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 4971.105070] EXT4-fs (loop0): unmounting filesystem. 
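
The sync_file_range01/02 runs in this stretch exercise sync_file_range(), a Linux-specific call that starts or waits for writeback on just a byte range of one file instead of syncing a whole filesystem the way sync() and syncfs() do. A hedged sketch of a typical write-then-flush sequence, with a hypothetical file name; note that unlike fsync() it does not flush file metadata:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("scratchfile", O_RDWR | O_CREAT, 0644);   /* hypothetical path */
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096] = { 0 };
        if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
            perror("write");
            return 1;
        }

        /* Kick off writeback of the first 4 KiB and wait for it to complete. */
        if (sync_file_range(fd, 0, sizeof(buf),
                            SYNC_FILE_RANGE_WAIT_BEFORE |
                            SYNC_FILE_RANGE_WRITE |
                            SYNC_FILE_RANGE_WAIT_AFTER) != 0)
            perror("sync_file_range");

        close(fd);
        return 0;
    }
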
[ 4971.877738] XFS (loop0): Mounting V5 Filesystem [ 4971.947649] XFS (loop0): Ending clean mount [ 4976.130864] XFS (loop0): Unmounting Filesystem [ 4976.704356] LTP: starting syscall01 [ 4976.785068] LTP: starting sysconf01 [ 4976.873384] LTP: starting sysctl01 [ 4976.959268] LTP: starting sysctl03 [ 4977.037120] LTP: starting sysctl04 [ 4977.106505] LTP: starting sysfs01 [ 4977.178183] LTP: starting sysfs02 [ 4977.246855] LTP: starting sysfs03 [ 4977.319325] LTP: starting sysfs04 [ 4977.382039] LTP: starting sysfs05 [ 4977.453378] LTP: starting sysinfo01 [ 4977.513008] LTP: starting sysinfo02 [ 4977.585580] LTP: starting sysinfo03 [ 4977.682688] LTP: starting syslog11 [ 4977.760039] LTP: starting syslog12 [ 4977.838822] LTP: starting tgkill01 [ 4977.943390] LTP: starting tgkill02 [ 4978.023339] LTP: starting tgkill03 [ 4978.104948] LTP: starting time01 [ 4978.185060] LTP: starting times01 [ 4978.267103] LTP: starting times03 [ 4986.249890] LTP: starting timerfd01 [ 4987.274356] LTP: starting timerfd02 [ 4987.348415] LTP: starting timerfd03 [ 4987.404472] LTP: starting timerfd04 [ 4987.649546] LTP: starting timerfd_create01 [ 4987.708504] LTP: starting timerfd_gettime01 [ 4987.781194] LTP: starting timerfd_settime01 [ 4987.852506] LTP: starting timerfd_settime02 [-- MARK -- Fri Feb 3 07:10:00 2023] [ 5207.545352] LTP: starting timer_create01 [ 5207.629383] LTP: starting timer_create02 [ 5207.699889] LTP: starting timer_create03 [ 5207.763857] LTP: starting timer_delete01 [ 5207.841177] LTP: starting timer_delete02 [ 5207.909256] LTP: starting timer_getoverrun01 [ 5207.975272] LTP: starting timer_gettime01 [ 5208.050537] LTP: starting timer_settime01 [ 5209.740855] LTP: starting timer_settime02 [ 5209.827383] LTP: starting timer_settime03 [ 5209.900473] LTP: starting tkill01 [ 5209.973609] LTP: starting tkill02 [ 5210.042830] LTP: starting truncate02 [ 5210.143204] LTP: starting truncate02_64 [ 5210.245917] LTP: starting truncate03 [ 5210.339767] LTP: starting truncate03_64 [ 5210.444261] LTP: starting umask01 [ 5211.411911] LTP: starting uname01 [ 5211.506615] LTP: starting uname02 [ 5211.579773] LTP: starting uname04 [ 5211.643363] LTP: starting unlink01 (symlink01 -T unlink01) [ 5211.686612] LTP: starting unlink05 [ 5211.762140] LTP: starting unlink07 [ 5211.870855] LTP: starting unlink08 [ 5212.004653] LTP: starting unlinkat01 [ 5212.107083] LTP: starting unshare01 [ 5212.220694] LTP: starting unshare02 [ 5212.320484] LTP: starting umount01 [ 5212.399966] loop0: detected capacity change from 0 to 614400 [ 5212.946375] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5212.983049] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5213.000800] EXT4-fs (loop0): unmounting filesystem. [ 5213.184765] LTP: starting umount02 [ 5213.272611] loop0: detected capacity change from 0 to 614400 [ 5213.778969] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5213.825734] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5213.877966] EXT4-fs (loop0): unmounting filesystem. [ 5214.047398] LTP: starting umount03 [ 5214.097234] loop0: detected capacity change from 0 to 614400 [ 5214.638517] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5214.678786] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5214.704756] EXT4-fs (loop0): unmounting filesystem. 
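
The timerfd tests earlier in this stretch (timerfd01 through timerfd_settime02) cover the timerfd interface, where timer expirations are consumed by reading a file descriptor rather than delivered as signals. A minimal usage sketch, not taken from the LTP sources:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/timerfd.h>
    #include <unistd.h>

    int main(void)
    {
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        if (tfd < 0) { perror("timerfd_create"); return 1; }

        /* First expiry after 1 s, then every 500 ms. */
        struct itimerspec its = {
            .it_value    = { .tv_sec = 1, .tv_nsec = 0 },
            .it_interval = { .tv_sec = 0, .tv_nsec = 500 * 1000 * 1000 },
        };
        if (timerfd_settime(tfd, 0, &its, NULL) != 0) {
            perror("timerfd_settime");
            return 1;
        }

        /* Each read() returns the number of expirations since the last read. */
        for (int i = 0; i < 3; i++) {
            uint64_t expirations;
            if (read(tfd, &expirations, sizeof(expirations)) == (ssize_t)sizeof(expirations))
                printf("expired %llu time(s)\n", (unsigned long long)expirations);
        }

        close(tfd);
        return 0;
    }
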
[ 5214.832727] LTP: starting umount2_01 [ 5214.899405] loop0: detected capacity change from 0 to 614400 [ 5215.624651] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5215.664445] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5215.715903] EXT4-fs (loop0): unmounting filesystem. [ 5215.746990] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5215.774936] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5215.808103] EXT4-fs (loop0): unmounting filesystem. [ 5215.914847] LTP: starting umount2_02 [ 5215.971272] loop0: detected capacity change from 0 to 614400 [ 5216.746396] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5216.785351] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5216.812291] EXT4-fs (loop0): unmounting filesystem. [ 5216.851612] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5216.888511] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5216.899025] EXT4-fs (loop0): unmounting filesystem. [ 5217.022094] LTP: starting userfaultfd01 [ 5217.108657] LTP: starting ustat01 [ 5217.184228] LTP: starting ustat02 [ 5217.247168] LTP: starting utime01 [ 5217.303538] loop0: detected capacity change from 0 to 614400 [ 5217.313931] /dev/zero: Can't open blockdev [ 5217.388996] /dev/zero: Can't open blockdev [ 5217.461630] /dev/zero: Can't open blockdev [ 5217.536062] /dev/zero: Can't open blockdev [ 5218.448754] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5218.494870] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5218.554204] EXT4-fs (loop0): unmounting filesystem. [ 5221.002749] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5221.078957] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5221.167915] EXT4-fs (loop0): unmounting filesystem. [ 5221.806723] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5221.877399] EXT4-fs (loop0): unmounting filesystem. [ 5223.013300] XFS (loop0): Mounting V5 Filesystem [ 5223.078865] XFS (loop0): Ending clean mount [ 5223.230056] XFS (loop0): Unmounting Filesystem [ 5223.640811] LTP: starting utime01A (symlink01 -T utime01) [ 5223.691952] LTP: starting utime02 [ 5223.759914] loop0: detected capacity change from 0 to 614400 [ 5223.775504] /dev/zero: Can't open blockdev [ 5223.850662] /dev/zero: Can't open blockdev [ 5223.926373] /dev/zero: Can't open blockdev [ 5224.001052] /dev/zero: Can't open blockdev [ 5224.706135] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5224.744601] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5224.811508] EXT4-fs (loop0): unmounting filesystem. [ 5226.757851] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5226.809803] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5226.890595] EXT4-fs (loop0): unmounting filesystem. [ 5227.460941] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5227.549866] EXT4-fs (loop0): unmounting filesystem. 
[ 5228.007813] XFS (loop0): Mounting V5 Filesystem [ 5228.077915] XFS (loop0): Ending clean mount [ 5228.206613] XFS (loop0): Unmounting Filesystem [ 5228.581112] LTP: starting utime03 [ 5228.649565] loop0: detected capacity change from 0 to 614400 [ 5228.659463] /dev/zero: Can't open blockdev [ 5228.734424] /dev/zero: Can't open blockdev [ 5228.810438] /dev/zero: Can't open blockdev [ 5228.888017] /dev/zero: Can't open blockdev [ 5229.561902] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5229.607649] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5229.669306] EXT4-fs (loop0): unmounting filesystem. [ 5231.641589] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5231.701293] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5231.791483] EXT4-fs (loop0): unmounting filesystem. [ 5232.445894] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5232.533461] EXT4-fs (loop0): unmounting filesystem. [ 5233.851442] XFS (loop0): Mounting V5 Filesystem [ 5233.918556] XFS (loop0): Ending clean mount [ 5234.073312] XFS (loop0): Unmounting Filesystem [ 5234.480285] LTP: starting utime04 [ 5234.549159] loop0: detected capacity change from 0 to 614400 [ 5234.572192] /dev/zero: Can't open blockdev [ 5234.646052] /dev/zero: Can't open blockdev [ 5234.720194] /dev/zero: Can't open blockdev [ 5234.793305] /dev/zero: Can't open blockdev [ 5235.497388] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5235.535444] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5235.587290] EXT4-fs (loop0): unmounting filesystem. [ 5237.540067] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5237.592171] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5237.655290] EXT4-fs (loop0): unmounting filesystem. [ 5238.273438] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5238.342260] EXT4-fs (loop0): unmounting filesystem. [ 5239.353295] XFS (loop0): Mounting V5 Filesystem [ 5239.421141] XFS (loop0): Ending clean mount [ 5239.553337] XFS (loop0): Unmounting Filesystem [ 5239.942999] LTP: starting utime05 [ 5240.008522] loop0: detected capacity change from 0 to 614400 [ 5240.019275] /dev/zero: Can't open blockdev [ 5240.094057] /dev/zero: Can't open blockdev [ 5240.168556] /dev/zero: Can't open blockdev [ 5240.243258] /dev/zero: Can't open blockdev [ 5240.947217] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5240.993807] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5241.054023] EXT4-fs (loop0): unmounting filesystem. [ 5243.027825] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5243.086639] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5243.171427] EXT4-fs (loop0): unmounting filesystem. [ 5243.787477] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5243.875966] EXT4-fs (loop0): unmounting filesystem. 
[ 5244.708322] XFS (loop0): Mounting V5 Filesystem [ 5244.774367] XFS (loop0): Ending clean mount [ 5244.916296] XFS (loop0): Unmounting Filesystem [ 5245.316991] LTP: starting utime06 [ 5245.433216] LTP: starting utimes01 [ 5245.548813] LTP: starting utimensat01 [ 5245.619300] loop0: detected capacity change from 0 to 614400 [ 5246.119131] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5246.151735] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5246.300584] EXT4-fs (loop0): unmounting filesystem. [ 5246.419311] LTP: starting vfork01 [ 5246.513639] LTP: starting vfork02 [ 5246.574141] LTP: starting vhangup01 [ 5246.676126] LTP: starting vhangup02 [ 5246.755356] LTP: starting vmsplice01 [ 5246.857380] LTP: starting vmsplice02 [ 5246.943081] LTP: starting vmsplice03 [ 5247.033780] LTP: starting vmsplice04 [ 5247.111203] LTP: starting wait01 [ 5247.202303] LTP: starting wait02 [ 5247.275211] LTP: starting wait401 [ 5247.352977] LTP: starting wait402 [ 5247.402438] LTP: starting wait403 [ 5247.489316] LTP: starting waitpid01 [ 5247.565113] LTP: starting waitpid02 [ 5247.636051] LTP: starting waitpid03 [ 5247.743880] LTP: starting waitpid04 [ 5247.820874] LTP: starting waitpid05 [ 5248.003868] LTP: starting waitpid06 [ 5248.108660] LTP: starting waitpid07 [ 5248.226196] LTP: starting waitpid08 [ 5248.340104] LTP: starting waitpid09 [ 5248.429192] LTP: starting waitpid10 [ 5250.545784] LTP: starting waitpid11 [ 5250.655000] LTP: starting waitpid12 [ 5250.775084] LTP: starting waitpid13 [ 5250.885672] LTP: starting waitid01 [ 5250.966454] LTP: starting waitid02 [ 5251.031571] LTP: starting waitid03 [ 5251.099113] LTP: starting waitid04 [ 5251.187795] LTP: starting waitid05 [ 5251.287489] LTP: starting waitid06 [ 5251.371477] LTP: starting waitid07 [ 5251.463000] LTP: starting waitid08 [ 5251.550697] LTP: starting waitid09 [ 5251.637528] LTP: starting waitid10 [ 5251.721950] LTP: starting waitid11 [ 5251.792803] LTP: starting write01 [ 5252.441615] LTP: starting write02 [ 5252.528934] LTP: starting write03 [ 5252.598629] LTP: starting write04 [ 5252.669207] LTP: starting write05 [ 5252.740415] LTP: starting write06 [ 5252.824512] LTP: starting writev01 [ 5252.911197] LTP: starting writev02 [ 5252.986297] LTP: starting writev05 [ 5253.051828] LTP: starting writev06 [ 5253.108259] LTP: starting writev07 [ 5253.236471] LTP: starting futex_cmp_requeue02 [ 5253.320516] LTP: starting futex_wait01 [ 5253.390667] LTP: starting futex_wait02 [ 5253.464307] LTP: starting futex_wait03 [ 5253.542454] LTP: starting futex_wait04 [ 5253.619150] LTP: starting futex_wait05 [ 5262.253558] LTP: starting futex_waitv01 [ 5262.323693] LTP: starting futex_waitv02 [ 5262.387048] LTP: starting futex_waitv03 [ 5262.466582] LTP: starting futex_wake01 [ 5262.531839] LTP: starting futex_wake02 [ 5262.712857] LTP: starting futex_wake03 [ 5262.994136] LTP: starting futex_wake04 [ 5264.348041] futex_wake04 (167294): drop_caches: 3 [ 5264.420086] LTP: starting futex_wait_bitset01 [ 5264.757333] LTP: starting memfd_create01 [ 5264.892080] LTP: starting memfd_create02 [ 5264.994229] LTP: starting memfd_create03 [ 5265.241293] memfd_create03 (167306): drop_caches: 3 [ 5265.278106] LTP: starting memfd_create04 [ 5265.447628] LTP: starting copy_file_range01 [ 5265.565177] loop0: detected capacity change from 0 to 614400 [ 5265.584452] /dev/zero: Can't open blockdev [ 5266.063184] /dev/zero: Can't open blockdev [ 5266.172017] /dev/zero: Can't open blockdev [ 5266.273705] /dev/zero: Can't 
open blockdev [ 5266.566684] /dev/zero: Can't open blockdev [ 5267.535294] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5267.575616] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5269.335726] EXT4-fs (loop0): unmounting filesystem. [ 5271.246651] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5271.312291] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5273.827862] EXT4-fs (loop0): unmounting filesystem. [ 5274.477734] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5277.022674] EXT4-fs (loop0): unmounting filesystem. [ 5277.489791] XFS (loop0): Mounting V5 Filesystem [ 5277.554166] XFS (loop0): Ending clean mount [ 5280.253147] XFS (loop0): Unmounting Filesystem [ 5283.035706] /dev/zero: Can't open blockdev [ 5283.128996] /dev/zero: Can't open blockdev [ 5283.219897] /dev/zero: Can't open blockdev [ 5283.307780] /dev/zero: Can't open blockdev [ 5283.457734] /dev/zero: Can't open blockdev [ 5284.175505] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5284.214281] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5286.798853] EXT4-fs (loop0): unmounting filesystem. [ 5288.787921] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5288.848794] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5291.417609] EXT4-fs (loop0): unmounting filesystem. [ 5292.040283] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5294.554877] EXT4-fs (loop0): unmounting filesystem. [ 5295.013541] XFS (loop0): Mounting V5 Filesystem [ 5295.077826] XFS (loop0): Ending clean mount [ 5297.718672] XFS (loop0): Unmounting Filesystem [ 5300.541170] LTP: starting copy_file_range02 [ 5300.638338] loop0: detected capacity change from 0 to 614400 [ 5301.193871] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5301.234343] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5301.838786] Adding 36k swap on /mnt/testarea/ltp-eExk763QEX/cop0VwZuS/file_swap. Priority:-3 extents:1 across:36k FS [ 5302.395831] Adding 36k swap on /mnt/testarea/ltp-eExk763QEX/cop0VwZuS/file_swap. Priority:-3 extents:1 across:36k FS [ 5302.506584] EXT4-fs (loop0): unmounting filesystem. [ 5302.631297] LTP: starting copy_file_range03 [ 5305.799229] LTP: starting statx01 [ 5305.927936] LTP: starting statx02 [ 5306.017057] LTP: starting statx03 [ 5306.129748] LTP: starting statx04 [ 5306.218014] loop0: detected capacity change from 0 to 614400 [ 5306.230843] /dev/zero: Can't open blockdev [ 5306.304745] /dev/zero: Can't open blockdev [ 5306.378558] /dev/zero: Can't open blockdev [ 5306.452733] /dev/zero: Can't open blockdev [ 5306.584815] /dev/zero: Can't open blockdev [ 5307.194569] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5307.242095] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5307.324054] EXT4-fs (loop0): unmounting filesystem. [ 5309.390347] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5309.458632] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5309.557058] EXT4-fs (loop0): unmounting filesystem. [ 5310.248829] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5310.328801] EXT4-fs (loop0): unmounting filesystem. 
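
copy_file_range01 through copy_file_range03 above exercise copy_file_range(), which copies data between two open files inside the kernel and lets filesystems use reflinks or server-side copy where available; glibc has shipped a wrapper since 2.27. A hedged sketch of the basic copy loop, with hypothetical file names:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int in = open("source.dat", O_RDONLY);                          /* hypothetical */
        int out = open("copy.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* hypothetical */
        if (in < 0 || out < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(in, &st) != 0) { perror("fstat"); return 1; }

        /* The kernel may copy less than requested, so loop until done. */
        off_t remaining = st.st_size;
        while (remaining > 0) {
            ssize_t n = copy_file_range(in, NULL, out, NULL, remaining, 0);
            if (n < 0) { perror("copy_file_range"); return 1; }
            if (n == 0) break;   /* unexpected end of input */
            remaining -= n;
        }

        close(in);
        close(out);
        return 0;
    }
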
[ 5311.400709] XFS (loop0): Mounting V5 Filesystem [ 5311.465524] XFS (loop0): Ending clean mount [ 5311.591228] XFS (loop0): Unmounting Filesystem [ 5311.997396] LTP: starting statx06 [ 5312.094737] loop0: detected capacity change from 0 to 614400 [ 5312.477770] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5312.557585] EXT4-fs (loop0): unmounting filesystem. [ 5312.790875] LTP: starting statx07 [ 5313.585428] LTP: starting statx08 [ 5313.695997] loop0: detected capacity change from 0 to 614400 [ 5313.707901] /dev/zero: Can't open blockdev [ 5313.781608] /dev/zero: Can't open blockdev [ 5313.856406] /dev/zero: Can't open blockdev [ 5313.930581] /dev/zero: Can't open blockdev [ 5314.066381] /dev/zero: Can't open blockdev [ 5314.715654] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem [ 5314.754762] EXT4-fs (loop0): mounted filesystem without journal. Quota mode: none. [ 5314.836895] EXT4-fs (loop0): unmounting filesystem. [ 5316.805173] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem [ 5316.866600] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5316.963777] EXT4-fs (loop0): unmounting filesystem. [ 5317.578568] EXT4-fs (loop0): mounted filesystem with ordered data mode. Quota mode: none. [ 5317.660304] EXT4-fs (loop0): unmounting filesystem. [ 5318.525955] XFS (loop0): Mounting V5 Filesystem [ 5318.595726] XFS (loop0): Ending clean mount [ 5318.740983] XFS (loop0): Unmounting Filesystem [ 5319.120119] LTP: starting statx09 [ 5319.259856] LTP: starting io_uring01 [ 5319.378809] LTP: starting io_uring02 [ 5319.475076] LTP: starting perf_event_open03 [ 5319.579163] LTP: starting mm01 (mmap001 -m 10000) [ 5321.185522] LTP: starting mm02 (mmap001) [ 5321.419152] LTP: starting mtest01 (mtest01 -p80) [ 5321.749171] LTP: starting mtest01w (mtest01 -p80 -w) [ 5338.195705] LTP: starting mtest05 (mmstress) [ 5341.254581] LTP: starting mem02 [ 5342.064621] LTP: starting page01 [ 5343.288127] LTP: starting page02 [ 5344.414429] LTP: starting data_space [ 5344.617714] LTP: starting stack_space [ 5344.781002] LTP: starting shmt02 [ 5344.851628] LTP: starting shmt03 [ 5344.919649] LTP: starting shmt04 [ 5345.004539] LTP: starting shmt05 [ 5345.073124] LTP: starting shmt06 [ 5345.159514] LTP: starting shmt07 [ 5345.236757] LTP: starting shmt08 [ 5345.308091] LTP: starting shmt09 [ 5345.393631] LTP: starting shmt10 [ 5345.589243] LTP: starting shm_test01 (shm_test -l 10 -t 2) [-- MARK -- Fri Feb 3 07:15:00 2023] [ 5488.071231] LTP: starting mallocstress01 (mallocstress) [ 5535.589423] LTP: starting mmapstress01 (mmapstress01 -p 20 -t 0.2) [ 5547.694572] LTP: starting mmapstress02 [ 5547.796690] LTP: starting mmapstress03 [ 5547.891705] LTP: starting mmapstress04 [ 5548.053602] LTP: starting mmapstress05 [ 5548.149382] LTP: starting mmapstress06 (mmapstress06 20) [ 5568.229423] LTP: starting mmapstress07 (TMPFILE=`mktemp /tmp/example.XXXXXXXXXXXX`; mmapstress07 $TMPFILE) [ 5568.956642] LTP: starting mmapstress08 [ 5569.016660] LTP: starting mmapstress09 (mmapstress09 -p 20 -t 0.2) [ 5581.134311] LTP: starting mmapstress10 (mmapstress10 -p 20 -t 0.2) [ 5593.246356] LTP: starting cpuset01 [ 5594.047415] LTP: starting thp04 [ 5619.563302] LTP: starting vma01 [ 5619.677794] LTP: starting vma02 [ 5619.884746] LTP: starting vma03 [ 5619.957437] LTP: starting vma04 [ 5620.170809] LTP: starting vma05 (vma05.sh) [ 5620.716148] LTP: starting pth_str01 [ 5620.927141] LTP: starting pth_str02 (pth_str02 -n1000) [ 
5622.329881] LTP: starting pth_str03 [ 5622.534786] LTP: starting cfs_bandwidth01 (cfs_bandwidth01 -i 5) [ 5638.316916] LTP: starting nptl01 [ 5656.954054] LTP: starting pty01 [ 5667.098224] LTP: starting pty02 [ 5667.218132] LTP: starting pty04 [ 5667.683198] SLIP: version 0.8.4-NET3.019-NEWTTY (dynamic channels, max=256). [ 5667.684175] CSLIP: code copyright 1989 Regents of the University of California. [ 5667.685009] SLIP linefill/keepalive option. [ 5669.906501] LTP: starting pty05 [ 5670.070822] N_HDLC line discipline registered with maxframe=4096 [-- MARK -- Fri Feb 3 07:20:00 2023] [ 5856.844573] LTP: starting pty06 [-- MARK -- Fri Feb 3 07:25:00 2023] [ 6205.702285] perf: interrupt took too long (7989 > 7925), lowering kernel.perf_event_max_sample_rate to 25000 [-- MARK -- Fri Feb 3 07:30:00 2023] [ 6606.973261] LTP: starting pty07 [-- MARK -- Fri Feb 3 07:35:00 2023] [-- MARK -- Fri Feb 3 07:40:00 2023] [-- MARK -- Fri Feb 3 07:45:00 2023] [ 7357.980254] LTP: starting ptem01 [ 7358.101335] LTP: starting hangup01 [ 7359.187339] LTP: starting dynamic_debug01 (dynamic_debug01.sh) [ 7484.555386] LTP: starting pt_snapshot_trace_basic (pt_test -m) [ 7484.655016] LTP: starting pt_ex_user (pt_test -e user) [ 7484.706458] LTP: starting pt_ex_kernel (pt_test -e kernel) [ 7484.757560] LTP: starting pt_disable_branch (pt_test -b) [ 7484.808805] LTP: starting gf01 (growfiles -W gf01 -b -e 1 -u -i 0 -L 20 -w -C 1 -l -I r -T 10 -f glseek20 -S 2 -d $TMPDIR) [ 7505.374767] LTP: starting gf02 (growfiles -W gf02 -b -e 1 -L 10 -i 100 -I p -S 2 -u -f gf03_ -d $TMPDIR) [ 7505.820840] LTP: starting gf03 (growfiles -W gf03 -b -e 1 -g 1 -i 1 -S 150 -u -f gf05_ -d $TMPDIR) [ 7506.215314] LTP: starting gf04 (growfiles -W gf04 -b -e 1 -g 4090 -i 500 -t 39000 -u -f gf06_ -d $TMPDIR) [ 7506.520056] LTP: starting gf05 (growfiles -W gf05 -b -e 1 -g 5000 -i 500 -t 49900 -T10 -c9 -I p -u -f gf07_ -d $TMPDIR) [ 7508.048813] LTP: starting gf06 (growfiles -W gf06 -b -e 1 -u -r 1-5000 -R 0--1 -i 0 -L 30 -C 1 -f g_rand10 -S 2 -d $TMPDIR) [-- MARK -- Fri Feb 3 07:50:00 2023] [ 7538.264085] LTP: starting gf07 (growfiles -W gf07 -b -e 1 -u -r 1-5000 -R 0--2 -i 0 -L 30 -C 1 -I p -f g_rand13 -S 2 -d $TMPDIR) [ 7569.272078] LTP: starting gf08 (growfiles -W gf08 -b -e 1 -u -r 1-5000 -R 0--2 -i 0 -L 30 -C 1 -f g_rand11 -S 2 -d $TMPDIR) [ 7600.268002] LTP: starting gf09 (growfiles -W gf09 -b -e 1 -u -r 1-5000 -R 0--1 -i 0 -L 30 -C 1 -I p -f g_rand12 -S 2 -d $TMPDIR) [ 7631.271457] LTP: starting gf10 (growfiles -W gf10 -b -e 1 -u -r 1-5000 -i 0 -L 30 -C 1 -I l -f g_lio14 -S 2 -d $TMPDIR) [ 7662.442709] LTP: starting gf11 (growfiles -W gf11 -b -e 1 -u -r 1-5000 -i 0 -L 30 -C 1 -I L -f g_lio15 -S 2 -d $TMPDIR) [ 7693.421721] LTP: starting gf12 (mkfifo $TMPDIR/gffifo17; growfiles -b -W gf12 -e 1 -u -i 0 -L 30 $TMPDIR/gffifo17) [ 7724.262316] LTP: starting gf13 (mkfifo $TMPDIR/gffifo18; growfiles -b -W gf13 -e 1 -u -i 0 -L 30 -I r -r 1-4096 $TMPDIR/gffifo18) [ 7755.263014] LTP: starting gf14 (growfiles -W gf14 -b -e 1 -u -i 0 -L 20 -w -l -C 1 -T 10 -f glseek19 -S 2 -d $TMPDIR) [ 7776.380627] LTP: starting gf15 (growfiles -W gf15 -b -e 1 -u -r 1-49600 -I r -u -i 0 -L 120 -f Lgfile1 -d $TMPDIR) [-- MARK -- Fri Feb 3 07:55:00 2023] [ 7833.863451] LTP: starting gf16 (growfiles -W gf16 -b -e 1 -i 0 -L 120 -u -g 4090 -T 101 -t 408990 -l -C 10 -c 1000 -S 10 -f Lgf02_ -d $TMPDIR) [ 7954.296625] LTP: starting gf17 (growfiles -W gf17 -b -e 1 -i 0 -L 120 -u -g 5000 -T 101 -t 499990 -l -C 10 -c 1000 -S 10 -f Lgf03_ -d $TMPDIR) [ 8075.424223] 
LTP: starting gf18 (growfiles -W gf18 -b -e 1 -i 0 -L 120 -w -u -r 10-5000 -I r -l -S 2 -f Lgf04_ -d $TMPDIR) [-- MARK -- Fri Feb 3 08:00:00 2023] [ 8197.362360] LTP: starting gf19 (growfiles -W gf19 -b -e 1 -g 5000 -i 500 -t 49900 -T10 -c9 -I p -o O_RDWR,O_CREAT,O_TRUNC -u -f gf08i_ -d $TMPDIR) [ 8202.229539] LTP: starting gf20 (growfiles -W gf20 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 1-256000:512 -R 512-256000 -T 4 -f gfbigio-$$ -d $TMPDIR) [ 8263.293917] LTP: starting gf21 (growfiles -W gf21 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -T 10 -t 20480 -f gf-bld-$$ -d $TMPDIR) [ 8263.425470] LTP: starting gf22 (growfiles -W gf22 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -T 10 -t 20480 -f gf-bldf-$$ -d $TMPDIR) [ 8263.566325] LTP: starting gf23 (growfiles -W gf23 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 512-64000:1024 -R 1-384000 -T 4 -f gf-inf-$$ -d $TMPDIR) [ 8324.278240] LTP: starting gf24 (growfiles -W gf24 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -g 20480 -f gf-jbld-$$ -d $TMPDIR) [ 8324.385039] LTP: starting gf25 (growfiles -W gf25 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 1024000-2048000:2048 -R 4095-2048000 -T 1 -f gf-large-gs-$$ -d $TMPDIR) [ 8324.533331] LTP: starting gf26 (growfiles -W gf26 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -r 128-32768:128 -R 512-64000 -T 4 -f gfsmallio-$$ -d $TMPDIR) [ 8385.279865] LTP: starting gf27 (growfiles -W gf27 -b -D 0 -w -g 8b -C 1 -b -i 1000 -u -f gfsparse-1-$$ -d $TMPDIR) [ 8386.080200] LTP: starting gf28 (growfiles -W gf28 -b -D 0 -w -g 16b -C 1 -b -i 1000 -u -f gfsparse-2-$$ -d $TMPDIR) [ 8387.221041] LTP: starting gf29 (growfiles -W gf29 -b -D 0 -r 1-4096 -R 0-33554432 -i 0 -L 60 -C 1 -u -f gfsparse-3-$$ -d $TMPDIR) [-- MARK -- Fri Feb 3 08:05:00 2023] [ 8447.501462] LTP: starting gf30 (growfiles -W gf30 -D 0 -b -i 0 -L 60 -u -B 1000b -e 1 -o O_RDWR,O_CREAT,O_SYNC -g 20480 -T 10 -t 20480 -f gf-sync-$$ -d $TMPDIR) [ 8448.298320] LTP: starting rwtest01 (export LTPROOT; rwtest -N rwtest01 -c -q -i 60s -f sync 10%25000:$TMPDIR/rw-sync-$$) [ 8513.131615] LTP: starting rwtest02 (export LTPROOT; rwtest -N rwtest02 -c -q -i 60s -f buffered 10%25000:$TMPDIR/rw-buffered-$$) [ 8574.424516] LTP: starting rwtest03 (export LTPROOT; rwtest -N rwtest03 -c -q -i 60s -n 2 -f buffered -s mmread,mmwrite -m random -Dv 10%25000:$TMPDIR/mm-buff-$$) [ 8638.097463] LTP: starting rwtest04 (export LTPROOT; rwtest -N rwtest04 -c -q -i 60s -n 2 -f sync -s mmread,mmwrite -m random -Dv 10%25000:$TMPDIR/mm-sync-$$) [ 8702.157091] LTP: starting rwtest05 (export LTPROOT; rwtest -N rwtest05 -c -q -i 50 -T 64b 500b:$TMPDIR/rwtest01%s) [ 8702.542128] LTP: starting read_all_proc (read_all -d /proc -q -r 3) [ 8703.799137] ICMPv6: process `read_all' is using deprecated sysctl (syscall) net.ipv6.neigh.default.base_reachable_time - use net.ipv6.neigh.default.base_reachable_time_ms instead [ 8720.444273] LTP: starting read_all_sys (read_all -d /sys -q -r 3) [-- MARK -- Fri Feb 3 08:10:00 2023] [ 8730.573364] WARNING! power/level is deprecated; use power/control instead [ 8733.887490] bdi 1:2: the stable_pages_required attribute has been removed. Use the stable_writes queue attribute instead. 
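read_all simply walks the tree it is given and reads every file it can open, which is what trips the two deprecation warnings above when it touches the ipv6 base_reachable_time sysctl and the power/level sysfs attribute. Below is a rough single-threaded shell approximation of a `read_all -d /sys -q -r 3` style sweep; the real LTP binary forks worker processes and bounds each read, which this sketch does not:

    # Read every regular file under /sys three times, ignoring errors.
    # Unlike read_all there is no per-file timeout, so some sysfs files
    # could block; this only illustrates the access pattern.
    for pass in 1 2 3; do
        find /sys -type f 2>/dev/null | while read -r f; do
            head -c 1024 "$f" > /dev/null 2>&1
        done
    done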
[ 8736.968319] LTP: starting binfmt_misc01 (binfmt_misc01.sh) [ 8739.852401] LTP: starting binfmt_misc02 (binfmt_misc02.sh) [ 8743.645302] LTP: starting squashfs01 [ 8743.739489] loop0: detected capacity change from 0 to 2048 [ 8775.731548] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 8777.964601] LTP: starting SETUP01 (dd if=/dev/urandom of=$BIG_FILE bs=4096 count=4096) [ 8778.667814] LTP: starting AIOSETUP01 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/junkfile bs=8192 conv=block,sync) [ 8778.880372] LTP: starting ADS1000 (aio-stress -I500 -o2 -S -r4 -s 128M $SCRATCH_MNT/aiodio/file1) [ 8885.372118] LTP: starting ADS1005 (aio-stress -I500 -o3 -S -r4 -s 128M $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/file2) [ 8885.942472] LTP: starting ADS1008 (aio-stress -I500 -o3 -S -r32 -t4 -s 128M $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/file2 $SCRATCH_MNT/aiodio/file3 $SCRATCH_MNT/aiodio/file4) [ 8886.379138] LTP: starting ADS1014 (aio-stress -I500 -o2 -O -r8 -s 128M $SCRATCH_MNT/aiodio/file1 $SCRATCH_MNT/aiodio/file2) [ 8897.761398] LTP: starting ADS1020 (aio-stress -I500 -o3 -O -r16 -t2 -s 128M $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/file2) [ 8899.945236] LTP: starting ADS1027 (aio-stress -I500 -o0 -S -r8 -s 128M $SCRATCH_MNT/aiodio/file2) [ 9009.897097] LTP: starting ADS1031 (aio-stress -I500 -o1 -S -r4 -t2 -s 128M $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/file2) [ 9010.177764] LTP: starting ADS1040 (aio-stress -I500 -o1 -O -r8 -t2 -s 128M $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/file2) [ 9012.039717] LTP: starting AIOSETUP01 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/junkfile bs=8192 conv=block,sync) [ 9012.620969] LTP: starting AIOSETUP02 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/fff bs=4096 conv=block,sync) [ 9012.988128] LTP: starting AIOSETUP03 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/ff1 bs=2048 conv=block,sync) [ 9013.414716] LTP: starting AIOSETUP04 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/ff2 bs=1024 conv=block,sync) [ 9014.006365] LTP: starting AIOSETUP05 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/ff3 bs=512 conv=block,sync) [ 9015.045729] LTP: starting AD056 (time aiocp -a $BUF_ALIGN -b 4k -n 16 -f DIRECT $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [-- MARK -- Fri Feb 3 08:15:00 2023] [ 9040.943218] LTP: starting AD057 (time aiocp -b 4k -n 16 -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9143.651490] LTP: starting AD058 (time aiocp -a $BUF_ALIGN -b 4k -n 16 -f DIRECT -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9151.215686] perf: interrupt took too long (10004 > 9986), lowering kernel.perf_event_max_sample_rate to 19000 [ 9156.761204] LTP: starting AD130 (time aiocp -a $BUF_ALIGN -b 64k -n 1 -f DIRECT $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9159.649826] LTP: starting AD131 (time aiocp -b 64k -n 1 -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9166.735789] LTP: starting AD132 (time aiocp -a $BUF_ALIGN -b 64k -n 1 -f DIRECT -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9174.391902] LTP: starting AD193 (time aiocp -a $BUF_ALIGN -b 512k -n 1 -f DIRECT $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9175.500777] LTP: starting AD194 (time aiocp -b 512k -n 1 -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9176.902914] LTP: starting AD195 (time aiocp -a $BUF_ALIGN -b 512k -n 1 -f DIRECT -f SYNC $SCRATCH_MNT/aiodio/junkfile $SCRATCH_MNT/aiodio/ff2) [ 9178.631114] LTP: starting AD301 (time aiocp -a $BUF_ALIGN -b 128k -n 32 -f CREAT -f DIRECT 
$SCRATCH_MNT/aiodio/fff $SCRATCH_MNT/aiodio/junkdir/fff) [ 9180.312759] LTP: starting AD302 (time aiocp -a $BUF_ALIGN -b 128k -n 32 -f CREAT -f DIRECT $SCRATCH_MNT/aiodio/ff1 $SCRATCH_MNT/aiodio/junkdir/ff1) [ 9182.078424] LTP: starting AD303 (time aiocp -a $BUF_ALIGN -b 128k -n 32 -f CREAT -f DIRECT $SCRATCH_MNT/aiodio/ff2 $SCRATCH_MNT/aiodio/junkdir/ff2) [ 9183.821132] LTP: starting AD304 (time aiocp -a $BUF_ALIGN -b 128k -n 32 -f CREAT -f DIRECT $SCRATCH_MNT/aiodio/ff3 $SCRATCH_MNT/aiodio/junkdir/ff3) [ 9185.548337] LTP: starting ADSP005 (aiodio_sparse -o 2 -w 4k -s 8k -n 2) [ 9186.693151] LTP: starting ADSP006 (aiodio_sparse -o 2 -w 4k -s 8k -n 2) [ 9187.792144] LTP: starting ADSP007 (aiodio_sparse -o 4 -w 8k -s 32k -n 2) [ 9188.895983] LTP: starting ADSP008 (aiodio_sparse -o 4 -w 16k -s 64k -n 2) [ 9189.976731] LTP: starting ADSP009 (aiodio_sparse -o 4 -w 32k -s 128k -n 2) [ 9191.083202] LTP: starting ADSP010 (aiodio_sparse -o 4 -w 64k -s 256k -n 2) [ 9192.201514] LTP: starting ADSP011 (aiodio_sparse -o 4 -w 128k -s 512k -n 2) [ 9193.315884] LTP: starting ADSP012 (aiodio_sparse -o 4 -w 256k -s 1024k -n 2) [ 9194.476838] LTP: starting ADSP013 (aiodio_sparse -o 4 -w 512k -s 2048k -n 2) [ 9195.676889] LTP: starting ADSP014 (aiodio_sparse -o 4 -w 1024k -s 4096k -n 2) [ 9196.981617] LTP: starting ADSP015 (aiodio_sparse -o 4 -w 2048k -s 8192k -n 2) [ 9198.471951] LTP: starting ADSP016 (aiodio_sparse -o 4 -w 4096k -s 16384k -n 2) [ 9200.366866] LTP: starting ADSP017 (aiodio_sparse -o 4 -w 8192k -s 32768k -n 2) [ 9202.802371] LTP: starting ADSP018 (aiodio_sparse -o 4 -w 16384k -s 65536k -n 2) [ 9207.296690] LTP: starting ADSP045 (dio_sparse -w 4k -s 2k -n 2) [ 9211.597431] LTP: starting ADSP046 (dio_sparse -w 4k -s 4k -n 2) [ 9215.362587] LTP: starting ADSP047 (dio_sparse -w 16k -s 16k -n 2) [ 9219.193733] LTP: starting ADSP048 (dio_sparse -w 32k -s 32k -n 2) [ 9222.603425] LTP: starting ADSP049 (dio_sparse -w 64k -s 64k -n 2) [ 9226.194600] LTP: starting ADSP050 (dio_sparse -w 128k -s 128k -n 2) [ 9229.763863] LTP: starting ADSP051 (dio_sparse -w 256k -s 256k -n 2) [ 9233.245199] LTP: starting ADSP052 (dio_sparse -w 512k -s 512k -n 2) [ 9236.716470] LTP: starting ADSP053 (dio_sparse -w 1024k -s 1024k -n 2) [ 9240.600277] LTP: starting ADSP054 (dio_sparse -w 2048k -s 2048k -n 2) [ 9244.236422] LTP: starting ADSP055 (dio_sparse -w 4096k -s 4096k -n 2) [ 9247.889958] LTP: starting ADSP056 (dio_sparse -w 8192k -s 8192k -n 2) [ 9252.028307] LTP: starting AIOSETUP01 (dd if=$BIG_FILE of=$SCRATCH_MNT/aiodio/junkfile bs=8192 conv=block,sync) [ 9252.583558] LTP: starting FSX032 (fsx-linux -d -l 500000 -r 4096 -t 4096 -w 4096 -N 1000 $SCRATCH_MNT/aiodio/junkfile) [ 9254.663083] LTP: starting FSX042 (fsx-linux -N 1000 -o 4096 $SCRATCH_MNT/aiodio/junkfile) [ 9256.110026] LTP: starting proc01 (proc01 -m 128) [ 9256.853338] ICMPv6: process `proc01' is using deprecated sysctl (syscall) net.ipv6.neigh.default.base_reachable_time - use net.ipv6.neigh.default.base_reachable_time_ms instead [-- MARK -- Fri Feb 3 08:20:00 2023] [ 9423.138543] Running test [R:13330040 T:8 - load/unload kernel module test - bare_metal - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] ** Attempting to load blowfish... ** ** Attempting to unload blowfish... ** ** Attempting to load 8021q... ** [ 9466.568527] 8021q: 802.1Q VLAN Support v1.8 ** Attempting to unload 8021q... ** ** Attempting to load act_bpf... ** ** Attempting to unload act_bpf... ** ** Attempting to load act_csum... ** ** Attempting to unload act_csum... 
** ** Attempting to load act_gact... ** [ 9471.684635] GACT probability on ** Attempting to unload act_gact... ** ** Attempting to load act_mirred... ** [ 9473.298071] Mirror/redirect action on ** Attempting to unload act_mirred... ** ** Attempting to load act_pedit... ** ** Attempting to unload act_pedit... ** ** Attempting to load act_police... ** ** Attempting to unload act_police... ** ** Attempting to load act_sample... ** ** Attempting to unload act_sample... ** ** Attempting to load act_skbedit... ** ** Attempting to unload act_skbedit... ** ** Attempting to load act_tunnel_key... ** ** Attempting to unload act_tunnel_key... ** ** Attempting to load act_vlan... ** ** Attempting to unload act_vlan... ** ** Attempting to load adiantum... ** ** Attempting to unload adiantum... ** ** Attempting to load af_key... ** [ 9486.748663] NET: Registered PF_KEY protocol family ** Attempting to unload af_key... ** [ 9487.307424] NET: Unregistered PF_KEY protocol family ** Attempting to load ah4... ** ** Attempting to unload ah4... ** ** Attempting to load ah6... ** ** Attempting to unload ah6... ** ** Attempting to load ansi_cprng... ** [ 9491.941466] alg: No test for fips(ansi_cprng) (fips_ansi_cprng) ** Attempting to unload ansi_cprng... ** ** Attempting to load apple_bl... ** ** Attempting to unload apple_bl... ** ** Attempting to load aquantia... ** ** Attempting to unload aquantia... ** ** Attempting to load arc_ps2... ** ** Attempting to unload arc_ps2... ** ** Attempting to load arp_tables... ** [ 9499.489877] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arp_tables... ** ** Attempting to load arpt_mangle... ** ** Attempting to unload arpt_mangle... ** ** Attempting to load arptable_filter... ** [ 9502.695767] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arptable_filter... ** ** Attempting to load asym_tpm... ** ** Attempting to unload asym_tpm... ** ** Attempting to load async_memcpy... ** [ 9505.971154] async_tx: api initialized (async) ** Attempting to unload async_memcpy... ** ** Attempting to load async_pq... ** [ 9507.615321] raid6: skip pq benchmark and using algorithm sse2x4 [ 9507.616180] raid6: using ssse3x2 recovery algorithm [ 9507.630529] async_tx: api initialized (async) ** Attempting to unload async_pq... ** ** Attempting to load async_raid6_recov... ** [ 9509.339241] raid6: skip pq benchmark and using algorithm sse2x4 [ 9509.340078] raid6: using ssse3x2 recovery algorithm [ 9509.354430] async_tx: api initialized (async) ** Attempting to unload async_raid6_recov... ** ** Attempting to load async_tx... ** [ 9511.073781] async_tx: api initialized (async) ** Attempting to unload async_tx... ** ** Attempting to load async_xor... ** [ 9512.659715] async_tx: api initialized (async) ** Attempting to unload async_xor... ** ** Attempting to load bareudp... ** ** Attempting to unload bareudp... ** ** Attempting to load blowfish_common... ** ** Attempting to unload blowfish_common... ** ** Attempting to load blowfish_generic... ** ** Attempting to unload blowfish_generic... ** ** Attempting to load bluetooth... 
** [ 9520.543940] Bluetooth: Core ver 2.22 [ 9520.544594] NET: Registered PF_BLUETOOTH protocol family [ 9520.544938] Bluetooth: HCI device and connection manager initialized [ 9520.545622] Bluetooth: HCI socket layer initialized [ 9520.546886] Bluetooth: L2CAP socket layer initialized [ 9520.547412] Bluetooth: SCO socket layer initialized ** Attempting to unload bluetooth... ** [ 9521.058853] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bnep... ** [ 9522.507263] Bluetooth: Core ver 2.22 [ 9522.507999] NET: Registered PF_BLUETOOTH protocol family [ 9522.508394] Bluetooth: HCI device and connection manager initialized [ 9522.509040] Bluetooth: HCI socket layer initialized [ 9522.510051] Bluetooth: L2CAP socket layer initialized [ 9522.510611] Bluetooth: SCO socket layer initialized [ 9522.525248] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 9522.525685] Bluetooth: BNEP filters: protocol multicast [ 9522.526797] Bluetooth: BNEP socket layer initialized ** Attempting to unload bnep... ** [ 9523.065841] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bonding... ** ** Attempting to unload bonding... ** ** Attempting to load br_netfilter... ** [ 9526.278915] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. [ 9526.293913] Bridge firewalling registered ** Attempting to unload br_netfilter... ** ** Attempting to load bridge... ** [ 9528.276902] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload bridge... ** ** Attempting to load bsd_comp... ** [ 9529.979750] PPP generic driver version 2.4.2 [ 9529.996583] PPP BSD Compression module registered ** Attempting to unload bsd_comp... ** ** Attempting to load cachefiles... ** [ 9531.822908] CacheFiles: Loaded ** Attempting to unload cachefiles... ** [ 9532.313637] CacheFiles: Unloading ** Attempting to load camellia_generic... ** ** Attempting to unload camellia_generic... ** ** Attempting to load can... ** [ 9534.979139] can: controller area network core [ 9534.983084] NET: Registered PF_CAN protocol family ** Attempting to unload can... ** [ 9535.524855] NET: Unregistered PF_CAN protocol family ** Attempting to load can_bcm... ** [ 9536.613815] can: controller area network core [ 9536.616058] NET: Registered PF_CAN protocol family [ 9536.638400] can: broadcast manager protocol ** Attempting to unload can_bcm... ** [ 9537.221852] NET: Unregistered PF_CAN protocol family ** Attempting to load can_dev... ** ** Attempting to unload can_dev... ** ** Attempting to load can_gw... ** [ 9539.927819] can: controller area network core [ 9539.930035] NET: Registered PF_CAN protocol family [ 9539.950336] can: netlink gateway - max_hops=1 ** Attempting to unload can_gw... ** [ 9540.475883] NET: Unregistered PF_CAN protocol family ** Attempting to load can_isotp... ** [ 9541.572484] can: controller area network core [ 9541.574805] NET: Registered PF_CAN protocol family [ 9541.609540] can: isotp protocol ** Attempting to unload can_isotp... ** [ 9542.167891] NET: Unregistered PF_CAN protocol family ** Attempting to load can_j1939... ** [ 9543.315701] can: controller area network core [ 9543.317975] NET: Registered PF_CAN protocol family [ 9543.357839] can: SAE J1939 ** Attempting to unload can_j1939... ** [ 9543.923909] NET: Unregistered PF_CAN protocol family ** Attempting to load can_raw... 
** [ 9545.041070] can: controller area network core [ 9545.043427] NET: Registered PF_CAN protocol family [ 9545.060965] can: raw protocol ** Attempting to unload can_raw... ** [ 9545.611944] NET: Unregistered PF_CAN protocol family ** Attempting to load cast5_generic... ** ** Attempting to unload cast5_generic... ** ** Attempting to load cast6_generic... ** ** Attempting to unload cast6_generic... ** ** Attempting to load cdc_acm... ** [ 9549.960108] usbcore: registered new interface driver cdc_acm [ 9549.960967] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters ** Attempting to unload cdc_acm... ** [ 9550.473761] usbcore: deregistering interface driver cdc_acm ** Attempting to load ceph... ** [ 9551.943570] Key type ceph registered [ 9551.947819] libceph: loaded (mon/osd proto 15/24) [ 9552.168457] ceph: loaded (mds proto 32) ** Attempting to unload ceph... ** [ 9552.739632] Key type ceph unregistered ** Attempting to load chacha20poly1305... ** ** Attempting to unload chacha20poly1305... ** ** Attempting to load cifs... ** [ 9557.223285] Key type cifs.spnego registered [ 9557.223609] Key type cifs.idmap registered ** Attempting to unload cifs... ** [ 9557.788186] Key type cifs.idmap unregistered [ 9557.789138] Key type cifs.spnego unregistered ** Attempting to load cls_bpf... ** ** Attempting to unload cls_bpf... ** ** Attempting to load cls_flow... ** ** Attempting to unload cls_flow... ** ** Attempting to load cls_flower... ** ** Attempting to unload cls_flower... ** ** Attempting to load cls_fw... ** ** Attempting to unload cls_fw... ** ** Attempting to load cls_matchall... ** ** Attempting to unload cls_matchall... ** ** Attempting to load cls_u32... ** [ 9567.901535] u32 classifier [ 9567.901722] Performance counters on [ 9567.902073] input device check on [ 9567.902374] Actions configured ** Attempting to unload cls_u32... ** ** Attempting to load cordic... ** ** Attempting to unload cordic... ** ** Attempting to load cqhci... ** ** Attempting to unload cqhci... ** ** Attempting to load crc_itu_t... ** ** Attempting to unload crc_itu_t... ** ** Attempting to load crc32_generic... ** ** Attempting to unload crc32_generic... ** ** Attempting to load crc7... ** ** Attempting to unload crc7... ** ** Attempting to load crc8... ** ** Attempting to unload crc8... ** ** Attempting to load des_generic... ** ** Attempting to unload des_generic... ** ** Attempting to load diag... ** [ 9582.677850] tipc: Activated (version 2.0.0) [ 9582.680801] NET: Registered PF_TIPC protocol family [ 9582.682918] tipc: Started in single node mode ** Attempting to unload diag... ** [ 9583.270239] NET: Unregistered PF_TIPC protocol family [ 9583.448746] tipc: Deactivated ** Attempting to load dm_bio_prison... ** ** Attempting to unload dm_bio_prison... ** ** Attempting to load dm_bufio... ** ** Attempting to unload dm_bufio... ** ** Attempting to load dm_cache_smq... ** ** Attempting to unload dm_cache_smq... ** ** Attempting to load dm_cache... ** ** Attempting to unload dm_cache... ** ** Attempting to load dm_crypt... ** ** Attempting to unload dm_crypt... ** ** Attempting to load dm_delay... ** ** Attempting to unload dm_delay... ** ** Attempting to load dm_era... ** ** Attempting to unload dm_era... ** ** Attempting to load dm_flakey... ** ** Attempting to unload dm_flakey... ** ** Attempting to load dm_integrity... ** [ 9598.172451] async_tx: api initialized (async) ** Attempting to unload dm_integrity... ** ** Attempting to load dm_io_affinity... 
** ** Attempting to unload dm_io_affinity... ** ** Attempting to load dm_log_userspace... ** [ 9601.585659] device-mapper: dm-log-userspace: version 1.3.0 loaded ** Attempting to unload dm_log_userspace... ** [ 9602.112596] device-mapper: dm-log-userspace: version 1.3.0 unloaded ** Attempting to load dm_log_writes... ** ** Attempting to unload dm_log_writes... ** ** Attempting to load dm_multipath... ** ** Attempting to unload dm_multipath... ** ** Attempting to load dm_persistent_data... ** ** Attempting to unload dm_persistent_data... ** ** Attempting to load dm_queue_length... ** [ 9609.464026] device-mapper: multipath queue-length: version 0.2.0 loaded ** Attempting to unload dm_queue_length... ** ** Attempting to load dm_raid... ** [ 9611.123915] raid6: skip pq benchmark and using algorithm sse2x4 [ 9611.124715] raid6: using ssse3x2 recovery algorithm [ 9611.139827] async_tx: api initialized (async) [ 9611.367096] device-mapper: raid: Loading target version 1.15.1 ** Attempting to unload dm_raid... ** ** Attempting to load dm_round_robin... ** [ 9613.621773] device-mapper: multipath round-robin: version 1.2.0 loaded ** Attempting to unload dm_round_robin... ** ** Attempting to load dm_service_time... ** [ 9615.323214] device-mapper: multipath service-time: version 0.3.0 loaded ** Attempting to unload dm_service_time... ** ** Attempting to load dm_snapshot... ** ** Attempting to unload dm_snapshot... ** ** Attempting to load dm_switch... ** ** Attempting to unload dm_switch... ** ** Attempting to load dm_thin_pool... ** ** Attempting to unload dm_thin_pool... ** ** Attempting to load dm_verity... ** ** Attempting to unload dm_verity... ** ** Attempting to load dm_writecache... ** ** Attempting to unload dm_writecache... ** ** Attempting to load dm_zero... ** ** Attempting to unload dm_zero... ** [-- MARK -- Fri Feb 3 08:25:00 2023] ** Attempting to load dummy... ** ** Attempting to unload dummy... ** ** Attempting to load ebt_802_3... ** ** Attempting to unload ebt_802_3... ** ** Attempting to load ebt_among... ** ** Attempting to unload ebt_among... ** ** Attempting to load ebt_arp... ** ** Attempting to unload ebt_arp... ** ** Attempting to load ebt_arpreply... ** ** Attempting to unload ebt_arpreply... ** ** Attempting to load ebt_dnat... ** ** Attempting to unload ebt_dnat... ** ** Attempting to load ebt_ip... ** ** Attempting to unload ebt_ip... ** ** Attempting to load ebt_ip6... ** ** Attempting to unload ebt_ip6... ** ** Attempting to load ebt_limit... ** ** Attempting to unload ebt_limit... ** ** Attempting to load ebt_log... ** ** Attempting to unload ebt_log... ** ** Attempting to load ebt_mark... ** ** Attempting to unload ebt_mark... ** ** Attempting to load ebt_mark_m... ** ** Attempting to unload ebt_mark_m... ** ** Attempting to load ebt_nflog... ** ** Attempting to unload ebt_nflog... ** ** Attempting to load ebt_pkttype... ** ** Attempting to unload ebt_pkttype... ** ** Attempting to load ebt_redirect... ** ** Attempting to unload ebt_redirect... ** ** Attempting to load ebt_snat... ** ** Attempting to unload ebt_snat... ** ** Attempting to load ebt_stp... ** ** Attempting to unload ebt_stp... ** ** Attempting to load ebt_vlan... ** ** Attempting to unload ebt_vlan... ** ** Attempting to load ebtable_broute... ** [ 9655.336028] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_broute... ** ** Attempting to load ebtable_filter... 
** [ 9657.023202] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_filter... ** ** Attempting to load ebtable_nat... ** [ 9658.660820] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_nat... ** ** Attempting to load ebtables... ** [ 9660.321018] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtables... ** ** Attempting to load echainiv... ** ** Attempting to unload echainiv... ** ** Attempting to load enclosure... ** ** Attempting to unload enclosure... ** ** Attempting to load esp4... ** ** Attempting to unload esp4... ** ** Attempting to load esp4_offload... ** ** Attempting to unload esp4_offload... ** ** Attempting to load esp6... ** ** Attempting to unload esp6... ** ** Attempting to load esp6_offload... ** ** Attempting to unload esp6_offload... ** ** Attempting to load essiv... ** ** Attempting to unload essiv... ** ** Attempting to load failover... ** ** Attempting to unload failover... ** ** Attempting to load faulty... ** ** Attempting to unload faulty... ** ** Attempting to load fcrypt... ** ** Attempting to unload fcrypt... ** ** Attempting to load geneve... ** ** Attempting to unload geneve... ** ** Attempting to load gfs2... ** [ 9684.595709] DLM installed [ 9684.781683] gfs2: GFS2 installed ** Attempting to unload gfs2... ** ** Attempting to load hci_uart... ** [ 9686.882333] Bluetooth: Core ver 2.22 [ 9686.883469] NET: Registered PF_BLUETOOTH protocol family [ 9686.883869] Bluetooth: HCI device and connection manager initialized [ 9686.884538] Bluetooth: HCI socket layer initialized [ 9686.885284] Bluetooth: L2CAP socket layer initialized [ 9686.885852] Bluetooth: SCO socket layer initialized [ 9686.908112] Bluetooth: HCI UART driver ver 2.3 [ 9686.908916] Bluetooth: HCI UART protocol H4 registered [ 9686.909259] Bluetooth: HCI UART protocol BCSP registered [ 9686.909594] Bluetooth: HCI UART protocol ATH3K registered ** Attempting to unload hci_uart... ** [ 9687.453131] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hci_vhci... ** [ 9688.819297] Bluetooth: Core ver 2.22 [ 9688.820048] NET: Registered PF_BLUETOOTH protocol family [ 9688.820477] Bluetooth: HCI device and connection manager initialized [ 9688.821164] Bluetooth: HCI socket layer initialized [ 9688.822310] Bluetooth: L2CAP socket layer initialized [ 9688.822921] Bluetooth: SCO socket layer initialized ** Attempting to unload hci_vhci... ** [ 9689.382145] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hidp... ** [ 9690.762703] Bluetooth: Core ver 2.22 [ 9690.763377] NET: Registered PF_BLUETOOTH protocol family [ 9690.763702] Bluetooth: HCI device and connection manager initialized [ 9690.764342] Bluetooth: HCI socket layer initialized [ 9690.765125] Bluetooth: L2CAP socket layer initialized [ 9690.765659] Bluetooth: SCO socket layer initialized [ 9690.781729] Bluetooth: HIDP (Human Interface Emulation) ver 1.2 [ 9690.782686] Bluetooth: HIDP socket layer initialized ** Attempting to unload hidp... ** [ 9691.324224] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load iavf... ** [ 9692.990671] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver [ 9692.991578] Copyright (c) 2013 - 2018 Intel Corporation. ** Attempting to unload iavf... 
** ** Attempting to load ib_cm... ** ** Attempting to unload ib_cm... ** ** Attempting to load ib_core... ** ** Attempting to unload ib_core... ** ** Attempting to load ib_iser... ** [ 9699.459227] Loading iSCSI transport class v2.0-870. [ 9699.546255] iscsi: registered transport (iser) ** Attempting to unload ib_iser... ** ** Attempting to load ib_isert... ** [ 9702.403299] Rounding down aligned max_sectors from 4294967295 to 4294967288 [ 9702.405125] db_root: cannot open: /etc/target ** Attempting to unload ib_isert... ** ** Attempting to load ib_srp... ** ** Attempting to unload ib_srp... ** ** Attempting to load ib_srpt... ** [ 9708.330262] Rounding down aligned max_sectors from 4294967295 to 4294967288 [ 9708.332590] db_root: cannot open: /etc/target ** Attempting to unload ib_srpt... ** ** Attempting to load ib_umad... ** ** Attempting to unload ib_umad... ** ** Attempting to load ib_uverbs... ** ** Attempting to unload ib_uverbs... ** ** Attempting to load ieee802154_6lowpan... ** ** Attempting to unload ieee802154_6lowpan... ** ** Attempting to load ieee802154_socket... ** [ 9716.892839] NET: Registered PF_IEEE802154 protocol family ** Attempting to unload ieee802154_socket... ** [ 9717.415258] NET: Unregistered PF_IEEE802154 protocol family ** Attempting to load ifb... ** ** Attempting to unload ifb... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ip6_gre... ** [ 9723.662899] gre: GRE over IPv4 demultiplexor driver [ 9723.690455] ip6_gre: GRE over IPv6 tunneling driver ** Attempting to unload ip6_gre... ** ** Attempting to load ip6_tables... ** [ 9725.477580] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6_tables... ** ** Attempting to load ip6_tunnel... ** ** Attempting to unload ip6_tunnel... ** ** Attempting to load ip6_udp_tunnel... ** ** Attempting to unload ip6_udp_tunnel... ** ** Attempting to load ip6_vti... ** ** Attempting to unload ip6_vti... ** ** Attempting to load ip6t_NPT... ** ** Attempting to unload ip6t_NPT... ** ** Attempting to load ip6t_REJECT... ** ** Attempting to unload ip6t_REJECT... ** ** Attempting to load ip6t_SYNPROXY... ** ** Attempting to unload ip6t_SYNPROXY... ** ** Attempting to load ip6t_ah... ** ** Attempting to unload ip6t_ah... ** ** Attempting to load ip6t_eui64... ** ** Attempting to unload ip6t_eui64... ** ** Attempting to load ip6t_frag... ** ** Attempting to unload ip6t_frag... ** ** Attempting to load ip6t_hbh... ** ** Attempting to unload ip6t_hbh... ** ** Attempting to load ip6t_ipv6header... ** ** Attempting to unload ip6t_ipv6header... ** ** Attempting to load ip6t_mh... ** ** Attempting to unload ip6t_mh... ** ** Attempting to load ip6t_rpfilter... ** ** Attempting to unload ip6t_rpfilter... ** ** Attempting to load ip6t_rt... ** ** Attempting to unload ip6t_rt... ** ** Attempting to load ip6table_filter... ** [ 9749.958177] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_filter... ** ** Attempting to load ip6table_mangle... ** [ 9751.577262] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_mangle... ** ** Attempting to load ip6table_nat... 
** [ 9753.488863] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_nat... ** ** Attempting to load ip6table_raw... ** [ 9756.438815] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_raw... ** ** Attempting to load ip6table_security... ** [ 9758.070164] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_security... ** ** Attempting to load ip_gre... ** [ 9759.709749] gre: GRE over IPv4 demultiplexor driver [ 9759.760181] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload ip_gre... ** ** Attempting to load ipip... ** [ 9761.590822] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ip_set... ** ** Attempting to unload ip_set... ** ** Attempting to load ip_set_bitmap_ip... ** ** Attempting to unload ip_set_bitmap_ip... ** ** Attempting to load ip_set_bitmap_ipmac... ** ** Attempting to unload ip_set_bitmap_ipmac... ** ** Attempting to load ip_set_bitmap_port... ** ** Attempting to unload ip_set_bitmap_port... ** ** Attempting to load ip_set_hash_ip... ** ** Attempting to unload ip_set_hash_ip... ** ** Attempting to load ip_set_hash_ipmac... ** ** Attempting to unload ip_set_hash_ipmac... ** ** Attempting to load ip_set_hash_ipmark... ** ** Attempting to unload ip_set_hash_ipmark... ** ** Attempting to load ip_set_hash_ipport... ** ** Attempting to unload ip_set_hash_ipport... ** ** Attempting to load ip_set_hash_ipportip... ** ** Attempting to unload ip_set_hash_ipportip... ** ** Attempting to load ip_set_hash_ipportnet... ** ** Attempting to unload ip_set_hash_ipportnet... ** ** Attempting to load ip_set_hash_mac... ** ** Attempting to unload ip_set_hash_mac... ** ** Attempting to load ip_set_hash_net... ** ** Attempting to unload ip_set_hash_net... ** ** Attempting to load ip_set_hash_netiface... ** ** Attempting to unload ip_set_hash_netiface... ** ** Attempting to load ip_set_hash_netnet... ** ** Attempting to unload ip_set_hash_netnet... ** ** Attempting to load ip_set_hash_netport... ** ** Attempting to unload ip_set_hash_netport... ** ** Attempting to load ip_set_hash_netportnet... ** ** Attempting to unload ip_set_hash_netportnet... ** ** Attempting to load ip_set_list_set... ** ** Attempting to unload ip_set_list_set... ** ** Attempting to load ip_tables... ** [ 9793.080506] Warning: Deprecated Driver is detected: iptables will not be maintained in a future major release and may be disabled ** Attempting to unload ip_tables... ** ** Attempting to load ip_tunnel... ** ** Attempting to unload ip_tunnel... ** ** Attempting to load ip_vs... ** [ 9796.673327] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9796.675481] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9796.676429] IPVS: Each connection entry needs 416 bytes at least [ 9796.678623] IPVS: ipvs loaded. ** Attempting to unload ip_vs... ** [ 9797.198753] IPVS: ipvs unloaded. ** Attempting to load ip_vs_dh... ** [ 9798.812395] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9798.814500] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9798.815480] IPVS: Each connection entry needs 416 bytes at least [ 9798.817527] IPVS: ipvs loaded. [ 9798.833724] IPVS: [dh] scheduler registered. 
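Each ip_vs_* probe in the sweep below pulls in the IPVS core again, so the same three lines repeat every time: a 4096-bucket connection hash table reported as 32 Kbytes, and a 416-byte minimum per connection entry. The 32 Kbytes figure is consistent with 4096 buckets times an 8-byte hlist_head pointer on x86_64, and the bucket count is normally set by the ip_vs conn_tab_bits module parameter (2^12 = 4096 by default). A quick check of the arithmetic, with the 14-bit value as a purely illustrative alternative:

    echo $(( 4096 * 8 / 1024 ))          # 32 -> "memory=32Kbytes"
    # modprobe ip_vs conn_tab_bits=14    # would give a 16384-bucket (128 Kbyte) table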
** Attempting to unload ip_vs_dh... ** [ 9799.367527] IPVS: [dh] scheduler unregistered. [ 9799.424681] IPVS: ipvs unloaded. ** Attempting to load ip_vs_fo... ** [ 9801.066075] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9801.068287] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9801.069206] IPVS: Each connection entry needs 416 bytes at least [ 9801.071270] IPVS: ipvs loaded. [ 9801.085036] IPVS: [fo] scheduler registered. ** Attempting to unload ip_vs_fo... ** [ 9801.620195] IPVS: [fo] scheduler unregistered. [ 9801.687693] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ftp... ** [ 9803.372973] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9803.375257] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9803.376245] IPVS: Each connection entry needs 416 bytes at least [ 9803.378163] IPVS: ipvs loaded. ** Attempting to unload ip_vs_ftp... ** [ 9805.048564] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblc... ** [ 9806.705990] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9806.708143] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9806.709056] IPVS: Each connection entry needs 416 bytes at least [ 9806.711146] IPVS: ipvs loaded. [ 9806.729569] IPVS: [lblc] scheduler registered. ** Attempting to unload ip_vs_lblc... ** [ 9807.257199] IPVS: [lblc] scheduler unregistered. [ 9807.321733] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblcr... ** [ 9808.932309] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9808.934367] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9808.935317] IPVS: Each connection entry needs 416 bytes at least [ 9808.937303] IPVS: ipvs loaded. [ 9808.956603] IPVS: [lblcr] scheduler registered. ** Attempting to unload ip_vs_lblcr... ** [ 9809.438858] IPVS: [lblcr] scheduler unregistered. [ 9809.494789] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lc... ** [ 9811.015510] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9811.017252] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9811.018157] IPVS: Each connection entry needs 416 bytes at least [ 9811.019705] IPVS: ipvs loaded. [ 9811.029684] IPVS: [lc] scheduler registered. ** Attempting to unload ip_vs_lc... ** [ 9811.547687] IPVS: [lc] scheduler unregistered. [ 9811.598795] IPVS: ipvs unloaded. ** Attempting to load ip_vs_nq... ** [ 9813.158942] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9813.161148] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9813.162134] IPVS: Each connection entry needs 416 bytes at least [ 9813.164112] IPVS: ipvs loaded. [ 9813.181345] IPVS: [nq] scheduler registered. ** Attempting to unload ip_vs_nq... ** [ 9813.690131] IPVS: [nq] scheduler unregistered. [ 9813.746749] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ovf... ** [ 9815.348482] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9815.350742] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9815.351650] IPVS: Each connection entry needs 416 bytes at least [ 9815.353624] IPVS: ipvs loaded. [ 9815.368432] IPVS: [ovf] scheduler registered. ** Attempting to unload ip_vs_ovf... ** [ 9815.885060] IPVS: [ovf] scheduler unregistered. [ 9815.936821] IPVS: ipvs unloaded. ** Attempting to load ip_vs_pe_sip... 
** [ 9817.584050] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9817.586264] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9817.587631] IPVS: Each connection entry needs 416 bytes at least [ 9817.589574] IPVS: ipvs loaded. [ 9817.609920] IPVS: [sip] pe registered. ** Attempting to unload ip_vs_pe_sip... ** [ 9818.080095] IPVS: [sip] pe unregistered. [ 9822.376718] IPVS: ipvs unloaded. ** Attempting to load ip_vs_rr... ** [ 9824.064709] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9824.066818] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9824.067732] IPVS: Each connection entry needs 416 bytes at least [ 9824.069675] IPVS: ipvs loaded. [ 9824.085658] IPVS: [rr] scheduler registered. ** Attempting to unload ip_vs_rr... ** [ 9824.610820] IPVS: [rr] scheduler unregistered. [ 9824.661742] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sed... ** [ 9826.305866] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9826.307966] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9826.308931] IPVS: Each connection entry needs 416 bytes at least [ 9826.310862] IPVS: ipvs loaded. [ 9826.325927] IPVS: [sed] scheduler registered. ** Attempting to unload ip_vs_sed... ** [ 9826.821841] IPVS: [sed] scheduler unregistered. [ 9826.872852] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sh... ** [ 9828.383830] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9828.385329] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9828.386604] IPVS: Each connection entry needs 416 bytes at least [ 9828.388477] IPVS: ipvs loaded. [ 9828.402792] IPVS: [sh] scheduler registered. ** Attempting to unload ip_vs_sh... ** [ 9828.926580] IPVS: [sh] scheduler unregistered. [ 9829.001936] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wlc... ** [ 9830.638961] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9830.640869] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9830.641798] IPVS: Each connection entry needs 416 bytes at least [ 9830.643650] IPVS: ipvs loaded. [ 9830.652795] IPVS: [wlc] scheduler registered. ** Attempting to unload ip_vs_wlc... ** [ 9831.169309] IPVS: [wlc] scheduler unregistered. [ 9831.230990] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wrr... ** [ 9832.818176] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [ 9832.820356] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [ 9832.821227] IPVS: Each connection entry needs 416 bytes at least [ 9832.828031] IPVS: ipvs loaded. [ 9832.845941] IPVS: [wrr] scheduler registered. ** Attempting to unload ip_vs_wrr... ** [ 9833.382196] IPVS: [wrr] scheduler unregistered. [ 9833.434961] IPVS: ipvs unloaded. ** Attempting to load ip_vti... ** [ 9834.725646] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ipip... ** [ 9839.838997] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ipvlan... ** ** Attempting to unload ipvlan... ** ** Attempting to load ipvtap... ** ** Attempting to unload ipvtap... ** ** Attempting to load ip_vti... ** [ 9844.964515] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load isofs... ** ** Attempting to unload isofs... 
** [ 9847.333203] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load iw_cm... ** ** Attempting to unload iw_cm... ** ** Attempting to load kheaders... ** ** Attempting to unload kheaders... ** ** Attempting to load kmem... ** ** Attempting to unload kmem... ** ** Attempting to load linear... ** ** Attempting to unload linear... ** ** Attempting to load llc... ** ** Attempting to unload llc... ** ** Attempting to load lrw... ** ** Attempting to unload lrw... ** ** Attempting to load lz4_compress... ** ** Attempting to unload lz4_compress... ** ** Attempting to load mac_celtic... ** ** Attempting to unload mac_celtic... ** ** Attempting to load mac_centeuro... ** ** Attempting to unload mac_centeuro... ** ** Attempting to load mac_croatian... ** ** Attempting to unload mac_croatian... ** ** Attempting to load mac_cyrillic... ** ** Attempting to unload mac_cyrillic... ** ** Attempting to load mac_gaelic... ** ** Attempting to unload mac_gaelic... ** ** Attempting to load mac_greek... ** ** Attempting to unload mac_greek... ** ** Attempting to load mac_iceland... ** ** Attempting to unload mac_iceland... ** ** Attempting to load mac_inuit... ** ** Attempting to unload mac_inuit... ** ** Attempting to load mac_roman... ** ** Attempting to unload mac_roman... ** ** Attempting to load mac_romanian... ** ** Attempting to unload mac_romanian... ** ** Attempting to load mac_turkish... ** ** Attempting to unload mac_turkish... ** ** Attempting to load macsec... ** [ 9879.506290] MACsec IEEE 802.1AE ** Attempting to unload macsec... ** ** Attempting to load macvlan... ** ** Attempting to unload macvlan... ** ** Attempting to load macvtap... ** ** Attempting to unload macvtap... ** ** Attempting to load md4... ** ** Attempting to unload md4... ** ** Attempting to load michael_mic... ** ** Attempting to unload michael_mic... ** ** Attempting to load mip6... ** [ 9888.712086] mip6: Mobile IPv6 ** Attempting to unload mip6... ** ** Attempting to load mpt3sas... ** [ 9891.160085] mpt3sas version 43.100.00.00 loaded ** Attempting to unload mpt3sas... ** [ 9891.667246] mpt3sas version 43.100.00.00 unloading ** Attempting to load msdos... ** ** Attempting to unload msdos... ** ** Attempting to load mtd... ** ** Attempting to unload mtd... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nd_blk... ** ** Attempting to unload nd_blk... ** ** Attempting to load nd_btt... ** ** Attempting to unload nd_btt... ** ** Attempting to load nd_pmem... ** ** Attempting to unload nd_pmem... ** ** Attempting to load net_failover... ** ** Attempting to unload net_failover... ** ** Attempting to load netconsole... ** [ 9904.962314] printk: console [netcon0] enabled [ 9904.963099] netconsole: network logging started ** Attempting to unload netconsole... ** [ 9905.455321] printk: console [netcon_ext0] disabled [ 9905.456366] printk: console [netcon0] disabled ** Attempting to load nf_conncount... ** ** Attempting to unload nf_conncount... ** ** Attempting to load nf_conntrack... ** ** Attempting to unload nf_conntrack... ** ** Attempting to load nf_conntrack_amanda... ** ** Attempting to unload nf_conntrack_amanda... ** ** Attempting to load nf_conntrack_bridge... ** [ 9915.182371] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nf_conntrack_bridge... ** ** Attempting to load nf_conntrack_broadcast... ** ** Attempting to unload nf_conntrack_broadcast... 
** ** Attempting to load nf_conntrack_ftp... ** ** Attempting to unload nf_conntrack_ftp... ** ** Attempting to load nf_conntrack_h323... ** ** Attempting to unload nf_conntrack_h323... ** [-- MARK -- Fri Feb 3 08:30:00 2023] ** Attempting to load nf_conntrack_irc... ** ** Attempting to unload nf_conntrack_irc... ** ** Attempting to load nf_conntrack_netbios_ns... ** ** Attempting to unload nf_conntrack_netbios_ns... ** ** Attempting to load nf_conntrack_netlink... ** ** Attempting to unload nf_conntrack_netlink... ** ** Attempting to load nf_conntrack_pptp... ** ** Attempting to unload nf_conntrack_pptp... ** ** Attempting to load nf_conntrack_sane... ** ** Attempting to unload nf_conntrack_sane... ** ** Attempting to load nf_conntrack_sip... ** ** Attempting to unload nf_conntrack_sip... ** ** Attempting to load nf_conntrack_snmp... ** ** Attempting to unload nf_conntrack_snmp... ** ** Attempting to load nf_conntrack_tftp... ** ** Attempting to unload nf_conntrack_tftp... ** ** Attempting to load nf_defrag_ipv4... ** ** Attempting to unload nf_defrag_ipv4... ** ** Attempting to load nf_defrag_ipv6... ** ** Attempting to unload nf_defrag_ipv6... ** ** Attempting to load nf_dup_ipv4... ** ** Attempting to unload nf_dup_ipv4... ** ** Attempting to load nf_dup_ipv6... ** ** Attempting to unload nf_dup_ipv6... ** ** Attempting to load nf_dup_netdev... ** ** Attempting to unload nf_dup_netdev... ** ** Attempting to load nf_log_arp... ** ** Attempting to unload nf_log_arp... ** ** Attempting to load nf_log_bridge... ** ** Attempting to unload nf_log_bridge... ** ** Attempting to load nf_log_ipv4... ** ** Attempting to unload nf_log_ipv4... ** ** Attempting to load nf_log_ipv6... ** ** Attempting to unload nf_log_ipv6... ** ** Attempting to load nf_log_netdev... ** ** Attempting to unload nf_log_netdev... ** ** Attempting to load nf_log_syslog... ** ** Attempting to unload nf_log_syslog... ** ** Attempting to load nf_nat... ** ** Attempting to unload nf_nat... ** ** Attempting to load nf_nat_amanda... ** ** Attempting to unload nf_nat_amanda... ** ** Attempting to load nf_nat_ftp... ** ** Attempting to unload nf_nat_ftp... ** ** Attempting to load nf_nat_h323... ** ** Attempting to unload nf_nat_h323... ** ** Attempting to load nf_nat_irc... ** ** Attempting to unload nf_nat_irc... ** ** Attempting to load nf_nat_pptp... ** ** Attempting to unload nf_nat_pptp... ** ** Attempting to load nf_nat_sip... ** ** Attempting to unload nf_nat_sip... ** ** Attempting to load nf_nat_snmp_basic... ** ** Attempting to unload nf_nat_snmp_basic... ** ** Attempting to load nf_nat_tftp... ** ** Attempting to unload nf_nat_tftp... ** ** Attempting to load nf_reject_ipv4... ** ** Attempting to unload nf_reject_ipv4... ** ** Attempting to load nf_reject_ipv6... ** ** Attempting to unload nf_reject_ipv6... ** ** Attempting to load nf_socket_ipv4... ** ** Attempting to unload nf_socket_ipv4... ** ** Attempting to load nf_socket_ipv6... ** ** Attempting to unload nf_socket_ipv6... ** ** Attempting to load nf_synproxy_core... ** ** Attempting to unload nf_synproxy_core... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nf_tproxy_ipv4... ** ** Attempting to unload nf_tproxy_ipv4... ** ** Attempting to load nf_tproxy_ipv6... ** ** Attempting to unload nf_tproxy_ipv6... ** ** Attempting to load nfnetlink... ** ** Attempting to unload nfnetlink... ** ** Attempting to load nfnetlink_cthelper... ** ** Attempting to unload nfnetlink_cthelper... 
** ** Attempting to load nfnetlink_cttimeout... ** ** Attempting to unload nfnetlink_cttimeout... ** ** Attempting to load nfnetlink_log... ** ** Attempting to unload nfnetlink_log... ** ** Attempting to load nfnetlink_osf... ** ** Attempting to unload nfnetlink_osf... ** ** Attempting to load nfnetlink_queue... ** ** Attempting to unload nfnetlink_queue... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nft_chain_nat... ** ** Attempting to unload nft_chain_nat... ** ** Attempting to load nft_compat... ** [10060.044641] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled ** Attempting to unload nft_compat... ** ** Attempting to load nft_connlimit... ** ** Attempting to unload nft_connlimit... ** ** Attempting to load nft_counter... ** ** Attempting to unload nft_counter... ** ** Attempting to load nft_ct... ** ** Attempting to unload nft_ct... ** ** Attempting to load nft_dup_ipv4... ** ** Attempting to unload nft_dup_ipv4... ** ** Attempting to load nft_dup_ipv6... ** ** Attempting to unload nft_dup_ipv6... ** ** Attempting to load nft_dup_netdev... ** ** Attempting to unload nft_dup_netdev... ** ** Attempting to load nft_fib... ** ** Attempting to unload nft_fib... ** ** Attempting to load nft_fib_inet... ** ** Attempting to unload nft_fib_inet... ** ** Attempting to load nft_fib_ipv4... ** ** Attempting to unload nft_fib_ipv4... ** ** Attempting to load nft_fib_ipv6... ** ** Attempting to unload nft_fib_ipv6... ** ** Attempting to load nft_fib_netdev... ** ** Attempting to unload nft_fib_netdev... ** ** Attempting to load nft_fwd_netdev... ** ** Attempting to unload nft_fwd_netdev... ** ** Attempting to load nft_hash... ** ** Attempting to unload nft_hash... ** ** Attempting to load nft_limit... ** ** Attempting to unload nft_limit... ** ** Attempting to load nft_log... ** ** Attempting to unload nft_log... ** ** Attempting to load nft_masq... ** ** Attempting to unload nft_masq... ** ** Attempting to load nft_meta_bridge... ** [10094.676432] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_meta_bridge... ** ** Attempting to load nft_nat... ** ** Attempting to unload nft_nat... ** ** Attempting to load nft_numgen... ** ** Attempting to unload nft_numgen... ** ** Attempting to load nft_objref... ** ** Attempting to unload nft_objref... ** ** Attempting to load nft_osf... ** ** Attempting to unload nft_osf... ** ** Attempting to load nft_queue... ** ** Attempting to unload nft_queue... ** ** Attempting to load nft_quota... ** ** Attempting to unload nft_quota... ** ** Attempting to load nft_redir... ** ** Attempting to unload nft_redir... ** ** Attempting to load nft_reject... ** ** Attempting to unload nft_reject... ** ** Attempting to load nft_reject_bridge... ** [10115.210743] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_reject_bridge... ** ** Attempting to load nft_reject_inet... ** ** Attempting to unload nft_reject_inet... ** ** Attempting to load nft_reject_ipv4... ** ** Attempting to unload nft_reject_ipv4... ** ** Attempting to load nft_reject_ipv6... ** ** Attempting to unload nft_reject_ipv6... ** ** Attempting to load nft_reject_netdev... ** ** Attempting to unload nft_reject_netdev... ** ** Attempting to load nft_socket... 
** ** Attempting to unload nft_socket... ** ** Attempting to load nft_tproxy... ** ** Attempting to unload nft_tproxy... ** ** Attempting to load nft_tunnel... ** ** Attempting to unload nft_tunnel... ** ** Attempting to load nft_xfrm... ** ** Attempting to unload nft_xfrm... ** ** Attempting to load nhpoly1305... ** ** Attempting to unload nhpoly1305... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nlmon... ** ** Attempting to unload nlmon... ** ** Attempting to load nls_cp1250... ** ** Attempting to unload nls_cp1250... ** ** Attempting to load nls_cp1251... ** ** Attempting to unload nls_cp1251... ** ** Attempting to load nls_cp1255... ** ** Attempting to unload nls_cp1255... ** ** Attempting to load nls_cp737... ** ** Attempting to unload nls_cp737... ** ** Attempting to load nls_cp775... ** ** Attempting to unload nls_cp775... ** ** Attempting to load nls_cp850... ** ** Attempting to unload nls_cp850... ** ** Attempting to load nls_cp852... ** ** Attempting to unload nls_cp852... ** ** Attempting to load nls_cp855... ** ** Attempting to unload nls_cp855... ** ** Attempting to load nls_cp857... ** ** Attempting to unload nls_cp857... ** ** Attempting to load nls_cp860... ** ** Attempting to unload nls_cp860... ** ** Attempting to load nls_cp861... ** ** Attempting to unload nls_cp861... ** ** Attempting to load nls_cp862... ** ** Attempting to unload nls_cp862... ** ** Attempting to load nls_cp863... ** ** Attempting to unload nls_cp863... ** ** Attempting to load nls_cp864... ** ** Attempting to unload nls_cp864... ** ** Attempting to load nls_cp865... ** ** Attempting to unload nls_cp865... ** ** Attempting to load nls_cp866... ** ** Attempting to unload nls_cp866... ** ** Attempting to load nls_cp869... ** ** Attempting to unload nls_cp869... ** ** Attempting to load nls_cp874... ** ** Attempting to unload nls_cp874... ** ** Attempting to load nls_cp932... ** ** Attempting to unload nls_cp932... ** ** Attempting to load nls_cp936... ** ** Attempting to unload nls_cp936... ** ** Attempting to load nls_cp949... ** ** Attempting to unload nls_cp949... ** ** Attempting to load nls_cp950... ** ** Attempting to unload nls_cp950... ** ** Attempting to load nls_euc_jp... ** ** Attempting to unload nls_euc_jp... ** ** Attempting to load nls_iso8859_1... ** ** Attempting to unload nls_iso8859_1... ** ** Attempting to load nls_iso8859_13... ** ** Attempting to unload nls_iso8859_13... ** ** Attempting to load nls_iso8859_14... ** ** Attempting to unload nls_iso8859_14... ** ** Attempting to load nls_iso8859_15... ** ** Attempting to unload nls_iso8859_15... ** ** Attempting to load nls_iso8859_2... ** ** Attempting to unload nls_iso8859_2... ** ** Attempting to load nls_iso8859_3... ** ** Attempting to unload nls_iso8859_3... ** ** Attempting to load nls_iso8859_4... ** ** Attempting to unload nls_iso8859_4... ** ** Attempting to load nls_iso8859_5... ** ** Attempting to unload nls_iso8859_5... ** ** Attempting to load nls_iso8859_6... ** ** Attempting to unload nls_iso8859_6... ** ** Attempting to load nls_iso8859_7... ** ** Attempting to unload nls_iso8859_7... ** ** Attempting to load nls_iso8859_9... ** ** Attempting to unload nls_iso8859_9... ** ** Attempting to load nls_koi8_r... ** ** Attempting to unload nls_koi8_r... ** ** Attempting to load nls_koi8_ru... ** ** Attempting to unload nls_koi8_ru... ** ** Attempting to load null_blk... 
** [10195.019413] null_blk: disk nullb0 created [10195.019677] null_blk: module loaded ** Attempting to unload null_blk... ** ** Attempting to load nvme_loop... ** ** Attempting to unload nvme_loop... ** ** Attempting to load nvmet_fc... ** ** Attempting to unload nvmet_fc... ** ** Attempting to load nvmet_rdma... ** ** Attempting to unload nvmet_rdma... ** ** Attempting to load nvmet_tcp... ** [10203.606536] Warning: Unmaintained driver is detected: NVMe/TCP Target ** Attempting to unload nvmet_tcp... ** ** Attempting to load objagg... ** ** Attempting to unload objagg... ** ** Attempting to load openvswitch... ** [10207.460737] openvswitch: Open vSwitch switching datapath ** Attempting to unload openvswitch... ** ** Attempting to load parman... ** ** Attempting to unload parman... ** ** Attempting to load pcbc... ** ** Attempting to unload pcbc... ** ** Attempting to load pcrypt... ** ** Attempting to unload pcrypt... ** ** Attempting to load pkcs8_key_parser... ** [10216.602696] Asymmetric key parser 'pkcs8' registered ** Attempting to unload pkcs8_key_parser... ** [10217.118742] Asymmetric key parser 'pkcs8' unregistered ** Attempting to load poly1305_generic... ** ** Attempting to unload poly1305_generic... ** ** Attempting to load ppdev... ** [10219.917929] ppdev: user-space parallel port driver ** Attempting to unload ppdev... ** ** Attempting to load ppp_async... ** [10221.602196] PPP generic driver version 2.4.2 ** Attempting to unload ppp_async... ** ** Attempting to load ppp_deflate... ** [10223.315319] PPP generic driver version 2.4.2 [10223.331663] PPP Deflate Compression module registered ** Attempting to unload ppp_deflate... ** ** Attempting to load ppp_generic... ** [10224.991267] PPP generic driver version 2.4.2 ** Attempting to unload ppp_generic... ** [-- MARK -- Fri Feb 3 08:35:00 2023] ** Attempting to load ppp_synctty... ** [10226.607761] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** ** Attempting to load pppoe... ** [10228.332185] PPP generic driver version 2.4.2 [10228.348531] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppoe... ** [10228.965669] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pppox... ** [10230.098029] PPP generic driver version 2.4.2 [10230.116243] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppox... ** [10230.647707] NET: Unregistered PF_PPPOX protocol family ** Attempting to load ppp_synctty... ** [10231.828444] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** ** Attempting to load pps_gpio... ** ** Attempting to unload pps_gpio... ** ** Attempting to load pps_ldisc... ** [10235.079671] pps_ldisc: PPS line discipline registered ** Attempting to unload pps_ldisc... ** ** Attempting to load pptp... ** [10236.709718] PPP generic driver version 2.4.2 [10236.725301] NET: Registered PF_PPPOX protocol family [10236.739721] gre: GRE over IPv4 demultiplexor driver [10236.763746] PPTP driver version 0.8.5 ** Attempting to unload pptp... ** [10237.318745] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pwc... ** [10238.554310] mc: Linux media interface: v0.10 [10238.726354] videodev: Linux video capture interface: v2.00 [10238.830470] usbcore: registered new interface driver Philips webcam ** Attempting to unload pwc... ** [10239.330239] usbcore: deregistering interface driver Philips webcam ** Attempting to load psample... ** ** Attempting to unload psample... ** ** Attempting to load raid0... 
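The "** Attempting to load ... ** ** Attempting to unload ... **" markers throughout this capture suggest a harness that walks an alphabetical module list and cycles each entry through modprobe and modprobe -r. The real harness is not shown in the log; the following is only a hypothetical sketch of such a loop, with a made-up module subset:

import subprocess

def cycle_module(name: str) -> None:
    """modprobe a module, then immediately remove it again.

    Mirrors the '** Attempting to load/unload ... **' pattern seen above;
    the actual harness behind those markers is not part of this log.
    """
    print(f"** Attempting to load {name}... **")
    subprocess.run(["modprobe", name])
    print(f"** Attempting to unload {name}... **")
    subprocess.run(["modprobe", "-r", name])

if __name__ == "__main__":
    # Hypothetical subset of the modules exercised in the log.
    for mod in ("null_blk", "raid0", "raid1", "raid10", "raid456"):
        cycle_module(mod)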
** ** Attempting to unload raid0... ** ** Attempting to load raid1... ** ** Attempting to unload raid1... ** ** Attempting to load raid10... ** ** Attempting to unload raid10... ** ** Attempting to load raid456... ** [10247.177742] raid6: skip pq benchmark and using algorithm sse2x4 [10247.178489] raid6: using ssse3x2 recovery algorithm [10247.192316] async_tx: api initialized (async) ** Attempting to unload raid456... ** ** Attempting to load raid6_pq... ** [10249.101365] raid6: skip pq benchmark and using algorithm sse2x4 [10249.102282] raid6: using ssse3x2 recovery algorithm ** Attempting to unload raid6_pq... ** ** Attempting to load raid6test... ** [10250.730146] raid6: skip pq benchmark and using algorithm sse2x4 [10250.731006] raid6: using ssse3x2 recovery algorithm [10250.746410] async_tx: api initialized (async) [10250.799042] raid6test: testing the 4-disk case... [10250.799797] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [10250.800250] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(P) OK [10250.800732] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(Q) OK [10250.801163] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(P) OK [10250.801609] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(Q) OK [10250.802084] raid6test: test_disks(2, 3): faila= 2(P) failb= 3(Q) OK [10250.802562] raid6test: testing the 5-disk case... [10250.803464] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [10250.804079] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [10250.804730] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(P) OK [10250.805154] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(Q) OK [10250.805549] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [10250.806003] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(P) OK [10250.806419] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(Q) OK [10250.806879] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(P) OK [10250.807316] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(Q) OK [10250.807782] raid6test: test_disks(3, 4): faila= 3(P) failb= 4(Q) OK [10250.808306] raid6test: testing the 11-disk case... 
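Each "test_disks(a, b): faila= ... failb= ... OK" line in the raid6test output here means the self-test erased disks a and b from a stripe and rebuilt them from the surviving disks plus the P and Q parities. The sketch below illustrates the underlying GF(2^8) arithmetic for the hardest case, two failed data disks; it uses a hypothetical 9-data-disk stripe (the "11-disk case" with P and Q added) and is a standalone illustration, not the kernel's optimized raid6_pq code:

import os

POLY = 0x11d  # GF(2^8) reduction polynomial used by Linux RAID-6

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply in GF(2^8), reduced by POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= POLY
    return r

def gf_pow(a: int, n: int) -> int:
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)  # a^254 is the inverse of a nonzero a in GF(2^8)

def pq(data: list[int]) -> tuple[int, int]:
    """P is the XOR of the data; Q weights disk i by the generator 2^i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def recover_two_data(data: list[int], a: int, b: int, p: int, q: int) -> tuple[int, int]:
    """Rebuild data disks a and b from the surviving disks plus P and Q."""
    pxy, qxy = p, q
    for i, d in enumerate(data):
        if i not in (a, b):
            pxy ^= d                       # pxy = D_a ^ D_b
            qxy ^= gf_mul(gf_pow(2, i), d)  # qxy = 2^a*D_a ^ 2^b*D_b
    ga, gb = gf_pow(2, a), gf_pow(2, b)
    da = gf_mul(gf_inv(ga ^ gb), qxy ^ gf_mul(gb, pxy))
    return da, pxy ^ da

if __name__ == "__main__":
    # One byte per data disk in a hypothetical 9-data-disk (11-disk) stripe.
    data = list(os.urandom(9))
    p, q = pq(data)
    for fa in range(len(data)):
        for fb in range(fa + 1, len(data)):
            assert recover_two_data(data, fa, fb, p, q) == (data[fa], data[fb])
    print("all two-data-disk combinations recovered OK")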
[10250.809127] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [10250.809580] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [10250.810063] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [10250.810532] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [10250.810986] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [10250.811413] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [10250.811915] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [10250.812373] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [10250.812804] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(P) OK [10250.813249] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(Q) OK [10250.813769] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [10250.814217] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [10250.814623] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [10250.815114] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [10250.815565] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [10250.816320] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [10250.816832] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [10250.817254] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(P) OK [10250.817731] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(Q) OK [10250.818164] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [10250.818610] raid6test: test_disks[10250.919243] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [10250.919785] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [10250.9[10251.420677] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(Q) OK [10251.421095] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [10251.421549] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [10251.422036] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [10251.422487] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [10251.422921] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [10251.423345] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(P) OK [10251.423837] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(Q) OK [10251.424267] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [10251.424754] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [10251.425179] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [10251.425703] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [10251.426149] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(P) OK [10251.426562] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(Q) OK [10251.427048] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [10251.427479] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [10251.427916] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [10251.428307] raid6test: test_disks(5, 9):isks(6, 8): faila= 6(D) failb= 8(D) OK [10251.929137] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(P) OK [10251.929601] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(Q) OK [10251.930083] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [10251.930512] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(P) OK [10251.930942] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(Q) OK [10251.931377] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(P) OK [10251.931846] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(Q) OK [10251.932314] raid6test: test_disks(9, 10): faila= 9(P) failb= 10(Q) OK [10251.932885] raid6test: testing the 12-disk case... 
[10251.933590] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [10251.934062] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [10251.934491] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [10251.934919] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [10251.935347] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [10251.935835] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [10251.936268] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [10251.936755] raid6test: aid6test: test_disks(0, 11): faila= 0(D) failb= 11(Q) OK [10252.437468] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [10252.437950] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [10252.438382] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [10252.438871] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [10252.439306] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [10252.439789] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [10252.440218] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [10252.440705] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [10252.441138] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(P) OK [10252.441593] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(Q) OK [10252.442076] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [10252.442487] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [10252.442917] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [10252.443348] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [10252.443838] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [10252.444265] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [10252.444755] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [10252.445184] raid6test: test_disks(2, 10): faila= 2isks(3, 5): faila= 3(D) failb= 5(D) OK [10252.945990] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [10252.946448] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [10252.947047] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [10252.947510] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [10252.947946] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(P) OK [10252.948366] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(Q) OK [10252.948869] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [10252.949309] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [10252.949768] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [10252.950181] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [10252.950590] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [10252.951029] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(P) OK [10252.951472] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(Q) OK [10252.951960] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [10252.952399] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [10252.979[10253.453224] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(Q) OK [10253.453736] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [10253.454137] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [10253.454597] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [10253.455074] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(P) OK [10253.455505] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(Q) OK [10253.455933] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [10253.456357] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [10253.456835] 
raid6test: test_disks(7, 10): faila= 7(D) failb= 10(P) OK [10253.457267] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(Q) OK [10253.457747] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [10253.458173] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(P) OK [10253.458625] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(Q) OK [10253.459098] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(P) OK [10253.459529] raid6test: test_disks(9, 11): faila= 9(D) failb= 11(Q) OK [10253.459951] raid6test: test_disks(10, 11): faila= 10(P) failb= 11(Q) OK [10253.460639] raid6teisks(0, 3): faila= 0(D) failb= 3(D) OK [10253.961296] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [10253.961802] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [10253.962251] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [10253.962726] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [10253.963173] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [10253.963642] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [10253.964121] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [10253.964557] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [10253.965004] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [10253.965448] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [10253.965946] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK [10253.966360] raid6test: test_disks(0, 15): faila= 0(D) failb= 15(D) OK [10253.966829] raid6test: test_disks(0, 16): faila= 0(D) failb= 16(D) OK [10253.967274] raid6test: test_disks(0, 17): faila= 0(D) failb= 17(D) OK [10253.967767] raid6test: test_disks(0, 18): faila= 0(D) failb= 18(D) OK [10253.968177] raid6test: test_disks(0, 19): faila= 0(Dla= 0(D) failb= 22(P) OK [10254.468962] raid6test: test_disks(0, 23): faila= 0(D) failb= 23(Q) OK [10254.469417] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [10254.469901] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [10254.470348] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [10254.470849] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [10254.471265] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [10254.471756] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [10254.472227] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [10254.472630] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [10254.473099] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(D) OK [10254.473509] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(D) OK [10254.473987] raid6test: test_disks(1, 12): faila= 1(D) failb= 12(D) OK [10254.474402] raid6test: test_disks(1, 13): faila= 1(D) failb= 13(D) OK [10254.474904] raid6test: test_disks(1, 14): faila= 1(D) failb= 14(D) OK [10254.475320] raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [10254.475818] raid6teaid6test: test_disks(1, 19): faila= 1(D) failb= 19(D) OK [10254.976516] raid6test: test_disks(1, 20): faila= 1(D) failb= 20(D) OK [10254.976969] raid6test: test_disks(1, 21): faila= 1(D) failb= 21(D) OK [10254.977385] raid6test: test_disks(1, 22): faila= 1(D) failb= 22(P) OK [10254.977888] raid6test: test_disks(1, 23): faila= 1(D) failb= 23(Q) OK [10254.978331] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [10254.978828] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [10254.979239] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [10254.979744] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [10254.980187] raid6test: 
test_disks(2, 7): faila= 2(D) failb= 7(D) OK [10254.980654] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [10254.981130] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [10254.981575] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(D) OK [10254.982019] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(D) OK [10254.982483] raid6test: test_disks(2, 12): faila= 2(D) failb= 12(D) OK [10254.982952] raid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [10254.983405] raid6test: test_disks(2, isks(2, 17): faila= 2(D) failb= 17(D) OK [10255.484181] raid6test: test_disks(2, 18): faila= 2(D) failb= 18(D) OK [10255.484661] raid6test: test_disks(2, 19): faila= 2(D) failb= 19(D) OK [10255.485108] raid6test: test_disks(2, 20): faila= 2(D) failb= 20(D) OK [10255.485561] raid6test: test_disks(2, 21): faila= 2(D) failb= 21(D) OK [10255.486044] raid6test: test_disks(2, 22): faila= 2(D) failb= 22(P) OK [10255.486457] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(Q) OK [10255.486918] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [10255.487362] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [10255.487859] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [10255.488273] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [10255.488763] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [10255.489203] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [10255.489699] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(D) OK [10255.490110] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [10255.490563] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [102[10255.991315] raid6test: test_disks(3, 16): faila= 3(D) failb= 16(D) OK [10255.991844] raid6test: test_disks(3, 17): faila= 3(D) failb= 17(D) OK [10255.992303] raid6test: test_disks(3, 18): faila= 3(D) failb= 18(D) OK [10255.992783] raid6test: test_disks(3, 19): faila= 3(D) failb= 19(D) OK [10255.993231] raid6test: test_disks(3, 20): faila= 3(D) failb= 20(D) OK [10255.993667] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [10255.994116] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(P) OK [10255.994564] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(Q) OK [10255.995043] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [10255.995457] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [10255.995955] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [10255.996396] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [10255.996865] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [10255.997276] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(D) OK [10255.997771] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(D) OK [10255.998184] raid6test: test_disks(4, 12): faila= 4(D) failb= 12(D) OK [10255.998652] raid6teaid6test: test_disks(4, 16): faila= 4(D) failb= 16(D) OK [10256.499382] raid6test: test_disks(4, 17): faila= 4(D) failb= 17(D) OK [10256.499885] raid6test: test_disks(4, 18): faila= 4(D) failb= 18(D) OK [10256.500301] raid6test: test_disks(4, 19): faila= 4(D) failb= 19(D) OK [10256.500785] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [10256.501197] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [10256.501668] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(P) OK [10256.502172] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(Q) OK [10256.502595] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [10256.503062] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [10256.503476] 
raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [10256.503964] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [10256.504401] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(D) OK [10256.504870] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(D) OK [10256.505315] raid6test: test_disks(5, 12): faila= 5(D) failb= 12(D) OK [10256.505804] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [10256.506243] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [10256.506746] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [10256.507157] raid6test: test_disks(5, 16):isks(5, 19): faila= 5(D) failb= 19(D) OK [10257.007908] raid6test: test_disks(5, 20): faila= 5(D) failb= 20(D) OK [10257.008361] raid6test: test_disks(5, 21): faila= 5(D) failb= 21(D) OK [10257.008864] raid6test: test_disks(5, 22): faila= 5(D) failb= 22(P) OK [10257.009275] raid6test: test_disks(5, 23): faila= 5(D) failb= 23(Q) OK [10257.009773] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [10257.010184] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [10257.010655] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [10257.011134] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(D) OK [10257.011579] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(D) OK [10257.012029] raid6test: test_disks(6, 12): faila= 6(D) failb= 12(D) OK [10257.012506] raid6test: test_disks(6, 13): faila= 6(D) failb= 13(D) OK [10257.012978] raid6test: test_disks(6, 14): faila= 6(D) failb= 14(D) OK [10257.013431] raid6test: test_disks(6, 15): faila= 6(D) failb= 15(D) OK [10257.013896] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [10257.014389] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK = 20(D) OK [10257.515231] raid6test: test_disks(6, 21): faila= 6(D) failb= 21(D) OK [10257.515749] raid6test: test_disks(6, 22): faila= 6(D) failb= 22(P) OK [10257.516206] raid6test: test_disks(6, 23): faila= 6(D) failb= 23(Q) OK [10257.516672] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [10257.517156] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [10257.517564] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [10257.518041] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(D) OK [10257.518854] raid6test: test_disks(7, 12): faila= 7(D) failb= 12(D) OK [10257.519305] raid6test: test_disks(7, 13): faila= 7(D) failb= 13(D) OK [10257.519802] raid6test: test_disks(7, 14): faila= 7(D) failb= 14(D) OK [10257.520218] raid6test: test_disks(7, 15): faila= 7(D) failb= 15(D) OK [10257.520720] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [10257.521134] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [10257.521581] raid6test: test_disks(7, 18): faila= 7(D) failb= 18(D) OK [10257.522033] raid6test: test_disks(7, 19): faila= 7(D) failb= 19(D) OK [10257.522505] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [10257.[10258.023247] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [10258.023760] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(D) OK [10258.024212] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(D) OK [10258.024721] raid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [10258.025141] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [10258.025588] raid6test: test_disks(8, 14): faila= 8(D) failb= 14(D) OK [10258.026073] raid6test: test_disks(8, 15): faila= 8(D) failb= 15(D) OK [10258.026481] raid6test: test_disks(8, 16): faila= 8(D) failb= 16(D) OK [10258.026982] raid6test: test_disks(8, 17): faila= 
8(D) failb= 17(D) OK [10258.027424] raid6test: test_disks(8, 18): faila= 8(D) failb= 18(D) OK [10258.027923] raid6test: test_disks(8, 19): faila= 8(D) failb= 19(D) OK [10258.028333] raid6test: test_disks(8, 20): faila= 8(D) failb= 20(D) OK [10258.028827] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [10258.029239] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(P) OK [10258.029734] raid6test: test_disks(8, 23): faila= 8(D) failb= 23(Q) OK [10258.030150] raid6test: test_disks(9, 10)isks(9, 13): faila= 9(D) failb= 13(D) OK [10258.530924] raid6test: test_disks(9, 14): faila= 9(D) failb= 14(D) OK [10258.531394] raid6test: test_disks(9, 15): faila= 9(D) failb= 15(D) OK [10258.531915] raid6test: test_disks(9, 16): faila= 9(D) failb= 16(D) OK [10258.532391] raid6test: test_disks(9, 17): faila= 9(D) failb= 17(D) OK [10258.532872] raid6test: test_disks(9, 18): faila= 9(D) failb= 18(D) OK [10258.533323] raid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [10258.533821] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [10258.534231] raid6test: test_disks(9, 21): faila= 9(D) failb= 21(D) OK [10258.534745] raid6test: test_disks(9, 22): faila= 9(D) failb= 22(P) OK [10258.535195] raid6test: test_disks(9, 23): faila= 9(D) failb= 23(Q) OK [10258.535642] raid6test: test_disks(10, 11): faila= 10(D) failb= 11(D) OK [10258.536145] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [10258.536573] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) OK [10258.537054] raid6test: test_disks(10, 14): faila= 10(D) failb= 14(D) OK [10258.537506] raid6test: test_disks(10, 15): faila= 10(D) failb= 15(D) OK [10258.537955] raid6test: test_disks(10, 16): faila= 10(D) failb= 16(D) OK [10258.538449] raid6test: test_disks(10, 17): fisks(10, 20): faila= 10(D) failb= 20(D) OK [10259.039200] raid6test: test_disks(10, 21): faila= 10(D) failb= 21(D) OK [10259.039654] raid6test: test_disks(10, 22): faila= 10(D) failb= 22(P) OK [10259.040103] raid6test: test_disks(10, 23): faila= 10(D) failb= 23(Q) OK [10259.040553] raid6test: test_disks(11, 12): faila= 11(D) failb= 12(D) OK [10259.041052] raid6test: test_disks(11, 13): faila= 11(D) failb= 13(D) OK [10259.041495] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [10259.041996] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [10259.042483] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [10259.042962] raid6test: test_disks(11, 17): faila= 11(D) failb= 17(D) OK [10259.043409] raid6test: test_disks(11, 18): faila= 11(D) failb= 18(D) OK [10259.043906] raid6test: test_disks(11, 19): faila= 11(D) failb= 19(D) OK [10259.044318] raid6test: test_disks(11, 20): faila= 11(D) failb= 20(D) OK [10259.044816] raid6test: test_disks(11, 21): faila= 11(D) failb= 21(D) OK [10259.045228] raid6test: test_disks(11, 22): faila= 11(D) failb= 22(P) OK [10259.045733] raid6test: test_disks(11, 23): faila= 11(D) failb= 23(Q) OK [10259.046182] raid6test: test_disisks(12, 16): faila= 12(D) failb= 16(D) OK [10259.546923] raid6test: test_disks(12, 17): faila= 12(D) failb= 17(D) OK [10259.547379] raid6test: test_disks(12, 18): faila= 12(D) failb= 18(D) OK [10259.547876] raid6test: test_disks(12, 19): faila= 12(D) failb= 19(D) OK [10259.548288] raid6test: test_disks(12, 20): faila= 12(D) failb= 20(D) OK [10259.548775] raid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [10259.549219] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(P) OK [10259.549676] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(Q) OK [10259.550166] 
raid6test: test_disks(13, 14): faila= 13(D) failb= 14(D) OK [10259.550583] raid6test: test_disks(13, 15): faila= 13(D) failb= 15(D) OK [10259.551069] raid6test: test_disks(13, 16): faila= 13(D) failb= 16(D) OK [10259.551511] raid6test: test_disks(13, 17): faila= 13(D) failb= 17(D) OK [10259.552005] raid6test: test_disks(13, 18): faila= 13(D) failb= 18(D) OK [10259.552486] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [10259.552966] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [10259.553415] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK [10259.553915] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(P) OK [10259.554332] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(Q) OK [10259.554835] raid6test: test_disks(14, 15): faila= 14(D) failbb= 18(D) OK [10260.055566] raid6test: test_disks(14, 19): faila= 14(D) failb= 19(D) OK [10260.056047] raid6test: test_disks(14, 20): faila= 14(D) failb= 20(D) OK [10260.056496] raid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [10260.056999] raid6test: test_disks(14, 22): faila= 14(D) failb= 22(P) OK [10260.057413] raid6test: test_disks(14, 23): faila= 14(D) failb= 23(Q) OK [10260.057914] raid6test: test_disks(15, 16): faila= 15(D) failb= 16(D) OK [10260.058323] raid6test: test_disks(15, 17): faila= 15(D) failb= 17(D) OK [10260.058822] raid6test: test_disks(15, 18): faila= 15(D) failb= 18(D) OK [10260.059232] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [10260.059683] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [10260.060129] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [10260.060575] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(P) OK [10260.061074] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(Q) OK [10260.061483] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [10260.061984] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [10260.062423] raid6test: test_disks(16, 19): faila= 16(D) failb=b= 22(P) OK [10260.563286] raid6test: test_disks(16, 23): faila= 16(D) failb= 23(Q) OK [10260.563862] raid6test: test_disks(17, 18): faila= 17(D) failb= 18(D) OK [10260.564356] raid6test: test_disks(17, 19): faila= 17(D) failb= 19(D) OK [10260.564924] raid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [10260.565410] raid6test: test_disks(17, 21): faila= 17(D) failb= 21(D) OK [10260.565979] raid6test: test_disks(17, 22): faila= 17(D) failb= 22(P) OK [10260.566507] raid6test: test_disks(17, 23): faila= 17(D) failb= 23(Q) OK [10260.567062] raid6test: test_disks(18, 19): faila= 18(D) failb= 19(D) OK [10260.567591] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [10260.568176] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK [10260.568707] raid6test: test_disks(18, 22): faila= 18(D) failb= 22(P) OK [10260.569257] raid6test: test_disks(18, 23): faila= 18(D) failb= 23(Q) OK [10260.569836] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [10260.570327] raid6test: test_disks(19, 21): faila= 19(D) failb= 21(D) OK [10260.570842] raid6test: test_disks(19, 22): faila= 19(D) failb= 22(P) OK [10260.571335] raid6test: test_disks(19, 23): faila= 19(D) failb= 23(Q) OK [10260.571881] raid6test: test_disks(20, 21): faila= 20(D) failb= b= 22(P) OK [10261.072834] raid6test: test_disks(21, 23): faila= 21(D) failb= 23(Q) OK [10261.073329] raid6test: test_disks(22, 23): faila= 22(P) failb= 23(Q) OK [10261.074552] raid6test: testing the 64-disk case... 
[10261.075423] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [10261.075980] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [10261.076466] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [10261.077017] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [10261.077562] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [10261.078089] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [10261.078623] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [10261.079209] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [10261.079707] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [10261.080241] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [10261.080768] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [10261.081299] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [10261.081846] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [10261.082398] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK [10261.082916] raid6test: test_disks(0, 15): faila= 0(D) failb= 18(D) OK [10261.583822] raid6test: test_disks(0, 19): faila= 0(D) failb= 19(D) OK [10261.584329] raid6test: test_disks(0, 20): faila= 0(D) failb= 20(D) OK [10261.584928] raid6test: test_disks(0, 21): faila= 0(D) failb= 21(D) OK [10261.585459] raid6test: test_disks(0, 22): faila= 0(D) failb= 22(D) OK [10261.586032] raid6test: test_disks(0, 23): faila= 0(D) failb= 23(D) OK [10261.586558] raid6test: test_disks(0, 24): faila= 0(D) failb= 24(D) OK [10261.587109] raid6test: test_disks(0, 25): faila= 0(D) failb= 25(D) OK [10261.587602] raid6test: test_disks(0, 26): faila= 0(D) failb= 26(D) OK [10261.588185] raid6test: test_disks(0, 27): faila= 0(D) failb= 27(D) OK [10261.588680] raid6test: test_disks(0, 28): faila= 0(D) failb= 28(D) OK [10261.589238] raid6test: test_disks(0, 29): faila= 0(D) failb= 29(D) OK [10261.589754] raid6test: test_disks(0, 30): faila= 0(D) failb= 30(D) OK [10261.590280] raid6test: test_disks(0, 31): faila= 0(D) failb= 31(D) OK [10261.590862] raid6test: test_disks(0, 32): faila= 0(D) failb= 32(D) OK [10261.591353] raid6test: test_disks(0, 33): faila= 0(D) failb= 33(D) OK [= 36(D) OK [10262.092369] raid6test: test_disks(0, 37): faila= 0(D) failb= 37(D) OK [10262.092932] raid6test: test_disks(0, 38): faila= 0(D) failb= 38(D) OK [10262.093431] raid6test: test_disks(0, 39): faila= 0(D) failb= 39(D) OK [10262.094028] raid6test: test_disks(0, 40): faila= 0(D) failb= 40(D) OK [10262.094527] raid6test: test_disks(0, 41): faila= 0(D) failb= 41(D) OK [10262.095123] raid6test: test_disks(0, 42): faila= 0(D) failb= 42(D) OK [10262.095669] raid6test: test_disks(0, 43): faila= 0(D) failb= 43(D) OK [10262.096233] raid6test: test_disks(0, 44): faila= 0(D) failb= 44(D) OK [10262.096759] raid6test: test_disks(0, 45): faila= 0(D) failb= 45(D) OK [10262.097244] raid6test: test_disks(0, 46): faila= 0(D) failb= 46(D) OK [10262.097772] raid6test: test_disks(0, 47): faila= 0(D) failb= 47(D) OK [10262.098317] raid6test: test_disks(0, 48): faila= 0(D) failb= 48(D) OK [10262.098864] raid6test: test_disks(0, 49): faila= 0(D) failb= 49(D) OK [10262.099338] raid6test: test_disks(0, 50): faila= 0(D) failb= 50(D) OK [10262.099894] raid6test: test_disks(0, 51): faila= 0(D) failb= 51(D) OK [10262.100425] raid6test: testaid6test: test_disks(0, 55): faila= 0(D) failb= 55(D) OK [10262.601405] raid6test: test_disks(0, 56): faila= 0(D) failb= 56(D) OK [10262.602032] raid6test: test_disks(0, 57): faila= 0(D) failb= 57(D) OK [10262.602587] raid6test: test_disks(0, 
58): faila= 0(D) failb= 58(D) OK [10262.603143] raid6test: test_disks(0, 59): faila= 0(D) failb= 59(D) OK [10262.603687] raid6test: test_disks(0, 60): faila= 0(D) failb= 60(D) OK [10262.604202] raid6test: test_disks(0, 61): faila= 0(D) failb= 61(D) OK [10262.604773] raid6test: test_disks(0, 62): faila= 0(D) failb= 62(P) OK [10262.605319] raid6test: test_disks(0, 63): faila= 0(D) failb= 63(Q) OK [10262.605870] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [10262.606371] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [10262.606973] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [10262.607565] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [10262.608134] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [10262.608666] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [10262.609229] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [10262.609765] raid6test: test_disks(1, isks(1, 12): faila= 1(D) failb= 12(D) OK [10263.110883] raid6test: test_disks(1, 13): faila= 1(D) failb= 13(D) OK [10263.111423] raid6test: test_disks(1, 14): faila= 1(D) failb= 14(D) OK [10263.112010] raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [10263.112573] raid6test: test_disks(1, 16): faila= 1(D) failb= 16(D) OK [10263.113157] raid6test: test_disks(1, 17): faila= 1(D) failb= 17(D) OK [10263.113700] raid6test: test_disks(1, 18): faila= 1(D) failb= 18(D) OK [10263.114265] raid6test: test_disks(1, 19): faila= 1(D) failb= 19(D) OK [10263.114851] raid6test: test_disks(1, 20): faila= 1(D) failb= 20(D) OK [10263.115398] raid6test: test_disks(1, 21): faila= 1(D) failb= 21(D) OK [10263.115981] raid6test: test_disks(1, 22): faila= 1(D) failb= 22(D) OK [10263.116482] raid6test: test_disks(1, 23): faila= 1(D) failb= 23(D) OK [10263.117064] raid6test: test_disks(1, 24): faila= 1(D) failb= 24(D) OK [10263.117610] raid6test: test_disks(1, 25): faila= 1(D) failb= 25(D) OK [10263.118165] raid6test: test_disks(1, 26): faila= 1(D) failb= 26(D) OK [10263.118708] raid6test: test_disks(1, 27): faila= 1(D) failb= 27(D) OK [10263.119266] raid6test: test_disks(1, 28): failala= 1(D) failb= 31(D) OK [10263.620185] raid6test: test_disks(1, 32): faila= 1(D) failb= 32(D) OK [10263.620683] raid6test: test_disks(1, 33): faila= 1(D) failb= 33(D) OK [10263.621243] raid6test: test_disks(1, 34): faila= 1(D) failb= 34(D) OK [10263.621802] raid6test: test_disks(1, 35): faila= 1(D) failb= 35(D) OK [10263.622304] raid6test: test_disks(1, 36): faila= 1(D) failb= 36(D) OK [10263.622829] raid6test: test_disks(1, 37): faila= 1(D) failb= 37(D) OK [10263.623328] raid6test: test_disks(1, 38): faila= 1(D) failb= 38(D) OK [10263.623879] raid6test: test_disks(1, 39): faila= 1(D) failb= 39(D) OK [10263.624367] raid6test: test_disks(1, 40): faila= 1(D) failb= 40(D) OK [10263.624919] raid6test: test_disks(1, 41): faila= 1(D) failb= 41(D) OK [10263.625409] raid6test: test_disks(1, 42): faila= 1(D) failb= 42(D) OK [10263.625957] raid6test: test_disks(1, 43): faila= 1(D) failb= 43(D) OK [10263.626450] raid6test: test_disks(1, 44): faila= 1(D) failb= 44(D) OK [10263.626970] raid6test: test_disks(1, 45): faila= 1(D) failb= 45(D) OK [10263.627476] raid6test: test_disks(1, 46): faila= 1(D) failb= 46(D) OK [10263.628014] raid6test: test_disks(1, 47): faila= 1(D) failb= 47(D) OK [10263.628513] raid6test: test_disks(1, 51): faila= 1(D) failb= 51(D) OK [10264.129448] raid6test: test_disks(1, 52): faila= 1(D) failb= 52(D) OK [10264.129978] raid6test: test_disks(1, 53): faila= 1(D) failb= 53(D) OK [10264.130502] 
raid6test: test_disks(1, 54): faila= 1(D) failb= 54(D) OK [10264.131078] raid6test: test_disks(1, 55): faila= 1(D) failb= 55(D) OK [10264.131611] raid6test: test_disks(1, 56): faila= 1(D) failb= 56(D) OK [10264.132149] raid6test: test_disks(1, 57): faila= 1(D) failb= 57(D) OK [10264.132716] raid6test: test_disks(1, 58): faila= 1(D) failb= 58(D) OK [10264.133264] raid6test: test_disks(1, 59): faila= 1(D) failb= 59(D) OK [10264.133789] raid6test: test_disks(1, 60): faila= 1(D) failb= 60(D) OK [10264.134318] raid6test: test_disks(1, 61): faila= 1(D) failb= 61(D) OK [10264.134905] raid6test: test_disks(1, 62): faila= 1(D) failb= 62(P) OK [10264.135416] raid6test: test_disks(1, 63): faila= 1(D) failb= 63(Q) OK [10264.135992] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [10264.136483] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [10264.137076] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [10264.137601] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [10264.138147] raid6test: test_disks(2, 7isks(2, 10): faila= 2(D) failb= 10(D) OK [10264.639135] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(D) OK [10264.639653] raid6test: test_disks(2, 12): faila= 2(D) failb= 12(D) OK [10264.640200] raid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [10264.640685] raid6test: test_disks(2, 14): faila= 2(D) failb= 14(D) OK [10264.641223] raid6test: test_disks(2, 15): faila= 2(D) failb= 15(D) OK [10264.641789] raid6test: test_disks(2, 16): faila= 2(D) failb= 16(D) OK [10264.642314] raid6test: test_disks(2, 17): faila= 2(D) failb= 17(D) OK [10264.642850] raid6test: test_disks(2, 18): faila= 2(D) failb= 18(D) OK [10264.643352] raid6test: test_disks(2, 19): faila= 2(D) failb= 19(D) OK [10264.644220] raid6test: test_disks(2, 20): faila= 2(D) failb= 20(D) OK [10264.644714] raid6test: test_disks(2, 21): faila= 2(D) failb= 21(D) OK [10264.645233] raid6test: test_disks(2, 22): faila= 2(D) failb= 22(D) OK [10264.645723] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(D) OK [10264.646241] raid6test: test_disks(2, 24): faila= 2(D) failb= 24(D) OK [10264.646788] raid6test: test_disks(2, 25): faila= 2(D) failb= 25(D) OK [10264.647312] raid6test: test_disks(2, 26): failla= 2(D) failb= 29(D) OK [10265.148326] raid6test: test_disks(2, 30): faila= 2(D) failb= 30(D) OK [10265.148926] raid6test: test_disks(2, 31): faila= 2(D) failb= 31(D) OK [10265.149474] raid6test: test_disks(2, 32): faila= 2(D) failb= 32(D) OK [10265.149998] raid6test: test_disks(2, 33): faila= 2(D) failb= 33(D) OK [10265.150514] raid6test: test_disks(2, 34): faila= 2(D) failb= 34(D) OK [10265.151094] raid6test: test_disks(2, 35): faila= 2(D) failb= 35(D) OK [10265.151639] raid6test: test_disks(2, 36): faila= 2(D) failb= 36(D) OK [10265.152198] raid6test: test_disks(2, 37): faila= 2(D) failb= 37(D) OK [10265.152804] raid6test: test_disks(2, 38): faila= 2(D) failb= 38(D) OK [10265.153334] raid6test: test_disks(2, 39): faila= 2(D) failb= 39(D) OK [10265.153862] raid6test: test_disks(2, 40): faila= 2(D) failb= 40(D) OK [10265.154385] raid6test: test_disks(2, 41): faila= 2(D) failb= 41(D) OK [10265.154971] raid6test: test_disks(2, 42): faila= 2(D) failb= 42(D) OK [10265.155514] raid6test: test_disks(2, 43): faila= 2(D) failb= 43(D) OK [10265.156088] raid6test: test_disks(2, 44): faila= 2(D) failb= 44(D) OK [10265.156632] raid6test: test_disks(2, 45): faila= 2(D) failb= 45(D) OK [10265.157197] raid6test: test_disks(2, 46): faila= 2(D) failb= 46(D) OK [10265.157741] raid6aid6test: test_disks(2, 50): faila= 2(D) 
failb= 50(D) OK [10265.658664] raid6test: test_disks(2, 51): faila= 2(D) failb= 51(D) OK [10265.659223] raid6test: test_disks(2, 52): faila= 2(D) failb= 52(D) OK [10265.659717] raid6test: test_disks(2, 53): faila= 2(D) failb= 53(D) OK [10265.660238] raid6test: test_disks(2, 54): faila= 2(D) failb= 54(D) OK [10265.660794] raid6test: test_disks(2, 55): faila= 2(D) failb= 55(D) OK [10265.661280] raid6test: test_disks(2, 56): faila= 2(D) failb= 56(D) OK [10265.661809] raid6test: test_disks(2, 57): faila= 2(D) failb= 57(D) OK [10265.662331] raid6test: test_disks(2, 58): faila= 2(D) failb= 58(D) OK [10265.662861] raid6test: test_disks(2, 59): faila= 2(D) failb= 59(D) OK [10265.663357] raid6test: test_disks(2, 60): faila= 2(D) failb= 60(D) OK [10265.663876] raid6test: test_disks(2, 61): faila= 2(D) failb= 61(D) OK [10265.664370] raid6test: test_disks(2, 62): faila= 2(D) failb= 62(P) OK [10265.664934] raid6test: test_disks(2, 63): faila= 2(D) failb= 63(Q) OK [10265.665421] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [10265.665932] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [10265.666424] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [10265.666981] raid6test: test_disks(3,isks(3, 10): faila= 3(D) failb= 10(D) OK [10266.167927] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [10266.168464] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [10266.169013] raid6test: test_disks(3, 13): faila= 3(D) failb= 13(D) OK [10266.169512] raid6test: test_disks(3, 14): faila= 3(D) failb= 14(D) OK [10266.170086] raid6test: test_disks(3, 15): faila= 3(D) failb= 15(D) OK [10266.170585] raid6test: test_disks(3, 16): faila= 3(D) failb= 16(D) OK [10266.171160] raid6test: test_disks(3, 17): faila= 3(D) failb= 17(D) OK [10266.171659] raid6test: test_disks(3, 18): faila= 3(D) failb= 18(D) OK [10266.172238] raid6test: test_disks(3, 19): faila= 3(D) failb= 19(D) OK [10266.172833] raid6test: test_disks(3, 20): faila= 3(D) failb= 20(D) OK [10266.173357] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [10266.173936] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(D) OK [10266.174429] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(D) OK [10266.175016] raid6test: test_disks(3, 24): faila= 3(D) failb= 24(D) OK [10266.175509] raid6test: test_disks(3, 25): faila= 3(D) failb= 25(D) OK= 28(D) OK [10266.676399] raid6test: test_disks(3, 29): faila= 3(D) failb= 29(D) OK [10266.676967] raid6test: test_disks(3, 30): faila= 3(D) failb= 30(D) OK [10266.677459] raid6test: test_disks(3, 31): faila= 3(D) failb= 31(D) OK [10266.678014] raid6test: test_disks(3, 32): faila= 3(D) failb= 32(D) OK [10266.678500] raid6test: test_disks(3, 33): faila= 3(D) failb= 33(D) OK [10266.679047] raid6test: test_disks(3, 34): faila= 3(D) failb= 34(D) OK [10266.679541] raid6test: test_disks(3, 35): faila= 3(D) failb= 35(D) OK [10266.680085] raid6test: test_disks(3, 36): faila= 3(D) failb= 36(D) OK [10266.680571] raid6test: test_disks(3, 37): faila= 3(D) failb= 37(D) OK [10266.681113] raid6test: test_disks(3, 38): faila= 3(D) failb= 38(D) OK [10266.681601] raid6test: test_disks(3, 39): faila= 3(D) failb= 39(D) OK [10266.682152] raid6test: test_disks(3, 40): faila= 3(D) failb= 40(D) OK [10266.682653] raid6test: test_disks(3, 41): faila= 3(D) failb= 41(D) OK [10266.683164] raid6test: test_disks(3, 42): faila= 3(D) failb= 42(D) OK [10266.683676] raid6test: test_disks(3, 43): faila= 3(D) failb= 43(D) OK [10266.684230] raid6test: test_disks(3, 44): faila= 3(D) failb= 44(D) OK [102[10267.185150] 
raid6test: test_disks(3, 48): faila= 3(D) failb= 48(D) OK [10267.185645] raid6test: test_disks(3, 49): faila= 3(D) failb= 49(D) OK [10267.186224] raid6test: test_disks(3, 50): faila= 3(D) failb= 50(D) OK [10267.186740] raid6test: test_disks(3, 51): faila= 3(D) failb= 51(D) OK [10267.187289] raid6test: test_disks(3, 52): faila= 3(D) failb= 52(D) OK [10267.187823] raid6test: test_disks(3, 53): faila= 3(D) failb= 53(D) OK [10267.188330] raid6test: test_disks(3, 54): faila= 3(D) failb= 54(D) OK [10267.188873] raid6test: test_disks(3, 55): faila= 3(D) failb= 55(D) OK [10267.189376] raid6test: test_disks(3, 56): faila= 3(D) failb= 56(D) OK [10267.189935] raid6test: test_disks(3, 57): faila= 3(D) failb= 57(D) OK [10267.190442] raid6test: test_disks(3, 58): faila= 3(D) failb= 58(D) OK [10267.191006] raid6test: test_disks(3, 59): faila= 3(D) failb= 59(D) OK [10267.191512] raid6test: test_disks(3, 60): faila= 3(D) failb= 60(D) OK [10267.192079] raid6test: test_disks(3, 61): faila= 3(D) failb= 61(D) OK [10267.192597] raid6test: test_disks(3, 62): faila= 3(D) failb= 62(P) OK [10267.193135] raid6test: test_disks(3, 63): faila= 3(D) failb= 63(Q) OK [10267.193609] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [10267.194176] raid6teaid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [10267.695038] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(D) OK [10267.695556] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(D) OK [10267.696120] raid6test: test_disks(4, 12): faila= 4(D) failb= 12(D) OK [10267.696627] raid6test: test_disks(4, 13): faila= 4(D) failb= 13(D) OK [10267.697193] raid6test: test_disks(4, 14): faila= 4(D) failb= 14(D) OK [10267.697698] raid6test: test_disks(4, 15): faila= 4(D) failb= 15(D) OK [10267.698261] raid6test: test_disks(4, 16): faila= 4(D) failb= 16(D) OK [10267.698766] raid6test: test_disks(4, 17): faila= 4(D) failb= 17(D) OK [10267.699283] raid6test: test_disks(4, 18): faila= 4(D) failb= 18(D) OK [10267.699828] raid6test: test_disks(4, 19): faila= 4(D) failb= 19(D) OK [10267.700304] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [10267.700854] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [10267.701369] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(D) OK [10267.701921] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(D) OK [10267.702454] raid6test: test_disks(4, 24): faila= 4(D) failb= 24(D) OK [10267.702977] raid6test: test_disks(4, 25): faila= 4(D) faila= 4(D) failb= 28(D) OK [10268.203816] raid6test: test_disks(4, 29): faila= 4(D) failb= 29(D) OK [10268.204330] raid6test: test_disks(4, 30): faila= 4(D) failb= 30(D) OK [10268.204867] raid6test: test_disks(4, 31): faila= 4(D) failb= 31(D) OK [10268.205377] raid6test: test_disks(4, 32): faila= 4(D) failb= 32(D) OK [10268.205921] raid6test: test_disks(4, 33): faila= 4(D) failb= 33(D) OK [10268.206431] raid6test: test_disks(4, 34): faila= 4(D) failb= 34(D) OK [10268.207024] raid6test: test_disks(4, 35): faila= 4(D) failb= 35(D) OK [10268.207529] raid6test: test_disks(4, 36): faila= 4(D) failb= 36(D) OK [10268.208083] raid6test: test_disks(4, 37): faila= 4(D) failb= 37(D) OK [10268.208632] raid6test: test_disks(4, 38): faila= 4(D) failb= 38(D) OK [10268.209189] raid6test: test_disks(4, 39): faila= 4(D) failb= 39(D) OK [10268.209826] raid6test: test_disks(4, 40): faila= 4(D) failb= 40(D) OK [10268.210342] raid6test: test_disks(4, 41): faila= 4(D) failb= 41(D) OK [10268.210952] raid6test: test_disks(4, 42): faila= 4(D) failb= 42(D) OK [10268.211449] raid6test: test_disks(4, 43): faila= 
4(D) failla= 4(D) failb= 46(D) OK [10268.712271] raid6test: test_disks(4, 47): faila= 4(D) failb= 47(D) OK [10268.712841] raid6test: test_disks(4, 48): faila= 4(D) failb= 48(D) OK [10268.713359] raid6test: test_disks(4, 49): faila= 4(D) failb= 49(D) OK [10268.713903] raid6test: test_disks(4, 50): faila= 4(D) failb= 50(D) OK [10268.714417] raid6test: test_disks(4, 51): faila= 4(D) failb= 51(D) OK [10268.714976] raid6test: test_disks(4, 52): faila= 4(D) failb= 52(D) OK [10268.715484] raid6test: test_disks(4, 53): faila= 4(D) failb= 53(D) OK [10268.716045] raid6test: test_disks(4, 54): faila= 4(D) failb= 54(D) OK [10268.716551] raid6test: test_disks(4, 55): faila= 4(D) failb= 55(D) OK [10268.717117] raid6test: test_disks(4, 56): faila= 4(D) failb= 56(D) OK [10268.717615] raid6test: test_disks(4, 57): faila= 4(D) failb= 57(D) OK [10268.718171] raid6test: test_disks(4, 58): faila= 4(D) failb= 58(D) OK [10268.718674] raid6test: test_disks(4, 59): faila= 4(D) failb= 59(D) OK [10268.719237] raid6test: test_disks(4, 60): faila= 4(D) failb= 60(D) OK [10268.719740] raid6test: test_disks(4, 61): faila= 4(D) failb= 61(D) OK [10268.720298] raid6test: test_disks(4, 62): faila= 4(D) failb= 62(P) OK [10268.720845] raid6test: test_disks(4, 63): faila= 4(D) failb= 63 8(D) OK [10269.221646] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [10269.222218] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(D) OK [10269.222717] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(D) OK [10269.223295] raid6test: test_disks(5, 12): faila= 5(D) failb= 12(D) OK [10269.223830] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [10269.224345] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [10269.224880] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [10269.225393] raid6test: test_disks(5, 16): faila= 5(D) failb= 16(D) OK [10269.225937] raid6test: test_disks(5, 17): faila= 5(D) failb= 17(D) OK [10269.226444] raid6test: test_disks(5, 18): faila= 5(D) failb= 18(D) OK [10269.227014] raid6test: test_disks(5, 19): faila= 5(D) failb= 19(D) OK [10269.227487] raid6test: test_disks(5, 20): faila= 5(D) failb= 20(D) OK [10269.228052] raid6test: test_disks(5, 21): faila= 5(D) failb= 21(D) OK [10269.228560] raid6test: test_disks(5, 22): faila= 5(D) failb= 22(D) OK [10269.229119] raid6test: test_disks(5, 23): faila= 5(D) failb= 23(D) OK [10269.229624] raid6test: test_disks(5, 27): faila= 5(D) failb= 27(D) OK [10269.730513] raid6test: test_disks(5, 28): faila= 5(D) failb= 28(D) OK [10269.731099] raid6test: test_disks(5, 29): faila= 5(D) failb= 29(D) OK [10269.731581] raid6test: test_disks(5, 30): faila= 5(D) failb= 30(D) OK [10269.732147] raid6test: test_disks(5, 31): faila= 5(D) failb= 31(D) OK [10269.732653] raid6test: test_disks(5, 32): faila= 5(D) failb= 32(D) OK [10269.733225] raid6test: test_disks(5, 33): faila= 5(D) failb= 33(D) OK [10269.733737] raid6test: test_disks(5, 34): faila= 5(D) failb= 34(D) OK [10269.734295] raid6test: test_disks(5, 35): faila= 5(D) failb= 35(D) OK [10269.734823] raid6test: test_disks(5, 36): faila= 5(D) failb= 36(D) OK [10269.735329] raid6test: test_disks(5, 37): faila= 5(D) failb= 37(D) OK [10269.735863] raid6test: test_disks(5, 38): faila= 5(D) failb= 38(D) OK [10269.736368] raid6test: test_disks(5, 39): faila= 5(D) failb= 39(D) OK [10269.736902] raid6test: test_disks(5, 40): faila= 5(D) failb= 40(D) OK [10269.737426] raid6test: test_disks(5, 41): faila= 5(D) failb= 41(D) OK [10269.737989] raid6test: test_disks(5, 42): faila= 5(D) failb= 42(D) OK [10269.738497] 
raid6test: test_disks(5, 43): failla= 5(D) failb= 46(D) OK [10270.239307] raid6test: test_disks(5, 47): faila= 5(D) failb= 47(D) OK [10270.239830] raid6test: test_disks(5, 48): faila= 5(D) failb= 48(D) OK [10270.240312] raid6test: test_disks(5, 49): faila= 5(D) failb= 49(D) OK [10270.240855] raid6test: test_disks(5, 50): faila= 5(D) failb= 50(D) OK [10270.241368] raid6test: test_disks(5, 51): faila= 5(D) failb= 51(D) OK [10270.241906] raid6test: test_disks(5, 52): faila= 5(D) failb= 52(D) OK [10270.242418] raid6test: test_disks(5, 53): faila= 5(D) failb= 53(D) OK [10270.242982] raid6test: test_disks(5, 54): faila= 5(D) failb= 54(D) OK [10270.243488] raid6test: test_disks(5, 55): faila= 5(D) failb= 55(D) OK [10270.244059] raid6test: test_disks(5, 56): faila= 5(D) failb= 56(D) OK [10270.244569] raid6test: test_disks(5, 57): faila= 5(D) failb= 57(D) OK [10270.245128] raid6test: test_disks(5, 58): faila= 5(D) failb= 58(D) OK [10270.245634] raid6test: test_disks(5, 59): faila= 5(D) failb= 59(D) OK [10270.246194] raid6test: test_disks(5, 60): faila= 5(D) failb= 60(D) OK [10270.246697] raid6test: test_disks(5, 61): faila= 5(D) failb= 61(D) OK [10270.274698][10270.747639] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [10270.748350] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [10270.749151] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(D) OK [10270.749756] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(D) OK [10270.750394] raid6test: test_disks(6, 12): faila= 6(D) failb= 12(D) OK [10270.751034] raid6test: test_disks(6, 13): faila= 6(D) failb= 13(D) OK [10270.751637] raid6test: test_disks(6, 14): faila= 6(D) failb= 14(D) OK [10270.752267] raid6test: test_disks(6, 15): faila= 6(D) failb= 15(D) OK [10270.752928] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [10270.753500] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK [10270.754175] raid6test: test_disks(6, 18): faila= 6(D) failb= 18(D) OK [10270.754779] raid6test: test_disks(6, 19): faila= 6(D) failb= 19(D) OK [10270.755403] raid6test: test_disks(6, 20): faila= 6(D) failb= 20(D) OK [10270.756045] raid6test: test_disks(6, 21): faila= 6(D) failb= 21(D) OK [10270.784996[10271.256843] raid6test: test_disks(6, 25): faila= 6(D) failb= 25(D) OK [10271.257370] raid6test: test_disks(6, 26): faila= 6(D) failb= 26(D) OK [10271.257924] raid6test: test_disks(6, 27): faila= 6(D) failb= 27(D) OK [10271.258433] raid6test: test_disks(6, 28): faila= 6(D) failb= 28(D) OK [10271.258972] raid6test: test_disks(6, 29): faila= 6(D) failb= 29(D) OK [10271.259487] raid6test: test_disks(6, 30): faila= 6(D) failb= 30(D) OK [10271.260044] raid6test: test_disks(6, 31): faila= 6(D) failb= 31(D) OK [10271.260554] raid6test: test_disks(6, 32): faila= 6(D) failb= 32(D) OK [10271.261109] raid6test: test_disks(6, 33): faila= 6(D) failb= 33(D) OK [10271.261619] raid6test: test_disks(6, 34): faila= 6(D) failb= 34(D) OK [10271.262180] raid6test: test_disks(6, 35): faila= 6(D) failb= 35(D) OK [10271.262702] raid6test: test_disks(6, 36): faila= 6(D) failb= 36(D) OK [10271.263298] raid6test: test_disks(6, 37): faila= 6(D) failb= 37(D) OK [10271.263856] raid6test: test_disks(6, 38): faila= 6(D) failb= 38(D) OK [10271.264396] raid6test: test_disks(6, 39): faila= 6(D) failb= 39(D) OK [10271.264973] raid6test: test_disks(6, 40): faila= 6(D) failb= 40(D) OK [10271.265491] raid6test: test_disks(6, 41): fisks(6, 44): faila= 6(D) failb= 44(D) OK [10271.766272] raid6test: test_disks(6, 45): faila= 6(D) failb= 45(D) OK [10271.766837] raid6test: 
test_disks(6, 46): faila= 6(D) failb= 46(D) OK [10271.767326] raid6test: test_disks(6, 47): faila= 6(D) failb= 47(D) OK [10271.767881] raid6test: test_disks(6, 48): faila= 6(D) failb= 48(D) OK [10271.768400] raid6test: test_disks(6, 49): faila= 6(D) failb= 49(D) OK [10271.769293] raid6test: test_disks(6, 50): faila= 6(D) failb= 50(D) OK [10271.769771] raid6test: test_disks(6, 51): faila= 6(D) failb= 51(D) OK [10271.770303] raid6test: test_disks(6, 52): faila= 6(D) failb= 52(D) OK [10271.770778] raid6test: test_disks(6, 53): faila= 6(D) failb= 53(D) OK [10271.771307] raid6test: test_disks(6, 54): faila= 6(D) failb= 54(D) OK [10271.771783] raid6test: test_disks(6, 55): faila= 6(D) failb= 55(D) OK [10271.772311] raid6test: test_disks(6, 56): faila= 6(D) failb= 56(D) OK [10271.772852] raid6test: test_disks(6, 57): faila= 6(D) failb= 57(D) OK [10271.773341] raid6test: test_disks(6, 58): faila= 6(D) failb= 58(D) OK [10271.773897] raid6test: test_disks(6, 59): faila= 6(D) failb= 59(D) OK [10271.774387] raid6test: test_disks(6, 60): failla= 6(D) failb= 63(Q) OK [10272.275155] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [10272.275677] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [10272.276243] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [10272.276750] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(D) OK [10272.277308] raid6test: test_disks(7, 12): faila= 7(D) failb= 12(D) OK [10272.277851] raid6test: test_disks(7, 13): faila= 7(D) failb= 13(D) OK [10272.278329] raid6test: test_disks(7, 14): faila= 7(D) failb= 14(D) OK [10272.278872] raid6test: test_disks(7, 15): faila= 7(D) failb= 15(D) OK [10272.279382] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [10272.279926] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [10272.280434] raid6test: test_disks(7, 18): faila= 7(D) failb= 18(D) OK [10272.280974] raid6test: test_disks(7, 19): faila= 7(D) failb= 19(D) OK [10272.281489] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [10272.282148] raid6test: test_disks(7, 21): faila= 7(D) failb= 21(D) OK [10272.282709] raid6test: test_disks(7, 22): faila= 7(D) failb= 22(D) OK [10272.283289] raid6test: test_disks(7, 23): faila= 7(D) failb= 23(D) OK [10272.283804] raid6test: test_disks(7, 24): faila= 7(D) failb= 24(D) = 27(D) OK [10272.784680] raid6test: test_disks(7, 28): faila= 7(D) failb= 28(D) OK [10272.785253] raid6test: test_disks(7, 29): faila= 7(D) failb= 29(D) OK [10272.785731] raid6test: test_disks(7, 30): faila= 7(D) failb= 30(D) OK [10272.786299] raid6test: test_disks(7, 31): faila= 7(D) failb= 31(D) OK [10272.786806] raid6test: test_disks(7, 32): faila= 7(D) failb= 32(D) OK [10272.787326] raid6test: test_disks(7, 33): faila= 7(D) failb= 33(D) OK [10272.787868] raid6test: test_disks(7, 34): faila= 7(D) failb= 34(D) OK [10272.788387] raid6test: test_disks(7, 35): faila= 7(D) failb= 35(D) OK [10272.788921] raid6test: test_disks(7, 36): faila= 7(D) failb= 36(D) OK [10272.789497] raid6test: test_disks(7, 37): faila= 7(D) failb= 37(D) OK [10272.790073] raid6test: test_disks(7, 38): faila= 7(D) failb= 38(D) OK [10272.790582] raid6test: test_disks(7, 39): faila= 7(D) failb= 39(D) OK [10272.791147] raid6test: test_disks(7, 40): faila= 7(D) failb= 40(D) OK [10272.791656] raid6test: test_disks(7, 41): faila= 7(D) failb= 41(D) OK [10272.792214] raid6test: test_disks(7, 42): faila= 7(D) failb= 42(D) OK [10272.792696] raid6test: test_disks(7, 43): faila= 7(D) failb= 43(D) OK [10272.793229] raid6test: test_la= 7(D) failb= 46(D) OK [10273.193868] 
raid6test: test_disks(7, 47): faila=[10273.275877] raid6test: test_disks(7, 48): faila= 7(D) failb= 48(D) OK [10273.294678] raid6test: test_disks(7, 49): faila= 7(D) failb= 49(D) OK [10273.295228] raid6test: test_disks(7, 50): faila= 7(D) failb= 50(D) OK [10273.295732] raid6test: test_disks(7, 51): faila= 7(D) failb= 51(D) OK [10273.296315] raid6test: test_disks(7, 52): faila= 7(D) failb= 52(D) OK [10273.296894] raid6test: test_disks(7, 53): faila= 7(D) failb= 53(D) OK [10273.297421] raid6test: test_disks(7, 54): faila= 7(D) failb= 54(D) OK [10273.297963] raid6test: test_disks(7, 55): faila= 7(D) failb= 55(D) OK [10273.298480] raid6test: test_disks(7, 56): faila= 7(D) failb= 56(D) OK [10273.299042] raid6test: test_disks(7, 57): faila= 7(D) failb= 57(D) OK [10273.299555] raid6test: test_disks(7, 58): faila= 7(D) failb= 58(D) OK [10273.300126] raid6test: test_disks(7, 59): faila= 7(D) failb= 59(D) OK [10273.300606] raid6test: test_disks(7, 60): faila= 7(D) failb= 60(D) OK [10273.301162] raid6test: test_disks(7, 61): faila= 7(D) failb= 61(D) OK [10273.301669] raid6test: test_disks(7, 62): faila= 7(D) failb= 62(P) OK [10273.302232] raid6test: test_disks(7, 63): faila= 7(D) failb= 63(Q) OK [10273.302752] raidaid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [10273.803610] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [10273.804178] raid6test: test_disks(8, 14): faila= 8(D) failb= 14(D) OK [10273.804679] raid6test: test_disks(8, 15): faila= 8(D) failb= 15(D) OK [10273.805232] raid6test: test_disks(8, 16): faila= 8(D) failb= 16(D) OK [10273.805737] raid6test: test_disks(8, 17): faila= 8(D) failb= 17(D) OK [10273.806303] raid6test: test_disks(8, 18): faila= 8(D) failb= 18(D) OK [10273.806804] raid6test: test_disks(8, 19): faila= 8(D) failb= 19(D) OK [10273.807349] raid6test: test_disks(8, 20): faila= 8(D) failb= 20(D) OK [10273.807868] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [10273.808369] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(D) OK [10273.808898] raid6test: test_disks(8, 23): faila= 8(D) failb= 23(D) OK [10273.809462] raid6test: test_disks(8, 24): faila= 8(D) failb= 24(D) OK [10273.809998] raid6test: test_disks(8, 25): faila= 8(D) failb= 25(D) OK [10273.810537] raid6test: test_disks(8, 26): faila= 8(D) failb= 26(D) OK [10273.811090] raid6test: test_disks(8, 27): faila= 8(D) failb= 27(D) OK [10273.811582] raid6test: test_di= 30(D) OK [10274.212225] raid6test: test_disks(8, 31): faila= 8(D) failb= 3aid6test: test_disks(8, 32): faila= 8(D) failb= 32(D) OK [10274.313112] raid6test: test_disks(8, 33): faila= 8(D) failb= 33(D) OK [10274.313595] raid6test: test_disks(8, 34): faila= 8(D) failb= 34(D) OK [10274.314161] raid6test: test_disks(8, 35): faila= 8(D) failb= 35(D) OK [10274.314667] raid6test: test_disks(8, 36): faila= 8(D) failb= 36(D) OK [10274.315226] raid6test: test_disks(8, 37): faila= 8(D) failb= 37(D) OK [10274.315737] raid6test: test_disks(8, 38): faila= 8(D) failb= 38(D) OK [10274.316300] raid6test: test_disks(8, 39): faila= 8(D) failb= 39(D) OK [10274.316805] raid6test: test_disks(8, 40): faila= 8(D) failb= 40(D) OK [10274.317331] raid6test: test_disks(8, 41): faila= 8(D) failb= 41(D) OK [10274.317872] raid6test: test_disks(8, 42): faila= 8(D) failb= 42(D) OK [10274.318382] raid6test: test_disks(8, 43): faila= 8(D) failb= 43(D) OK [10274.318948] raid6test: test_disks(8, 44): faila= 8(D) failb= 44(D) OK [10274.319492] raid6test: test_disks(8, 45): faila= 8(D) failb= 45(D) OK [10274.320025] raid6test: test_disks(8, 46): faila= 8(D) failb= 
46(D) OK [10274.320515] raid6test: test_disks(8, 47): faila= 8(D) [10274.721229] raid6test: test_disks(8, 50): faila= 8(D) failb= 50(D) OK [10274.721775] raid6test: test_disks(8, 51): faila= 8(D) failb= 51(D) OK [10274isks(8, 52): faila= 8(D) failb= 52(D) OK [10274.822684] raid6test: test_disks(8, 53): faila= 8(D) failb= 53(D) OK [10274.823250] raid6test: test_disks(8, 54): faila= 8(D) failb= 54(D) OK [10274.823765] raid6test: test_disks(8, 55): faila= 8(D) failb= 55(D) OK [10274.824323] raid6test: test_disks(8, 56): faila= 8(D) failb= 56(D) OK [10274.824827] raid6test: test_disks(8, 57): faila= 8(D) failb= 57(D) OK [10274.825394] raid6test: test_disks(8, 58): faila= 8(D) failb= 58(D) OK [10274.825938] raid6test: test_disks(8, 59): faila= 8(D) failb= 59(D) OK [10274.826442] raid6test: test_disks(8, 60): faila= 8(D) failb= 60(D) OK [10274.826981] raid6test: test_disks(8, 61): faila= 8(D) failb= 61(D) OK [10274.827493] raid6test: test_disks(8, 62): faila= 8(D) failb= 62(P) OK [10274.828042] raid6test: test_disks(8, 63): faila= 8(D) failb= 63(Q) OK [10274.828552] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(D) OK [10274.829109] raid6test: test_disks(9, 11): faila= 9(D) failb= 11(D) OK [10274.829618] raid6test: test_disks(9, 12)= 14(D) OK [10275.230224] raid6test: test_disks(9, 15): faila= 9(D) failb= 1aid6test: test_disks(9, 16): faila= 9(D) failb= 16(D) OK [10275.331200] raid6test: test_disks(9, 17): faila= 9(D) failb= 17(D) OK [10275.331723] raid6test: test_disks(9, 18): faila= 9(D) failb= 18(D) OK [10275.332287] raid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [10275.332822] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [10275.333351] raid6test: test_disks(9, 21): faila= 9(D) failb= 21(D) OK [10275.333886] raid6test: test_disks(9, 22): faila= 9(D) failb= 22(D) OK [10275.334404] raid6test: test_disks(9, 23): faila= 9(D) failb= 23(D) OK [10275.334946] raid6test: test_disks(9, 24): faila= 9(D) failb= 24(D) OK [10275.335457] raid6test: test_disks(9, 25): faila= 9(D) failb= 25(D) OK [10275.336003] raid6test: test_disks(9, 26): faila= 9(D) failb= 26(D) OK [10275.336518] raid6test: test_disks(9, 27): faila= 9(D) failb= 27(D) OK [10275.337090] raid6test: test_disks(9, 28): faila= 9(D) failb= 28(D) OK [10275.337580] raid6test: test_disks(9, 29): faila= 9(D) failb= 29(D) OK [10275.338147] raid6test: test_disks(9, 30): faila= 9(D) failb= 30(D) OK [10275.338644] raid6test: test_disks(9, 31): faila= 9(D) [10275.739304] raid6test: test_disks(9, 34): faila= 9(D) failb= 34(D) OK [10275.739904] raid6test: test_disks(9, 35): faila= 9(D) failb= 35(D) OK [10275.740405] raid6test: test_disks(9, 36): faila= 9(D) failb= 36(D) OK [10275.740927] raid6test: test_disks(9, 37): faila= 9(D) failb= 37(D) OK [10275.741420] raid6test: test_disks(9, 38): faila= 9(D) failb= 38(D) OK [10275.741958] raid6test: test_disks(9, 39): faila= 9(D) failb= aid6test: test_disks(9, 40): faila= 9(D) failb= 40(D) OK [10275.842785] raid6test: test_disks(9, 41): faila= 9(D) failb= 41(D) OK [10275.843358] raid6test: test_disks(9, 42): faila= 9(D) failb= 42(D) OK [10275.843903] raid6test: test_disks(9, 43): faila= 9(D) failb= 43(D) OK [10275.844409] raid6test: test_disks(9, 44): faila= 9(D) failb= 44(D) OK [10275.844955] raid6test: test_disks(9, 45): faila= 9(D) failb= 45(D) OK [10275.845461] raid6test: test_disks(9, 46): faila= 9(D) failb= 46(D) OK [10275.845999] raid6test: test_disks(9, 47): faila= 9(D) failb= 47(D) OK [10275.846509] raid6test: test_disks(9, 48): faila= 9(D) failb= 48(D) OK [10275.847045] 
raid6test: test_disks(9, 49): faila= 9(D) failb= 49(D) OK [10275.847557] raid6test: test_disks(9, 50):= 52(D) OK [10276.248369] raid6test: test_disks(9, 53): faila= 9(D) failb= 5aid6test: test_disks(9, 54): faila= 9(D) failb= 54(D) OK [10276.349295] raid6test: test_disks(9, 55): faila= 9(D) failb= 55(D) OK [10276.349777] raid6test: test_disks(9, 56): faila= 9(D) failb= 56(D) OK [10276.350341] raid6test: test_disks(9, 57): faila= 9(D) failb= 57(D) OK [10276.350874] raid6test: test_disks(9, 58): faila= 9(D) failb= 58(D) OK [10276.351406] raid6test: test_disks(9, 59): faila= 9(D) failb= 59(D) OK [10276.351946] raid6test: test_disks(9, 60): faila= 9(D) failb= 60(D) OK [10276.352457] raid6test: test_disks(9, 61): faila= 9(D) failb= 61(D) OK [10276.352989] raid6test: test_disks(9, 62): faila= 9(D) failb= 62(P) OK [10276.353511] raid6test: test_disks(9, 63): faila= 9(D) failb= 63(Q) OK [10276.354041] raid6test: test_disks(10, 11): faila= 10(D) failb= 11(D) OK [10276.354552] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [10276.355106] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) OK [10276.355615] raid6test: test_disks(10, 14): faila= 10(D) failb= 14(D) OK [10276.356174] raid6test: test_disks(10, 15): faila= 10(D) failb= 15(D) OK [10276.356678] raid6test:ila= 10(D) failb= 18(D) OK [10276.757314] raid6test: test_disks(10, 19): fail[10276.839417] raid6test: test_disks(10, 20): faila= 10(D) failb= 20(D) OK [10276.858285] raid6test: test_disks(10, 21): faila= 10(D) failb= 21(D) OK [10276.858780] raid6test: test_disks(10, 22): faila= 10(D) failb= 22(D) OK [10276.859341] raid6test: test_disks(10, 23): faila= 10(D) failb= 23(D) OK [10276.859828] raid6test: test_disks(10, 24): faila= 10(D) failb= 24(D) OK [10276.860370] raid6test: test_disks(10, 25): faila= 10(D) failb= 25(D) OK [10276.860889] raid6test: test_disks(10, 26): faila= 10(D) failb= 26(D) OK [10276.861404] raid6test: test_disks(10, 27): faila= 10(D) failb= 27(D) OK [10276.861959] raid6test: test_disks(10, 28): faila= 10(D) failb= 28(D) OK [10276.862490] raid6test: test_disks(10, 29): faila= 10(D) failb= 29(D) OK [10276.863037] raid6test: test_disks(10, 30): faila= 10(D) failb= 30(D) OK [10276.863563] raid6test: test_disks(10, 31): faila= 10(D) failb= 31(D) OK [10276.864104] raid6test: test_disks(10, 32): faila= 10(D) failb= 32(D) OK [10276.864613] raid6test: test_disks(10, 33): faila= 10(D) failb= 33(D) OK [10276.865153] raid6test: test_disks(10, 34): faila= 10(D) failb= 34(D) OK [10276.865645] raid6test: teila= 10(D) failb= 37(D) OK [10277.266497] raid6test: test_disks(10, 38): faila= 10(D) failb= 38(D) OK [10277.267036] raid6test: test_disks(10, 39): faila= 10(D) failb= 39(D) OK [10277.267530] raid6test: test_disks(10, 40): faila= 1[10277.349521] raid6test: test_disks(10, 41): faila= 10(D) failb= 41(D) OK [10277.368401] raid6test: test_disks(10, 42): faila= 10(D) failb= 42(D) OK [10277.368980] raid6test: test_disks(10, 43): faila= 10(D) failb= 43(D) OK [10277.369523] raid6test: test_disks(10, 44): faila= 10(D) failb= 44(D) OK [10277.370084] raid6test: test_disks(10, 45): faila= 10(D) failb= 45(D) OK [10277.370619] raid6test: test_disks(10, 46): faila= 10(D) failb= 46(D) OK [10277.371203] raid6test: test_disks(10, 47): faila= 10(D) failb= 47(D) OK [10277.371739] raid6test: test_disks(10, 48): faila= 10(D) failb= 48(D) OK [10277.372313] raid6test: test_disks(10, 49): faila= 10(D) failb= 49(D) OK [10277.372905] raid6test: test_disks(10, 50): faila= 10(D) failb= 50(D) OK [10277.373428] raid6test: test_disks(10, 51): 
faila= 10(D) failb= 51(D) OK [10277.373991] raid6test: test_disks(10, 52): faila= 10(D) failb= 52(D) OK [10277.374483] raid6test: test_disks(10, 53): faila= 10(D) failb= 53(D) b= 56(D) OK [10277.875370] raid6test: test_disks(10, 57): faila= 10(D) failb= 57(D) OK [10277.875916] raid6test: test_disks(10, 58): faila= 10(D) failb= 58(D) OK [10277.876479] raid6test: test_disks(10, 59): faila= 10(D) failb= 59(D) OK [10277.877046] raid6test: test_disks(10, 60): faila= 10(D) failb= 60(D) OK [10277.877581] raid6test: test_disks(10, 61): faila= 10(D) failb= 61(D) OK [10277.878167] raid6test: test_disks(10, 62): faila= 10(D) failb= 62(P) OK [10277.878685] raid6test: test_disks(10, 63): faila= 10(D) failb= 63(Q) OK [10277.879280] raid6test: test_disks(11, 12): faila= 11(D) failb= 12(D) OK [10277.879784] raid6test: test_disks(11, 13): faila= 11(D) failb= 13(D) OK [10277.880371] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [10277.880906] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [10277.881469] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [10277.882037] raid6test: test_disks(11, 17): faila= 11(D) failb= 17(D) OK [10277.882572] raid6test: test_disks(11, 18): faila= 11(D) failb= 18(D) OK [10277.883146] raid6test: test_disks(11, 19): faila= 11(D) fail[10278.264474] raid6test: test_disks(11, 22): faila= 11(D) failb= 22(D) OK [10278.284367] raid6test: test_disks(11, 23): faila= 11(D) failb= 23(D) OK [10278.284953] raid6test: test_disks(11, 24): faila= 11(D) failb= 24(D) OK [10278.285448] raid6test: test_disks(11, 25): faila= 11(D) failb= 25(D) OK [10278.285983] raid6test: test_disks(11, 26): faila= 11(D) failb= 26(D) OK [10278.286483] raid6test: test_disks(11, 27): faila= 11(D) failb= 27(D) OK [10278.287020] raid6test: test_disks(11, 28): faila= 11(D) failb= 28(D) OK [10278.287630] raid6test: test_disks(11, 29): faila= 11(D) failb= 29(D) OK [10278.288157] raid6test: test_disks(11, 30): faila= 11(D) failb= 30(D) OK [10278.288665] raid6test: test_disks(11, 31): faila= 11(D) failb= 31(D) OK [10278.289251] raid6test: test_disks(11, 32): faila= 11(D) failb= 32(D) OK [10278.289767] raid6test: test_disks(11, 33): faila= 11(D) failb= 33(D) OK [10278.290330] raid6test: test_disks(11, 34): faila= 11(D) failb= 34(D) OK [10278.290843] raid6tesb= 35(D) OK [10278.391233] raid6test: test_disks(11, 36): faila= 11(D) failb= 36(D) OK [10278.391773] raid6test: test_disks(11, 37): faila= 11(D) failb= 37(D) OK [10278.392376] raid6test: test_disks(11, 38): faila= 11(D) failb= 38(b= 41(D) OK [10278.893328] raid6test: test_disks(11, 42): faila= 11(D) failb= 42(D) OK [10278.894233] raid6test: test_disks(11, 43): faila= 11(D) failb= 43(D) OK [10278.894748] raid6test: test_disks(11, 44): faila= 11(D) failb= 44(D) OK [10278.895306] raid6test: test_disks(11, 45): faila= 11(D) failb= 45(D) OK [10278.895794] raid6test: test_disks(11, 46): faila= 11(D) failb= 46(D) OK [10278.896331] raid6test: test_disks(11, 47): faila= 11(D) failb= 47(D) OK [10278.896823] raid6test: test_disks(11, 48): faila= 11(D) failb= 48(D) OK [10278.897368] raid6test: test_disks(11, 49): faila= 11(D) failb= 49(D) OK [10278.897904] raid6test: test_disks(11, 50): faila= 11(D) failb= 50(D) OK [10278.898425] raid6test: test_disks(11, 51): faila= 11(D) failb= 51(D) OK [10278.898952] raid6test: test_disks(11, 52): faila= 11(D) failb= 52(D) OK [10278.899450] raid6test: test_disks(11, 53): faila= 11(D) failb= 53(D) OK [10278.900010] raid6test: test_disks(11, 54): faila= 11(D) failb= 54(D) OK [10278.900539] raid6test: 
test_disks(11, 55): faila= 11(D) failb= 55(D) OK [10278.901110] raid6test: test_disks(11, 56): faila= 11(D) failb= 56(D) [10279.282516] raid6test: test_disks(11, 59): faila= 11(D) failb= 59(D) OK [1isks(11, 60): faila= 11(D) failb= 60(D) OK [10279.402528] raid6test: test_disks(11, 61): faila= 11(D) failb= 61(D) OK [10279.403094] raid6test: test_disks(11, 62): faila= 11(D) failb= 62(P) OK [10279.403640] raid6test: test_disks(11, 63): faila= 11(D) failb= 63(Q) OK [10279.404232] raid6test: test_disks(12, 13): faila= 12(D) failb= 13(D) OK [10279.404735] raid6test: test_disks(12, 14): faila= 12(D) failb= 14(D) OK [10279.405318] raid6test: test_disks(12, 15): faila= 12(D) failb= 15(D) OK [10279.405818] raid6test: test_disks(12, 16): faila= 12(D) failb= 16(D) OK [10279.406405] raid6test: test_disks(12, 17): faila= 12(D) failb= 17(D) OK [10279.406936] raid6test: test_disks(12, 18): faila= 12(D) failb= 18(D) OK [10279.407499] raid6test: test_disks(12, 19): faila= 12(D) failb= 19(D) OK [10279.408063] raid6test: test_disks(12, 20): faila= 12(D) failb= 20(D) OK [10279.408592] raid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [10279.409178] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(D) OK [10279.409677] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(D) OK [10279.410258] raid6test: test_disks(12, 24): fai[10279.811201] raid6test: test_disks(12, 27): faila= 12(D) failb= 27(D) OK [1isks(12, 28): faila= 12(D) failb= 28(D) OK [10279.912039] raid6test: test_disks(12, 29): faila= 12(D) failb= 29(D) OK [10279.912575] raid6test: test_disks(12, 30): faila= 12(D) failb= 30(D) OK [10279.913094] raid6test: test_disks(12, 31): faila= 12(D) failb= 31(D) OK [10279.913634] raid6test: test_disks(12, 32): faila= 12(D) failb= 32(D) OK [10279.914188] raid6test: test_disks(12, 33): faila= 12(D) failb= 33(D) OK [10279.914680] raid6test: test_disks(12, 34): faila= 12(D) failb= 34(D) OK [10279.915242] raid6test: test_disks(12, 35): faila= 12(D) failb= 35(D) OK [10279.915735] raid6test: test_disks(12, 36): faila= 12(D) failb= 36(D) OK [10279.916277] raid6test: test_disks(12, 37): faila= 12(D) failb= 37(D) OK [10279.916767] raid6test: test_disks(12, 38): faila= 12(D) failb= 38(D) OK [10279.917316] raid6test: test_disks(12, 39): faila= 12(D) failb= 39(D) OK [10279.917808] raid6test: test_disks(12, 40): faila= 12(D) failb= 40(D) OK [10279.918352] raid6test: test_disks(12, 41): faila= 12(D) failb= 41(D) OK [10279.918845] raid6test: test_disks(12,b= 44(D) OK [10280.319671] raid6test: test_disks(12, 45): faila= 12(D) failb=aid6test: test_disks(12, 46): faila= 12(D) failb= 46(D) OK [10280.420820] raid6test: test_disks(12, 47): faila= 12(D) failb= 47(D) OK [10280.421341] raid6test: test_disks(12, 48): faila= 12(D) failb= 48(D) OK [10280.421846] raid6test: test_disks(12, 49): faila= 12(D) failb= 49(D) OK [10280.422403] raid6test: test_disks(12, 50): faila= 12(D) failb= 50(D) OK [10280.422948] raid6test: test_disks(12, 51): faila= 12(D) failb= 51(D) OK [10280.423466] raid6test: test_disks(12, 52): faila= 12(D) failb= 52(D) OK [10280.424058] raid6test: test_disks(12, 53): faila= 12(D) failb= 53(D) OK [10280.424560] raid6test: test_disks(12, 54): faila= 12(D) failb= 54(D) OK [10280.425139] raid6test: test_disks(12, 55): faila= 12(D) failb= 55(D) OK [10280.425700] raid6test: test_disks(12, 56): faila= 12(D) failb= 56(D) OK [10280.426251] raid6test: test_disks(12, 57): faila= 12(D) failb= 57(D) OK [10280.426778] raid6test: test_disks(12, 58): faila= 12(D) failb= 58(D) OK [10280.427332] raid6test: test_disks(12, 
59): faila= 12(D) failb= 59(D) OK [10280.427824] raid6test: test_disks(12, 60): faila= 12(D) failb= 60(D) OK [10280.428337] raid6test: test_disks(12, 61): faila= 12(D) failb= 61(D) OK [10280.4aid6test: test_disks(13, 14): faila= 13(D) failb= 14(D) OK [10280.829363] raib= 15(D) OK [10280.929772] raid6test: test_disks(13, 16): faila= 13(D) failb= 16(D) OK [10280.930348] raid6test: test_disks(13, 17): faila= 13(D) failb= 17(D) OK [10280.930914] raid6test: test_disks(13, 18): faila= 13(D) failb= 18(D) OK [10280.931411] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [10280.932003] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [10280.932501] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK [10280.933082] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(D) OK [10280.933588] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(D) OK [10280.934140] raid6test: test_disks(13, 24): faila= 13(D) failb= 24(D) OK [10280.934630] raid6test: test_disks(13, 25): faila= 13(D) failb= 25(D) OK [10280.935144] raid6test: test_disks(13, 26): faila= 13(D) failb= 26(D) OK [10280.935639] raid6test: test_disks(13, 27): faila= 13(D) failb= 27(D) OK [10280.936182] raid6test: test_disks(13, 28): faila= 13(D) failb= 28(D) OK [10280.936725] raid6test: test_disks(13, 29): faila= 13(D) failb= 29(D) OK [10280.937334] raid6tila= 13(D) failb= 32(D) OK [10281.338063] raid6test: test_disks(13, 33): fail[10281.420100] raid6test: test_disks(13, 34): faila= 13(D) failb= 34(D) OK [10281.438917] raid6test: test_disks(13, 35): faila= 13(D) failb= 35(D) OK [10281.439432] raid6test: test_disks(13, 36): faila= 13(D) failb= 36(D) OK [10281.439948] raid6test: test_disks(13, 37): faila= 13(D) failb= 37(D) OK [10281.440461] raid6test: test_disks(13, 38): faila= 13(D) failb= 38(D) OK [10281.440984] raid6test: test_disks(13, 39): faila= 13(D) failb= 39(D) OK [10281.441495] raid6test: test_disks(13, 40): faila= 13(D) failb= 40(D) OK [10281.442056] raid6test: test_disks(13, 41): faila= 13(D) failb= 41(D) OK [10281.442566] raid6test: test_disks(13, 42): faila= 13(D) failb= 42(D) OK [10281.443084] raid6test: test_disks(13, 43): faila= 13(D) failb= 43(D) OK [10281.443608] raid6test: test_disks(13, 44): faila= 13(D) failb= 44(D) OK [10281.444124] raid6test: test_disks(13, 45): faila= 13(D) failb= 45(D) OK [10281.444616] raid6test: test_disks(13, 46): faila= 13(D) failb= 46(D) OK [10281.445136] raid6test: test_disks(13, 47): faila= 13(D) failb= 47(D) OK [10281.445628] raid6test: test_disks(13, 48): faila= 13(D) failb= 48(D) OK [10281.446143] raid6test: test_disks(13, 49): faila= 13(D) failb= 49(D) OK [10281.446632] raid6test: test_disks(13, 50): faila= 13(D) failb= 50(D)b= 53(D) OK [10281.947467] raid6test: test_disks(13, 54): faila= 13(D) failb= 54(D) OK [10281.948004] raid6test: test_disks(13, 55): faila= 13(D) failb= 55(D) OK [10281.948513] raid6test: test_disks(13, 56): faila= 13(D) failb= 56(D) OK [10281.949078] raid6test: test_disks(13, 57): faila= 13(D) failb= 57(D) OK [10281.949598] raid6test: test_disks(13, 58): faila= 13(D) failb= 58(D) OK [10281.950167] raid6test: test_disks(13, 59): faila= 13(D) failb= 59(D) OK [10281.950697] raid6test: test_disks(13, 60): faila= 13(D) failb= 60(D) OK [10281.951243] raid6test: test_disks(13, 61): faila= 13(D) failb= 61(D) OK [10281.951735] raid6test: test_disks(13, 62): faila= 13(D) failb= 62(P) OK [10281.952292] raid6test: test_disks(13, 63): faila= 13(D) failb= 63(Q) OK [10281.952826] raid6test: test_disks(14, 15): faila= 14(D) failb= 15(D) OK [10281.953378] 
raid6test: test_disks(14, 16): faila= 14(D) failb= 16(D) OK [10281.953870] raid6test: test_disks(14, 17): faila= 14(D) failb= 17(D) OK [10281.954414] raid6test: test_disks(14, 18): faila= 14(D) failb= 18(D) OK [10281.954958] raid6test: test_disks(14, 19): faila= 14(D) failb= 19(D) OK [10281.955466] raid6test: test_disks(14, 20): faila= 14(D) failb= 20(D) OK [10281.955992] raid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [10281.956503] raid6test: test_disks(14, 22): faila= 14(D) failb= 22b= 25(D) OK [10282.457392] raid6test: test_disks(14, 26): faila= 14(D) failb= 26(D) OK [10282.457930] raid6test: test_disks(14, 27): faila= 14(D) failb= 27(D) OK [10282.458493] raid6test: test_disks(14, 28): faila= 14(D) failb= 28(D) OK [10282.459071] raid6test: test_disks(14, 29): faila= 14(D) failb= 29(D) OK [10282.459602] raid6test: test_disks(14, 30): faila= 14(D) failb= 30(D) OK [10282.460166] raid6test: test_disks(14, 31): faila= 14(D) failb= 31(D) OK [10282.460658] raid6test: test_disks(14, 32): faila= 14(D) failb= 32(D) OK [10282.461253] raid6test: test_disks(14, 33): faila= 14(D) failb= 33(D) OK [10282.461762] raid6test: test_disks(14, 34): faila= 14(D) failb= 34(D) OK [10282.462332] raid6test: test_disks(14, 35): faila= 14(D) failb= 35(D) OK [10282.462874] raid6test: test_disks(14, 36): faila= 14(D) failb= 36(D) OK [10282.463438] raid6test: test_disks(14, 37): faila= 14(D) failb= 37(D) OK [10282.463961] raid6test: test_disks(14, 38): faila= 14(D) failb= 38(D) OK [10282.464486] raid6test: test_disks(14, 39): faila= 14(D) failb= 39(D) OK [10282.465033] raid6test: test_disks(14, 40): faila= 14(D) failb= 40(D) OK [10282.465538] raid6test: test_disks(14, 41): faila= 14(D) failb= 41(D) OK [10282.466090] raid6test: test_disks(14, 42): faila= 14(D) failb= 42(D) OK [10282.466611] raiaid6test: test_disks(14, 46): faila= 14(D) failb= 46(D) OK [10282.967406] raid6test: test_disks(14, 47): faila= 14(D) failb= 47(D) OK [10282.967989] raid6test: test_disks(14, 48): faila= 14(D) failb= 48(D) OK [10282.968508] raid6test: test_disks(14, 49): faila= 14(D) failb= 49(D) OK [10282.969069] raid6test: test_disks(14, 50): faila= 14(D) failb= 50(D) OK [10282.969605] raid6test: test_disks(14, 51): faila= 14(D) failb= 51(D) OK [10282.970230] raid6test: test_disks(14, 52): faila= 14(D) failb= 52(D) OK [10282.970711] raid6test: test_disks(14, 53): faila= 14(D) failb= 53(D) OK [10282.971332] raid6test: test_disks(14, 54): faila= 14(D) failb= 54(D) OK [10282.971829] raid6test: test_disks(14, 55): faila= 14(D) failb= 55(D) OK [10282.972400] raid6test: test_disks(14, 56): faila= 14(D) failb= 56(D) OK [10282.972979] raid6test: test_disks(14, 57): faila= 14(D) failb= 57(D) OK [10282.973499] raid6test: test_disks(14, 58): faila= 14(D) failb= 58(D) OK [10282.974021] raid6test: test_disks(14, 59): faila= 14(D) failb= 59(D) OK [10282.974535] raid6test: test_disks(14, 60): faila= 14(D) failb= 60(D) OK [10282.975052] raid6test: test_disks(15, 16): faila= 15(D) failb= 16(D) OK [10283.475829] raid6test: test_disks(15, 17): faila= 15(D) failb= 17(D) OK [10283.476320] raid6test: test_disks(15, 18): faila= 15(D) failb= 18(D) OK [10283.476831] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [10283.477393] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [10283.477971] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [10283.478514] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(D) OK [10283.479047] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(D) OK [10283.479552] raid6test: 
test_disks(15, 24): faila= 15(D) failb= 24(D) OK [10283.480090] raid6test: test_disks(15, 25): faila= 15(D) failb= 25(D) OK [10283.480595] raid6test: test_disks(15, 26): faila= 15(D) failb= 26(D) OK [10283.481130] raid6test: test_disks(15, 27): faila= 15(D) failb= 27(D) OK [10283.481640] raid6test: test_disks(15, 28): faila= 15(D) failb= 28(D) OK [10283.482174] raid6test: test_disks(15, 29): faila= 15(D) failb= 29(D) OK [10283.482706] raid6test: test_disks(15, 30): faila= 15(D) failb= 30(D) OK [10283.483263] raid6test: test_disks(15, 31isks(15, 34): faila= 15(D) failb= 34(D) OK [10283.984083] raid6test: test_disks(15, 35): faila= 15(D) failb= 35(D) OK [10283.984600] raid6test: test_disks(15, 36): faila= 15(D) failb= 36(D) OK [10283.985143] raid6test: test_disks(15, 37): faila= 15(D) failb= 37(D) OK [10283.985645] raid6test: test_disks(15, 38): faila= 15(D) failb= 38(D) OK [10283.986178] raid6test: test_disks(15, 39): faila= 15(D) failb= 39(D) OK [10283.986692] raid6test: test_disks(15, 40): faila= 15(D) failb= 40(D) OK [10283.987248] raid6test: test_disks(15, 41): faila= 15(D) failb= 41(D) OK [10283.987755] raid6test: test_disks(15, 42): faila= 15(D) failb= 42(D) OK [10283.988315] raid6test: test_disks(15, 43): faila= 15(D) failb= 43(D) OK [10283.988821] raid6test: test_disks(15, 44): faila= 15(D) failb= 44(D) OK [10283.989383] raid6test: test_disks(15, 45): faila= 15(D) failb= 45(D) OK [10283.989885] raid6test: test_disks(15, 46): faila= 15(D) failb= 46(D) OK [10283.990443] raid6test: test_disks(15, 47): faila= 15(D) failb= 47(D) OK [10283.990977] raid6test: test_disks(15, 48): faila= 15(D) failb= 48(D) OK [10283.991477] raid6test: test_disks(15, 52): faila= 15(D) failb= 52(D) OK [10284.492299] raid6test: test_disks(15, 53): faila= 15(D) failb= 53(D) OK [10284.492838] raid6test: test_disks(15, 54): faila= 15(D) failb= 54(D) OK [10284.493360] raid6test: test_disks(15, 55): faila= 15(D) failb= 55(D) OK [10284.493867] raid6test: test_disks(15, 56): faila= 15(D) failb= 56(D) OK [10284.494413] raid6test: test_disks(15, 57): faila= 15(D) failb= 57(D) OK [10284.494962] raid6test: test_disks(15, 58): faila= 15(D) failb= 58(D) OK [10284.495494] raid6test: test_disks(15, 59): faila= 15(D) failb= 59(D) OK [10284.496036] raid6test: test_disks(15, 60): faila= 15(D) failb= 60(D) OK [10284.496530] raid6test: test_disks(15, 61): faila= 15(D) failb= 61(D) OK [10284.497070] raid6test: test_disks(15, 62): faila= 15(D) failb= 62(P) OK [10284.497593] raid6test: test_disks(15, 63): faila= 15(D) failb= 63(Q) OK [10284.498134] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [10284.498638] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [10284.499177] raid6test: test_disks(16, 19): faila= 16(D) failb= 19(D) OK [10284.499684] raid6test: test_disks(16, 20): faila= 16(D) failb= 20(D) OK [10284.500245] raid6test: test_disks(16, 21): faila= 16(D) failb= 21(D) OK [10284.500752] raid6test: test_diskisks(16, 25): faila= 16(D) failb= 25(D) OK [10285.001559] raid6test: test_disks(16, 26): faila= 16(D) failb= 26(D) OK [10285.002103] raid6test: test_disks(16, 27): faila= 16(D) failb= 27(D) OK [10285.002613] raid6test: test_disks(16, 28): faila= 16(D) failb= 28(D) OK [10285.003165] raid6test: test_disks(16, 29): faila= 16(D) failb= 29(D) OK [10285.003675] raid6test: test_disks(16, 30): faila= 16(D) failb= 30(D) OK [10285.004214] raid6test: test_disks(16, 31): faila= 16(D) failb= 31(D) OK [10285.004720] raid6test: test_disks(16, 32): faila= 16(D) failb= 32(D) OK [10285.005303] raid6test: test_disks(16, 
33): faila= 16(D) failb= 33(D) OK [10285.005857] raid6test: test_disks(16, 34): faila= 16(D) failb= 34(D) OK [10285.006442] raid6test: test_disks(16, 35): faila= 16(D) failb= 35(D) OK [10285.006983] raid6test: test_disks(16, 36): faila= 16(D) failb= 36(D) OK [10285.007477] raid6test: test_disks(16, 37): faila= 16(D) failb= 37(D) OK [10285.008027] raid6test: test_disks(16, 38): faila= 16(D) failb= 38(D) OK [10285.008560] raid6test: test_disks(16, 39): faila= 16(D) failb= 39(D) OK [10285.009099] raid6test: test_disks(16, 40): faila= 16(D) failb= 40(D) OK [10285.009608] raid6test: test_disks(16, 41): faila= 16(D) faila= 16(D) failb= 44(D) OK [10285.510456] raid6test: test_disks(16, 45): faila= 16(D) failb= 45(D) OK [10285.511000] raid6test: test_disks(16, 46): faila= 16(D) failb= 46(D) OK [10285.511494] raid6test: test_disks(16, 47): faila= 16(D) failb= 47(D) OK [10285.512039] raid6test: test_disks(16, 48): faila= 16(D) failb= 48(D) OK [10285.512572] raid6test: test_disks(16, 49): faila= 16(D) failb= 49(D) OK [10285.513124] raid6test: test_disks(16, 50): faila= 16(D) failb= 50(D) OK [10285.513640] raid6test: test_disks(16, 51): faila= 16(D) failb= 51(D) OK [10285.514179] raid6test: test_disks(16, 52): faila= 16(D) failb= 52(D) OK [10285.514688] raid6test: test_disks(16, 53): faila= 16(D) failb= 53(D) OK [10285.515230] raid6test: test_disks(16, 54): faila= 16(D) failb= 54(D) OK [10285.515744] raid6test: test_disks(16, 55): faila= 16(D) failb= 55(D) OK [10285.516299] raid6test: test_disks(16, 56): faila= 16(D) failb= 56(D) OK [10285.516808] raid6test: test_disks(16, 57): faila= 16(D) failb= 57(D) OK [10285.517374] raid6test: test_disks(16, 58): faila= 16(D) failb= 58(D) OK [10285.517885] raid6test: test_disks(16, 59): faila= 16(D) failb= 59(D) OK [10285.518428] raid6test: test_disks(16, 60): faiila= 16(D) failb= 63(Q) OK [10286.019585] raid6test: test_disks(17, 18): faila= 17(D) failb= 18(D) OK [10286.020123] raid6test: test_disks(17, 19): faila= 17(D) failb= 19(D) OK [10286.020630] raid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [10286.021167] raid6test: test_disks(17, 21): faila= 17(D) failb= 21(D) OK [10286.021670] raid6test: test_disks(17, 22): faila= 17(D) failb= 22(D) OK [10286.022210] raid6test: test_disks(17, 23): faila= 17(D) failb= 23(D) OK [10286.022753] raid6test: test_disks(17, 24): faila= 17(D) failb= 24(D) OK [10286.023283] raid6test: test_disks(17, 25): faila= 17(D) failb= 25(D) OK [10286.023790] raid6test: test_disks(17, 26): faila= 17(D) failb= 26(D) OK [10286.024363] raid6test: test_disks(17, 27): faila= 17(D) failb= 27(D) OK [10286.024867] raid6test: test_disks(17, 28): faila= 17(D) failb= 28(D) OK [10286.025432] raid6test: test_disks(17, 29): faila= 17(D) failb= 29(D) OK [10286.025974] raid6test: test_disks(17, 30): faila= 17(D) failb= 30(D) OK [10286.026507] raid6test: test_disks(17, 31): faila= 17(D) failb= 31(D) OK [10286.027056] raid6test: test_disks(17, 32): faila= 17(D) failb= 32(D) OK [10286.027587] raid6test: test_disks(17, 33): faila= 17(D) failb= 33(D) OK b= 36(D) OK [10286.528432] raid6test: test_disks(17, 37): faila= 17(D) failb= 37(D) OK [10286.528992] raid6test: test_disks(17, 38): faila= 17(D) failb= 38(D) OK [10286.529525] raid6test: test_disks(17, 39): faila= 17(D) failb= 39(D) OK [10286.530075] raid6test: test_disks(17, 40): faila= 17(D) failb= 40(D) OK [10286.530609] raid6test: test_disks(17, 41): faila= 17(D) failb= 41(D) OK [10286.531155] raid6test: test_disks(17, 42): faila= 17(D) failb= 42(D) OK [10286.531686] raid6test: test_disks(17, 
43): faila= 17(D) failb= 43(D) OK [10286.532222] raid6test: test_disks(17, 44): faila= 17(D) failb= 44(D) OK [10286.532729] raid6test: test_disks(17, 45): faila= 17(D) failb= 45(D) OK [10286.533298] raid6test: test_disks(17, 46): faila= 17(D) failb= 46(D) OK [10286.533805] raid6test: test_disks(17, 47): faila= 17(D) failb= 47(D) OK [10286.534376] raid6test: test_disks(17, 48): faila= 17(D) failb= 48(D) OK [10286.534884] raid6test: test_disks(17, 49): faila= 17(D) failb= 49(D) OK [10286.535449] raid6test: test_disks(17, 50): faila= 17(D) failb= 50(D) OK [10286.535988] raid6test: test_disks(17, 51): faila= 17(D) failb= 51(D) OK [10286.536523] raid6test: test_disks(17, 52): faila= 17(D) failbb= 55(D) OK [10287.037335] raid6test: test_disks(17, 56): faila= 17(D) failb= 56(D) OK [10287.037866] raid6test: test_disks(17, 57): faila= 17(D) failb= 57(D) OK [10287.038404] raid6test: test_disks(17, 58): faila= 17(D) failb= 58(D) OK [10287.038880] raid6test: test_disks(17, 59): faila= 17(D) failb= 59(D) OK [10287.039440] raid6test: test_disks(17, 60): faila= 17(D) failb= 60(D) OK [10287.039986] raid6test: test_disks(17, 61): faila= 17(D) failb= 61(D) OK [10287.040515] raid6test: test_disks(17, 62): faila= 17(D) failb= 62(P) OK [10287.041069] raid6test: test_disks(17, 63): faila= 17(D) failb= 63(Q) OK [10287.041630] raid6test: test_disks(18, 19): faila= 18(D) failb= 19(D) OK [10287.042202] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [10287.042774] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK [10287.043367] raid6test: test_disks(18, 22): faila= 18(D) failb= 22(D) OK [10287.043884] raid6test: test_disks(18, 23): faila= 18(D) failb= 23(D) OK [10287.044450] raid6test: test_disks(18, 24): faila= 18(D) failb= 24(D) OK [10287.044993] raid6test: test_disks(18, 25): faila= 18(D) failb= 25(D) OK [10287.045523] raid6test: test_disks(18, 26): faila= 18(D) failb=b= 29(D) OK [10287.546336] raid6test: test_disks(18, 30): faila= 18(D) failb= 30(D) OK [10287.546857] raid6test: test_disks(18, 31): faila= 18(D) failb= 31(D) OK [10287.547434] raid6test: test_disks(18, 32): faila= 18(D) failb= 32(D) OK [10287.547911] raid6test: test_disks(18, 33): faila= 18(D) failb= 33(D) OK [10287.548466] raid6test: test_disks(18, 34): faila= 18(D) failb= 34(D) OK [10287.549005] raid6test: test_disks(18, 35): faila= 18(D) failb= 35(D) OK [10287.549538] raid6test: test_disks(18, 36): faila= 18(D) failb= 36(D) OK [10287.550078] raid6test: test_disks(18, 37): faila= 18(D) failb= 37(D) OK [10287.550606] raid6test: test_disks(18, 38): faila= 18(D) failb= 38(D) OK [10287.551145] raid6test: test_disks(18, 39): faila= 18(D) failb= 39(D) OK [10287.551657] raid6test: test_disks(18, 40): faila= 18(D) failb= 40(D) OK [10287.552199] raid6test: test_disks(18, 41): faila= 18(D) failb= 41(D) OK [10287.552705] raid6test: test_disks(18, 42): faila= 18(D) failb= 42(D) OK [10287.553250] raid6test: test_disks(18, 43): faila= 18(D) failb= 43(D) OK [10287.553763] raid6test: test_disks(18, 44): faila= 18(D) failb= 44(D) OK [10287.554331] raid6test: test_disks(18, 45): faila= 18(D) failb= 45(D) OK [10287.581897] [10288.055155] raid6test: test_disks(18, 49): faila= 18(D) failb= 49(D) OK [10288.055685] raid6test: test_disks(18, 50): faila= 18(D) failb= 50(D) OK [10288.056241] raid6test: test_disks(18, 51): faila= 18(D) failb= 51(D) OK [10288.056759] raid6test: test_disks(18, 52): faila= 18(D) failb= 52(D) OK [10288.057331] raid6test: test_disks(18, 53): faila= 18(D) failb= 53(D) OK [10288.057903] raid6test: test_disks(18, 54): faila= 
18(D) failb= 54(D) OK [10288.058477] raid6test: test_disks(18, 55): faila= 18(D) failb= 55(D) OK [10288.059020] raid6test: test_disks(18, 56): faila= 18(D) failb= 56(D) OK [10288.059554] raid6test: test_disks(18, 57): faila= 18(D) failb= 57(D) OK [10288.060100] raid6test: test_disks(18, 58): faila= 18(D) failb= 58(D) OK [10288.060634] raid6test: test_disks(18, 59): faila= 18(D) failb= 59(D) OK [10288.061178] raid6test: test_disks(18, 60): faila= 18(D) failb= 60(D) OK [10288.061659] raid6test: test_disks(18, 61): faila= 18(D) failb= 61(D) OK [10288.062209] raid6test: test_disks(18, 62): faila= 18(D) failb= 62(P) OK [10288.062727] raid6test: test_disks(18, 63): faila= 18(D) failb= 63(Q) OK [10288.063281] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [1[10288.564106] raid6test: test_disks(19, 24): faila= 19(D) failb= 24(D) OK [10288.564652] raid6test: test_disks(19, 25): faila= 19(D) failb= 25(D) OK [10288.565206] raid6test: test_disks(19, 26): faila= 19(D) failb= 26(D) OK [10288.565688] raid6test: test_disks(19, 27): faila= 19(D) failb= 27(D) OK [10288.566233] raid6test: test_disks(19, 28): faila= 19(D) failb= 28(D) OK [10288.566742] raid6test: test_disks(19, 29): faila= 19(D) failb= 29(D) OK [10288.567288] raid6test: test_disks(19, 30): faila= 19(D) failb= 30(D) OK [10288.567784] raid6test: test_disks(19, 31): faila= 19(D) failb= 31(D) OK [10288.568371] raid6test: test_disks(19, 32): faila= 19(D) failb= 32(D) OK [10288.568937] raid6test: test_disks(19, 33): faila= 19(D) failb= 33(D) OK [10288.569556] raid6test: test_disks(19, 34): faila= 19(D) failb= 34(D) OK [10288.570117] raid6test: test_disks(19, 35): faila= 19(D) failb= 35(D) OK [10288.570606] raid6test: test_disks(19, 36): faila= 19(D) failb= 36(D) OK [10288.571149] raid6test: test_disks(19, 37): faila= 19(D) failb= 37(D) OK [10288.571651] raid6test: test_disks(19, 38): faila= 19(D) failb= 38(D) OK [10288.572195] raid6test: test_disks(19, 39): faila= 19(D) failb= 39(D) OK [10288.572674] raid6test: test_disks(19, 40): faila= 19(D) failb= 40(D) OK [10[10289.073504] raid6test: test_disks(19, 44): faila= 19(D) failb= 44(D) OK [10289.074080] raid6test: test_disks(19, 45): faila= 19(D) failb= 45(D) OK [10289.074616] raid6test: test_disks(19, 46): faila= 19(D) failb= 46(D) OK [10289.075166] raid6test: test_disks(19, 47): faila= 19(D) failb= 47(D) OK [10289.075645] raid6test: test_disks(19, 48): faila= 19(D) failb= 48(D) OK [10289.076189] raid6test: test_disks(19, 49): faila= 19(D) failb= 49(D) OK [10289.076695] raid6test: test_disks(19, 50): faila= 19(D) failb= 50(D) OK [10289.077244] raid6test: test_disks(19, 51): faila= 19(D) failb= 51(D) OK [10289.077755] raid6test: test_disks(19, 52): faila= 19(D) failb= 52(D) OK [10289.078296] raid6test: test_disks(19, 53): faila= 19(D) failb= 53(D) OK [10289.078807] raid6test: test_disks(19, 54): faila= 19(D) failb= 54(D) OK [10289.079374] raid6test: test_disks(19, 55): faila= 19(D) failb= 55(D) OK [10289.079850] raid6test: test_disks(19, 56): faila= 19(D) failb= 56(D) OK [10289.080418] raid6test: test_disks(19, 57): faila= 19(D) failb= 57(D) OK [10289.080927] raid6test: test_disks(19, 58): faila= 19(D) failb= 58(D) OK [10289.081495] raid6test: teaid6test: test_disks(19, 62): faila= 19(D) failb= 62(P) OK [10289.582282] raid6test: test_disks(19, 63): faila= 19(D) failb= 63(Q) OK [10289.582764] raid6test: test_disks(20, 21): faila= 20(D) failb= 21(D) OK [10289.583311] raid6test: test_disks(20, 22): faila= 20(D) failb= 22(D) OK [10289.583827] raid6test: test_disks(20, 23): faila= 20(D) failb= 
23(D) OK [10289.584363] raid6test: test_disks(20, 24): faila= 20(D) failb= 24(D) OK [10289.584866] raid6test: test_disks(20, 25): faila= 20(D) failb= 25(D) OK [10289.585434] raid6test: test_disks(20, 26): faila= 20(D) failb= 26(D) OK [10289.585936] raid6test: test_disks(20, 27): faila= 20(D) failb= 27(D) OK [10289.586489] raid6test: test_disks(20, 28): faila= 20(D) failb= 28(D) OK [10289.587027] raid6test: test_disks(20, 29): faila= 20(D) failb= 29(D) OK [10289.587559] raid6test: test_disks(20, 30): faila= 20(D) failb= 30(D) OK [10289.588101] raid6test: test_disks(20, 31): faila= 20(D) failb= 31(D) OK [10289.588634] raid6test: test_disks(20, 32): faila= 20(D) failb= 32(D) OK [10289.616215][10290.089438] raid6test: test_disks(20, 36): faila= 20(D) failb= 36(D) OK [10290.090019] raid6test: test_disks(20, 37): faila= 20(D) failb= 37(D) OK [10290.090541] raid6test: test_disks(20, 38): faila= 20(D) failb= 38(D) OK [10290.091090] raid6test: test_disks(20, 39): faila= 20(D) failb= 39(D) OK [10290.091583] raid6test: test_disks(20, 40): faila= 20(D) failb= 40(D) OK [10290.092125] raid6test: test_disks(20, 41): faila= 20(D) failb= 41(D) OK [10290.092656] raid6test: test_disks(20, 42): faila= 20(D) failb= 42(D) OK [10290.093213] raid6test: test_disks(20, 43): faila= 20(D) failb= 43(D) OK [10290.093733] raid6test: test_disks(20, 44): faila= 20(D) failb= 44(D) OK [10290.094265] raid6test: test_disks(20, 45): faila= 20(D) failb= 45(D) OK [10290.094769] raid6test: test_disks(20, 46): faila= 20(D) failb= 46(D) OK [10290.095310] raid6test: test_disks(20, 47): faila= 20(D) failb= 47(D) OK [10290.095817] raid6test: test_disks(20, 48): faila= 20(D) failb= 48(D) OK [10290.096390] raid6test: test_disks(20, 49): faila= 20(D) failb= 49(D) OK [10290.096897] raid6test: test_disks(20, 50): faila= 20(D) failb= 50(D) OK [10290.097455] raid6test: test_disks(20, 51): faila= 20(D) failb= 51(D) OK [10290.098013] raid6test: test_disks(20, 52): faila= 20(D) failb= 52(D) OK [10290.098533] raid6test: test_aid6test: test_disks(20, 56): faila= 20(D) failb= 56(D) OK [10290.599378] raid6test: test_disks(20, 57): faila= 20(D) failb= 57(D) OK [10290.599895] raid6test: test_disks(20, 58): faila= 20(D) failb= 58(D) OK [10290.600462] raid6test: test_disks(20, 59): faila= 20(D) failb= 59(D) OK [10290.600934] raid6test: test_disks(20, 60): faila= 20(D) failb= 60(D) OK [10290.601495] raid6test: test_disks(20, 61): faila= 20(D) failb= 61(D) OK [10290.602033] raid6test: test_disks(20, 62): faila= 20(D) failb= 62(P) OK [10290.602581] raid6test: test_disks(20, 63): faila= 20(D) failb= 63(Q) OK [10290.603122] raid6test: test_disks(21, 22): faila= 21(D) failb= 22(D) OK [10290.603666] raid6test: test_disks(21, 23): faila= 21(D) failb= 23(D) OK [10290.604238] raid6test: test_disks(21, 24): faila= 21(D) failb= 24(D) OK [10290.604807] raid6test: test_disks(21, 25): faila= 21(D) failb= 25(D) OK [10290.605383] raid6test: test_disks(21, 26): faila= 21(D) failb= 26(D) OK [10290.605892] raid6test: test_disks(21, 27): faila= 21(D) failb= 27(D) OK [10290.606457] raid6test: test_disks(21, 28): faila= 21(D) failb= 28(D) OK [10290.607006] raid6test: test_disks(21, 29): faila= 21(D) failb= 29(D) OK [10290.607491] raid6test: test_disks(21, 30): faila= 21(D) failb= 30(D) OK [10290.608038] raid6test: test_disks(21, 31): faila= 21(D) failb= 31(D) OK [10290.608570] raid6test: test_disks(21, 32): faila= 21(D) failb= 32(D) OK [10290.609115] raid6test: test_disks(21, 33): faila= 21(D) failb= 33(D) OK [10290.609648] raid6test: test_disks(21, 34): faila= 21(D) failb= 
34(D) OK [10290.610193] raid6test: teaid6test: test_disks(21, 38): faila= 21(D) failb= 38(D) OK [10291.110995] raid6test: test_disks(21, 39): faila= 21(D) failb= 39(D) OK [10291.111498] raid6test: test_disks(21, 40): faila= 21(D) failb= 40(D) OK [10291.112044] raid6test: test_disks(21, 41): faila= 21(D) failb= 41(D) OK [10291.112574] raid6test: test_disks(21, 42): faila= 21(D) failb= 42(D) OK [10291.113146] raid6test: test_disks(21, 43): faila= 21(D) failb= 43(D) OK [10291.113683] raid6test: test_disks(21, 44): faila= 21(D) failb= 44(D) OK [10291.114228] raid6test: test_disks(21, 45): faila= 21(D) failb= 45(D) OK [10291.114736] raid6test: test_disks(21, 46): faila= 21(D) failb= 46(D) OK [10291.115276] raid6test: test_disks(21, 47): faila= 21(D) failb= 47(D) OK [10291.115787] raid6test: test_disks(21, 48): faila= 21(D) failb= 48(D) OK [10291.116334] raid6test: test_disks(21, 49): faila= 21(D) failb= 49(D) OK [10291.116840] raid6test: test_disks(21, 50): faila= 21(D) failb= 50(D) OK [10291.117404] raid6test: test_disks(21, 51): faila= 21(D) failb= 51(D) OK [10291.117905] raid6test: test_disks(21, 52): faila= 21(D) failb= 52(D) OK [10291.118465] raid6test: test_disks(21, 53): faila= 21(D) failb= 53(D) OK [10291.119015] raaid6test: test_disks(21, 57): faila= 21(D) failb= 57(D) OK [10291.619850] raid6test: test_disks(21, 58): faila= 21(D) failb= 58(D) OK [10291.620420] raid6test: test_disks(21, 59): faila= 21(D) failb= 59(D) OK [10291.620924] raid6test: test_disks(21, 60): faila= 21(D) failb= 60(D) OK [10291.621487] raid6test: test_disks(21, 61): faila= 21(D) failb= 61(D) OK [10291.622030] raid6test: test_disks(21, 62): faila= 21(D) failb= 62(P) OK [10291.622554] raid6test: test_disks(21, 63): faila= 21(D) failb= 63(Q) OK [10291.623086] raid6test: test_disks(22, 23): faila= 22(D) failb= 23(D) OK [10291.623625] raid6test: test_disks(22, 24): faila= 22(D) failb= 24(D) OK [10291.624165] raid6test: test_disks(22, 25): faila= 22(D) failb= 25(D) OK [10291.624696] raid6test: test_disks(22, 26): faila= 22(D) failb= 26(D) OK [10291.625240] raid6test: test_disks(22, 27): faila= 22(D) failb= 27(D) OK [10291.625750] raid6test: test_disks(22, 28): faila= 22(D) failb= 28(D) OK [10291.626296] raid6test: test_disks(22, 29): faila= 22(D) failb= 29(D) OK [10291.626807] raid6test: test_disks(22, 30): faila= 22(D) failb= 30(D) OK [10291.627343] raid6test: test_disks(22, 31): faila= 22(D) failb= 31(D) OK [10291.627851] raid6test: test_disks(22, 32): isks(22, 35): faila= 22(D) failb= 35(D) OK [10292.128652] raid6test: test_disks(22, 36): faila= 22(D) failb= 36(D) OK [10292.129206] raid6test: test_disks(22, 37): faila= 22(D) failb= 37(D) OK [10292.129678] raid6test: test_disks(22, 38): faila= 22(D) failb= 38(D) OK [10292.130215] raid6test: test_disks(22, 39): faila= 22(D) failb= 39(D) OK [10292.130723] raid6test: test_disks(22, 40): faila= 22(D) failb= 40(D) OK [10292.131255] raid6test: test_disks(22, 41): faila= 22(D) failb= 41(D) OK [10292.131749] raid6test: test_disks(22, 42): faila= 22(D) failb= 42(D) OK [10292.132300] raid6test: test_disks(22, 43): faila= 22(D) failb= 43(D) OK [10292.132816] raid6test: test_disks(22, 44): faila= 22(D) failb= 44(D) OK [10292.133384] raid6test: test_disks(22, 45): faila= 22(D) failb= 45(D) OK [10292.133891] raid6test: test_disks(22, 46): faila= 22(D) failb= 46(D) OK [10292.134462] raid6test: test_disks(22, 47): faila= 22(D) failb= 47(D) OK [10292.134966] raid6test: test_disks(22, 48): faila= 22(D) failb= 48(D) OK [10292.135534] raid6test: test_disks(22, 49): faila= 22(D) failb= 
49(D) OK [10292.136075] raid6test: test_disks(22, 50): faila= 22(D) failb= 50(D) OK [10292.136611] raid6test: test_disks(22, 51): faila= 22(D) failb= 51(D) OK [10292.137157] raid6test: test_diskisks(22, 55): faila= 22(D) failb= 55(D) OK [10292.638031] raid6test: test_disks(22, 56): faila= 22(D) failb= 56(D) OK [10292.638538] raid6test: test_disks(22, 57): faila= 22(D) failb= 57(D) OK [10292.639051] raid6test: test_disks(22, 58): faila= 22(D) failb= 58(D) OK [10292.639553] raid6test: test_disks(22, 59): faila= 22(D) failb= 59(D) OK [10292.640062] raid6test: test_disks(22, 60): faila= 22(D) failb= 60(D) OK [10292.640567] raid6test: test_disks(22, 61): faila= 22(D) failb= 61(D) OK [10292.641077] raid6test: test_disks(22, 62): faila= 22(D) failb= 62(P) OK [10292.641592] raid6test: test_disks(22, 63): faila= 22(D) failb= 63(Q) OK [10292.642098] raid6test: test_disks(23, 24): faila= 23(D) failb= 24(D) OK [10292.642602] raid6test: test_disks(23, 25): faila= 23(D) failb= 25(D) OK [10292.643154] raid6test: test_disks(23, 26): faila= 23(D) failb= 26(D) OK [10292.643662] raid6test: test_disks(23, 27): faila= 23(D) failb= 27(D) OK [10292.644529] raid6test: test_disks(23, 28): faila= 23(D) failb= 28(D) OK [10292.645052] raid6test: test_disks(23, 29): faila= 23(D) failb= 29(D) OK [10292.645553] raid6test: test_aid6test: test_disks(23, 33): faila= 23(D) failb= 33(D) OK [10293.146331] raid6test: test_disks(23, 34): faila= 23(D) failb= 34(D) OK [10293.146840] raid6test: test_disks(23, 35): faila= 23(D) failb= 35(D) OK [10293.147406] raid6test: test_disks(23, 36): faila= 23(D) failb= 36(D) OK [10293.147911] raid6test: test_disks(23, 37): faila= 23(D) failb= 37(D) OK [10293.148478] raid6test: test_disks(23, 38): faila= 23(D) failb= 38(D) OK [10293.148954] raid6test: test_disks(23, 39): faila= 23(D) failb= 39(D) OK [10293.149520] raid6test: test_disks(23, 40): faila= 23(D) failb= 40(D) OK [10293.150028] raid6test: test_disks(23, 41): faila= 23(D) failb= 41(D) OK [10293.150556] raid6test: test_disks(23, 42): faila= 23(D) failb= 42(D) OK [10293.151094] raid6test: test_disks(23, 43): faila= 23(D) failb= 43(D) OK [10293.151625] raid6test: test_disks(23, 44): faila= 23(D) failb= 44(D) OK [10293.152174] raid6test: test_disks(23, 45): faila= 23(D) failb= 45(D) OK [10293.152704] raid6test: test_disks(23, 46): faila= 23(D) failb= 46(D) OK [10293.153270] raid6test: test_disks(23, 47): faila= 23(D) failb= 47(D) OK [10293.153784] raid6test: test_disks(23, 48): faila= 23(D) failb= 48(D) OK [10293.154326] raid6test: test_disks(23, 49): faila= 23(D) failb= 49(D) OK [10293.154834] raid6test: test_disks(23, 50): faila= 23(D) failb= 53(D) OK [10293.655781] raid6test: test_disks(23, 54): faila= 23(D) failb= 54(D) OK [10293.656328] raid6test: test_disks(23, 55): faila= 23(D) failb= 55(D) OK [10293.656858] raid6test: test_disks(23, 56): faila= 23(D) failb= 56(D) OK [10293.657436] raid6test: test_disks(23, 57): faila= 23(D) failb= 57(D) OK [10293.657947] raid6test: test_disks(23, 58): faila= 23(D) failb= 58(D) OK [10293.658513] raid6test: test_disks(23, 59): faila= 23(D) failb= 59(D) OK [10293.659047] raid6test: test_disks(23, 60): faila= 23(D) failb= 60(D) OK [10293.659581] raid6test: test_disks(23, 61): faila= 23(D) failb= 61(D) OK [10293.660116] raid6test: test_disks(23, 62): faila= 23(D) failb= 62(P) OK [10293.660657] raid6test: test_disks(23, 63): faila= 23(D) failb= 63(Q) OK [10293.661196] raid6test: test_disks(24, 25): faila= 24(D) failb= 25(D) OK [10293.661715] raid6test: test_disks(24, 26): faila= 24(D) failb= 26(D) OK 
[10293.662287] raid6test: test_disks(24, 27): faila= 24(D) failb= 27(D) OK [10293.662795] raid6test: test_disks(24, 28): faila= 24(D) failb= 28(D) OK [10293.663335] raid6test: test_disks(24, 29): faila= 24(D) failb= 29(D) OK [10293.663874] raid6test: test_disks(24, 30): faiila= 24(D) failb= 33(D) OK [10294.164890] raid6test: test_disks(24, 34): faila= 24(D) failb= 34(D) OK [10294.165505] raid6test: test_disks(24, 35): faila= 24(D) failb= 35(D) OK [10294.166047] raid6test: test_disks(24, 36): faila= 24(D) failb= 36(D) OK [10294.166585] raid6test: test_disks(24, 37): faila= 24(D) failb= 37(D) OK [10294.167169] raid6test: test_disks(24, 38): faila= 24(D) failb= 38(D) OK [10294.167732] raid6test: test_disks(24, 39): faila= 24(D) failb= 39(D) OK [10294.168259] raid6test: test_disks(24, 40): faila= 24(D) failb= 40(D) OK [10294.168801] raid6test: test_disks(24, 41): faila= 24(D) failb= 41(D) OK [10294.169338] raid6test: test_disks(24, 42): faila= 24(D) failb= 42(D) OK [10294.169884] raid6test: test_disks(24, 43): faila= 24(D) failb= 43(D) OK [10294.170443] raid6test: test_disks(24, 44): faila= 24(D) failb= 44(D) OK [10294.170935] raid6test: test_disks(24, 45): faila= 24(D) failb= 45(D) OK [10294.171450] raid6test: test_disks(24, 46): faila= 24(D) failb= 46(D) OK [10294.171942] raid6test: test_disks(24, 47): faila= 24(D) failb= 47(D) OK [10294.172497] raid6test: test_disks(24, 48): faila= 24(D) failb= 48(D) OK [10294.173055] raid6test: test_disks(24, 49): faila= 24(D) failb= 49(D) OK [10294.173556] raid6test: test_disks(24, 50): faila= 24(D) failb= 50(D) OK [1[10294.674418] raid6test: test_disks(24, 54): faila= 24(D) failb= 54(D) OK [10294.674955] raid6test: test_disks(24, 55): faila= 24(D) failb= 55(D) OK [10294.675552] raid6test: test_disks(24, 56): faila= 24(D) failb= 56(D) OK [10294.676081] raid6test: test_disks(24, 57): faila= 24(D) failb= 57(D) OK [10294.676572] raid6test: test_disks(24, 58): faila= 24(D) failb= 58(D) OK [10294.677149] raid6test: test_disks(24, 59): faila= 24(D) failb= 59(D) OK [10294.677696] raid6test: test_disks(24, 60): faila= 24(D) failb= 60(D) OK [10294.678255] raid6test: test_disks(24, 61): faila= 24(D) failb= 61(D) OK [10294.678745] raid6test: test_disks(24, 62): faila= 24(D) failb= 62(P) OK [10294.679311] raid6test: test_disks(24, 63): faila= 24(D) failb= 63(Q) OK [10294.679844] raid6test: test_disks(25, 26): faila= 25(D) failb= 26(D) OK [10294.680429] raid6test: test_disks(25, 27): faila= 25(D) failb= 27(D) OK [10294.680993] raid6test: test_disks(25, 28): faila= 25(D) failb= 28(D) OK [10294.681592] raid6test: test_disks(25, 29): faila= 25(D) failb= 29(D) OK [10294.682125] raid6test: test_disks(25, 30): faila= 25(D) failb= 30(D) OK [10294.682689] raid6test: test_disks(25, 31): faila= 25(D) failb= 31(D) OK [b= 34(D) OK [10295.183654] raid6test: test_disks(25, 35): faila= 25(D) failb= 35(D) OK [10295.184238] raid6test: test_disks(25, 36): faila= 25(D) failb= 36(D) OK [10295.184796] raid6test: test_disks(25, 37): faila= 25(D) failb= 37(D) OK [10295.185338] raid6test: test_disks(25, 38): faila= 25(D) failb= 38(D) OK [10295.185874] raid6test: test_disks(25, 39): faila= 25(D) failb= 39(D) OK [10295.186473] raid6test: test_disks(25, 40): faila= 25(D) failb= 40(D) OK [10295.187059] raid6test: test_disks(25, 41): faila= 25(D) failb= 41(D) OK [10295.187614] raid6test: test_disks(25, 42): faila= 25(D) failb= 42(D) OK [10295.188177] raid6test: test_disks(25, 43): faila= 25(D) failb= 43(D) OK [10295.188729] raid6test: test_disks(25, 44): faila= 25(D) failb= 44(D) OK 
[10295.189288] raid6test: test_disks(25, 45): faila= 25(D) failb= 45(D) OK [10295.189788] raid6test: test_disks(25, 46): faila= 25(D) failb= 46(D) OK [10295.190319] raid6test: test_disks(25, 47): faila= 25(D) failb= 47(D) OK [10295.190816] raid6test: test_disks(25, 48): faila= 25(D) failb= 48(D) OK [10295.191351] raid6test: test_disks(25, 49): faila= 25(D) failb= 49(D) OK [10295.191880] raid6test: test_disks(25, 50): faila= 25(D) failb=b= 53(D) OK [10295.692736] raid6test: test_disks(25, 54): faila= 25(D) failb= 54(D) OK [10295.693358] raid6test: test_disks(25, 55): faila= 25(D) failb= 55(D) OK [10295.693892] raid6test: test_disks(25, 56): faila= 25(D) failb= 56(D) OK [10295.694487] raid6test: test_disks(25, 57): faila= 25(D) failb= 57(D) OK [10295.695059] raid6test: test_disks(25, 58): faila= 25(D) failb= 58(D) OK [10295.695609] raid6test: test_disks(25, 59): faila= 25(D) failb= 59(D) OK [10295.696173] raid6test: test_disks(25, 60): faila= 25(D) failb= 60(D) OK [10295.696719] raid6test: test_disks(25, 61): faila= 25(D) failb= 61(D) OK [10295.697277] raid6test: test_disks(25, 62): faila= 25(D) failb= 62(P) OK [10295.697814] raid6test: test_disks(25, 63): faila= 25(D) failb= 63(Q) OK [10295.698453] raid6test: test_disks(26, 27): faila= 26(D) failb= 27(D) OK [10295.698984] raid6test: test_disks(26, 28): faila= 26(D) failb= 28(D) OK [10295.699568] raid6test: test_disks(26, 29): faila= 26(D) failb= 29(D) OK [10295.700140] raid6test: test_disks(26, 30): faila= 26(D) failb= 30(D) OK [10295.700699] raid6test: test_disks(26, 31): faila= 26(D) failb= 31(D) OK [10295.701256] raid6test: test_disks(26, 32): faila= 26(D) failb= 32(D) OK [10295.728858] [10296.202198] raid6test: test_disks(26, 36): faila= 26(D) failb= 36(D) OK [10296.202720] raid6test: test_disks(26, 37): faila= 26(D) failb= 37(D) OK [10296.203284] raid6test: test_disks(26, 38): faila= 26(D) failb= 38(D) OK [10296.203779] raid6test: test_disks(26, 39): faila= 26(D) failb= 39(D) OK [10296.204349] raid6test: test_disks(26, 40): faila= 26(D) failb= 40(D) OK [10296.204878] raid6test: test_disks(26, 41): faila= 26(D) failb= 41(D) OK [10296.205413] raid6test: test_disks(26, 42): faila= 26(D) failb= 42(D) OK [10296.205905] raid6test: test_disks(26, 43): faila= 26(D) failb= 43(D) OK [10296.206452] raid6test: test_disks(26, 44): faila= 26(D) failb= 44(D) OK [10296.206939] raid6test: test_disks(26, 45): faila= 26(D) failb= 45(D) OK [10296.207481] raid6test: test_disks(26, 46): faila= 26(D) failb= 46(D) OK [10296.208059] raid6test: test_disks(26, 47): faila= 26(D) failb= 47(D) OK [10296.208584] raid6test: test_disks(26, 48): faila= 26(D) failb= 48(D) OK [10296.209114] raid6test: test_disks(26, 49): faila= 26(D) failb= 49(D) OK [10296.209666] raid6test: test_disks(26, 50): faila= 26(D) failb= 50(D) OK b= 53(D) OK [10296.710639] raid6test: test_disks(26, 54): faila= 26(D) failb= 54(D) OK [10296.711188] raid6test: test_disks(26, 55): faila= 26(D) failb= 55(D) OK [10296.711717] raid6test: test_disks(26, 56): faila= 26(D) failb= 56(D) OK [10296.712287] raid6test: test_disks(26, 57): faila= 26(D) failb= 57(D) OK [10296.712824] raid6test: test_disks(26, 58): faila= 26(D) failb= 58(D) OK [10296.713394] raid6test: test_disks(26, 59): faila= 26(D) failb= 59(D) OK [10296.713923] raid6test: test_disks(26, 60): faila= 26(D) failb= 60(D) OK [10296.714506] raid6test: test_disks(26, 61): faila= 26(D) failb= 61(D) OK [10296.715000] raid6test: test_disks(26, 62): faila= 26(D) failb= 62(P) OK [10296.715594] raid6test: test_disks(26, 63): faila= 26(D) failb= 63(Q) OK 
[10296.716164] raid6test: test_disks(27, 28): faila= 27(D) failb= 28(D) OK [10296.716724] raid6test: test_disks(27, 29): faila= 27(D) failb= 29(D) OK [10296.717249] raid6test: test_disks(27, 30): faila= 27(D) failb= 30(D) OK [10296.717768] raid6test: test_disks(27, 31): faila= 27(D) failb= 31(D) OK [10296.718335] raid6test: test_disks(27, 32): faila= 27(D) failb= 32(D) OK [10296.718861] raid6test: test_disks(27, 33): faila= 27(D) failb= 33(D) OK [10296.719378] raid6test: test_disks(27, 34): faila= 27(D) failb= b= 37(D) OK [10297.220219] raid6test: test_disks(27, 38): faila= 27(D) failb= 38(D) OK [10297.220736] raid6test: test_disks(27, 39): faila= 27(D) failb= 39(D) OK [10297.221277] raid6test: test_disks(27, 40): faila= 27(D) failb= 40(D) OK [10297.221779] raid6test: test_disks(27, 41): faila= 27(D) failb= 41(D) OK [10297.222353] raid6test: test_disks(27, 42): faila= 27(D) failb= 42(D) OK [10297.222850] raid6test: test_disks(27, 43): faila= 27(D) failb= 43(D) OK [10297.223382] raid6test: test_disks(27, 44): faila= 27(D) failb= 44(D) OK [10297.223917] raid6test: test_disks(27, 45): faila= 27(D) failb= 45(D) OK [10297.224442] raid6test: test_disks(27, 46): faila= 27(D) failb= 46(D) OK [10297.224934] raid6test: test_disks(27, 47): faila= 27(D) failb= 47(D) OK [10297.225474] raid6test: test_disks(27, 48): faila= 27(D) failb= 48(D) OK [10297.225963] raid6test: test_disks(27, 49): faila= 27(D) failb= 49(D) OK [10297.226507] raid6test: test_disks(27, 50): faila= 27(D) failb= 50(D) OK [10297.227016] raid6test: test_disks(27, 51): faila= 27(D) failb= 51(D) OK [10297.227542] raid6test: test_disks(27, 52): faila= 27(D) failb= 52(D) OK [10297.255166][10297.728461] raid6test: test_disks(27, 56): faila= 27(D) failb= 56(D) OK [10297.728989] raid6test: test_disks(27, 57): faila= 27(D) failb= 57(D) OK [10297.729593] raid6test: test_disks(27, 58): faila= 27(D) failb= 58(D) OK [10297.730165] raid6test: test_disks(27, 59): faila= 27(D) failb= 59(D) OK [10297.730718] raid6test: test_disks(27, 60): faila= 27(D) failb= 60(D) OK [10297.731276] raid6test: test_disks(27, 61): faila= 27(D) failb= 61(D) OK [10297.731839] raid6test: test_disks(27, 62): faila= 27(D) failb= 62(P) OK [10297.732401] raid6test: test_disks(27, 63): faila= 27(D) failb= 63(Q) OK [10297.732927] raid6test: test_disks(28, 29): faila= 28(D) failb= 29(D) OK [10297.733501] raid6test: test_disks(28, 30): faila= 28(D) failb= 30(D) OK [10297.733998] raid6test: test_disks(28, 31): faila= 28(D) failb= 31(D) OK [10297.734580] raid6test: test_disks(28, 32): faila= 28(D) failb= 32(D) OK [10297.735122] raid6test: test_disks(28, 33): faila= 28(D) failb= 33(D) OK [10297.735672] raid6test: test_disks(28, 34): faila= 28(D) failb= 34(D) OK [10297.736233] raid6test: test_disks(28, 35): faila= 28(D) failb= 35(D) OK [10297.736794] raid6test: test_disks(28, 36): faila= 28(D) failb= 36(D) OK [b= 39(D) OK [10298.237801] raid6test: test_disks(28, 40): faila= 28(D) failb= 40(D) OK [10298.238329] raid6test: test_disks(28, 41): faila= 28(D) failb= 41(D) OK [10298.238818] raid6test: test_disks(28, 42): faila= 28(D) failb= 42(D) OK [10298.239380] raid6test: test_disks(28, 43): faila= 28(D) failb= 43(D) OK [10298.239876] raid6test: test_disks(28, 44): faila= 28(D) failb= 44(D) OK [10298.240408] raid6test: test_disks(28, 45): faila= 28(D) failb= 45(D) OK [10298.240954] raid6test: test_disks(28, 46): faila= 28(D) failb= 46(D) OK [10298.241514] raid6test: test_disks(28, 47): faila= 28(D) failb= 47(D) OK [10298.242008] raid6test: test_disks(28, 48): faila= 28(D) failb= 48(D) OK 
[10298.242568] raid6test: test_disks(28, 49): faila= 28(D) failb= 49(D) OK [10298.243129] raid6test: test_disks(28, 50): faila= 28(D) failb= 50(D) OK [10298.243645] raid6test: test_disks(28, 51): faila= 28(D) failb= 51(D) OK [10298.244182] raid6test: test_disks(28, 52): faila= 28(D) failb= 52(D) OK [10298.244717] raid6test: test_disks(28, 53): faila= 28(D) failb= 53(D) OK [10298.245230] raid6test: test_disks(28, 54): faila= 28(D) failb= 54(D) OK [10298.245708] raid6test: test_disks(28, 55): faila= 28(D) failb= 55(D) OK [10298.246237] raid6test: test_disks(28, 59): faila= 28(D) failb= 59(D) OK [10298.747136] raid6test: test_disks(28, 60): faila= 28(D) failb= 60(D) OK [10298.747668] raid6test: test_disks(28, 61): faila= 28(D) failb= 61(D) OK [10298.748221] raid6test: test_disks(28, 62): faila= 28(D) failb= 62(P) OK [10298.748717] raid6test: test_disks(28, 63): faila= 28(D) failb= 63(Q) OK [10298.749271] raid6test: test_disks(29, 30): faila= 29(D) failb= 30(D) OK [10298.749826] raid6test: test_disks(29, 31): faila= 29(D) failb= 31(D) OK [10298.750386] raid6test: test_disks(29, 32): faila= 29(D) failb= 32(D) OK [10298.750917] raid6test: test_disks(29, 33): faila= 29(D) failb= 33(D) OK [10298.751494] raid6test: test_disks(29, 34): faila= 29(D) failb= 34(D) OK [10298.751996] raid6test: test_disks(29, 35): faila= 29(D) failb= 35(D) OK [10298.752587] raid6test: test_disks(29, 36): faila= 29(D) failb= 36(D) OK [10298.753185] raid6test: test_disks(29, 37): faila= 29(D) failb= 37(D) OK [10298.753748] raid6test: test_disks(29, 38): faila= 29(D) failb= 38(D) OK [10298.754295] raid6test: test_disks(29, 39): faila= 29(D) failb= 39(D) OK [10298.754907] raid6test: test_disks(29, 40): faila= 29(D) failb= 40(D) OK [10298.755466] raid6test: test_disks(29, 41): faila= 29(D) failb= 41(D) OK [10298.755999] raidaid6test: test_disks(29, 45): faila= 29(D) failb= 45(D) OK [10299.256858] raid6test: test_disks(29, 46): faila= 29(D) failb= 46(D) OK [10299.257423] raid6test: test_disks(29, 47): faila= 29(D) failb= 47(D) OK [10299.257962] raid6test: test_disks(29, 48): faila= 29(D) failb= 48(D) OK [10299.258520] raid6test: test_disks(29, 49): faila= 29(D) failb= 49(D) OK [10299.259012] raid6test: test_disks(29, 50): faila= 29(D) failb= 50(D) OK [10299.259557] raid6test: test_disks(29, 51): faila= 29(D) failb= 51(D) OK [10299.260148] raid6test: test_disks(29, 52): faila= 29(D) failb= 52(D) OK [10299.260732] raid6test: test_disks(29, 53): faila= 29(D) failb= 53(D) OK [10299.261306] raid6test: test_disks(29, 54): faila= 29(D) failb= 54(D) OK [10299.261725] raid6test: test_disks(29, 55): faila= 29(D) failb= 55(D) OK [10299.262248] raid6test: test_disks(29, 56): faila= 29(D) failb= 56(D) OK [10299.262772] raid6test: test_disks(29, 57): faila= 29(D) failb= 57(D) OK [10299.263334] raid6test: test_disks(29, 58): faila= 29(D) failb= 58(D) OK [10299.263828] raid6test: test_disks(29, 59): faila= 29(D) failb= 59(D) OK [10299.264378] raid6test: test_disks(29, 60): faila= 29(D) failb= 60(D) OK [10299.264894] raiaid6test: test_disks(30, 31): faila= 30(D) failb= 31(D) OK [10299.765791] raid6test: test_disks(30, 32): faila= 30(D) failb= 32(D) OK [10299.766343] raid6test: test_disks(30, 33): faila= 30(D) failb= 33(D) OK [10299.766820] raid6test: test_disks(30, 34): faila= 30(D) failb= 34(D) OK [10299.767364] raid6test: test_disks(30, 35): faila= 30(D) failb= 35(D) OK [10299.767872] raid6test: test_disks(30, 36): faila= 30(D) failb= 36(D) OK [10299.768417] raid6test: test_disks(30, 37): faila= 30(D) failb= 37(D) OK [10299.768892] 
raid6test: test_disks(30, 38): faila= 30(D) failb= 38(D) OK [10299.769774] raid6test: test_disks(30, 39): faila= 30(D) failb= 39(D) OK [10299.770285] raid6test: test_disks(30, 40): faila= 30(D) failb= 40(D) OK [10299.770807] raid6test: test_disks(30, 41): faila= 30(D) failb= 41(D) OK [10299.771351] raid6test: test_disks(30, 42): faila= 30(D) failb= 42(D) OK [10299.771829] raid6test: test_disks(30, 43): faila= 30(D) failb= 43(D) OK [10299.772372] raid6test: test_disks(30, 44): faila= 30(D) failb= 44(D) OK [10299.772879] raid6test: test_disks(30, 45): faila= 30(D) failb= 45(D) OK [10299.773429] raid6test: test_disks(30, 46isks(30, 49): faila= 30(D) failb= 49(D) OK [10300.274242] raid6test: test_disks(30, 50): faila= 30(D) failb= 50(D) OK [10300.274764] raid6test: test_disks(30, 51): faila= 30(D) failb= 51(D) OK [10300.275313] raid6test: test_disks(30, 52): faila= 30(D) failb= 52(D) OK [10300.275843] raid6test: test_disks(30, 53): faila= 30(D) failb= 53(D) OK [10300.276386] raid6test: test_disks(30, 54): faila= 30(D) failb= 54(D) OK [10300.276897] raid6test: test_disks(30, 55): faila= 30(D) failb= 55(D) OK [10300.277446] raid6test: test_disks(30, 56): faila= 30(D) failb= 56(D) OK [10300.277955] raid6test: test_disks(30, 57): faila= 30(D) failb= 57(D) OK [10300.278495] raid6test: test_disks(30, 58): faila= 30(D) failb= 58(D) OK [10300.279005] raid6test: test_disks(30, 59): faila= 30(D) failb= 59(D) OK [10300.279570] raid6test: test_disks(30, 60): faila= 30(D) failb= 60(D) OK [10300.280114] raid6test: test_disks(30, 61): faila= 30(D) failb= 61(D) OK [10300.280645] raid6test: test_disks(30, 62): faila= 30(D) failb= 62(P) OK [10300.281239] raid6test: test_disks(30, 63): faila= 30(D) failb= 63(Q) OK [10300.281802] raid6test: test_disks(31, 35): faila= 31(D) failb= 35(D) OK [10300.782685] raid6test: test_disks(31, 36): faila= 31(D) failb= 36(D) OK [10300.783225] raid6test: test_disks(31, 37): faila= 31(D) failb= 37(D) OK [10300.783768] raid6test: test_disks(31, 38): faila= 31(D) failb= 38(D) OK [10300.784310] raid6test: test_disks(31, 39): faila= 31(D) failb= 39(D) OK [10300.784843] raid6test: test_disks(31, 40): faila= 31(D) failb= 40(D) OK [10300.785385] raid6test: test_disks(31, 41): faila= 31(D) failb= 41(D) OK [10300.785861] raid6test: test_disks(31, 42): faila= 31(D) failb= 42(D) OK [10300.786397] raid6test: test_disks(31, 43): faila= 31(D) failb= 43(D) OK [10300.786908] raid6test: test_disks(31, 44): faila= 31(D) failb= 44(D) OK [10300.787446] raid6test: test_disks(31, 45): faila= 31(D) failb= 45(D) OK [10300.787952] raid6test: test_disks(31, 46): faila= 31(D) failb= 46(D) OK [10300.788501] raid6test: test_disks(31, 47): faila= 31(D) failb= 47(D) OK [10300.789013] raid6test: test_disks(31, 48): faila= 31(D) failb= 48(D) OK [10300.789577] raid6test: test_disks(31, 49): faila= 31(D) failb= 49(D) OK [10300.790125] raid6test: test_disks(31, 50): faila= 31(D) failb= 50(D) OK [10300.790654] raid6test: test_disks(31, 51): faila= 31(D) failb= 51(D) OK [10300.791205] raid6test: test_disks(31, 52): faila= 31(D) faiila= 31(D) failb= 55(D) OK [10301.291977] raid6test: test_disks(31, 56): faila= 31(D) failb= 56(D) OK [10301.292544] raid6test: test_disks(31, 57): faila= 31(D) failb= 57(D) OK [10301.293141] raid6test: test_disks(31, 58): faila= 31(D) failb= 58(D) OK [10301.293768] raid6test: test_disks(31, 59): faila= 31(D) failb= 59(D) OK [10301.294288] raid6test: test_disks(31, 60): faila= 31(D) failb= 60(D) OK [10301.294796] raid6test: test_disks(31, 61): faila= 31(D) failb= 61(D) OK [10301.295344] 
raid6test: test_disks(31, 62): faila= 31(D) failb= 62(P) OK [10301.295879] raid6test: test_disks(31, 63): faila= 31(D) failb= 63(Q) OK [10301.296423] raid6test: test_disks(32, 33): faila= 32(D) failb= 33(D) OK [10301.296897] raid6test: test_disks(32, 34): faila= 32(D) failb= 34(D) OK [10301.297437] raid6test: test_disks(32, 35): faila= 32(D) failb= 35(D) OK [10301.297948] raid6test: test_disks(32, 36): faila= 32(D) failb= 36(D) OK [10301.298494] raid6test: test_disks(32, 37): faila= 32(D) failb= 37(D) OK [10301.299004] raid6test: test_disks(32, 38): faila= 32(D) failb= 38(D) OK [10301.299573] raid6test: test_disks(32, 39): faila= 32(D) failb= 39(D) OK [10301.300116] raid6test: test_disks(32, 40): failila= 32(D) failb= 43(D) OK [10301.800954] raid6test: test_disks(32, 44): faila= 32(D) failb= 44(D) OK [10301.801491] raid6test: test_disks(32, 45): faila= 32(D) failb= 45(D) OK [10301.801994] raid6test: test_disks(32, 46): faila= 32(D) failb= 46(D) OK [10301.802526] raid6test: test_disks(32, 47): faila= 32(D) failb= 47(D) OK [10301.803141] raid6test: test_disks(32, 48): faila= 32(D) failb= 48(D) OK [10301.803706] raid6test: test_disks(32, 49): faila= 32(D) failb= 49(D) OK [10301.804239] raid6test: test_disks(32, 50): faila= 32(D) failb= 50(D) OK [10301.804800] raid6test: test_disks(32, 51): faila= 32(D) failb= 51(D) OK [10301.805384] raid6test: test_disks(32, 52): faila= 32(D) failb= 52(D) OK [10301.805882] raid6test: test_disks(32, 53): faila= 32(D) failb= 53(D) OK [10301.806435] raid6test: test_disks(32, 54): faila= 32(D) failb= 54(D) OK [10301.806960] raid6test: test_disks(32, 55): faila= 32(D) failb= 55(D) OK [10301.807495] raid6test: test_disks(32, 56): faila= 32(D) failb= 56(D) OK [10301.808013] raid6test: test_disks(32, 57): faila= 32(D) failb= 57(D) OK [10301.808580] raid6test: test_disks(32, 58): faila= 32(D) failb= 58(D) OK [10301.809174] raid6test: test_disks(32, 59): failila= 32(D) failb= 62(P) OK [10302.310006] raid6test: test_disks(32, 63): faila= 32(D) failb= 63(Q) OK [10302.310581] raid6test: test_disks(33, 34): faila= 33(D) failb= 34(D) OK [10302.311129] raid6test: test_disks(33, 35): faila= 33(D) failb= 35(D) OK [10302.311624] raid6test: test_disks(33, 36): faila= 33(D) failb= 36(D) OK [10302.312175] raid6test: test_disks(33, 37): faila= 33(D) failb= 37(D) OK [10302.312635] raid6test: test_disks(33, 38): faila= 33(D) failb= 38(D) OK [10302.313207] raid6test: test_disks(33, 39): faila= 33(D) failb= 39(D) OK [10302.313748] raid6test: test_disks(33, 40): faila= 33(D) failb= 40(D) OK [10302.314300] raid6test: test_disks(33, 41): faila= 33(D) failb= 41(D) OK [10302.314792] raid6test: test_disks(33, 42): faila= 33(D) failb= 42(D) OK [10302.315345] raid6test: test_disks(33, 43): faila= 33(D) failb= 43(D) OK [10302.315886] raid6test: test_disks(33, 44): faila= 33(D) failb= 44(D) OK [10302.316431] raid6test: test_disks(33, 45): faila= 33(D) failb= 45(D) OK [10302.316939] raid6test: test_disks(33, 46): faila= 33(D) failb= 46(D) OK [10302.317483] raid6test: test_disks(33, 47): faila= 33(D) failb= 47(D) OK [10302.317995] raid6test: test_disks(33, 48): faila= 33(D) failb= 48(D) OK b= 51(D) OK [10302.818789] raid6test: test_disks(33, 52): faila= 33(D) failb= 52(D) OK [10302.819351] raid6test: test_disks(33, 53): faila= 33(D) failb= 53(D) OK [10302.819885] raid6test: test_disks(33, 54): faila= 33(D) failb= 54(D) OK [10302.820430] raid6test: test_disks(33, 55): faila= 33(D) failb= 55(D) OK [10302.820900] raid6test: test_disks(33, 56): faila= 33(D) failb= 56(D) OK [10302.821443] raid6test: 
test_disks(33, 57): faila= 33(D) failb= 57(D) OK [10302.821952] raid6test: test_disks(33, 58): faila= 33(D) failb= 58(D) OK [10302.822501] raid6test: test_disks(33, 59): faila= 33(D) failb= 59(D) OK [10302.823017] raid6test: test_disks(33, 60): faila= 33(D) failb= 60(D) OK [10302.823580] raid6test: test_disks(33, 61): faila= 33(D) failb= 61(D) OK [10302.824130] raid6test: test_disks(33, 62): faila= 33(D) failb= 62(P) OK [10302.824672] raid6test: test_disks(33, 63): faila= 33(D) failb= 63(Q) OK [10302.825224] raid6test: test_disks(34, 35): faila= 34(D) failb= 35(D) OK [10302.825737] raid6test: test_disks(34, 36): faila= 34(D) faila= 34(D) failb= 39(D) OK [10303.326610] raid6test: test_disks(34, 40): faila= 34(D) failb= 40(D) OK [10303.327164] raid6test: test_disks(34, 41): faila= 34(D) failb= 41(D) OK [10303.327662] raid6test: test_disks(34, 42): faila= 34(D) failb= 42(D) OK [10303.328208] raid6test: test_disks(34, 43): faila= 34(D) failb= 43(D) OK [10303.328738] raid6test: test_disks(34, 44): faila= 34(D) failb= 44(D) OK [10303.329286] raid6test: test_disks(34, 45): faila= 34(D) failb= 45(D) OK [10303.329818] raid6test: test_disks(34, 46): faila= 34(D) failb= 46(D) OK [10303.330360] raid6test: test_disks(34, 47): faila= 34(D) failb= 47(D) OK [10303.330893] raid6test: test_disks(34, 48): faila= 34(D) failb= 48(D) OK [10303.331443] raid6test: test_disks(34, 49): faila= 34(D) failb= 49(D) OK [10303.331971] raid6test: test_disks(34, 50): faila= 34(D) failb= 50(D) OK [10303.332527] raid6test: test_disks(34, 51): faila= 34(D) failb= 51(D) OK [10303.333038] raid6test: test_disks(34, 52): faila= 34(D) failb= 52(D) OK [10303.333611] raid6test: test_disks(34, 53): faila= 34(D) failb= 53(D) OK [10303.334155] raid6test: test_disks(34, 54): faila= 34(D) failb= 54(D) OK [10303.334646] raid6test: test_disks(34, 55): faila= 34(D) failb= 55(D) OK b= 58(D) OK [10303.835453] raid6test: test_disks(34, 59): faila= 34(D) failb= 59(D) OK [10303.835973] raid6test: test_disks(34, 60): faila= 34(D) failb= 60(D) OK [10303.836528] raid6test: test_disks(34, 61): faila= 34(D) failb= 61(D) OK [10303.837034] raid6test: test_disks(34, 62): faila= 34(D) failb= 62(P) OK [10303.837618] raid6test: test_disks(34, 63): faila= 34(D) failb= 63(Q) OK [10303.838177] raid6test: test_disks(35, 36): faila= 35(D) failb= 36(D) OK [10303.838708] raid6test: test_disks(35, 37): faila= 35(D) failb= 37(D) OK [10303.839255] raid6test: test_disks(35, 38): faila= 35(D) failb= 38(D) OK [10303.839734] raid6test: test_disks(35, 39): faila= 35(D) failb= 39(D) OK [10303.840273] raid6test: test_disks(35, 40): faila= 35(D) failb= 40(D) OK [10303.840768] raid6test: test_disks(35, 41): faila= 35(D) failb= 41(D) OK [10303.841311] raid6test: test_disks(35, 42): faila= 35(D) failb= 42(D) OK [10303.841840] raid6test: test_disks(35, 43): faila= 35(D) failb= 43(D) OK [10303.842388] raid6test: test_disks(35, 44): faila= 35(D) failb= 44(D) OK [10303.842915] raid6test: test_disks(35, 45): faila= 35(D) failb= 45(D) OK [10303.843474] raid6test: test_disks(35, 46): faila= 35(D) failb=b= 49(D) OK [10304.344306] raid6test: test_disks(35, 50): faila= 35(D) failb= 50(D) OK [10304.344854] raid6test: test_disks(35, 51): faila= 35(D) failb= 51(D) OK [10304.345422] raid6test: test_disks(35, 52): faila= 35(D) failb= 52(D) OK [10304.345940] raid6test: test_disks(35, 53): faila= 35(D) failb= 53(D) OK [10304.346492] raid6test: test_disks(35, 54): faila= 35(D) failb= 54(D) OK [10304.346973] raid6test: test_disks(35, 55): faila= 35(D) failb= 55(D) OK [10304.347524] raid6test: 
test_disks(35, 56): faila= 35(D) failb= 56(D) OK [10304.348033] raid6test: test_disks(35, 57): faila= 35(D) failb= 57(D) OK [10304.348604] raid6test: test_disks(35, 58): faila= 35(D) failb= 58(D) OK [10304.349149] raid6test: test_disks(35, 59): faila= 35(D) failb= 59(D) OK [10304.349669] raid6test: test_disks(35, 60): faila= 35(D) failb= 60(D) OK [10304.350219] raid6test: test_disks(35, 61): faila= 35(D) failb= 61(D) OK [10304.350753] raid6test: test_disks(35, 62): faila= 35(D) failb= 62(P) OK [10304.351319] raid6test: test_disks(35, 63): faila= 35(D) failb= 63(Q) OK [10304.351850] raid6test: test_disks(36, 37): faila= 36(D) failb= 40(D) OK [10304.852645] raid6test: test_disks(36, 41): faila= 36(D) failb= 41(D) OK [10304.853236] raid6test: test_disks(36, 42): faila= 36(D) failb= 42(D) OK [10304.853765] raid6test: test_disks(36, 43): faila= 36(D) failb= 43(D) OK [10304.854307] raid6test: test_disks(36, 44): faila= 36(D) failb= 44(D) OK [10304.854845] raid6test: test_disks(36, 45): faila= 36(D) failb= 45(D) OK [10304.855388] raid6test: test_disks(36, 46): faila= 36(D) failb= 46(D) OK [10304.855922] raid6test: test_disks(36, 47): faila= 36(D) failb= 47(D) OK [10304.856463] raid6test: test_disks(36, 48): faila= 36(D) failb= 48(D) OK [10304.856972] raid6test: test_disks(36, 49): faila= 36(D) failb= 49(D) OK [10304.857518] raid6test: test_disks(36, 50): faila= 36(D) failb= 50(D) OK [10304.857995] raid6test: test_disks(36, 51): faila= 36(D) failb= 51(D) OK [10304.858544] raid6test: test_disks(36, 52): faila= 36(D) failb= 52(D) OK [10304.859054] raid6test: test_disks(36, 53): faila= 36(D) failb= 53(D) OK [10304.859621] raid6test: test_disks(36, 54): faila= 36(D) failb= 54(D) OK [10304.860164] raid6test: test_disks(36, 55): faila= 36(D) failb= 55(D) OK [10304.887712[10305.360970] raid6test: test_disks(36, 59): faila= 36(D) failb= 59(D) OK [10305.361504] raid6test: test_disks(36, 60): faila= 36(D) failb= 60(D) OK [10305.361984] raid6test: test_disks(36, 61): faila= 36(D) failb= 61(D) OK [10305.362534] raid6test: test_disks(36, 62): faila= 36(D) failb= 62(P) OK [10305.363048] raid6test: test_disks(36, 63): faila= 36(D) failb= 63(Q) OK [10305.363561] raid6test: test_disks(37, 38): faila= 37(D) failb= 38(D) OK [10305.364069] raid6test: test_disks(37, 39): faila= 37(D) failb= 39(D) OK [10305.364643] raid6test: test_disks(37, 40): faila= 37(D) failb= 40(D) OK [10305.365186] raid6test: test_disks(37, 41): faila= 37(D) failb= 41(D) OK [10305.365686] raid6test: test_disks(37, 42): faila= 37(D) failb= 42(D) OK [10305.366230] raid6test: test_disks(37, 43): faila= 37(D) failb= 43(D) OK [10305.366732] raid6test: test_disks(37, 44): faila= 37(D) failb= 44(D) OK [10305.367283] raid6test: test_disks(37, 45): faila= 37(D) failb= 45(D) OK [10305.367771] raid6test: test_disks(37, 46): faila= 37(D) failb= 46(D) OK [10305.368318] raid6test: test_disks(37, 47): faila= 37(D) failb= 47(D) OK [10305.368852] raid6test: test_disks(37, 48): faila= 37(D) failb= 48(D) OK [1[10305.869642] raid6test: test_disks(37, 52): faila= 37(D) failb= 52(D) OK [10305.870205] raid6test: test_disks(37, 53): faila= 37(D) failb= 53(D) OK [10305.870694] raid6test: test_disks(37, 54): faila= 37(D) failb= 54(D) OK [10305.871244] raid6test: test_disks(37, 55): faila= 37(D) failb= 55(D) OK [10305.871764] raid6test: test_disks(37, 56): faila= 37(D) failb= 56(D) OK [10305.872314] raid6test: test_disks(37, 57): faila= 37(D) failb= 57(D) OK [10305.872843] raid6test: test_disks(37, 58): faila= 37(D) failb= 58(D) OK [10305.873414] raid6test: test_disks(37, 
59): faila= 37(D) failb= 59(D) OK [10305.873945] raid6test: test_disks(37, 60): faila= 37(D) failb= 60(D) OK [10305.874491] raid6test: test_disks(37, 61): faila= 37(D) failb= 61(D) OK [10305.874998] raid6test: test_disks(37, 62): faila= 37(D) failb= 62(P) OK [10305.875556] raid6test: test_disks(37, 63): faila= 37(D) failb= 63(Q) OK [10305.876067] raid6test: test_disks(38, 39): faila= 38(D) failb= 39(D) OK [10305.876770] raid6test: test_disks(38, 40): faila= 38(D) failb= 40(D) OK [10305.877326] raid6test: test_disks(38, 41): faila= 38(D) failb= 41(D) OK [10305.877865] raid6test: taid6test: test_disks(38, 45): faila= 38(D) failb= 45(D) OK [10306.378672] raid6test: test_disks(38, 46): faila= 38(D) failb= 46(D) OK [10306.379193] raid6test: test_disks(38, 47): faila= 38(D) failb= 47(D) OK [10306.379724] raid6test: test_disks(38, 48): faila= 38(D) failb= 48(D) OK [10306.380271] raid6test: test_disks(38, 49): faila= 38(D) failb= 49(D) OK [10306.380771] raid6test: test_disks(38, 50): faila= 38(D) failb= 50(D) OK [10306.381319] raid6test: test_disks(38, 51): faila= 38(D) failb= 51(D) OK [10306.381854] raid6test: test_disks(38, 52): faila= 38(D) failb= 52(D) OK [10306.382395] raid6test: test_disks(38, 53): faila= 38(D) failb= 53(D) OK [10306.382931] raid6test: test_disks(38, 54): faila= 38(D) failb= 54(D) OK [10306.383503] raid6test: test_disks(38, 55): faila= 38(D) failb= 55(D) OK [10306.384021] raid6test: test_disks(38, 56): faila= 38(D) failb= 56(D) OK [10306.384569] raid6test: test_disks(38, 57): faila= 38(D) failb= 57(D) OK [10306.385076] raid6test: test_disks(38, 58): faila= 38(D) failb= 58(D) OK [10306.385732] raid6test: test_disks(38, 59): faila= 38(D) failb= 59(D) OK [10306.386284] raid6test: test_disks(38, 60): faila= 38(D) failb= 60(D) OK [10306.386807] raid6test: test_disks(38, 61): faila= 38(D) failb= 61(D) OK [10306.387351] raid6aid6test: test_disks(39, 41): faila= 39(D) failb= 41(D) OK [10306.888098] raid6test: test_disks(39, 42): faila= 39(D) failb= 42(D) OK [10306.888642] raid6test: test_disks(39, 43): faila= 39(D) failb= 43(D) OK [10306.889196] raid6test: test_disks(39, 44): faila= 39(D) failb= 44(D) OK [10306.889726] raid6test: test_disks(39, 45): faila= 39(D) failb= 45(D) OK [10306.890276] raid6test: test_disks(39, 46): faila= 39(D) failb= 46(D) OK [10306.890801] raid6test: test_disks(39, 47): faila= 39(D) failb= 47(D) OK [10306.891350] raid6test: test_disks(39, 48): faila= 39(D) failb= 48(D) OK [10306.891884] raid6test: test_disks(39, 49): faila= 39(D) failb= 49(D) OK [10306.892427] raid6test: test_disks(39, 50): faila= 39(D) failb= 50(D) OK [10306.892962] raid6test: test_disks(39, 51): faila= 39(D) failb= 51(D) OK [10306.893524] raid6test: test_disks(39, 52): faila= 39(D) failb= 52(D) OK [10306.894040] raid6test: test_disks(39, 53): faila= 39(D) failb= 53(D) OK [10306.894924] raid6test: test_disks(39, 54): faila= 39(D) failb= 54(D) OK [10306.895428] raid6test: test_disks(39, 55): faila= 39(D) failb= 55(D) OK [10306.895958] raid6test: test_disks(39, 56): faila= 39(D) failb= 56(D) OK [10306.896503] raiaid6test: test_disks(39, 60): faila= 39(D) failb= 60(D) OK [10307.397313] raid6test: test_disks(39, 61): faila= 39(D) failb= 61(D) OK [10307.397852] raid6test: test_disks(39, 62): faila= 39(D) failb= 62(P) OK [10307.398416] raid6test: test_disks(39, 63): faila= 39(D) failb= 63(Q) OK [10307.398947] raid6test: test_disks(40, 41): faila= 40(D) failb= 41(D) OK [10307.399496] raid6test: test_disks(40, 42): faila= 40(D) failb= 42(D) OK [10307.400002] raid6test: test_disks(40, 43): faila= 
40(D) failb= 43(D) OK [10307.400543] raid6test: test_disks(40, 44): faila= 40(D) failb= 44(D) OK [10307.401050] raid6test: test_disks(40, 45): faila= 40(D) failb= 45(D) OK [10307.401588] raid6test: test_disks(40, 46): faila= 40(D) failb= 46(D) OK [10307.402100] raid6test: test_disks(40, 47): faila= 40(D) failb= 47(D) OK [10307.402657] raid6test: test_disks(40, 48): faila= 40(D) failb= 48(D) OK [10307.403226] raid6test: test_disks(40, 49): faila= 40(D) failb= 49(D) OK [10307.403765] raid6test: test_disks(40, 50): faila= 40(D) failb= 50(D) OK [10307.404304] raid6test: test_disks(40, 51): faila= 40(D) failb= 51(D) OK [10307.404827] raid6test: test_disks(40, 52isks(40, 55): faila= 40(D) failb= 55(D) OK [10307.905620] raid6test: test_disks(40, 56): faila= 40(D) failb= 56(D) OK [10307.906103] raid6test: test_disks(40, 57): faila= 40(D) failb= 57(D) OK [10307.906664] raid6test: test_disks(40, 58): faila= 40(D) failb= 58(D) OK [10307.907213] raid6test: test_disks(40, 59): faila= 40(D) failb= 59(D) OK [10307.907743] raid6test: test_disks(40, 60): faila= 40(D) failb= 60(D) OK [10307.908286] raid6test: test_disks(40, 61): faila= 40(D) failb= 61(D) OK [10307.908816] raid6test: test_disks(40, 62): faila= 40(D) failb= 62(P) OK [10307.909370] raid6test: test_disks(40, 63): faila= 40(D) failb= 63(Q) OK [10307.909903] raid6test: test_disks(41, 42): faila= 41(D) failb= 42(D) OK [10307.910444] raid6test: test_disks(41, 43): faila= 41(D) failb= 43(D) OK [10307.910972] raid6test: test_disks(41, 44): faila= 41(D) failb= 44(D) OK [10307.911519] raid6test: test_disks(41, 45): faila= 41(D) failb= 45(D) OK [10307.912024] raid6test: test_disks(41, 46): faila= 41(D) failb= 46(D) OK [10307.912563] raid6test: test_disks(41, 47): faila= 41(D) failb= 47(D) OK [10307.913069] raid6test: test_disks(41, 48): faila= 41(D) failb= 48(D) OK [10307.913613] raid6test: test_disks(41, 49): faila= 41(D) failb= 49(D) OK [10307.914168] raid6test: test_disks(41, 50): faila= 41(D) failb= 50(D) OK [10307.914781] raid6test: test_disks(4isks(41, 54): faila= 41(D) failb= 54(D) OK [10308.415593] raid6test: test_disks(41, 55): faila= 41(D) failb= 55(D) OK [10308.416110] raid6test: test_disks(41, 56): faila= 41(D) failb= 56(D) OK [10308.416670] raid6test: test_disks(41, 57): faila= 41(D) failb= 57(D) OK [10308.417230] raid6test: test_disks(41, 58): faila= 41(D) failb= 58(D) OK [10308.417732] raid6test: test_disks(41, 59): faila= 41(D) failb= 59(D) OK [10308.418282] raid6test: test_disks(41, 60): faila= 41(D) failb= 60(D) OK [10308.418784] raid6test: test_disks(41, 61): faila= 41(D) failb= 61(D) OK [10308.419329] raid6test: test_disks(41, 62): faila= 41(D) failb= 62(P) OK [10308.419861] raid6test: test_disks(41, 63): faila= 41(D) failb= 63(Q) OK [10308.420398] raid6test: test_disks(42, 43): faila= 42(D) failb= 43(D) OK [10308.420927] raid6test: test_disks(42, 44): faila= 42(D) failb= 44(D) OK [10308.421474] raid6test: test_disks(42, 45): faila= 42(D) failb= 45(D) OK [10308.421960] raid6test: test_disks(42, 46): faila= 42(D) failb= 46(D) OK [10308.422503] raid6test: test_disks(42, 47): faila= 42(D) failb= 47(D) OK [10308.422985] raid6test: test_aid6test: test_disks(42, 51): faila= 42(D) failb= 51(D) OK [10308.923843] raid6test: test_disks(42, 52): faila= 42(D) failb= 52(D) OK [10308.924362] raid6test: test_disks(42, 53): faila= 42(D) failb= 53(D) OK [10308.924932] raid6test: test_disks(42, 54): faila= 42(D) failb= 54(D) OK [10308.925497] raid6test: test_disks(42, 55): faila= 42(D) failb= 55(D) OK [10308.925991] raid6test: test_disks(42, 56): 
faila= 42(D) failb= 56(D) OK [10308.926510] raid6test: test_disks(42, 57): faila= 42(D) failb= 57(D) OK [10308.926983] raid6test: test_disks(42, 58): faila= 42(D) failb= 58(D) OK [10308.927526] raid6test: test_disks(42, 59): faila= 42(D) failb= 59(D) OK [10308.928003] raid6test: test_disks(42, 60): faila= 42(D) failb= 60(D) OK [10308.928548] raid6test: test_disks(42, 61): faila= 42(D) failb= 61(D) OK [10308.929025] raid6test: test_disks(42, 62): faila= 42(D) failb= 62(P) OK [10308.929580] raid6test: test_disks(42, 63): faila= 42(D) failb= 63(Q) OK [10308.930054] raid6test: test_disks(43, 44): faila= 43(D) failb= 44(D) OK [10308.930602] raid6test: test_disks(43, 45): faila= 43(D) failb= 45(D) OK [10308.931109] raid6test: test_disks(43, 46)isks(43, 49): faila= 43(D) failb= 49(D) OK [10309.431947] raid6test: test_disks(43, 50): faila= 43(D) failb= 50(D) OK [10309.432514] raid6test: test_disks(43, 51): faila= 43(D) failb= 51(D) OK [10309.432992] raid6test: test_disks(43, 52): faila= 43(D) failb= 52(D) OK [10309.433559] raid6test: test_disks(43, 53): faila= 43(D) failb= 53(D) OK [10309.434075] raid6test: test_disks(43, 54): faila= 43(D) failb= 54(D) OK [10309.434623] raid6test: test_disks(43, 55): faila= 43(D) failb= 55(D) OK [10309.435169] raid6test: test_disks(43, 56): faila= 43(D) failb= 56(D) OK [10309.435670] raid6test: test_disks(43, 57): faila= 43(D) failb= 57(D) OK [10309.436218] raid6test: test_disks(43, 58): faila= 43(D) failb= 58(D) OK [10309.436702] raid6test: test_disks(43, 59): faila= 43(D) failb= 59(D) OK [10309.437242] raid6test: test_disks(43, 60): faila= 43(D) failb= 60(D) OK [10309.437752] raid6test: test_disks(43, 61): faila= 43(D) failb= 61(D) OK [10309.438311] raid6test: test_disks(43, 62): faila= 43(D) failb= 62(P) OK [10309.438921] raid6test: test_disks(43, 63): faila= 43(D) failb= 63(Q) OK [10309.439470] raid6test: test_disks(44, 48): faila= 44(D) failb= 48(D) OK [10309.940347] raid6test: test_disks(44, 49): faila= 44(D) failb= 49(D) OK [10309.940910] raid6test: test_disks(44, 50): faila= 44(D) failb= 50(D) OK [10309.941437] raid6test: test_disks(44, 51): faila= 44(D) failb= 51(D) OK [10309.941981] raid6test: test_disks(44, 52): faila= 44(D) failb= 52(D) OK [10309.942562] raid6test: test_disks(44, 53): faila= 44(D) failb= 53(D) OK [10309.943080] raid6test: test_disks(44, 54): faila= 44(D) failb= 54(D) OK [10309.943645] raid6test: test_disks(44, 55): faila= 44(D) failb= 55(D) OK [10309.944174] raid6test: test_disks(44, 56): faila= 44(D) failb= 56(D) OK [10309.944691] raid6test: test_disks(44, 57): faila= 44(D) failb= 57(D) OK [10309.945215] raid6test: test_disks(44, 58): faila= 44(D) failb= 58(D) OK [10309.945697] raid6test: test_disks(44, 59): faila= 44(D) failb= 59(D) OK [10309.946226] raid6test: test_disks(44, 60): faila= 44(D) failb= 60(D) OK [10309.946746] raid6test: test_disks(44, 61): faila= 44(D) failb= 61(D) OK [10309.947289] raid6test: test_disks(44, 62): faila= 44(D) failb= 62(P) OK [10309.947782] raid6test: test_disks(44, 63): faila= 44(D) failb= 63(Q) OK [10309.948309] raid6test: test_disks(45, 46): faila= 45(D) failb= 46(D) OK [10309.948828] raid6test: test_disks(45, 47): faila= 45(D) faiila= 45(D) failb= 50(D) OK [10310.449871] raid6test: test_disks(45, 51): faila= 45(D) failb= 51(D) OK [10310.450462] raid6test: test_disks(45, 52): faila= 45(D) failb= 52(D) OK [10310.451023] raid6test: test_disks(45, 53): faila= 45(D) failb= 53(D) OK [10310.451589] raid6test: test_disks(45, 54): faila= 45(D) failb= 54(D) OK [10310.452132] raid6test: test_disks(45, 55): 
faila= 45(D) failb= 55(D) OK [10310.452728] raid6test: test_disks(45, 56): faila= 45(D) failb= 56(D) OK [10310.453330] raid6test: test_disks(45, 57): faila= 45(D) failb= 57(D) OK [10310.453871] raid6test: test_disks(45, 58): faila= 45(D) failb= 58(D) OK [10310.454451] raid6test: test_disks(45, 59): faila= 45(D) failb= 59(D) OK [10310.455015] raid6test: test_disks(45, 60): faila= 45(D) failb= 60(D) OK [10310.455584] raid6test: test_disks(45, 61): faila= 45(D) failb= 61(D) OK [10310.456119] raid6test: test_disks(45, 62): faila= 45(D) failb= 62(P) OK [10310.456723] raid6test: test_disks(45, 63): faila= 45(D) failb= 63(Q) OK [10310.457293] raid6test: test_disks(46, 47): faila= 46(D) failb= 47(D) OK [10310.457858] raid6test: test_disks(46, 48): faila= 46(D) failb= 48(D) OK [10310.458420] raid6test: test_disks(46, 49): failila= 46(D) failb= 52(D) OK [10310.959464] raid6test: test_disks(46, 53): faila= 46(D) failb= 53(D) OK [10310.960105] raid6test: test_disks(46, 54): faila= 46(D) failb= 54(D) OK [10310.960804] raid6test: test_disks(46, 55): faila= 46(D) failb= 55(D) OK [10310.961488] raid6test: test_disks(46, 56): faila= 46(D) failb= 56(D) OK [10310.962125] raid6test: test_disks(46, 57): faila= 46(D) failb= 57(D) OK [10310.962796] raid6test: test_disks(46, 58): faila= 46(D) failb= 58(D) OK [10310.963527] raid6test: test_disks(46, 59): faila= 46(D) failb= 59(D) OK [10310.964214] raid6test: test_disks(46, 60): faila= 46(D) failb= 60(D) OK [10310.964868] raid6test: test_disks(46, 61): faila= 46(D) failb= 61(D) OK [10310.965570] raid6test: test_disks(46, 62): faila= 46(D) failb= 62(P) OK [10310.966312] raid6test: test_disks(46, 63): faila= 46(D) failb= 63(Q) OK [10310.966944] raid6test: test_disks(47, 48): faila= 47(D) failb= 48(D) OK [10310.967633] raid6test: test_disks(47, 49): faila= 47(D) failb= 49(D) OK [10310.968332] raid6test: test_disks(47, 50): faila= 47(D) failb= 50(D) OK [10310.968987] raid6test: test_disks(47, 51): fb= 53(D) OK [10311.369713] raid6test: test_disks(47, 54): faila= 47(D) failb= 54(D) OK [10311.370260] raid6test: test_disks(47, 55): faila= 47(D) failb= 55(D) OK [10311.370801] raid6test: test_disks(47, 56): faila= 47(D) failb= 56(D) OK [10311.371341] raid6test: test_disks(47, 57): faila= 47(D) failb= 57(D)aid6test: test_disks(47, 58): faila= 47(D) failb= 58(D) OK [10311.472232] raid6test: test_disks(47, 59): faila= 47(D) failb= 59(D) OK [10311.472811] raid6test: test_disks(47, 60): faila= 47(D) failb= 60(D) OK [10311.473412] raid6test: test_disks(47, 61): faila= 47(D) failb= 61(D) OK [10311.473944] raid6test: test_disks(47, 62): faila= 47(D) failb= 62(P) OK [10311.474518] raid6test: test_disks(47, 63): faila= 47(D) failb= 63(Q) OK [10311.475078] raid6test: test_disks(48, 49): faila= 48(D) failb= 49(D) OK [10311.475636] raid6test: test_disks(48, 50): faila= 48(D) failb= 50(D) OK [10311.476288] raid6test: test_disks(48, 51): faila= 48(D) failb= 51(D) OK [10311.476850] raid6test: test_disks(48, 52): faila= 48(D) failb= 52(D) OK [10311.477379] raid6test: test_disks(48, 53): faila= 48(D) failb= 53(D) OK [10311.477892] raid6test: test_disks(48, 54): faila= 48(D) failb= 54(D) OK [10311.478453] raid6test: test_disks(4b= 57(D) OK [10311.879262] raid6test: test_disks(48, 58): faila= 48(D) failb= 58(D) OK [10311.879802] raid6test: test_disks(48, 59): faila= 48(D) failb= 5aid6test: test_disks(48, 60): faila= 48(D) failb= 60(D) OK [10311.980631] raid6test: test_disks(48, 61): faila= 48(D) failb= 61(D) OK [10311.981108] raid6test: test_disks(48, 62): faila= 48(D) failb= 62(P) OK 
[10311.981645] raid6test: test_disks(48, 63): faila= 48(D) failb= 63(Q) OK [10311.982309] raid6test: test_disks(49, 50): faila= 49(D) failb= 50(D) OK [10311.982835] raid6test: test_disks(49, 51): faila= 49(D) failb= 51(D) OK [10311.983428] raid6test: test_disks(49, 52): faila= 49(D) failb= 52(D) OK [10311.984039] raid6test: test_disks(49, 53): faila= 49(D) failb= 53(D) OK [10311.984678] raid6test: test_disks(49, 54): faila= 49(D) failb= 54(D) OK [10311.985213] raid6test: test_disks(49, 55): faila= 49(D) failb= 55(D) OK [10311.985931] raid6test: test_disks(49, 56): faila= 49(D) failb= 56(D) OK [10311.986503] raid6test: test_disks(49, 57): faila= 49(D) failb= 57(D) OK [10311.987031] raid6test: test_disks(49, 58): faila= 49(D) failb= 58(D) OK [10311.987655] raid6test: test_disks(49, 59): faila= 49(D) failb= 59(D) OK [10312.015294aid6test: test_disks(49, 62): faila= 49(D) failb= 62(P) OK [10312.388958] raid6test: test_disks(49, 63): faila= 49(D) failb= 63(Q) OK [10312.389547] raid6test: test_disks(50, 51): faila= 50(D) failb= 51(D) OK [10312.390054] raid6test: test_disks(50, 52): faila= 50(D) failb= 52(D) OK [10312.390590] raid6test: test_disks(50, 53): faila= 50(D) failb= 53(D) OK [10312.391088] raid6test: b= 54(D) OK [10312.491583] raid6test: test_disks(50, 55): faila= 50(D) failb= 55(D) OK [10312.492089] raid6test: test_disks(50, 56): faila= 50(D) failb= 56(D) OK [10312.492681] raid6test: test_disks(50, 57): faila= 50(D) failb= 57(D) OK [10312.493283] raid6test: test_disks(50, 58): faila= 50(D) failb= 58(D) OK [10312.493809] raid6test: test_disks(50, 59): faila= 50(D) failb= 59(D) OK [10312.494375] raid6test: test_disks(50, 60): faila= 50(D) failb= 60(D) OK [10312.494889] raid6test: test_disks(50, 61): faila= 50(D) failb= 61(D) OK [10312.495455] raid6test: test_disks(50, 62): faila= 50(D) failb= 62(P) OK [10312.496020] raid6test: test_disks(50, 63): faila= 50(D) failb= 63(Q) OK [10312.496581] raid6test: test_disks(51, 52): faila= 51(D) failb= 52(D) OK [10312.497111] raid6test: test_disks(51, 53): faila= 51(D) failb= 53(D) OK [10312[10312.998022] raid6test: test_disks(51, 57): faila= 51(D) failb= 57(D) OK [10312.998573] raid6test: test_disks(51, 58): faila= 51(D) failb= 58(D) OK [10312.999063] raid6test: test_disks(51, 59): faila= 51(D) failb= 59(D) OK [10312.999623] raid6test: test_disks(51, 60): faila= 51(D) failb= 60(D) OK [10313.000148] raid6test: test_disks(51, 61): faila= 51(D) failb= 61(D) OK [10313.000680] raid6test: test_disks(51, 62): faila= 51(D) failb= 62(P) OK [10313.001231] raid6test: test_disks(51, 63): faila= 51(D) failb= 63(Q) OK [10313.001747] raid6test: test_disks(52, 53): faila= 52(D) failb= 53(D) OK [10313.002304] raid6test: test_disks(52, 54): faila= 52(D) failb= 54(D) OK [10313.002878] raid6test: test_disks(52, 55): faila= 52(D) failb= 55(D) OK [10313.003441] raid6test: test_disks(52, 56): faila= 52(D) failb= 56(D) OK [10313.004010] raid6test: test_disks(52, 57): faila= 52(D) failb= 57(D) OK [10313.004549] raid6test: test_disks(52, 58): faila= 52(D) failb= 58(D) OK [10313.005065] raid6test: test_disks(52, 59): faila= 52(D) failb= 59(D) OK [10313.005633] raid6test: test_disks(52, 60): faila= 52(D) failb= 60(D) OK [10313.006155] raid6test: test_disks(52, 61): faila= 52(D) failb= 61(D) OK [10313.006681] raid6test: test_disks(52, 62): faila= 52(D) failb= 62(P) OK [10[10313.507673] raid6test: test_disks(53, 56): faila= 53(D) failb= 56(D) OK [10313.508268] raid6test: test_disks(53, 57): faila= 53(D) failb= 57(D) OK [10313.508786] raid6test: test_disks(53, 58): faila= 53(D) 
failb= 58(D) OK [10313.509304] raid6test: test_disks(53, 59): faila= 53(D) failb= 59(D) OK [10313.509846] raid6test: test_disks(53, 60): faila= 53(D) failb= 60(D) OK [10313.510360] raid6test: test_disks(53, 61): faila= 53(D) failb= 61(D) OK [10313.510831] raid6test: test_disks(53, 62): faila= 53(D) failb= 62(P) OK [10313.511359] raid6test: test_disks(53, 63): faila= 53(D) failb= 63(Q) OK [10313.511829] raid6test: test_disks(54, 55): faila= 54(D) failb= 55(D) OK [10313.512341] raid6test: test_disks(54, 56): faila= 54(D) failb= 56(D) OK [10313.512842] raid6test: test_disks(54, 57): faila= 54(D) failb= 57(D) OK [10313.513384] raid6test: test_disks(54, 58): faila= 54(D) failb= 58(D) OK [10313.513898] raid6test: test_disks(54, 59): faila= 54(D) failb= 59(D) OK [10313.514412] raid6test: test_disks(54, 60): faila= 54(D) failb= 60(D) OK [10313.514912] raid6test: test_disks(54, 61): faila= 54(D) failb= 61(D) OK [10313.515420] raid6test: test_disks(54, 62): faila= 54(D) failb= 62(P) OK [1[10314.016334] raid6test: test_disks(55, 58): faila= 55(D) failb= 58(D) OK [10314.016900] raid6test: test_disks(55, 59): faila= 55(D) failb= 59(D) OK [10314.017475] raid6test: test_disks(55, 60): faila= 55(D) failb= 60(D) OK [10314.018032] raid6test: test_disks(55, 61): faila= 55(D) failb= 61(D) OK [10314.018593] raid6test: test_disks(55, 62): faila= 55(D) failb= 62(P) OK [10314.019097] raid6test: test_disks(55, 63): faila= 55(D) failb= 63(Q) OK [10314.019994] raid6test: test_disks(56, 57): faila= 56(D) failb= 57(D) OK [10314.020557] raid6test: test_disks(56, 58): faila= 56(D) failb= 58(D) OK [10314.021106] raid6test: test_disks(56, 59): faila= 56(D) failb= 59(D) OK [10314.021669] raid6test: test_disks(56, 60): faila= 56(D) failb= 60(D) OK [10314.022244] raid6test: test_disks(56, 61): faila= 56(D) failb= 61(D) OK [10314.022768] raid6test: test_disks(56, 62): faila= 56(D) failb= 62(P) OK [10314.023380] raid6test: test_disks(56, 63): faila= 56(D) failb= 63(Q) OK [10314.023932] raid6test: test_disks(57, 58): faila= 57(D) failb= 58(D) OK [10314.024502] raid6test: test_disks(57, 59): faila= 57(D) failb= 59(D) OK [10314.025019] raid6test: test_disks(57, 60): faila= 57(D) failb= 60(D) OK [10314.025588] raid6test: tesaid6test: test_disks(58, 59): faila= 58(D) failb= 59(D) OK [10314.526993] raid6test: test_disks(58, 60): faila= 58(D) failb= 60(D) OK [10314.527517] raid6test: test_disks(58, 61): faila= 58(D) failb= 61(D) OK [10314.527996] raid6test: test_disks(58, 62): faila= 58(D) failb= 62(P) OK [10314.528529] raid6test: test_disks(58, 63): faila= 58(D) failb= 63(Q) OK [10314.529051] raid6test: test_disks(59, 60): faila= 59(D) failb= 60(D) OK [10314.529580] raid6test: test_disks(59, 61): faila= 59(D) failb= 61(D) OK [10314.530096] raid6test: test_disks(59, 62): faila= 59(D) failb= 62(P) OK [10314.530666] raid6test: test_disks(59, 63): faila= 59(D) failb= 63(Q) OK [10314.531222] raid6test: test_disks(60, 61): faila= 60(D) failb= 61(D) OK [10314.531713] raid6test: test_disks(60, 62): faila= 60(D) failb= 62(P) OK [10314.532252] raid6test: test_disks(60, 63): faila= 60(D) failb= 63(Q) OK [10314.532784] raid6test: test_disks(61, 62): faila= 61(D) failb= 62(P) OK [10314.533367] raid6test: test_disks(61, 63): faila= 61(D) failb= 63(Q) OK [10314.533848] raid6test: test_disks(62, 63): faila= 62(P) failb= 63(Q) OK [10314.534275] r** Attempting to unload raid6test... ** ** Attempting to load raid_class... ** ** Attempting to unload raid_class... ** ** Attempting to load ramoops... ** ** Attempting to unload ramoops... 
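Each "test_disks(faila, failb) ... OK" line above is the raid6test module rebuilding two simultaneously failed columns out of 64 (columns 0-61 are data, 62 is P, 63 is Q) and matching the result against the originals. A minimal Python sketch of the GF(2^8) arithmetic behind the two-data-column case follows; it is not the kernel's lib/raid6 code, and the byte values and failure pairs are invented for illustration.

import functools, operator, random

POLY = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1, the reduction polynomial used by lib/raid6

def gf_mul(a, b):
    # carry-less multiply of two GF(2^8) elements, reduced mod POLY
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^(2^8 - 2) inverts any nonzero element

def pq_syndromes(data):
    # P is the plain XOR of the data bytes, Q is the sum of g^i * D_i with g = 2
    p = functools.reduce(operator.xor, data)
    q = functools.reduce(operator.xor,
                         (gf_mul(gf_pow(2, i), d) for i, d in enumerate(data)))
    return p, q

def recover_two_data(data, p, q, x, y):
    # rebuild data columns x and y from P, Q and the surviving data columns
    s = p ^ functools.reduce(operator.xor,
                             (d for i, d in enumerate(data) if i not in (x, y)))
    t = q ^ functools.reduce(operator.xor,
                             (gf_mul(gf_pow(2, i), d)
                              for i, d in enumerate(data) if i not in (x, y)))
    # s = Dx ^ Dy and t = g^x*Dx ^ g^y*Dy: solve the 2x2 system over GF(2^8)
    dx = gf_mul(gf_mul(gf_pow(2, y), s) ^ t,
                gf_inv(gf_pow(2, x) ^ gf_pow(2, y)))
    return dx, s ^ dx

data = [random.randrange(256) for _ in range(62)]  # 62 data columns + P + Q = 64 "disks"
p, q = pq_syndromes(data)
for x, y in [(27, 28), (27, 61), (0, 45)]:         # a few dual data failures
    assert recover_two_data(data, p, q, x, y) == (data[x], data[y])
print("dual data-disk recovery OK")

Failures that include the P or Q column (the 62(P)/63(Q) pairs above) reduce to single-unknown cases and are left out of the sketch.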
** ** Attempting to load rbd... ** [10319.862142] Key type ceph registered [10319.865291] libceph: loaded (mon/osd proto 15/24) [10319.937562] rbd: loaded (major 252) ** Attempting to unload rbd... ** [10320.565544] Key type ceph unregistered ** Attempting to load rdma_cm... ** ** Attempting to unload rdma_cm... ** ** Attempting to load rdma_ucm... ** ** Attempting to unload rdma_ucm... ** ** Attempting to load reed_solomon... ** ** Attempting to unload reed_solomon... ** ** Attempting to load rfcomm... ** [10329.036218] Bluetooth: Core ver 2.22 [10329.036947] NET: Registered PF_BLUETOOTH protocol family [10329.037283] Bluetooth: HCI device and connection manager initialized [10329.037927] Bluetooth: HCI socket layer initialized [10329.039143] Bluetooth: L2CAP socket layer initialized [10329.039697] Bluetooth: SCO socket layer initialized [10329.072185] Bluetooth: RFCOMM TTY layer initialized [10329.072966] Bluetooth: RFCOMM socket layer initialized [10329.073563] Bluetooth: RFCOMM ver 1.11 ** Attempting to unload rfcomm... ** [10329.646553] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load ring_buffer_benchmark... ** [10331.292938] [10331.293909] ********************************************************** [10331.294311] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [10331.294757] ** ** [10331.295117] ** trace_printk() being used. Allocating extra memory. ** [10331.295553] ** ** [10331.295965] ** This means that this is a DEBUG kernel and it is ** [10331.296393] ** unsafe for production use. ** [10331.296808] ** ** [10331.297203] ** If you see this message and you are not debugging ** [10331.297612] ** the kernel, report this immediately to your vendor! ** [10331.298021] ** ** [10331.298448] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [10331.298844] ********************************************************** ** Attempting to unload ring_buffer_benchmark... ** ** Attempting to load rmd160... ** ** Attempting to unload rmd160... ** ** Attempting to load rpcrdma... ** [10338.043692] RPC: Registered rdma transport module. [10338.044462] RPC: Registered rdma backchannel transport module. ** Attempting to unload rpcrdma... ** [10338.562140] RPC: Unregistered rdma transport module. [10338.562515] RPC: Unregistered rdma backchannel transport module. ** Attempting to load sch_cake... ** ** Attempting to unload sch_cake... ** ** Attempting to load sch_cbs... ** ** Attempting to unload sch_cbs... ** ** Attempting to load sch_etf... ** ** Attempting to unload sch_etf... ** ** Attempting to load sch_ets... ** ** Attempting to unload sch_ets... ** ** Attempting to load sch_fq... ** ** Attempting to unload sch_fq... ** ** Attempting to load sch_hfsc... ** ** Attempting to unload sch_hfsc... ** ** Attempting to load sch_htb... ** ** Attempting to unload sch_htb... ** ** Attempting to load sch_ingress... ** ** Attempting to unload sch_ingress... ** ** Attempting to load sch_prio... ** ** Attempting to unload sch_prio... ** ** Attempting to load sch_sfq... ** ** Attempting to unload sch_sfq... ** ** Attempting to load sch_taprio... ** ** Attempting to unload sch_taprio... ** ** Attempting to load sch_tbf... ** ** Attempting to unload sch_tbf... ** ** Attempting to load scsi_transport_iscsi... ** [10360.060905] Loading iSCSI transport class v2.0-870. ** Attempting to unload scsi_transport_iscsi... ** ** Attempting to load serpent_generic... ** ** Attempting to unload serpent_generic... ** ** Attempting to load serport... ** ** Attempting to unload serport... 
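The "** Attempting to load <module>... **" / "** Attempting to unload <module>... **" markers that frame the kernel messages come from the test harness driving this run, not from the kernel, and the harness itself is not part of this log. A minimal sketch of the kind of loop that would produce them, assuming modprobe is available and a module list is passed on the command line:

#!/usr/bin/env python3
# Hypothetical reconstruction of the load/unload loop behind this log;
# the real harness is not included here.  Requires root and a module list.
import subprocess, sys, time

def try_module(name, delay=1.0):
    print(f"** Attempting to load {name}... **", flush=True)
    load = subprocess.run(["modprobe", name], capture_output=True, text=True)
    if load.returncode != 0:
        print(f"modprobe {name} failed: {load.stderr.strip()}", flush=True)
        return
    time.sleep(delay)                      # give init messages time to hit the console
    print(f"** Attempting to unload {name}... **", flush=True)
    subprocess.run(["modprobe", "-r", name], capture_output=True, text=True)

if __name__ == "__main__":
    for module in sys.argv[1:]:            # e.g. rbd rdma_cm reed_solomon ...
        try_module(module)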
** ** Attempting to load sit... ** [10366.667415] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload sit... ** ** Attempting to load snd... ** ** Attempting to unload snd... ** ** Attempting to load snd_hda_codec_hdmi... ** ** Attempting to unload snd_hda_codec_hdmi... ** ** Attempting to load snd_hrtimer... ** ** Attempting to unload snd_hrtimer... ** ** Attempting to load snd_seq_dummy... ** ** Attempting to unload snd_seq_dummy... ** ** Attempting to load snd_timer... ** ** Attempting to unload snd_timer... ** ** Attempting to load softdog... ** [10380.617191] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0) [10380.618169] softdog: soft_reboot_cmd= soft_active_on_boot=0 ** Attempting to unload softdog... ** ** Attempting to load soundcore... ** ** Attempting to unload soundcore... ** ** Attempting to load sr_mod... ** ** Attempting to unload sr_mod... ** [10384.818669] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load tap... ** ** Attempting to unload tap... ** ** Attempting to load target_core_file... ** [10388.557935] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10388.559716] db_root: cannot open: /etc/target ** Attempting to unload target_core_file... ** ** Attempting to load target_core_iblock... ** [10390.669897] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10390.671726] db_root: cannot open: /etc/target ** Attempting to unload target_core_iblock... ** ** Attempting to load target_core_mod... ** [10392.795984] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10392.797910] db_root: cannot open: /etc/target ** Attempting to unload target_core_mod... ** ** Attempting to load target_core_pscsi... ** [10394.796615] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10394.798416] db_root: cannot open: /etc/target ** Attempting to unload target_core_pscsi... ** ** Attempting to load target_core_user... ** [10396.839901] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10396.841710] db_root: cannot open: /etc/target ** Attempting to unload target_core_user... ** ** Attempting to load tcm_fc... ** [10399.636600] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10399.638222] db_root: cannot open: /etc/target ** Attempting to unload tcm_fc... ** ** Attempting to load tcm_loop... ** [10401.813135] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10401.814775] db_root: cannot open: /etc/target ** Attempting to unload tcm_loop... ** ** Attempting to load tcp_bbr... ** ** Attempting to unload tcp_bbr... ** ** Attempting to load tcp_dctcp... ** ** Attempting to unload tcp_dctcp... ** ** Attempting to load tcp_nv... ** ** Attempting to unload tcp_nv... ** ** Attempting to load team... ** [10408.536639] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team... ** ** Attempting to load team_mode_activebackup... ** [10410.101264] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_activebackup... ** ** Attempting to load team_mode_broadcast... ** [10411.736147] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_broadcast... ** ** Attempting to load team_mode_loadbalance... 
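The softdog defaults printed above (soft_margin=60 sec, nowayout=0) mean that once /dev/watchdog is opened, userspace must ping it at least every 60 seconds, and may disarm it again with the 'V' magic-close byte. A minimal pinger sketch, assuming softdog stays loaded and the process is allowed to open the device:

# Sketch only: pings /dev/watchdog a few times, then sends the magic-close
# byte so the watchdog is disarmed on exit (possible because nowayout=0).
import time

def ping_watchdog(pings=3, interval=10, device="/dev/watchdog"):
    with open(device, "wb", buffering=0) as wd:   # opening the device arms it
        for _ in range(pings):
            wd.write(b"\0")        # any write counts as a keep-alive
            time.sleep(interval)   # must stay under soft_margin (60 s here)
        wd.write(b"V")             # magic close: disarm before closing

if __name__ == "__main__":
    ping_watchdog()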
** [10413.430479] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_loadbalance... ** ** Attempting to load team_mode_random... ** [10415.100806] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_random... ** ** Attempting to load team_mode_roundrobin... ** [10416.729507] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_roundrobin... ** ** Attempting to load tipc... ** [10419.212738] tipc: Activated (version 2.0.0) [10419.214587] NET: Registered PF_TIPC protocol family [10419.217781] tipc: Started in single node mode ** Attempting to unload tipc... ** [10419.750204] NET: Unregistered PF_TIPC protocol family [10419.921487] tipc: Deactivated ** Attempting to load tls... ** ** Attempting to unload tls... ** ** Attempting to load ts_bm... ** ** Attempting to unload ts_bm... ** ** Attempting to load ts_fsm... ** ** Attempting to unload ts_fsm... ** ** Attempting to load tunnel4... ** ** Attempting to unload tunnel4... ** ** Attempting to load tunnel6... ** ** Attempting to unload tunnel6... ** ** Attempting to load twofish_common... ** ** Attempting to unload twofish_common... ** ** Attempting to load twofish_generic... ** ** Attempting to unload twofish_generic... ** ** Attempting to load ubi... ** ** Attempting to unload ubi... ** ** Attempting to load udf... ** ** Attempting to unload udf... ** [10436.180969] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load udp_tunnel... ** ** Attempting to unload udp_tunnel... ** ** Attempting to load uhid... ** ** Attempting to unload uhid... ** ** Attempting to load uinput... ** ** Attempting to unload uinput... ** ** Attempting to load uio... ** ** Attempting to unload uio... ** ** Attempting to load uio_pci_generic... ** ** Attempting to unload uio_pci_generic... ** ** Attempting to load usb_wwan... ** ** Attempting to unload usb_wwan... ** ** Attempting to load usbnet... ** ** Attempting to unload usbnet... ** ** Attempting to load veth... ** ** Attempting to unload veth... ** ** Attempting to load vhost... ** ** Attempting to unload vhost... ** ** Attempting to load vhost_iotlb... ** ** Attempting to unload vhost_iotlb... ** ** Attempting to load vhost_net... ** ** Attempting to unload vhost_net... ** ** Attempting to load vhost_vdpa... ** ** Attempting to unload vhost_vdpa... ** ** Attempting to load vhost_vsock... ** [10458.271744] NET: Registered PF_VSOCK protocol family ** Attempting to unload vhost_vsock... ** [10458.995556] NET: Unregistered PF_VSOCK protocol family ** Attempting to load videodev... ** [10460.180240] mc: Linux media interface: v0.10 [10460.307202] videodev: Linux video capture interface: v2.00 ** Attempting to unload videodev... ** ** Attempting to load virtio_gpu... ** ** Attempting to unload virtio_gpu... ** ** Attempting to load virtio_balloon... ** ** Attempting to unload virtio_balloon... ** ** Attempting to load virtio_blk... ** ** Attempting to unload virtio_blk... ** ** Attempting to load virtio_dma_buf... ** ** Attempting to unload virtio_dma_buf... ** ** Attempting to load virtio_input... ** ** Attempting to unload virtio_input... ** ** Attempting to load virtio_net... ** ** Attempting to unload virtio_net... ** ** Attempting to load virtio_scsi... ** ** Attempting to unload virtio_scsi... 
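Loading vhost_vsock registers PF_VSOCK, the host/guest socket family (vsock_diag and vsockmon re-register it later in the log). A small host-side listener sketch using Python's AF_VSOCK support; the port number is arbitrary and the vsock modules are assumed to still be loaded:

# Minimal vsock echo sketch (host side).  Port 5000 is an arbitrary choice;
# a guest would connect to (host CID, 5000) with an AF_VSOCK stream socket.
import socket

def vsock_echo_once(port=5000):
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as srv:
        srv.bind((socket.VMADDR_CID_ANY, port))   # accept from any CID
        srv.listen(1)
        conn, peer = srv.accept()                 # peer is a (cid, port) tuple
        with conn:
            conn.sendall(conn.recv(4096))         # echo one message back

if __name__ == "__main__":
    vsock_echo_once()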
** ** Attempting to load virtio_vdpa... ** ** Attempting to unload virtio_vdpa... ** ** Attempting to load virtiofs... ** ** Attempting to unload virtiofs... ** ** Attempting to load vmac... ** ** Attempting to unload vmac... ** ** Attempting to load vport_geneve... ** [10478.779229] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_geneve... ** ** Attempting to load vport_gre... ** [10481.843806] gre: GRE over IPv4 demultiplexor driver [10482.298138] openvswitch: Open vSwitch switching datapath [10482.319429] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload vport_gre... ** ** Attempting to load vport_vxlan... ** [10485.961836] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_vxlan... ** ** Attempting to load vringh... ** ** Attempting to unload vringh... ** ** Attempting to load vsock_diag... ** [10490.748953] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsock_diag... ** [10491.314858] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vsockmon... ** [10492.360367] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsockmon... ** [10492.919852] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vxlan... ** ** Attempting to unload vxlan... ** ** Attempting to load wireguard... ** [10495.865472] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [10495.866019] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. [10495.866608] TECH PREVIEW: WireGuard may not be fully supported. [10495.866608] Please review provided documentation for limitations. ** Attempting to unload wireguard... ** ** Attempting to load wp512... ** ** Attempting to unload wp512... ** ** Attempting to load xcbc... ** ** Attempting to unload xcbc... ** ** Attempting to load xfrm4_tunnel... ** ** Attempting to unload xfrm4_tunnel... ** ** Attempting to load xfrm6_tunnel... ** ** Attempting to unload xfrm6_tunnel... ** ** Attempting to load xfrm_interface... ** [10507.776971] IPsec XFRM device driver ** Attempting to unload xfrm_interface... ** ** Attempting to load xfrm_ipcomp... ** ** Attempting to unload xfrm_ipcomp... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xsk_diag... ** ** Attempting to unload xsk_diag... ** ** Attempting to load xt_AUDIT... ** ** Attempting to unload xt_AUDIT... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_CHECKSUM... ** ** Attempting to unload xt_CHECKSUM... ** ** Attempting to load xt_CLASSIFY... ** ** Attempting to unload xt_CLASSIFY... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** [-- MARK -- Fri Feb 3 08:40:00 2023] ** Attempting to load xt_DSCP... ** ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... ** ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_MASQUERADE... ** ** Attempting to unload xt_MASQUERADE... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG... ** ** Attempting to unload xt_NFLOG... 
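WireGuard loads here as a TECH PREVIEW module; configuring an interface on top of it is a userspace job for the ip and wg tools. A rough sketch of that, in which wg0, the key file, the addresses and the peer are all placeholders:

# Illustrative only: shells out to `ip` and `wg` (both assumed installed);
# wg0, the key path, the addresses and the peer key below are placeholders.
import subprocess

def sh(*cmd):
    subprocess.run(cmd, check=True)

def bring_up_wg0():
    sh("ip", "link", "add", "wg0", "type", "wireguard")
    sh("wg", "set", "wg0",
       "listen-port", "51820",
       "private-key", "/etc/wireguard/wg0.key",
       "peer", "PEER_PUBLIC_KEY_BASE64",
       "allowed-ips", "10.10.0.2/32",
       "endpoint", "198.51.100.7:51820")
    sh("ip", "address", "add", "10.10.0.1/24", "dev", "wg0")
    sh("ip", "link", "set", "wg0", "up")

if __name__ == "__main__":
    bring_up_wg0()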
** ** Attempting to load xt_NFQUEUE... ** ** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... ** ** Attempting to load xt_SECMARK... ** ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_TCPMSS... ** ** Attempting to unload xt_TCPMSS... ** ** Attempting to load xt_TCPOPTSTRIP... ** ** Attempting to unload xt_TCPOPTSTRIP... ** ** Attempting to load xt_TEE... ** ** Attempting to unload xt_TEE... ** ** Attempting to load xt_TPROXY... ** ** Attempting to unload xt_TPROXY... ** ** Attempting to load xt_TRACE... ** ** Attempting to unload xt_TRACE... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_cluster... ** ** Attempting to unload xt_cluster... ** ** Attempting to load xt_comment... ** ** Attempting to unload xt_comment... ** ** Attempting to load xt_connbytes... ** ** Attempting to unload xt_connbytes... ** ** Attempting to load xt_connlabel... ** ** Attempting to unload xt_connlabel... ** ** Attempting to load xt_connlimit... ** ** Attempting to unload xt_connlimit... ** ** Attempting to load xt_connmark... ** ** Attempting to unload xt_connmark... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_conntrack... ** ** Attempting to unload xt_conntrack... ** ** Attempting to load xt_cpu... ** ** Attempting to unload xt_cpu... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** ** Attempting to load xt_dccp... ** ** Attempting to unload xt_dccp... ** ** Attempting to load xt_devgroup... ** ** Attempting to unload xt_devgroup... ** ** Attempting to load xt_dscp... ** ** Attempting to unload xt_dscp... ** ** Attempting to load xt_DSCP... ** ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_ecn... ** ** Attempting to unload xt_ecn... ** ** Attempting to load xt_esp... ** ** Attempting to unload xt_esp... ** ** Attempting to load xt_hashlimit... ** ** Attempting to unload xt_hashlimit... ** ** Attempting to load xt_helper... ** ** Attempting to unload xt_helper... ** ** Attempting to load xt_hl... ** ** Attempting to unload xt_hl... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... ** ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_iprange... ** ** Attempting to unload xt_iprange... ** ** Attempting to load xt_ipvs... ** [10602.886350] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [10602.888051] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [10602.888994] IPVS: Each connection entry needs 416 bytes at least [10602.890684] IPVS: ipvs loaded. ** Attempting to unload xt_ipvs... ** [10603.468595] IPVS: ipvs unloaded. ** Attempting to load xt_length... ** ** Attempting to unload xt_length... ** ** Attempting to load xt_limit... ** ** Attempting to unload xt_limit... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_mac... ** ** Attempting to unload xt_mac... ** ** Attempting to load xt_mark... ** ** Attempting to unload xt_mark... ** ** Attempting to load xt_multiport... 
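The IPVS figures above fit together if each of the 4096 hash buckets is a single 8-byte list head (an assumption about the 64-bit layout): 4096 buckets at 8 bytes is exactly the reported 32 Kbytes, separate from the at-least-416 bytes needed per tracked connection. A quick check of that arithmetic:

# Sanity-check the numbers IPVS printed above (8-byte buckets assumed).
buckets = 4096
bucket_bytes = 8                       # assumed bucket (hlist head) size on x86_64
table_kbytes = buckets * bucket_bytes // 1024
assert table_kbytes == 32              # matches "memory=32Kbytes"
per_conn = 416                         # "Each connection entry needs 416 bytes at least"
print(f"hash table: {table_kbytes} Kbytes; 10000 connections need at least "
      f"{10000 * per_conn // 1024} Kbytes more")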
** ** Attempting to unload xt_multiport... ** ** Attempting to load xt_nat... ** ** Attempting to unload xt_nat... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG... ** ** Attempting to unload xt_NFLOG... ** ** Attempting to load xt_NFQUEUE... ** ** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_osf... ** ** Attempting to unload xt_osf... ** ** Attempting to load xt_owner... ** ** Attempting to unload xt_owner... ** ** Attempting to load xt_physdev... ** ** Attempting to unload xt_physdev... ** ** Attempting to load xt_pkttype... ** ** Attempting to unload xt_pkttype... ** ** Attempting to load xt_policy... ** ** Attempting to unload xt_policy... ** ** Attempting to load xt_quota... ** ** Attempting to unload xt_quota... ** ** Attempting to load xt_rateest... ** ** Attempting to unload xt_rateest... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_realm... ** ** Attempting to unload xt_realm... ** ** Attempting to load xt_recent... ** ** Attempting to unload xt_recent... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... ** ** Attempting to load xt_SECMARK... ** ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_set... ** ** Attempting to unload xt_set... ** ** Attempting to load xt_socket... ** ** Attempting to unload xt_socket... ** ** Attempting to load xt_state... ** ** Attempting to unload xt_state... ** ** Attempting to load xt_statistic... ** ** Attempting to unload xt_statistic... ** ** Attempting to load xt_string... ** ** Attempting to unload xt_string... ** ** Attempting to load xt_tcpmss... ** ** Attempting to unload xt_tcpmss... ** ** Attempting to load xt_TCPMSS... ** ** Attempting to unload xt_TCPMSS... ** ** Attempting to load xt_TCPOPTSTRIP... ** ** Attempting to unload xt_TCPOPTSTRIP... ** ** Attempting to load xt_TEE... ** ** Attempting to unload xt_TEE... ** ** Attempting to load xt_TPROXY... ** ** Attempting to unload xt_TPROXY... ** ** Attempting to load xt_TRACE... ** ** Attempting to unload xt_TRACE... ** ** Attempting to load xxhash_generic... ** ** Attempting to unload xxhash_generic... ** ** Attempting to load blowfish... ** ** Attempting to unload blowfish... ** ** Attempting to load 8021q... ** [10671.663911] 8021q: 802.1Q VLAN Support v1.8 ** Attempting to unload 8021q... ** ** Attempting to load act_bpf... ** ** Attempting to unload act_bpf... ** ** Attempting to load act_csum... ** ** Attempting to unload act_csum... ** ** Attempting to load act_gact... ** [10676.570464] GACT probability on ** Attempting to unload act_gact... ** ** Attempting to load act_mirred... ** [10678.185269] Mirror/redirect action on ** Attempting to unload act_mirred... ** ** Attempting to load act_pedit... ** ** Attempting to unload act_pedit... ** ** Attempting to load act_police... ** ** Attempting to unload act_police... ** ** Attempting to load act_sample... ** ** Attempting to unload act_sample... ** ** Attempting to load act_skbedit... ** ** Attempting to unload act_skbedit... ** ** Attempting to load act_tunnel_key... ** ** Attempting to unload act_tunnel_key... ** ** Attempting to load act_vlan... ** ** Attempting to unload act_vlan... ** ** Attempting to load adiantum... ** ** Attempting to unload adiantum... ** ** Attempting to load af_key... ** [10691.249923] NET: Registered PF_KEY protocol family ** Attempting to unload af_key... 
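Loading 8021q only registers 802.1Q support; the tagged sub-interfaces themselves are created from userspace. A sketch using ip(8), with the parent device eth0 and VLAN ID 100 as placeholders:

# Placeholder example: create and remove a VLAN sub-interface on top of eth0.
import subprocess

def add_vlan(parent="eth0", vid=100):
    name = f"{parent}.{vid}"
    subprocess.run(["ip", "link", "add", "link", parent,
                    "name", name, "type", "vlan", "id", str(vid)], check=True)
    subprocess.run(["ip", "link", "set", name, "up"], check=True)
    return name

def del_vlan(name):
    subprocess.run(["ip", "link", "delete", name], check=True)

if __name__ == "__main__":
    del_vlan(add_vlan())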
** [10691.777566] NET: Unregistered PF_KEY protocol family ** Attempting to load ah4... ** ** Attempting to unload ah4... ** ** Attempting to load ah6... ** ** Attempting to unload ah6... ** ** Attempting to load ansi_cprng... ** [10696.300959] alg: No test for fips(ansi_cprng) (fips_ansi_cprng) ** Attempting to unload ansi_cprng... ** ** Attempting to load apple_bl... ** ** Attempting to unload apple_bl... ** ** Attempting to load aquantia... ** ** Attempting to unload aquantia... ** ** Attempting to load arc_ps2... ** ** Attempting to unload arc_ps2... ** ** Attempting to load arp_tables... ** [10703.464320] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arp_tables... ** ** Attempting to load arpt_mangle... ** ** Attempting to unload arpt_mangle... ** ** Attempting to load arptable_filter... ** [10706.600276] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arptable_filter... ** ** Attempting to load asym_tpm... ** ** Attempting to unload asym_tpm... ** ** Attempting to load async_memcpy... ** [10709.825544] async_tx: api initialized (async) ** Attempting to unload async_memcpy... ** ** Attempting to load async_pq... ** [10711.483729] raid6: skip pq benchmark and using algorithm sse2x4 [10711.484674] raid6: using ssse3x2 recovery algorithm [10711.499192] async_tx: api initialized (async) ** Attempting to unload async_pq... ** ** Attempting to load async_raid6_recov... ** [10713.216460] raid6: skip pq benchmark and using algorithm sse2x4 [10713.217323] raid6: using ssse3x2 recovery algorithm [10713.231848] async_tx: api initialized (async) ** Attempting to unload async_raid6_recov... ** ** Attempting to load async_tx... ** [10714.969507] async_tx: api initialized (async) ** Attempting to unload async_tx... ** ** Attempting to load async_xor... ** [10716.547691] async_tx: api initialized (async) ** Attempting to unload async_xor... ** ** Attempting to load bareudp... ** ** Attempting to unload bareudp... ** ** Attempting to load blowfish_common... ** ** Attempting to unload blowfish_common... ** ** Attempting to load blowfish_generic... ** ** Attempting to unload blowfish_generic... ** ** Attempting to load bluetooth... ** [10724.437670] Bluetooth: Core ver 2.22 [10724.438524] NET: Registered PF_BLUETOOTH protocol family [10724.438904] Bluetooth: HCI device and connection manager initialized [10724.439735] Bluetooth: HCI socket layer initialized [10724.440812] Bluetooth: L2CAP socket layer initialized [10724.441603] Bluetooth: SCO socket layer initialized ** Attempting to unload bluetooth... ** [10724.969056] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bnep... ** [10726.337676] Bluetooth: Core ver 2.22 [10726.338320] NET: Registered PF_BLUETOOTH protocol family [10726.338739] Bluetooth: HCI device and connection manager initialized [10726.339400] Bluetooth: HCI socket layer initialized [10726.340412] Bluetooth: L2CAP socket layer initialized [10726.340992] Bluetooth: SCO socket layer initialized [10726.356175] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [10726.356514] Bluetooth: BNEP filters: protocol multicast [10726.356875] Bluetooth: BNEP socket layer initialized ** Attempting to unload bnep... ** [10726.870013] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bonding... ** ** Attempting to unload bonding... ** ** Attempting to load br_netfilter... 
** [10730.032489] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. [10730.047913] Bridge firewalling registered ** Attempting to unload br_netfilter... ** ** Attempting to load bridge... ** [10731.984258] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload bridge... ** ** Attempting to load bsd_comp... ** [10733.731586] PPP generic driver version 2.4.2 [10733.747154] PPP BSD Compression module registered ** Attempting to unload bsd_comp... ** ** Attempting to load cachefiles... ** [10735.577523] CacheFiles: Loaded ** Attempting to unload cachefiles... ** [10736.073241] CacheFiles: Unloading ** Attempting to load camellia_generic... ** ** Attempting to unload camellia_generic... ** ** Attempting to load can... ** [10738.817749] can: controller area network core [10738.820076] NET: Registered PF_CAN protocol family ** Attempting to unload can... ** [10739.332997] NET: Unregistered PF_CAN protocol family ** Attempting to load can_bcm... ** [10740.454118] can: controller area network core [10740.456415] NET: Registered PF_CAN protocol family [10740.478434] can: broadcast manager protocol ** Attempting to unload can_bcm... ** [10741.015983] NET: Unregistered PF_CAN protocol family ** Attempting to load can_dev... ** ** Attempting to unload can_dev... ** ** Attempting to load can_gw... ** [10743.678389] can: controller area network core [10743.680582] NET: Registered PF_CAN protocol family [10743.700722] can: netlink gateway - max_hops=1 ** Attempting to unload can_gw... ** [10744.264009] NET: Unregistered PF_CAN protocol family ** Attempting to load can_isotp... ** [10745.355580] can: controller area network core [10745.357775] NET: Registered PF_CAN protocol family [10745.378726] can: isotp protocol ** Attempting to unload can_isotp... ** [10745.963030] NET: Unregistered PF_CAN protocol family ** Attempting to load can_j1939... ** [10747.112668] can: controller area network core [10747.115016] NET: Registered PF_CAN protocol family [10747.154215] can: SAE J1939 ** Attempting to unload can_j1939... ** [10747.726045] NET: Unregistered PF_CAN protocol family ** Attempting to load can_raw... ** [10748.844994] can: controller area network core [10748.847299] NET: Registered PF_CAN protocol family [10748.864606] can: raw protocol ** Attempting to unload can_raw... ** [10749.435059] NET: Unregistered PF_CAN protocol family ** Attempting to load cast5_generic... ** ** Attempting to unload cast5_generic... ** ** Attempting to load cast6_generic... ** ** Attempting to unload cast6_generic... ** ** Attempting to load cdc_acm... ** [10753.829641] usbcore: registered new interface driver cdc_acm [10753.830498] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters ** Attempting to unload cdc_acm... ** [10754.339393] usbcore: deregistering interface driver cdc_acm ** Attempting to load ceph... ** [10755.787316] Key type ceph registered [10755.790660] libceph: loaded (mon/osd proto 15/24) [10756.032401] ceph: loaded (mds proto 32) ** Attempting to unload ceph... ** [10756.619318] Key type ceph unregistered ** Attempting to load chacha20poly1305... ** ** Attempting to unload chacha20poly1305... ** ** Attempting to load cifs... ** [10761.126874] Key type cifs.spnego registered [10761.127294] Key type cifs.idmap registered ** Attempting to unload cifs... 
** [10761.710321] Key type cifs.idmap unregistered [10761.711245] Key type cifs.spnego unregistered ** Attempting to load cls_bpf... ** ** Attempting to unload cls_bpf... ** ** Attempting to load cls_flow... ** ** Attempting to unload cls_flow... ** ** Attempting to load cls_flower... ** ** Attempting to unload cls_flower... ** ** Attempting to load cls_fw... ** ** Attempting to unload cls_fw... ** ** Attempting to load cls_matchall... ** ** Attempting to unload cls_matchall... ** ** Attempting to load cls_u32... ** [10771.833298] u32 classifier [10771.833510] Performance counters on [10771.833749] input device check on [10771.833980] Actions configured ** Attempting to unload cls_u32... ** ** Attempting to load cordic... ** ** Attempting to unload cordic... ** ** Attempting to load cqhci... ** ** Attempting to unload cqhci... ** ** Attempting to load crc_itu_t... ** ** Attempting to unload crc_itu_t... ** ** Attempting to load crc32_generic... ** ** Attempting to unload crc32_generic... ** ** Attempting to load crc7... ** ** Attempting to unload crc7... ** ** Attempting to load crc8... ** ** Attempting to unload crc8... ** ** Attempting to load des_generic... ** ** Attempting to unload des_generic... ** ** Attempting to load diag... ** [10786.644716] tipc: Activated (version 2.0.0) [10786.646460] NET: Registered PF_TIPC protocol family [10786.648139] tipc: Started in single node mode ** Attempting to unload diag... ** [10787.249373] NET: Unregistered PF_TIPC protocol family [10787.421590] tipc: Deactivated ** Attempting to load dm_bio_prison... ** ** Attempting to unload dm_bio_prison... ** ** Attempting to load dm_bufio... ** ** Attempting to unload dm_bufio... ** ** Attempting to load dm_cache_smq... ** ** Attempting to unload dm_cache_smq... ** ** Attempting to load dm_cache... ** ** Attempting to unload dm_cache... ** ** Attempting to load dm_crypt... ** ** Attempting to unload dm_crypt... ** ** Attempting to load dm_delay... ** ** Attempting to unload dm_delay... ** ** Attempting to load dm_era... ** ** Attempting to unload dm_era... ** ** Attempting to load dm_flakey... ** ** Attempting to unload dm_flakey... ** ** Attempting to load dm_integrity... ** [10802.073303] async_tx: api initialized (async) ** Attempting to unload dm_integrity... ** ** Attempting to load dm_io_affinity... ** ** Attempting to unload dm_io_affinity... ** ** Attempting to load dm_log_userspace... ** [10805.488091] device-mapper: dm-log-userspace: version 1.3.0 loaded ** Attempting to unload dm_log_userspace... ** [10806.005727] device-mapper: dm-log-userspace: version 1.3.0 unloaded ** Attempting to load dm_log_writes... ** ** Attempting to unload dm_log_writes... ** ** Attempting to load dm_multipath... ** ** Attempting to unload dm_multipath... ** ** Attempting to load dm_persistent_data... ** ** Attempting to unload dm_persistent_data... ** ** Attempting to load dm_queue_length... ** [10813.366718] device-mapper: multipath queue-length: version 0.2.0 loaded ** Attempting to unload dm_queue_length... ** ** Attempting to load dm_raid... ** [10815.093951] raid6: skip pq benchmark and using algorithm sse2x4 [10815.094772] raid6: using ssse3x2 recovery algorithm [10815.108658] async_tx: api initialized (async) [10815.237135] device-mapper: raid: Loading target version 1.15.1 ** Attempting to unload dm_raid... ** ** Attempting to load dm_round_robin... ** [10817.454483] device-mapper: multipath round-robin: version 1.2.0 loaded ** Attempting to unload dm_round_robin... ** ** Attempting to load dm_service_time... 
** [10819.131145] device-mapper: multipath service-time: version 0.3.0 loaded ** Attempting to unload dm_service_time... ** ** Attempting to load dm_snapshot... ** ** Attempting to unload dm_snapshot... ** ** Attempting to load dm_switch... ** ** Attempting to unload dm_switch... ** ** Attempting to load dm_thin_pool... ** ** Attempting to unload dm_thin_pool... ** ** Attempting to load dm_verity... ** ** Attempting to unload dm_verity... ** [-- MARK -- Fri Feb 3 08:45:00 2023] ** Attempting to load dm_writecache... ** ** Attempting to unload dm_writecache... ** ** Attempting to load dm_zero... ** ** Attempting to unload dm_zero... ** ** Attempting to load dummy... ** ** Attempting to unload dummy... ** ** Attempting to load ebt_802_3... ** ** Attempting to unload ebt_802_3... ** ** Attempting to load ebt_among... ** ** Attempting to unload ebt_among... ** ** Attempting to load ebt_arp... ** ** Attempting to unload ebt_arp... ** ** Attempting to load ebt_arpreply... ** ** Attempting to unload ebt_arpreply... ** ** Attempting to load ebt_dnat... ** ** Attempting to unload ebt_dnat... ** ** Attempting to load ebt_ip... ** ** Attempting to unload ebt_ip... ** ** Attempting to load ebt_ip6... ** ** Attempting to unload ebt_ip6... ** ** Attempting to load ebt_limit... ** ** Attempting to unload ebt_limit... ** ** Attempting to load ebt_log... ** ** Attempting to unload ebt_log... ** ** Attempting to load ebt_mark... ** ** Attempting to unload ebt_mark... ** ** Attempting to load ebt_mark_m... ** ** Attempting to unload ebt_mark_m... ** ** Attempting to load ebt_nflog... ** ** Attempting to unload ebt_nflog... ** ** Attempting to load ebt_pkttype... ** ** Attempting to unload ebt_pkttype... ** ** Attempting to load ebt_redirect... ** ** Attempting to unload ebt_redirect... ** ** Attempting to load ebt_snat... ** ** Attempting to unload ebt_snat... ** ** Attempting to load ebt_stp... ** ** Attempting to unload ebt_stp... ** ** Attempting to load ebt_vlan... ** ** Attempting to unload ebt_vlan... ** ** Attempting to load ebtable_broute... ** [10859.232969] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_broute... ** ** Attempting to load ebtable_filter... ** [10860.925968] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_filter... ** ** Attempting to load ebtable_nat... ** [10862.593680] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_nat... ** ** Attempting to load ebtables... ** [10864.226799] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtables... ** ** Attempting to load echainiv... ** ** Attempting to unload echainiv... ** ** Attempting to load enclosure... ** ** Attempting to unload enclosure... ** ** Attempting to load esp4... ** ** Attempting to unload esp4... ** ** Attempting to load esp4_offload... ** ** Attempting to unload esp4_offload... ** ** Attempting to load esp6... ** ** Attempting to unload esp6... ** ** Attempting to load esp6_offload... ** ** Attempting to unload esp6_offload... ** ** Attempting to load essiv... ** ** Attempting to unload essiv... ** ** Attempting to load failover... ** ** Attempting to unload failover... ** ** Attempting to load faulty... 
** ** Attempting to unload faulty... ** ** Attempting to load fcrypt... ** ** Attempting to unload fcrypt... ** ** Attempting to load geneve... ** ** Attempting to unload geneve... ** ** Attempting to load gfs2... ** [10888.373623] DLM installed [10888.561473] gfs2: GFS2 installed ** Attempting to unload gfs2... ** ** Attempting to load hci_uart... ** [10890.595521] Bluetooth: Core ver 2.22 [10890.596254] NET: Registered PF_BLUETOOTH protocol family [10890.596590] Bluetooth: HCI device and connection manager initialized [10890.597284] Bluetooth: HCI socket layer initialized [10890.598010] Bluetooth: L2CAP socket layer initialized [10890.598547] Bluetooth: SCO socket layer initialized [10890.618658] Bluetooth: HCI UART driver ver 2.3 [10890.619355] Bluetooth: HCI UART protocol H4 registered [10890.619659] Bluetooth: HCI UART protocol BCSP registered [10890.619973] Bluetooth: HCI UART protocol ATH3K registered ** Attempting to unload hci_uart... ** [10891.130398] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hci_vhci... ** [10892.472385] Bluetooth: Core ver 2.22 [10892.473072] NET: Registered PF_BLUETOOTH protocol family [10892.473452] Bluetooth: HCI device and connection manager initialized [10892.474122] Bluetooth: HCI socket layer initialized [10892.475069] Bluetooth: L2CAP socket layer initialized [10892.475697] Bluetooth: SCO socket layer initialized ** Attempting to unload hci_vhci... ** [10892.999504] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hidp... ** [10894.400899] Bluetooth: Core ver 2.22 [10894.401519] NET: Registered PF_BLUETOOTH protocol family [10894.401897] Bluetooth: HCI device and connection manager initialized [10894.402568] Bluetooth: HCI socket layer initialized [10894.403753] Bluetooth: L2CAP socket layer initialized [10894.404489] Bluetooth: SCO socket layer initialized [10894.421759] Bluetooth: HIDP (Human Interface Emulation) ver 1.2 [10894.422583] Bluetooth: HIDP socket layer initialized ** Attempting to unload hidp... ** [10894.962454] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load iavf... ** [10896.662451] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver [10896.663277] Copyright (c) 2013 - 2018 Intel Corporation. ** Attempting to unload iavf... ** ** Attempting to load ib_cm... ** ** Attempting to unload ib_cm... ** ** Attempting to load ib_core... ** ** Attempting to unload ib_core... ** ** Attempting to load ib_iser... ** [10903.061119] Loading iSCSI transport class v2.0-870. [10903.127771] iscsi: registered transport (iser) ** Attempting to unload ib_iser... ** ** Attempting to load ib_isert... ** [10905.963873] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10905.966069] db_root: cannot open: /etc/target ** Attempting to unload ib_isert... ** ** Attempting to load ib_srp... ** ** Attempting to unload ib_srp... ** ** Attempting to load ib_srpt... ** [10911.624852] Rounding down aligned max_sectors from 4294967295 to 4294967288 [10911.626598] db_root: cannot open: /etc/target ** Attempting to unload ib_srpt... ** ** Attempting to load ib_umad... ** ** Attempting to unload ib_umad... ** ** Attempting to load ib_uverbs... ** ** Attempting to unload ib_uverbs... ** ** Attempting to load ieee802154_6lowpan... ** ** Attempting to unload ieee802154_6lowpan... ** ** Attempting to load ieee802154_socket... ** [10920.173109] NET: Registered PF_IEEE802154 protocol family ** Attempting to unload ieee802154_socket... 
** [10920.688544] NET: Unregistered PF_IEEE802154 protocol family ** Attempting to load ifb... ** ** Attempting to unload ifb... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ip6_gre... ** [10926.906234] gre: GRE over IPv4 demultiplexor driver [10926.934398] ip6_gre: GRE over IPv6 tunneling driver ** Attempting to unload ip6_gre... ** ** Attempting to load ip6_tables... ** [10928.732390] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6_tables... ** ** Attempting to load ip6_tunnel... ** ** Attempting to unload ip6_tunnel... ** ** Attempting to load ip6_udp_tunnel... ** ** Attempting to unload ip6_udp_tunnel... ** ** Attempting to load ip6_vti... ** ** Attempting to unload ip6_vti... ** ** Attempting to load ip6t_NPT... ** ** Attempting to unload ip6t_NPT... ** ** Attempting to load ip6t_REJECT... ** ** Attempting to unload ip6t_REJECT... ** ** Attempting to load ip6t_SYNPROXY... ** ** Attempting to unload ip6t_SYNPROXY... ** ** Attempting to load ip6t_ah... ** ** Attempting to unload ip6t_ah... ** ** Attempting to load ip6t_eui64... ** ** Attempting to unload ip6t_eui64... ** ** Attempting to load ip6t_frag... ** ** Attempting to unload ip6t_frag... ** ** Attempting to load ip6t_hbh... ** ** Attempting to unload ip6t_hbh... ** ** Attempting to load ip6t_ipv6header... ** ** Attempting to unload ip6t_ipv6header... ** ** Attempting to load ip6t_mh... ** ** Attempting to unload ip6t_mh... ** ** Attempting to load ip6t_rpfilter... ** ** Attempting to unload ip6t_rpfilter... ** ** Attempting to load ip6t_rt... ** ** Attempting to unload ip6t_rt... ** ** Attempting to load ip6table_filter... ** [10952.983870] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_filter... ** ** Attempting to load ip6table_mangle... ** [10954.686751] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_mangle... ** ** Attempting to load ip6table_nat... ** [10956.633334] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_nat... ** ** Attempting to load ip6table_raw... ** [10959.575088] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_raw... ** ** Attempting to load ip6table_security... ** [10961.214261] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_security... ** ** Attempting to load ip_gre... ** [10962.851826] gre: GRE over IPv4 demultiplexor driver [10962.916972] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload ip_gre... ** ** Attempting to load ipip... ** [10964.713894] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ip_set... ** ** Attempting to unload ip_set... ** ** Attempting to load ip_set_bitmap_ip... ** ** Attempting to unload ip_set_bitmap_ip... ** ** Attempting to load ip_set_bitmap_ipmac... ** ** Attempting to unload ip_set_bitmap_ipmac... ** ** Attempting to load ip_set_bitmap_port... 
** ** Attempting to unload ip_set_bitmap_port... ** ** Attempting to load ip_set_hash_ip... ** ** Attempting to unload ip_set_hash_ip... ** ** Attempting to load ip_set_hash_ipmac... ** ** Attempting to unload ip_set_hash_ipmac... ** ** Attempting to load ip_set_hash_ipmark... ** ** Attempting to unload ip_set_hash_ipmark... ** ** Attempting to load ip_set_hash_ipport... ** ** Attempting to unload ip_set_hash_ipport... ** ** Attempting to load ip_set_hash_ipportip... ** ** Attempting to unload ip_set_hash_ipportip... ** ** Attempting to load ip_set_hash_ipportnet... ** ** Attempting to unload ip_set_hash_ipportnet... ** ** Attempting to load ip_set_hash_mac... ** ** Attempting to unload ip_set_hash_mac... ** ** Attempting to load ip_set_hash_net... ** ** Attempting to unload ip_set_hash_net... ** ** Attempting to load ip_set_hash_netiface... ** ** Attempting to unload ip_set_hash_netiface... ** ** Attempting to load ip_set_hash_netnet... ** ** Attempting to unload ip_set_hash_netnet... ** ** Attempting to load ip_set_hash_netport... ** ** Attempting to unload ip_set_hash_netport... ** ** Attempting to load ip_set_hash_netportnet... ** ** Attempting to unload ip_set_hash_netportnet... ** ** Attempting to load ip_set_list_set... ** ** Attempting to unload ip_set_list_set... ** ** Attempting to load ip_tables... ** [10996.223262] Warning: Deprecated Driver is detected: iptables will not be maintained in a future major release and may be disabled ** Attempting to unload ip_tables... ** ** Attempting to load ip_tunnel... ** ** Attempting to unload ip_tunnel... ** ** Attempting to load ip_vs... ** [10999.804253] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [10999.806447] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [10999.807472] IPVS: Each connection entry needs 416 bytes at least [10999.809549] IPVS: ipvs loaded. ** Attempting to unload ip_vs... ** [11000.352947] IPVS: ipvs unloaded. ** Attempting to load ip_vs_dh... ** [11001.963000] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11001.965072] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11001.966207] IPVS: Each connection entry needs 416 bytes at least [11001.968413] IPVS: ipvs loaded. [11001.985659] IPVS: [dh] scheduler registered. ** Attempting to unload ip_vs_dh... ** [11002.521042] IPVS: [dh] scheduler unregistered. [11002.576039] IPVS: ipvs unloaded. ** Attempting to load ip_vs_fo... ** [11004.214940] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11004.217158] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11004.218027] IPVS: Each connection entry needs 416 bytes at least [11004.220026] IPVS: ipvs loaded. [11004.233716] IPVS: [fo] scheduler registered. ** Attempting to unload ip_vs_fo... ** [11004.777487] IPVS: [fo] scheduler unregistered. [11004.848331] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ftp... ** [11006.515185] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11006.517401] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11006.518306] IPVS: Each connection entry needs 416 bytes at least [11006.520231] IPVS: ipvs loaded. ** Attempting to unload ip_vs_ftp... ** [11008.200847] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblc... ** [11009.846235] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11009.848447] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11009.849396] IPVS: Each connection entry needs 416 bytes at least [11009.851355] IPVS: ipvs loaded. 
[11009.869609] IPVS: [lblc] scheduler registered. ** Attempting to unload ip_vs_lblc... ** [11010.410098] IPVS: [lblc] scheduler unregistered. [11010.475982] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblcr... ** [11012.091535] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11012.093653] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11012.094577] IPVS: Each connection entry needs 416 bytes at least [11012.096730] IPVS: ipvs loaded. [11012.116023] IPVS: [lblcr] scheduler registered. ** Attempting to unload ip_vs_lblcr... ** [11012.653112] IPVS: [lblcr] scheduler unregistered. [11012.707004] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lc... ** [11014.249013] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11014.251111] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11014.252306] IPVS: Each connection entry needs 416 bytes at least [11014.254359] IPVS: ipvs loaded. [11014.270643] IPVS: [lc] scheduler registered. ** Attempting to unload ip_vs_lc... ** [11014.802161] IPVS: [lc] scheduler unregistered. [11014.871054] IPVS: ipvs unloaded. ** Attempting to load ip_vs_nq... ** [11016.480519] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11016.482709] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11016.483759] IPVS: Each connection entry needs 416 bytes at least [11016.485840] IPVS: ipvs loaded. [11016.502201] IPVS: [nq] scheduler registered. ** Attempting to unload ip_vs_nq... ** [11017.028444] IPVS: [nq] scheduler unregistered. [11017.079702] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ovf... ** [11018.710021] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11018.712103] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11018.713073] IPVS: Each connection entry needs 416 bytes at least [11018.715072] IPVS: ipvs loaded. [11018.729455] IPVS: [ovf] scheduler registered. ** Attempting to unload ip_vs_ovf... ** [11019.233956] IPVS: [ovf] scheduler unregistered. [11019.285526] IPVS: ipvs unloaded. ** Attempting to load ip_vs_pe_sip... ** [11020.963222] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11020.965460] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11020.966428] IPVS: Each connection entry needs 416 bytes at least [11020.968369] IPVS: ipvs loaded. [11020.984372] IPVS: [sip] pe registered. ** Attempting to unload ip_vs_pe_sip... ** [11021.522167] IPVS: [sip] pe unregistered. [11025.796808] IPVS: ipvs unloaded. ** Attempting to load ip_vs_rr... ** [11027.499113] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11027.501362] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11027.502478] IPVS: Each connection entry needs 416 bytes at least [11027.504766] IPVS: ipvs loaded. [11027.522365] IPVS: [rr] scheduler registered. ** Attempting to unload ip_vs_rr... ** [11028.050972] IPVS: [rr] scheduler unregistered. [11028.107173] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sed... ** [11029.706005] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11029.708152] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11029.709092] IPVS: Each connection entry needs 416 bytes at least [11029.711296] IPVS: ipvs loaded. [11029.725581] IPVS: [sed] scheduler registered. ** Attempting to unload ip_vs_sed... ** [11030.223827] IPVS: [sed] scheduler unregistered. [11030.283634] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sh... 
** [11031.935889] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11031.937947] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11031.938998] IPVS: Each connection entry needs 416 bytes at least [11031.941080] IPVS: ipvs loaded. [11031.957218] IPVS: [sh] scheduler registered. ** Attempting to unload ip_vs_sh... ** [11032.469558] IPVS: [sh] scheduler unregistered. [11032.540514] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wlc... ** [11034.179325] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11034.181488] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11034.182337] IPVS: Each connection entry needs 416 bytes at least [11034.184616] IPVS: ipvs loaded. [11034.199263] IPVS: [wlc] scheduler registered. ** Attempting to unload ip_vs_wlc... ** [11034.710078] IPVS: [wlc] scheduler unregistered. [11034.765179] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wrr... ** [11036.359027] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11036.361123] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11036.362006] IPVS: Each connection entry needs 416 bytes at least [11036.364080] IPVS: ipvs loaded. [11036.380056] IPVS: [wrr] scheduler registered. ** Attempting to unload ip_vs_wrr... ** [11036.910588] IPVS: [wrr] scheduler unregistered. [11036.966329] IPVS: ipvs unloaded. ** Attempting to load ip_vti... ** [11038.255935] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ipip... ** [11043.396160] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ipvlan... ** ** Attempting to unload ipvlan... ** ** Attempting to load ipvtap... ** ** Attempting to unload ipvtap... ** ** Attempting to load ip_vti... ** [11048.472234] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load isofs... ** ** Attempting to unload isofs... ** [11050.810483] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load iw_cm... ** ** Attempting to unload iw_cm... ** ** Attempting to load kheaders... ** ** Attempting to unload kheaders... ** ** Attempting to load kmem... ** ** Attempting to unload kmem... ** ** Attempting to load linear... ** ** Attempting to unload linear... ** ** Attempting to load llc... ** ** Attempting to unload llc... ** ** Attempting to load lrw... ** ** Attempting to unload lrw... ** ** Attempting to load lz4_compress... ** ** Attempting to unload lz4_compress... ** ** Attempting to load mac_celtic... ** ** Attempting to unload mac_celtic... ** ** Attempting to load mac_centeuro... ** ** Attempting to unload mac_centeuro... ** ** Attempting to load mac_croatian... ** ** Attempting to unload mac_croatian... ** ** Attempting to load mac_cyrillic... ** ** Attempting to unload mac_cyrillic... ** ** Attempting to load mac_gaelic... ** ** Attempting to unload mac_gaelic... ** ** Attempting to load mac_greek... ** ** Attempting to unload mac_greek... ** ** Attempting to load mac_iceland... ** ** Attempting to unload mac_iceland... ** ** Attempting to load mac_inuit... ** ** Attempting to unload mac_inuit... ** ** Attempting to load mac_roman... ** ** Attempting to unload mac_roman... ** ** Attempting to load mac_romanian... ** ** Attempting to unload mac_romanian... ** ** Attempting to load mac_turkish... ** ** Attempting to unload mac_turkish... 
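The "** Attempting to load <module>... **" / "** Attempting to unload <module>... **" markers in this log read like the output of a harness that walks a list of module names and wraps each one in modprobe / modprobe -r on the console. Below is a minimal sketch of such a loop, assuming Python and a hand-picked subset of the module names seen above; the function name, structure and error handling are guesses, not the actual test script.

import subprocess

def try_module(name: str) -> None:
    # Echo the same style of marker the log shows, then load and unload the module.
    print(f"** Attempting to load {name}... **", flush=True)
    subprocess.run(["modprobe", name], check=False)        # ignore load failures
    print(f"** Attempting to unload {name}... **", flush=True)
    subprocess.run(["modprobe", "-r", name], check=False)  # ignore unload failures

# Hypothetical subset of the module list exercised in this log.
for module in ["8021q", "bluetooth", "can_raw", "dm_raid"]:
    try_module(module)

Run as root against the full list, the marker lines would come from the harness itself, while the bracketed [seconds] lines interleaved with them are printk output emitted by the modules as they register and unregister.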
** ** Attempting to load macsec... ** [11082.641512] MACsec IEEE 802.1AE ** Attempting to unload macsec... ** ** Attempting to load macvlan... ** ** Attempting to unload macvlan... ** ** Attempting to load macvtap... ** ** Attempting to unload macvtap... ** ** Attempting to load md4... ** ** Attempting to unload md4... ** ** Attempting to load michael_mic... ** ** Attempting to unload michael_mic... ** ** Attempting to load mip6... ** [11091.722655] mip6: Mobile IPv6 ** Attempting to unload mip6... ** ** Attempting to load mpt3sas... ** [11093.988795] mpt3sas version 43.100.00.00 loaded ** Attempting to unload mpt3sas... ** [11094.476371] mpt3sas version 43.100.00.00 unloading ** Attempting to load msdos... ** ** Attempting to unload msdos... ** ** Attempting to load mtd... ** ** Attempting to unload mtd... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nd_blk... ** ** Attempting to unload nd_blk... ** ** Attempting to load nd_btt... ** ** Attempting to unload nd_btt... ** ** Attempting to load nd_pmem... ** ** Attempting to unload nd_pmem... ** ** Attempting to load net_failover... ** ** Attempting to unload net_failover... ** ** Attempting to load netconsole... ** [11107.722194] printk: console [netcon0] enabled [11107.722872] netconsole: network logging started ** Attempting to unload netconsole... ** [11108.211794] printk: console [netcon_ext0] disabled [11108.212928] printk: console [netcon0] disabled ** Attempting to load nf_conncount... ** ** Attempting to unload nf_conncount... ** ** Attempting to load nf_conntrack... ** ** Attempting to unload nf_conntrack... ** ** Attempting to load nf_conntrack_amanda... ** ** Attempting to unload nf_conntrack_amanda... ** ** Attempting to load nf_conntrack_bridge... ** [11117.916256] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nf_conntrack_bridge... ** ** Attempting to load nf_conntrack_broadcast... ** ** Attempting to unload nf_conntrack_broadcast... ** ** Attempting to load nf_conntrack_ftp... ** ** Attempting to unload nf_conntrack_ftp... ** ** Attempting to load nf_conntrack_h323... ** [-- MARK -- Fri Feb 3 08:50:00 2023] ** Attempting to unload nf_conntrack_h323... ** ** Attempting to load nf_conntrack_irc... ** ** Attempting to unload nf_conntrack_irc... ** ** Attempting to load nf_conntrack_netbios_ns... ** ** Attempting to unload nf_conntrack_netbios_ns... ** ** Attempting to load nf_conntrack_netlink... ** ** Attempting to unload nf_conntrack_netlink... ** ** Attempting to load nf_conntrack_pptp... ** ** Attempting to unload nf_conntrack_pptp... ** ** Attempting to load nf_conntrack_sane... ** ** Attempting to unload nf_conntrack_sane... ** ** Attempting to load nf_conntrack_sip... ** ** Attempting to unload nf_conntrack_sip... ** ** Attempting to load nf_conntrack_snmp... ** ** Attempting to unload nf_conntrack_snmp... ** ** Attempting to load nf_conntrack_tftp... ** ** Attempting to unload nf_conntrack_tftp... ** ** Attempting to load nf_defrag_ipv4... ** ** Attempting to unload nf_defrag_ipv4... ** ** Attempting to load nf_defrag_ipv6... ** ** Attempting to unload nf_defrag_ipv6... ** ** Attempting to load nf_dup_ipv4... ** ** Attempting to unload nf_dup_ipv4... ** ** Attempting to load nf_dup_ipv6... ** ** Attempting to unload nf_dup_ipv6... ** ** Attempting to load nf_dup_netdev... ** ** Attempting to unload nf_dup_netdev... ** ** Attempting to load nf_log_arp... 
** ** Attempting to unload nf_log_arp... ** ** Attempting to load nf_log_bridge... ** ** Attempting to unload nf_log_bridge... ** ** Attempting to load nf_log_ipv4... ** ** Attempting to unload nf_log_ipv4... ** ** Attempting to load nf_log_ipv6... ** ** Attempting to unload nf_log_ipv6... ** ** Attempting to load nf_log_netdev... ** ** Attempting to unload nf_log_netdev... ** ** Attempting to load nf_log_syslog... ** ** Attempting to unload nf_log_syslog... ** ** Attempting to load nf_nat... ** ** Attempting to unload nf_nat... ** ** Attempting to load nf_nat_amanda... ** ** Attempting to unload nf_nat_amanda... ** ** Attempting to load nf_nat_ftp... ** ** Attempting to unload nf_nat_ftp... ** ** Attempting to load nf_nat_h323... ** ** Attempting to unload nf_nat_h323... ** ** Attempting to load nf_nat_irc... ** ** Attempting to unload nf_nat_irc... ** ** Attempting to load nf_nat_pptp... ** ** Attempting to unload nf_nat_pptp... ** ** Attempting to load nf_nat_sip... ** ** Attempting to unload nf_nat_sip... ** ** Attempting to load nf_nat_snmp_basic... ** ** Attempting to unload nf_nat_snmp_basic... ** ** Attempting to load nf_nat_tftp... ** ** Attempting to unload nf_nat_tftp... ** ** Attempting to load nf_reject_ipv4... ** ** Attempting to unload nf_reject_ipv4... ** ** Attempting to load nf_reject_ipv6... ** ** Attempting to unload nf_reject_ipv6... ** ** Attempting to load nf_socket_ipv4... ** ** Attempting to unload nf_socket_ipv4... ** ** Attempting to load nf_socket_ipv6... ** ** Attempting to unload nf_socket_ipv6... ** ** Attempting to load nf_synproxy_core... ** ** Attempting to unload nf_synproxy_core... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nf_tproxy_ipv4... ** ** Attempting to unload nf_tproxy_ipv4... ** ** Attempting to load nf_tproxy_ipv6... ** ** Attempting to unload nf_tproxy_ipv6... ** ** Attempting to load nfnetlink... ** ** Attempting to unload nfnetlink... ** ** Attempting to load nfnetlink_cthelper... ** ** Attempting to unload nfnetlink_cthelper... ** ** Attempting to load nfnetlink_cttimeout... ** ** Attempting to unload nfnetlink_cttimeout... ** ** Attempting to load nfnetlink_log... ** ** Attempting to unload nfnetlink_log... ** ** Attempting to load nfnetlink_osf... ** ** Attempting to unload nfnetlink_osf... ** ** Attempting to load nfnetlink_queue... ** ** Attempting to unload nfnetlink_queue... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nft_chain_nat... ** ** Attempting to unload nft_chain_nat... ** ** Attempting to load nft_compat... ** [11262.704071] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled ** Attempting to unload nft_compat... ** ** Attempting to load nft_connlimit... ** ** Attempting to unload nft_connlimit... ** ** Attempting to load nft_counter... ** ** Attempting to unload nft_counter... ** ** Attempting to load nft_ct... ** ** Attempting to unload nft_ct... ** ** Attempting to load nft_dup_ipv4... ** ** Attempting to unload nft_dup_ipv4... ** ** Attempting to load nft_dup_ipv6... ** ** Attempting to unload nft_dup_ipv6... ** ** Attempting to load nft_dup_netdev... ** ** Attempting to unload nft_dup_netdev... ** ** Attempting to load nft_fib... ** ** Attempting to unload nft_fib... ** ** Attempting to load nft_fib_inet... ** ** Attempting to unload nft_fib_inet... ** ** Attempting to load nft_fib_ipv4... ** ** Attempting to unload nft_fib_ipv4... 
** ** Attempting to load nft_fib_ipv6... ** ** Attempting to unload nft_fib_ipv6... ** ** Attempting to load nft_fib_netdev... ** ** Attempting to unload nft_fib_netdev... ** ** Attempting to load nft_fwd_netdev... ** ** Attempting to unload nft_fwd_netdev... ** ** Attempting to load nft_hash... ** ** Attempting to unload nft_hash... ** ** Attempting to load nft_limit... ** ** Attempting to unload nft_limit... ** ** Attempting to load nft_log... ** ** Attempting to unload nft_log... ** ** Attempting to load nft_masq... ** ** Attempting to unload nft_masq... ** ** Attempting to load nft_meta_bridge... ** [11297.112496] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_meta_bridge... ** ** Attempting to load nft_nat... ** ** Attempting to unload nft_nat... ** ** Attempting to load nft_numgen... ** ** Attempting to unload nft_numgen... ** ** Attempting to load nft_objref... ** ** Attempting to unload nft_objref... ** ** Attempting to load nft_osf... ** ** Attempting to unload nft_osf... ** ** Attempting to load nft_queue... ** ** Attempting to unload nft_queue... ** ** Attempting to load nft_quota... ** ** Attempting to unload nft_quota... ** ** Attempting to load nft_redir... ** ** Attempting to unload nft_redir... ** ** Attempting to load nft_reject... ** ** Attempting to unload nft_reject... ** ** Attempting to load nft_reject_bridge... ** [11317.794488] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_reject_bridge... ** ** Attempting to load nft_reject_inet... ** ** Attempting to unload nft_reject_inet... ** ** Attempting to load nft_reject_ipv4... ** ** Attempting to unload nft_reject_ipv4... ** ** Attempting to load nft_reject_ipv6... ** ** Attempting to unload nft_reject_ipv6... ** ** Attempting to load nft_reject_netdev... ** ** Attempting to unload nft_reject_netdev... ** ** Attempting to load nft_socket... ** ** Attempting to unload nft_socket... ** ** Attempting to load nft_tproxy... ** ** Attempting to unload nft_tproxy... ** ** Attempting to load nft_tunnel... ** ** Attempting to unload nft_tunnel... ** ** Attempting to load nft_xfrm... ** ** Attempting to unload nft_xfrm... ** ** Attempting to load nhpoly1305... ** ** Attempting to unload nhpoly1305... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nlmon... ** ** Attempting to unload nlmon... ** ** Attempting to load nls_cp1250... ** ** Attempting to unload nls_cp1250... ** ** Attempting to load nls_cp1251... ** ** Attempting to unload nls_cp1251... ** ** Attempting to load nls_cp1255... ** ** Attempting to unload nls_cp1255... ** ** Attempting to load nls_cp737... ** ** Attempting to unload nls_cp737... ** ** Attempting to load nls_cp775... ** ** Attempting to unload nls_cp775... ** ** Attempting to load nls_cp850... ** ** Attempting to unload nls_cp850... ** ** Attempting to load nls_cp852... ** ** Attempting to unload nls_cp852... ** ** Attempting to load nls_cp855... ** ** Attempting to unload nls_cp855... ** ** Attempting to load nls_cp857... ** ** Attempting to unload nls_cp857... ** ** Attempting to load nls_cp860... ** ** Attempting to unload nls_cp860... ** ** Attempting to load nls_cp861... ** ** Attempting to unload nls_cp861... ** ** Attempting to load nls_cp862... ** ** Attempting to unload nls_cp862... ** ** Attempting to load nls_cp863... 
** ** Attempting to unload nls_cp863... ** ** Attempting to load nls_cp864... ** ** Attempting to unload nls_cp864... ** ** Attempting to load nls_cp865... ** ** Attempting to unload nls_cp865... ** ** Attempting to load nls_cp866... ** ** Attempting to unload nls_cp866... ** ** Attempting to load nls_cp869... ** ** Attempting to unload nls_cp869... ** ** Attempting to load nls_cp874... ** ** Attempting to unload nls_cp874... ** ** Attempting to load nls_cp936... ** ** Attempting to unload nls_cp936... ** ** Attempting to load nls_cp949... ** ** Attempting to unload nls_cp949... ** ** Attempting to load nls_cp950... ** ** Attempting to unload nls_cp950... ** ** Attempting to load nls_euc_jp... ** ** Attempting to unload nls_euc_jp... ** ** Attempting to load nls_iso8859_1... ** ** Attempting to unload nls_iso8859_1... ** ** Attempting to load nls_iso8859_13... ** ** Attempting to unload nls_iso8859_13... ** ** Attempting to load nls_iso8859_14... ** ** Attempting to unload nls_iso8859_14... ** ** Attempting to load nls_iso8859_15... ** ** Attempting to unload nls_iso8859_15... ** ** Attempting to load nls_iso8859_2... ** ** Attempting to unload nls_iso8859_2... ** ** Attempting to load nls_iso8859_3... ** ** Attempting to unload nls_iso8859_3... ** ** Attempting to load nls_iso8859_4... ** ** Attempting to unload nls_iso8859_4... ** ** Attempting to load nls_iso8859_5... ** ** Attempting to unload nls_iso8859_5... ** ** Attempting to load nls_iso8859_6... ** ** Attempting to unload nls_iso8859_6... ** ** Attempting to load nls_iso8859_7... ** ** Attempting to unload nls_iso8859_7... ** ** Attempting to load nls_iso8859_9... ** ** Attempting to unload nls_iso8859_9... ** ** Attempting to load nls_koi8_r... ** ** Attempting to unload nls_koi8_r... ** ** Attempting to load nls_koi8_ru... ** ** Attempting to unload nls_koi8_ru... ** ** Attempting to load null_blk... ** [11396.068365] null_blk: disk nullb0 created [11396.068697] null_blk: module loaded ** Attempting to unload null_blk... ** ** Attempting to load nvme_loop... ** ** Attempting to unload nvme_loop... ** ** Attempting to load nvmet_fc... ** ** Attempting to unload nvmet_fc... ** ** Attempting to load nvmet_rdma... ** ** Attempting to unload nvmet_rdma... ** ** Attempting to load nvmet_tcp... ** [11404.696759] Warning: Unmaintained driver is detected: NVMe/TCP Target ** Attempting to unload nvmet_tcp... ** ** Attempting to load objagg... ** ** Attempting to unload objagg... ** ** Attempting to load openvswitch... ** [11408.581248] openvswitch: Open vSwitch switching datapath ** Attempting to unload openvswitch... ** ** Attempting to load parman... ** ** Attempting to unload parman... ** ** Attempting to load pcbc... ** ** Attempting to unload pcbc... ** ** Attempting to load pcrypt... ** ** Attempting to unload pcrypt... ** ** Attempting to load pkcs8_key_parser... ** [11417.589450] Asymmetric key parser 'pkcs8' registered ** Attempting to unload pkcs8_key_parser... ** [11418.092730] Asymmetric key parser 'pkcs8' unregistered ** Attempting to load poly1305_generic... ** ** Attempting to unload poly1305_generic... ** ** Attempting to load ppdev... ** [11420.813628] ppdev: user-space parallel port driver ** Attempting to unload ppdev... ** ** Attempting to load ppp_async... ** [11422.472871] PPP generic driver version 2.4.2 ** Attempting to unload ppp_async... ** ** Attempting to load ppp_deflate... 
** [11424.133828] PPP generic driver version 2.4.2 [11424.148868] PPP Deflate Compression module registered ** Attempting to unload ppp_deflate... ** ** Attempting to load ppp_generic... ** [11425.861985] PPP generic driver version 2.4.2 [-- MARK -- Fri Feb 3 08:55:00 2023] ** Attempting to unload ppp_generic... ** ** Attempting to load ppp_synctty... ** [11427.494145] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** ** Attempting to load pppoe... ** [11429.141539] PPP generic driver version 2.4.2 [11429.156333] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppoe... ** [11429.758942] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pppox... ** [11430.889037] PPP generic driver version 2.4.2 [11430.904302] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppox... ** [11431.444189] NET: Unregistered PF_PPPOX protocol family ** Attempting to load ppp_synctty... ** [11432.601220] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** ** Attempting to load pps_gpio... ** ** Attempting to unload pps_gpio... ** ** Attempting to load pps_ldisc... ** [11435.797995] pps_ldisc: PPS line discipline registered ** Attempting to unload pps_ldisc... ** ** Attempting to load pptp... ** [11437.378500] PPP generic driver version 2.4.2 [11437.393148] NET: Registered PF_PPPOX protocol family [11437.406694] gre: GRE over IPv4 demultiplexor driver [11437.421425] PPTP driver version 0.8.5 ** Attempting to unload pptp... ** [11437.979058] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pwc... ** [11439.139470] mc: Linux media interface: v0.10 [11439.285976] videodev: Linux video capture interface: v2.00 [11439.422938] usbcore: registered new interface driver Philips webcam ** Attempting to unload pwc... ** [11439.958547] usbcore: deregistering interface driver Philips webcam ** Attempting to load psample... ** ** Attempting to unload psample... ** ** Attempting to load raid0... ** ** Attempting to unload raid0... ** ** Attempting to load raid1... ** ** Attempting to unload raid1... ** ** Attempting to load raid10... ** ** Attempting to unload raid10... ** ** Attempting to load raid456... ** [11447.696632] raid6: skip pq benchmark and using algorithm sse2x4 [11447.697487] raid6: using ssse3x2 recovery algorithm [11447.711809] async_tx: api initialized (async) ** Attempting to unload raid456... ** ** Attempting to load raid6_pq... ** [11449.623198] raid6: skip pq benchmark and using algorithm sse2x4 [11449.624115] raid6: using ssse3x2 recovery algorithm ** Attempting to unload raid6_pq... ** ** Attempting to load raid6test... ** [11451.239916] raid6: skip pq benchmark and using algorithm sse2x4 [11451.240755] raid6: using ssse3x2 recovery algorithm [11451.254635] async_tx: api initialized (async) [11451.314192] raid6test: testing the 4-disk case... [11451.314897] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11451.315408] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(P) OK [11451.315834] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(Q) OK [11451.316306] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(P) OK [11451.317405] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(Q) OK [11451.317809] raid6test: test_disks(2, 3): faila= 2(P) failb= 3(Q) OK [11451.318316] raid6test: testing the 5-disk case... 
[11451.319079] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11451.319526] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [11451.319933] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(P) OK [11451.320385] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(Q) OK [11451.320808] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [11451.321262] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(P) OK [11451.321704] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(Q) OK [11451.322155] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(P) OK [11451.322616] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(Q) OK [11451.323075] raid6test: test_disks(3, 4): faila= 3(P) failb= 4(Q) OK [11451.323599] raid6test: testing the 11-disk case... [11451.324367] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11451.324789] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [11451.325254] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [11451.325666] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [11451.326132] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [11451.326549] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [11451.327001] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [11451.327417] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [11451.327825] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(P) OK [11451.328590] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(Q) OK [11451.329110] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [11451.329535] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [11451.329974] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [11451.330390] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [11451.330838] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [11451.331303] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) aid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [11451.432105] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(P) OK [11451.432511] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(Q) OK [11451.432904] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [11451.433358] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [11451.433783] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [11451.434247] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [11451.434652] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [11= 10(Q) OK [11451.935503] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [11451.935991] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [11451.936426] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [11451.936866] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [11451.937428] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [11451.937878] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(P) OK [11451.938309] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(Q) OK [11451.938806] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [11451.939234] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [11451.939654] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [11451.940122] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [11451.940579] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(P) OK [11451.941016] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(Q) OK [11451.941440] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [11451.941845] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [11451.942349] 
raid6test: test_disks(5, 8): faila= 5(D) failb= 8( 7(D) OK [11452.443047] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [11452.443479] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(P) OK [11452.443890] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(Q) OK [11452.444334] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [11452.444736] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(P) OK [11452.445177] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(Q) OK [11452.445598] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(P) OK [11452.446030] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(Q) OK [11452.446434] raid6test: test_disks(9, 10): faila= 9(P) failb= 10(Q) OK [11452.446985] raid6test: testing the 12-disk case... [11452.447774] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11452.448249] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [11452.448666] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [11452.449119] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [11452.449521] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [11452.449929] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [11452.450386] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [11452.450790] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [11452.451254] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [11452.451664] raid6test: test_disks(0, 10): faila= 1(D) failb= 3(D) OK [11452.952409] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [11452.952817] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [11452.953312] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [11452.953772] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [11452.954229] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [11452.954666] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [11452.955167] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(P) OK [11452.955599] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(Q) OK [11452.956057] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [11452.956489] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [11452.956918] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [11452.957415] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [11452.957838] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [11452.958332] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [11452.958766] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [11452.959229] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(P) OK [11452.959682] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(Q) OK [11452.960169] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [11452.960595] raid6test: test_disks(3,isks(3, 8): faila= 3(D) failb= 8(D) OK [11453.461335] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [11453.461782] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(P) OK [11453.462293] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(Q) OK [11453.462695] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [11453.463180] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [11453.463633] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [11453.464092] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [11453.464526] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [11453.465011] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(P) OK [11453.465415] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(Q) OK 
[11453.465848] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [11453.466353] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [11453.466785] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [11453.467301] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [11453.467754] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(P) OK [11453.468194] raid6test: test_disks(5, 11): faila= 5(D) a= 6(D) failb= 9(D) OK [11453.969148] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(P) OK [11453.969600] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(Q) OK [11453.970080] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [11453.970515] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [11453.970954] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(P) OK [11453.971443] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(Q) OK [11453.971871] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [11453.972362] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(P) OK [11453.972794] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(Q) OK [11453.973283] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(P) OK [11453.973724] raid6test: test_disks(9, 11): faila= 9(D) failb= 11(Q) OK [11453.974179] raid6test: test_disks(10, 11): faila= 10(P) failb= 11(Q) OK [11453.974871] raid6test: testing the 24-disk case... [11453.975626] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11453.976124] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [11453.976573] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [11453.977056] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [11454.477770] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [11454.478248] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [11454.478665] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [11454.479142] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [11454.479555] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [11454.479964] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [11454.480427] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK [11454.480884] raid6test: test_disks(0, 15): faila= 0(D) failb= 15(D) OK [11454.481358] raid6test: test_disks(0, 16): faila= 0(D) failb= 16(D) OK [11454.481773] raid6test: test_disks(0, 17): faila= 0(D) failb= 17(D) OK [11454.482242] raid6test: test_disks(0, 18): faila= 0(D) failb= 18(D) OK [11454.482660] raid6test: test_disks(0, 19): faila= 0(D) failb= 19(D) OK [11454.483105] raid6test: test_disks(0, 20): faila= 0(D) failb= 20(D) OK [11454.483524] raid6test: test_disks(0, 21): faila= 0(D) failb= 21(D) OK [11454.484049] raid6test: test_disks(0, 22): faila= 0(D) failb= 22(P) OK [11454.484525] raid6test: test_disks(0, 23): faila= 0(D) failb= 4(D) OK [11454.985598] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [11454.986159] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [11454.986610] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [11454.987329] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [11454.987839] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [11454.988425] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(D) OK [11454.988907] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(D) OK [11454.989444] raid6test: test_disks(1, 12): faila= 1(D) failb= 12(D) OK [11454.989925] raid6test: test_disks(1, 13): faila= 1(D) failb= 13(D) OK [11454.990459] raid6test: test_disks(1, 14): faila= 1(D) failb= 14(D) OK [11454.990943] 
raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [11454.991517] raid6test: test_disks(1, 16): faila= 1(D) failb= 16(D) OK [11454.991964] raid6test: test_disks(1, 17): faila= 1(D) failb= 17(D) OK [11454.992489] raid6test: test_disks(1, 18): faila= 1(D) failb= 18(D) OK [11454.992932] raid6test: test_disks(1, 19): faila= 1(D) failb= 19(D) OK [11454.993433] raid6test: test_disks(1, 20): faila= 1(D) failb= 20(D) OK [11455.023= 23(Q) OK [11455.494412] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [11455.494828] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [11455.495346] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [11455.495796] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [11455.496297] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [11455.496747] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [11455.497248] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [11455.497662] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(D) OK [11455.498165] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(D) OK [11455.498579] raid6test: test_disks(2, 12): faila= 2(D) failb= 12(D) OK [11455.499071] raid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [11455.499523] raid6test: test_disks(2, 14): faila= 2(D) failb= 14(D) OK [11455.499971] raid6test: test_disks(2, 15): faila= 2(D) failb= 15(D) OK [11455.500477] raid6test: test_disks(2, 16): faila= 2(D) failb= 16(D) OK [11455.500919] raid6test: test_disks(2, 17): faila= 2(D) failb= 17(D) OK [11455.501415] raid6test: test_disks(2, 18): faila= 2(D) failb= 18(D) OK [11455.501830] raid6test: test_disks(2, isks(2, 22): faila= 2(D) failb= 22(P) OK [11456.002577] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(Q) OK [11456.003068] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [11456.003525] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [11456.004082] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [11456.004532] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [11456.005043] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [11456.005498] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [11456.005946] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(D) OK [11456.006449] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [11456.006859] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [11456.007359] raid6test: test_disks(3, 13): faila= 3(D) failb= 13(D) OK [11456.007769] raid6test: test_disks(3, 14): faila= 3(D) failb= 14(D) OK [11456.008261] raid6test: test_disks(3, 15): faila= 3(D) failb= 15(D) OK [11456.008674] raid6test: test_disks(3, 16): faila= 3(D) failb= 16(D) OK [11456.009171] raid6test: test_disks(3, 17): faila= isks(3, 20): faila= 3(D) failb= 20(D) OK [11456.509969] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [11456.510477] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(P) OK [11456.510891] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(Q) OK [11456.511360] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [11456.511777] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [11456.512268] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [11456.512679] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [11456.513186] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [11456.513602] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(D) OK [11456.514057] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(D) OK [11456.514506] raid6test: test_disks(4, 12): faila= 
4(D) failb= 12(D) OK [11456.514925] raid6test: test_disks(4, 13): faila= 4(D) failb= 13(D) OK [11456.515436] raid6test: test_disks(4, 14): faila= 4(D) failb= 14(D) OK [11456.515862] raid6test: test_disks(4, 15): faila= 4(D) failb= 15(D) OK [11456.516370] raid6test: test_disks(4, 16): faila= la= 4(D) failb= 19(D) OK [11457.017105] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [11457.017560] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [11457.018042] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(P) OK [11457.018503] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(Q) OK [11457.018947] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [11457.019455] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [11457.019901] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [11457.020411] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [11457.020860] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(D) OK [11457.021400] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(D) OK [11457.021834] raid6test: test_disks(5, 12): faila= 5(D) failb= 12(D) OK [11457.022353] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [11457.022806] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [11457.023296] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [11457.023755] raid6test: test_disks(5, 16): faila= 5(D) failb= 16(D) OK [11457.024195] raid6test: test_disks(5, 17): faila= 5(D) failb= 17(D) OK [11457.024659] raid6test: test_disks(5, 18): faila= 5(D) failb= 18(D) OK [11457.025110] raid6tesaid6test: test_disks(5, 22): faila= 5(D) failb= 22(P) OK [11457.525929] raid6test: test_disks(5, 23): faila= 5(D) failb= 23(Q) OK [11457.526436] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [11457.526877] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [11457.527371] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [11457.527784] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(D) OK [11457.528282] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(D) OK [11457.528695] raid6test: test_disks(6, 12): faila= 6(D) failb= 12(D) OK [11457.529205] raid6test: test_disks(6, 13): faila= 6(D) failb= 13(D) OK [11457.529625] raid6test: test_disks(6, 14): faila= 6(D) failb= 14(D) OK [11457.530078] raid6test: test_disks(6, 15): faila= 6(D) failb= 15(D) OK [11457.530528] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [11457.530980] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK [11457.531444] raid6test: test_disks(6, 18): faila= 6(D) failb= 18(D) OK [11457.531851] raid6test: test_disks(6, 19): faila= 6(D) failb= 19(D) OK [11457.532352] raid6test: test_disisks(6, 23): faila= 6(D) failb= 23(Q) OK [11458.033103] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [11458.033553] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [11458.034035] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [11458.034515] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(D) OK [11458.034970] raid6test: test_disks(7, 12): faila= 7(D) failb= 12(D) OK [11458.035467] raid6test: test_disks(7, 13): faila= 7(D) failb= 13(D) OK [11458.035937] raid6test: test_disks(7, 14): faila= 7(D) failb= 14(D) OK [11458.036438] raid6test: test_disks(7, 15): faila= 7(D) failb= 15(D) OK [11458.036900] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [11458.037398] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [11458.037863] raid6test: test_disks(7, 18): faila= 7(D) failb= 18(D) OK [11458.038360] raid6test: 
test_disks(7, 19): faila= 7(D) failb= 19(D) OK [11458.038821] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [11458.039319] raid6test: test_disks(7, 21): faila= 7(D) failb= 21(D) OK [11458.039788] raid6test: test_disks(7, 22): faila= 7(D) failb= 22(P) OK [11458.040632] raid6test: test_disks(7, 23): faila= 7(D) failb= 23(Q) OK [1= 11(D) OK [11458.541518] raid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [11458.542061] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [11458.542541] raid6test: test_disks(8, 14): faila= 8(D) failb= 14(D) OK [11458.542997] raid6test: test_disks(8, 15): faila= 8(D) failb= 15(D) OK [11458.543548] raid6test: test_disks(8, 16): faila= 8(D) failb= 16(D) OK [11458.543993] raid6test: test_disks(8, 17): faila= 8(D) failb= 17(D) OK [11458.544442] raid6test: test_disks(8, 18): faila= 8(D) failb= 18(D) OK [11458.544912] raid6test: test_disks(8, 19): faila= 8(D) failb= 19(D) OK [11458.545417] raid6test: test_disks(8, 20): faila= 8(D) failb= 20(D) OK [11458.545889] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [11458.546364] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(P) OK [11458.546835] raid6test: test_disks(8, 23): faila= 8(D) failb= 23(Q) OK [11458.547338] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(D) OK [11458.547807] raid6test: test_disks(9, 11): faila= 9(D) failb= 11(D) OK [11458.548314] raid6test: test_disks(9, 12): faila= 9(D) failb= 12(D) OK [11458.548784] raid6test: test_disks(9, 13): faila= 9(D) failb= 13(D) OK [11458.549292] raid6test: test_disks(9, 14): faila= 9(D) failb= 14(D) OK [11458.5[11459.050178] raid6test: test_disks(9, 18): faila= 9(D) failb= 18(D) OK [11459.050621] raid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [11459.051138] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [11459.051611] raid6test: test_disks(9, 21): faila= 9(D) failb= 21(D) OK [11459.052163] raid6test: test_disks(9, 22): faila= 9(D) failb= 22(P) OK [11459.052689] raid6test: test_disks(9, 23): faila= 9(D) failb= 23(Q) OK [11459.053296] raid6test: test_disks(10, 11): faila= 10(D) failb= 11(D) OK [11459.053858] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [11459.054457] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) OK [11459.054996] raid6test: test_disks(10, 14): faila= 10(D) failb= 14(D) OK [11459.055562] raid6test: test_disks(10, 15): faila= 10(D) failb= 15(D) OK [11459.056084] raid6test: test_disks(10, 16): faila= 10(D) failb= 16(D) OK [11459.056553] raid6test: test_disks(10, 17): faila= 10(D) failb= 17(D) OK [11459.057005] raid6test: test_disks(10, 18): faila= 10(D) failb= 18(D) OK [11459.057558] raid6test: test_disks(10, 19): faila= 10(D) failb= 19(D) OK [11459.058055] raid6test: test_disks(10, 20): faila= 10(D) failb= 20(D) OK [11459.058557] raid6test: test_disks(10, 21): faila= 10(D) failb= 21(D) OK [11459.058981] raid6test: test_disks(10, 22): faila= 10(D) failb= 22(P) OK [11459.08655[11459.559780] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [11459.560315] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [11459.560785] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [11459.561304] raid6test: test_disks(11, 17): faila= 11(D) failb= 17(D) OK [11459.561774] raid6test: test_disks(11, 18): faila= 11(D) failb= 18(D) OK [11459.562289] raid6test: test_disks(11, 19): faila= 11(D) failb= 19(D) OK [11459.562761] raid6test: test_disks(11, 20): faila= 11(D) failb= 20(D) OK [11459.563269] raid6test: test_disks(11, 21): faila= 11(D) failb= 21(D) OK 
[11459.563763] raid6test: test_disks(11, 22): faila= 11(D) failb= 22(P) OK [11459.564242] raid6test: test_disks(11, 23): faila= 11(D) failb= 23(Q) OK [11459.564714] raid6test: test_disks(12, 13): faila= 12(D) failb= 13(D) OK [11459.565227] raid6test: test_disks(12, 14): faila= 12(D) failb= 14(D) OK [11459.565699] raid6test: test_disks(12, 15): faila= 12(D) failb= 15(D) OK [11459.566186] raid6test: test_disks(12, 16): faila= 12(D) failb= 16(D) OK [11459.566642] raid6test: test_disks(12, 17): faila= 12(D) failb= 17(D) OK [11459.567143] raid6test: taid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [11460.067973] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(P) OK [11460.068494] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(Q) OK [11460.068956] raid6test: test_disks(13, 14): faila= 13(D) failb= 14(D) OK [11460.069470] raid6test: test_disks(13, 15): faila= 13(D) failb= 15(D) OK [11460.069939] raid6test: test_disks(13, 16): faila= 13(D) failb= 16(D) OK [11460.070440] raid6test: test_disks(13, 17): faila= 13(D) failb= 17(D) OK [11460.070907] raid6test: test_disks(13, 18): faila= 13(D) failb= 18(D) OK [11460.071412] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [11460.071870] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [11460.072370] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK [11460.072870] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(P) OK [11460.073353] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(Q) OK [11460.073823] raid6test: test_disks(14, 15): faila= 14(D) failb= 15(D) OK [11460.074307] raid6test: test_disks(14, 16): faila= 14(D) failb= 16(D) OK [11460.074781] raid6test: test_disks(14, 17): faila= 14(D) failb= 17(D) OK [11460.075256] raiaid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [11460.576171] raid6test: test_disks(14, 22): faila= 14(D) failb= 22(P) OK [11460.576628] raid6test: test_disks(14, 23): faila= 14(D) failb= 23(Q) OK [11460.577116] raid6test: test_disks(15, 16): faila= 15(D) failb= 16(D) OK [11460.577546] raid6test: test_disks(15, 17): faila= 15(D) failb= 17(D) OK [11460.578008] raid6test: test_disks(15, 18): faila= 15(D) failb= 18(D) OK [11460.578484] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [11460.578912] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [11460.579389] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [11460.579808] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(P) OK [11460.580332] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(Q) OK [11460.580810] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [11460.581257] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [11460.581701] raid6test: test_disks(16, 19): faila= 16(D) failb= 19(D) OK [11460.582168] raid6test: test_disks(16, 20): faila= 16(D) failb= 20(D) OK [11460.582634] raid6test: test_disks(16, 21): faila= 16(D) failb= 21(D) OK [11460.583092] raid6test: test_disks(16, 22): faila= 16(D) failb= 22(P) OK [11460.583548] raidaid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [11461.084454] raid6test: test_disks(17, 21): faila= 17(D) failb= 21(D) OK [11461.084917] raid6test: test_disks(17, 22): faila= 17(D) failb= 22(P) OK [11461.085437] raid6test: test_disks(17, 23): faila= 17(D) failb= 23(Q) OK [11461.085860] raid6test: test_disks(18, 19): faila= 18(D) failb= 19(D) OK [11461.086356] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [11461.086814] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK 
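Where failb lands on slot 22(P) or 23(Q) in the 24-disk case, the test exercises a different recovery path than the data/data pairs. In the standard RAID-6 construction (H. Peter Anvin, "The mathematics of RAID-6"), the two parity blocks over data blocks D_0..D_{n-1} are computed in GF(2^8) with generator g = {02}, and the hardest case, two lost data blocks x < y, is solved with the algebra below. The notation follows that paper rather than anything printed in this log: P_xy and Q_xy are the parities recomputed with the two missing blocks treated as zero, and addition is XOR.

\[
P = \bigoplus_{i=0}^{n-1} D_i, \qquad Q = \bigoplus_{i=0}^{n-1} g^{\,i} D_i
\]
\[
D_x = A\,(P \oplus P_{xy}) \oplus B\,(Q \oplus Q_{xy}), \qquad D_y = (P \oplus P_{xy}) \oplus D_x
\]
\[
A = g^{\,y-x}\bigl(g^{\,y-x} \oplus 1\bigr)^{-1}, \qquad B = g^{-x}\bigl(g^{\,y-x} \oplus 1\bigr)^{-1}
\]

Losing a data block together with Q is handled by plain XOR against P and then recomputing Q; losing a data block together with P uses Q instead (D_x = (Q \oplus Q_x)\,g^{-x}) and then recomputes P. That is why every (D,P) and (D,Q) pair in the log is still checked individually rather than being folded into the data/data case.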
[11461.087326] raid6test: test_disks(18, 22): faila= 18(D) failb= 22(P) OK [11461.087792] raid6test: test_disks(18, 23): faila= 18(D) failb= 23(Q) OK [11461.088293] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [11461.088716] raid6test: test_disks(19, 21): faila= 19(D) failb= 21(D) OK [11461.089197] raid6test: test_disks(19, 22): faila= 19(D) failb= 22(P) OK [11461.089648] raid6test: test_disks(19, 23): faila= 19(D) failb= 23(Q) OK [11461.090129] raid6test: test_disks(20, 21): faila= 20(D) failb= 21(D) OK [11461.090589] raid6test: test_disks(20, 22): faila= 20(D) failb= 22(P) OK [11461.091074] raid6test: test_disks(20, 23)isks(22, 23): faila= 22(P) failb= 23(Q) OK [11461.592697] raid6test: testing the 64-disk case... [11461.593513] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [11461.594089] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [11461.594621] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [11461.595149] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [11461.595683] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [11461.596242] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [11461.596594] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [11461.597121] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [11461.597654] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [11461.598172] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [11461.598660] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [11461.599180] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [11461.599685] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [11461.600208] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK = 17(D) OK [11462.101175] raid6test: test_disks(0, 18): faila= 0(D) failb= 18(D) OK [11462.101675] raid6test: test_disks(0, 19): faila= 0(D) failb= 19(D) OK [11462.102205] raid6test: test_disks(0, 20): faila= 0(D) failb= 20(D) OK [11462.102727] raid6test: test_disks(0, 21): faila= 0(D) failb= 21(D) OK [11462.103248] raid6test: test_disks(0, 22): faila= 0(D) failb= 22(D) OK [11462.103769] raid6test: test_disks(0, 23): faila= 0(D) failb= 23(D) OK [11462.104321] raid6test: test_disks(0, 24): faila= 0(D) failb= 24(D) OK [11462.104808] raid6test: test_disks(0, 25): faila= 0(D) failb= 25(D) OK [11462.105347] raid6test: test_disks(0, 26): faila= 0(D) failb= 26(D) OK [11462.105830] raid6test: test_disks(0, 27): faila= 0(D) failb= 27(D) OK [11462.106365] raid6test: test_disks(0, 28): faila= 0(D) failb= 28(D) OK [11462.106856] raid6test: test_disks(0, 29): faila= 0(D) failb= 29(D) OK [11462.107399] raid6test: test_disks(0, 30): faila= 0(D) failb= 30(D) OK [11462.107882] raid6test: test_disks(0, 31): faila= 0(D) failb= 31(D) OK [11462.108424] raid6test: test_disks(0, 32): faila= 0(D) failb= 32(D) OK [11462.108908] raid6test: testaid6test: test_disks(0, 36): faila= 0(D) failb= 36(D) OK [11462.609785] raid6test: test_disks(0, 37): faila= 0(D) failb= 37(D) OK [11462.610368] raid6test: test_disks(0, 38): faila= 0(D) failb= 38(D) OK [11462.610867] raid6test: test_disks(0, 39): faila= 0(D) failb= 39(D) OK [11462.611451] raid6test: test_disks(0, 40): faila= 0(D) failb= 40(D) OK [11462.611948] raid6test: test_disks(0, 41): faila= 0(D) failb= 41(D) OK [11462.612513] raid6test: test_disks(0, 42): faila= 0(D) failb= 42(D) OK [11462.613011] raid6test: test_disks(0, 43): faila= 0(D) failb= 43(D) OK [11462.613589] raid6test: test_disks(0, 44): faila= 0(D) failb= 44(D) OK [11462.614130] raid6test: 
test_disks(0, 45): faila= 0(D) failb= 45(D) OK [11462.614685] raid6test: test_disks(0, 46): faila= 0(D) failb= 46(D) OK [11462.615240] raid6test: test_disks(0, 47): faila= 0(D) failb= 47(D) OK [11462.615769] raid6test: test_disks(0, 48): faila= 0(D) failb= 48(D) OK [11462.616344] raid6test: test_disks(0, 49): faila= 0(D) failb= 49(D) OK [11462.616838] raid6test: test_disks(0, 50): faila= 0(D) failb= 50(D) OK [11462.617423] raid6test: test_disks(0, 51): faila= 0(D) failb= 51(D) OK [11462.617913] raid6test: test_diisks(0, 55): faila= 0(D) failb= 55(D) OK [11463.118909] raid6test: test_disks(0, 56): faila= 0(D) failb= 56(D) OK [11463.119464] raid6test: test_disks(0, 57): faila= 0(D) failb= 57(D) OK [11463.119954] raid6test: test_disks(0, 58): faila= 0(D) failb= 58(D) OK [11463.120774] raid6test: test_disks(0, 59): faila= 0(D) failb= 59(D) OK [11463.121311] raid6test: test_disks(0, 60): faila= 0(D) failb= 60(D) OK [11463.121813] raid6test: test_disks(0, 61): faila= 0(D) failb= 61(D) OK [11463.122367] raid6test: test_disks(0, 62): faila= 0(D) failb= 62(P) OK [11463.122878] raid6test: test_disks(0, 63): faila= 0(D) failb= 63(Q) OK [11463.123422] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [11463.123934] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [11463.124486] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [11463.124973] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [11463.125517] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [11463.126015] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [11463.126560] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [11463.127041] raid6test: test_disks(1, 9): faila= 1(D) la= 1(D) failb= 12(D) OK [11463.627934] raid6test: test_disks(1, 13): faila= 1(D) failb= 13(D) OK [11463.628487] raid6test: test_disks(1, 14): faila= 1(D) failb= 14(D) OK [11463.628972] raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [11463.629525] raid6test: test_disks(1, 16): faila= 1(D) failb= 16(D) OK [11463.630017] raid6test: test_disks(1, 17): faila= 1(D) failb= 17(D) OK [11463.630572] raid6test: test_disks(1, 18): faila= 1(D) failb= 18(D) OK [11463.631090] raid6test: test_disks(1, 19): faila= 1(D) failb= 19(D) OK [11463.631617] raid6test: test_disks(1, 20): faila= 1(D) failb= 20(D) OK [11463.632136] raid6test: test_disks(1, 21): faila= 1(D) failb= 21(D) OK [11463.632651] raid6test: test_disks(1, 22): faila= 1(D) failb= 22(D) OK [11463.633202] raid6test: test_disks(1, 23): faila= 1(D) failb= 23(D) OK [11463.633728] raid6test: test_disks(1, 24): faila= 1(D) failb= 24(D) OK [11463.634280] raid6test: test_disks(1, 25): faila= 1(D) failb= 25(D) OK [11463.634800] raid6test: test_disks(1, 26): faila= 1(D) failb= 26(D) OK [11463.635319] raid6test: test_disks(1, 27): faila= 1(D) failb= 27(D) OK [11463.635800] raid6test: test_disks(1, 28): faila= 1(D) failb= 28(D) OK [11463.636318] raid6test: test_disks(1, 29): faila= 1(D) failb= 29(D) OK [11463.636811] raidaid6test: test_disks(1, 33): faila= 1(D) failb= 33(D) OK [11464.137655] raid6test: test_disks(1, 34): faila= 1(D) failb= 34(D) OK [11464.138162] raid6test: test_disks(1, 35): faila= 1(D) failb= 35(D) OK [11464.138648] raid6test: test_disks(1, 36): faila= 1(D) failb= 36(D) OK [11464.139188] raid6test: test_disks(1, 37): faila= 1(D) failb= 37(D) OK [11464.139691] raid6test: test_disks(1, 38): faila= 1(D) failb= 38(D) OK [11464.140230] raid6test: test_disks(1, 39): faila= 1(D) failb= 39(D) OK [11464.140735] raid6test: test_disks(1, 40): faila= 1(D) failb= 40(D) OK 
[11464.141276] raid6test: test_disks(1, 41): faila= 1(D) failb= 41(D) OK [11464.141786] raid6test: test_disks(1, 42): faila= 1(D) failb= 42(D) OK [11464.142329] raid6test: test_disks(1, 43): faila= 1(D) failb= 43(D) OK [11464.142836] raid6test: test_disks(1, 44): faila= 1(D) failb= 44(D) OK [11464.143390] raid6test: test_disks(1, 45): faila= 1(D) failb= 45(D) OK [11464.143922] raid6test: test_disks(1, 46): faila= 1(D) failb= 46(D) OK [11464.144445] raid6test: test_disks(1, 47): faila= 1(D) failb= 47(D) OK [11464.144946] raid6test: test_disks(1, 48): faila= 1(D) failb= 48(D) OK [11464.145505] raid6test: test_disisks(1, 52): faila= 1(D) failb= 52(D) OK [11464.646269] raid6test: test_disks(1, 53): faila= 1(D) failb= 53(D) OK [11464.646788] raid6test: test_disks(1, 54): faila= 1(D) failb= 54(D) OK [11464.647329] raid6test: test_disks(1, 55): faila= 1(D) failb= 55(D) OK [11464.647842] raid6test: test_disks(1, 56): faila= 1(D) failb= 56(D) OK [11464.648400] raid6test: test_disks(1, 57): faila= 1(D) failb= 57(D) OK [11464.648913] raid6test: test_disks(1, 58): faila= 1(D) failb= 58(D) OK [11464.649474] raid6test: test_disks(1, 59): faila= 1(D) failb= 59(D) OK [11464.649984] raid6test: test_disks(1, 60): faila= 1(D) failb= 60(D) OK [11464.650531] raid6test: test_disks(1, 61): faila= 1(D) failb= 61(D) OK [11464.651036] raid6test: test_disks(1, 62): faila= 1(D) failb= 62(P) OK [11464.651578] raid6test: test_disks(1, 63): faila= 1(D) failb= 63(Q) OK [11464.652048] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [11464.652623] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [11464.653173] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [11464.653668] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [11464.654238] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [1146[11465.155100] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(D) OK [11465.155588] raid6test: test_disks(2, 12): faila= 2(D) failb= 12(D) OK [11465.156132] raid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [11465.156668] raid6test: test_disks(2, 14): faila= 2(D) failb= 14(D) OK [11465.157206] raid6test: test_disks(2, 15): faila= 2(D) failb= 15(D) OK [11465.157711] raid6test: test_disks(2, 16): faila= 2(D) failb= 16(D) OK [11465.158247] raid6test: test_disks(2, 17): faila= 2(D) failb= 17(D) OK [11465.158736] raid6test: test_disks(2, 18): faila= 2(D) failb= 18(D) OK [11465.159242] raid6test: test_disks(2, 19): faila= 2(D) failb= 19(D) OK [11465.159741] raid6test: test_disks(2, 20): faila= 2(D) failb= 20(D) OK [11465.160269] raid6test: test_disks(2, 21): faila= 2(D) failb= 21(D) OK [11465.160804] raid6test: test_disks(2, 22): faila= 2(D) failb= 22(D) OK [11465.161375] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(D) OK [11465.161886] raid6test: test_disks(2, 24): faila= 2(D) failb= 24(D) OK [11465.162452] raid6test: test_disks(2, 25): faila= 2(D) failb= 25(D) OK [11465.162957] raid6test: test_disks(2, 26): faila= 2(D) failb= 26(D) OK [11465.163522] raiaid6test: test_disks(2, 30): faila= 2(D) failb= 30(D) OK [11465.664294] raid6test: test_disks(2, 31): faila= 2(D) failb= 31(D) OK [11465.664816] raid6test: test_disks(2, 32): faila= 2(D) failb= 32(D) OK [11465.665677] raid6test: test_disks(2, 33): faila= 2(D) failb= 33(D) OK [11465.666186] raid6test: test_disks(2, 34): faila= 2(D) failb= 34(D) OK [11465.666705] raid6test: test_disks(2, 35): faila= 2(D) failb= 35(D) OK [11465.667256] raid6test: test_disks(2, 36): faila= 2(D) failb= 36(D) OK [11465.667764] raid6test: test_disks(2, 37): faila= 
2(D) failb= 37(D) OK [11465.668303] raid6test: test_disks(2, 38): faila= 2(D) failb= 38(D) OK [11465.668816] raid6test: test_disks(2, 39): faila= 2(D) failb= 39(D) OK [11465.669358] raid6test: test_disks(2, 40): faila= 2(D) failb= 40(D) OK [11465.669900] raid6test: test_disks(2, 41): faila= 2(D) failb= 41(D) OK [11465.670489] raid6test: test_disks(2, 42): faila= 2(D) failb= 42(D) OK [11465.671021] raid6test: test_disks(2, 43): faila= 2(D) failb= 43(D) OK [11465.671590] raid6test: test_disks(2, 44): faila= 2(D) failb= 44(D) OK [11465.672128] raid6test: test_disks(2, 45): faila= 2(D) failb= 45(D) OK [11465.672625] raid6test: test_disks(2, 46): faila= 2(D) failb= 46(D) OK [11465.673176] raid6test: test_disks(2, 47): faila= 2(D) failb= 47(D) OK [11465.673705] raid6test: test_disks(2, isks(2, 51): faila= 2(D) failb= 51(D) OK [11466.174469] raid6test: test_disks(2, 52): faila= 2(D) failb= 52(D) OK [11466.174993] raid6test: test_disks(2, 53): faila= 2(D) failb= 53(D) OK [11466.175559] raid6test: test_disks(2, 54): faila= 2(D) failb= 54(D) OK [11466.176061] raid6test: test_disks(2, 55): faila= 2(D) failb= 55(D) OK [11466.176580] raid6test: test_disks(2, 56): faila= 2(D) failb= 56(D) OK [11466.177118] raid6test: test_disks(2, 57): faila= 2(D) failb= 57(D) OK [11466.177607] raid6test: test_disks(2, 58): faila= 2(D) failb= 58(D) OK [11466.178146] raid6test: test_disks(2, 59): faila= 2(D) failb= 59(D) OK [11466.178684] raid6test: test_disks(2, 60): faila= 2(D) failb= 60(D) OK [11466.179231] raid6test: test_disks(2, 61): faila= 2(D) failb= 61(D) OK [11466.179737] raid6test: test_disks(2, 62): faila= 2(D) failb= 62(P) OK [11466.180288] raid6test: test_disks(2, 63): faila= 2(D) failb= 63(Q) OK [11466.180762] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [11466.181301] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [11466.181807] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [11466.182342] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [114[11466.683195] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [11466.683739] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [11466.684308] raid6test: test_disks(3, 13): faila= 3(D) failb= 13(D) OK [11466.684789] raid6test: test_disks(3, 14): faila= 3(D) failb= 14(D) OK [11466.685332] raid6test: test_disks(3, 15): faila= 3(D) failb= 15(D) OK [11466.685836] raid6test: test_disks(3, 16): faila= 3(D) failb= 16(D) OK [11466.686375] raid6test: test_disks(3, 17): faila= 3(D) failb= 17(D) OK [11466.686888] raid6test: test_disks(3, 18): faila= 3(D) failb= 18(D) OK [11466.687450] raid6test: test_disks(3, 19): faila= 3(D) failb= 19(D) OK [11466.687955] raid6test: test_disks(3, 20): faila= 3(D) failb= 20(D) OK [11466.688515] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [11466.689025] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(D) OK [11466.689577] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(D) OK [11466.690125] raid6test: test_disks(3, 24): faila= 3(D) failb= 24(D) OK [11466.690658] raid6test: test_disks(3, 25): faila= 3(D) failb= 25(D) OK [11466.691201] raid6test: test_disks(3, 26): faila= 3(D) failb= 26(D) OK [11466.691692] raid6test: test_disks(3, 27): faila= 3(D) failb= 27(D) OK [11466.692233] raid6taid6test: test_disks(3, 31): faila= 3(D) failb= 31(D) OK [11467.193045] raid6test: test_disks(3, 32): faila= 3(D) failb= 32(D) OK [11467.193611] raid6test: test_disks(3, 33): faila= 3(D) failb= 33(D) OK [11467.194122] raid6test: test_disks(3, 34): faila= 3(D) failb= 34(D) OK [11467.194630] 
raid6test: test_disks(3, 35): faila= 3(D) failb= 35(D) OK [11467.195171] raid6test: test_disks(3, 36): faila= 3(D) failb= 36(D) OK [11467.195704] raid6test: test_disks(3, 37): faila= 3(D) failb= 37(D) OK [11467.196239] raid6test: test_disks(3, 38): faila= 3(D) failb= 38(D) OK [11467.196747] raid6test: test_disks(3, 39): faila= 3(D) failb= 39(D) OK [11467.197282] raid6test: test_disks(3, 40): faila= 3(D) failb= 40(D) OK [11467.197784] raid6test: test_disks(3, 41): faila= 3(D) failb= 41(D) OK [11467.198323] raid6test: test_disks(3, 42): faila= 3(D) failb= 42(D) OK [11467.198827] raid6test: test_disks(3, 43): faila= 3(D) failb= 43(D) OK [11467.199362] raid6test: test_disks(3, 44): faila= 3(D) failb= 44(D) OK [11467.199869] raid6test: test_disks(3, 45): faila= 3(D) failb= 45(D) OK [11467.200432] raid6test: test_disks(3, 46): faila= 3(D) failb= 46(D) OK [11467.200940] raid6test: test_diisks(3, 50): faila= 3(D) failb= 50(D) OK [11467.701703] raid6test: test_disks(3, 51): faila= 3(D) failb= 51(D) OK [11467.702244] raid6test: test_disks(3, 52): faila= 3(D) failb= 52(D) OK [11467.702751] raid6test: test_disks(3, 53): faila= 3(D) failb= 53(D) OK [11467.703291] raid6test: test_disks(3, 54): faila= 3(D) failb= 54(D) OK [11467.703795] raid6test: test_disks(3, 55): faila= 3(D) failb= 55(D) OK [11467.704344] raid6test: test_disks(3, 56): faila= 3(D) failb= 56(D) OK [11467.704857] raid6test: test_disks(3, 57): faila= 3(D) failb= 57(D) OK [11467.705395] raid6test: test_disks(3, 58): faila= 3(D) failb= 58(D) OK [11467.705906] raid6test: test_disks(3, 59): faila= 3(D) failb= 59(D) OK [11467.706460] raid6test: test_disks(3, 60): faila= 3(D) failb= 60(D) OK [11467.706964] raid6test: test_disks(3, 61): faila= 3(D) failb= 61(D) OK [11467.707527] raid6test: test_disks(3, 62): faila= 3(D) failb= 62(P) OK [11467.708041] raid6test: test_disks(3, 63): faila= 3(D) failb= 63(Q) OK [11467.708600] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [11467.709137] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK 9(D) OK [11468.209927] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(D) OK [11468.210469] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(D) OK [11468.210945] raid6test: test_disks(4, 12): faila= 4(D) failb= 12(D) OK [11468.211511] raid6test: test_disks(4, 13): faila= 4(D) failb= 13(D) OK [11468.212015] raid6test: test_disks(4, 14): faila= 4(D) failb= 14(D) OK [11468.212576] raid6test: test_disks(4, 15): faila= 4(D) failb= 15(D) OK [11468.213137] raid6test: test_disks(4, 16): faila= 4(D) failb= 16(D) OK [11468.213656] raid6test: test_disks(4, 17): faila= 4(D) failb= 17(D) OK [11468.214226] raid6test: test_disks(4, 18): faila= 4(D) failb= 18(D) OK [11468.214757] raid6test: test_disks(4, 19): faila= 4(D) failb= 19(D) OK [11468.215299] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [11468.215804] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [11468.216340] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(D) OK [11468.216845] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(D) OK [11468.217382] raid6test: test_disks(4, 24): faila= 4(D) failb= 24(D) OK [1= 27(D) OK [11468.718222] raid6test: test_disks(4, 28): faila= 4(D) failb= 28(D) OK [11468.718753] raid6test: test_disks(4, 29): faila= 4(D) failb= 29(D) OK [11468.719315] raid6test: test_disks(4, 30): faila= 4(D) failb= 30(D) OK [11468.719790] raid6test: test_disks(4, 31): faila= 4(D) failb= 31(D) OK [11468.720333] raid6test: test_disks(4, 32): faila= 4(D) failb= 32(D) OK [11468.720835] raid6test: test_disks(4, 33): 
faila= 4(D) failb= 33(D) OK [11468.721373] raid6test: test_disks(4, 34): faila= 4(D) failb= 34(D) OK [11468.721888] raid6test: test_disks(4, 35): faila= 4(D) failb= 35(D) OK [11468.722443] raid6test: test_disks(4, 36): faila= 4(D) failb= 36(D) OK [11468.722950] raid6test: test_disks(4, 37): faila= 4(D) failb= 37(D) OK [11468.723513] raid6test: test_disks(4, 38): faila= 4(D) failb= 38(D) OK [11468.724044] raid6test: test_disks(4, 39): faila= 4(D) failb= 39(D) OK [11468.724609] raid6test: test_disks(4, 40): faila= 4(D) failb= 40(D) OK [11468.725151] raid6test: test_disks(4, 41): faila= 4(D) failb= 41(D) OK [11468.725689] raid6test: test_disks(4, 42): faila= 4(D) failb= 42(D) OK [11468.726223] raid6test: testaid6test: test_disks(4, 46): faila= 4(D) failb= 46(D) OK [11469.227062] raid6test: test_disks(4, 47): faila= 4(D) failb= 47(D) OK [11469.227615] raid6test: test_disks(4, 48): faila= 4(D) failb= 48(D) OK [11469.228188] raid6test: test_disks(4, 49): faila= 4(D) failb= 49(D) OK [11469.228729] raid6test: test_disks(4, 50): faila= 4(D) failb= 50(D) OK [11469.229270] raid6test: test_disks(4, 51): faila= 4(D) failb= 51(D) OK [11469.229774] raid6test: test_disks(4, 52): faila= 4(D) failb= 52(D) OK [11469.230314] raid6test: test_disks(4, 53): faila= 4(D) failb= 53(D) OK [11469.230818] raid6test: test_disks(4, 54): faila= 4(D) failb= 54(D) OK [11469.231352] raid6test: test_disks(4, 55): faila= 4(D) failb= 55(D) OK [11469.231856] raid6test: test_disks(4, 56): faila= 4(D) failb= 56(D) OK [11469.232391] raid6test: test_disks(4, 57): faila= 4(D) failb= 57(D) OK [11469.232906] raid6test: test_disks(4, 58): faila= 4(D) failb= 58(D) OK [11469.233461] raid6test: test_disks(4, 59): faila= 4(D) failb= 59(D) OK [11469.234041] raid6test: taid6test: test_disks(4, 63): faila= 4(D) failb= 63(Q) OK [11469.734882] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [11469.735430] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [11469.735947] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [11469.736510] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [11469.737019] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(D) OK [11469.737608] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(D) OK [11469.738185] raid6test: test_disks(5, 12): faila= 5(D) failb= 12(D) OK [11469.738726] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [11469.739272] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [11469.739753] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [11469.740293] raid6test: test_disks(5, 16): faila= 5(D) failb= 16(D) OK [11469.740802] raid6test: test_disks(5, 17): faila= 5(D) failb= 17(D) OK [11469.741336] raid6test: test_disks(5, 18): faila= 5(D) failb= 18(D) OK [11469.741844] raid6test: test_disks(5, 19): faila= 5(D) failb= 19(D) OK [11469.742383] raid6test: test_disks(5, 20): faila= 5(D) failb= 20(D) OK [11469.742882] raid6test: test_disks(5isks(5, 24): faila= 5(D) failb= 24(D) OK [11470.243782] raid6test: test_disks(5, 25): faila= 5(D) failb= 25(D) OK [11470.244349] raid6test: test_disks(5, 26): faila= 5(D) failb= 26(D) OK [11470.244864] raid6test: test_disks(5, 27): faila= 5(D) failb= 27(D) OK [11470.245399] raid6test: test_disks(5, 28): faila= 5(D) failb= 28(D) OK [11470.245903] raid6test: test_disks(5, 29): faila= 5(D) failb= 29(D) OK [11470.246438] raid6test: test_disks(5, 30): faila= 5(D) failb= 30(D) OK [11470.246946] raid6test: test_disks(5, 31): faila= 5(D) failb= 31(D) OK [11470.247506] raid6test: test_disks(5, 32): faila= 5(D) failb= 32(D) OK 
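The g^i weights and the inverses in the recovery formulas above live in GF(2^8) reduced by the RAID-6 polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d). A small, self-contained C helper for that field arithmetic is sketched below using the usual shift-and-reduce approach; gfmul() and gfpow() are illustrative names for this sketch, not functions from the kernel's raid6 code.

#include <stdio.h>

/* Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d),
 * the field RAID-6 uses for the Q syndrome. */
static unsigned char gfmul(unsigned char a, unsigned char b)
{
	unsigned char p = 0;

	while (b) {
		if (b & 1)
			p ^= a;
		/* multiply a by x, reducing when the x^8 term appears */
		if (a & 0x80)
			a = (unsigned char)((a << 1) ^ 0x1d);
		else
			a <<= 1;
		b >>= 1;
	}
	return p;
}

/* g^e with g = {02}: the per-disk weight applied to D_e in Q. */
static unsigned char gfpow(unsigned int e)
{
	unsigned char r = 1;

	while (e--)
		r = gfmul(r, 2);
	return r;
}

int main(void)
{
	/* Example: the weight on data disk 5 in the Q syndrome (0x20). */
	printf("g^5 = 0x%02x\n", gfpow(5));
	return 0;
}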
[11470.248015] raid6test: test_disks(5, 33): faila= 5(D) failb= 33(D) OK [11470.248577] raid6test: test_disks(5, 34): faila= 5(D) failb= 34(D) OK [11470.249079] raid6test: test_disks(5, 35): faila= 5(D) failb= 35(D) OK [11470.249631] raid6test: test_disks(5, 36): faila= 5(D) failb= 36(D) OK [11470.250160] raid6test: test_disks(5, 37): faila= 5(D) failb= 37(D) OK [11470.250696] raid6test: test_disks(5, 38): faila= 5(D) failb= 38(D) OK [11470.251239] raid6test: test_disks(5, 39): faila= 5(D) failb= 39(D) OK [11470.251772] raid6test: test_disks(5, 40): faila= 5(D) failb= 40(D) OK = 43(D) OK [11470.752629] raid6test: test_disks(5, 44): faila= 5(D) failb= 44(D) OK [11470.753208] raid6test: test_disks(5, 45): faila= 5(D) failb= 45(D) OK [11470.753737] raid6test: test_disks(5, 46): faila= 5(D) failb= 46(D) OK [11470.754275] raid6test: test_disks(5, 47): faila= 5(D) failb= 47(D) OK [11470.754804] raid6test: test_disks(5, 48): faila= 5(D) failb= 48(D) OK [11470.755336] raid6test: test_disks(5, 49): faila= 5(D) failb= 49(D) OK [11470.755843] raid6test: test_disks(5, 50): faila= 5(D) failb= 50(D) OK [11470.756380] raid6test: test_disks(5, 51): faila= 5(D) failb= 51(D) OK [11470.756892] raid6test: test_disks(5, 52): faila= 5(D) failb= 52(D) OK [11470.757431] raid6test: test_disks(5, 53): faila= 5(D) failb= 53(D) OK [11470.757944] raid6test: test_disks(5, 54): faila= 5(D) failb= 54(D) OK [11470.758514] raid6test: test_disks(5, 55): faila= 5(D) failb= 55(D) OK [11470.759030] raid6test: test_disks(5, 56): faila= 5(D) failb= 56(D) OK [11470.759582] raid6test: test_disks(5, 57): faila= 5(D) failb= 57(D) OK [11470.760084] raid6test: test_disks(5, 58): faila= 5(D) failb= 58(D) OK [11470.760644] raid6test: test_disks(5, 59): faila= 5(D) failb= 59(D) OK [1147aid6test: test_disks(5, 62): faila= 5(D) failb= 62(P) OK [11471.161729] raid= 63(Q) OK [11471.262208] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [11471.262630] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [11471.263192] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [11471.263725] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(D) OK [11471.264292] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(D) OK [11471.264814] raid6test: test_disks(6, 12): faila= 6(D) failb= 12(D) OK [11471.265347] raid6test: test_disks(6, 13): faila= 6(D) failb= 13(D) OK [11471.265856] raid6test: test_disks(6, 14): faila= 6(D) failb= 14(D) OK [11471.266392] raid6test: test_disks(6, 15): faila= 6(D) failb= 15(D) OK [11471.266912] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [11471.267448] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK [11471.267960] raid6test: test_disks(6, 18): faila= 6(D) failb= 18(D) OK [11471.268519] raid6test: test_disks(6, 19): faila= 6(D) failb= 19(D) OK [11471.269017] raid6test: test_disks(6, 20): faila= 6(D) failb= 20(D) OK [11471.269578] raid6test: tesaid6test: test_disks(6, 24): faila= 6(D) failb= 24(D) OK [11471.770554] raid6test: test_disks(6, 25): faila= 6(D) failb= 25(D) OK [11471.771066] raid6test: test_disks(6, 26): faila= 6(D) failb= 26(D) OK [11471.771595] raid6test: test_disks(6, 27): faila= 6(D) failb= 27(D) OK [11471.772100] raid6test: test_disks(6, 28): faila= 6(D) failb= 28(D) OK [11471.772660] raid6test: test_disks(6, 29): faila= 6(D) failb= 29(D) OK [11471.773196] raid6test: test_disks(6, 30): faila= 6(D) failb= 30(D) OK [11471.773730] raid6test: test_disks(6, 31): faila= 6(D) failb= 31(D) OK [11471.774296] raid6test: test_disks(6, 32): faila= 6(D) failb= 32(D) OK 
[11471.774825] raid6test: test_disks(6, 33): faila= 6(D) failb= 33(D) OK [11471.775366] raid6test: test_disks(6, 34): faila= 6(D) failb= 34(D) OK [11471.775875] raid6test: test_disks(6, 35): faila= 6(D) failb= 35(D) OK [11471.776407] raid6test: test_disks(6, 36): faila= 6(D) failb= 36(D) OK [11471.776923] raid6test: test_disks(6, 37): faila= 6(D) failb= 37(D) OK [11471.777458] raid6test: test_disks(6, 38): faila= 6(D) failb= 38(D) OK [11471.777968] raid6test: test_isks(6, 41): faila= 6(D) failb= 41(D) OK [11472.178900] raid6test: test_disk[11472.279431] raid6test: test_disks(6, 43): faila= 6(D) failb= 43(D) OK [11472.280036] raid6test: test_disks(6, 44): faila= 6(D) failb= 44(D) OK [11472.280649] raid6test: test_disks(6, 45): faila= 6(D) failb= 45(D) OK [11472.281267] raid6test: test_disks(6, 46): faila= 6(D) failb= 46(D) OK [11472.281809] raid6test: test_disks(6, 47): faila= 6(D) failb= 47(D) OK [11472.282426] raid6test: test_disks(6, 48): faila= 6(D) failb= 48(D) OK [11472.282913] raid6test: test_disks(6, 49): faila= 6(D) failb= 49(D) OK [11472.283537] raid6test: test_disks(6, 50): faila= 6(D) failb= 50(D) OK [11472.284088] raid6test: test_disks(6, 51): faila= 6(D) failb= 51(D) OK [11472.284703] raid6test: test_disks(6, 52): faila= 6(D) failb= 52(D) OK [11472.285335] raid6test: test_disks(6, 53): faila= 6(D) failb= 53(D) OK [11472.285834] raid6test: test_disks(6, 54): faila= 6(D) failb= 54(D) OK [11472.286444] raid6test: test_disks(6, 55): faila= 6(D) failb= 55(D) OK [11472.286934] raid6test: test_disks(6, 56): faila= 6(D) failb= 56(D) OK [11472.287572] raid6test: test_disks(6, 57): faila= 6(D) failb= 57(D) OK [11472.315693] aid6test: test_disks(6, 60): faila= 6(D) failb= 60(D) OK [11472.688796] raid6test: test_disks(6, 61): faila= 6(D) failb= 61(D) OK [11472.689424] raid6test: test_disks(6, 62): faila= 6(D) failb= 62(P) OK [11472.689963] raid6test: test_disks(6, 63): faila= 6(D) failb= 63(Q) OK [11472.690660] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [11472.691255] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [11472.691807] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [11472.692407] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(D) OK [11472.692992] raid6test: test_disks(7, 12): faila= 7(D) failb= 12(D) OK [11472.693958] raid6test: test_disks(7, 13): faila= 7(D) failb= 13(D) OK [11472.694623] raid6test: test_disks(7, 14): faila= 7(D) failb= 14(D) OK [11472.695252] raid6test: test_disks(7, 15): faila= 7(D) failb= 15(D) OK [11472.695821] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [11472.696426] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [11472.697098] raid6test: test_disks(7, 18): faila= 7(D)[11472.779123] raid6test: test_disks(7, 19): faila= 7(D) failb= 19(D) OK [11472.798028] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [11472.798625] raid6test: test_disks(7, 21): faila= 7(D) failb= 21(D) OK [11472.82[11473.299474] raid6test: test_disks(7, 25): faila= 7(D) failb= 25(D) OK [11473.300017] raid6test: test_disks(7, 26): faila= 7(D) failb= 26(D) OK [11473.300636] raid6test: test_disks(7, 27): faila= 7(D) failb= 27(D) OK [11473.301233] raid6test: test_disks(7, 28): faila= 7(D) failb= 28(D) OK [11473.301800] raid6test: test_disks(7, 29): faila= 7(D) failb= 29(D) OK [11473.302389] raid6test: test_disks(7, 30): faila= 7(D) failb= 30(D) OK [11473.302938] raid6test: test_disks(7, 31): faila= 7(D) failb= 31(D) OK [11473.303518] raid6test: test_disks(7, 32): faila= 7(D) failb= 32(D) OK [11473.304039] raid6test: 
test_disks(7, 33): faila= 7(D) failb= 33(D) OK [11473.304616] raid6test: test_disks(7, 34): faila= 7(D) failb= 34(D) OK [11473.305114] raid6test: test_disks(7, 35): faila= 7(D) failb= 35(D) OK [11473.305680] raid6test: test_disks(7, 36): faila= 7(D) failb= 36(D) OK [11473.306236] raid6test: test_disks(7, 37): faila= 7(D) failb= 37(D) OK [11473.306738] raid6test: test_disks(7, 38): faila= 7(D) failb= 38(D) OK [11473.307308] raid6test: test_disks(7, 39): faila= 7(D) failb= 39(D) OK [11473.33aid6test: test_disks(7, 42): faila= 7(D) failb= 42(D) OK [11473.708465] raid= 43(D) OK [11473.809398] raid6test: test_disks(7, 44): faila= 7(D) failb= 44(D) OK [11473.809986] raid6test: test_disks(7, 45): faila= 7(D) failb= 45(D) OK [11473.810583] raid6test: test_disks(7, 46): faila= 7(D) failb= 46(D) OK [11473.811097] raid6test: test_disks(7, 47): faila= 7(D) failb= 47(D) OK [11473.811722] raid6test: test_disks(7, 48): faila= 7(D) failb= 48(D) OK [11473.812356] raid6test: test_disks(7, 49): faila= 7(D) failb= 49(D) OK [11473.812875] raid6test: test_disks(7, 50): faila= 7(D) failb= 50(D) OK [11473.813488] raid6test: test_disks(7, 51): faila= 7(D) failb= 51(D) OK [11473.813992] raid6test: test_disks(7, 52): faila= 7(D) failb= 52(D) OK [11473.814561] raid6test: test_disks(7, 53): faila= 7(D) failb= 53(D) OK [11473.815067] raid6test: test_disks(7, 54): faila= 7(D) failb= 54(D) OK [11473.815635] raid6test: test_disks(7, 55): faila= 7(D) failb= 55(D) OK [11473.816188] raid6test: test_disks(7, 56): faila= 7(D) failb= 56(D) OK [11473.816723] raid6test: test_disks(7, 57): faila= 7(D) failb= 57(D) OK [11473.817287] raid6test: test_disks(7, 58): faila= 7(D) failb= 58(D) OK [11473.817826] raid6test: test_disks(7, 59): faila= 7(D) failb= 59(D) OK [11473.818367] raid6test: test_disks(7isks(7, 63): faila= 7(D) failb= 63(Q) OK [11474.319306] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [11474.319869] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(D) OK [11474.320428] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(D) OK [11474.320919] raid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [11474.321480] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [11474.322006] raid6test: test_disks(8, 14): faila= 8(D) failb= 14(D) OK [11474.322584] raid6test: test_disks(8, 15): faila= 8(D) failb= 15(D) OK [11474.323075] raid6test: test_disks(8, 16): faila= 8(D) failb= 16(D) OK [11474.323655] raid6test: test_disks(8, 17): faila= 8(D) failb= 17(D) OK [11474.324208] raid6test: test_disks(8, 18): faila= 8(D) failb= 18(D) OK [11474.324733] raid6test: test_disks(8, 19): faila= 8(D) failb= 19(D) OK [11474.325299] raid6test: test_disks(8, 20): faila= 8(D) failb= 20(D) OK [11474.325849] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [11474.326408] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(D) OK [11474.326900] raid6test: test_disks(8, 23isks(8, 26): faila= 8(D) failb= 26(D) OK [11474.827751] raid6test: test_disks(8, 27): faila= 8(D) failb= 27(D) OK [11474.828320] raid6test: test_disks(8, 28): faila= 8(D) failb= 28(D) OK [11474.828887] raid6test: test_disks(8, 29): faila= 8(D) failb= 29(D) OK [11474.829446] raid6test: test_disks(8, 30): faila= 8(D) failb= 30(D) OK [11474.829942] raid6test: test_disks(8, 31): faila= 8(D) failb= 31(D) OK [11474.830498] raid6test: test_disks(8, 32): faila= 8(D) failb= 32(D) OK [11474.831035] raid6test: test_disks(8, 33): faila= 8(D) failb= 33(D) OK [11474.831630] raid6test: test_disks(8, 34): faila= 8(D) failb= 34(D) OK [11474.832194] raid6test: test_disks(8, 
35): faila= 8(D) failb= 35(D) OK [11474.832710] raid6test: test_disks(8, 36): faila= 8(D) failb= 36(D) OK [11474.833284] raid6test: test_disks(8, 37): faila= 8(D) failb= 37(D) OK [11474.833808] raid6test: test_disks(8, 38): faila= 8(D) failb= 38(D) OK [11474.834363] raid6test: test_disks(8, 39): faila= 8(D) failb= 39(D) OK [11474.834904] raid6test: test_disks(8, 40): faila= 8(D) failb= 40(D) OK [11474.835465] raid6test: test_disks(8, 41): faila= 8(D) failb= 41(D) OK [11474.836000] raid6test: test_disks(8, 42): faila= 8(D) failb= 42(D) OK [11474.836582] raid6test: test_disks(8, 43): faila= 8(D) la= 8(D) failb= 46(D) OK [11475.338188] raid6test: test_disks(8, 47): faila= 8(D) failb= 47(D) OK [11475.338782] raid6test: test_disks(8, 48): faila= 8(D) failb= 48(D) OK [11475.339394] raid6test: test_disks(8, 49): faila= 8(D) failb= 49(D) OK [11475.339894] raid6test: test_disks(8, 50): faila= 8(D) failb= 50(D) OK [11475.340423] raid6test: test_disks(8, 51): faila= 8(D) failb= 51(D) OK [11475.340946] raid6test: test_disks(8, 52): faila= 8(D) failb= 52(D) OK [11475.341504] raid6test: test_disks(8, 53): faila= 8(D) failb= 53(D) OK [11475.342009] raid6test: test_disks(8, 54): faila= 8(D) failb= 54(D) OK [11475.342564] raid6test: test_disks(8, 55): faila= 8(D) failb= 55(D) OK [11475.343083] raid6test: test_disks(8, 56): faila= 8(D) failb= 56(D) OK [11475.343640] raid6test: test_disks(8, 57): faila= 8(D) failb= 57(D) OK [11475.344243] raid6test: test_disks(8, 58): faila= 8(D) failb= 58(D) OK [11475.344814] raid6test: test_disks(8, 59): faila= 8(D) failb= 59(D) OK [11475.345359] raid6test: test_disks(8, 60): faila= 8(D) failb= 60(D) OK [11475.345869] raid6test: test_disks(8, 61): faila= 8(D) failb= 61(D) OK [11475.346402] raid6test: test_disks(8, 62): faila= 8(D) failb= 62(P) OK [11475.346938] raid6test: test_disks(8, 63): faila= 8(D) failb= 63(Q) OK [11475.374524[11475.847910] raid6test: test_disks(9, 13): faila= 9(D) failb= 13(D) OK [11475.848579] raid6test: test_disks(9, 14): faila= 9(D) failb= 14(D) OK [11475.849204] raid6test: test_disks(9, 15): faila= 9(D) failb= 15(D) OK [11475.849752] raid6test: test_disks(9, 16): faila= 9(D) failb= 16(D) OK [11475.850395] raid6test: test_disks(9, 17): faila= 9(D) failb= 17(D) OK [11475.850932] raid6test: test_disks(9, 18): faila= 9(D) failb= 18(D) OK [11475.851573] raid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [11475.852207] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [11475.852763] raid6test: test_disks(9, 21): faila= 9(D) failb= 21(D) OK [11475.853402] raid6test: test_disks(9, 22): faila= 9(D) failb= 22(D) OK [11475.853942] raid6test: test_disks(9, 23): faila= 9(D) failb= 23(D) OK [11475.854573] raid6test: test_disks(9, 24): faila= 9(D) failb= 24(D) OK [11475.855150] raid6test: test_disks(9, 25): faila= 9(D) failb= 25(D) OK [11475.855802] raid6test: test_disks(9, 26): faila= 9(D) failb= 26(D) OK [11475.856430] raid6test: test_disks(9, 27): faila= 9(D) failb= 27(D) OK [11475.856993] raid6test: test_disks(9, 28): faila= 9(D) failb= 28(D) OK [11475.857588] raid6testaid6test: test_disks(9, 32): faila= 9(D) failb= 32(D) OK [11476.358499] raid6test: test_disks(9, 33): faila= 9(D) failb= 33(D) OK [11476.359029] raid6test: test_disks(9, 34): faila= 9(D) failb= 34(D) OK [11476.359627] raid6test: test_disks(9, 35): faila= 9(D) failb= 35(D) OK [11476.360153] raid6test: test_disks(9, 36): faila= 9(D) failb= 36(D) OK [11476.360703] raid6test: test_disks(9, 37): faila= 9(D) failb= 37(D) OK [11476.361279] raid6test: test_disks(9, 38): faila= 9(D) 
failb= 38(D) OK [11476.361828] raid6test: test_disks(9, 39): faila= 9(D) failb= 39(D) OK [11476.362405] raid6test: test_disks(9, 40): faila= 9(D) failb= 40(D) OK [11476.362892] raid6test: test_disks(9, 41): faila= 9(D) failb= 41(D) OK [11476.363466] raid6test: test_disks(9, 42): faila= 9(D) failb= 42(D) OK [11476.363994] raid6test: test_disks(9, 43): faila= 9(D) failb= 43(D) OK [11476.364556] raid6test: test_disks(9, 44): faila= 9(D) failb= 44(D) OK [11476.365091] raid6test: test_disks(9, 45): faila= 9(D) failb= 45(D) OK [11476.365691] raid6test: test_disks(9, 46): faila= 9(D) failb= 46(D) OK [11476.366301] raid6test: test_disks(9, 47): faila= 9(D) failb= 47(D) OK [11476.366825] raid6test: test_diisks(9, 51): faila= 9(D) failb= 51(D) OK [11476.867824] raid6test: test_disks(9, 52): faila= 9(D) failb= 52(D) OK [11476.868403] raid6test: test_disks(9, 53): faila= 9(D) failb= 53(D) OK [11476.868941] raid6test: test_disks(9, 54): faila= 9(D) failb= 54(D) OK [11476.869522] raid6test: test_disks(9, 55): faila= 9(D) failb= 55(D) OK [11476.870025] raid6test: test_disks(9, 56): faila= 9(D) failb= 56(D) OK [11476.870633] raid6test: test_disks(9, 57): faila= 9(D) failb= 57(D) OK [11476.871223] raid6test: test_disks(9, 58): faila= 9(D) failb= 58(D) OK [11476.871787] raid6test: test_disks(9, 59): faila= 9(D) failb= 59(D) OK [11476.872370] raid6test: test_disks(9, 60): faila= 9(D) failb= 60(D) OK [11476.872937] raid6test: test_disks(9, 61): faila= 9(D) failb= 61(D) OK [11476.873518] raid6test: test_disks(9, 62): faila= 9(D) failb= 62(P) OK [11476.874041] raid6test: test_disks(9, 63): faila= 9(D) failb= 63(Q) OK [11476.874629] raid6test: test_disks(10, 11): faila= 10(D) failb= 11(D) OK [11476.875160] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [11476.875733] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) [11477.257130] raid6test: test_disks(10, 16): faila= 10(D) failb= 16(D) OK [1isks(10, 17): faila= 10(D) failb= 17(D) OK [11477.377389] raid6test: test_disks(10, 18): faila= 10(D) failb= 18(D) OK [11477.377939] raid6test: test_disks(10, 19): faila= 10(D) failb= 19(D) OK [11477.378533] raid6test: test_disks(10, 20): faila= 10(D) failb= 20(D) OK [11477.379058] raid6test: test_disks(10, 21): faila= 10(D) failb= 21(D) OK [11477.379653] raid6test: test_disks(10, 22): faila= 10(D) failb= 22(D) OK [11477.380233] raid6test: test_disks(10, 23): faila= 10(D) failb= 23(D) OK [11477.380785] raid6test: test_disks(10, 24): faila= 10(D) failb= 24(D) OK [11477.381360] raid6test: test_disks(10, 25): faila= 10(D) failb= 25(D) OK [11477.381914] raid6test: test_disks(10, 26): faila= 10(D) failb= 26(D) OK [11477.382489] raid6test: test_disks(10, 27): faila= 10(D) failb= 27(D) OK [11477.383020] raid6test: test_disks(10, 28): faila= 10(D) failb= 28(D) OK [11477.383613] raid6test: test_disks(10, 29): faila= 10(D) failb= 29(D) OK [11477.384211] raid6test: test_disks(10, 30): faila= 10(D) failb= 30(D) OK [11477.384758] raid6test: test_disks(10, 31): faila= 10(D) failb= 31(D) OK [11477.385323] raid6test: test_ila= 10(D) failb= 34(D) OK [11477.785979] raid6test: test_disks(10, 35): fail[11477.868063] raid6test: test_disks(10, 36): faila= 10(D) failb= 36(D) OK [11477.886863] raid6test: test_disks(10, 37): faila= 10(D) failb= 37(D) OK [11477.887463] raid6test: test_disks(10, 38): faila= 10(D) failb= 38(D) OK [11477.887994] raid6test: test_disks(10, 39): faila= 10(D) failb= 39(D) OK [11477.888596] raid6test: test_disks(10, 40): faila= 10(D) failb= 40(D) OK [11477.889092] raid6test: test_disks(10, 41): faila= 
10(D) failb= 41(D) OK [11477.889651] raid6test: test_disks(10, 42): faila= 10(D) failb= 42(D) OK [11477.890135] raid6test: test_disks(10, 43): faila= 10(D) failb= 43(D) OK [11477.890704] raid6test: test_disks(10, 44): faila= 10(D) failb= 44(D) OK [11477.891241] raid6test: test_disks(10, 45): faila= 10(D) failb= 45(D) OK [11477.891756] raid6test: test_disks(10, 46): faila= 10(D) failb= 46(D) OK [11477.892293] raid6test: test_disks(10, 47): faila= 10(D) failb= 47(D) OK [11477.892805] raid6test: test_disks(10, 48): faila= 10(D) failb= 48(D) OK [11477.893340] raid6test: test_disks(10, 49): faila= 10(D) failb= 49(D) OK [11477.893917] raid6test: test_disks(10, 50): faila= 10(D) failb= 50(D) OK [11477.894533] raid6test: test_disks(10, 51): faila= 10(D) failb= 51(D) OK [11477.895075] raid6test: test_disks(10, 52): faila= 10(D) failb= 52(D) OK [11477.895711] raid6test: test_disks(10, 56): faila= 10(D) failb= 56(D) OK [11478.396694] raid6test: test_disks(10, 57): faila= 10(D) failb= 57(D) OK [11478.397283] raid6test: test_disks(10, 58): faila= 10(D) failb= 58(D) OK [11478.397826] raid6test: test_disks(10, 59): faila= 10(D) failb= 59(D) OK [11478.398402] raid6test: test_disks(10, 60): faila= 10(D) failb= 60(D) OK [11478.398957] raid6test: test_disks(10, 61): faila= 10(D) failb= 61(D) OK [11478.399526] raid6test: test_disks(10, 62): faila= 10(D) failb= 62(P) OK [11478.400072] raid6test: test_disks(10, 63): faila= 10(D) failb= 63(Q) OK [11478.400658] raid6test: test_disks(11, 12): faila= 11(D) failb= 12(D) OK [11478.401249] raid6test: test_disks(11, 13): faila= 11(D) failb= 13(D) OK [11478.401814] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [11478.402390] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [11478.402943] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [11478.403516] raid6test: test_disks(11, 17): faila= 11(D) failb= 17(D) OK [11478.404050] raid6test: test_disks(11, 18): faila= 11(D) failb= 18(D) OK [11478.404633] raid6test: test_disks(11, 19): faila= 11(D) failb= 19(D) OK [11478.405158] raid6test: test_disks(11, 20): faila= 11(D) failb= 20(D) OK [11478.405751] raid6test: test_diskisks(11, 24): faila= 11(D) failb= 24(D) OK [11478.906771] raid6test: test_disks(11, 25): faila= 11(D) failb= 25(D) OK [11478.907347] raid6test: test_disks(11, 26): faila= 11(D) failb= 26(D) OK [11478.907713] raid6test: test_disks(11, 27): faila= 11(D) failb= 27(D) OK [11478.908245] raid6test: test_disks(11, 28): faila= 11(D) failb= 28(D) OK [11478.908765] raid6test: test_disks(11, 29): faila= 11(D) failb= 29(D) OK [11478.909303] raid6test: test_disks(11, 30): faila= 11(D) failb= 30(D) OK [11478.909822] raid6test: test_disks(11, 31): faila= 11(D) failb= 31(D) OK [11478.910369] raid6test: test_disks(11, 32): faila= 11(D) failb= 32(D) OK [11478.910846] raid6test: test_disks(11, 33): faila= 11(D) failb= 33(D) OK [11478.911382] raid6test: test_disks(11, 34): faila= 11(D) failb= 34(D) OK [11478.911891] raid6test: test_disks(11, 35): faila= 11(D) failb= 35(D) OK [11478.912422] raid6test: test_disks(11, 36): faila= 11(D) failb= 36(D) OK [11478.912942] raid6test: test_disks(11, 37): faila= 11(D) failb= 37(D) OK [11478.913520] raid6test: test_disks(11, 38): faila= 11(D) failb= 38(D) OK [11478.914027] raid6test: test_disks(11, 39): faila= 11(D) failb= 39(D) OK [11478.914564] raid6test: test_disisks(11, 43): faila= 11(D) failb= 43(D) OK [11479.416056] raid6test: test_disks(11, 44): faila= 11(D) failb= 44(D) OK [11479.416705] raid6test: test_disks(11, 45): faila= 11(D) failb= 45(D) OK 
[11479.417244] raid6test: test_disks(11, 46): faila= 11(D) failb= 46(D) OK [11479.417765] raid6test: test_disks(11, 47): faila= 11(D) failb= 47(D) OK [11479.418412] raid6test: test_disks(11, 48): faila= 11(D) failb= 48(D) OK [11479.418938] raid6test: test_disks(11, 49): faila= 11(D) failb= 49(D) OK [11479.419483] raid6test: test_disks(11, 50): faila= 11(D) failb= 50(D) OK [11479.419985] raid6test: test_disks(11, 51): faila= 11(D) failb= 51(D) OK [11479.420562] raid6test: test_disks(11, 52): faila= 11(D) failb= 52(D) OK [11479.421092] raid6test: test_disks(11, 53): faila= 11(D) failb= 53(D) OK [11479.421622] raid6test: test_disks(11, 54): faila= 11(D) failb= 54(D) OK [11479.422246] raid6test: test_disks(11, 55): faila= 11(D) failb= 55(D) OK [11479.422784] raid6test: test_disks(11, 56): faila= 11(D) failb= 56(D) OK [11479.423685] raid6test: test_disks(11, 57): faila= 11(D) failb= 57(D) OK [11479.424318] raid6test: test_disks(11, 58): faila= 11(D) failb= 58(D) OK [11479.424859] raid6test: test_disks(11, 59): faila= 11(D) fila= 11(D) failb= 62(P) OK [11479.925905] raid6test: test_disks(11, 63): faila= 11(D) failb= 63(Q) OK [11479.926460] raid6test: test_disks(12, 13): faila= 12(D) failb= 13(D) OK [11479.926956] raid6test: test_disks(12, 14): faila= 12(D) failb= 14(D) OK [11479.927533] raid6test: test_disks(12, 15): faila= 12(D) failb= 15(D) OK [11479.928024] raid6test: test_disks(12, 16): faila= 12(D) failb= 16(D) OK [11479.928591] raid6test: test_disks(12, 17): faila= 12(D) failb= 17(D) OK [11479.929084] raid6test: test_disks(12, 18): faila= 12(D) failb= 18(D) OK [11479.929618] raid6test: test_disks(12, 19): faila= 12(D) failb= 19(D) OK [11479.930106] raid6test: test_disks(12, 20): faila= 12(D) failb= 20(D) OK [11479.930653] raid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [11479.931137] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(D) OK [11479.931686] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(D) OK [11479.932185] raid6test: test_disks(12, 24): faila= 12(D) failb= 24(D) OK [11479.932762] raid6test: test_disks(12, 25): faila= 12(D) failb= 25(D) OK [11479.933306] raid6test: test_disks(12, 26): faila= 12(D) failb= 26(D) OK [11479.933843] raid6test: test_disks(12, 27): fail[11480.334595] raid6test: test_disks(12, 30): faila= 12(D) failb= 30(D) OK [1isks(12, 31): faila= 12(D) failb= 31(D) OK [11480.435432] raid6test: test_disks(12, 32): faila= 12(D) failb= 32(D) OK [11480.435988] raid6test: test_disks(12, 33): faila= 12(D) failb= 33(D) OK [11480.436559] raid6test: test_disks(12, 34): faila= 12(D) failb= 34(D) OK [11480.437087] raid6test: test_disks(12, 35): faila= 12(D) failb= 35(D) OK [11480.437680] raid6test: test_disks(12, 36): faila= 12(D) failb= 36(D) OK [11480.438240] raid6test: test_disks(12, 37): faila= 12(D) failb= 37(D) OK [11480.438795] raid6test: test_disks(12, 38): faila= 12(D) failb= 38(D) OK [11480.439368] raid6test: test_disks(12, 39): faila= 12(D) failb= 39(D) OK [11480.439924] raid6test: test_disks(12, 40): faila= 12(D) failb= 40(D) OK [11480.440495] raid6test: test_disks(12, 41): faila= 12(D) failb= 41(D) OK [11480.441023] raid6test: test_disks(12, 42): faila= 12(D) failb= 42(D) OK [11480.441598] raid6test: test_disks(12, 43): faila= 12(D) failb= 43(D) OK [11480.442092] raid6test: test_disks(12, 44): faila= 12(D) failb= 44(D) OK [11480.442686] raid6test: test_disks(12, 45): faila= 12(D) failb= 45(D) OK [11480.443265] raid6test: test_ila= 12(D) failb= 48(D) OK [11480.843927] raid6test: test_disks(12, 49): fail[11480.926101] raid6test: 
test_disks(12, 50): faila= 12(D) failb= 50(D) OK [11480.944803] raid6test: test_disks(12, 51): faila= 12(D) failb= 51(D) OK [11480.945332] raid6test: test_disks(12, 52): faila= 12(D) failb= 52(D) OK [11480.945851] raid6test: test_disks(12, 53): faila= 12(D) failb= 53(D) OK [11480.946393] raid6test: test_disks(12, 54): faila= 12(D) failb= 54(D) OK [11480.946923] raid6test: test_disks(12, 55): faila= 12(D) failb= 55(D) OK [11480.947463] raid6test: test_disks(12, 56): faila= 12(D) failb= 56(D) OK [11480.947950] raid6test: test_disks(12, 57): faila= 12(D) failb= 57(D) OK [11480.948489] raid6test: test_disks(12, 58): faila= 12(D) failb= 58(D) OK [11480.948992] raid6test: test_disks(12, 59): faila= 12(D) failb= 59(D) OK [11480.949525] raid6test: test_disks(12, 60): faila= 12(D) failb= 60(D) OK [11480.950026] raid6test: test_disks(12, 61): faila= 12(D) failb= 61(D) OK [11480.950558] raid6test: test_disks(12, 62): faila= 12(D) failb= 62(P) OK [11480.951075] raid6test: test_disks(12, 63): faila= 12(D) failb= 63(Q) OK [11480.951610] raid6test: test_disks(13, 14): faila= 13(D) failb= 14(D) OK [11480.952123] raid6test: teila= 13(D) failb= 17(D) OK [11481.352791] raid6test: test_disks(13, 18): fail[11481.434861] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [11481.453734] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [11481.454322] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK [11481.454854] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(D) OK [11481.455409] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(D) OK [11481.455898] raid6test: test_disks(13, 24): faila= 13(D) failb= 24(D) OK [11481.456455] raid6test: test_disks(13, 25): faila= 13(D) failb= 25(D) OK [11481.456984] raid6test: test_disks(13, 26): faila= 13(D) failb= 26(D) OK [11481.457538] raid6test: test_disks(13, 27): faila= 13(D) failb= 27(D) OK [11481.458048] raid6test: test_disks(13, 28): faila= 13(D) failb= 28(D) OK [11481.458601] raid6test: test_disks(13, 29): faila= 13(D) failb= 29(D) OK [11481.459113] raid6test: test_disks(13, 30): faila= 13(D) failb= 30(D) OK [11481.459687] raid6test: test_disks(13, 31): faila= 13(D) failb= 31(D) OK [11481.460165] raid6test: test_disks(13, 32): faila= 13(D) failb= 32(D) OK [11481.460738] raid6test: test_disks(13, 33): faila= 13(D) failb= 33(D) OK [11481.461250] raid6test: test_disks(13, 34): faila= 13(D) failb= 34(D) OK [b= 37(D) OK [11481.962064] raid6test: test_disks(13, 38): faila= 13(D) failb= 38(D) OK [11481.962640] raid6test: test_disks(13, 39): faila= 13(D) failb= 39(D) OK [11481.963123] raid6test: test_disks(13, 40): faila= 13(D) failb= 40(D) OK [11481.963708] raid6test: test_disks(13, 41): faila= 13(D) failb= 41(D) OK [11481.964294] raid6test: test_disks(13, 42): faila= 13(D) failb= 42(D) OK [11481.964824] raid6test: test_disks(13, 43): faila= 13(D) failb= 43(D) OK [11481.965380] raid6test: test_disks(13, 44): faila= 13(D) failb= 44(D) OK [11481.965912] raid6test: test_disks(13, 45): faila= 13(D) failb= 45(D) OK [11481.966457] raid6test: test_disks(13, 46): faila= 13(D) failb= 46(D) OK [11481.966992] raid6test: test_disks(13, 47): faila= 13(D) failb= 47(D) OK [11481.967531] raid6test: test_disks(13, 48): faila= 13(D) failb= 48(D) OK [11481.968006] raid6test: test_disks(13, 49): faila= 13(D) failb= 49(D) OK [11481.968552] raid6test: test_disks(13, 50): faila= 13(D) failb= 50(D) OK [11481.969031] raid6test: test_disks(13, 51): faila= 13(D) failb= 51(D) OK [11481.969572] raid6test: test_disks(13, 52): faila= 13(D) failb= 52(D) OK 
[11481.970076] raid6test: test_disks(13, 53): faila= 13(D) failb= 53(D) OK [11481.970624] rila= 13(D) failb= 56(D) OK [11482.371285] raid6test: test_disks(13, 57): fail[11482.453443] raid6test: test_disks(13, 58): faila= 13(D) failb= 58(D) OK [11482.472148] raid6test: test_disks(13, 59): faila= 13(D) failb= 59(D) OK [11482.472701] raid6test: test_disks(13, 60): faila= 13(D) failb= 60(D) OK [11482.473203] raid6test: test_disks(13, 61): faila= 13(D) failb= 61(D) OK [11482.473747] raid6test: test_disks(13, 62): faila= 13(D) failb= 62(P) OK [11482.474340] raid6test: test_disks(13, 63): faila= 13(D) failb= 63(Q) OK [11482.474861] raid6test: test_disks(14, 15): faila= 14(D) failb= 15(D) OK [11482.475414] raid6test: test_disks(14, 16): faila= 14(D) failb= 16(D) OK [11482.475938] raid6test: test_disks(14, 17): faila= 14(D) failb= 17(D) OK [11482.476492] raid6test: test_disks(14, 18): faila= 14(D) failb= 18(D) OK [11482.477010] raid6test: test_disks(14, 19): faila= 14(D) failb= 19(D) OK [11482.477565] raid6test: test_disks(14, 20): faila= 14(D) failb= 20(D) OK [11482.478077] raid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [11482.478628] raid6test: test_disks(14, 22): faila= 14(D) failb= 22(D) OK [11482.479137] raid6test: test_disks(14, 23): faila= 14(D) failb= 23(D) OK [11482.860621] raid6test: test_disks(14, 26): faila= 14(D) failb= 26(D) OK [1isks(14, 27): faila= 14(D) failb= 27(D) OK [11482.980470] raid6test: test_disks(14, 28): faila= 14(D) failb= 28(D) OK [11482.981006] raid6test: test_disks(14, 29): faila= 14(D) failb= 29(D) OK [11482.981578] raid6test: test_disks(14, 30): faila= 14(D) failb= 30(D) OK [11482.982086] raid6test: test_disks(14, 31): faila= 14(D) failb= 31(D) OK [11482.982641] raid6test: test_disks(14, 32): faila= 14(D) failb= 32(D) OK [11482.983151] raid6test: test_disks(14, 33): faila= 14(D) failb= 33(D) OK [11482.983729] raid6test: test_disks(14, 34): faila= 14(D) failb= 34(D) OK [11482.984268] raid6test: test_disks(14, 35): faila= 14(D) failb= 35(D) OK [11482.984810] raid6test: test_disks(14, 36): faila= 14(D) failb= 36(D) OK [11482.985359] raid6test: test_disks(14, 37): faila= 14(D) failb= 37(D) OK [11482.985853] raid6test: test_disks(14, 38): faila= 14(D) failb= 38(D) OK [11482.986432] raid6test: test_disks(14, 39): faila= 14(D) failb= 39(D) OK [11482.986953] raid6test: test_disks(14, 40): faila= 14(D) failb= 40(D) OK [11482.987533] raid6test: test_disks(14, 41): faila= 14(D) failb= 41(D) OK [11482.988032] raid6test: test_db= 44(D) OK [11483.388658] raid6test: test_disks(14, 45): faila= 14(D) failb= 45(D) OK [11483.389322] raid6test: test_disks(14, 46): faila= 14(D) failb= 46(D) OK [11483.389957] raid6test: test_disks(14, 47): faila= 14(D) failb= 47(D) OK [11483.390515] raid6test: test_disks(14, 48): faila= 14(D) failb= 48(D) OK [11483.391067] raid6test: test_disks(14, 49): faila= 14(D) failb= 49(D) OK [11483.391628] raid6test: test_disks(14, 50): faila= 14(D) failb= 50(D) OKaid6test: test_disks(14, 51): faila= 14(D) failb= 51(D) OK [11483.492508] raid6test: test_disks(14, 52): faila= 14(D) failb= 52(D) OK [11483.493035] raid6test: test_disks(14, 53): faila= 14(D) failb= 53(D) OK [11483.493588] raid6test: test_disks(14, 54): faila= 14(D) failb= 54(D) OK [11483.494097] raid6test: test_disks(14, 55): faila= 14(D) failb= 55(D) OK [11483.494683] raid6test: test_disks(14, 56): faila= 14(D) failb= 56(D) OK [11483.495198] raid6test: test_disks(14, 57): faila= 14(D) failb= 57(D) OK [11483.495760] raid6test: test_disks(14, 58): faila= 14(D) failb= 58(D) OK [11483.496325] 
raid6test: test_disks(14, 59): faila= 14(D) failb= 59(D) OK [11483.496821] raid6test: test_aid6test: test_disks(14, 63): faila= 14(D) failb= 63(Q) OK [11483.998065] raid6test: test_disks(15, 16): faila= 15(D) failb= 16(D) OK [11483.998622] raid6test: test_disks(15, 17): faila= 15(D) failb= 17(D) OK [11483.999132] raid6test: test_disks(15, 18): faila= 15(D) failb= 18(D) OK [11483.999680] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [11484.000183] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [11484.000758] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [11484.001272] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(D) OK [11484.001773] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(D) OK [11484.002324] raid6test: test_disks(15, 24): faila= 15(D) failb= 24(D) OK [11484.002780] raid6test: test_disks(15, 25): faila= 15(D) failb= 25(D) OK [11484.003333] raid6test: test_disks(15, 26): faila= 15(D) failb= 26(D) OK [11484.003829] raid6test: test_disks(15, 27): faila= 15(D) failb= 27(D) OK [11484.004416] raid6test: test_disks(15, 28): faila= 15(D) failb= 28(D) OK [11484.004978] raid6test: test_disks(15, 29): faila= 15(D) failb= 29(D) OK [11484.005534] raid6test: test_disks(15, 30): faila= 15(D) failb= 30(D) OK [11484.006006] raid6test: test_disks(15, 31): faila= 15(D) failb= 31(D) OK [11484.006562] raid6aid6test: test_disks(15, 35): faila= 15(D) failb= 35(D) OK [11484.507730] raid6test: test_disks(15, 36): faila= 15(D) failb= 36(D) OK [11484.508209] raid6test: test_disks(15, 37): faila= 15(D) failb= 37(D) OK [11484.508779] raid6test: test_disks(15, 38): faila= 15(D) failb= 38(D) OK [11484.509299] raid6test: test_disks(15, 39): faila= 15(D) failb= 39(D) OK [11484.509794] raid6test: test_disks(15, 40): faila= 15(D) failb= 40(D) OK [11484.510370] raid6test: test_disks(15, 41): faila= 15(D) failb= 41(D) OK [11484.510865] raid6test: test_disks(15, 42): faila= 15(D) failb= 42(D) OK [11484.511417] raid6test: test_disks(15, 43): faila= 15(D) failb= 43(D) OK [11484.511949] raid6test: test_disks(15, 44): faila= 15(D) failb= 44(D) OK [11484.512504] raid6test: test_disks(15, 45): faila= 15(D) failb= 45(D) OK [11484.513036] raid6test: test_disks(15, 46): faila= 15(D) failb= 46(D) OK [11484.513587] raid6test: test_disks(15, 47): faila= 15(D) failb= 47(D) OK [11484.514061] raid6test: test_disks(15, 48): faila= 15(D) failb= 48(D) OK [11484.514623] raid6test: test_disks(15, 49): faila= 15(D) failb= 49(D) OK [11484.515095] raid6test: test_disks(15, 50): faila= 15(D) failb= 50(D) OK [11484.515647] raila= 15(D) failb= 53(D) OK [11484.916339] raid6test: test_disks(15, 54): fail[11484.998182] raid6test: test_disks(15, 55): faila= 15(D) failb= 55(D) OK [11485.017370] raid6test: test_disks(15, 56): faila= 15(D) failb= 56(D) OK [11485.017933] raid6test: test_disks(15, 57): faila= 15(D) failb= 57(D) OK [11485.018483] raid6test: test_disks(15, 58): faila= 15(D) failb= 58(D) OK [11485.019045] raid6test: test_disks(15, 59): faila= 15(D) failb= 59(D) OK [11485.019620] raid6test: test_disks(15, 60): faila= 15(D) failb= 60(D) OK [11485.020165] raid6test: test_disks(15, 61): faila= 15(D) failb= 61(D) OK [11485.020718] raid6test: test_disks(15, 62): faila= 15(D) failb= 62(P) OK [11485.021317] raid6test: test_disks(15, 63): faila= 15(D) failb= 63(Q) OK [11485.021882] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [11485.022458] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [11485.023001] raid6test: test_disks(16, 19): faila= 16(D) failb= 19(D) OK 
[11485.023566] raid6test: test_disks(16, 20): faila= 16(D) failb= 20(D) OK [11485.024062] raid6test: test_disks(16, 21): faila= 16(D) failb= 21(D) OK [11485.024618] raid6test: test_disks(16, 22): faila= 16(D) failb= 22(D) OK [11485.025171] raid6test: test_disks(16, 23): faila= 16(D) failb= 23(D) OK [11485.025749] raid6test: testila= 16(D) failb= 26(D) OK [11485.426480] raid6test: test_disks(16, 27): fail[11485.508430] raid6test: test_disks(16, 28): faila= 16(D) failb= 28(D) OK [11485.527420] raid6test: test_disks(16, 29): faila= 16(D) failb= 29(D) OK [11485.527953] raid6test: test_disks(16, 30): faila= 16(D) failb= 30(D) OK [11485.528500] raid6test: test_disks(16, 31): faila= 16(D) failb= 31(D) OK [11485.529008] raid6test: test_disks(16, 32): faila= 16(D) failb= 32(D) OK [11485.529564] raid6test: test_disks(16, 33): faila= 16(D) failb= 33(D) OK [11485.530084] raid6test: test_disks(16, 34): faila= 16(D) failb= 34(D) OK [11485.530645] raid6test: test_disks(16, 35): faila= 16(D) failb= 35(D) OK [11485.531119] raid6test: test_disks(16, 36): faila= 16(D) failb= 36(D) OK [11485.531675] raid6test: test_disks(16, 37): faila= 16(D) failb= 37(D) OK [11485.532177] raid6test: test_disks(16, 38): faila= 16(D) failb= 38(D) OK [11485.532729] raid6test: test_disks(16, 39): faila= 16(D) failb= 39(D) OK [11485.533290] raid6test: test_disks(16, 40): faila= 16(D) failb= 40(D) OK [11485.533831] raid6test: test_disks(16, 41): faila= 16(D) failb= 41(D) OK [11485.534378] raid6test: test_disks(16, 42): faila= 16(D) failb= 42(D) OK [11485.534904] raid6test: test_disks(16, 43): faila= 16(D) failb= 43(D) OK [11485.535453] raid6test: test_disks(16, 44): faila= 16(D) failb= 44(D) OK [11485.535985] raid6test: test_disks(16, 45): faila= 16(D) failb= 45(D) OK [11485.536535] raid6test: test_disks(16, 46): faila= 16(D) failb= 46(D) OK [11485.537062] raid6test: test_disks(16, 47): faila= 16(D) failb= 47(D) OK [11485.56[11486.037901] raid6test: test_disks(16, 51): faila= 16(D) failb= 51(D) OK [11486.038460] raid6test: test_disks(16, 52): faila= 16(D) failb= 52(D) OK [11486.038995] raid6test: test_disks(16, 53): faila= 16(D) failb= 53(D) OK [11486.039550] raid6test: test_disks(16, 54): faila= 16(D) failb= 54(D) OK [11486.040078] raid6test: test_disks(16, 55): faila= 16(D) failb= 55(D) OK [11486.040963] raid6test: test_disks(16, 56): faila= 16(D) failb= 56(D) OK [11486.041385] raid6test: test_disks(16, 57): faila= 16(D) failb= 57(D) OK [11486.041906] raid6test: test_disks(16, 58): faila= 16(D) failb= 58(D) OK [11486.042455] raid6test: test_disks(16, 59): faila= 16(D) failb= 59(D) OK [11486.042986] raid6test: test_disks(16, 60): faila= 16(D) failb= 60(D) OK [11486.043515] raid6test: test_disks(16, 61): faila= 16(D) failb= 61(D) OK [11486.044011] raid6test: test_disks(16, 62): faila= 16(D) failb= 62(P) OK [11486.044548] raid6test: test_disks(16, 63): faila= 16(D) failb= 63(Q) OK [11486.045075] raid6test: test_disks(17, 18): faila= 17(D) failb= 18(D) OK [11486.045623] raid6test: test_disks(17, 19): faila= 17(D) failb= 19(D) OK [11486.046096] raid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [b= 23(D) OK [11486.546905] raid6test: test_disks(17, 24): faila= 17(D) failb= 24(D) OK [11486.547475] raid6test: test_disks(17, 25): faila= 17(D) failb= 25(D) OK [11486.548007] raid6test: test_disks(17, 26): faila= 17(D) failb= 26(D) OK [11486.548571] raid6test: test_disks(17, 27): faila= 17(D) failb= 27(D) OK [11486.549131] raid6test: test_disks(17, 28): faila= 17(D) failb= 28(D) OK [11486.549665] raid6test: test_disks(17, 29): 
faila= 17(D) failb= 29(D) OK [11486.550181] raid6test: test_disks(17, 30): faila= 17(D) failb= 30(D) OK [11486.550742] raid6test: test_disks(17, 31): faila= 17(D) failb= 31(D) OK [11486.551288] raid6test: test_disks(17, 32): faila= 17(D) failb= 32(D) OK [11486.551846] raid6test: test_disks(17, 33): faila= 17(D) failb= 33(D) OK [11486.552442] raid6test: test_disks(17, 34): faila= 17(D) failb= 34(D) OK [11486.552987] raid6test: test_disks(17, 35): faila= 17(D) failb= 35(D) OK [11486.553535] raid6test: test_disks(17, 36): faila= 17(D) failb= 36(D) OK [11486.554056] raid6test: test_disks(17, 37): faila= 17(D) failb= 37(D) OK [11486.554620] raid6test: test_disks(17, 38): faila= 17(D) failb= 38(D) OK [11486.555117] raid6test: test_disks(17, 39): faila= 17(D) failb= 39(D) OK [11486.555650] raid6test: test_disks(17, 43): faila= 17(D) failb= 43(D) OK [11487.056451] raid6test: test_disks(17, 44): faila= 17(D) failb= 44(D) OK [11487.056992] raid6test: test_disks(17, 45): faila= 17(D) failb= 45(D) OK [11487.057556] raid6test: test_disks(17, 46): faila= 17(D) failb= 46(D) OK [11487.058088] raid6test: test_disks(17, 47): faila= 17(D) failb= 47(D) OK [11487.058633] raid6test: test_disks(17, 48): faila= 17(D) failb= 48(D) OK [11487.059187] raid6test: test_disks(17, 49): faila= 17(D) failb= 49(D) OK [11487.059816] raid6test: test_disks(17, 50): faila= 17(D) failb= 50(D) OK [11487.060374] raid6test: test_disks(17, 51): faila= 17(D) failb= 51(D) OK [11487.060933] raid6test: test_disks(17, 52): faila= 17(D) failb= 52(D) OK [11487.061479] raid6test: test_disks(17, 53): faila= 17(D) failb= 53(D) OK [11487.062003] raid6test: test_disks(17, 54): faila= 17(D) failb= 54(D) OK [11487.062580] raid6test: test_disks(17, 55): faila= 17(D) failb= 55(D) OK [11487.063082] raid6test: test_disks(17, 56): faila= 17(D) failb= 56(D) OK [11487.063809] raid6test: test_disks(17, 57): faila= 17(D) failb= 57(D) OK [11487.064422] raid6test: test_disks(17, 58): faila= 17(D) failb= 58(D) OK [11487.064930] raid6test: test_disks(17, 59): faila= 17(D) failb= 59(D) OK [11487.065482] raidaid6test: test_disks(17, 63): faila= 17(D) failb= 63(Q) OK [11487.566285] raid6test: test_disks(18, 19): faila= 18(D) failb= 19(D) OK [11487.566832] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [11487.567387] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK [11487.567847] raid6test: test_disks(18, 22): faila= 18(D) failb= 22(D) OK [11487.568408] raid6test: test_disks(18, 23): faila= 18(D) failb= 23(D) OK [11487.568905] raid6test: test_disks(18, 24): faila= 18(D) failb= 24(D) OK [11487.569465] raid6test: test_disks(18, 25): faila= 18(D) failb= 25(D) OK [11487.569992] raid6test: test_disks(18, 26): faila= 18(D) failb= 26(D) OK [11487.570552] raid6test: test_disks(18, 27): faila= 18(D) failb= 27(D) OK [11487.571087] raid6test: test_disks(18, 28): faila= 18(D) failb= 28(D) OK [11487.571641] raid6test: test_disks(18, 29): faila= 18(D) failb= 29(D) OK [11487.572112] raid6test: test_disks(18, 30): faila= 18(D) failb= 30(D) OK [11487.572662] raid6test: test_disks(18, 31): faila= 18(D) failb= 31(D) OK [11487.573140] raid6test: test_disks(18, 32): faila= 18(D) failb= 32(D) OK [11487.573695] raid6test: test_disks(18, 33): faila= 18(D) failb= 33(D) OK [11487.574170] raid6test: test_disks(18, 34): isks(18, 37): faila= 18(D) failb= 37(D) OK [11488.075090] raid6test: test_disks(18, 38): faila= 18(D) failb= 38(D) OK [11488.075642] raid6test: test_disks(18, 39): faila= 18(D) failb= 39(D) OK [11488.076117] raid6test: test_disks(18, 40): faila= 18(D) 
failb= 40(D) OK [11488.076668] raid6test: test_disks(18, 41): faila= 18(D) failb= 41(D) OK [11488.077142] raid6test: test_disks(18, 42): faila= 18(D) failb= 42(D) OK [11488.077690] raid6test: test_disks(18, 43): faila= 18(D) failb= 43(D) OK [11488.078162] raid6test: test_disks(18, 44): faila= 18(D) failb= 44(D) OK [11488.078715] raid6test: test_disks(18, 45): faila= 18(D) failb= 45(D) OK [11488.079221] raid6test: test_disks(18, 46): faila= 18(D) failb= 46(D) OK [11488.079775] raid6test: test_disks(18, 47): faila= 18(D) failb= 47(D) OK [11488.080325] raid6test: test_disks(18, 48): faila= 18(D) failb= 48(D) OK [11488.080784] raid6test: test_disks(18, 49): faila= 18(D) failb= 49(D) OK [11488.081342] raid6test: test_disks(18, 50): faila= 18(D) failb= 50(D) OK [11488.081836] raid6test: test_disks(18, 51): faila= 18(D) failb= 51(D) OK [11488.082391] raid6test: test_disks(18, 52): faila= 18(D) failb= 52(D) OK [11488.082884] raid6test: test_disks(18, 53): faila= 18(D) failb= 53(D) OK [11488.083435] raid6test: test_diskisks(18, 57): faila= 18(D) failb= 57(D) OK [11488.584334] raid6test: test_disks(18, 58): faila= 18(D) failb= 58(D) OK [11488.584889] raid6test: test_disks(18, 59): faila= 18(D) failb= 59(D) OK [11488.585449] raid6test: test_disks(18, 60): faila= 18(D) failb= 60(D) OK [11488.585981] raid6test: test_disks(18, 61): faila= 18(D) failb= 61(D) OK [11488.586540] raid6test: test_disks(18, 62): faila= 18(D) failb= 62(P) OK [11488.587116] raid6test: test_disks(18, 63): faila= 18(D) failb= 63(Q) OK [11488.587696] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [11488.588189] raid6test: test_disks(19, 21): faila= 19(D) failb= 21(D) OK [11488.588745] raid6test: test_disks(19, 22): faila= 19(D) failb= 22(D) OK [11488.589255] raid6test: test_disks(19, 23): faila= 19(D) failb= 23(D) OK [11488.589819] raid6test: test_disks(19, 24): faila= 19(D) failb= 24(D) OK [11488.590381] raid6test: test_disks(19, 25): faila= 19(D) failb= 25(D) OK [11488.590872] raid6test: test_disks(19, 26): faila= 19(D) failb= 26(D) OK [11488.591426] raid6test: test_disks(19, 27): faila= 19(D) failb= 27(D) OK [11488.591922] raid6test: test_db= 30(D) OK [11488.992604] raid6test: test_disks(19, 31): faila= 19(D) failb=aid6test: test_disks(19, 32): faila= 19(D) failb= 32(D) OK [11489.093509] raid6test: test_disks(19, 33): faila= 19(D) failb= 33(D) OK [11489.094039] raid6test: test_disks(19, 34): faila= 19(D) failb= 34(D) OK [11489.094623] raid6test: test_disks(19, 35): faila= 19(D) failb= 35(D) OK [11489.095167] raid6test: test_disks(19, 36): faila= 19(D) failb= 36(D) OK [11489.095735] raid6test: test_disks(19, 37): faila= 19(D) failb= 37(D) OK [11489.096245] raid6test: test_disks(19, 38): faila= 19(D) failb= 38(D) OK [11489.096823] raid6test: test_disks(19, 39): faila= 19(D) failb= 39(D) OK [11489.097341] raid6test: test_disks(19, 40): faila= 19(D) failb= 40(D) OK [11489.097799] raid6test: test_disks(19, 41): faila= 19(D) failb= 41(D) OK [11489.098352] raid6test: test_disks(19, 42): faila= 19(D) failb= 42(D) OK [11489.098845] raid6test: test_disks(19, 43): faila= 19(D) failb= 43(D) OK [11489.099407] raid6test: test_disks(19, 44): faila= 19(D) failb= 44(D) OK [11489.099898] raid6test: test_disks(19, 45): faila= 19(D) failb= 45(D) OK [11489.100446] raid6test: test_disks(19, 46): faila= 19(D) failb= 46(D) OK [11489.100940] raid6test: test_disks(19, 47b= 49(D) OK [11489.501544] raid6test: test_disks(19, 50): faila= 19(D) failb= 50(D) OK [11489.502093] raid6test: test_disks(19, 51): faila= 19(D) failb= 5aid6test: 
test_disks(19, 52): faila= 19(D) failb= 52(D) OK [11489.602960] raid6test: test_disks(19, 53): faila= 19(D) failb= 53(D) OK [11489.603519] raid6test: test_disks(19, 54): faila= 19(D) failb= 54(D) OK [11489.604051] raid6test: test_disks(19, 55): faila= 19(D) failb= 55(D) OK [11489.604628] raid6test: test_disks(19, 56): faila= 19(D) failb= 56(D) OK [11489.605143] raid6test: test_disks(19, 57): faila= 19(D) failb= 57(D) OK [11489.605717] raid6test: test_disks(19, 58): faila= 19(D) failb= 58(D) OK [11489.606265] raid6test: test_disks(19, 59): faila= 19(D) failb= 59(D) OK [11489.606862] raid6test: test_disks(19, 60): faila= 19(D) failb= 60(D) OK [11489.607424] raid6test: test_disks(19, 61): faila= 19(D) failb= 61(D) OK [11489.607880] raid6test: test_disks(19, 62): faila= 19(D) failb= 62(P) OK [11489.608450] raid6test: test_disks(19, 63): faila= 19(D) failb= 63(Q) OK [11489.608976] raid6test: test_disks(20, 21): faila= 20(D) failb= 21(D) OK [11489.63656aid6test: test_disks(20, 24): faila= 20(D) failb= 24(D) OK [11490.010145] raib= 25(D) OK [11490.110533] raid6test: test_disks(20, 26): faila= 20(D) failb= 26(D) OK [11490.111074] raid6test: test_disks(20, 27): faila= 20(D) failb= 27(D) OK [11490.111641] raid6test: test_disks(20, 28): faila= 20(D) failb= 28(D) OK [11490.112151] raid6test: test_disks(20, 29): faila= 20(D) failb= 29(D) OK [11490.112709] raid6test: test_disks(20, 30): faila= 20(D) failb= 30(D) OK [11490.113181] raid6test: test_disks(20, 31): faila= 20(D) failb= 31(D) OK [11490.113735] raid6test: test_disks(20, 32): faila= 20(D) failb= 32(D) OK [11490.114210] raid6test: test_disks(20, 33): faila= 20(D) failb= 33(D) OK [11490.114774] raid6test: test_disks(20, 34): faila= 20(D) failb= 34(D) OK [11490.115323] raid6test: test_disks(20, 35): faila= 20(D) failb= 35(D) OK [11490.115822] raid6test: test_disks(20, 36): faila= 20(D) failb= 36(D) OK [11490.116384] raid6test: test_disks(20, 37): faila= 20(D) failb= 37(D) OK [11490.116881] raid6test: test_disks(20, 38): faila= 20(D) failb= 38(D) OK [11490.117439] raid6test: test_disks(20, 39): faila= 20(D) failb= 39(D) OK [11490.117938] raid6test: test_disks(20, 40): faila= 20(D) fai[11490.518629] raid6test: test_disks(20, 43): faila= 20(D) failb= 43(D) OK [1isks(20, 44): faila= 20(D) failb= 44(D) OK [11490.619715] raid6test: test_disks(20, 45): faila= 20(D) failb= 45(D) OK [11490.620259] raid6test: test_disks(20, 46): faila= 20(D) failb= 46(D) OK [11490.620828] raid6test: test_disks(20, 47): faila= 20(D) failb= 47(D) OK [11490.621420] raid6test: test_disks(20, 48): faila= 20(D) failb= 48(D) OK [11490.621909] raid6test: test_disks(20, 49): faila= 20(D) failb= 49(D) OK [11490.622444] raid6test: test_disks(20, 50): faila= 20(D) failb= 50(D) OK [11490.622968] raid6test: test_disks(20, 51): faila= 20(D) failb= 51(D) OK [11490.623499] raid6test: test_disks(20, 52): faila= 20(D) failb= 52(D) OK [11490.624029] raid6test: test_disks(20, 53): faila= 20(D) failb= 53(D) OK [11490.624595] raid6test: test_disks(20, 54): faila= 20(D) failb= 54(D) OK [11490.625116] raid6test: test_disks(20, 55): faila= 20(D) failb= 55(D) OK [11490.625641] raid6test: test_disks(20, 56): faila= 20(D) failb= 56(D) OK [11490.626140] raid6test: test_disks(20, 57): faila= 20(D) failb= 57(D) OK [11490.626714] raid6test: test_disks(20, 58): faila= 20(D) failb= 58(D) OK [11490.627227] raid6test: test_disks(20, 59): faila= 20(D) failb= 59(D) OK [11490.627757] raid6test: test_disks(20, 60): faila= 20(D) faila= 20(D) failb= 63(Q) OK [11491.128729] raid6test: test_disks(21, 22): faila= 21(D) 
failb= 22(D) OK [11491.129230] raid6test: test_disks(21, 23): faila= 21(D) failb= 23(D) OK [11491.129765] raid6test: test_disks(21, 24): faila= 21(D) failb= 24(D) OK [11491.130350] raid6test: test_disks(21, 25): faila= 21(D) failb= 25(D) OK [11491.130865] raid6test: test_disks(21, 26): faila= 21(D) failb= 26(D) OK [11491.131437] raid6test: test_disks(21, 27): faila= 21(D) failb= 27(D) OK [11491.131969] raid6test: test_disks(21, 28): faila= 21(D) failb= 28(D) OK [11491.132538] raid6test: test_disks(21, 29): faila= 21(D) failb= 29(D) OK [11491.133090] raid6test: test_disks(21, 30): faila= 21(D) failb= 30(D) OK [11491.133615] raid6test: test_disks(21, 31): faila= 21(D) failb= 31(D) OK [11491.134182] raid6test: test_disks(21, 32): faila= 21(D) failb= 32(D) OK [11491.134767] raid6test: test_disks(21, 33): faila= 21(D) failb= 33(D) OK [11491.135265] raid6test: test_disks(21, 34): faila= 21(D) failb= 34(D) OK [11491.135786] raid6test: test_disks(21, 35): faila= 21(D) failb= 35(D) OK [11491.136368] raid6test: test_disks(21, 36): faila= 21(D) failb= 36(D) OK [11491.136926] raid6test: test_disks(21, 37): fai[11491.537644] raid6test: test_disks(21, 40): faila= 21(D) failb= 40(D) OK [1isks(21, 41): faila= 21(D) failb= 41(D) OK [11491.638591] raid6test: test_disks(21, 42): faila= 21(D) failb= 42(D) OK [11491.639153] raid6test: test_disks(21, 43): faila= 21(D) failb= 43(D) OK [11491.639732] raid6test: test_disks(21, 44): faila= 21(D) failb= 44(D) OK [11491.640268] raid6test: test_disks(21, 45): faila= 21(D) failb= 45(D) OK [11491.640874] raid6test: test_disks(21, 46): faila= 21(D) failb= 46(D) OK [11491.641456] raid6test: test_disks(21, 47): faila= 21(D) failb= 47(D) OK [11491.642003] raid6test: test_disks(21, 48): faila= 21(D) failb= 48(D) OK [11491.642574] raid6test: test_disks(21, 49): faila= 21(D) failb= 49(D) OK [11491.643102] raid6test: test_disks(21, 50): faila= 21(D) failb= 50(D) OK [11491.643686] raid6test: test_disks(21, 51): faila= 21(D) failb= 51(D) OK [11491.644222] raid6test: test_disks(21, 52): faila= 21(D) failb= 52(D) OK [11491.644800] raid6test: test_disks(21, 53): faila= 21(D) failb= 53(D) OK [11491.645364] raid6test: test_disks(21, 54): faila= 21(D) failb= 54(D) OK [11491.645932] raid6test: test_disks(21, 55): faila= 21(D) failb= 55(D) OK [11491.646513] raid6test: test_disks(21, 56): faila= 21(D) [11492.047258] raid6test: test_disks(21, 59): faila= 21(D) failb= 59(D) OK [1isks(21, 60): faila= 21(D) failb= 60(D) OK [11492.148162] raid6test: test_disks(21, 61): faila= 21(D) failb= 61(D) OK [11492.148759] raid6test: test_disks(21, 62): faila= 21(D) failb= 62(P) OK [11492.149358] raid6test: test_disks(21, 63): faila= 21(D) failb= 63(Q) OK [11492.149926] raid6test: test_disks(22, 23): faila= 22(D) failb= 23(D) OK [11492.150510] raid6test: test_disks(22, 24): faila= 22(D) failb= 24(D) OK [11492.151061] raid6test: test_disks(22, 25): faila= 22(D) failb= 25(D) OK [11492.151644] raid6test: test_disks(22, 26): faila= 22(D) failb= 26(D) OK [11492.152208] raid6test: test_disks(22, 27): faila= 22(D) failb= 27(D) OK [11492.152787] raid6test: test_disks(22, 28): faila= 22(D) failb= 28(D) OK [11492.153284] raid6test: test_disks(22, 29): faila= 22(D) failb= 29(D) OK [11492.153880] raid6test: test_disks(22, 30): faila= 22(D) failb= 30(D) OK [11492.154493] raid6test: test_disks(22, 31): faila= 22(D) failb= 31(D) OK [11492.155047] raid6test: test_disks(22, 32): faila= 22(D) failb= 32(D) OK [11492.155626] raid6test: test_disks(22, 33): faila= 22(D) failb= 33(D) OK [11492.156182] raid6test: test_disks(22, 
34): faila= 22(D) failb= 34(D) OK [11492.156760] raid6test: test_diisks(22, 38): faila= 22(D) failb= 38(D) OK [11492.657700] raid6test: test_disks(22, 39): faila= 22(D) failb= 39(D) OK [11492.658239] raid6test: test_disks(22, 40): faila= 22(D) failb= 40(D) OK [11492.658820] raid6test: test_disks(22, 41): faila= 22(D) failb= 41(D) OK [11492.659391] raid6test: test_disks(22, 42): faila= 22(D) failb= 42(D) OK [11492.659956] raid6test: test_disks(22, 43): faila= 22(D) failb= 43(D) OK [11492.660530] raid6test: test_disks(22, 44): faila= 22(D) failb= 44(D) OK [11492.661082] raid6test: test_disks(22, 45): faila= 22(D) failb= 45(D) OK [11492.661669] raid6test: test_disks(22, 46): faila= 22(D) failb= 46(D) OK [11492.662232] raid6test: test_disks(22, 47): faila= 22(D) failb= 47(D) OK [11492.662802] raid6test: test_disks(22, 48): faila= 22(D) failb= 48(D) OK [11492.663340] raid6test: test_disks(22, 49): faila= 22(D) failb= 49(D) OK [11492.663923] raid6test: test_disks(22, 50): faila= 22(D) failb= 50(D) OK [11492.664523] raid6test: test_disks(22, 51): faila= 22(D) failb= 51(D) OK [11492.665041] raid6test: test_disks(22, 52): faila= 22(D) failb= 52(D) OK [11492.665960] raid6test: test_disks(22, 53): faila= 22(D) failb= 53(D) OK [11492.666546] raid6test: test_disks(22, 54): faila= 22(D) failb= 54(D) OK [11492.667092] raid6test: test_disksisks(22, 58): faila= 22(D) failb= 58(D) OK [11493.167967] raid6test: test_disks(22, 59): faila= 22(D) failb= 59(D) OK [11493.168556] raid6test: test_disks(22, 60): faila= 22(D) failb= 60(D) OK [11493.169104] raid6test: test_disks(22, 61): faila= 22(D) failb= 61(D) OK [11493.169681] raid6test: test_disks(22, 62): faila= 22(D) failb= 62(P) OK [11493.170242] raid6test: test_disks(22, 63): faila= 22(D) failb= 63(Q) OK [11493.170805] raid6test: test_disks(23, 24): faila= 23(D) failb= 24(D) OK [11493.171350] raid6test: test_disks(23, 25): faila= 23(D) failb= 25(D) OK [11493.171880] raid6test: test_disks(23, 26): faila= 23(D) failb= 26(D) OK [11493.172452] raid6test: test_disks(23, 27): faila= 23(D) failb= 27(D) OK [11493.173012] raid6test: test_disks(23, 28): faila= 23(D) failb= 28(D) OK [11493.173586] raid6test: test_disks(23, 29): faila= 23(D) failb= 29(D) OK [11493.174145] raid6test: test_disks(23, 30): faila= 23(D) failb= 30(D) OK [11493.174712] raid6test: test_disks(23, 31): faila= 23(D) failb= 31(D) OK [11493.175234] raid6test: test_disks(23, 32): faila= 23(D) failb= 32(D) OK [11493.175800] raid6test: test_disks(23, 33): faila= 23(D) ila= 23(D) failb= 36(D) OK [11493.676712] raid6test: test_disks(23, 37): faila= 23(D) failb= 37(D) OK [11493.677245] raid6test: test_disks(23, 38): faila= 23(D) failb= 38(D) OK [11493.677818] raid6test: test_disks(23, 39): faila= 23(D) failb= 39(D) OK [11493.678386] raid6test: test_disks(23, 40): faila= 23(D) failb= 40(D) OK [11493.678940] raid6test: test_disks(23, 41): faila= 23(D) failb= 41(D) OK [11493.679570] raid6test: test_disks(23, 42): faila= 23(D) failb= 42(D) OK [11493.680116] raid6test: test_disks(23, 43): faila= 23(D) failb= 43(D) OK [11493.680649] raid6test: test_disks(23, 44): faila= 23(D) failb= 44(D) OK [11493.681174] raid6test: test_disks(23, 45): faila= 23(D) failb= 45(D) OK [11493.681722] raid6test: test_disks(23, 46): faila= 23(D) failb= 46(D) OK [11493.682216] raid6test: test_disks(23, 47): faila= 23(D) failb= 47(D) OK [11493.682798] raid6test: test_disks(23, 48): faila= 23(D) failb= 48(D) OK [11493.683367] raid6test: test_disks(23, 49): faila= 23(D) failb= 49(D) OK [11493.683939] raid6test: test_disks(23, 50): faila= 
23(D) failb= 50(D) OK [11493.684548] raid6test: test_disks(23, 51): faila= 23(D) failb= 51(D) OK [11493.685056] raid6test: test_disks(23, 52): failila= 23(D) failb= 55(D) OK [11494.185970] raid6test: test_disks(23, 56): faila= 23(D) failb= 56(D) OK [11494.186514] raid6test: test_disks(23, 57): faila= 23(D) failb= 57(D) OK [11494.187030] raid6test: test_disks(23, 58): faila= 23(D) failb= 58(D) OK [11494.187551] raid6test: test_disks(23, 59): faila= 23(D) failb= 59(D) OK [11494.188066] raid6test: test_disks(23, 60): faila= 23(D) failb= 60(D) OK [11494.188588] raid6test: test_disks(23, 61): faila= 23(D) failb= 61(D) OK [11494.189104] raid6test: test_disks(23, 62): faila= 23(D) failb= 62(P) OK [11494.189647] raid6test: test_disks(23, 63): faila= 23(D) failb= 63(Q) OK [11494.190160] raid6test: test_disks(24, 25): faila= 24(D) failb= 25(D) OK [11494.190683] raid6test: test_disks(24, 26): faila= 24(D) failb= 26(D) OK [11494.191194] raid6test: test_disks(24, 27): faila= 24(D) failb= 27(D) OK [11494.191743] raid6test: test_disks(24, 28): faila= 24(D) failb= 28(D) OK [11494.192269] raid6test: test_disks(24, 29): faila= 24(D) failb= 29(D) OK [11494.192823] raid6test: test_disks(24, 30): faila= 24(D) failb= 30(D) OK [11494.193398] raid6test: test_disks(24, 31): faila= 24(D) failb= 31(D) OK [11494.193915] raid6test: test_disks(24, 32): faiila= 24(D) failb= 35(D) OK [11494.694857] raid6test: test_disks(24, 36): faila= 24(D) failb= 36(D) OK [11494.695412] raid6test: test_disks(24, 37): faila= 24(D) failb= 37(D) OK [11494.695935] raid6test: test_disks(24, 38): faila= 24(D) failb= 38(D) OK [11494.696510] raid6test: test_disks(24, 39): faila= 24(D) failb= 39(D) OK [11494.697034] raid6test: test_disks(24, 40): faila= 24(D) failb= 40(D) OK [11494.697603] raid6test: test_disks(24, 41): faila= 24(D) failb= 41(D) OK [11494.698129] raid6test: test_disks(24, 42): faila= 24(D) failb= 42(D) OK [11494.698691] raid6test: test_disks(24, 43): faila= 24(D) failb= 43(D) OK [11494.699201] raid6test: test_disks(24, 44): faila= 24(D) failb= 44(D) OK [11494.699776] raid6test: test_disks(24, 45): faila= 24(D) failb= 45(D) OK [11494.700388] raid6test: test_disks(24, 46): faila= 24(D) failb= 46(D) OK [11494.700909] raid6test: test_disks(24, 47): faila= 24(D) failb= 47(D) OK [11494.701490] raid6test: test_disks(24, 48): faila= 24(D) failb= 48(D) OK [11494.702052] raid6test: test_disks(24, 49): faila= 24(D) failb= 49(D) OK [11494.702643] raid6test: test_disks(24, 50): faila= 24(D) failb= 50(D) OK [11494.703138] raid6test: test_disks(24, 51): faila= 24(D) failb= 51(D) OK [11494.703703] raid6test: test_disks(24, 52): faila= 24(D) failb= 52(D) OK [1[11495.204973] raid6test: test_disks(24, 56): faila= 24(D) failb= 56(D) OK [11495.205681] raid6test: test_disks(24, 57): faila= 24(D) failb= 57(D) OK [11495.206289] raid6test: test_disks(24, 58): faila= 24(D) failb= 58(D) OK [11495.206998] raid6test: test_disks(24, 59): faila= 24(D) failb= 59(D) OK [11495.207693] raid6test: test_disks(24, 60): faila= 24(D) failb= 60(D) OK [11495.208316] raid6test: test_disks(24, 61): faila= 24(D) failb= 61(D) OK [11495.209004] raid6test: test_disks(24, 62): faila= 24(D) failb= 62(P) OK [11495.209763] raid6test: test_disks(24, 63): faila= 24(D) failb= 63(Q) OK [11495.210430] raid6test: test_disks(25, 26): faila= 25(D) failb= 26(D) OK [11495.211072] raid6test: test_disks(25, 27): faila= 25(D) failb= 27(D) OK [11495.211773] raid6test: test_disks(25, 28): faila= 25(D) failb= 28(D) OK [11495.212461] raid6test: test_disks(25, 29): faila= 25(D) failb= 29(D) OK 
[11495.213076] raid6test: test_disks(25, 30): faila= 25(D) failb= 30(D) OK [11495.213782] raid6test: test_disks(25, 31): faila= 25(D) failb= 31(D) OKb= 34(D) OK [11495.714705] raid6test: test_disks(25, 35): faila= 25(D) failb= 35(D) OK [11495.715260] raid6test: test_disks(25, 36): faila= 25(D) failb= 36(D) OK [11495.715913] raid6test: test_disks(25, 37): faila= 25(D) failb= 37(D) OK [11495.716562] raid6test: test_disks(25, 38): faila= 25(D) failb= 38(D) OK [11495.717097] raid6test: test_disks(25, 39): faila= 25(D) failb= 39(D) OK [11495.717674] raid6test: test_disks(25, 40): faila= 25(D) failb= 40(D) OK [11495.718233] raid6test: test_disks(25, 41): faila= 25(D) failb= 41(D) OK [11495.718811] raid6test: test_disks(25, 42): faila= 25(D) failb= 42(D) OK [11495.719400] raid6test: test_disks(25, 43): faila= 25(D) failb= 43(D) OK [11495.719970] raid6test: test_disks(25, 44): faila= 25(D) failb= 44(D) OK [11495.720548] raid6test: test_disks(25, 45): faila= 25(D) failb= 45(D) OK [11495.721113] raid6test: test_disks(25, 46): faila= 25(D) failb= 46(D) OK [11495.721696] raid6test: test_disks(25, 47): faila= 25(D) failb= 47(D) OK [11495.722257] raid6test: test_disks(25, 48): faila= 25(D) failb= 48(D) OK [11495.722843] raid6test: test_disks(25, 49): faila= 25(D) failb= 49(D) OK [11495.723435] raid6test: test_disks(25, 50): faila= 25(D) failb= 50(D) OK [11495.724001] raid6test: test_disks(25, 51): faila= 25(D) failb= 51(D) OK [11495.724604] raaid6test: test_disks(25, 55): faila= 25(D) failb= 55(D) OK [11496.225499] raid6test: test_disks(25, 56): faila= 25(D) failb= 56(D) OK [11496.226076] raid6test: test_disks(25, 57): faila= 25(D) failb= 57(D) OK [11496.226654] raid6test: test_disks(25, 58): faila= 25(D) failb= 58(D) OK [11496.227161] raid6test: test_disks(25, 59): faila= 25(D) failb= 59(D) OK [11496.227679] raid6test: test_disks(25, 60): faila= 25(D) failb= 60(D) OK [11496.228246] raid6test: test_disks(25, 61): faila= 25(D) failb= 61(D) OK [11496.228782] raid6test: test_disks(25, 62): faila= 25(D) failb= 62(P) OK [11496.229292] raid6test: test_disks(25, 63): faila= 25(D) failb= 63(Q) OK [11496.229846] raid6test: test_disks(26, 27): faila= 26(D) failb= 27(D) OK [11496.230389] raid6test: test_disks(26, 28): faila= 26(D) failb= 28(D) OK [11496.230946] raid6test: test_disks(26, 29): faila= 26(D) failb= 29(D) OK [11496.231503] raid6test: test_disks(26, 30): faila= 26(D) failb= 30(D) OK [11496.232029] raid6test: test_disks(26, 31): faila= 26(D) failb= 31(D) OK [11496.232615] raid6test: test_disks(26, 32): faila= 26(D) failb= 32(D) OK [11496.233188] raid6test: test_disks(26, 33): faila= 26(D) failb= 33(D) OK [11496.233768] raid6test: test_disks(26, 34): faila= 26(D) failb= 34(D) OK [11496.234298] raid6aid6test: test_disks(26, 38): faila= 26(D) failb= 38(D) OK [11496.735163] raid6test: test_disks(26, 39): faila= 26(D) failb= 39(D) OK [11496.735707] raid6test: test_disks(26, 40): faila= 26(D) failb= 40(D) OK [11496.736222] raid6test: test_disks(26, 41): faila= 26(D) failb= 41(D) OK [11496.736747] raid6test: test_disks(26, 42): faila= 26(D) failb= 42(D) OK [11496.737260] raid6test: test_disks(26, 43): faila= 26(D) failb= 43(D) OK [11496.737827] raid6test: test_disks(26, 44): faila= 26(D) failb= 44(D) OK [11496.738392] raid6test: test_disks(26, 45): faila= 26(D) failb= 45(D) OK [11496.738882] raid6test: test_disks(26, 46): faila= 26(D) failb= 46(D) OK [11496.739420] raid6test: test_disks(26, 47): faila= 26(D) failb= 47(D) OK [11496.739932] raid6test: test_disks(26, 48): faila= 26(D) failb= 48(D) OK 
[11496.740463] raid6test: test_disks(26, 49): faila= 26(D) failb= 49(D) OK [11496.740943] raid6test: test_disks(26, 50): faila= 26(D) failb= 50(D) OK [11496.741474] raid6test: test_disks(26, 51): faila= 26(D) failb= 51(D) OK [11496.741999] raid6test: test_disks(26, 52): faila= 26(D) failb= 52(D) OK [11496.769609] [11497.242804] raid6test: test_disks(26, 56): faila= 26(D) failb= 56(D) OK [11497.243314] raid6test: test_disks(26, 57): faila= 26(D) failb= 57(D) OK [11497.243881] raid6test: test_disks(26, 58): faila= 26(D) failb= 58(D) OK [11497.244438] raid6test: test_disks(26, 59): faila= 26(D) failb= 59(D) OK [11497.244970] raid6test: test_disks(26, 60): faila= 26(D) failb= 60(D) OK [11497.245530] raid6test: test_disks(26, 61): faila= 26(D) failb= 61(D) OK [11497.246066] raid6test: test_disks(26, 62): faila= 26(D) failb= 62(P) OK [11497.246627] raid6test: test_disks(26, 63): faila= 26(D) failb= 63(Q) OK [11497.247148] raid6test: test_disks(27, 28): faila= 27(D) failb= 28(D) OK [11497.247703] raid6test: test_disks(27, 29): faila= 27(D) failb= 29(D) OK [11497.248234] raid6test: test_disks(27, 30): faila= 27(D) failb= 30(D) OK [11497.248785] raid6test: test_disks(27, 31): faila= 27(D) failb= 31(D) OK [11497.249263] raid6test: test_disks(27, 32): faila= 27(D) failb= 32(D) OK [11497.249813] raid6test: test_disks(27, 33): faila= 27(D) failb= 33(D) OK [11497.250287] raid6test: test_disks(27, 34): faila= 27(D) failb= 34(D) OK [11497.250843] raid6test: test_disks(27, 35): faila= 27(D) failb= 35(D) OK [11497.251396] raid6test: test_disks(27, 36): faila= 27(D) failb= 36(D) OK [11497.251870] raid6test: test_disks(27, 37): faila= 27(D) failb= 37(D) OK [11497.252426] raid6test: test_disisks(27, 41): faila= 27(D) failb= 41(D) OK [11497.753277] raid6test: test_disks(27, 42): faila= 27(D) failb= 42(D) OK [11497.753841] raid6test: test_disks(27, 43): faila= 27(D) failb= 43(D) OK [11497.754321] raid6test: test_disks(27, 44): faila= 27(D) failb= 44(D) OK [11497.754873] raid6test: test_disks(27, 45): faila= 27(D) failb= 45(D) OK [11497.755439] raid6test: test_disks(27, 46): faila= 27(D) failb= 46(D) OK [11497.755971] raid6test: test_disks(27, 47): faila= 27(D) failb= 47(D) OK [11497.756517] raid6test: test_disks(27, 48): faila= 27(D) failb= 48(D) OK [11497.757043] raid6test: test_disks(27, 49): faila= 27(D) failb= 49(D) OK [11497.757575] raid6test: test_disks(27, 50): faila= 27(D) failb= 50(D) OK [11497.758101] raid6test: test_disks(27, 51): faila= 27(D) failb= 51(D) OK [11497.758647] raid6test: test_disks(27, 52): faila= 27(D) failb= 52(D) OK [11497.759184] raid6test: test_disks(27, 53): faila= 27(D) failb= 53(D) OK [11497.759750] raid6test: test_disks(27, 54): faila= 27(D) failb= 54(D) OK [11497.760289] raid6test: test_disks(27, 55): faila= 27(D) failb= 55(D) OK [11497.760852] raid6test: test_disks(27, 56): faila= 27(D) failb= 56(D) OK [11497.761326] raid6test: test_diisks(27, 60): faila= 27(D) failb= 60(D) OK [11498.262150] raid6test: test_disks(27, 61): faila= 27(D) failb= 61(D) OK [11498.262703] raid6test: test_disks(27, 62): faila= 27(D) failb= 62(P) OK [11498.263247] raid6test: test_disks(27, 63): faila= 27(D) failb= 63(Q) OK [11498.263799] raid6test: test_disks(28, 29): faila= 28(D) failb= 29(D) OK [11498.264273] raid6test: test_disks(28, 30): faila= 28(D) failb= 30(D) OK [11498.264848] raid6test: test_disks(28, 31): faila= 28(D) failb= 31(D) OK [11498.265409] raid6test: test_disks(28, 32): faila= 28(D) failb= 32(D) OK [11498.265942] raid6test: test_disks(28, 33): faila= 28(D) failb= 33(D) OK 
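The test_disks(a, b) lines in this log come from the kernel's RAID-6 self-test sweeping every pair of failed columns in a 64-column set; the (D), (P) and (Q) tags show that columns 0-61 carry data while 62 and 63 hold the P (XOR) and Q (Reed-Solomon) parity. The sketch below is a minimal, self-contained Python model of the data/data recovery case that most of these pairs exercise. It is not the kernel's lib/raid6 or async_tx code; it assumes one byte per column and GF(2^8) arithmetic reduced by the 0x11d polynomial commonly used for RAID-6 Q parity, and it does not cover the simpler pairs where the lost column is P (62) or Q (63).

# Minimal standalone sketch (not the kernel's lib/raid6 / async_tx raid6test
# code) of the dual data-disk recovery that a "test_disks(a, b)" case with
# two (D) columns checks.  Assumptions: 64 columns as in the log above,
# indices 0..61 are data, 62 is P, 63 is Q, one byte per column, and GF(2^8)
# arithmetic reduced by the 0x11d polynomial commonly used for RAID-6.

NDISKS = 64
NDATA = NDISKS - 2          # columns 0..61 carry data


def gf_mul(a: int, b: int) -> int:
    """Multiply two field elements in GF(2^8) modulo 0x11d."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return r


def gf_inv(a: int) -> int:
    """Multiplicative inverse via a^(2^8 - 2) = a^254 (a must be non-zero)."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r


# POW2[i] = generator 2 raised to the i-th power; these are the Q coefficients.
POW2 = [1] * 256
for i in range(1, 256):
    POW2[i] = gf_mul(POW2[i - 1], 2)


def make_pq(data):
    """P is the plain XOR of the data; Q weights column i by 2^i."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gf_mul(POW2[i], d)
    return p, q


def recover_two_data(data, p, q, a, b):
    """Rebuild data columns a < b from the survivors plus P and Q."""
    pxy, qxy = p, q
    for i, d in enumerate(data):
        if i != a and i != b:
            pxy ^= d                      # pxy ends up as d_a ^ d_b
            qxy ^= gf_mul(POW2[i], d)     # qxy ends up as 2^a*d_a ^ 2^b*d_b
    # Solve the 2x2 system:  2^-a * qxy ^ pxy = (2^(b-a) ^ 1) * d_b
    t = gf_mul(gf_inv(POW2[a]), qxy) ^ pxy
    db = gf_mul(t, gf_inv(POW2[b - a] ^ 1))
    da = pxy ^ db
    return da, db


if __name__ == "__main__":
    import random

    random.seed(0)
    data = [random.randrange(256) for _ in range(NDATA)]
    p, q = make_pq(data)
    # Same upper-triangular sweep over failing columns as the log's
    # test_disks(a, b) lines, restricted to the data/data cases.
    for a in range(NDATA):
        for b in range(a + 1, NDATA):
            da, db = recover_two_data(data, p, q, a, b)
            assert (da, db) == (data[a], data[b]), (a, b)
    print("all data/data pairs recovered OK")

Running the sketch walks the same upper-triangular (a, b) sweep that the log reports, asserting that both erased columns are reconstructed exactly, which is the condition behind each "OK".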
[11498.266503] raid6test: test_disks(28, 34): faila= 28(D) failb= 34(D) OK [11498.266962] raid6test: test_disks(28, 35): faila= 28(D) failb= 35(D) OK [11498.267513] raid6test: test_disks(28, 36): faila= 28(D) failb= 36(D) OK [11498.268015] raid6test: test_disks(28, 37): faila= 28(D) failb= 37(D) OK [11498.268572] raid6test: test_disks(28, 38): faila= 28(D) failb= 38(D) OK [11498.269063] raid6test: test_disks(28, 39): faila= 28(D) failb= 39(D) OK [11498.269618] raid6test: test_disks(28, 40): faila= 28(D) failb= 40(D) OK [11498.270111] raid6test: test_disks(28, 41): faila= 28(D) failb= 41(D) OK [11498.270668] raid6test: test_diskisks(28, 45): faila= 28(D) failb= 45(D) OK [11498.771533] raid6test: test_disks(28, 46): faila= 28(D) failb= 46(D) OK [11498.772072] raid6test: test_disks(28, 47): faila= 28(D) failb= 47(D) OK [11498.772632] raid6test: test_disks(28, 48): faila= 28(D) failb= 48(D) OK [11498.773162] raid6test: test_disks(28, 49): faila= 28(D) failb= 49(D) OK [11498.773711] raid6test: test_disks(28, 50): faila= 28(D) failb= 50(D) OK [11498.774245] raid6test: test_disks(28, 51): faila= 28(D) failb= 51(D) OK [11498.774812] raid6test: test_disks(28, 52): faila= 28(D) failb= 52(D) OK [11498.775289] raid6test: test_disks(28, 53): faila= 28(D) failb= 53(D) OK [11498.775846] raid6test: test_disks(28, 54): faila= 28(D) failb= 54(D) OK [11498.776323] raid6test: test_disks(28, 55): faila= 28(D) failb= 55(D) OK [11498.776882] raid6test: test_disks(28, 56): faila= 28(D) failb= 56(D) OK [11498.777413] raid6test: test_disks(28, 57): faila= 28(D) failb= 57(D) OK [11498.777924] raid6test: test_disks(28, 58): faila= 28(D) failb= 58(D) OK [11498.778447] raid6test: test_disks(28, 59): faila= 28(D) failb= 59(D) OK [11498.778976] raid6test: test_disks(28, 60): faila= 28(D) failb= 60(D) OK [11498.779528] raid6test: test_disks(28, 61): faila= 28(D) faila= 29(D) failb= 30(D) OK [11499.280330] raid6test: test_disks(29, 31): faila= 29(D) failb= 31(D) OK [11499.280889] raid6test: test_disks(29, 32): faila= 29(D) failb= 32(D) OK [11499.281425] raid6test: test_disks(29, 33): faila= 29(D) failb= 33(D) OK [11499.281955] raid6test: test_disks(29, 34): faila= 29(D) failb= 34(D) OK [11499.282511] raid6test: test_disks(29, 35): faila= 29(D) failb= 35(D) OK [11499.283002] raid6test: test_disks(29, 36): faila= 29(D) failb= 36(D) OK [11499.283558] raid6test: test_disks(29, 37): faila= 29(D) failb= 37(D) OK [11499.284046] raid6test: test_disks(29, 38): faila= 29(D) failb= 38(D) OK [11499.284632] raid6test: test_disks(29, 39): faila= 29(D) failb= 39(D) OK [11499.285153] raid6test: test_disks(29, 40): faila= 29(D) failb= 40(D) OK [11499.285706] raid6test: test_disks(29, 41): faila= 29(D) failb= 41(D) OK [11499.286226] raid6test: test_disks(29, 42): faila= 29(D) failb= 42(D) OK [11499.286792] raid6test: test_disks(29, 43): faila= 29(D) failb= 43(D) OK [11499.287315] raid6test: test_disks(29, 44): faila= 29(D) failb= 44(D) OK [11499.287869] raid6test: test_disks(29, 45): faila= 29(D) failb= 45(D) OK [11499.288346] raid6test: test_disks(29, 46): faiila= 29(D) failb= 49(D) OK [11499.789190] raid6test: test_disks(29, 50): faila= 29(D) failb= 50(D) OK [11499.789694] raid6test: test_disks(29, 51): faila= 29(D) failb= 51(D) OK [11499.790192] raid6test: test_disks(29, 52): faila= 29(D) failb= 52(D) OK [11499.791099] raid6test: test_disks(29, 53): faila= 29(D) failb= 53(D) OK [11499.791741] raid6test: test_disks(29, 54): faila= 29(D) failb= 54(D) OK [11499.792244] raid6test: test_disks(29, 55): faila= 29(D) failb= 55(D) OK 
[11499.792800] raid6test: test_disks(29, 56): faila= 29(D) failb= 56(D) OK [11499.793324] raid6test: test_disks(29, 57): faila= 29(D) failb= 57(D) OK [11499.793874] raid6test: test_disks(29, 58): faila= 29(D) failb= 58(D) OK [11499.794351] raid6test: test_disks(29, 59): faila= 29(D) failb= 59(D) OK [11499.794866] raid6test: test_disks(29, 60): faila= 29(D) failb= 60(D) OK [11499.795427] raid6test: test_disks(29, 61): faila= 29(D) failb= 61(D) OK [11499.795903] raid6test: test_disks(29, 62): faila= 29(D) failb= 62(P) OK [11499.796477] raid6test: test_disks(29, 63): faila= 29(D) failb= 63(Q) OK [11499.797015] raid6test: test_disks(30, 31): faila= 30(D) failb= 31(D) OK [11499.797571] raid6test: test_disks(30, 32): faila= 30(D) failb= 32(D) OK b= 35(D) OK [11500.298354] raid6test: test_disks(30, 36): faila= 30(D) failb= 36(D) OK [11500.298905] raid6test: test_disks(30, 37): faila= 30(D) failb= 37(D) OK [11500.299469] raid6test: test_disks(30, 38): faila= 30(D) failb= 38(D) OK [11500.300007] raid6test: test_disks(30, 39): faila= 30(D) failb= 39(D) OK [11500.300569] raid6test: test_disks(30, 40): faila= 30(D) failb= 40(D) OK [11500.301068] raid6test: test_disks(30, 41): faila= 30(D) failb= 41(D) OK [11500.301623] raid6test: test_disks(30, 42): faila= 30(D) failb= 42(D) OK [11500.302080] raid6test: test_disks(30, 43): faila= 30(D) failb= 43(D) OK [11500.302630] raid6test: test_disks(30, 44): faila= 30(D) failb= 44(D) OK [11500.303123] raid6test: test_disks(30, 45): faila= 30(D) failb= 45(D) OK [11500.303679] raid6test: test_disks(30, 46): faila= 30(D) failb= 46(D) OK [11500.304158] raid6test: test_disks(30, 47): faila= 30(D) failb= 47(D) OK [11500.304726] raid6test: test_disks(30, 48): faila= 30(D) failb= 48(D) OK [11500.305252] raid6test: test_disks(30, 49): faila= 30(D) failb= 49(D) OK [11500.305807] raid6test: test_disks(30, 50): faila= 30(D) failb= 50(D) OK [11500.306333] raid6test: test_disks(30, 51): faila= 30(D) failbb= 54(D) OK [11500.807169] raid6test: test_disks(30, 55): faila= 30(D) failb= 55(D) OK [11500.807701] raid6test: test_disks(30, 56): faila= 30(D) failb= 56(D) OK [11500.808224] raid6test: test_disks(30, 57): faila= 30(D) failb= 57(D) OK [11500.808768] raid6test: test_disks(30, 58): faila= 30(D) failb= 58(D) OK [11500.809302] raid6test: test_disks(30, 59): faila= 30(D) failb= 59(D) OK [11500.809839] raid6test: test_disks(30, 60): faila= 30(D) failb= 60(D) OK [11500.810343] raid6test: test_disks(30, 61): faila= 30(D) failb= 61(D) OK [11500.810881] raid6test: test_disks(30, 62): faila= 30(D) failb= 62(P) OK [11500.811449] raid6test: test_disks(30, 63): faila= 30(D) failb= 63(Q) OK [11500.811982] raid6test: test_disks(31, 32): faila= 31(D) failb= 32(D) OK [11500.812525] raid6test: test_disks(31, 33): faila= 31(D) failb= 33(D) OK [11500.813018] raid6test: test_disks(31, 34): faila= 31(D) failb= 34(D) OK [11500.813561] raid6test: test_disks(31, 35): faila= 31(D) failb= 35(D) OK [11500.814089] raid6test: test_disks(31, 36): faila= 31(D) failb= 36(D) OK [11500.814649] raid6test: test_disks(31, 37): faila= 31(D) failb= 40(D) OK [11501.315498] raid6test: test_disks(31, 41): faila= 31(D) failb= 41(D) OK [11501.316037] raid6test: test_disks(31, 42): faila= 31(D) failb= 42(D) OK [11501.316588] raid6test: test_disks(31, 43): faila= 31(D) failb= 43(D) OK [11501.317110] raid6test: test_disks(31, 44): faila= 31(D) failb= 44(D) OK [11501.317745] raid6test: test_disks(31, 45): faila= 31(D) failb= 45(D) OK [11501.318280] raid6test: test_disks(31, 46): faila= 31(D) failb= 46(D) OK [11501.318818] 
raid6test: test_disks(31, 47): faila= 31(D) failb= 47(D) OK [11501.319213] raid6test: test_disks(31, 48): faila= 31(D) failb= 48(D) OK [11501.319749] raid6test: test_disks(31, 49): faila= 31(D) failb= 49(D) OK [11501.320242] raid6test: test_disks(31, 50): faila= 31(D) failb= 50(D) OK [11501.320778] raid6test: test_disks(31, 51): faila= 31(D) failb= 51(D) OK [11501.321311] raid6test: test_disks(31, 52): faila= 31(D) failb= 52(D) OK [11501.321846] raid6test: test_disks(31, 53): faila= 31(D) failb= 53(D) OK [11501.322355] raid6test: test_disks(31, 54): faila= 31(D) failb= 54(D) OK [11501.322889] raid6test: test_disks(31, 55): faila= 31(D) failb= 55(D) OK [11501.323445] raid6test: test_disks(31, 56): faila= 31(D) failb= 56(D) OK [11501.351012] [11501.824287] raid6test: test_disks(31, 60): faila= 31(D) failb= 60(D) OK [11501.824851] raid6test: test_disks(31, 61): faila= 31(D) failb= 61(D) OK [11501.825366] raid6test: test_disks(31, 62): faila= 31(D) failb= 62(P) OK [11501.825921] raid6test: test_disks(31, 63): faila= 31(D) failb= 63(Q) OK [11501.826476] raid6test: test_disks(32, 33): faila= 32(D) failb= 33(D) OK [11501.827014] raid6test: test_disks(32, 34): faila= 32(D) failb= 34(D) OK [11501.827570] raid6test: test_disks(32, 35): faila= 32(D) failb= 35(D) OK [11501.828029] raid6test: test_disks(32, 36): faila= 32(D) failb= 36(D) OK [11501.828582] raid6test: test_disks(32, 37): faila= 32(D) failb= 37(D) OK [11501.829079] raid6test: test_disks(32, 38): faila= 32(D) failb= 38(D) OK [11501.829634] raid6test: test_disks(32, 39): faila= 32(D) failb= 39(D) OK [11501.830126] raid6test: test_disks(32, 40): faila= 32(D) failb= 40(D) OK [11501.830697] raid6test: test_disks(32, 41): faila= 32(D) failb= 41(D) OK [11501.831226] raid6test: test_disks(32, 42): faila= 32(D) failb= 42(D) OK [11501.831776] raid6test: test_disks(32, 43): faila= 32(D) failb= 43(D) OK [11501.832271] raid6test: test_disks(32, 44): faila= 32(D) failb= 44(D) OK [1[11502.333127] raid6test: test_disks(32, 48): faila= 32(D) failb= 48(D) OK [11502.333713] raid6test: test_disks(32, 49): faila= 32(D) failb= 49(D) OK [11502.334243] raid6test: test_disks(32, 50): faila= 32(D) failb= 50(D) OK [11502.334827] raid6test: test_disks(32, 51): faila= 32(D) failb= 51(D) OK [11502.335326] raid6test: test_disks(32, 52): faila= 32(D) failb= 52(D) OK [11502.335889] raid6test: test_disks(32, 53): faila= 32(D) failb= 53(D) OK [11502.336445] raid6test: test_disks(32, 54): faila= 32(D) failb= 54(D) OK [11502.336915] raid6test: test_disks(32, 55): faila= 32(D) failb= 55(D) OK [11502.337469] raid6test: test_disks(32, 56): faila= 32(D) failb= 56(D) OK [11502.337945] raid6test: test_disks(32, 57): faila= 32(D) failb= 57(D) OK [11502.338457] raid6test: test_disks(32, 58): faila= 32(D) failb= 58(D) OK [11502.338927] raid6test: test_disks(32, 59): faila= 32(D) failb= 59(D) OK [11502.339483] raid6test: test_disks(32, 60): faila= 32(D) failb= 60(D) OK [11502.339953] raid6test: test_disks(32, 61): faila= 32(D) failb= 61(D) OK [11502.340506] raid6test: test_disks(32, 62): faila= 32(D) failb= 62(P) OK [11502.340986] raid6test: test_disks(32, 63): faila= 32(D) failb= 63(Q) OK [11502.341534] raid6test: tesaid6test: test_disks(33, 37): faila= 33(D) failb= 37(D) OK [11502.842356] raid6test: test_disks(33, 38): faila= 33(D) failb= 38(D) OK [11502.842916] raid6test: test_disks(33, 39): faila= 33(D) failb= 39(D) OK [11502.843468] raid6test: test_disks(33, 40): faila= 33(D) failb= 40(D) OK [11502.843940] raid6test: test_disks(33, 41): faila= 33(D) failb= 41(D) OK [11502.844459] 
raid6test: test_disks(33, 42): faila= 33(D) failb= 42(D) OK [11502.844977] raid6test: test_disks(33, 43): faila= 33(D) failb= 43(D) OK [11502.845542] raid6test: test_disks(33, 44): faila= 33(D) failb= 44(D) OK [11502.846073] raid6test: test_disks(33, 45): faila= 33(D) failb= 45(D) OK [11502.846629] raid6test: test_disks(33, 46): faila= 33(D) failb= 46(D) OK [11502.847120] raid6test: test_disks(33, 47): faila= 33(D) failb= 47(D) OK [11502.847687] raid6test: test_disks(33, 48): faila= 33(D) failb= 48(D) OK [11502.848218] raid6test: test_disks(33, 49): faila= 33(D) failb= 49(D) OK [11502.848770] raid6test: test_disks(33, 50): faila= 33(D) failb= 50(D) OK [11502.849294] raid6test: test_disks(33, 51): faila= 33(D) failb= 51(D) OK [11502.849840] raid6test: test_disks(33, 52): faila= 33(D) failb= 52(D) OK [11502.850369] raiaid6test: test_disks(33, 56): faila= 33(D) failb= 56(D) OK [11503.351160] raid6test: test_disks(33, 57): faila= 33(D) failb= 57(D) OK [11503.351716] raid6test: test_disks(33, 58): faila= 33(D) failb= 58(D) OK [11503.352211] raid6test: test_disks(33, 59): faila= 33(D) failb= 59(D) OK [11503.352771] raid6test: test_disks(33, 60): faila= 33(D) failb= 60(D) OK [11503.353311] raid6test: test_disks(33, 61): faila= 33(D) failb= 61(D) OK [11503.353874] raid6test: test_disks(33, 62): faila= 33(D) failb= 62(P) OK [11503.354461] raid6test: test_disks(33, 63): faila= 33(D) failb= 63(Q) OK [11503.354979] raid6test: test_disks(34, 35): faila= 34(D) failb= 35(D) OK [11503.355542] raid6test: test_disks(34, 36): faila= 34(D) failb= 36(D) OK [11503.356074] raid6test: test_disks(34, 37): faila= 34(D) failb= 37(D) OK [11503.356634] raid6test: test_disks(34, 38): faila= 34(D) failb= 38(D) OK [11503.357130] raid6test: test_disks(34, 39): faila= 34(D) failb= 39(D) OK [11503.357707] raid6test: test_disks(34, 40): faila= 34(D) failb= 40(D) OK [11503.358198] raid6test: test_disks(34, 41): faila= 34(D) failb= 41(D) OK [11503.358764] raid6test: test_disks(34, 42): faila= 34(D) failb= 42(D) OK [11503.359301] raid6test: test_disks(34, 43): faila= 34(D) failb= 43(D) OK [11503.359859] raid6test: test_disks(34, 44): faila= 34(D) failb= 44(D) OK [11503.360385] raid6teaid6test: test_disks(34, 48): faila= 34(D) failb= 48(D) OK [11503.861305] raid6test: test_disks(34, 49): faila= 34(D) failb= 49(D) OK [11503.861868] raid6test: test_disks(34, 50): faila= 34(D) failb= 50(D) OK [11503.862436] raid6test: test_disks(34, 51): faila= 34(D) failb= 51(D) OK [11503.862917] raid6test: test_disks(34, 52): faila= 34(D) failb= 52(D) OK [11503.863475] raid6test: test_disks(34, 53): faila= 34(D) failb= 53(D) OK [11503.863953] raid6test: test_disks(34, 54): faila= 34(D) failb= 54(D) OK [11503.864517] raid6test: test_disks(34, 55): faila= 34(D) failb= 55(D) OK [11503.865187] raid6test: test_disks(34, 56): faila= 34(D) failb= 56(D) OK [11503.865814] raid6test: test_disks(34, 57): faila= 34(D) failb= 57(D) OK [11503.866375] raid6test: test_disks(34, 58): faila= 34(D) failb= 58(D) OK [11503.866934] raid6test: test_disks(34, 59): faila= 34(D) failb= 59(D) OK [11503.867455] raid6test: test_disks(34, 60): faila= 34(D) failb= 60(D) OK [11503.867934] raid6test: test_disks(34, 61): faila= 34(D) failb= 61(D) OK [11503.868495] raid6test: test_disks(34, 62): faila= 34(D) failb= 62(P) OK [11503.868985] raid6test: test_disks(34, 63isks(35, 38): faila= 35(D) failb= 38(D) OK [11504.369798] raid6test: test_disks(35, 39): faila= 35(D) failb= 39(D) OK [11504.370321] raid6test: test_disks(35, 40): faila= 35(D) failb= 40(D) OK [11504.370880] raid6test: 
test_disks(35, 41): faila= 35(D) failb= 41(D) OK [11504.371458] raid6test: test_disks(35, 42): faila= 35(D) failb= 42(D) OK [11504.371979] raid6test: test_disks(35, 43): faila= 35(D) failb= 43(D) OK [11504.372542] raid6test: test_disks(35, 44): faila= 35(D) failb= 44(D) OK [11504.373009] raid6test: test_disks(35, 45): faila= 35(D) failb= 45(D) OK [11504.373529] raid6test: test_disks(35, 46): faila= 35(D) failb= 46(D) OK [11504.374064] raid6test: test_disks(35, 47): faila= 35(D) failb= 47(D) OK [11504.374618] raid6test: test_disks(35, 48): faila= 35(D) failb= 48(D) OK [11504.375146] raid6test: test_disks(35, 49): faila= 35(D) failb= 49(D) OK [11504.375711] raid6test: test_disks(35, 50): faila= 35(D) failb= 50(D) OK [11504.376209] raid6test: test_disks(35, 51): faila= 35(D) failb= 51(D) OK [11504.376766] raid6test: test_disks(35, 52): faila= 35(D) failb= 52(D) OK [11504.377295] raid6test: test_disks(35, 53): faila= 35(D) failb= 53(D) OK [11504.377851] raid6test: test_disks(35, 54): faila= 35(D) failb= 54(D) OK [11504.378387] raid6test: test_disksisks(35, 58): faila= 35(D) failb= 58(D) OK [11504.879234] raid6test: test_disks(35, 59): faila= 35(D) failb= 59(D) OK [11504.879805] raid6test: test_disks(35, 60): faila= 35(D) failb= 60(D) OK [11504.880373] raid6test: test_disks(35, 61): faila= 35(D) failb= 61(D) OK [11504.880934] raid6test: test_disks(35, 62): faila= 35(D) failb= 62(P) OK [11504.881504] raid6test: test_disks(35, 63): faila= 35(D) failb= 63(Q) OK [11504.882011] raid6test: test_disks(36, 37): faila= 36(D) failb= 37(D) OK [11504.882568] raid6test: test_disks(36, 38): faila= 36(D) failb= 38(D) OK [11504.883064] raid6test: test_disks(36, 39): faila= 36(D) failb= 39(D) OK [11504.883612] raid6test: test_disks(36, 40): faila= 36(D) failb= 40(D) OK [11504.884108] raid6test: test_disks(36, 41): faila= 36(D) failb= 41(D) OK [11504.884652] raid6test: test_disks(36, 42): faila= 36(D) failb= 42(D) OK [11504.885158] raid6test: test_disks(36, 43): faila= 36(D) failb= 43(D) OK [11504.885717] raid6test: test_disks(36, 44): faila= 36(D) failb= 44(D) OK [11504.886208] raid6test: test_disks(36, 45): faila= 36(D) failb= 45(D) OK [11504.886760] raid6test: test_disks(36, 46): faila= 36(D) failb= 46(D) OK [11504.887240] raid6test: test_diisks(36, 50): faila= 36(D) failb= 50(D) OK [11505.388181] raid6test: test_disks(36, 51): faila= 36(D) failb= 51(D) OK [11505.388871] raid6test: test_disks(36, 52): faila= 36(D) failb= 52(D) OK [11505.389541] raid6test: test_disks(36, 53): faila= 36(D) failb= 53(D) OK [11505.390153] raid6test: test_disks(36, 54): faila= 36(D) failb= 54(D) OK [11505.390833] raid6test: test_disks(36, 55): faila= 36(D) failb= 55(D) OK [11505.391500] raid6test: test_disks(36, 56): faila= 36(D) failb= 56(D) OK [11505.392094] raid6test: test_disks(36, 57): faila= 36(D) failb= 57(D) OK [11505.392776] raid6test: test_disks(36, 58): faila= 36(D) failb= 58(D) OK [11505.393392] raid6test: test_disks(36, 59): faila= 36(D) failb= 59(D) OK [11505.394047] raid6test: test_disks(36, 60): faila= 36(D) failb= 60(D) OK [11505.394725] raid6test: test_disks(36, 61): faila= 36(D) failb= 61(D) OK [11505.395323] raid6test: test_disks(36, 62): faila= 36(D) failb= 62(P) OK [11505.396033] raid6test: test_disks(36, 63): faila= 36(D) failb= 63(Q) OK [11505.396689] raid6test: test_disks(37, 38): faila= 37(D) failb= 38(D) OK [11505.397301] raid6test: test_disks(37, 39): faila= 37(D) failb= 39(D) OK [11505.397972] raid6test: test_disks(37, 40): faila= 37(D) failb= 40(D) OK [11505.398635] raid6test: test_disks(37, 41): 
faila= 37(D) failb= 44(D) OK [11505.899653] raid6test: test_disks(37, 45): faila= 37(D) failb= 45(D) OK [11505.900195] raid6test: test_disks(37, 46): faila= 37(D) failb= 46(D) OK [11505.900748] raid6test: test_disks(37, 47): faila= 37(D) failb= 47(D) OK [11505.901243] raid6test: test_disks(37, 48): faila= 37(D) failb= 48(D) OK [11505.901774] raid6test: test_disks(37, 49): faila= 37(D) failb= 49(D) OK [11505.902268] raid6test: test_disks(37, 50): faila= 37(D) failb= 50(D) OK [11505.902822] raid6test: test_disks(37, 51): faila= 37(D) failb= 51(D) OK [11505.903356] raid6test: test_disks(37, 52): faila= 37(D) failb= 52(D) OK [11505.903909] raid6test: test_disks(37, 53): faila= 37(D) failb= 53(D) OK [11505.904483] raid6test: test_disks(37, 54): faila= 37(D) failb= 54(D) OK [11505.905000] raid6test: test_disks(37, 55): faila= 37(D) failb= 55(D) OK [11505.905560] raid6test: test_disks(37, 56): faila= 37(D) failb= 56(D) OK [11505.906035] raid6test: test_disks(37, 57): faila= 37(D) failb= 57(D) OK [11505.906597] raid6test: test_disks(37, 58): faila= 37(D) failb= 58(D) OK [11505.907091] raid6test: test_disks(37, 59): faila= 37(D) fai[11506.307727] raid6test: test_disks(37, 62): faila= 37(D) failb= 62(P) OK [11506.308309] raid6test: test_disks(37, 63): faila= 37(D) failb= 63(Q) OK [11506.308858] raid6test: test_disks(38, 39): faila= 38(D) failb= 39(D) OK [11506.309407] raid6test: test_disks(38, 40): faila= 38(D) failb= 40(D) OK [11506.309976] raid6test: test_disks(38, 41): faila= 38(D) failb= 41(D) OK [11506.310524] raid6test: test_disks(38, 42): faila= 38(D) failb= 42(D) OK [11506.311091] raid6test: test_disks(38, 43): faila= 38(D) failb= 43(D) OK [11506.338713]isks(38, 44): faila= 38(D) failb= 44(D) OK [11506.411929] raid6test: test_disks(38, 45): faila= 38(D) failb= 45(D) OK [11506.412497] raid6test: test_disks(38, 46): faila= 38(D) failb= 46(D) OK [11506.412973] raid6test: test_disks(38, 47): faila= 38(D) failb= 47(D) OK [11506.413529] raid6test: test_disks(38, 48): faila= 38(D) failb= 48(D) OK [11506.414002] raid6test: test_disks(38, 49): faila= 38(D) failb= 49(D) OK [11506.414561] raid6test: test_disks(38, 50): faila= 38(D) failb= 50(D) OK [11506.415097] raid6test: test_disks(38, 51): faila= 38(D) failb= 51(D) OK [11506.415636] raid6test: test_disks(38, 52): faila= 38(D) failb= 52(D) OK [11506.416869] raid6test: test_disks(38, 53): fa[11506.817678] raid6test: test_disks(38, 56): faila= 38(D) failb= 56(D) OK [11506.818264] raid6test: test_disks(38, 57): faila= 38(D) failb= 57(D) OK [11506.818799] raid6test: test_disks(38, 58): faila= 38(D) failb= 58(D) OK [11506.819358] raid6test: test_disks(38, 59): faila= 38(D) failb= 59(D) OK [11506.819923] raid6test: test_disks(38, 60): faila= 38(D) failb= 60(D) OK [11506.820512] raid6test: test_disks(38, 61): faila= 38(D) failb= 61(D) OK [11506.821060] raid6test: test_disks(38, 62): faila= 38(D) failb= 62(P) OK [11506.821614] raid6test: test_disks(38, 63): faila= 38(D) failb= 63(Q) OK [11506.822182] raid6test: test_disks(39, 40): faila= 39(D) failb= 40(D) OK [11506.822759] raid6test: test_disks(39, 41): faila= 39(D) failb= 41(D) OK [11506.823315] raid6test: test_disks(39, 42): faila= 39(D) failb= 42(Daid6test: test_disks(39, 43): faila= 39(D) failb= 43(D) OK [11506.924319] raid6test: test_disks(39, 44): faila= 39(D) failb= 44(D) OK [11506.925222] raid6test: test_disks(39, 45): faila= 39(D) failb= 45(D) OK [11506.925755] raid6test: test_disks(39, 46): faila= 39(D) failb= 46(D) OK [11506.926282] raid6testaid6test: test_disks(39, 50): faila= 39(D) failb= 50(D) 
OK [11507.427510] raid6test: test_disks(39, 51): faila= 39(D) failb= 51(D) OK [11507.428049] raid6test: test_disks(39, 52): faila= 39(D) failb= 52(D) OK [11507.428649] raid6test: test_disks(39, 53): faila= 39(D) failb= 53(D) OK [11507.429214] raid6test: test_disks(39, 54): faila= 39(D) failb= 54(D) OK [11507.429796] raid6test: test_disks(39, 55): faila= 39(D) failb= 55(D) OK [11507.430350] raid6test: test_disks(39, 56): faila= 39(D) failb= 56(D) OK [11507.430932] raid6test: test_disks(39, 57): faila= 39(D) failb= 57(D) OK [11507.431536] raid6test: test_disks(39, 58): faila= 39(D) failb= 58(D) OK [11507.432069] raid6test: test_disks(39, 59): faila= 39(D) failb= 59(D) OK [11507.432648] raid6test: test_disks(39, 60): faila= 39(D) failb= 60(D) OK [11507.433218] raid6test: test_disks(39, 61): faila= 39(D) failb= 61(D) OK [11507.433795] raid6test: test_disks(39, 62): faila= 39(D) failb= 62(P) OK [11507.434364] raid6test: test_disks(39, 63): faila= 39(D) failb= 63(Q) OK [11507.434927] raid6test: test_disks(40, 41): faila= 40(D) failb= 41(D) OK [11507.462595]aid6test: test_disks(40, 44): faila= 40(D) failb= 44(D) OK [11507.836208] raid6test: test_disks(40, 45): faila= 40(D) failb= 45(D) OK [11507.836797] raid6test: test_disks(40, 46): faila= 40(D) failb= 46(D) OK [11507.837362] raid6test: test_disks(40, 47): faila= 40(D) failb= 47(D) OK [11507.837935] raid6test: test_disks(40, 48): faila= 40(D) failb= 48(D) OK [11507.838538] raid6test: test_disks(40, 49): faila= 40(D) failb= 49(D) OK [11507.839049] raid6test: test_disks(40, 50): faila= 40(D) failb= 50(D) OK [11507.839605] raid6test: test_disks(40, 51): faila= 40(D) failb= 51(D) OK [11507.840145] raid6test: test_disks(40, 52): faila= 40(D) failb= 52(D) OK [11507.840703] raid6test: test_disks(40, 53): faila= 40(D) failb= 53(D) OK [11507.841500] raid6test: test_disks(40, 54): faila= 40(D) failb= 54(D) OK [11507.842020] raid6test: test_disks(40, 55): faila= 40(D) failb= 55(D) OK [11507.842558] raid6test: test_disks(40, 56): faila= 40(D) failb= 56(D) OK [11507.843053] raid6test: test_disks(40, 57): faila= 40(D) failb= 57(D) OK [11507.843588] raid6test: test_disks(40, 58): faila= 40(D) failb= 58(D) OK [11507.844209] raid6test: test_disks(40, 59): faila= 40(D) failb= 59(D) OK [11507.844789] raid6test: test_disks(40, 60): fisks(40, 63): faila= 40(D) failb= 63(Q) OK [11508.345762] raid6test: test_dis[11508.446207] raid6test: test_disks(41, 43): faila= 41(D) failb= 43(D) OK [11508.446792] raid6test: test_disks(41, 44): faila= 41(D) failb= 44(D) OK [11508.447367] raid6test: test_disks(41, 45): faila= 41(D) failb= 45(D) OK [11508.447955] raid6test: test_disks(41, 46): faila= 41(D) failb= 46(D) OK [11508.448559] raid6test: test_disks(41, 47): faila= 41(D) failb= 47(D) OK [11508.449096] raid6test: test_disks(41, 48): faila= 41(D) failb= 48(D) OK [11508.449675] raid6test: test_disks(41, 49): faila= 41(D) failb= 49(D) OK [11508.450225] raid6test: test_disks(41, 50): faila= 41(D) failb= 50(D) OK [11508.450774] raid6test: test_disks(41, 51): faila= 41(D) failb= 51(D) OK [11508.451316] raid6test: test_disks(41, 52): faila= 41(D) failb= 52(D) OK [11508.451870] raid6test: test_disks(41, 53): faila= 41(D) failb= 53(D) OK [11508.452404] raid6test: test_disks(41, 54): faila= 41(D) failb= 54(D) OK [11508.452962] raid6test: test_disks(41, 55): faila= 41(D) failb= 55(D) OK [11508.453599] raid6test: test_disks(41, 56): faila= 41(D) failb= 56(D) OK [11508.454142] raid6test: test_disks(41, 57): faila= 41(D) failb= 57(D) OK[11508.835853] raid6test: test_disks(41, 60): faila= 
41(D) failb= 60(D) OK [11508.855607] raid6test: test_disks(41, 61): faila= 41(D) failb= 61(D) OK [11508.856143] raid6test: test_disks(41, 62): faila= 41(D) failb= 62(P) OK [11508.856744] raid6test: test_disks(41, 63): faila= 41(D) failb= 63(Q) OK [11508.857306] raid6test: test_disks(42, 43): faila= 42(D) failb= 43(D) OK [11508.857846] raid6test: test_disks(42, 44): faila= 42(D) failb= 44(D) OK [11508.858406] raid6test: test_disks(42, 45): faila= 42(D) failb= 45(D) OK [11508.858989] raid6test: test_disks(42, 46): faila= 42(D) failb= 46(D) OK [11508.859572] raid6test: test_disks(42, 47): faila= 42(D) failb= 47(D) OK [11508.860099] raid6test: test_disks(42, 48): faila= 42(D) failb= 48(D) OK [11508.860737] raid6test: test_disks(42, 49): faila= 42(D) failb= 49(D) OK [11508.861377] raid6test: test_disks(42, 50): faila= 42(D) failb= 50(D) OK [11508.861928] raid6tesb= 51(D) OK [11508.962353] raid6test: test_disks(42, 52): faila= 42(D) failb= 52(D) OK [11508.962947] raid6test: test_disks(42, 53): faila= 42(D) failb= 53(D) OK [11508.963566] raid6test: test_disks(42, 54): faila= 42(D) failb= 54(D) OK [11508.964101] raid6test: test_disks(42, 55): faila= 42(D) failb= 55(D)b= 58(D) OK [11509.465326] raid6test: test_disks(42, 59): faila= 42(D) failb= 59(D) OK [11509.465890] raid6test: test_disks(42, 60): faila= 42(D) failb= 60(D) OK [11509.466388] raid6test: test_disks(42, 61): faila= 42(D) failb= 61(D) OK [11509.466906] raid6test: test_disks(42, 62): faila= 42(D) failb= 62(P) OK [11509.467405] raid6test: test_disks(42, 63): faila= 42(D) failb= 63(Q) OK [11509.467919] raid6test: test_disks(43, 44): faila= 43(D) failb= 44(D) OK [11509.468375] raid6test: test_disks(43, 45): faila= 43(D) failb= 45(D) OK [11509.468890] raid6test: test_disks(43, 46): faila= 43(D) failb= 46(D) OK [11509.469381] raid6test: test_disks(43, 47): faila= 43(D) failb= 47(D) OK [11509.469897] raid6test: test_disks(43, 48): faila= 43(D) failb= 48(D) OK [11509.470404] raid6test: test_disks(43, 49): faila= 43(D) failb= 49(D) OK [11509.470939] raid6test: test_disks(43, 50): faila= 43(D) failb= 50(D) OK [11509.471540] raid6test: test_disks(43, 51): faila= 43(D) failb= 51(D) OK [11509.472069] raid6test: test_disks(43, 52): faila= 43(D) failb= 52(D) OK [11509.472677] raid6test: test_disks(43, 53): faila= 43(D) failb= 53(D) OK [11509.473201] raid6test: test_disks(43, 54): faila= 43(D) failb=b= 57(D) OK [11509.974084] raid6test: test_disks(43, 58): faila= 43(D) failb= 58(D) OK [11509.974691] raid6test: test_disks(43, 59): faila= 43(D) failb= 59(D) OK [11509.975233] raid6test: test_disks(43, 60): faila= 43(D) failb= 60(D) OK [11509.975815] raid6test: test_disks(43, 61): faila= 43(D) failb= 61(D) OK [11509.976343] raid6test: test_disks(43, 62): faila= 43(D) failb= 62(P) OK [11509.976937] raid6test: test_disks(43, 63): faila= 43(D) failb= 63(Q) OK [11509.977547] raid6test: test_disks(44, 45): faila= 44(D) failb= 45(D) OK [11509.978076] raid6test: test_disks(44, 46): faila= 44(D) failb= 46(D) OK [11509.978648] raid6test: test_disks(44, 47): faila= 44(D) failb= 47(D) OK [11509.979173] raid6test: test_disks(44, 48): faila= 44(D) failb= 48(D) OK [11509.979702] raid6test: test_disks(44, 49): faila= 44(D) failb= 49(D) OK [11509.980237] raid6test: test_disks(44, 50): faila= 44(D) failb= 50(D) OK [11509.980774] raid6test: test_disks(44, 51): faila= 44(D) failb= 51(D) OK [11509.981292] raid6test: test_disks(44, 52): faila= 44(D) [11510.382133] raid6test: test_disks(44, 55): faila= 44(D) failb= 55(D) OK [1isks(44, 56): faila= 44(D) failb= 56(D) OK 
[11510.482993] raid6test: test_disks(44, 57): faila= 44(D) failb= 57(D) OK [11510.483577] raid6test: test_disks(44, 58): faila= 44(D) failb= 58(D) OK [11510.484123] raid6test: test_disks(44, 59): faila= 44(D) failb= 59(D) OK [11510.484665] raid6test: test_disks(44, 60): faila= 44(D) failb= 60(D) OK [11510.485197] raid6test: test_disks(44, 61): faila= 44(D) failb= 61(D) OK [11510.485728] raid6test: test_disks(44, 62): faila= 44(D) failb= 62(P) OK [11510.486261] raid6test: test_disks(44, 63): faila= 44(D) failb= 63(Q) OK [11510.486819] raid6test: test_disks(45, 46): faila= 45(D) failb= 46(D) OK [11510.487359] raid6test: test_disks(45, 47): faila= 45(D) failb= 47(D) OK [11510.487923] raid6test: test_disks(45, 48): faila= 45(D) failb= 48(D) OK [11510.488315] raid6test: test_disks(45, 49): faila= 45(D) failb= 49(D) OK [11510.488856] raid6test: test_disks(45, 50): faila= 45(D) failb= 50(D) OK [11510.489412] raid6test: test_disks(45, 51): faila= 45(D) failb= 51(D) OK [11510.490012] raid6test: test_disks(45, 52): faila= 45(D) failb= 52(D) OK [11510.490578] raid6test: test_disks(45, 53): faila= 45(D) f[11510.891599] raid6test: test_disks(45, 56): faila= 45(D) failb= 56(D) OK [1isks(45, 57): faila= 45(D) failb= 57(D) OK [11510.992547] raid6test: test_disks(45, 58): faila= 45(D) failb= 58(D) OK [11510.993050] raid6test: test_disks(45, 59): faila= 45(D) failb= 59(D) OK [11510.993613] raid6test: test_disks(45, 60): faila= 45(D) failb= 60(D) OK [11510.994166] raid6test: test_disks(45, 61): faila= 45(D) failb= 61(D) OK [11510.994755] raid6test: test_disks(45, 62): faila= 45(D) failb= 62(P) OK [11510.995327] raid6test: test_disks(45, 63): faila= 45(D) failb= 63(Q) OK [11510.995891] raid6test: test_disks(46, 47): faila= 46(D) failb= 47(D) OK [11510.996443] raid6test: test_disks(46, 48): faila= 46(D) failb= 48(D) OK [11510.997011] raid6test: test_disks(46, 49): faila= 46(D) failb= 49(D) OK [11510.997589] raid6test: test_disks(46, 50): faila= 46(D) failb= 50(D) OK [11510.998119] raid6test: test_disks(46, 51): faila= 46(D) failb= 51(D) OK [11510.998666] raid6test: test_disks(46, 52): faila= 46(D) failb= 52(D) OK [11510.999200] raid6test: test_disks(46, 53): faila= 46(D) failb= 53(D) OK [11510.999748] raid6test: test_disks(46, 54): faila= 46(D) failb= 54(D) OK [11511.000261] raid6test: test_disks(46, 55): faila= 46(D) failb= 55(D) OK [11511.000821] raid6test: test_disks(46, 59): faila= 46(D) failb= 59(D) OK [11511.502023] raid6test: test_disks(46, 60): faila= 46(D) failb= 60(D) OK [11511.502619] raid6test: test_disks(46, 61): faila= 46(D) failb= 61(D) OK [11511.503176] raid6test: test_disks(46, 62): faila= 46(D) failb= 62(P) OK [11511.503779] raid6test: test_disks(46, 63): faila= 46(D) failb= 63(Q) OK [11511.504336] raid6test: test_disks(47, 48): faila= 47(D) failb= 48(D) OK [11511.504944] raid6test: test_disks(47, 49): faila= 47(D) failb= 49(D) OK [11511.505425] raid6test: test_disks(47, 50): faila= 47(D) failb= 50(D) OK [11511.505998] raid6test: test_disks(47, 51): faila= 47(D) failb= 51(D) OK [11511.506605] raid6test: test_disks(47, 52): faila= 47(D) failb= 52(D) OK [11511.507141] raid6test: test_disks(47, 53): faila= 47(D) failb= 53(D) OK [11511.507715] raid6test: test_disks(47, 54): faila= 47(D) failb= 54(D) OK [11511.508287] raid6test: test_disks(47, 55): faila= 47(D) failb= 55(D) OK [11511.508861] raid6test: test_disks(47, 56): faila= 47(D) failb= 56(D) OK [11511.509417] raid6test: test_disks(47, 57): faila= 47(D) failb= 57(D) OK [11511.509991] raid6test: test_disks(47, 58): faila= 47(D) failb= 58(D) OK 
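A note on the test_disks(i, j) records above: they are the raid6test module's self-test, which knocks out two slots of a 64-slot stripe at a time and verifies that recovery reproduces the original contents. A "(D)" suffix marks a data slot, and, as the log itself shows, slot 62 carries the P (XOR) parity and slot 63 the Q (Reed-Solomon) syndrome. For readers who want the arithmetic behind the "(D)/(D)" cases, the sketch below redoes two-data-disk recovery over GF(2^8) with the standard RAID-6 parameters (generator {02}, reduction polynomial 0x11d). It is a minimal standalone Python illustration; the helper names are hypothetical and this is not the kernel's raid6 code.

    # Standalone illustration of RAID-6 two-data-disk recovery over GF(2^8).
    # Helper names are hypothetical; this is not the kernel's raid6 implementation.
    import os
    from functools import reduce

    POLY = 0x11d  # x^8 + x^4 + x^3 + x^2 + 1, the RAID-6 reduction polynomial

    def gf_mul(a, b):
        """Carry-less multiply of two GF(2^8) elements, reduced mod POLY."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x100:
                a ^= POLY
            b >>= 1
        return r

    def gf_pow(a, n):
        """a**n in GF(2^8); the multiplicative group has order 255."""
        r = 1
        for _ in range(n % 255):
            r = gf_mul(r, a)
        return r

    def gf_inv(a):
        return gf_pow(a, 254)  # a**254 == a**-1 for nonzero a

    def syndromes(data):
        """P = XOR of all data bytes, Q = sum of g**i * D_i with g = {02}."""
        p = reduce(lambda x, y: x ^ y, data, 0)
        q = 0
        for i, d in enumerate(data):
            q ^= gf_mul(gf_pow(2, i), d)
        return p, q

    def recover_two_data(data, x, y, p, q):
        """Rebuild data[x] and data[y] (x < y) from the survivors plus P and Q."""
        survivors = [0 if i in (x, y) else d for i, d in enumerate(data)]
        pxy, qxy = syndromes(survivors)
        dp, dq = p ^ pxy, q ^ qxy            # dp = Dx ^ Dy,  dq = g^x*Dx ^ g^y*Dy
        gyx = gf_pow(2, y - x)
        denom = gf_inv(gyx ^ 1)              # (g^(y-x) + 1)^-1, nonzero since x != y
        a = gf_mul(gyx, denom)
        b = gf_mul(gf_inv(gf_pow(2, x)), denom)
        dx = gf_mul(a, dp) ^ gf_mul(b, dq)
        dy = dp ^ dx
        return dx, dy

    if __name__ == "__main__":
        data = list(os.urandom(62))          # 62 data slots; P and Q sit at 62 and 63
        p, q = syndromes(data)
        for x, y in [(33, 42), (35, 47), (0, 61)]:   # a few test_disks-style pairs
            dx, dy = recover_two_data(data, x, y, p, q)
            assert (dx, dy) == (data[x], data[y])
            print(f"test_disks({x}, {y}): faila= {x}(D) failb= {y}(D) OK")

The cases where one failed slot is P or Q (the "62(P)" and "63(Q)" records) are simpler: the surviving syndrome alone rebuilds the missing data slot, after which the lost parity is recomputed from the repaired data.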
[11511.510596] raid6test: test_disks(47, 59): faila= 47(D) failb= 59(D) OK [11511.511128] raid6test: test_disks(47, 60): faila= 47(D) failb= 60(D) OK [11511.511699] raid6test: test_disks(isks(48, 49): faila= 48(D) failb= 49(D) OK [11512.012704] raid6test: test_disks(48, 50): faila= 48(D) failb= 50(D) OK [11512.013220] raid6test: test_disks(48, 51): faila= 48(D) failb= 51(D) OK [11512.013956] raid6test: test_disks(48, 52): faila= 48(D) failb= 52(D) OK [11512.014433] raid6test: test_disks(48, 53): faila= 48(D) failb= 53(D) OK [11512.015006] raid6test: test_disks(48, 54): faila= 48(D) failb= 54(D) OK [11512.015585] raid6test: test_disks(48, 55): faila= 48(D) failb= 55(D) OK [11512.016117] raid6test: test_disks(48, 56): faila= 48(D) failb= 56(D) OK [11512.016684] raid6test: test_disks(48, 57): faila= 48(D) failb= 57(D) OK [11512.017288] raid6test: test_disks(48, 58): faila= 48(D) failb= 58(D) OK [11512.017829] raid6test: test_disks(48, 59): faila= 48(D) failb= 59(D) OK [11512.018343] raid6test: test_disks(48, 60): faila= 48(D) failb= 60(D) OK [11512.018875] raid6test: test_disks(48, 61): faila= 48(D) failb= 61(D) OK [11512.019357] raid6test: test_disks(48, 62): faila= 48(D) failb= 62(P) OK [11512.019903] raid6test: test_disks(48, 63): faila= 48(D) failb= 63(Q) OK [11512.020426] raid6test: test_disks(49, 50): faila= 49(D) failb= 50(D) OK [11512.020957] raid6test: test_disks(49, 51): faila= 49(D) failb= 51(D) OK [11512.021434] raid6test: test_disks(49, 52): faila= 49(D) failb= 55(D) OK [11512.522330] raid6test: test_disks(49, 56): faila= 49(D) failb= 56(D) OK [11512.522937] raid6test: test_disks(49, 57): faila= 49(D) failb= 57(D) OK [11512.523538] raid6test: test_disks(49, 58): faila= 49(D) failb= 58(D) OK [11512.524065] raid6test: test_disks(49, 59): faila= 49(D) failb= 59(D) OK [11512.524646] raid6test: test_disks(49, 60): faila= 49(D) failb= 60(D) OK [11512.525205] raid6test: test_disks(49, 61): faila= 49(D) failb= 61(D) OK [11512.525778] raid6test: test_disks(49, 62): faila= 49(D) failb= 62(P) OK [11512.526309] raid6test: test_disks(49, 63): faila= 49(D) failb= 63(Q) OK [11512.526877] raid6test: test_disks(50, 51): faila= 50(D) failb= 51(D) OK [11512.527441] raid6test: test_disks(50, 52): faila= 50(D) failb= 52(D) OK [11512.527989] raid6test: test_disks(50, 53): faila= 50(D) failb= 53(D) OK [11512.528593] raid6test: test_disks(50, 54): faila= 50(D) failb= 54(D) OK [11512.529125] raid6test: test_disks(50, 55): faila= 50(D) failb= 55(D) OK [11512.529696] raid6test: test_disks(50, 56): faila= 50(D) failb= 56(D) OK [11512.530223] raid6test: test_disks(50, 57): faila= 50(D) failb= 57(D) OK [11512.530795] raid6test: test_disks(50, 58): faila= 50(D) failb= 58(D) OK [11512.531365] raid6test: test_disks(50, 59): faila= 50(D) failb= b= 62(P) OK [11513.032232] raid6test: test_disks(50, 63): faila= 50(D) failb= 63(Q) OK [11513.032783] raid6test: test_disks(51, 52): faila= 51(D) failb= 52(D) OK [11513.033286] raid6test: test_disks(51, 53): faila= 51(D) failb= 53(D) OK [11513.033803] raid6test: test_disks(51, 54): faila= 51(D) failb= 54(D) OK [11513.034308] raid6test: test_disks(51, 55): faila= 51(D) failb= 55(D) OK [11513.034852] raid6test: test_disks(51, 56): faila= 51(D) failb= 56(D) OK [11513.035362] raid6test: test_disks(51, 57): faila= 51(D) failb= 57(D) OK [11513.035873] raid6test: test_disks(51, 58): faila= 51(D) failb= 58(D) OK [11513.036379] raid6test: test_disks(51, 59): faila= 51(D) failb= 59(D) OK [11513.036887] raid6test: test_disks(51, 60): faila= 51(D) failb= 60(D) OK [11513.037390] 
raid6test: test_disks(51, 61): faila= 51(D) failb= 61(D) OK [11513.037898] raid6test: test_disks(51, 62): faila= 51(D) failb= 62(P) OK [11513.038407] raid6test: test_disks(51, 63): faila= 51(D) failb= 63(Q) OK [11513.038915] raid6test: test_disks(52, 53): faila= 52(D) failb= 53(D) OK [11513.039422] raid6test: test_disks(52, 54): faila= 52(D) failb= 54(D) OK [11513.067426][11513.540289] raid6test: test_disks(52, 58): faila= 52(D) failb= 58(D) OK [11513.540845] raid6test: test_disks(52, 59): faila= 52(D) failb= 59(D) OK [11513.541726] raid6test: test_disks(52, 60): faila= 52(D) failb= 60(D) OK [11513.542249] raid6test: test_disks(52, 61): faila= 52(D) failb= 61(D) OK [11513.542793] raid6test: test_disks(52, 62): faila= 52(D) failb= 62(P) OK [11513.543334] raid6test: test_disks(52, 63): faila= 52(D) failb= 63(Q) OK [11513.543877] raid6test: test_disks(53, 54): faila= 53(D) failb= 54(D) OK [11513.544403] raid6test: test_disks(53, 55): faila= 53(D) failb= 55(D) OK [11513.544946] raid6test: test_disks(53, 56): faila= 53(D) failb= 56(D) OK [11513.545508] raid6test: test_disks(53, 57): faila= 53(D) failb= 57(D) OK [11513.546010] raid6test: test_disks(53, 58): faila= 53(D) failb= 58(D) OK [11513.546569] raid6test: test_disks(53, 59): faila= 53(D) failb= 59(D) OK [11513.547075] raid6test: test_disks(53, 60): faila= 53(D) failb= 60(D) OK [11513.547615] raid6test: test_disks(53, 61): faila= 53(D) failb= 61(D) OK [11513.548122] raid6test: test_disks(53, 62): faila= 53(D) failb= 62(P) OK b= 56(D) OK [11514.048904] raid6test: test_disks(54, 57): faila= 54(D) failb= 57(D) OK [11514.049415] raid6test: test_disks(54, 58): faila= 54(D) failb= 58(D) OK [11514.049971] raid6test: test_disks(54, 59): faila= 54(D) failb= 59(D) OK [11514.050535] raid6test: test_disks(54, 60): faila= 54(D) failb= 60(D) OK [11514.051040] raid6test: test_disks(54, 61): faila= 54(D) failb= 61(D) OK [11514.051598] raid6test: test_disks(54, 62): faila= 54(D) failb= 62(P) OK [11514.052120] raid6test: test_disks(54, 63): faila= 54(D) failb= 63(Q) OK [11514.052657] raid6test: test_disks(55, 56): faila= 55(D) failb= 56(D) OK [11514.053163] raid6test: test_disks(55, 57): faila= 55(D) failb= 57(D) OK [11514.053673] raid6test: test_disks(55, 58): faila= 55(D) failb= 58(D) OK [11514.054191] raid6test: test_disks(55, 59): faila= 55(D) failb= 59(D) OK [11514.054697] raid6test: test_disks(55, 60): faila= 55(D) failb= 60(D) OK [11514.055242] raid6test: test_disks(55, 61): faila= 55(D) failb= 61(D) OK [11514.055779] raid6test: test_disks(55, 62): faila= 55(D) failb= 62(P) OK [11514.056325] raid6test: test_disks(55, 63): faila= 55(D) failb= 63(Q) OK [11514.056866] raid6test: test_disks(56, 57): faila= 56(D) failb=b= 60(D) OK [11514.557819] raid6test: test_disks(56, 61): faila= 56(D) failb= 61(D) OK [11514.558355] raid6test: test_disks(56, 62): faila= 56(D) failb= 62(P) OK [11514.558935] raid6test: test_disks(56, 63): faila= 56(D) failb= 63(Q) OK [11514.559479] raid6test: test_disks(57, 58): faila= 57(D) failb= 58(D) OK [11514.560000] raid6test: test_disks(57, 59): faila= 57(D) failb= 59(D) OK [11514.560528] raid6test: test_disks(57, 60): faila= 57(D) failb= 60(D) OK [11514.561035] raid6test: test_disks(57, 61): faila= 57(D) failb= 61(D) OK [11514.561592] raid6test: test_disks(57, 62): faila= 57(D) failb= 62(P) OK [11514.562112] raid6test: test_disks(57, 63): faila= 57(D) failb= 63(Q) OK [11514.562649] raid6test: test_disks(58, 59): faila= 58(D) failb= 59(D) OK [11514.563153] raid6test: test_disks(58, 60): faila= 58(D) failb= 60(D) OK [11514.563698] 
raid6test: test_disks(58, 61): faila= 58(D) failb= 61(D) OK [11514.564200] raid6test: test_disks(58, 62): faila= 58(D) failb= 62(P) OK [11514.564717] raid6test: test_disks(58, 63): faila= 58(D) failb= 63(Q) OK [11514.565261] raid6test: test_disks(59, 60): faila= 59(D) failb= 60(D) OK [11514.565801] raid6test: test_disks(59, 61): faila= 59(D) failb= 61(D) OK [11514.566333] raid6test: test_disks(59, 62): faila= 59(D) failb= 62(P) OK [11514.566886] raid6test: test_disks(59, 63): faila= 59(D) failb= 63(Q) OK [11514.567383] raidaid6test: test_disks(61, 62): faila= 61(D) failb= 62(P) OK [11515.068185] raid6test: test_disks(61, 63): faila= 61(D) failb= 63(Q) OK [11515.068722] raid6test: test_disks(62, 63): faila= 62(P) failb= 63(Q) OK [11515.069139] raid6test: [11515.069284] raid6test: complete (2429 tests, 0 failures) ** Attempting to unload raid6test... ** ** Attempting to load raid_class... ** ** Attempting to unload raid_class... ** ** Attempting to load ramoops... ** ** Attempting to unload ramoops... ** ** Attempting to load rbd... ** [11520.204341] Key type ceph registered [11520.207665] libceph: loaded (mon/osd proto 15/24) [11520.281062] rbd: loaded (major 252) ** Attempting to unload rbd... ** [11520.920872] Key type ceph unregistered ** Attempting to load rdma_cm... ** ** Attempting to unload rdma_cm... ** ** Attempting to load rdma_ucm... ** ** Attempting to unload rdma_ucm... ** ** Attempting to load reed_solomon... ** ** Attempting to unload reed_solomon... ** ** Attempting to load rfcomm... ** [11529.348553] Bluetooth: Core ver 2.22 [11529.349284] NET: Registered PF_BLUETOOTH protocol family [11529.349687] Bluetooth: HCI device and connection manager initialized [11529.350364] Bluetooth: HCI socket layer initialized [11529.351338] Bluetooth: L2CAP socket layer initialized [11529.351926] Bluetooth: SCO socket layer initialized [11529.381881] Bluetooth: RFCOMM TTY layer initialized [11529.382709] Bluetooth: RFCOMM socket layer initialized [11529.383239] Bluetooth: RFCOMM ver 1.11 ** Attempting to unload rfcomm... ** [11529.964912] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load ring_buffer_benchmark... ** ** Attempting to unload ring_buffer_benchmark... ** ** Attempting to load rmd160... ** ** Attempting to unload rmd160... ** ** Attempting to load rpcrdma... ** [11538.215900] RPC: Registered rdma transport module. [11538.216618] RPC: Registered rdma backchannel transport module. ** Attempting to unload rpcrdma... ** [11538.727186] RPC: Unregistered rdma transport module. [11538.727579] RPC: Unregistered rdma backchannel transport module. ** Attempting to load sch_cake... ** ** Attempting to unload sch_cake... ** ** Attempting to load sch_cbs... ** ** Attempting to unload sch_cbs... ** ** Attempting to load sch_etf... ** ** Attempting to unload sch_etf... ** ** Attempting to load sch_ets... ** ** Attempting to unload sch_ets... ** ** Attempting to load sch_fq... ** ** Attempting to unload sch_fq... ** ** Attempting to load sch_hfsc... ** ** Attempting to unload sch_hfsc... ** ** Attempting to load sch_htb... ** ** Attempting to unload sch_htb... ** ** Attempting to load sch_ingress... ** ** Attempting to unload sch_ingress... ** ** Attempting to load sch_prio... ** ** Attempting to unload sch_prio... ** ** Attempting to load sch_sfq... ** ** Attempting to unload sch_sfq... ** ** Attempting to load sch_taprio... ** ** Attempting to unload sch_taprio... ** ** Attempting to load sch_tbf... ** ** Attempting to unload sch_tbf... 
** ** Attempting to load scsi_transport_iscsi... ** [11560.293424] Loading iSCSI transport class v2.0-870. ** Attempting to unload scsi_transport_iscsi... ** ** Attempting to load serpent_generic... ** ** Attempting to unload serpent_generic... ** ** Attempting to load serport... ** ** Attempting to unload serport... ** ** Attempting to load sit... ** [11566.914703] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload sit... ** ** Attempting to load snd... ** ** Attempting to unload snd... ** ** Attempting to load snd_hda_codec_hdmi... ** ** Attempting to unload snd_hda_codec_hdmi... ** ** Attempting to load snd_hrtimer... ** ** Attempting to unload snd_hrtimer... ** ** Attempting to load snd_seq_dummy... ** ** Attempting to unload snd_seq_dummy... ** ** Attempting to load snd_timer... ** ** Attempting to unload snd_timer... ** ** Attempting to load softdog... ** [11580.463674] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0) [11580.464655] softdog: soft_reboot_cmd= soft_active_on_boot=0 ** Attempting to unload softdog... ** ** Attempting to load soundcore... ** ** Attempting to unload soundcore... ** ** Attempting to load sr_mod... ** ** Attempting to unload sr_mod... ** [11584.623906] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load tap... ** ** Attempting to unload tap... ** ** Attempting to load target_core_file... ** [11588.313461] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11588.315279] db_root: cannot open: /etc/target ** Attempting to unload target_core_file... ** ** Attempting to load target_core_iblock... ** [11590.320414] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11590.322012] db_root: cannot open: /etc/target ** Attempting to unload target_core_iblock... ** ** Attempting to load target_core_mod... ** [11592.445347] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11592.446878] db_root: cannot open: /etc/target ** Attempting to unload target_core_mod... ** ** Attempting to load target_core_pscsi... ** [11594.492203] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11594.493983] db_root: cannot open: /etc/target ** Attempting to unload target_core_pscsi... ** ** Attempting to load target_core_user... ** [11596.638426] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11596.640149] db_root: cannot open: /etc/target ** Attempting to unload target_core_user... ** ** Attempting to load tcm_fc... ** [11599.307469] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11599.309057] db_root: cannot open: /etc/target ** Attempting to unload tcm_fc... ** ** Attempting to load tcm_loop... ** [11601.497533] Rounding down aligned max_sectors from 4294967295 to 4294967288 [11601.499147] db_root: cannot open: /etc/target ** Attempting to unload tcm_loop... ** ** Attempting to load tcp_bbr... ** ** Attempting to unload tcp_bbr... ** ** Attempting to load tcp_dctcp... ** ** Attempting to unload tcp_dctcp... ** ** Attempting to load tcp_nv... ** ** Attempting to unload tcp_nv... ** ** Attempting to load team... ** [11608.111116] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team... ** ** Attempting to load team_mode_activebackup... ** [11609.719885] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_activebackup... 
** ** Attempting to load team_mode_broadcast... ** [11611.352029] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_broadcast... ** ** Attempting to load team_mode_loadbalance... ** [11612.973138] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_loadbalance... ** ** Attempting to load team_mode_random... ** [11614.620832] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_random... ** ** Attempting to load team_mode_roundrobin... ** [11616.177646] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_roundrobin... ** ** Attempting to load tipc... ** [11618.638857] tipc: Activated (version 2.0.0) [11618.640557] NET: Registered PF_TIPC protocol family [11618.642213] tipc: Started in single node mode ** Attempting to unload tipc... ** [11619.230571] NET: Unregistered PF_TIPC protocol family [11619.412790] tipc: Deactivated ** Attempting to load tls... ** ** Attempting to unload tls... ** ** Attempting to load ts_bm... ** ** Attempting to unload ts_bm... ** ** Attempting to load ts_fsm... ** ** Attempting to unload ts_fsm... ** ** Attempting to load tun... ** [11626.115231] tun: Universal TUN/TAP device driver, 1.6 ** Attempting to unload tun... ** ** Attempting to load tunnel4... ** ** Attempting to unload tunnel4... ** ** Attempting to load tunnel6... ** ** Attempting to unload tunnel6... ** ** Attempting to load twofish_common... ** ** Attempting to unload twofish_common... ** ** Attempting to load twofish_generic... ** ** Attempting to unload twofish_generic... ** ** Attempting to load ubi... ** ** Attempting to unload ubi... ** ** Attempting to load udf... ** ** Attempting to unload udf... ** [11636.567297] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load udp_tunnel... ** ** Attempting to unload udp_tunnel... ** ** Attempting to load uhid... ** ** Attempting to unload uhid... ** ** Attempting to load uinput... ** ** Attempting to unload uinput... ** ** Attempting to load uio... ** ** Attempting to unload uio... ** ** Attempting to load uio_pci_generic... ** ** Attempting to unload uio_pci_generic... ** ** Attempting to load usb_wwan... ** ** Attempting to unload usb_wwan... ** ** Attempting to load usbnet... ** ** Attempting to unload usbnet... ** ** Attempting to load veth... ** ** Attempting to unload veth... ** ** Attempting to load vhost... ** ** Attempting to unload vhost... ** ** Attempting to load vhost_iotlb... ** ** Attempting to unload vhost_iotlb... ** ** Attempting to load vhost_net... ** [11655.047503] tun: Universal TUN/TAP device driver, 1.6 ** Attempting to unload vhost_net... ** ** Attempting to load vhost_vdpa... ** ** Attempting to unload vhost_vdpa... ** ** Attempting to load vhost_vsock... ** [11658.478993] NET: Registered PF_VSOCK protocol family ** Attempting to unload vhost_vsock... ** [11659.183904] NET: Unregistered PF_VSOCK protocol family ** Attempting to load videodev... ** [11660.314810] mc: Linux media interface: v0.10 [11660.442384] videodev: Linux video capture interface: v2.00 ** Attempting to unload videodev... ** ** Attempting to load virtio_gpu... ** ** Attempting to unload virtio_gpu... ** ** Attempting to load virtio_balloon... 
** ** Attempting to unload virtio_balloon... ** ** Attempting to load virtio_blk... ** ** Attempting to unload virtio_blk... ** ** Attempting to load virtio_dma_buf... ** ** Attempting to unload virtio_dma_buf... ** ** Attempting to load virtio_input... ** ** Attempting to unload virtio_input... ** ** Attempting to load virtio_net... ** ** Attempting to unload virtio_net... ** ** Attempting to load virtio_scsi... ** ** Attempting to unload virtio_scsi... ** ** Attempting to load virtio_vdpa... ** ** Attempting to unload virtio_vdpa... ** ** Attempting to load virtiofs... ** ** Attempting to unload virtiofs... ** ** Attempting to load vmac... ** ** Attempting to unload vmac... ** ** Attempting to load vport_geneve... ** [11678.761463] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_geneve... ** ** Attempting to load vport_gre... ** [11681.878103] gre: GRE over IPv4 demultiplexor driver [11682.333404] openvswitch: Open vSwitch switching datapath [11682.355394] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload vport_gre... ** ** Attempting to load vport_vxlan... ** [11685.935681] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_vxlan... ** ** Attempting to load vringh... ** ** Attempting to unload vringh... ** ** Attempting to load vsock_diag... ** [11690.600210] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsock_diag... ** [11691.134146] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vsockmon... ** [11692.229903] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsockmon... ** [11692.821206] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vxlan... ** ** Attempting to unload vxlan... ** ** Attempting to load wireguard... ** [11695.748994] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [11695.749615] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. [11695.750237] TECH PREVIEW: WireGuard may not be fully supported. [11695.750237] Please review provided documentation for limitations. ** Attempting to unload wireguard... ** ** Attempting to load wp512... ** ** Attempting to unload wp512... ** ** Attempting to load xcbc... ** ** Attempting to unload xcbc... ** ** Attempting to load xfrm4_tunnel... ** ** Attempting to unload xfrm4_tunnel... ** ** Attempting to load xfrm6_tunnel... ** ** Attempting to unload xfrm6_tunnel... ** ** Attempting to load xfrm_interface... ** [11707.537205] IPsec XFRM device driver ** Attempting to unload xfrm_interface... ** ** Attempting to load xfrm_ipcomp... ** ** Attempting to unload xfrm_ipcomp... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xsk_diag... ** ** Attempting to unload xsk_diag... ** ** Attempting to load xt_AUDIT... ** ** Attempting to unload xt_AUDIT... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_CHECKSUM... ** ** Attempting to unload xt_CHECKSUM... ** ** Attempting to load xt_CLASSIFY... ** ** Attempting to unload xt_CLASSIFY... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** ** Attempting to load xt_DSCP... ** [-- MARK -- Fri Feb 3 09:00:00 2023] ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... 
** ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_MASQUERADE... ** ** Attempting to unload xt_MASQUERADE... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG... ** ** Attempting to unload xt_NFLOG... ** ** Attempting to load xt_NFQUEUE... ** ** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... ** ** Attempting to load xt_SECMARK... ** ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_TCPMSS... ** ** Attempting to unload xt_TCPMSS... ** ** Attempting to load xt_TCPOPTSTRIP... ** ** Attempting to unload xt_TCPOPTSTRIP... ** ** Attempting to load xt_TEE... ** ** Attempting to unload xt_TEE... ** ** Attempting to load xt_TPROXY... ** ** Attempting to unload xt_TPROXY... ** ** Attempting to load xt_TRACE... ** ** Attempting to unload xt_TRACE... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_cluster... ** ** Attempting to unload xt_cluster... ** ** Attempting to load xt_comment... ** ** Attempting to unload xt_comment... ** ** Attempting to load xt_connbytes... ** ** Attempting to unload xt_connbytes... ** ** Attempting to load xt_connlabel... ** ** Attempting to unload xt_connlabel... ** ** Attempting to load xt_connlimit... ** ** Attempting to unload xt_connlimit... ** ** Attempting to load xt_connmark... ** ** Attempting to unload xt_connmark... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_conntrack... ** ** Attempting to unload xt_conntrack... ** ** Attempting to load xt_cpu... ** ** Attempting to unload xt_cpu... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** ** Attempting to load xt_dccp... ** ** Attempting to unload xt_dccp... ** ** Attempting to load xt_devgroup... ** ** Attempting to unload xt_devgroup... ** ** Attempting to load xt_dscp... ** ** Attempting to unload xt_dscp... ** ** Attempting to load xt_DSCP... ** ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_ecn... ** ** Attempting to unload xt_ecn... ** ** Attempting to load xt_esp... ** ** Attempting to unload xt_esp... ** ** Attempting to load xt_hashlimit... ** ** Attempting to unload xt_hashlimit... ** ** Attempting to load xt_helper... ** ** Attempting to unload xt_helper... ** ** Attempting to load xt_hl... ** ** Attempting to unload xt_hl... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... ** ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_iprange... ** ** Attempting to unload xt_iprange... ** ** Attempting to load xt_ipvs... ** [11802.223270] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [11802.225394] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [11802.226329] IPVS: Each connection entry needs 416 bytes at least [11802.228532] IPVS: ipvs loaded. ** Attempting to unload xt_ipvs... 
** [11802.784848] IPVS: ipvs unloaded. ** Attempting to load xt_length... ** ** Attempting to unload xt_length... ** ** Attempting to load xt_limit... ** ** Attempting to unload xt_limit... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_mac... ** ** Attempting to unload xt_mac... ** ** Attempting to load xt_mark... ** ** Attempting to unload xt_mark... ** ** Attempting to load xt_multiport... ** ** Attempting to unload xt_multiport... ** ** Attempting to load xt_nat... ** ** Attempting to unload xt_nat... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG..** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_osf... ** ** Attempting to unload xt_osf... ** ** Attempting to load xt_owner... ** ** Attempting to unload xt_owner... ** ** Attempting to load xt_physdev... ** ** Attempting to unload xt_physdev... ** ** Attempting to load xt_pkttype... ** ** Attempting to unload xt_pkttype... ** ** Attempting to load xt_policy... ** ** Attempting to unload xt_policy... ** ** Attempting to load xt_quota... ** ** Attempting to unload xt_quota... ** ** Attempting to load xt_rateest... ** ** Attempting to unload xt_rateest... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_realm... ** ** Attempting to unload xt_realm... ** ** Attempting to load xt_recent... ** ** Attempting to unload xt_recent... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... ** ** Attempting to load xt_SECMARK... ** ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_set... ** ** Attempting to unload xt_set... ** ** Attempting to load xt_socket... ** ** Attempting to unload xt_socket... ** ** Attempting to load xt_state... ** ** Attempting to unload xt_state... ** ** Attempting to load xt_statistic... ** ** Attempting to unload xt_statistic... ** ** Attempting to load xt_string... ** ** Attempting to unload xt_string... ** ** Attempting to load xt_tcpmss... ** ** Attempting to unload xt_tcpmss... ** ** Attempting to load xt_TCPMSS... ** ** Attempting to unload xt_TCPMSS... ** ** Attempting to load xt_TCPOPTSTRIP... ** ** Attempting to unload xt_TCPOPTSTRIP... ** ** Attempting to load xt_TEE... ** ** Attempting to unload xt_TEE... ** ** Attempting to load xt_TPROXY... ** ** Attempting to unload xt_TPROXY... ** ** Attempting to load xt_TRACE... ** ** Attempting to unload xt_TRACE... ** ** Attempting to load xxhash_generic... ** ** Attempting to unload xxhash_generic... ** ** Attempting to load blowfish... ** ** Attempting to unload blowfish... ** ** Attempting to load 8021q... ** [11870.696927] 8021q: 802.1Q VLAN Support v1.8 ** Attempting to unload 8021q... ** ** Attempting to load act_bpf... ** ** Attempting to unload act_bpf... ** ** Attempting to load act_csum... ** ** Attempting to unload act_csum... ** ** Attempting to load act_gact... ** [11875.557918] GACT probability on ** Attempting to unload act_gact... ** ** Attempting to load act_mirred... ** [11877.110059] Mirror/redirect action on ** Attempting to unload act_mirred... ** ** Attempting to load act_pedit... ** ** Attempting to unload act_pedit... ** ** Attempting to load act_police... ** ** Attempting to unload act_police... ** ** Attempting to load act_sample... ** ** Attempting to unload act_sample... ** ** Attempting to load act_skbedit... ** ** Attempting to unload act_skbedit... ** ** Attempting to load act_tunnel_key... 
** ** Attempting to unload act_tunnel_key... ** ** Attempting to load act_vlan... ** ** Attempting to unload act_vlan... ** ** Attempting to load adiantum... ** ** Attempting to unload adiantum... ** ** Attempting to load af_key... ** [11890.096154] NET: Registered PF_KEY protocol family ** Attempting to unload af_key... ** [11890.604868] NET: Unregistered PF_KEY protocol family ** Attempting to load ah4... ** ** Attempting to unload ah4... ** ** Attempting to load ah6... ** ** Attempting to unload ah6... ** ** Attempting to load ansi_cprng... ** [11895.135951] alg: No test for fips(ansi_cprng) (fips_ansi_cprng) ** Attempting to unload ansi_cprng... ** ** Attempting to load apple_bl... ** ** Attempting to unload apple_bl... ** ** Attempting to load aquantia... ** ** Attempting to unload aquantia... ** ** Attempting to load arc_ps2... ** ** Attempting to unload arc_ps2... ** ** Attempting to load arp_tables... ** [11902.324682] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arp_tables... ** ** Attempting to load arpt_mangle... ** ** Attempting to unload arpt_mangle... ** ** Attempting to load arptable_filter... ** [11905.444212] Warning: Deprecated Driver is detected: arptables will not be maintained in a future major release and may be disabled ** Attempting to unload arptable_filter... ** ** Attempting to load asym_tpm... ** ** Attempting to unload asym_tpm... ** ** Attempting to load async_memcpy... ** [11908.618037] async_tx: api initialized (async) ** Attempting to unload async_memcpy... ** ** Attempting to load async_pq... ** [11910.255335] raid6: skip pq benchmark and using algorithm sse2x4 [11910.256177] raid6: using ssse3x2 recovery algorithm [11910.271649] async_tx: api initialized (async) ** Attempting to unload async_pq... ** ** Attempting to load async_raid6_recov... ** [11911.993311] raid6: skip pq benchmark and using algorithm sse2x4 [11911.994179] raid6: using ssse3x2 recovery algorithm [11912.008803] async_tx: api initialized (async) ** Attempting to unload async_raid6_recov... ** ** Attempting to load async_tx... ** [11913.735234] async_tx: api initialized (async) ** Attempting to unload async_tx... ** ** Attempting to load async_xor... ** [11915.326231] async_tx: api initialized (async) ** Attempting to unload async_xor... ** ** Attempting to load bareudp... ** ** Attempting to unload bareudp... ** ** Attempting to load blowfish_common... ** ** Attempting to unload blowfish_common... ** ** Attempting to load blowfish_generic... ** ** Attempting to unload blowfish_generic... ** ** Attempting to load bluetooth... ** [11923.189105] Bluetooth: Core ver 2.22 [11923.189647] NET: Registered PF_BLUETOOTH protocol family [11923.189969] Bluetooth: HCI device and connection manager initialized [11923.190600] Bluetooth: HCI socket layer initialized [11923.191522] Bluetooth: L2CAP socket layer initialized [11923.192367] Bluetooth: SCO socket layer initialized ** Attempting to unload bluetooth... ** [11923.714287] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bnep... 
** [11925.096516] Bluetooth: Core ver 2.22 [11925.097251] NET: Registered PF_BLUETOOTH protocol family [11925.097650] Bluetooth: HCI device and connection manager initialized [11925.098282] Bluetooth: HCI socket layer initialized [11925.099293] Bluetooth: L2CAP socket layer initialized [11925.099883] Bluetooth: SCO socket layer initialized [11925.115238] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [11925.115607] Bluetooth: BNEP filters: protocol multicast [11925.115917] Bluetooth: BNEP socket layer initialized ** Attempting to unload bnep... ** [11925.628277] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load bonding... ** ** Attempting to unload bonding... ** ** Attempting to load br_netfilter... ** [11928.810094] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. [11928.824776] Bridge firewalling registered ** Attempting to unload br_netfilter... ** ** Attempting to load bridge... ** [11930.734132] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload bridge... ** ** Attempting to load bsd_comp... ** [11932.438031] PPP generic driver version 2.4.2 [11932.454171] PPP BSD Compression module registered ** Attempting to unload bsd_comp... ** ** Attempting to load cachefiles... ** [11934.246239] CacheFiles: Loaded ** Attempting to unload cachefiles... ** [11934.778308] CacheFiles: Unloading ** Attempting to load camellia_generic... ** ** Attempting to unload camellia_generic... ** ** Attempting to load can... ** [11937.457015] can: controller area network core [11937.459307] NET: Registered PF_CAN protocol family ** Attempting to unload can... ** [11937.990269] NET: Unregistered PF_CAN protocol family ** Attempting to load can_bcm... ** [11939.116677] can: controller area network core [11939.118948] NET: Registered PF_CAN protocol family [11939.143589] can: broadcast manager protocol ** Attempting to unload can_bcm... ** [11939.720281] NET: Unregistered PF_CAN protocol family ** Attempting to load can_dev... ** ** Attempting to unload can_dev... ** ** Attempting to load can_gw... ** [11942.467544] can: controller area network core [11942.469842] NET: Registered PF_CAN protocol family [11942.489911] can: netlink gateway - max_hops=1 ** Attempting to unload can_gw... ** [11943.051338] NET: Unregistered PF_CAN protocol family ** Attempting to load can_isotp... ** [11944.119338] can: controller area network core [11944.121574] NET: Registered PF_CAN protocol family [11944.142667] can: isotp protocol ** Attempting to unload can_isotp... ** [11944.723320] NET: Unregistered PF_CAN protocol family ** Attempting to load can_j1939... ** [11945.873042] can: controller area network core [11945.875327] NET: Registered PF_CAN protocol family [11945.915545] can: SAE J1939 ** Attempting to unload can_j1939... ** [11946.495343] NET: Unregistered PF_CAN protocol family ** Attempting to load can_raw... ** [11947.585445] can: controller area network core [11947.587648] NET: Registered PF_CAN protocol family [11947.604926] can: raw protocol ** Attempting to unload can_raw... ** [11948.168361] NET: Unregistered PF_CAN protocol family ** Attempting to load cast5_generic... ** ** Attempting to unload cast5_generic... ** ** Attempting to load cast6_generic... ** ** Attempting to unload cast6_generic... ** ** Attempting to load cdc_acm... 
** [11952.497153] usbcore: registered new interface driver cdc_acm [11952.497935] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters ** Attempting to unload cdc_acm... ** [11952.980460] usbcore: deregistering interface driver cdc_acm ** Attempting to load ceph... ** [11954.353149] Key type ceph registered [11954.356516] libceph: loaded (mon/osd proto 15/24) [11954.577335] ceph: loaded (mds proto 32) ** Attempting to unload ceph... ** [11955.167621] Key type ceph unregistered ** Attempting to load chacha20poly1305... ** ** Attempting to unload chacha20poly1305... ** ** Attempting to load cifs... ** [11959.488089] Key type cifs.spnego registered [11959.488459] Key type cifs.idmap registered ** Attempting to unload cifs... ** [11960.056620] Key type cifs.idmap unregistered [11960.057525] Key type cifs.spnego unregistered ** Attempting to load cls_bpf... ** ** Attempting to unload cls_bpf... ** ** Attempting to load cls_flow... ** ** Attempting to unload cls_flow... ** ** Attempting to load cls_flower... ** ** Attempting to unload cls_flower... ** ** Attempting to load cls_fw... ** ** Attempting to unload cls_fw... ** ** Attempting to load cls_matchall... ** ** Attempting to unload cls_matchall... ** ** Attempting to load cls_u32... ** [11970.096769] u32 classifier [11970.096968] Performance counters on [11970.097244] input device check on [11970.097556] Actions configured ** Attempting to unload cls_u32... ** ** Attempting to load cordic... ** ** Attempting to unload cordic... ** ** Attempting to load cqhci... ** ** Attempting to unload cqhci... ** ** Attempting to load crc_itu_t... ** ** Attempting to unload crc_itu_t... ** ** Attempting to load crc32_generic... ** ** Attempting to unload crc32_generic... ** ** Attempting to load crc7... ** ** Attempting to unload crc7... ** ** Attempting to load crc8... ** ** Attempting to unload crc8... ** ** Attempting to load des_generic... ** ** Attempting to unload des_generic... ** ** Attempting to load diag... ** [11984.900035] tipc: Activated (version 2.0.0) [11984.901684] NET: Registered PF_TIPC protocol family [11984.903323] tipc: Started in single node mode ** Attempting to unload diag... ** [11985.497665] NET: Unregistered PF_TIPC protocol family [11985.673953] tipc: Deactivated ** Attempting to load dm_bio_prison... ** ** Attempting to unload dm_bio_prison... ** ** Attempting to load dm_bufio... ** ** Attempting to unload dm_bufio... ** ** Attempting to load dm_cache_smq... ** ** Attempting to unload dm_cache_smq... ** ** Attempting to load dm_cache... ** ** Attempting to unload dm_cache... ** ** Attempting to load dm_crypt... ** ** Attempting to unload dm_crypt... ** ** Attempting to load dm_delay... ** ** Attempting to unload dm_delay... ** ** Attempting to load dm_era... ** ** Attempting to unload dm_era... ** ** Attempting to load dm_flakey... ** ** Attempting to unload dm_flakey... ** ** Attempting to load dm_integrity... ** [12000.204533] async_tx: api initialized (async) ** Attempting to unload dm_integrity... ** ** Attempting to load dm_io_affinity... ** ** Attempting to unload dm_io_affinity... ** ** Attempting to load dm_log_userspace... ** [12003.630117] device-mapper: dm-log-userspace: version 1.3.0 loaded ** Attempting to unload dm_log_userspace... ** [12004.111379] device-mapper: dm-log-userspace: version 1.3.0 unloaded ** Attempting to load dm_log_writes... ** ** Attempting to unload dm_log_writes... ** ** Attempting to load dm_multipath... ** ** Attempting to unload dm_multipath... 
** ** Attempting to load dm_persistent_data... ** ** Attempting to unload dm_persistent_data... ** ** Attempting to load dm_queue_length... ** [12011.612059] device-mapper: multipath queue-length: version 0.2.0 loaded ** Attempting to unload dm_queue_length... ** ** Attempting to load dm_raid... ** [12013.270720] raid6: skip pq benchmark and using algorithm sse2x4 [12013.271582] raid6: using ssse3x2 recovery algorithm [12013.285339] async_tx: api initialized (async) [12013.484552] device-mapper: raid: Loading target version 1.15.1 ** Attempting to unload dm_raid... ** ** Attempting to load dm_round_robin... ** [12015.712268] device-mapper: multipath round-robin: version 1.2.0 loaded ** Attempting to unload dm_round_robin... ** ** Attempting to load dm_service_time... ** [12017.382882] device-mapper: multipath service-time: version 0.3.0 loaded ** Attempting to unload dm_service_time... ** ** Attempting to load dm_snapshot... ** ** Attempting to unload dm_snapshot... ** ** Attempting to load dm_switch... ** ** Attempting to unload dm_switch... ** ** Attempting to load dm_thin_pool... ** ** Attempting to unload dm_thin_pool... ** ** Attempting to load dm_verity... ** ** Attempting to unload dm_verity... ** ** Attempting to load dm_writecache... ** ** Attempting to unload dm_writecache... ** [-- MARK -- Fri Feb 3 09:05:00 2023] ** Attempting to load dm_zero... ** ** Attempting to unload dm_zero... ** ** Attempting to load dummy... ** ** Attempting to unload dummy... ** ** Attempting to load ebt_802_3... ** ** Attempting to unload ebt_802_3... ** ** Attempting to load ebt_among... ** ** Attempting to unload ebt_among... ** ** Attempting to load ebt_arp... ** ** Attempting to unload ebt_arp... ** ** Attempting to load ebt_arpreply... ** ** Attempting to unload ebt_arpreply... ** ** Attempting to load ebt_dnat... ** ** Attempting to unload ebt_dnat... ** ** Attempting to load ebt_ip... ** ** Attempting to unload ebt_ip... ** ** Attempting to load ebt_ip6... ** ** Attempting to unload ebt_ip6... ** ** Attempting to load ebt_limit... ** ** Attempting to unload ebt_limit... ** ** Attempting to load ebt_log... ** ** Attempting to unload ebt_log... ** ** Attempting to load ebt_mark... ** ** Attempting to unload ebt_mark... ** ** Attempting to load ebt_mark_m... ** ** Attempting to unload ebt_mark_m... ** ** Attempting to load ebt_nflog... ** ** Attempting to unload ebt_nflog... ** ** Attempting to load ebt_pkttype... ** ** Attempting to unload ebt_pkttype... ** ** Attempting to load ebt_redirect... ** ** Attempting to unload ebt_redirect... ** ** Attempting to load ebt_snat... ** ** Attempting to unload ebt_snat... ** ** Attempting to load ebt_stp... ** ** Attempting to unload ebt_stp... ** ** Attempting to load ebt_vlan... ** ** Attempting to unload ebt_vlan... ** ** Attempting to load ebtable_broute... ** [12057.366127] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_broute... ** ** Attempting to load ebtable_filter... ** [12059.000281] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_filter... ** ** Attempting to load ebtable_nat... ** [12060.612891] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtable_nat... ** ** Attempting to load ebtables... 
** [12062.246606] Warning: Deprecated Driver is detected: ebtables will not be maintained in a future major release and may be disabled ** Attempting to unload ebtables... ** ** Attempting to load echainiv... ** ** Attempting to unload echainiv... ** ** Attempting to load enclosure... ** ** Attempting to unload enclosure... ** ** Attempting to load esp4... ** ** Attempting to unload esp4... ** ** Attempting to load esp4_offload... ** ** Attempting to unload esp4_offload... ** ** Attempting to load esp6... ** ** Attempting to unload esp6... ** ** Attempting to load esp6_offload... ** ** Attempting to unload esp6_offload... ** ** Attempting to load essiv... ** ** Attempting to unload essiv... ** ** Attempting to load failover... ** ** Attempting to unload failover... ** ** Attempting to load faulty... ** ** Attempting to unload faulty... ** ** Attempting to load fcrypt... ** ** Attempting to unload fcrypt... ** ** Attempting to load geneve... ** ** Attempting to unload geneve... ** ** Attempting to load gfs2... ** [12086.462227] DLM installed [12086.632120] gfs2: GFS2 installed ** Attempting to unload gfs2... ** ** Attempting to load hci_uart... ** [12088.680181] Bluetooth: Core ver 2.22 [12088.680983] NET: Registered PF_BLUETOOTH protocol family [12088.681384] Bluetooth: HCI device and connection manager initialized [12088.682091] Bluetooth: HCI socket layer initialized [12088.682978] Bluetooth: L2CAP socket layer initialized [12088.683540] Bluetooth: SCO socket layer initialized [12088.703800] Bluetooth: HCI UART driver ver 2.3 [12088.704538] Bluetooth: HCI UART protocol H4 registered [12088.704906] Bluetooth: HCI UART protocol BCSP registered [12088.705234] Bluetooth: HCI UART protocol ATH3K registered ** Attempting to unload hci_uart... ** [12089.254719] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hci_vhci... ** [12090.614336] Bluetooth: Core ver 2.22 [12090.615015] NET: Registered PF_BLUETOOTH protocol family [12090.615354] Bluetooth: HCI device and connection manager initialized [12090.616008] Bluetooth: HCI socket layer initialized [12090.617104] Bluetooth: L2CAP socket layer initialized [12090.617716] Bluetooth: SCO socket layer initialized ** Attempting to unload hci_vhci... ** [12091.191785] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load hidp... ** [12092.588156] Bluetooth: Core ver 2.22 [12092.588771] NET: Registered PF_BLUETOOTH protocol family [12092.589152] Bluetooth: HCI device and connection manager initialized [12092.589818] Bluetooth: HCI socket layer initialized [12092.590805] Bluetooth: L2CAP socket layer initialized [12092.591368] Bluetooth: SCO socket layer initialized [12092.612715] Bluetooth: HIDP (Human Interface Emulation) ver 1.2 [12092.613608] Bluetooth: HIDP socket layer initialized ** Attempting to unload hidp... ** [12093.149740] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load iavf... ** [12094.834656] iavf: Intel(R) Ethernet Adaptive Virtual Function Network Driver [12094.835629] Copyright (c) 2013 - 2018 Intel Corporation. ** Attempting to unload iavf... ** ** Attempting to load ib_cm... ** ** Attempting to unload ib_cm... ** ** Attempting to load ib_core... ** ** Attempting to unload ib_core... ** ** Attempting to load ib_iser... ** [12101.261158] Loading iSCSI transport class v2.0-870. [12101.354475] iscsi: registered transport (iser) ** Attempting to unload ib_iser... ** ** Attempting to load ib_isert... 
** [12104.164880] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12104.167185] db_root: cannot open: /etc/target ** Attempting to unload ib_isert... ** ** Attempting to load ib_srp... ** ** Attempting to unload ib_srp... ** ** Attempting to load ib_srpt... ** [12109.895754] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12109.897430] db_root: cannot open: /etc/target ** Attempting to unload ib_srpt... ** ** Attempting to load ib_umad... ** ** Attempting to unload ib_umad... ** ** Attempting to load ib_uverbs... ** ** Attempting to unload ib_uverbs... ** ** Attempting to load ieee802154_6lowpan... ** ** Attempting to unload ieee802154_6lowpan... ** ** Attempting to load ieee802154_socket... ** [12118.467283] NET: Registered PF_IEEE802154 protocol family ** Attempting to unload ieee802154_socket... ** [12118.976826] NET: Unregistered PF_IEEE802154 protocol family ** Attempting to load ifb... ** ** Attempting to unload ifb... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ip6_gre... ** [12125.186218] gre: GRE over IPv4 demultiplexor driver [12125.215298] ip6_gre: GRE over IPv6 tunneling driver ** Attempting to unload ip6_gre... ** ** Attempting to load ip6_tables... ** [12127.002338] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6_tables... ** ** Attempting to load ip6_tunnel... ** ** Attempting to unload ip6_tunnel... ** ** Attempting to load ip6_udp_tunnel... ** ** Attempting to unload ip6_udp_tunnel... ** ** Attempting to load ip6_vti... ** ** Attempting to unload ip6_vti... ** ** Attempting to load ip6t_NPT... ** ** Attempting to unload ip6t_NPT... ** ** Attempting to load ip6t_REJECT... ** ** Attempting to unload ip6t_REJECT... ** ** Attempting to load ip6t_SYNPROXY... ** ** Attempting to unload ip6t_SYNPROXY... ** ** Attempting to load ip6t_ah... ** ** Attempting to unload ip6t_ah... ** ** Attempting to load ip6t_eui64... ** ** Attempting to unload ip6t_eui64... ** ** Attempting to load ip6t_frag... ** ** Attempting to unload ip6t_frag... ** ** Attempting to load ip6t_hbh... ** ** Attempting to unload ip6t_hbh... ** ** Attempting to load ip6t_ipv6header... ** ** Attempting to unload ip6t_ipv6header... ** ** Attempting to load ip6t_mh... ** ** Attempting to unload ip6t_mh... ** ** Attempting to load ip6t_rpfilter... ** ** Attempting to unload ip6t_rpfilter... ** ** Attempting to load ip6t_rt... ** ** Attempting to unload ip6t_rt... ** ** Attempting to load ip6table_filter... ** [12151.367714] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_filter... ** ** Attempting to load ip6table_mangle... ** [12152.992700] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_mangle... ** ** Attempting to load ip6table_nat... ** [12154.887684] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_nat... ** ** Attempting to load ip6table_raw... ** [12157.748423] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_raw... 
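The "** Attempting to load <module>... **" / "** Attempting to unload <module>... **" markers throughout this capture come from the test harness driving the sweep, which is not itself recorded in the log. Below is a minimal sketch of such a sweep, assuming modprobe(8) / modprobe -r as the load/unload mechanism and /lib/modules/<release>/modules.dep as the module list; the harness name, module list source, marker text and timing are all reconstructions for illustration, not the actual tool used here.

#!/usr/bin/env python3
"""Sketch of a module load/unload sweep (hypothetical; the harness that
produced the markers in this log is not shown).  Assumes modprobe(8) is
available and that the module list comes from modules.dep."""
import os
import subprocess
import time

def module_names(dep_file):
    # modules.dep lines look like "kernel/.../dm-raid.ko.xz: <deps...>"
    with open(dep_file) as fh:
        for line in fh:
            path = line.split(":", 1)[0]
            name = path.rsplit("/", 1)[-1].split(".ko", 1)[0]
            yield name.replace("-", "_")   # dm-raid -> dm_raid, as logged

def sweep():
    dep_file = f"/lib/modules/{os.uname().release}/modules.dep"
    for mod in sorted(set(module_names(dep_file))):
        print(f"** Attempting to load {mod}... **", flush=True)
        subprocess.run(["modprobe", mod], check=False)
        time.sleep(1)          # let the module settle before removing it
        print(f"** Attempting to unload {mod}... **", flush=True)
        subprocess.run(["modprobe", "-r", mod], check=False)

if __name__ == "__main__":
    sweep()

The bracketed "[seconds.microseconds]" lines interleaved with the markers are the kernel's own messages in response to each insertion and removal.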
** ** Attempting to load ip6table_security... ** [12159.361921] Warning: Deprecated Driver is detected: ip6tables will not be maintained in a future major release and may be disabled ** Attempting to unload ip6table_security... ** ** Attempting to load ip_gre... ** [12160.995460] gre: GRE over IPv4 demultiplexor driver [12161.043981] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload ip_gre... ** ** Attempting to load ipip... ** [12162.882172] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ip_set... ** ** Attempting to unload ip_set... ** ** Attempting to load ip_set_bitmap_ip... ** ** Attempting to unload ip_set_bitmap_ip... ** ** Attempting to load ip_set_bitmap_ipmac... ** ** Attempting to unload ip_set_bitmap_ipmac... ** ** Attempting to load ip_set_bitmap_port... ** ** Attempting to unload ip_set_bitmap_port... ** ** Attempting to load ip_set_hash_ip... ** ** Attempting to unload ip_set_hash_ip... ** ** Attempting to load ip_set_hash_ipmac... ** ** Attempting to unload ip_set_hash_ipmac... ** ** Attempting to load ip_set_hash_ipmark... ** ** Attempting to unload ip_set_hash_ipmark... ** ** Attempting to load ip_set_hash_ipport... ** ** Attempting to unload ip_set_hash_ipport... ** ** Attempting to load ip_set_hash_ipportip... ** ** Attempting to unload ip_set_hash_ipportip... ** ** Attempting to load ip_set_hash_ipportnet... ** ** Attempting to unload ip_set_hash_ipportnet... ** ** Attempting to load ip_set_hash_mac... ** ** Attempting to unload ip_set_hash_mac... ** ** Attempting to load ip_set_hash_net... ** ** Attempting to unload ip_set_hash_net... ** ** Attempting to load ip_set_hash_netiface... ** ** Attempting to unload ip_set_hash_netiface... ** ** Attempting to load ip_set_hash_netnet... ** ** Attempting to unload ip_set_hash_netnet... ** ** Attempting to load ip_set_hash_netport... ** ** Attempting to unload ip_set_hash_netport... ** ** Attempting to load ip_set_hash_netportnet... ** ** Attempting to unload ip_set_hash_netportnet... ** ** Attempting to load ip_set_list_set... ** ** Attempting to unload ip_set_list_set... ** ** Attempting to load ip_tables... ** [12194.354414] Warning: Deprecated Driver is detected: iptables will not be maintained in a future major release and may be disabled ** Attempting to unload ip_tables... ** ** Attempting to load ip_tunnel... ** ** Attempting to unload ip_tunnel... ** ** Attempting to load ip_vs... ** [12198.027685] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12198.029909] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12198.030888] IPVS: Each connection entry needs 416 bytes at least [12198.032900] IPVS: ipvs loaded. ** Attempting to unload ip_vs... ** [12198.585241] IPVS: ipvs unloaded. ** Attempting to load ip_vs_dh... ** [12200.223586] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12200.225701] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12200.226757] IPVS: Each connection entry needs 416 bytes at least [12200.228790] IPVS: ipvs loaded. [12200.245008] IPVS: [dh] scheduler registered. ** Attempting to unload ip_vs_dh... ** [12200.731962] IPVS: [dh] scheduler unregistered. [12200.789184] IPVS: ipvs unloaded. ** Attempting to load ip_vs_fo... ** [12202.379531] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12202.381688] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12202.382553] IPVS: Each connection entry needs 416 bytes at least [12202.384617] IPVS: ipvs loaded. 
[12202.398509] IPVS: [fo] scheduler registered. ** Attempting to unload ip_vs_fo... ** [12202.928959] IPVS: [fo] scheduler unregistered. [12202.995306] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ftp... ** [12204.642056] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12204.644202] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12204.645175] IPVS: Each connection entry needs 416 bytes at least [12204.647622] IPVS: ipvs loaded. ** Attempting to unload ip_vs_ftp... ** [12206.351066] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblc... ** [12208.015913] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12208.018022] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12208.019132] IPVS: Each connection entry needs 416 bytes at least [12208.021093] IPVS: ipvs loaded. [12208.039239] IPVS: [lblc] scheduler registered. ** Attempting to unload ip_vs_lblc... ** [12208.554431] IPVS: [lblc] scheduler unregistered. [12208.610219] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lblcr... ** [12210.241063] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12210.243182] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12210.244117] IPVS: Each connection entry needs 416 bytes at least [12210.246084] IPVS: ipvs loaded. [12210.268414] IPVS: [lblcr] scheduler registered. ** Attempting to unload ip_vs_lblcr... ** [12210.793375] IPVS: [lblcr] scheduler unregistered. [12210.850286] IPVS: ipvs unloaded. ** Attempting to load ip_vs_lc... ** [12212.485083] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12212.487182] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12212.488133] IPVS: Each connection entry needs 416 bytes at least [12212.490139] IPVS: ipvs loaded. [12212.506622] IPVS: [lc] scheduler registered. ** Attempting to unload ip_vs_lc... ** [12213.041320] IPVS: [lc] scheduler unregistered. [12213.105357] IPVS: ipvs unloaded. ** Attempting to load ip_vs_nq... ** [12214.719253] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12214.721409] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12214.722378] IPVS: Each connection entry needs 416 bytes at least [12214.724929] IPVS: ipvs loaded. [12214.740660] IPVS: [nq] scheduler registered. ** Attempting to unload ip_vs_nq... ** [12215.225787] IPVS: [nq] scheduler unregistered. [12215.278329] IPVS: ipvs unloaded. ** Attempting to load ip_vs_ovf... ** [12216.879356] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12216.881660] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12216.882570] IPVS: Each connection entry needs 416 bytes at least [12216.884635] IPVS: ipvs loaded. [12216.898967] IPVS: [ovf] scheduler registered. ** Attempting to unload ip_vs_ovf... ** [12217.434899] IPVS: [ovf] scheduler unregistered. [12217.488365] IPVS: ipvs unloaded. ** Attempting to load ip_vs_pe_sip... ** [12219.132675] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12219.134911] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12219.135967] IPVS: Each connection entry needs 416 bytes at least [12219.137910] IPVS: ipvs loaded. [12219.153856] IPVS: [sip] pe registered. ** Attempting to unload ip_vs_pe_sip... ** [12219.674348] IPVS: [sip] pe unregistered. [12223.888067] IPVS: ipvs unloaded. ** Attempting to load ip_vs_rr... 
** [12225.606012] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12225.608135] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12225.609029] IPVS: Each connection entry needs 416 bytes at least [12225.611099] IPVS: ipvs loaded. [12225.625977] IPVS: [rr] scheduler registered. ** Attempting to unload ip_vs_rr... ** [12226.165923] IPVS: [rr] scheduler unregistered. [12226.223435] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sed... ** [12227.851062] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12227.853257] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12227.854163] IPVS: Each connection entry needs 416 bytes at least [12227.856326] IPVS: ipvs loaded. [12227.871903] IPVS: [sed] scheduler registered. ** Attempting to unload ip_vs_sed... ** [12228.373813] IPVS: [sed] scheduler unregistered. [12228.425560] IPVS: ipvs unloaded. ** Attempting to load ip_vs_sh... ** [12230.067063] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12230.069321] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12230.070268] IPVS: Each connection entry needs 416 bytes at least [12230.072307] IPVS: ipvs loaded. [12230.089561] IPVS: [sh] scheduler registered. ** Attempting to unload ip_vs_sh... ** [12230.603220] IPVS: [sh] scheduler unregistered. [12230.680884] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wlc... ** [12232.342103] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12232.344174] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12232.345339] IPVS: Each connection entry needs 416 bytes at least [12232.347556] IPVS: ipvs loaded. [12232.362327] IPVS: [wlc] scheduler registered. ** Attempting to unload ip_vs_wlc... ** [12232.868308] IPVS: [wlc] scheduler unregistered. [12232.922457] IPVS: ipvs unloaded. ** Attempting to load ip_vs_wrr... ** [12234.487738] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12234.489949] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12234.490849] IPVS: Each connection entry needs 416 bytes at least [12234.492845] IPVS: ipvs loaded. [12234.508164] IPVS: [wrr] scheduler registered. ** Attempting to unload ip_vs_wrr... ** [12235.042886] IPVS: [wrr] scheduler unregistered. [12235.094514] IPVS: ipvs unloaded. ** Attempting to load ip_vti... ** [12236.383227] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load ipcomp... ** ** Attempting to unload ipcomp... ** ** Attempting to load ipcomp6... ** ** Attempting to unload ipcomp6... ** ** Attempting to load ipip... ** [12241.524888] ipip: IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload ipip... ** ** Attempting to load ipvlan... ** ** Attempting to unload ipvlan... ** ** Attempting to load ipvtap... ** ** Attempting to unload ipvtap... ** ** Attempting to load ip_vti... ** [12246.538074] IPv4 over IPsec tunneling driver ** Attempting to unload ip_vti... ** ** Attempting to load isofs... ** ** Attempting to unload isofs... ** [12248.962997] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load iw_cm... ** ** Attempting to unload iw_cm... ** ** Attempting to load kheaders... ** ** Attempting to unload kheaders... ** ** Attempting to load kmem... ** ** Attempting to unload kmem... ** ** Attempting to load linear... ** ** Attempting to unload linear... ** ** Attempting to load llc... ** ** Attempting to unload llc... ** ** Attempting to load lrw... ** ** Attempting to unload lrw... ** ** Attempting to load lz4_compress... 
** ** Attempting to unload lz4_compress... ** ** Attempting to load mac_celtic... ** ** Attempting to unload mac_celtic... ** ** Attempting to load mac_centeuro... ** ** Attempting to unload mac_centeuro... ** ** Attempting to load mac_croatian... ** ** Attempting to unload mac_croatian... ** ** Attempting to load mac_cyrillic... ** ** Attempting to unload mac_cyrillic... ** ** Attempting to load mac_gaelic... ** ** Attempting to unload mac_gaelic... ** ** Attempting to load mac_greek... ** ** Attempting to unload mac_greek... ** ** Attempting to load mac_iceland... ** ** Attempting to unload mac_iceland... ** ** Attempting to load mac_inuit... ** ** Attempting to unload mac_inuit... ** ** Attempting to load mac_roman... ** ** Attempting to unload mac_roman... ** ** Attempting to load mac_romanian... ** ** Attempting to unload mac_romanian... ** ** Attempting to load mac_turkish... ** ** Attempting to unload mac_turkish... ** ** Attempting to load macsec... ** [12280.873419] MACsec IEEE 802.1AE ** Attempting to unload macsec... ** ** Attempting to load macvlan... ** ** Attempting to unload macvlan... ** ** Attempting to load macvtap... ** ** Attempting to unload macvtap... ** ** Attempting to load md4... ** ** Attempting to unload md4... ** ** Attempting to load michael_mic... ** ** Attempting to unload michael_mic... ** ** Attempting to load mip6... ** [12289.981386] mip6: Mobile IPv6 ** Attempting to unload mip6... ** ** Attempting to load mpt3sas... ** [12292.330748] mpt3sas version 43.100.00.00 loaded ** Attempting to unload mpt3sas... ** [12292.829504] mpt3sas version 43.100.00.00 unloading ** Attempting to load msdos... ** ** Attempting to unload msdos... ** ** Attempting to load mtd... ** ** Attempting to unload mtd... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nd_blk... ** ** Attempting to unload nd_blk... ** ** Attempting to load nd_btt... ** ** Attempting to unload nd_btt... ** ** Attempting to load nd_pmem... ** ** Attempting to unload nd_pmem... ** ** Attempting to load net_failover... ** ** Attempting to unload net_failover... ** ** Attempting to load netconsole... ** [12306.110551] printk: console [netcon0] enabled [12306.111390] netconsole: network logging started ** Attempting to unload netconsole... ** [12306.596037] printk: console [netcon_ext0] disabled [12306.596851] printk: console [netcon0] disabled ** Attempting to load nf_conncount... ** ** Attempting to unload nf_conncount... ** ** Attempting to load nf_conntrack... ** ** Attempting to unload nf_conntrack... ** ** Attempting to load nf_conntrack_amanda... ** ** Attempting to unload nf_conntrack_amanda... ** ** Attempting to load nf_conntrack_bridge... ** [12316.367983] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nf_conntrack_bridge... ** ** Attempting to load nf_conntrack_broadcast... ** ** Attempting to unload nf_conntrack_broadcast... ** ** Attempting to load nf_conntrack_ftp... ** ** Attempting to unload nf_conntrack_ftp... ** ** Attempting to load nf_conntrack_h323... ** ** Attempting to unload nf_conntrack_h323... ** [-- MARK -- Fri Feb 3 09:10:00 2023] ** Attempting to load nf_conntrack_irc... ** ** Attempting to unload nf_conntrack_irc... ** ** Attempting to load nf_conntrack_netbios_ns... ** ** Attempting to unload nf_conntrack_netbios_ns... ** ** Attempting to load nf_conntrack_netlink... ** ** Attempting to unload nf_conntrack_netlink... 
** ** Attempting to load nf_conntrack_pptp... ** ** Attempting to unload nf_conntrack_pptp... ** ** Attempting to load nf_conntrack_sane... ** ** Attempting to unload nf_conntrack_sane... ** ** Attempting to load nf_conntrack_sip... ** ** Attempting to unload nf_conntrack_sip... ** ** Attempting to load nf_conntrack_snmp... ** ** Attempting to unload nf_conntrack_snmp... ** ** Attempting to load nf_conntrack_tftp... ** ** Attempting to unload nf_conntrack_tftp... ** ** Attempting to load nf_defrag_ipv4... ** ** Attempting to unload nf_defrag_ipv4... ** ** Attempting to load nf_defrag_ipv6... ** ** Attempting to unload nf_defrag_ipv6... ** ** Attempting to load nf_dup_ipv4... ** ** Attempting to unload nf_dup_ipv4... ** ** Attempting to load nf_dup_ipv6... ** ** Attempting to unload nf_dup_ipv6... ** ** Attempting to load nf_dup_netdev... ** ** Attempting to unload nf_dup_netdev... ** ** Attempting to load nf_log_arp... ** ** Attempting to unload nf_log_arp... ** ** Attempting to load nf_log_bridge... ** ** Attempting to unload nf_log_bridge... ** ** Attempting to load nf_log_ipv4... ** ** Attempting to unload nf_log_ipv4... ** ** Attempting to load nf_log_ipv6... ** ** Attempting to unload nf_log_ipv6... ** ** Attempting to load nf_log_netdev... ** ** Attempting to unload nf_log_netdev... ** ** Attempting to load nf_log_syslog... ** ** Attempting to unload nf_log_syslog... ** ** Attempting to load nf_nat... ** ** Attempting to unload nf_nat... ** ** Attempting to load nf_nat_amanda... ** ** Attempting to unload nf_nat_amanda... ** ** Attempting to load nf_nat_ftp... ** ** Attempting to unload nf_nat_ftp... ** ** Attempting to load nf_nat_h323... ** ** Attempting to unload nf_nat_h323... ** ** Attempting to load nf_nat_irc... ** ** Attempting to unload nf_nat_irc... ** ** Attempting to load nf_nat_pptp... ** ** Attempting to unload nf_nat_pptp... ** ** Attempting to load nf_nat_sip... ** ** Attempting to unload nf_nat_sip... ** ** Attempting to load nf_nat_snmp_basic... ** ** Attempting to unload nf_nat_snmp_basic... ** ** Attempting to load nf_nat_tftp... ** ** Attempting to unload nf_nat_tftp... ** ** Attempting to load nf_reject_ipv4... ** ** Attempting to unload nf_reject_ipv4... ** ** Attempting to load nf_reject_ipv6... ** ** Attempting to unload nf_reject_ipv6... ** ** Attempting to load nf_socket_ipv4... ** ** Attempting to unload nf_socket_ipv4... ** ** Attempting to load nf_socket_ipv6... ** ** Attempting to unload nf_socket_ipv6... ** ** Attempting to load nf_synproxy_core... ** ** Attempting to unload nf_synproxy_core... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nf_tproxy_ipv4... ** ** Attempting to unload nf_tproxy_ipv4... ** ** Attempting to load nf_tproxy_ipv6... ** ** Attempting to unload nf_tproxy_ipv6... ** ** Attempting to load nfnetlink... ** ** Attempting to unload nfnetlink... ** ** Attempting to load nfnetlink_cthelper... ** ** Attempting to unload nfnetlink_cthelper... ** ** Attempting to load nfnetlink_cttimeout... ** ** Attempting to unload nfnetlink_cttimeout... ** ** Attempting to load nfnetlink_log... ** ** Attempting to unload nfnetlink_log... ** ** Attempting to load nfnetlink_osf... ** ** Attempting to unload nfnetlink_osf... ** ** Attempting to load nfnetlink_queue... ** ** Attempting to unload nfnetlink_queue... ** ** Attempting to load nf_tables... ** ** Attempting to unload nf_tables... ** ** Attempting to load nft_chain_nat... ** ** Attempting to unload nft_chain_nat... 
** ** Attempting to load nft_compat... ** [12461.597297] Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled ** Attempting to unload nft_compat... ** ** Attempting to load nft_connlimit... ** ** Attempting to unload nft_connlimit... ** ** Attempting to load nft_counter... ** ** Attempting to unload nft_counter... ** ** Attempting to load nft_ct... ** ** Attempting to unload nft_ct... ** ** Attempting to load nft_dup_ipv4... ** ** Attempting to unload nft_dup_ipv4... ** ** Attempting to load nft_dup_ipv6... ** ** Attempting to unload nft_dup_ipv6... ** ** Attempting to load nft_dup_netdev... ** ** Attempting to unload nft_dup_netdev... ** ** Attempting to load nft_fib... ** ** Attempting to unload nft_fib... ** ** Attempting to load nft_fib_inet... ** ** Attempting to unload nft_fib_inet... ** ** Attempting to load nft_fib_ipv4... ** ** Attempting to unload nft_fib_ipv4... ** ** Attempting to load nft_fib_ipv6... ** ** Attempting to unload nft_fib_ipv6... ** ** Attempting to load nft_fib_netdev... ** ** Attempting to unload nft_fib_netdev... ** ** Attempting to load nft_fwd_netdev... ** ** Attempting to unload nft_fwd_netdev... ** ** Attempting to load nft_hash... ** ** Attempting to unload nft_hash... ** ** Attempting to load nft_limit... ** ** Attempting to unload nft_limit... ** ** Attempting to load nft_log... ** ** Attempting to unload nft_log... ** ** Attempting to load nft_masq... ** ** Attempting to unload nft_masq... ** ** Attempting to load nft_meta_bridge... ** [12495.703586] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_meta_bridge... ** ** Attempting to load nft_nat... ** ** Attempting to unload nft_nat... ** ** Attempting to load nft_numgen... ** ** Attempting to unload nft_numgen... ** ** Attempting to load nft_objref... ** ** Attempting to unload nft_objref... ** ** Attempting to load nft_osf... ** ** Attempting to unload nft_osf... ** ** Attempting to load nft_queue... ** ** Attempting to unload nft_queue... ** ** Attempting to load nft_quota... ** ** Attempting to unload nft_quota... ** ** Attempting to load nft_redir... ** ** Attempting to unload nft_redir... ** ** Attempting to load nft_reject... ** ** Attempting to unload nft_reject... ** ** Attempting to load nft_reject_bridge... ** [12516.133593] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. ** Attempting to unload nft_reject_bridge... ** ** Attempting to load nft_reject_inet... ** ** Attempting to unload nft_reject_inet... ** ** Attempting to load nft_reject_ipv4... ** ** Attempting to unload nft_reject_ipv4... ** ** Attempting to load nft_reject_ipv6... ** ** Attempting to unload nft_reject_ipv6... ** ** Attempting to load nft_reject_netdev... ** ** Attempting to unload nft_reject_netdev... ** ** Attempting to load nft_socket... ** ** Attempting to unload nft_socket... ** ** Attempting to load nft_tproxy... ** ** Attempting to unload nft_tproxy... ** ** Attempting to load nft_tunnel... ** ** Attempting to unload nft_tunnel... ** ** Attempting to load nft_xfrm... ** ** Attempting to unload nft_xfrm... ** ** Attempting to load nhpoly1305... ** ** Attempting to unload nhpoly1305... ** ** Attempting to load n_gsm... ** ** Attempting to unload n_gsm... ** ** Attempting to load nlmon... ** ** Attempting to unload nlmon... 
** ** Attempting to load nls_cp1250... ** ** Attempting to unload nls_cp1250... ** ** Attempting to load nls_cp1251... ** ** Attempting to unload nls_cp1251... ** ** Attempting to load nls_cp1255... ** ** Attempting to unload nls_cp1255... ** ** Attempting to load nls_cp737... ** ** Attempting to unload nls_cp737... ** ** Attempting to load nls_cp775... ** ** Attempting to unload nls_cp775... ** ** Attempting to load nls_cp850... ** ** Attempting to unload nls_cp850... ** ** Attempting to load nls_cp852... ** ** Attempting to unload nls_cp852... ** ** Attempting to load nls_cp855... ** ** Attempting to unload nls_cp855... ** ** Attempting to load nls_cp857... ** ** Attempting to unload nls_cp857... ** ** Attempting to load nls_cp860... ** ** Attempting to unload nls_cp860... ** ** Attempting to load nls_cp861... ** ** Attempting to unload nls_cp861... ** ** Attempting to load nls_cp862... ** ** Attempting to unload nls_cp862... ** ** Attempting to load nls_cp863... ** ** Attempting to unload nls_cp863... ** ** Attempting to load nls_cp864... ** ** Attempting to unload nls_cp864... ** ** Attempting to load nls_cp865... ** ** Attempting to unload nls_cp865... ** ** Attempting to load nls_cp866... ** ** Attempting to unload nls_cp866... ** ** Attempting to load nls_cp869... ** ** Attempting to unload nls_cp869... ** ** Attempting to load nls_cp874... ** ** Attempting to unload nls_cp874... ** ** Attempting to load nls_cp936... ** ** Attempting to unload nls_cp936... ** ** Attempting to load nls_cp949... ** ** Attempting to unload nls_cp949... ** ** Attempting to load nls_cp950... ** ** Attempting to unload nls_cp950... ** ** Attempting to load nls_euc_jp... ** ** Attempting to unload nls_euc_jp... ** ** Attempting to load nls_iso8859_1... ** ** Attempting to unload nls_iso8859_1... ** ** Attempting to load nls_iso8859_13... ** ** Attempting to unload nls_iso8859_13... ** ** Attempting to load nls_iso8859_14... ** ** Attempting to unload nls_iso8859_14... ** ** Attempting to load nls_iso8859_15... ** ** Attempting to unload nls_iso8859_15... ** ** Attempting to load nls_iso8859_2... ** ** Attempting to unload nls_iso8859_2... ** ** Attempting to load nls_iso8859_3... ** ** Attempting to unload nls_iso8859_3... ** ** Attempting to load nls_iso8859_4... ** ** Attempting to unload nls_iso8859_4... ** ** Attempting to load nls_iso8859_5... ** ** Attempting to unload nls_iso8859_5... ** ** Attempting to load nls_iso8859_6... ** ** Attempting to unload nls_iso8859_6... ** ** Attempting to load nls_iso8859_7... ** ** Attempting to unload nls_iso8859_7... ** ** Attempting to load nls_iso8859_9... ** ** Attempting to unload nls_iso8859_9... ** ** Attempting to load nls_koi8_r... ** ** Attempting to unload nls_koi8_r... ** ** Attempting to load nls_koi8_ru... ** ** Attempting to unload nls_koi8_ru... ** ** Attempting to load null_blk... ** [12594.406028] null_blk: disk nullb0 created [12594.406303] null_blk: module loaded ** Attempting to unload null_blk... ** ** Attempting to load nvme_loop... ** ** Attempting to unload nvme_loop... ** ** Attempting to load nvmet_fc... ** ** Attempting to unload nvmet_fc... ** ** Attempting to load nvmet_rdma... ** ** Attempting to unload nvmet_rdma... ** ** Attempting to load nvmet_tcp... ** [12602.889431] Warning: Unmaintained driver is detected: NVMe/TCP Target ** Attempting to unload nvmet_tcp... ** ** Attempting to load objagg... ** ** Attempting to unload objagg... ** ** Attempting to load openvswitch... 
** [12606.745400] openvswitch: Open vSwitch switching datapath ** Attempting to unload openvswitch... ** ** Attempting to load parman... ** ** Attempting to unload parman... ** ** Attempting to load pcbc... ** ** Attempting to unload pcbc... ** ** Attempting to load pcrypt... ** ** Attempting to unload pcrypt... ** ** Attempting to load pkcs8_key_parser... ** [12615.851642] Asymmetric key parser 'pkcs8' registered ** Attempting to unload pkcs8_key_parser... ** [12616.329980] Asymmetric key parser 'pkcs8' unregistered ** Attempting to load poly1305_generic... ** ** Attempting to unload poly1305_generic... ** ** Attempting to load ppdev... ** [12619.046130] ppdev: user-space parallel port driver ** Attempting to unload ppdev... ** ** Attempting to load ppp_async... ** [12620.738115] PPP generic driver version 2.4.2 ** Attempting to unload ppp_async... ** ** Attempting to load ppp_deflate... ** [12622.389478] PPP generic driver version 2.4.2 [12622.404789] PPP Deflate Compression module registered ** Attempting to unload ppp_deflate... ** ** Attempting to load ppp_generic... ** [12624.045523] PPP generic driver version 2.4.2 ** Attempting to unload ppp_generic... ** ** Attempting to load ppp_synctty... ** [12625.645716] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** [-- MARK -- Fri Feb 3 09:15:00 2023] ** Attempting to load pppoe... ** [12627.373086] PPP generic driver version 2.4.2 [12627.388259] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppoe... ** [12627.991232] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pppox... ** [12629.144555] PPP generic driver version 2.4.2 [12629.159323] NET: Registered PF_PPPOX protocol family ** Attempting to unload pppox... ** [12629.684343] NET: Unregistered PF_PPPOX protocol family ** Attempting to load ppp_synctty... ** [12630.848892] PPP generic driver version 2.4.2 ** Attempting to unload ppp_synctty... ** ** Attempting to load pps_gpio... ** ** Attempting to unload pps_gpio... ** ** Attempting to load pps_ldisc... ** [12634.005052] pps_ldisc: PPS line discipline registered ** Attempting to unload pps_ldisc... ** ** Attempting to load pptp... ** [12635.604620] PPP generic driver version 2.4.2 [12635.620983] NET: Registered PF_PPPOX protocol family [12635.634891] gre: GRE over IPv4 demultiplexor driver [12635.654847] PPTP driver version 0.8.5 ** Attempting to unload pptp... ** [12636.221323] NET: Unregistered PF_PPPOX protocol family ** Attempting to load pwc... ** [12637.414382] mc: Linux media interface: v0.10 [12637.557805] videodev: Linux video capture interface: v2.00 [12637.645339] usbcore: registered new interface driver Philips webcam ** Attempting to unload pwc... ** [12638.174768] usbcore: deregistering interface driver Philips webcam ** Attempting to load psample... ** ** Attempting to unload psample... ** ** Attempting to load raid0... ** ** Attempting to unload raid0... ** ** Attempting to load raid1... ** ** Attempting to unload raid1... ** ** Attempting to load raid10... ** ** Attempting to unload raid10... ** ** Attempting to load raid456... ** [12645.845628] raid6: skip pq benchmark and using algorithm sse2x4 [12645.846503] raid6: using ssse3x2 recovery algorithm [12645.870889] async_tx: api initialized (async) ** Attempting to unload raid456... ** ** Attempting to load raid6_pq... ** [12647.779715] raid6: skip pq benchmark and using algorithm sse2x4 [12647.780546] raid6: using ssse3x2 recovery algorithm ** Attempting to unload raid6_pq... 
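The raid6test module loaded next exercises RAID-6 dual-failure recovery: for each array size it recovers every unordered pair of failed slots (faila, failb) across data (D), parity (P) and syndrome (Q) positions and prints one "test_disks(a, b): ... OK" line per pair, i.e. C(n, 2) lines for an n-disk case. That is why the 64-disk case dominates the rest of this capture. A quick check of the expected line counts, assuming the pattern of the smaller cases seen below holds throughout:

from math import comb

# One result line per unordered pair of failed slots in an n-disk case.
for n in (4, 5, 11, 12, 24, 64):
    print(f"{n:2d}-disk case: {comb(n, 2):4d} test_disks() lines")
# -> 6, 10, 55, 66, 276 and 2016 lines respectively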
** ** Attempting to load raid6test... ** [12649.403968] raid6: skip pq benchmark and using algorithm sse2x4 [12649.404845] raid6: using ssse3x2 recovery algorithm [12649.419413] async_tx: api initialized (async) [12649.496155] raid6test: testing the 4-disk case... [12649.497012] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12649.497584] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(P) OK [12649.498557] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(Q) OK [12649.499024] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(P) OK [12649.499541] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(Q) OK [12649.500104] raid6test: test_disks(2, 3): faila= 2(P) failb= 3(Q) OK [12649.500746] raid6test: testing the 5-disk case... [12649.501601] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12649.502303] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [12649.502794] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(P) OK [12649.503336] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(Q) OK [12649.503826] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [12649.504391] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(P) OK [12649.505503] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(Q) OK [12649.505997] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(P) OK [12649.506544] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(Q) OK [12649.507096] raid6test: test_disks(3, 4): faila= 3(P) failb= 4(Q) OK [12649.507833] raid6test: testing the 11-disk case... [12649.508694] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12649.509354] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [12649.509829] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [12649.510409] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [12649.510910] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [12649.511439] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [12649.511909] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [12649.512475] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [12649.512932] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(P) OK [12649.513457] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(Q) OK [12649.513963] raid6test: test_disks(1, 2): faila= [12649.599804] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [12649.614791] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [12649.615291] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [12649.615719] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [12649.616149] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [12649.616608] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [12649.617036] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(P) OK [12649.617516] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(Q) OK [12649.617952] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [12649.618408] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [12649.618842] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [12649.619298] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [12649.619717] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [12649.620153] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [12649.620641] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(P) OK [12649.621073] raid6test: test_disks(2, 10): faila= 2(D) faa= 3(D) failb= 6(D) OK [12650.122050] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [12650.122542] raid6test: test_disks(3, 8): faila= 3(D) failb= 
8(D) OK [12650.123000] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(P) OK [12650.123467] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(Q) OK [12650.123930] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [12650.124412] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [12650.124832] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [12650.125331] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [12650.125783] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(P) OK [12650.126187] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(Q) OK [12650.126687] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [12650.127089] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [12650.127530] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [12650.127952] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(P) OK [12650.128414] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(Q) OK [12650.128826] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK= 10(Q) OK [12650.629718] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [12650.630167] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(P) OK [12650.630645] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(Q) OK [12650.631077] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(P) OK [12650.631536] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(Q) OK [12650.631933] raid6test: test_disks(9, 10): faila= 9(P) failb= 10(Q) OK [12650.632532] raid6test: testing the 12-disk case... [12650.633296] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12650.633750] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [12650.634180] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [12650.634693] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [12650.635144] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [12650.635611] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [12650.636045] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [12650.636537] raid6test: test_disks(0, 8): faila= 0(D) failb[12651.037342] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(Q) OK [12651.037825] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [12651.038284] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [12651.038744] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [12651.039152] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [12651.039619] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [12651.040080] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [12651.040565] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [12651.040984] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [12651.041451] raid6test: test_disk[12651.141781] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(Q) OK [12651.142289] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [12651.142747] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [12651.143185] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [12651.143702] raid6test: test_disks(2, 6): faila= 2(D) faa= 2(D) failb= 9(D) OK [12651.644484] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(P) OK [12651.644914] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(Q) OK [12651.645461] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [12651.645880] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [12651.646345] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [12651.646796] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [12651.647266] 
raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [12651.647713] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [12651.648121] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(P) OK [12651.648609] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(Q) OK [12651.649026] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [12651.649497] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [12651.649919] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [12651.650371] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [12651.650811] raid6test: teaid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [12652.151666] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [12652.152108] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [12652.152631] raid6test: test_disks(5, 9): faila= 5(D) failb= 9(D) OK [12652.153072] raid6test: test_disks(5, 10): faila= 5(D) failb= 10(P) OK [12652.153569] raid6test: test_disks(5, 11): faila= 5(D) failb= 11(Q) OK [12652.154000] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [12652.154492] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [12652.154929] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [12652.155398] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(P) OK [12652.155854] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(Q) OK [12652.156351] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [12652.156812] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [12652.157323] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(P) OK [12652.157783] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(Q) OK [12652.158269] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [12652.158740] raid6test: test_disks(8, 10isks(9, 11): faila= 9(D) failb= 11(Q) OK [12652.659490] raid6test: test_disks(10, 11): faila= 10(P) failb= 11(Q) OK [12652.660181] raid6test: testing the 24-disk case... 
[12652.660952] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12652.661424] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [12652.661848] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [12652.662366] raid6test: test_disks(0, 4): faila= 0(D) failb= 4(D) OK [12652.662830] raid6test: test_disks(0, 5): faila= 0(D) failb= 5(D) OK [12652.663315] raid6test: test_disks(0, 6): faila= 0(D) failb= 6(D) OK [12652.663774] raid6test: test_disks(0, 7): faila= 0(D) failb= 7(D) OK [12652.664195] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [12652.664672] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [12652.665135] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [12652.665634] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [12652.666074] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [12652.666558] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [12652.666992] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK [12652.667458] raid6test: test_disks(0, 18): faila= 0(D) failb= 18(D) OK [12653.168289] raid6test: test_disks(0, 19): faila= 0(D) failb= 19(D) OK [12653.168764] raid6test: test_disks(0, 20): faila= 0(D) failb= 20(D) OK [12653.169211] raid6test: test_disks(0, 21): faila= 0(D) failb= 21(D) OK [12653.169730] raid6test: test_disks(0, 22): faila= 0(D) failb= 22(P) OK [12653.170186] raid6test: test_disks(0, 23): faila= 0(D) failb= 23(Q) OK [12653.170703] raid6test: test_disks(1, 2): faila= 1(D) failb= 2(D) OK [12653.171155] raid6test: test_disks(1, 3): faila= 1(D) failb= 3(D) OK [12653.171673] raid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [12653.172123] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [12653.172603] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [12653.173056] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [12653.173576] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [12653.174044] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [12653.174557] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(D) OK [12653.174977] raid6test: test_disks(1, 11): faila= la= 1(D) failb= 14(D) OK [12653.675850] raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [12653.676356] raid6test: test_disks(1, 16): faila= 1(D) failb= 16(D) OK [12653.676781] raid6test: test_disks(1, 17): faila= 1(D) failb= 17(D) OK [12653.677212] raid6test: test_disks(1, 18): faila= 1(D) failb= 18(D) OK [12653.677676] raid6test: test_disks(1, 19): faila= 1(D) failb= 19(D) OK [12653.678108] raid6test: test_disks(1, 20): faila= 1(D) failb= 20(D) OK [12653.678597] raid6test: test_disks(1, 21): faila= 1(D) failb= 21(D) OK [12653.679045] raid6test: test_disks(1, 22): faila= 1(D) failb= 22(P) OK [12653.679516] raid6test: test_disks(1, 23): faila= 1(D) failb= 23(Q) OK [12653.679958] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [12653.680424] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [12653.680877] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [12653.681371] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [12653.681832] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [12653.682324] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [12653.682790] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [12653.683217] raid6test:aid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [12654.184160] raid6test: test_disks(2, 14): faila= 2(D) failb= 14(D) OK [12654.184678] raid6test: test_disks(2, 15): faila= 2(D) failb= 15(D) OK 
[12654.185130] raid6test: test_disks(2, 16): faila= 2(D) failb= 16(D) OK [12654.185615] raid6test: test_disks(2, 17): faila= 2(D) failb= 17(D) OK [12654.186075] raid6test: test_disks(2, 18): faila= 2(D) failb= 18(D) OK [12654.187318] raid6test: test_disks(2, 19): faila= 2(D) failb= 19(D) OK [12654.187813] raid6test: test_disks(2, 20): faila= 2(D) failb= 20(D) OK [12654.188313] raid6test: test_disks(2, 21): faila= 2(D) failb= 21(D) OK [12654.188800] raid6test: test_disks(2, 22): faila= 2(D) failb= 22(P) OK [12654.189307] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(Q) OK [12654.189787] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [12654.190273] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [12654.190754] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [12654.191205] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [12654.191706] raid6test: test_diaid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [12654.692588] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [12654.693066] raid6test: test_disks(3, 13): faila= 3(D) failb= 13(D) OK [12654.693557] raid6test: test_disks(3, 14): faila= 3(D) failb= 14(D) OK [12654.694016] raid6test: test_disks(3, 15): faila= 3(D) failb= 15(D) OK [12654.694497] raid6test: test_disks(3, 16): faila= 3(D) failb= 16(D) OK [12654.694916] raid6test: test_disks(3, 17): faila= 3(D) failb= 17(D) OK [12654.695403] raid6test: test_disks(3, 18): faila= 3(D) failb= 18(D) OK [12654.695849] raid6test: test_disks(3, 19): faila= 3(D) failb= 19(D) OK [12654.696347] raid6test: test_disks(3, 20): faila= 3(D) failb= 20(D) OK [12654.696842] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [12654.697347] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(P) OK [12654.697839] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(Q) OK [12654.698332] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [12654.698826] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [12654.699325] raid6test: test_diisks(4, 10): faila= 4(D) failb= 10(D) OK [12655.200200] raid6test: test_disks(4, 11): faila= 4(D) failb= 11(D) OK [12655.200742] raid6test: test_disks(4, 12): faila= 4(D) failb= 12(D) OK [12655.201222] raid6test: test_disks(4, 13): faila= 4(D) failb= 13(D) OK [12655.201740] raid6test: test_disks(4, 14): faila= 4(D) failb= 14(D) OK [12655.202209] raid6test: test_disks(4, 15): faila= 4(D) failb= 15(D) OK [12655.202740] raid6test: test_disks(4, 16): faila= 4(D) failb= 16(D) OK [12655.203215] raid6test: test_disks(4, 17): faila= 4(D) failb= 17(D) OK [12655.203724] raid6test: test_disks(4, 18): faila= 4(D) failb= 18(D) OK [12655.204205] raid6test: test_disks(4, 19): faila= 4(D) failb= 19(D) OK [12655.204728] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [12655.205225] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [12655.205726] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(P) OK [12655.206205] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(Q) OK [12655.206716] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [12655.207194] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [12655.207711] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [12= 11(D) OK [12655.708537] raid6test: test_disks(5, 12): faila= 5(D) failb= 12(D) OK [12655.709010] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [12655.709529] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [12655.709965] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [12655.710449] raid6test: 
test_disks(5, 16): faila= 5(D) failb= 16(D) OK [12655.710910] raid6test: test_disks(5, 17): faila= 5(D) failb= 17(D) OK [12655.711418] raid6test: test_disks(5, 18): faila= 5(D) failb= 18(D) OK [12655.711907] raid6test: test_disks(5, 19): faila= 5(D) failb= 19(D) OK [12655.712419] raid6test: test_disks(5, 20): faila= 5(D) failb= 20(D) OK [12655.712876] raid6test: test_disks(5, 21): faila= 5(D) failb= 21(D) OK [12655.713367] raid6test: test_disks(5, 22): faila= 5(D) failb= 22(P) OK [12655.713857] raid6test: test_disks(5, 23): faila= 5(D) failb= 23(Q) OK [12655.714347] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [12655.714841] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [12655.715389] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [12655[12656.216305] raid6test: test_disks(6, 13): faila= 6(D) failb= 13(D) OK [12656.216774] raid6test: test_disks(6, 14): faila= 6(D) failb= 14(D) OK [12656.217233] raid6test: test_disks(6, 15): faila= 6(D) failb= 15(D) OK [12656.217767] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [12656.218235] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK [12656.218720] raid6test: test_disks(6, 18): faila= 6(D) failb= 18(D) OK [12656.219174] raid6test: test_disks(6, 19): faila= 6(D) failb= 19(D) OK [12656.219722] raid6test: test_disks(6, 20): faila= 6(D) failb= 20(D) OK [12656.220195] raid6test: test_disks(6, 21): faila= 6(D) failb= 21(D) OK [12656.220734] raid6test: test_disks(6, 22): faila= 6(D) failb= 22(P) OK [12656.221196] raid6test: test_disks(6, 23): faila= 6(D) failb= 23(Q) OK [12656.221714] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [12656.222186] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [12656.222658] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [12656.223093] raid6test: test_disks(7, 11): faila= 7(D) failb= 11(D) OK [12656.223593] raid6test: test_disks(7, 12)isks(7, 15): faila= 7(D) failb= 15(D) OK [12656.724428] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [12656.724943] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [12656.725432] raid6test: test_disks(7, 18): faila= 7(D) failb= 18(D) OK [12656.725914] raid6test: test_disks(7, 19): faila= 7(D) failb= 19(D) OK [12656.726420] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [12656.726845] raid6test: test_disks(7, 21): faila= 7(D) failb= 21(D) OK [12656.727381] raid6test: test_disks(7, 22): faila= 7(D) failb= 22(P) OK [12656.727887] raid6test: test_disks(7, 23): faila= 7(D) failb= 23(Q) OK [12656.728389] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [12656.728877] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(D) OK [12656.729363] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(D) OK [12656.729860] raid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [12656.730357] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [12656.730805] raid6test: test_disks(8, 14): faila= 8(D) failb= 14(D) OK [12656.731236] raid6test: test_disks(8, 15): faila= 8(D) failb= 15(D) OK [12656.731774] raid6test: test_disks(8, 16): faila= 8(D) failb= 16(D) OK [12656.732209] raid6test: test_disks(8, 17): faila= 8(la= 8(D) failb= 20(D) OK [12657.233116] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [12657.233585] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(P) OK [12657.234021] raid6test: test_disks(8, 23): faila= 8(D) failb= 23(Q) OK [12657.234473] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(D) OK [12657.234923] raid6test: test_disks(9, 11): faila= 9(D) failb= 
11(D) OK [12657.235423] raid6test: test_disks(9, 12): faila= 9(D) failb= 12(D) OK [12657.235842] raid6test: test_disks(9, 13): faila= 9(D) failb= 13(D) OK [12657.236340] raid6test: test_disks(9, 14): faila= 9(D) failb= 14(D) OK [12657.236793] raid6test: test_disks(9, 15): faila= 9(D) failb= 15(D) OK [12657.237221] raid6test: test_disks(9, 16): faila= 9(D) failb= 16(D) OK [12657.237727] raid6test: test_disks(9, 17): faila= 9(D) failb= 17(D) OK [12657.238157] raid6test: test_disks(9, 18): faila= 9(D) failb= 18(D) OK [12657.238668] raid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [12657.239092] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [12657.239543] raid6test: test_disks(9, 21): faila= 9(D) faiila= 10(D) failb= 11(D) OK [12657.740625] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [12657.741064] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) OK [12657.741525] raid6test: test_disks(10, 14): faila= 10(D) failb= 14(D) OK [12657.741955] raid6test: test_disks(10, 15): faila= 10(D) failb= 15(D) OK [12657.742439] raid6test: test_disks(10, 16): faila= 10(D) failb= 16(D) OK [12657.742866] raid6test: test_disks(10, 17): faila= 10(D) failb= 17(D) OK [12657.743379] raid6test: test_disks(10, 18): faila= 10(D) failb= 18(D) OK [12657.743834] raid6test: test_disks(10, 19): faila= 10(D) failb= 19(D) OK [12657.744347] raid6test: test_disks(10, 20): faila= 10(D) failb= 20(D) OK [12657.744798] raid6test: test_disks(10, 21): faila= 10(D) failb= 21(D) OK [12657.745247] raid6test: test_disks(10, 22): faila= 10(D) failb= 22(P) OK [12657.745761] raid6test: test_disks(10, 23): faila= 10(D) failb= 23(Q) OK [12657.746176] raid6test: test_disks(11, 12): faila= 11(D) failb= 12(D) OK [12657.746669] raid6test: test_disks(11, 13): faila= 11(D) failb= 13(D) OK [12657.747084] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [12657.747531] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [12657.747963] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [1[12658.248943] raid6test: test_disks(11, 20): faila= 11(D) failb= 20(D) OK [12658.249443] raid6test: test_disks(11, 21): faila= 11(D) failb= 21(D) OK [12658.249922] raid6test: test_disks(11, 22): faila= 11(D) failb= 22(P) OK [12658.250417] raid6test: test_disks(11, 23): faila= 11(D) failb= 23(Q) OK [12658.250874] raid6test: test_disks(12, 13): faila= 12(D) failb= 13(D) OK [12658.251365] raid6test: test_disks(12, 14): faila= 12(D) failb= 14(D) OK [12658.251792] raid6test: test_disks(12, 15): faila= 12(D) failb= 15(D) OK [12658.252222] raid6test: test_disks(12, 16): faila= 12(D) failb= 16(D) OK [12658.252718] raid6test: test_disks(12, 17): faila= 12(D) failb= 17(D) OK [12658.253148] raid6test: test_disks(12, 18): faila= 12(D) failb= 18(D) OK [12658.253638] raid6test: test_disks(12, 19): faila= 12(D) failb= 19(D) OK [12658.254085] raid6test: test_disks(12, 20): faila= 12(D) failb= 20(D) OK [12658.254552] raid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [12658.254978] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(P) OK [12658.255516] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(Q) OK b= 16(D) OK [12658.756374] raid6test: test_disks(13, 17): faila= 13(D) failb= 17(D) OK [12658.756883] raid6test: test_disks(13, 18): faila= 13(D) failb= 18(D) OK [12658.757371] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [12658.757833] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [12658.758317] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK 
[12658.758818] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(P) OK [12658.759252] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(Q) OK [12658.759736] raid6test: test_disks(14, 15): faila= 14(D) failb= 15(D) OK [12658.760206] raid6test: test_disks(14, 16): faila= 14(D) failb= 16(D) OK [12658.760718] raid6test: test_disks(14, 17): faila= 14(D) failb= 17(D) OK [12658.761152] raid6test: test_disks(14, 18): faila= 14(D) failb= 18(D) OK [12658.761660] raid6test: test_disks(14, 19): faila= 14(D) failb= 19(D) OK [12658.762106] raid6test: test_disks(14, 20): faila= 14(D) failb= 20(D) OK [12658.762575] raid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [12658.763052] raid6test: test_disks(14, 22): faila= 14(D) failb= 22(P) OK [12658.763506] raid6test: test_disks(14, 23): faila= 14(D) failb= 23(Q) OK [12658.791091] [12659.264542] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [12659.265081] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [12659.265671] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [12659.266149] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(P) OK [12659.266724] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(Q) OK [12659.267207] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [12659.267739] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [12659.268222] raid6test: test_disks(16, 19): faila= 16(D) failb= 19(D) OK [12659.268806] raid6test: test_disks(16, 20): faila= 16(D) failb= 20(D) OK [12659.269366] raid6test: test_disks(16, 21): faila= 16(D) failb= 21(D) OK [12659.269866] raid6test: test_disks(16, 22): faila= 16(D) failb= 22(P) OK [12659.270430] raid6test: test_disks(16, 23): faila= 16(D) failb= 23(Q) OK [12659.270962] raid6test: test_disks(17, 18): faila= 17(D) failb= 18(D) OK [12659.271509] raid6test: test_disks(17, 19): faila= 17(D) failb= 19(D) OK [12659.272046] raid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [12659.272592] raid6test: test_disks(17, 21): faila= 17(D) failb= 21(D) OK [1[12659.773418] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [12659.773911] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK [12659.774397] raid6test: test_disks(18, 22): faila= 18(D) failb= 22(P) OK [12659.774829] raid6test: test_disks(18, 23): faila= 18(D) failb= 23(Q) OK [12659.775315] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [12659.775755] raid6test: test_disks(19, 21): faila= 19(D) failb= 21(D) OK [12659.776170] raid6test: test_disks(19, 22): faila= 19(D) failb= 22(P) OK [12659.776614] raid6test: test_disks(19, 23): faila= 19(D) failb= 23(Q) OK [12659.777044] raid6test: test_disks(20, 21): faila= 20(D) failb= 21(D) OK [12659.777478] raid6test: test_disks(20, 22): faila= 20(D) failb= 22(P) OK [12659.777941] raid6test: test_disks(20, 23): faila= 20(D) failb= 23(Q) OK [12659.778401] raid6test: test_disks(21, 22): faila= 21(D) failb= 22(P) OK [12659.778871] raid6test: test_disks(21, 23): faila= 21(D) failb= 23(Q) OK [12659.779325] raid6test: test_disks(22, 23): faila= 22(P) failb= 23(Q) OK [12659.780618] raid6test: testing the 64-disk case... 
[12659.781467] raid6test: test_disks(0, 1): faila= 0(D) failb= 1(D) OK [12659.781986] raid6test: test_disks(0, 2): faila= 0(D) failb= 2(D) OK [12659.782510] raid6test: test_disks(0, 3): faila= 0(D) failb= 3(D) OK [12659.783076] raid6test: test_disisks(0, 7): faila= 0(D) failb= 7(D) OK [12660.283877] raid6test: test_disks(0, 8): faila= 0(D) failb= 8(D) OK [12660.284419] raid6test: test_disks(0, 9): faila= 0(D) failb= 9(D) OK [12660.284922] raid6test: test_disks(0, 10): faila= 0(D) failb= 10(D) OK [12660.285455] raid6test: test_disks(0, 11): faila= 0(D) failb= 11(D) OK [12660.285992] raid6test: test_disks(0, 12): faila= 0(D) failb= 12(D) OK [12660.286536] raid6test: test_disks(0, 13): faila= 0(D) failb= 13(D) OK [12660.287066] raid6test: test_disks(0, 14): faila= 0(D) failb= 14(D) OK [12660.287604] raid6test: test_disks(0, 15): faila= 0(D) failb= 15(D) OK [12660.288116] raid6test: test_disks(0, 16): faila= 0(D) failb= 16(D) OK [12660.288652] raid6test: test_disks(0, 17): faila= 0(D) failb= 17(D) OK [12660.289165] raid6test: test_disks(0, 18): faila= 0(D) failb= 18(D) OK [12660.289697] raid6test: test_disks(0, 19): faila= 0(D) failb= 19(D) OK [12660.290208] raid6test: test_disks(0, 20): faila= 0(D) failb= 20(D) OK [12660.290767] raid6test: test_disks(0, 21): faila= 0(D) failb= 21(D) OK [12660.291315] raid6test: test_disks(0, 22): faila= 0(D) failb= 22(D) OK [12660.291814] raid6test: test_disks(0, 23): faila= 0(D) failb= 23(D) OK [12660.292358] raid6test: test_disks(0, 24): faila= 0(D) failb= 24(D) OK [12660.292886] raid6test: test_disks(0, 25): faila= 0(D) failb= 25(D) OK [12660.320[12660.793718] raid6test: test_disks(0, 29): faila= 0(D) failb= 29(D) OK [12660.794155] raid6test: test_disks(0, 30): faila= 0(D) failb= 30(D) OK [12660.794705] raid6test: test_disks(0, 31): faila= 0(D) failb= 31(D) OK [12660.795244] raid6test: test_disks(0, 32): faila= 0(D) failb= 32(D) OK [12660.795784] raid6test: test_disks(0, 33): faila= 0(D) failb= 33(D) OK [12660.796316] raid6test: test_disks(0, 34): faila= 0(D) failb= 34(D) OK [12660.796843] raid6test: test_disks(0, 35): faila= 0(D) failb= 35(D) OK [12660.797378] raid6test: test_disks(0, 36): faila= 0(D) failb= 36(D) OK [12660.797906] raid6test: test_disks(0, 37): faila= 0(D) failb= 37(D) OK [12660.798447] raid6test: test_disks(0, 38): faila= 0(D) failb= 38(D) OK [12660.798976] raid6test: test_disks(0, 39): faila= 0(D) failb= 39(D) OK [12660.799513] raid6test: test_disks(0, 40): faila= 0(D) failb= 40(D) OK [12660.800051] raid6test: test_disks(0, 41): faila= 0(D) failb= 41(D) OK [12660.800591] raid6test: test_disks(0, 42): faila= 0(D) failb= 42(D) OK [12660.801101] raid6test: test_disks(0, 43): faila= 0(D) failb= 43(D) OK [12660.829061][12661.301923] raid6test: test_disks(0, 47): faila= 0(D) failb= 47(D) OK [12661.302446] raid6test: test_disks(0, 48): faila= 0(D) failb= 48(D) OK [12661.302976] raid6test: test_disks(0, 49): faila= 0(D) failb= 49(D) OK [12661.303512] raid6test: test_disks(0, 50): faila= 0(D) failb= 50(D) OK [12661.303879] raid6test: test_disks(0, 51): faila= 0(D) failb= 51(D) OK [12661.304427] raid6test: test_disks(0, 52): faila= 0(D) failb= 52(D) OK [12661.304998] raid6test: test_disks(0, 53): faila= 0(D) failb= 53(D) OK [12661.305544] raid6test: test_disks(0, 54): faila= 0(D) failb= 54(D) OK [12661.306079] raid6test: test_disks(0, 55): faila= 0(D) failb= 55(D) OK [12661.306622] raid6test: test_disks(0, 56): faila= 0(D) failb= 56(D) OK [12661.307103] raid6test: test_disks(0, 57): faila= 0(D) failb= 57(D) OK [12661.307646] raid6test: 
test_disks(0, 58): faila= 0(D) failb= 58(D) OK [12661.308153] raid6test: test_disks(0, 59): faila= 0(D) failb= 59(D) OK [12661.308693] raid6test: test_disks(0, 60): faila= 0(D) failb= 60(D) OK [12661.309205] raid6test: test_disks(0, 61): faila= 0(D) failb= 61(D) OK [12661.309760] raid6test: test_disks(0, 62): faila= 0(D) failb= 62(P) OK [12661.310275] raaid6test: test_disks(1, 4): faila= 1(D) failb= 4(D) OK [12661.811059] raid6test: test_disks(1, 5): faila= 1(D) failb= 5(D) OK [12661.811951] raid6test: test_disks(1, 6): faila= 1(D) failb= 6(D) OK [12661.812451] raid6test: test_disks(1, 7): faila= 1(D) failb= 7(D) OK [12661.812949] raid6test: test_disks(1, 8): faila= 1(D) failb= 8(D) OK [12661.813492] raid6test: test_disks(1, 9): faila= 1(D) failb= 9(D) OK [12661.814020] raid6test: test_disks(1, 10): faila= 1(D) failb= 10(D) OK [12661.814561] raid6test: test_disks(1, 11): faila= 1(D) failb= 11(D) OK [12661.815089] raid6test: test_disks(1, 12): faila= 1(D) failb= 12(D) OK [12661.815643] raid6test: test_disks(1, 13): faila= 1(D) failb= 13(D) OK [12661.816158] raid6test: test_disks(1, 14): faila= 1(D) failb= 14(D) OK [12661.816700] raid6test: test_disks(1, 15): faila= 1(D) failb= 15(D) OK [12661.817216] raid6test: test_disks(1, 16): faila= 1(D) failb= 16(D) OK [12661.817783] raid6test: test_disks(1, 17): faila= 1(D) failb= 17(D) OK [12661.818323] raid6test: test_disks(1, 18): faila= 1(D) failb= 18(D) OK [12661.818854] raid6test: test_disks(1, 19): faila= 1(D) failb== 22(D) OK [12662.319651] raid6test: test_disks(1, 23): faila= 1(D) failb= 23(D) OK [12662.320173] raid6test: test_disks(1, 24): faila= 1(D) failb= 24(D) OK [12662.320731] raid6test: test_disks(1, 25): faila= 1(D) failb= 25(D) OK [12662.321233] raid6test: test_disks(1, 26): faila= 1(D) failb= 26(D) OK [12662.321797] raid6test: test_disks(1, 27): faila= 1(D) failb= 27(D) OK [12662.322334] raid6test: test_disks(1, 28): faila= 1(D) failb= 28(D) OK [12662.322864] raid6test: test_disks(1, 29): faila= 1(D) failb= 29(D) OK [12662.323404] raid6test: test_disks(1, 30): faila= 1(D) failb= 30(D) OK [12662.323930] raid6test: test_disks(1, 31): faila= 1(D) failb= 31(D) OK [12662.324473] raid6test: test_disks(1, 32): faila= 1(D) failb= 32(D) OK [12662.325011] raid6test: test_disks(1, 33): faila= 1(D) failb= 33(D) OK [12662.325571] raid6test: test_disks(1, 34): faila= 1(D) failb= 34(D) OK [12662.326106] raid6test: test_disks(1, 35): faila= 1(D) failb= 35(D) OK [12662.326653] raid6test: test_disks(1, 36): faila= 1(D) failb= 36(D) OK [12662.327162] raid6test: test_disks(1, 37): faila= 1(D) failb= 37(D) OK [12662.327706] raid6test: test_disks(1, 38): faila= 1(D) failb= 38(D) OK [126[12662.828505] raid6test: test_disks(1, 42): faila= 1(D) failb= 42(D) OK [12662.829057] raid6test: test_disks(1, 43): faila= 1(D) failb= 43(D) OK [12662.829612] raid6test: test_disks(1, 44): faila= 1(D) failb= 44(D) OK [12662.830090] raid6test: test_disks(1, 45): faila= 1(D) failb= 45(D) OK [12662.830632] raid6test: test_disks(1, 46): faila= 1(D) failb= 46(D) OK [12662.831141] raid6test: test_disks(1, 47): faila= 1(D) failb= 47(D) OK [12662.831676] raid6test: test_disks(1, 48): faila= 1(D) failb= 48(D) OK [12662.832186] raid6test: test_disks(1, 49): faila= 1(D) failb= 49(D) OK [12662.832716] raid6test: test_disks(1, 50): faila= 1(D) failb= 50(D) OK [12662.833233] raid6test: test_disks(1, 51): faila= 1(D) failb= 51(D) OK [12662.833790] raid6test: test_disks(1, 52): faila= 1(D) failb= 52(D) OK [12662.834342] raid6test: test_disks(1, 53): faila= 1(D) failb= 53(D) OK 
[12662.834871] raid6test: test_disks(1, 54): faila= 1(D) failb= 54(D) OK [12662.835401] raid6test: test_disks(1, 55): faila= 1(D) failb= 55(D) OK [12662.835942] raid6test: test_disks(1, 56): faila= 1(D) failb= 56(D) OK [12662.836487] raid6test: test_disks(1, 57): faila= 1(D) failb= 57(D) OK [12662.837019] raid6test: test_disks(1, 58): fisks(1, 61): faila= 1(D) failb= 61(D) OK [12663.337833] raid6test: test_disks(1, 62): faila= 1(D) failb= 62(P) OK [12663.338392] raid6test: test_disks(1, 63): faila= 1(D) failb= 63(Q) OK [12663.338887] raid6test: test_disks(2, 3): faila= 2(D) failb= 3(D) OK [12663.339426] raid6test: test_disks(2, 4): faila= 2(D) failb= 4(D) OK [12663.339951] raid6test: test_disks(2, 5): faila= 2(D) failb= 5(D) OK [12663.340486] raid6test: test_disks(2, 6): faila= 2(D) failb= 6(D) OK [12663.340987] raid6test: test_disks(2, 7): faila= 2(D) failb= 7(D) OK [12663.341530] raid6test: test_disks(2, 8): faila= 2(D) failb= 8(D) OK [12663.342058] raid6test: test_disks(2, 9): faila= 2(D) failb= 9(D) OK [12663.342623] raid6test: test_disks(2, 10): faila= 2(D) failb= 10(D) OK [12663.343135] raid6test: test_disks(2, 11): faila= 2(D) failb= 11(D) OK [12663.343677] raid6test: test_disks(2, 12): faila= 2(D) failb= 12(D) OK [12663.344184] raid6test: test_disks(2, 13): faila= 2(D) failb= 13(D) OK [12663.344723] raid6test: test_disks(2, 14): faila= 2(D) failb= 14(D) OK [12663.345240] raid6test: test_disks(2, 15): faila= la= 2(D) failb= 18(D) OK [12663.846070] raid6test: test_disks(2, 19): faila= 2(D) failb= 19(D) OK [12663.846610] raid6test: test_disks(2, 20): faila= 2(D) failb= 20(D) OK [12663.847093] raid6test: test_disks(2, 21): faila= 2(D) failb= 21(D) OK [12663.847630] raid6test: test_disks(2, 22): faila= 2(D) failb= 22(D) OK [12663.848144] raid6test: test_disks(2, 23): faila= 2(D) failb= 23(D) OK [12663.848691] raid6test: test_disks(2, 24): faila= 2(D) failb= 24(D) OK [12663.849204] raid6test: test_disks(2, 25): faila= 2(D) failb= 25(D) OK [12663.849743] raid6test: test_disks(2, 26): faila= 2(D) failb= 26(D) OK [12663.850256] raid6test: test_disks(2, 27): faila= 2(D) failb= 27(D) OK [12663.850821] raid6test: test_disks(2, 28): faila= 2(D) failb= 28(D) OK [12663.851356] raid6test: test_disks(2, 29): faila= 2(D) failb= 29(D) OK [12663.851886] raid6test: test_disks(2, 30): faila= 2(D) failb= 30(D) OK [12663.852430] raid6test: test_disks(2, 31): faila= 2(D) failb= 31(D) OK [12663.852964] raid6test: test_disks(2, 32): faila= 2(D) failb= 32(D) OK [12663.853501] raid6test: test_disks(2, 33): faila= 2(D) failb= 33(D) OK [12663.854033] raid6test: test_disks(2, 34): faila= 2(D) failb= = 37(D) OK [12664.354871] raid6test: test_disks(2, 38): faila= 2(D) failb= 38(D) OK [12664.355456] raid6test: test_disks(2, 39): faila= 2(D) failb= 39(D) OK [12664.356015] raid6test: test_disks(2, 40): faila= 2(D) failb= 40(D) OK [12664.356548] raid6test: test_disks(2, 41): faila= 2(D) failb= 41(D) OK [12664.357045] raid6test: test_disks(2, 42): faila= 2(D) failb= 42(D) OK [12664.357581] raid6test: test_disks(2, 43): faila= 2(D) failb= 43(D) OK [12664.358119] raid6test: test_disks(2, 44): faila= 2(D) failb= 44(D) OK [12664.358664] raid6test: test_disks(2, 45): faila= 2(D) failb= 45(D) OK [12664.359172] raid6test: test_disks(2, 46): faila= 2(D) failb= 46(D) OK [12664.359713] raid6test: test_disks(2, 47): faila= 2(D) failb= 47(D) OK [12664.360224] raid6test: test_disks(2, 48): faila= 2(D) failb= 48(D) OK [12664.360766] raid6test: test_disks(2, 49): faila= 2(D) failb= 49(D) OK [12664.361274] raid6test: test_disks(2, 50): 
faila= 2(D) failb= 50(D) OK [12664.361837] raid6test: test_disks(2, 51): faila= 2(D) failb= 51(D) OK [12664.362379] raid6test: test_disks(2, 52): faila= 2(D) failb= 52(D) OK [12664.362906] raid6test: test_disks(2, 53): faila= 2(D) failb= 53(D) OK [12664.363444] raid6test: test_disks(2, 57): faila= 2(D) failb= 57(D) OK [12664.864277] raid6test: test_disks(2, 58): faila= 2(D) failb= 58(D) OK [12664.864855] raid6test: test_disks(2, 59): faila= 2(D) failb= 59(D) OK [12664.865434] raid6test: test_disks(2, 60): faila= 2(D) failb= 60(D) OK [12664.865975] raid6test: test_disks(2, 61): faila= 2(D) failb= 61(D) OK [12664.866520] raid6test: test_disks(2, 62): faila= 2(D) failb= 62(P) OK [12664.867063] raid6test: test_disks(2, 63): faila= 2(D) failb= 63(Q) OK [12664.867607] raid6test: test_disks(3, 4): faila= 3(D) failb= 4(D) OK [12664.868143] raid6test: test_disks(3, 5): faila= 3(D) failb= 5(D) OK [12664.868684] raid6test: test_disks(3, 6): faila= 3(D) failb= 6(D) OK [12664.869197] raid6test: test_disks(3, 7): faila= 3(D) failb= 7(D) OK [12664.869734] raid6test: test_disks(3, 8): faila= 3(D) failb= 8(D) OK [12664.870248] raid6test: test_disks(3, 9): faila= 3(D) failb= 9(D) OK [12664.870781] raid6test: test_disks(3, 10): faila= 3(D) failb= 10(D) OK [12664.871294] raid6test: test_disks(3, 11): faila= 3(D) failb= 11(D) OK [12664.871857] raid6test: test_disks(3, 12): faila= 3(D) failb= 12(D) OK [12664.872417] raid6test: test_disks(3, 13): faila= 3(Dla= 3(D) failb= 16(D) OK [12665.373270] raid6test: test_disks(3, 17): faila= 3(D) failb= 17(D) OK [12665.373809] raid6test: test_disks(3, 18): faila= 3(D) failb= 18(D) OK [12665.374387] raid6test: test_disks(3, 19): faila= 3(D) failb= 19(D) OK [12665.374888] raid6test: test_disks(3, 20): faila= 3(D) failb= 20(D) OK [12665.375477] raid6test: test_disks(3, 21): faila= 3(D) failb= 21(D) OK [12665.376002] raid6test: test_disks(3, 22): faila= 3(D) failb= 22(D) OK [12665.376544] raid6test: test_disks(3, 23): faila= 3(D) failb= 23(D) OK [12665.377080] raid6test: test_disks(3, 24): faila= 3(D) failb= 24(D) OK [12665.377617] raid6test: test_disks(3, 25): faila= 3(D) failb= 25(D) OK [12665.378115] raid6test: test_disks(3, 26): faila= 3(D) failb= 26(D) OK [12665.378659] raid6test: test_disks(3, 27): faila= 3(D) failb= 27(D) OK [12665.379171] raid6test: test_disks(3, 28): faila= 3(D) failb= 28(D) OK [12665.379717] raid6test: test_disks(3, 29): faila= 3(D) failb= 29(D) OK [12665.380250] raid6test: test_disks(3, 30): faila= 3(D) failb= 30(D) OK [12665.380786] raid6test: test_disks(3, 31): faila= 3(D) failb= 31(D) OK [12665.381306] raid6test: test_disks(3, 32): faila= 3(D) failb= 32(D) OK [12665.381868] raid6test: test_disks(3, 36): faila= 3(D) failb= 36(D) OK [12665.882810] raid6test: test_disks(3, 37): faila= 3(D) failb= 37(D) OK [12665.883313] raid6test: test_disks(3, 38): faila= 3(D) failb= 38(D) OK [12665.883882] raid6test: test_disks(3, 39): faila= 3(D) failb= 39(D) OK [12665.884392] raid6test: test_disks(3, 40): faila= 3(D) failb= 40(D) OK [12665.884922] raid6test: test_disks(3, 41): faila= 3(D) failb= 41(D) OK [12665.885446] raid6test: test_disks(3, 42): faila= 3(D) failb= 42(D) OK [12665.885981] raid6test: test_disks(3, 43): faila= 3(D) failb= 43(D) OK [12665.886517] raid6test: test_disks(3, 44): faila= 3(D) failb= 44(D) OK [12665.887008] raid6test: test_disks(3, 45): faila= 3(D) failb= 45(D) OK [12665.887545] raid6test: test_disks(3, 46): faila= 3(D) failb= 46(D) OK [12665.888044] raid6test: test_disks(3, 47): faila= 3(D) failb= 47(D) OK [12665.888573] raid6test: 
test_disks(3, 48): faila= 3(D) failb= 48(D) OK [12665.889066] raid6test: test_disks(3, 49): faila= 3(D) failb= 49(D) OK [12665.889598] raid6test: test_disks(3, 50): faila= 3(D) failb= 50(D) OK [12665.890133] raid6test: test_aid6test: test_disks(3, 54): faila= 3(D) failb= 54(D) OK [12666.390913] raid6test: test_disks(3, 55): faila= 3(D) failb= 55(D) OK [12666.391463] raid6test: test_disks(3, 56): faila= 3(D) failb= 56(D) OK [12666.391996] raid6test: test_disks(3, 57): faila= 3(D) failb= 57(D) OK [12666.392535] raid6test: test_disks(3, 58): faila= 3(D) failb= 58(D) OK [12666.393035] raid6test: test_disks(3, 59): faila= 3(D) failb= 59(D) OK [12666.393579] raid6test: test_disks(3, 60): faila= 3(D) failb= 60(D) OK [12666.394112] raid6test: test_disks(3, 61): faila= 3(D) failb= 61(D) OK [12666.394651] raid6test: test_disks(3, 62): faila= 3(D) failb= 62(P) OK [12666.395164] raid6test: test_disks(3, 63): faila= 3(D) failb= 63(Q) OK [12666.395730] raid6test: test_disks(4, 5): faila= 4(D) failb= 5(D) OK [12666.396253] raid6test: test_disks(4, 6): faila= 4(D) failb= 6(D) OK [12666.396781] raid6test: test_disks(4, 7): faila= 4(D) failb= 7(D) OK [12666.397290] raid6test: test_disks(4, 8): faila= 4(D) failb= 8(D) OK [12666.397830] raid6test: test_disks(4, 9): faila= 4(D) failb= 9(D) OK [12666.398378] raid6test: test_disks(4, 10): faila= 4(D) failb= 10(D) OK [12666.398913] raid6test: test_disks(4,isks(4, 14): faila= 4(D) failb= 14(D) OK [12666.899777] raid6test: test_disks(4, 15): faila= 4(D) failb= 15(D) OK [12666.900293] raid6test: test_disks(4, 16): faila= 4(D) failb= 16(D) OK [12666.900829] raid6test: test_disks(4, 17): faila= 4(D) failb= 17(D) OK [12666.901382] raid6test: test_disks(4, 18): faila= 4(D) failb= 18(D) OK [12666.901880] raid6test: test_disks(4, 19): faila= 4(D) failb= 19(D) OK [12666.902426] raid6test: test_disks(4, 20): faila= 4(D) failb= 20(D) OK [12666.902953] raid6test: test_disks(4, 21): faila= 4(D) failb= 21(D) OK [12666.903487] raid6test: test_disks(4, 22): faila= 4(D) failb= 22(D) OK [12666.904014] raid6test: test_disks(4, 23): faila= 4(D) failb= 23(D) OK [12666.904566] raid6test: test_disks(4, 24): faila= 4(D) failb= 24(D) OK [12666.905104] raid6test: test_disks(4, 25): faila= 4(D) failb= 25(D) OK [12666.905654] raid6test: test_disks(4, 26): faila= 4(D) failb= 26(D) OK [12666.906186] raid6test: test_disks(4, 27): faila= 4(D) failb= 27(D) OK [12666.906730] raid6test: test_disks(4, 28): faila= 4(D) failb= 28(D) OK [12666.907238] raid6test: test_disks(4, 29): faila= 4(D) failb= 29(D) O= 32(D) OK [12667.408151] raid6test: test_disks(4, 33): faila= 4(D) failb= 33(D) OK [12667.408713] raid6test: test_disks(4, 34): faila= 4(D) failb= 34(D) OK [12667.409098] raid6test: test_disks(4, 35): faila= 4(D) failb= 35(D) OK [12667.409643] raid6test: test_disks(4, 36): faila= 4(D) failb= 36(D) OK [12667.410173] raid6test: test_disks(4, 37): faila= 4(D) failb= 37(D) OK [12667.410716] raid6test: test_disks(4, 38): faila= 4(D) failb= 38(D) OK [12667.411222] raid6test: test_disks(4, 39): faila= 4(D) failb= 39(D) OK [12667.411765] raid6test: test_disks(4, 40): faila= 4(D) failb= 40(D) OK [12667.412276] raid6test: test_disks(4, 41): faila= 4(D) failb= 41(D) OK [12667.412821] raid6test: test_disks(4, 42): faila= 4(D) failb= 42(D) OK [12667.413331] raid6test: test_disks(4, 43): faila= 4(D) failb= 43(D) OK [12667.413889] raid6test: test_disks(4, 44): faila= 4(D) failb= 44(D) OK [12667.414429] raid6test: test_disks(4, 45): faila= 4(D) failb= 45(D) OK [12667.414951] raid6test: test_disks(4, 46): faila= 
4(D) failb= 46(D) OK [12667.415529] raid6test: test_disks(4, 47): faila= 4(D) failb= 47(D) OK [1= 50(D) OK [12667.916719] raid6test: test_disks(4, 51): faila= 4(D) failb= 51(D) OK [12667.917320] raid6test: test_disks(4, 52): faila= 4(D) failb= 52(D) OK [12667.917870] raid6test: test_disks(4, 53): faila= 4(D) failb= 53(D) OK [12667.918479] raid6test: test_disks(4, 54): faila= 4(D) failb= 54(D) OK [12667.919006] raid6test: test_disks(4, 55): faila= 4(D) failb= 55(D) OK [12667.919518] raid6test: test_disks(4, 56): faila= 4(D) failb= 56(D) OK [12667.920077] raid6test: test_disks(4, 57): faila= 4(D) failb= 57(D) OK [12667.920724] raid6test: test_disks(4, 58): faila= 4(D) failb= 58(D) OK [12667.921248] raid6test: test_disks(4, 59): faila= 4(D) failb= 59(D) OK [12667.921775] raid6test: test_disks(4, 60): faila= 4(D) failb= 60(D) OK [12667.922266] raid6test: test_disks(4, 61): faila= 4(D) failb= 61(D) OK [12667.922797] raid6test: test_disks(4, 62): faila= 4(D) failb= 62(P) OK [12667.923325] raid6test: test_disks(4, 63): faila= 4(D) failb= 63(Q) OK [12667.923915] raid6test: test_disks(5, 6): faila= 5(D) failb= 6(D) OK [12667.924469] raid6test: test_disks(5, 7): faila= 5(D) failb= 7(D) OK [12667.924980] raid6test: test_disks(5, 8): faila= 5(D) failb= 8(D) OK [12667.925487] raid6test: test_diskisks(5, 12): faila= 5(D) failb= 12(D) OK [12668.426434] raid6test: test_disks(5, 13): faila= 5(D) failb= 13(D) OK [12668.427017] raid6test: test_disks(5, 14): faila= 5(D) failb= 14(D) OK [12668.427544] raid6test: test_disks(5, 15): faila= 5(D) failb= 15(D) OK [12668.428107] raid6test: test_disks(5, 16): faila= 5(D) failb= 16(D) OK [12668.428644] raid6test: test_disks(5, 17): faila= 5(D) failb= 17(D) OK [12668.429218] raid6test: test_disks(5, 18): faila= 5(D) failb= 18(D) OK [12668.429753] raid6test: test_disks(5, 19): faila= 5(D) failb= 19(D) OK [12668.430291] raid6test: test_disks(5, 20): faila= 5(D) failb= 20(D) OK [12668.430821] raid6test: test_disks(5, 21): faila= 5(D) failb= 21(D) OK [12668.431315] raid6test: test_disks(5, 22): faila= 5(D) failb= 22(D) OK [12668.431845] raid6test: test_disks(5, 23): faila= 5(D) failb= 23(D) OK [12668.432338] raid6test: test_disks(5, 24): faila= 5(D) failb= 24(D) OK [12668.432893] raid6test: test_disks(5, 25): faila= 5(D) failb= 25(D) OK [12668.433412] raid6test: test_disks(5, 26): faila= 5(D) failb= 26(D) OK [12668.433940] raid6test: test_disks(5, 27): faila= 5(D) failb= 27(D) OK [12668.434460] raid6test: test_disks(5, 28): faila= 5(D) failb= 28(D) OK [12668.434985] raid6test: test_disks(5, 29): faila= 5(D) failb= 29(D) OK [12668.435547] raid6test: test_disks(5, 30): faila= 5(Dla= 5(D) failb= 33(D) OK [12668.936502] raid6test: test_disks(5, 34): faila= 5(D) failb= 34(D) OK [12668.937423] raid6test: test_disks(5, 35): faila= 5(D) failb= 35(D) OK [12668.937985] raid6test: test_disks(5, 36): faila= 5(D) failb= 36(D) OK [12668.938515] raid6test: test_disks(5, 37): faila= 5(D) failb= 37(D) OK [12668.939078] raid6test: test_disks(5, 38): faila= 5(D) failb= 38(D) OK [12668.939627] raid6test: test_disks(5, 39): faila= 5(D) failb= 39(D) OK [12668.940162] raid6test: test_disks(5, 40): faila= 5(D) failb= 40(D) OK [12668.940691] raid6test: test_disks(5, 41): faila= 5(D) failb= 41(D) OK [12668.941218] raid6test: test_disks(5, 42): faila= 5(D) failb= 42(D) OK [12668.941756] raid6test: test_disks(5, 43): faila= 5(D) failb= 43(D) OK [12668.942264] raid6test: test_disks(5, 44): faila= 5(D) failb= 44(D) OK [12668.942798] raid6test: test_disks(5, 45): faila= 5(D) failb= 45(D) OK 
[12668.943337] raid6test: test_disks(5, 46): faila= 5(D) failb= 46(D) OK [12668.943872] raid6test: test_disks(5, 47): faila= 5(D) failb= 47(D) OK [12668.944404] raid6test: test_disks(5, 48): faila= 5(D) failb= 48(D) OK [12668.944984] raid6test: test_disks(5, 49): faila= 5(D) failb= = 52(D) OK [12669.445849] raid6test: test_disks(5, 53): faila= 5(D) failb= 53(D) OK [12669.446407] raid6test: test_disks(5, 54): faila= 5(D) failb= 54(D) OK [12669.446905] raid6test: test_disks(5, 55): faila= 5(D) failb= 55(D) OK [12669.447451] raid6test: test_disks(5, 56): faila= 5(D) failb= 56(D) OK [12669.447963] raid6test: test_disks(5, 57): faila= 5(D) failb= 57(D) OK [12669.448521] raid6test: test_disks(5, 58): faila= 5(D) failb= 58(D) OK [12669.448980] raid6test: test_disks(5, 59): faila= 5(D) failb= 59(D) OK [12669.449517] raid6test: test_disks(5, 60): faila= 5(D) failb= 60(D) OK [12669.450052] raid6test: test_disks(5, 61): faila= 5(D) failb= 61(D) OK [12669.450598] raid6test: test_disks(5, 62): faila= 5(D) failb= 62(P) OK [12669.451147] raid6test: test_disks(5, 63): faila= 5(D) failb= 63(Q) OK [12669.451688] raid6test: test_disks(6, 7): faila= 6(D) failb= 7(D) OK [12669.452219] raid6test: test_disks(6, 8): faila= 6(D) failb= 8(D) OK [12669.452759] raid6test: test_disks(6, 9): faila= 6(D) failb= 9(D) OK [12669.453268] raid6test: test_disks(6, 10): faila= 6(D) failb= 10(D) OK [12669.453802] raid6test: test_disks(6, 11): faila= 6(D) failb= 11(D) OK [12669.454309] raid6test: test_diskisks(6, 15): faila= 6(D) failb= 15(D) OK [12669.955146] raid6test: test_disks(6, 16): faila= 6(D) failb= 16(D) OK [12669.955705] raid6test: test_disks(6, 17): faila= 6(D) failb= 17(D) OK [12669.956225] raid6test: test_disks(6, 18): faila= 6(D) failb= 18(D) OK [12669.956786] raid6test: test_disks(6, 19): faila= 6(D) failb= 19(D) OK [12669.957329] raid6test: test_disks(6, 20): faila= 6(D) failb= 20(D) OK [12669.957862] raid6test: test_disks(6, 21): faila= 6(D) failb= 21(D) OK [12669.958383] raid6test: test_disks(6, 22): faila= 6(D) failb= 22(D) OK [12669.958898] raid6test: test_disks(6, 23): faila= 6(D) failb= 23(D) OK [12669.959427] raid6test: test_disks(6, 24): faila= 6(D) failb= 24(D) OK [12669.959934] raid6test: test_disks(6, 25): faila= 6(D) failb= 25(D) OK [12669.960462] raid6test: test_disks(6, 26): faila= 6(D) failb= 26(D) OK [12669.960984] raid6test: test_disks(6, 27): faila= 6(D) failb= 27(D) OK [12669.961523] raid6test: test_disks(6, 28): faila= 6(D) failb= 28(D) OK [12669.962050] raid6test: test_disks(6, 29): faila= 6(D) failb= 29(D) OK [12669.962561] raid6test: test_disks(6, 30): faisks(6, 33): faila= 6(D) failb= 33(D) OK [12670.463560] raid6test: test_disks(6, 34): faila= 6(D) failb= 34(D) OK [12670.464088] raid6test: test_disks(6, 35): faila= 6(D) failb= 35(D) OK [12670.464612] raid6test: test_disks(6, 36): faila= 6(D) failb= 36(D) OK [12670.465138] raid6test: test_disks(6, 37): faila= 6(D) failb= 37(D) OK [12670.465689] raid6test: test_disks(6, 38): faila= 6(D) failb= 38(D) OK [12670.466210] raid6test: test_disks(6, 39): faila= 6(D) failb= 39(D) OK [12670.466743] raid6test: test_disks(6, 40): faila= 6(D) failb= 40(D) OK [12670.467235] raid6test: test_disks(6, 41): faila= 6(D) failb= 41(D) OK [12670.467796] raid6test: test_disks(6, 42): faila= 6(D) failb= 42(D) OK [12670.468323] raid6test: test_disks(6, 43): faila= 6(D) failb= 43(D) OK [12670.468841] raid6test: test_disks(6, 44): faila= 6(D) failb= 44(D) OK [12670.469416] raid6test: test_disks(6, 45): faila= 6(D) failb= 45(D) OK [12670.470097] raid6test: 
test_disks(6, 46): faila= 6(D) failb= 46(D) OK [12670.470624] raid6test: test_disks(6, 47): faila= 6(D) failb= 47(D) OK [12670.471148] raid6test: test_disks(6, 48): faila= 6(D) failb= 48(D) OK [12670.471674] raid6test: test_disks(6, 49): faila= 6(D) failb= 49(D) OK = 52(D) OK [12670.972636] raid6test: test_disks(6, 53): faila= 6(D) failb= 53(D) OK [12670.973201] raid6test: test_disks(6, 54): faila= 6(D) failb= 54(D) OK [12670.973777] raid6test: test_disks(6, 55): faila= 6(D) failb= 55(D) OK [12670.974311] raid6test: test_disks(6, 56): faila= 6(D) failb= 56(D) OK [12670.974874] raid6test: test_disks(6, 57): faila= 6(D) failb= 57(D) OK [12670.975473] raid6test: test_disks(6, 58): faila= 6(D) failb= 58(D) OK [12670.976005] raid6test: test_disks(6, 59): faila= 6(D) failb= 59(D) OK [12670.976569] raid6test: test_disks(6, 60): faila= 6(D) failb= 60(D) OK [12670.977093] raid6test: test_disks(6, 61): faila= 6(D) failb= 61(D) OK [12670.977657] raid6test: test_disks(6, 62): faila= 6(D) failb= 62(P) OK [12670.978227] raid6test: test_disks(6, 63): faila= 6(D) failb= 63(Q) OK [12670.978782] raid6test: test_disks(7, 8): faila= 7(D) failb= 8(D) OK [12670.979278] raid6test: test_disks(7, 9): faila= 7(D) failb= 9(D) OK [12670.979839] raid6test: test_disks(7, 10): faila= 7(D) failb= 10(D) OK [= 13(D) OK [12671.480639] raid6test: test_disks(7, 14): faila= 7(D) failb= 14(D) OK [12671.481171] raid6test: test_disks(7, 15): faila= 7(D) failb= 15(D) OK [12671.481759] raid6test: test_disks(7, 16): faila= 7(D) failb= 16(D) OK [12671.482297] raid6test: test_disks(7, 17): faila= 7(D) failb= 17(D) OK [12671.482867] raid6test: test_disks(7, 18): faila= 7(D) failb= 18(D) OK [12671.483436] raid6test: test_disks(7, 19): faila= 7(D) failb= 19(D) OK [12671.483996] raid6test: test_disks(7, 20): faila= 7(D) failb= 20(D) OK [12671.484568] raid6test: test_disks(7, 21): faila= 7(D) failb= 21(D) OK [12671.485090] raid6test: test_disks(7, 22): faila= 7(D) failb= 22(D) OK [12671.485669] raid6test: test_disks(7, 23): faila= 7(D) failb= 23(D) OK [12671.486233] raid6test: test_disks(7, 24): faila= 7(D) failb= 24(D) OK [12671.486808] raid6test: test_disks(7, 25): faila= 7(D) failb= 25(D) OK [12671.487349] raid6test: test_disks(7, 26): faila= 7(D) failb= 26(D) OK [12671.487921] raid6test: test_disks(7, 27): faila= 7(D) failb= 27(D) OK [12671.488494] raid6test: test_disks(7, 28): faila= 7(D) failb= 28(D) OK [12671.489024] raid6test: test_disks(7, 29): faila= 7(D) failb= 29(D) OK [12671.489591] raid6test: test_disks(7, 30): faila= 7(D) failb= 30(D) OK [12671.5[12671.990518] raid6test: test_disks(7, 34): faila= 7(D) failb= 34(D) OK [12671.991099] raid6test: test_disks(7, 35): faila= 7(D) failb= 35(D) OK [12671.991684] raid6test: test_disks(7, 36): faila= 7(D) failb= 36(D) OK [12671.992178] raid6test: test_disks(7, 37): faila= 7(D) failb= 37(D) OK [12671.992748] raid6test: test_disks(7, 38): faila= 7(D) failb= 38(D) OK [12671.993312] raid6test: test_disks(7, 39): faila= 7(D) failb= 39(D) OK [12671.993876] raid6test: test_disks(7, 40): faila= 7(D) failb= 40(D) OK [12671.994452] raid6test: test_disks(7, 41): faila= 7(D) failb= 41(D) OK [12671.995032] raid6test: test_disks(7, 42): faila= 7(D) failb= 42(D) OK [12671.995627] raid6test: test_disks(7, 43): faila= 7(D) failb= 43(D) OK [12671.996196] raid6test: test_disks(7, 44): faila= 7(D) failb= 44(D) OK [12671.996748] raid6test: test_disks(7, 45): faila= 7(D) failb= 45(D) OK [12671.997306] raid6test: test_disks(7, 46): faila= 7(D) failb= 46(D) OK [12671.997871] raid6test: test_disks(7, 47): 
faila= 7(D) failb= 47(D) OK [12671.998444] raid6test: test_disks(7, 48): faila= 7(D) failb= 48(D) OK [12671.999029] raid6test: test_disks(7, 49): faila= 7(D) failb= 49(D) OK [12671.999637] raid6test: test_disks(7, 50): isks(7, 53): faila= 7(D) failb= 53(D) OK [12672.500620] raid6test: test_disks(7, 54): faila= 7(D) failb= 54(D) OK [12672.501189] raid6test: test_disks(7, 55): faila= 7(D) failb= 55(D) OK [12672.501753] raid6test: test_disks(7, 56): faila= 7(D) failb= 56(D) OK [12672.502313] raid6test: test_disks(7, 57): faila= 7(D) failb= 57(D) OK [12672.502875] raid6test: test_disks(7, 58): faila= 7(D) failb= 58(D) OK [12672.503439] raid6test: test_disks(7, 59): faila= 7(D) failb= 59(D) OK [12672.503967] raid6test: test_disks(7, 60): faila= 7(D) failb= 60(D) OK [12672.504533] raid6test: test_disks(7, 61): faila= 7(D) failb= 61(D) OK [12672.505098] raid6test: test_disks(7, 62): faila= 7(D) failb= 62(P) OK [12672.505678] raid6test: test_disks(7, 63): faila= 7(D) failb= 63(Q) OK [12672.506248] raid6test: test_disks(8, 9): faila= 8(D) failb= 9(D) OK [12672.506815] raid6test: test_disks(8, 10): faila= 8(D) failb= 10(D) OK [12672.507350] raid6test: test_disks(8, 11): faila= 8(D) failb= 11(D) OK [12672.507912] raid6test: test_disks(8, 12): faila= 8(D) failb= 12(D) OK [12672.508489] raid6test: test_disks(8, 13): faila= 8(D) failb= 13(D) OK [12672.509053] raid6test: test_disks(8, 14): faila=la= 8(D) failb= 17(D) OK [12673.010087] raid6test: test_disks(8, 18): faila= 8(D) failb= 18(D) OK [12673.010639] raid6test: test_disks(8, 19): faila= 8(D) failb= 19(D) OK [12673.011182] raid6test: test_disks(8, 20): faila= 8(D) failb= 20(D) OK [12673.011734] raid6test: test_disks(8, 21): faila= 8(D) failb= 21(D) OK [12673.012264] raid6test: test_disks(8, 22): faila= 8(D) failb= 22(D) OK [12673.012816] raid6test: test_disks(8, 23): faila= 8(D) failb= 23(D) OK [12673.013326] raid6test: test_disks(8, 24): faila= 8(D) failb= 24(D) OK [12673.013903] raid6test: test_disks(8, 25): faila= 8(D) failb= 25(D) OK [12673.014436] raid6test: test_disks(8, 26): faila= 8(D) failb= 26(D) OK [12673.014963] raid6test: test_disks(8, 27): faila= 8(D) failb= 27(D) OK [12673.015503] raid6test: test_disks(8, 28): faila= 8(D) failb= 28(D) OK [12673.016058] raid6test: test_disks(8, 29): faila= 8(D) failb= 29(D) OK [12673.016608] raid6test: test_disks(8, 30): faila= 8(D) failb= 30(D) OK [12673.017145] raid6test: test_disks(8, 31): faila= 8(D) failb= 31(D) OK [12673.017676] raid6test: test_disks(8, 32): faila= 8(D) failb= 32(D) OK [12673.018209] raid6test: test_disks(8, 33): faila= 8(D) failb== 36(D) OK [12673.519178] raid6test: test_disks(8, 37): faila= 8(D) failb= 37(D) OK [12673.519752] raid6test: test_disks(8, 38): faila= 8( failb= 39(D) OK [12673.520790] raid6test: test_disks(8, 40): faila= 8(D) failb= 40(D) OK [12673.521290] raid6test: test_disks(8, 41): faila= 8(D) failb= 41(D) OK [12673.521843] raid6test: test_disks(8, 42): faila= 8(D) failb= 42(D) OK [12673.522358] raid6test: test_disks(8, 43): faila= 8(D) failb= 43(D) OK [12673.522912] raid6test: test_disks(8, 44): faila= 8(D) failb= 44(D) OK [12673.523477] raid6test: test_disks(8, 45): faila= 8(D) failb= 45(D) OK [12673.523992] raid6test: test_disks(8, 46): faila= 8(D) failb= 46(D) OK [12673.524517] raid6test: test_disks(8, 47): faila= 8(D) failb= 47(D) OK [12673.525039] raid6test: test_disks(8, 48): faila= 8(D) failb= 48(D) OK [12673.525583] raid6test: test_disks(8, 49): faila= 8(D) failb= 49(D) OK [12673.526105] raid6test: test_disks(8, 50): faila= 8(D) failb= 50(D) OK 
[12673.526628] raid6test: test_disks(8, 51): faila= 8(D) failb= 51(D) OK [12673.527138] raid6test: testaid6test: test_disks(8, 55): faila= 8(D) failb= 55(D) OK [12674.028201] raid6test: test_disks(8, 56): faila= 8(D) failb= 56(D) OK [12674.028768] raid6test: test_disks(8, 57): faila= 8(D) failb= 57(D) OK [12674.029328] raid6test: test_disks(8, 58): faila= 8(D) failb= 58(D) OK [12674.029887] raid6test: test_disks(8, 59): faila= 8(D) failb= 59(D) OK [12674.030448] raid6test: test_disks(8, 60): faila= 8(D) failb= 60(D) OK [12674.030960] raid6test: test_disks(8, 61): faila= 8(D) failb= 61(D) OK [12674.031528] raid6test: test_disks(8, 62): faila= 8(D) failb= 62(P) OK [12674.032059] raid6test: test_disks(8, 63): faila= 8(D) failb= 63(Q) OK [12674.032614] raid6test: test_disks(9, 10): faila= 9(D) failb= 10(D) OK [12674.033134] raid6test: test_disks(9, 11): faila= 9(D) failb= 11(D) OK [12674.033694] raid6test: test_disks(9, 12): faila= 9(D) failb= 12(D) OK [12674.034249] raid6test: test_disks(9, 13): faila= 9(D) failb= 13(D) OK [12674.034801] raid6test: test_disks(9, 14): faila= 9(D) failb= 14(D) OK [12674.035304] raid6test: test_disks(9, 15): faila= 9(D) failb= 15(D) OK [12674.035881] raid6test: test_aid6test: test_disks(9, 19): faila= 9(D) failb= 19(D) OK [12674.536855] raid6test: test_disks(9, 20): faila= 9(D) failb= 20(D) OK [12674.537384] raid6test: test_disks(9, 21): faila= 9(D) failb= 21(D) OK [12674.537979] raid6test: test_disks(9, 22): faila= 9(D) failb= 22(D) OK [12674.538519] raid6test: test_disks(9, 23): faila= 9(D) failb= 23(D) OK [12674.539038] raid6test: test_disks(9, 24): faila= 9(D) failb= 24(D) OK [12674.539578] raid6test: test_disks(9, 25): faila= 9(D) failb= 25(D) OK [12674.540094] raid6test: test_disks(9, 26): faila= 9(D) failb= 26(D) OK [12674.540634] raid6test: test_disks(9, 27): faila= 9(D) failb= 27(D) OK [12674.541152] raid6test: test_disks(9, 28): faila= 9(D) failb= 28(D) OK [12674.541690] raid6test: test_disks(9, 29): faila= 9(D) failb= 29(D) OK [12674.542224] raid6test: test_disks(9, 30): faila= 9(D) failb= 30(D) OK [12674.542771] raid6test: test_disks(9, 31): faila= 9(D) failb= 31(D) OK [12674.543288] raid6test: test_disks(9, 32): faila= 9(D) failb= 32(D) OK [12674.543824] raid6test: test_disks(9, 33): faila= 9(D) failb= 33(D) OK [12674.544313] raid6test: test_disks(9, 34): faila= 9(D) failb= 34(D) OK [12674.544881] raid6test: test_disks(9, 35): faila= 9(D) faisks(9, 38): faila= 9(D) failb= 38(D) OK [12675.045854] raid6test: test_disks(9, 39): faila= 9(D) failb= 39(D) OK [12675.046383] raid6test: test_disks(9, 40): faila= 9(D) failb= 40(D) OK [12675.046956] raid6test: test_disks(9, 41): faila= 9(D) failb= 41(D) OK [12675.047573] raid6test: test_disks(9, 42): faila= 9(D) failb= 42(D) OK [12675.048164] raid6test: test_disks(9, 43): faila= 9(D) failb= 43(D) OK [12675.048717] raid6test: test_disks(9, 44): faila= 9(D) failb= 44(D) OK [12675.049266] raid6test: test_disks(9, 45): faila= 9(D) failb= 45(D) OK [12675.049848] raid6test: test_disks(9, 46): faila= 9(D) failb= 46(D) OK [12675.050381] raid6test: test_disks(9, 47): faila= 9(D) failb= 47(D) OK [12675.050935] raid6test: test_disks(9, 48): faila= 9(D) failb= 48(D) OK [12675.051508] raid6test: test_disks(9, 49): faila= 9(D) failb= 49(D) OK [12675.052077] raid6test: test_disks(9, 50): faila= 9(D) failb= 50(D) OK [12675.052653] raid6test: test_disks(9, 51): faila= 9(D) failb= 51(D) OK [12675.053204] raid6test: test_disks(9, 52): faila= 9(D) failb= 52(D) OK [12675.053755] raid6test: test_disks(9, 53): faila= 9(D) failb= 
53(D) OK [12675.054317] raid6test: test_disks(9, 54): failala= 9(D) failb= 57(D) OK [12675.555287] raid6test: test_disks(9, 58): faila= 9(D) failb= 58(D) OK [12675.555863] raid6test: test_disks(9, 59): faila= 9(D) failb= 59(D) OK [12675.556362] raid6test: test_disks(9, 60): faila= 9(D) failb= 60(D) OK [12675.556921] raid6test: test_disks(9, 61): faila= 9(D) failb= 61(D) OK [12675.557529] raid6test: test_disks(9, 62): faila= 9(D) failb= 62(P) OK [12675.558029] raid6test: test_disks(9, 63): faila= 9(D) failb= 63(Q) OK [12675.558565] raid6test: test_disks(10, 11): faila= 10(D) failb= 11(D) OK [12675.559081] raid6test: test_disks(10, 12): faila= 10(D) failb= 12(D) OK [12675.559618] raid6test: test_disks(10, 13): faila= 10(D) failb= 13(D) OK [12675.560159] raid6test: test_disks(10, 14): faila= 10(D) failb= 14(D) OK [12675.560682] raid6test: test_disks(10, 15): faila= 10(D) failb= 15(D) OK [12675.561241] raid6test: test_disks(10, 16): faila= 10(D) failb= 16(D) OK [12675.562151] raid6test: test_disks(10, 17): faila= 10(D) failb= 17(D) OK [12675.562735] raid6test: test_disks(10, 18): faila= 10(D) failb= 18(D) OK [12675.563284] raid6test: test_disks(10, 19): faila= 10(D) failb= 19(D) OK [12675.563847] raid6test: test_disks(10, 20): faila= 10(ila= 10(D) failb= 23(D) OK [12676.064734] raid6test: test_disks(10, 24): faila= 10(D) failb= 24(D) OK [12676.065262] raid6test: test_disks(10, 25): faila= 10(D) failb= 25(D) OK [12676.065842] raid6test: test_disks(10, 26): faila= 10(D) failb= 26(D) OK [12676.066334] raid6test: test_disks(10, 27): faila= 10(D) failb= 27(D) OK [12676.066907] raid6test: test_disks(10, 28): faila= 10(D) failb= 28(D) OK [12676.067472] raid6test: test_disks(10, 29): faila= 10(D) failb= 29(D) OK [12676.067974] raid6test: test_disks(10, 30): faila= 10(D) failb= 30(D) OK [12676.068531] raid6test: test_disks(10, 31): faila= 10(D) failb= 31(D) OK [12676.069025] raid6test: test_disks(10, 32): faila= 10(D) failb= 32(D) OK [12676.069575] raid6test: test_disks(10, 33): faila= 10(D) failb= 33(D) OK [12676.070101] raid6test: test_disks(10, 34): faila= 10(D) failb= 34(D) OK [12676.070650] raid6test: test_disks(10, 35): faila= 10(D) failb= 35(D) OK [12676.071174] raid6test: test_disks(10, 36): faila= 10(D) failb= 36(D) OK [12676.071708] raid6test: test_disks(10, 37): faila= 10(D) failb= 37(D) OK [12676.072231] raid6test: test_disks(10, 38): faila= 10(D) failb= 38(D) OK [12676.072806] raid6test: test_disks(10, 39): faila= 10(D) failb= 39(D) OK b= 42(D) OK [12676.573930] raid6test: test_disks(10, 43): faila= 10(D) failb= 43(D) OK [12676.574659] raid6test: test_disks(10, 44): faila= 10(D) failb= 44(D) OK [12676.575274] raid6test: test_disks(10, 45): faila= 10(D) failb= 45(D) OK [12676.575898] raid6test: test_disks(10, 46): faila= 10(D) failb= 46(D) OK [12676.576587] raid6test: test_disks(10, 47): faila= 10(D) failb= 47(D) OK [12676.577196] raid6test: test_disks(10, 48): faila= 10(D) failb= 48(D) OK [12676.577829] raid6test: test_disks(10, 49): faila= 10(D) failb= 49(D) OK [12676.578471] raid6test: test_disks(10, 50): faila= 10(D) failb= 50(D) OK [12676.579114] raid6test: test_disks(10, 51): faila= 10(D) failb= 51(D) OK [12676.579746] raid6test: test_disks(10, 52): faila= 10(D) failb= 52(D) OK [12676.580353] raid6test: test_disks(10, 53): faila= 10(D) failb= 53(D) OK [12676.580986] raid6test: test_disks(10, 54): faila= 10(D) failb= 54(D) OK [12676.581674] raid6test: test_disks(10, 55): faila= 10(D) failb= 55(D) OK [12676.582273] raid6test: test_disks(10, 56): faila= 10(D) failb= 56(D) OK 
[12676.582906] raid6test: test_disks(10, 57): faila= 10(D) failb= 57(D) OK [12676.583567] raid6test: test_disks(10, 58): faila= 10(D) failb= 58(D) OK [12676.584171] raid6test: test_disks(10, 59): faila= 10(D) failb= 5b= 62(P) OK [12677.084975] raid6test: test_disks(10, 63): faila= 10(D) failb= 63(Q) OK [12677.085568] raid6test: test_disks(11, 12): faila= 11(D) failb= 12(D) OK [12677.086116] raid6test: test_disks(11, 13): faila= 11(D) failb= 13(D) OK [12677.086658] raid6test: test_disks(11, 14): faila= 11(D) failb= 14(D) OK [12677.087189] raid6test: test_disks(11, 15): faila= 11(D) failb= 15(D) OK [12677.087730] raid6test: test_disks(11, 16): faila= 11(D) failb= 16(D) OK [12677.088264] raid6test: test_disks(11, 17): faila= 11(D) failb= 17(D) OK [12677.088805] raid6test: test_disks(11, 18): faila= 11(D) failb= 18(D) OK [12677.089347] raid6test: test_disks(11, 19): faila= 11(D) failb= 19(D) OK [12677.089882] raid6test: test_disks(11, 20): faila= 11(D) failb= 20(D) OK [12677.090394] raid6test: test_disks(11, 21): faila= 11(D) failb= 21(D) OK [12677.090934] raid6test: test_disks(11, 22): faila= 11(D) failb= 22(D) OK [12677.091474] raid6test: test_disks(11, 23): faila= 11(D) failb= 23(D) OK [12677.091986] raid6test: test_disks(11, 24): faila= 11(D) failb= 24(D) OK [12677.092492] raid6test: test_disks(11, 25): faila= 11(D) failb= 25(D) OK [12677.093023] raid6test: test_disks(11, 26): faila= 11(D) failb= 26(D) OK [12677.093559] raid6test: test_disks(11, 27): faila= 11(D) failb= 27(D) OK [12677.094093] raid6test: test_disks(11, 28): faila= 11(D) failb= 28(D) OK [12677.094637] raidaid6test: test_disks(11, 32): faila= 11(D) failb= 32(D) OK [12677.595487] raid6test: test_disks(11, 33): faila= 11(D) failb= 33(D) OK [12677.596028] raid6test: test_disks(11, 34): faila= 11(D) failb= 34(D) OK [12677.596574] raid6test: test_disks(11, 35): faila= 11(D) failb= 35(D) OK [12677.597104] raid6test: test_disks(11, 36): faila= 11(D) failb= 36(D) OK [12677.597645] raid6test: test_disks(11, 37): faila= 11(D) failb= 37(D) OK [12677.598181] raid6test: test_disks(11, 38): faila= 11(D) failb= 38(D) OK [12677.598718] raid6test: test_disks(11, 39): faila= 11(D) failb= 39(D) OK [12677.599244] raid6test: test_disks(11, 40): faila= 11(D) failb= 40(D) OK [12677.599793] raid6test: test_disks(11, 41): faila= 11(D) failb= 41(D) OK [12677.600292] raid6test: test_disks(11, 42): faila= 11(D) failb= 42(D) OK [12677.600825] raid6test: test_disks(11, 43): faila= 11(D) failb= 43(D) OK [12677.601394] raid6test: test_disks(11, 44): faila= 11(D) failb= 44(D) OK [12677.601927] raid6test: test_disks(11, 45): faila= 11(D) failb= 45(D) OK [12677.602404] raid6test: test_disks(11, 46): faila= 11(D) failb= 46(D) OK [12677.602972] raid6test: test_disks(11, 50): faila= 11(D) failb= 50(D) OK [12678.103794] raid6test: test_disks(11, 51): faila= 11(D) failb= 51(D) OK [12678.104400] raid6test: test_disks(11, 52): faila= 11(D) failb= 52(D) OK [12678.104961] raid6test: test_disks(11, 53): faila= 11(D) failb= 53(D) OK [12678.105537] raid6test: test_disks(11, 54): faila= 11(D) failb= 54(D) OK [12678.106090] raid6test: test_disks(11, 55): faila= 11(D) failb= 55(D) OK [12678.106627] raid6test: test_disks(11, 56): faila= 11(D) failb= 56(D) OK [12678.107155] raid6test: test_disks(11, 57): faila= 11(D) failb= 57(D) OK [12678.107703] raid6test: test_disks(11, 58): faila= 11(D) failb= 58(D) OK [12678.108226] raid6test: test_disks(11, 59): faila= 11(D) failb= 59(D) OK [12678.108757] raid6test: test_disks(11, 60): faila= 11(D) failb= 60(D) OK [12678.109256] 
raid6test: test_disks(11, 61): faila= 11(D) failb= 61(D) OK [12678.109831] raid6test: test_disks(11, 62): faila= 11(D) failb= 62(P) OK [12678.110411] raid6test: test_disks(11, 63): faila= 11(D) failb= 63(Q) OK [12678.110954] raid6test: test_disks(12, 13): faila= 12(D) failb= 13(D) OK [12678.111553] raid6test: test_disks(12, 14): faila= 12(D) failb= 14(D) OK [12678.112364] raaid6test: test_disks(12, 18): faila= 12(D) failb= 18(D) OK [12678.613387] raid6test: test_disks(12, 19): faila= 12(D) failb= 19(D) OK [12678.613926] raid6test: test_disks(12, 20): faila= 12(D) failb= 20(D) OK [12678.614515] raid6test: test_disks(12, 21): faila= 12(D) failb= 21(D) OK [12678.615015] raid6test: test_disks(12, 22): faila= 12(D) failb= 22(D) OK [12678.615545] raid6test: test_disks(12, 23): faila= 12(D) failb= 23(D) OK [12678.616090] raid6test: test_disks(12, 24): faila= 12(D) failb= 24(D) OK [12678.616615] raid6test: test_disks(12, 25): faila= 12(D) failb= 25(D) OK [12678.617146] raid6test: test_disks(12, 26): faila= 12(D) failb= 26(D) OK [12678.617722] raid6test: test_disks(12, 27): faila= 12(D) failb= 27(D) OK [12678.618245] raid6test: test_disks(12, 28): faila= 12(D) failb= 28(D) OK [12678.618763] raid6test: test_disks(12, 29): faila= 12(D) failb= 29(D) OK [12678.619292] raid6test: test_disks(12, 30): faila= 12(D) failb= 30(D) OK [12678.619875] raid6test: test_disks(12, 31): faila= 12(D) failb= 31(D) OK [12678.620457] raid6test: test_disks(12, 32): faila= 12(D) failb= 32(D) OK [12678.620965] raid6test: test_disks(12, 33): faila= 12(D) failb= 33(D) OK [12678.621487] raid6test: test_disks(12, 34): faila= 12(D) failb= 34(D) OK [12678.621986] raid6test: test_disks(12, 35): faila= 12(D) failb= 38(D) OK [12679.122771] raid6test: test_disks(12, 39): faila= 12(D) failb= 39(D) OK [12679.123306] raid6test: test_disks(12, 40): faila= 12(D) failb= 40(D) OK [12679.123858] raid6test: test_disks(12, 41): faila= 12(D) failb= 41(D) OK [12679.124389] raid6test: test_disks(12, 42): faila= 12(D) failb= 42(D) OK [12679.124926] raid6test: test_disks(12, 43): faila= 12(D) failb= 43(D) OK [12679.125512] raid6test: test_disks(12, 44): faila= 12(D) failb= 44(D) OK [12679.126012] raid6test: test_disks(12, 45): faila= 12(D) failb= 45(D) OK [12679.126649] raid6test: test_disks(12, 46): faila= 12(D) failb= 46(D) OK [12679.127234] raid6test: test_disks(12, 47): faila= 12(D) failb= 47(D) OK [12679.127774] raid6test: test_disks(12, 48): faila= 12(D) failb= 48(D) OK [12679.128313] raid6test: test_disks(12, 49): faila= 12(D) failb= 49(D) OK [12679.128835] raid6test: test_disks(12, 50): faila= 12(D) failb= 50(D) OK [12679.129353] raid6test: test_disks(12, 51): faila= 12(D) failb= 51(D) OK [12679.129883] raid6test: test_disks(12, 52): faila= 12(D) failb= 52(D) OK [12679.130390] raid6test: test_disks(12, 53): faila= 12(D) failb= 53(D) OK [12679.130914] raid6test: test_disks(12, 54): faila= 12(D) failb= 54(D) OK [12679.131401] raid6test: test_disks(12, 55): faila= 12(D) failb= 55(D) OK [12679.131912] raid6test: test_disks(12, 56): faila= 12(D) failb= 56(D) OK [12679.132409] raid6test: test_disks(12, 57): faila= 12(D) failb= 57(D) OK [12679.132919] raid6test: test_disks(12, 58): faila= 12(D) failb= 58(D) OK [12679.133406] raid6test: test_disks(12, 59): faila= 12(D) ila= 12(D) failb= 62(P) OK [12679.634277] raid6test: test_disks(12, 63): faila= 12(D) failb= 63(Q) OK [12679.634829] raid6test: test_disks(13, 14): faila= 13(D) failb= 14(D) OK [12679.635327] raid6test: test_disks(13, 15): faila= 13(D) failb= 15(D) OK [12679.635899] raid6test: 
test_disks(13, 16): faila= 13(D) failb= 16(D) OK [12679.636476] raid6test: test_disks(13, 17): faila= 13(D) failb= 17(D) OK [12679.637009] raid6test: test_disks(13, 18): faila= 13(D) failb= 18(D) OK [12679.637560] raid6test: test_disks(13, 19): faila= 13(D) failb= 19(D) OK [12679.638091] raid6test: test_disks(13, 20): faila= 13(D) failb= 20(D) OK [12679.638645] raid6test: test_disks(13, 21): faila= 13(D) failb= 21(D) OK [12679.639189] raid6test: test_disks(13, 22): faila= 13(D) failb= 22(D) OK [12679.639729] raid6test: test_disks(13, 23): faila= 13(D) failb= 23(D) OK [12679.640257] raid6test: test_disks(13, 24): faila= 13(D) failb= 24(D) OK [12679.640802] raid6test: test_disks(13, 25): faila= 13(D) failb= 25(D) OK [12679.641336] raid6test: test_disks(13, 26): faila= 13(D) failb= 26(D) OK [12679.641883] raid6test: test_disks(13, 27): fisks(13, 30): faila= 13(D) failb= 30(D) OK [12680.143008] raid6test: test_disks(13, 31): faila= 13(D) failb= 31(D) OK [12680.143596] raid6test: test_disks(13, 32): faila= 13(D) failb= 32(D) OK [12680.144364] raid6test: test_disks(13, 33): faila= 13(D) failb= 33(D) OK [12680.144947] raid6test: test_disks(13, 34): faila= 13(D) failb= 34(D) OK [12680.145494] raid6test: test_disks(13, 35): faila= 13(D) failb= 35(D) OK [12680.146028] raid6test: test_disks(13, 36): faila= 13(D) failb= 36(D) OK [12680.146560] raid6test: test_disks(13, 37): faila= 13(D) failb= 37(D) OK [12680.147125] raid6test: test_disks(13, 38): faila= 13(D) failb= 38(D) OK [12680.147651] raid6test: test_disks(13, 39): faila= 13(D) failb= 39(D) OK [12680.148224] raid6test: test_disks(13, 40): faila= 13(D) failb= 40(D) OK [12680.148754] raid6test: test_disks(13, 41): faila= 13(D) failb= 41(D) OK [12680.149310] raid6test: test_disks(13, 42): faila= 13(D) failb= 42(D) OK [12680.149890] raid6test: test_disks(13, 43): faila= 13(D) failb= 43(D) OK [12680.150414] raid6test: test_disks(13, 44): faila= 13(D) failb= 44(D) OK [12680.150933] raid6test: test_disks(13, 45): faila= 13(D) failb= 45(D) OK [12680.151548] raid6test: test_disks(13, 46): faila= 13(D) faila= 13(D) failb= 49(D) OK [12680.652690] raid6test: test_disks(13, 50): faila= 13(D) failb= 50(D) OK [12680.653240] raid6test: test_disks(13, 51): faila= 13(D) failb= 51(D) OK [12680.653794] raid6test: test_disks(13, 52): faila= 13(D) failb= 52(D) OK [12680.654325] raid6test: test_disks(13, 53): faila= 13(D) failb= 53(D) OK [12680.654869] raid6test: test_disks(13, 54): faila= 13(D) failb= 54(D) OK [12680.655403] raid6test: test_disks(13, 55): faila= 13(D) failb= 55(D) OK [12680.655996] raid6test: test_disks(13, 56): faila= 13(D) failb= 56(D) OK [12680.656533] raid6test: test_disks(13, 57): faila= 13(D) failb= 57(D) OK [12680.657041] raid6test: test_disks(13, 58): faila= 13(D) failb= 58(D) OK [12680.657592] raid6test: test_disks(13, 59): faila= 13(D) failb= 59(D) OK [12680.658124] raid6test: test_disks(13, 60): faila= 13(D) failb= 60(D) OK [12680.658656] raid6test: test_disks(13, 61): faila= 13(D) failb= 61(D) OK [12680.659185] raid6test: test_disks(13, 62): faila= 13(D) failb= 62(P) OK [12680.659736] raid6test: test_disks(13, 63): faila= 13(D) failb= 63(Q) OK [12680.660236] raid6test: test_disks(14, 15): faila= 14(D) failb= 15(D) OK [12680.660783] raid6test: test_disks(14, 16): faiila= 14(D) failb= 19(D) OK [12681.161651] raid6test: test_disks(14, 20): faila= 14(D) failb= 20(D) OK [12681.162194] raid6test: test_disks(14, 21): faila= 14(D) failb= 21(D) OK [12681.162743] raid6test: test_disks(14, 22): faila= 14(D) failb= 22(D) OK [12681.163243] raid6test: 
test_disks(14, 23): faila= 14(D) failb= 23(D) OK [12681.163782] raid6test: test_disks(14, 24): faila= 14(D) failb= 24(D) OK [12681.164305] raid6test: test_disks(14, 25): faila= 14(D) failb= 25(D) OK [12681.164856] raid6test: test_disks(14, 26): faila= 14(D) failb= 26(D) OK [12681.165358] raid6test: test_disks(14, 27): faila= 14(D) failb= 27(D) OK [12681.165924] raid6test: test_disks(14, 28): faila= 14(D) failb= 28(D) OK [12681.166482] raid6test: test_disks(14, 29): faila= 14(D) failb= 29(D) OK [12681.166991] raid6test: test_disks(14, 30): faila= 14(D) failb= 30(D) OK [12681.167533] raid6test: test_disks(14, 31): faila= 14(D) failb= 31(D) OK [12681.168044] raid6test: test_disks(14, 32): faila= 14(D) failb= 32(D) OK [12681.168591] raid6test: test_disks(14, 33): faila= 14(D) failb= 33(D) OK [12681.169118] raid6test: test_disks(14, 34): faila= 14(D) failb= 34(D) OK [12681.169653] raid6test: test_disks(14, 35): faila= 14(D) failb= 35(D) OK [12681.170189] raid6test: test_disks(14, 36): failaila= 14(D) failb= 39(D) OK [12681.670980] raid6test: test_disks(14, 40): faila= 14(D) failb= 40(D) OK [12681.671529] raid6test: test_disks(14, 41): faila= 14(D) failb= 41(D) OK [12681.672039] raid6test: test_disks(14, 42): faila= 14(D) failb= 42(D) OK [12681.672580] raid6test: test_disks(14, 43): faila= 14(D) failb= 43(D) OK [12681.673115] raid6test: test_disks(14, 44): faila= 14(D) failb= 44(D) OK [12681.673661] raid6test: test_disks(14, 45): faila= 14(D) failb= 45(D) OK [12681.674208] raid6test: test_disks(14, 46): faila= 14(D) failb= 46(D) OK [12681.674762] raid6test: test_disks(14, 47): faila= 14(D) failb= 47(D) OK [12681.675288] raid6test: test_disks(14, 48): faila= 14(D) failb= 48(D) OK [12681.675835] raid6test: test_disks(14, 49): faila= 14(D) failb= 49(D) OK [12681.676373] raid6test: test_disks(14, 50): faila= 14(D) failb= 50(D) OK [12681.676917] raid6test: test_disks(14, 51): faila= 14(D) failb= 51(D) OK [12681.677523] raid6test: test_disks(14, 52): faila= 14(D) failb= 52(D) OK [12681.678055] raid6test: test_disks(14, 53): faila= 14(D) failb= 53(D) OK [12681.678564] raid6test: test_disks(14, 54): faila= 14(D) failb= 54(D) OK [12681.679068] raid6test: test_disks(14, 55): faila= 14(D) failb= 55(D) OK [12681.679561] raid6test: test_disks(14, 56): faila= 14(D) failb= 56(D) OK [1[12682.180507] raid6test: test_disks(14, 60): faila= 14(D) failb= 60(D) OK [12682.181031] raid6test: test_disks(14, 61): faila= 14(D) failb= 61(D) OK [12682.181583] raid6test: test_disks(14, 62): faila= 14(D) failb= 62(P) OK [12682.182121] raid6test: test_disks(14, 63): faila= 14(D) failb= 63(Q) OK [12682.182661] raid6test: test_disks(15, 16): faila= 15(D) failb= 16(D) OK [12682.183190] raid6test: test_disks(15, 17): faila= 15(D) failb= 17(D) OK [12682.183730] raid6test: test_disks(15, 18): faila= 15(D) failb= 18(D) OK [12682.184266] raid6test: test_disks(15, 19): faila= 15(D) failb= 19(D) OK [12682.184802] raid6test: test_disks(15, 20): faila= 15(D) failb= 20(D) OK [12682.185324] raid6test: test_disks(15, 21): faila= 15(D) failb= 21(D) OK [12682.185883] raid6test: test_disks(15, 22): faila= 15(D) failb= 22(D) OK [12682.186426] raid6test: test_disks(15, 23): faila= 15(D) failb= 23(D) OK [12682.187310] raid6test: test_disks(15, 24): faila= 15(D) failb= 24(D) OK [12682.187818] raid6test: test_disks(15, 25): faila= 15(D) failb= 25(D) OK [12682.188352] raid6test: test_disks(15, 26): faila= 15(D) failb= 26(D) OK [12682.188895] raid6test: test_disks(15, 27): faila= 15(D) failb= 27(D) OK [b= 30(D) OK [12682.689793] raid6test: 
test_disks(15, 31): faila= 15(D) failb= 31(D) OK [12682.690327] raid6test: test_disks(15, 32): faila= 15(D) failb= 32(D) OK [12682.690886] raid6test: test_disks(15, 33): faila= 15(D) failb= 33(D) OK [12682.691383] raid6test: test_disks(15, 34): faila= 15(D) failb= 34(D) OK [12682.691925] raid6test: test_disks(15, 35): faila= 15(D) failb= 35(D) OK [12682.692456] raid6test: test_disks(15, 36): faila= 15(D) failb= 36(D) OK [12682.692998] raid6test: test_disks(15, 37): faila= 15(D) failb= 37(D) OK [12682.693553] raid6test: test_disks(15, 38): faila= 15(D) failb= 38(D) OK [12682.694027] raid6test: test_disks(15, 39): faila= 15(D) failb= 39(D) OK [12682.694569] raid6test: test_disks(15, 40): faila= 15(D) failb= 40(D) OK [12682.695083] raid6test: test_disks(15, 41): faila= 15(D) failb= 41(D) OK [12682.695659] raid6test: test_disks(15, 42): faila= 15(D) failb= 42(D) OK [12682.696205] raid6test: test_disks(15, 43): faila= 15(D) failb= 43(D) OK [12682.696748] raid6test: test_disks(15, 44): faila= 15(D) failb= 44(D) OK [12682.697285] raid6test: test_disks(15, 45): faila= 15(D) failb= 45(D) OK [12682.697835] raid6test: test_disks(15, 46): faila= 15(D) failb= 46(D) OK [12682.698372] raid6test: test_disks(15, 50): faila= 15(D) failb= 50(D) OK [12683.199231] raid6test: test_disks(15, 51): faila= 15(D) failb= 51(D) OK [12683.199827] raid6test: test_disks(15, 52): faila= 15(D) failb= 52(D) OK [12683.200363] raid6test: test_disks(15, 53): faila= 15(D) failb= 53(D) OK [12683.200899] raid6test: test_disks(15, 54): faila= 15(D) failb= 54(D) OK [12683.201393] raid6test: test_disks(15, 55): faila= 15(D) failb= 55(D) OK [12683.201935] raid6test: test_disks(15, 56): faila= 15(D) failb= 56(D) OK [12683.202500] raid6test: test_disks(15, 57): faila= 15(D) failb= 57(D) OK [12683.203015] raid6test: test_disks(15, 58): faila= 15(D) failb= 58(D) OK [12683.203561] raid6test: test_disks(15, 59): faila= 15(D) failb= 59(D) OK [12683.204067] raid6test: test_disks(15, 60): faila= 15(D) failb= 60(D) OK [12683.204609] raid6test: test_disks(15, 61): faila= 15(D) failb= 61(D) OK [12683.205138] raid6test: test_disks(15, 62): faila= 15(D) failb= 62(P) OK [12683.205715] raid6test: test_disks(15, 63): faila= 15(D) failb= 63(Q) OK [12683.206223] raid6test: test_disks(16, 17): faila= 16(D) failb= 17(D) OK [12683.206763] raid6test: test_disks(16, 18): faila= 16(D) failb= 18(D) OK [12683.207301] raid6test: test_disks(16, 19): faila= 16(D) failb= 19(D) OK [12683.207843] raidaid6test: test_disks(16, 23): faila= 16(D) failb= 23(D) OK [12683.708644] raid6test: test_disks(16, 24): faila= 16(D) failb= 24(D) OK [12683.709180] raid6test: test_disks(16, 25): faila= 16(D) failb= 25(D) OK [12683.709735] raid6test: test_disks(16, 26): faila= 16(D) failb= 26(D) OK [12683.710234] raid6test: test_disks(16, 27): faila= 16(D) failb= 27(D) OK [12683.710781] raid6test: test_disks(16, 28): faila= 16(D) failb= 28(D) OK [12683.711313] raid6test: test_disks(16, 29): faila= 16(D) failb= 29(D) OK [12683.711852] raid6test: test_disks(16, 30): faila= 16(D) failb= 30(D) OK [12683.712386] raid6test: test_disks(16, 31): faila= 16(D) failb= 31(D) OK [12683.712916] raid6test: test_disks(16, 32): faila= 16(D) failb= 32(D) OK [12683.713519] raid6test: test_disks(16, 33): faila= 16(D) failb= 33(D) OK [12683.714058] raid6test: test_disks(16, 34): faila= 16(D) failb= 34(D) OK [12683.714567] raid6test: test_disks(16, 35): faila= 16(D) failb= 35(D) OK [12683.715136] raid6test: test_disks(16, 36): faila= 16(D) failb= 36(D) OK [12683.715712] raid6test: test_disks(16, 37): faila= 
16(D) failb= 37(D) OK [12683.716267] raid6test: test_disks(16, 38): faila= 16(D) failb= 38(D) OK [12683.716873] raiaid6test: test_disks(16, 42): faila= 16(D) failb= 42(D) OK [12684.217713] raid6test: test_disks(16, 43): faila= 16(D) failb= 43(D) OK [12684.218258] raid6test: test_disks(16, 44): faila= 16(D) failb= 44(D) OK [12684.218801] raid6test: test_disks(16, 45): faila= 16(D) failb= 45(D) OK [12684.219332] raid6test: test_disks(16, 46): faila= 16(D) failb= 46(D) OK [12684.219881] raid6test: test_disks(16, 47): faila= 16(D) failb= 47(D) OK [12684.220416] raid6test: test_disks(16, 48): faila= 16(D) failb= 48(D) OK [12684.220960] raid6test: test_disks(16, 49): faila= 16(D) failb= 49(D) OK [12684.221453] raid6test: test_disks(16, 50): faila= 16(D) failb= 50(D) OK [12684.221992] raid6test: test_disks(16, 51): faila= 16(D) failb= 51(D) OK [12684.222536] raid6test: test_disks(16, 52): faila= 16(D) failb= 52(D) OK [12684.223041] raid6test: test_disks(16, 53): faila= 16(D) failb= 53(D) OK [12684.223620] raid6test: test_disks(16, 54): faila= 16(D) failb= 54(D) OK [12684.224160] raid6test: test_disks(16, 55): faila= 16(D) failb= 55(D) OK [12684.224756] raid6test: test_disks(16, 56): faila= 16(D) failb= 56(D) OK [12684.225300] raid6test: test_disks(16, 57): faila= 16(D) failb= 57(D) OK [12684.225867] raid6test: test_disks(16, 58): faila= 16(D) failb= 58(D) OK [12684.226375] raid6test: test_disks(16, 59): faid6test: test_disks(16, 62): faila= 16(D) failb= 62(P) OK [12684.727342] raid6test: test_disks(16, 63): faila= 16(D) failb= 63(Q) OK [12684.728012] raid6test: test_disks(17, 18): faila= 17(D) failb= 18(D) OK [12684.728650] raid6test: test_disks(17, 19): faila= 17(D) failb= 19(D) OK [12684.729248] raid6test: test_disks(17, 20): faila= 17(D) failb= 20(D) OK [12684.729925] raid6test: test_disks(17, 21): faila= 17(D) failb= 21(D) OK [12684.730579] raid6test: test_disks(17, 22): faila= 17(D) failb= 22(D) OK [12684.731101] raid6test: test_disks(17, 23): faila= 17(D) failb= 23(D) OK [12684.731652] raid6test: test_disks(17, 24): faila= 17(D) failb= 24(D) OK [12684.732192] raid6test: test_disks(17, 25): faila= 17(D) failb= 25(D) OK [12684.732735] raid6test: test_disks(17, 26): faila= 17(D) failb= 26(D) OK [12684.733265] raid6test: test_disks(17, 27): faila= 17(D) failb= 27(D) OK [12684.733815] raid6test: test_disks(17, 28): faila= 17(D) failb= 28(D) OK [12684.734348] raid6test: test_disks(17, 29): faila= 17(D) failb= 29(D) OK [12684.734885] raid6test: test_disks(17, 30): faila= 17(D) failb= 30(D) OK [12684.735419] raid6test: test_disks(17, 34): faila= 17(D) failb= 34(D) OK [12685.236259] raid6test: test_disks(17, 35): faila= 17(D) failb= 35(D) OK [12685.236804] raid6test: test_disks(17, 36): faila= 17(D) failb= 36(D) OK [12685.237337] raid6test: test_disks(17, 37): faila= 17(D) failb= 37(D) OK [12685.237884] raid6test: test_disks(17, 38): faila= 17(D) failb= 38(D) OK [12685.238419] raid6test: test_disks(17, 39): faila= 17(D) failb= 39(D) OK [12685.238964] raid6test: test_disks(17, 40): faila= 17(D) failb= 40(D) OK [12685.239542] raid6test: test_disks(17, 41): faila= 17(D) failb= 41(D) OK [12685.240023] raid6test: test_disks(17, 42): faila= 17(D) failb= 42(D) OK [12685.240602] raid6test: test_disks(17, 43): faila= 17(D) failb= 43(D) OK [12685.241119] raid6test: test_disks(17, 44): faila= 17(D) failb= 44(D) OK [12685.241770] raid6test: test_disks(17, 45): faila= 17(D) failb= 45(D) OK [12685.242338] raid6test: test_disks(17, 46): faila= 17(D) failb= 46(D) OK [12685.243202] raid6test: test_disks(17, 47): 
faila= 17(D) failb= 47(D) OK [12685.243760] raid6test: test_disks(17, 48): faila= 17(D) failb= 48(D) OK [12685.271453] [12685.745202] raid6test: test_disks(17, 52): faila= 17(D) failb= 52(D) OK [12685.746105] raid6test: test_disks(17, 53): faila= 17(D) failb= 53(D) OK [12685.746799] raid6test: test_disks(17, 54): faila= 17(D) failb= 54(D) OK [12685.747427] raid6test: test_disks(17, 55): faila= 17(D) failb= 55(D) OK [12685.748095] raid6test: test_disks(17, 56): faila= 17(D) failb= 56(D) OK [12685.748743] raid6test: test_disks(17, 57): faila= 17(D) failb= 57(D) OK [12685.749364] raid6test: test_disks(17, 58): faila= 17(D) failb= 58(D) OK [12685.750019] raid6test: test_disks(17, 59): faila= 17(D) failb= 59(D) OK [12685.750677] raid6test: test_disks(17, 60): faila= 17(D) failb= 60(D) OK [12685.751297] raid6test: test_disks(17, 61): faila= 17(D) failb= 61(D) OK [12685.751958] raid6test: test_disks(17, 62): faila= 17(D) failb= 62(P) OK [12685.752663] raid6test: test_disks(17, 63): faila= 17(D) failb= 63(Q) OK [12685.753270] raid6test: test_disks(18, 19): faila= 18(D) failb= 19(D) OK [12685.753929] raid6test: test_disks(18, 20): faila= 18(D) failb= 20(D) OK [12685.754607] raid6test: test_disks(18, 21): faila= 18(D) failb= 21(D) OK [12685.755212] raid6test: te[12686.250990] raid6test: test_disks(18, 25): faila= 18(D) failb= 25(D) OK [12686.256106] raid6test: test_disks(18, 26): faila= 18(D) failb= 26(D) OK [12686.256635] raid6test: test_disks(18, 27): faila= 18(D) failb= 27(D) OK [12686.257221] raid6test: test_disks(18, 28): faila= 18(D) failb= 28(D) OK [12686.257730] raid6test: test_disks(18, 29): faila= 18(D) failb= 29(D) OK [12686.258290] raid6test: test_disks(18, 30): faila= 18(D) failb= 30(D) OK [12686.258823] raid6test: test_disks(18, 31): faila= 18(D) failb= 31(D) OK [12686.259347] raid6test: test_disks(18, 32): faila= 18(D) failb= 32(D) OK [12686.259863] raid6test: test_disks(18, 33): faila= 18(D) failb= 33(D) OK [12686.260387] raid6test: test_disks(18, 34): faila= 18(D) failb= 34(D) OK [12686.260943] raid6test: test_disks(18, 35): faila= 18(D) failb= 35(D) OK [12686.261526] raid6test: test_disks(18, 36): faila= 18(D) failb= 36(D) OK [12686.262047] raid6test: test_disks(18, 37): faila= 18(D) failb= 37(D) OK [12686.262591] raid6test: test_disks(18, 38): faila= 18(D) failb= 38(D) OK [12686.263132] raid6test: test_disks(18, 39): faila= 18(D) failb= 39(D) OK [12686.263680] raid6test: test_disks(18, 40): faila= 18(D) failb= 40(D) OK [ila= 18(D) failb= 43(D) OK [12686.764965] raid6test: test_disks(18, 44): faila= 18(D) failb= 44(D) OK [12686.765547] raid6test: test_disks(18, 45): faila= 18(D) failb= 45(D) OK [12686.766111] raid6test: test_disks(18, 46): faila= 18(D) failb= 46(D) OK [12686.766648] raid6test: test_disks(18, 47): faila= 18(D) failb= 47(D) OK [12686.767210] raid6test: test_disks(18, 48): faila= 18(D) failb= 48(D) OK [12686.767794] raid6test: test_disks(18, 49): faila= 18(D) failb= 49(D) OK [12686.768361] raid6test: test_disks(18, 50): faila= 18(D) failb= 50(D) OK [12686.768919] raid6test: test_disks(18, 51): faila= 18(D) failb= 51(D) OK [12686.769479] raid6test: test_disks(18, 52): faila= 18(D) failb= 52(D) OK [12686.770036] raid6test: test_disks(18, 53): faila= 18(D) failb= 53(D) OK [12686.770557] raid6test: test_disks(18, 54): faila= 18(D) failb= 54(D) OK [12686.771057] raid6test: test_disks(18, 55): faila= 18(D) failb= 55(D) OK [12686.771622] raid6test: test_disks(18, 56): faila= 18(D) failb= 56(D) OK [12686.772117] raid6test: test_disks(18, 57): faila= 18(D) failb= 57(D) OK 
[12686.772663] raid6test: test_disks(18, 58): faila= 18(D) failb= 58(D) OK [12686.773163] raid6test: test_disks(18, 59): faila= 18(D) failb= 59(D) OK ila= 18(D) failb= 62(P) OK [12687.274175] raid6test: test_disks(18, 63): faila= 18(D) failb= 63(Q) OK [12687.274730] raid6test: test_disks(19, 20): faila= 19(D) failb= 20(D) OK [12687.275270] raid6test: test_disks(19, 21): faila= 19(D) failb= 21(D) OK [12687.275837] raid6test: test_disks(19, 22): faila= 19(D) failb= 22(D) OK [12687.276372] raid6test: test_disks(19, 23): faila= 19(D) failb= 23(D) OK [12687.276915] raid6test: test_disks(19, 24): faila= 19(D) failb= 24(D) OK [12687.277481] raid6test: test_disks(19, 25): faila= 19(D) failb= 25(D) OK [12687.277981] raid6test: test_disks(19, 26): faila= 19(D) failb= 26(D) OK [12687.278577] raid6test: test_disks(19, 27): faila= 19(D) failb= 27(D) OK [12687.279051] raid6test: test_disks(19, 28): faila= 19(D) failb= 28(D) OK [12687.279536] raid6test: test_disks(19, 29): faila= 19(D) failb= 29(D) OK [12687.280016] raid6test: test_disks(19, 30): faila= 19(D) failb= 30(D) OK [12687.280581] raid6test: test_disks(19, 31): faila= 19(D) failb= 31(D) OK [12687.281094] raid6test: test_disks(19, 32): faila= 19(D) failb= 32(D) OK [12687.281584] raid6test: test_disks(19, 33): faila= 19(D) failb= 33(D) OK [12687.282088] raid6test: test_disks(19, 34): faiisks(19, 37): faila= 19(D) failb= 37(D) OK [12687.783011] raid6test: test_disks(19, 38): faila= 19(D) failb= 38(D) OK [12687.783549] raid6test: test_disks(19, 39): faila= 19(D) failb= 39(D) OK [12687.784061] raid6test: test_disks(19, 40): faila= 19(D) failb= 40(D) OK [12687.784590] raid6test: test_disks(19, 41): faila= 19(D) failb= 41(D) OK [12687.785102] raid6test: test_disks(19, 42): faila= 19(D) failb= 42(D) OK [12687.785595] raid6test: test_disks(19, 43): faila= 19(D) failb= 43(D) OK [12687.786102] raid6test: test_disks(19, 44): faila= 19(D) failb= 44(D) OK [12687.786607] raid6test: test_disks(19, 45): faila= 19(D) failb= 45(D) OK [12687.787125] raid6test: test_disks(19, 46): faila= 19(D) failb= 46(D) OK [12687.787622] raid6test: test_disks(19, 47): faila= 19(D) failb= 47(D) OK [12687.788129] raid6test: test_disks(19, 48): faila= 19(D) failb= 48(D) OK [12687.788627] raid6test: test_disks(19, 49): faila= 19(D) failb= 49(D) OK [12687.789136] raid6test: test_disks(19, 50): faila= 19(D) failb= 50(D) OK [12687.789632] raid6test: test_disks(19, 51): faila= 19(D) failb= 51(D) OK [12687.790142] raid6test: test_disks(19, 52): faila= 19(D) failb= 52(D) OK [12687.790642] raid6test: test_disaid6test: test_disks(19, 56): faila= 19(D) failb= 56(D) OK [12688.291552] raid6test: test_disks(19, 57): faila= 19(D) failb= 57(D) OK [12688.292010] raid6test: test_disks(19, 58): faila= 19(D) failb= 58(D) OK [12688.292508] raid6test: test_disks(19, 59): faila= 19(D) failb= 59(D) OK [12688.293003] raid6test: test_disks(19, 60): faila= 19(D) failb= 60(D) OK [12688.293495] raid6test: test_disks(19, 61): faila= 19(D) failb= 61(D) OK [12688.293996] raid6test: test_disks(19, 62): faila= 19(D) failb= 62(P) OK [12688.294504] raid6test: test_disks(19, 63): faila= 19(D) failb= 63(Q) OK [12688.294993] raid6test: test_disks(20, 21): faila= 20(D) failb= 21(D) OK [12688.295490] raid6test: test_disks(20, 22): faila= 20(D) failb= 22(D) OK [12688.295981] raid6test: test_disks(20, 23): faila= 20(D) failb= 23(D) OK [12688.296443] raid6test: test_disks(20, 24): faila= 20(D) failb= 24(D) OK [12688.296915] raid6test: test_disks(20, 25): faila= 20(D) failb= 25(D) OK [12688.297399] raid6test: test_disks(20, 
26): faila= 20(D) failb= 26(D) OK [12688.297924] raid6test: test_disks(20, 27): faila= 20(D) failb= 27(D) OK [12688.298439] raid6test: test_disks(20, 28aid6test: test_disks(20, 31): faila= 20(D) failb= 31(D) OK [12688.799461] raid6test: test_disks(20, 32): faila= 20(D) failb= 32(D) OK [12688.800028] raid6test: test_disks(20, 33): faila= 20(D) failb= 33(D) OK [12688.800593] raid6test: test_disks(20, 34): faila= 20(D) failb= 34(D) OK [12688.801108] raid6test: test_disks(20, 35): faila= 20(D) failb= 35(D) OK [12688.801638] raid6test: test_disks(20, 36): faila= 20(D) failb= 36(D) OK [12688.802146] raid6test: test_disks(20, 37): faila= 20(D) failb= 37(D) OK [12688.802633] raid6test: test_disks(20, 38): faila= 20(D) failb= 38(D) OK [12688.803144] raid6test: test_disks(20, 39): faila= 20(D) failb= 39(D) OK [12688.803641] raid6test: test_disks(20, 40): faila= 20(D) failb= 40(D) OK [12688.804152] raid6test: test_disks(20, 41): faila= 20(D) failb= 41(D) OK [12688.804674] raid6test: test_disks(20, 42): faila= 20(D) failb= 42(D) OK [12688.805213] raid6test: test_disks(20, 43): faila= 20(D) failb= 43(D) OK [12688.805777] raid6test: test_disks(20, 44): faila= 20(D) failb= 44(D) OK [12688.834485]b= 47(D) OK [12689.306781] raid6test: test_disks(20, 48): faila= 20(D) failb= 48(D) OK [12689.307245] raid6test: test_disks(20, 49): faila= 20(D) failb= 49(D) OK [12689.307749] raid6test: test_disks(20, 50): faila= 20(D) failb= 50(D) OK [12689.308247] raid6test: test_disks(20, 51): faila= 20(D) failb= 51(D) OK [12689.308749] raid6test: test_disks(20, 52): faila= 20(D) failb= 52(D) OK [12689.309247] raid6test: test_disks(20, 53): faila= 20(D) failb= 53(D) OK [12689.309740] raid6test: test_disks(20, 54): faila= 20(D) failb= 54(D) OK [12689.310235] raid6test: test_disks(20, 55): faila= 20(D) failb= 55(D) OK [12689.310734] raid6test: test_disks(20, 56): faila= 20(D) failb= 56(D) OK [12689.311242] raid6test: test_disks(20, 57): faila= 20(D) failb= 57(D) OK [12689.311744] raid6test: test_disks(20, 58): faila= 20(D) failb= 58(D) OK [12689.312604] raid6test: test_disks(20, 59): faila= 20(D) failb= 59(D) OK [12689.313082] raid6test: test_disks(20, 60): faila= 20(D) failb= 60(D) OK [12689.313578] raid6test: test_disks(20, 61): faila= 20(D) failb= 61(D) OK [12689.314098] raid6test: test_disks(20, 62): faila= 20(D) failb= 62(P) OK [12689.314607] raid6test: test_disks(20, 63): faila= 20(D) failb= 63(Q) OK [12689.342877] b= 24(D) OK [12689.815564] raid6test: test_disks(21, 25): faila= 21(D) failb= 25(D) OK [12689.816144] raid6test: test_disks(21, 26): faila= 21(D) failb= 26(D) OK [12689.816664] raid6test: test_disks(21, 27): faila= 21(D) failb= 27(D) OK [12689.817175] raid6test: test_disks(21, 28): faila= 21(D) failb= 28(D) OK [12689.817700] raid6test: test_disks(21, 29): faila= 21(D) failb= 29(D) OK [12689.818231] raid6test: test_disks(21, 30): faila= 21(D) failb= 30(D) OK [12689.818765] raid6test: test_disks(21, 31): faila= 21(D) failb= 31(D) OK [12689.819256] raid6test: test_disks(21, 32): faila= 21(D) failb= 32(D) OK [12689.819793] raid6test: test_disks(21, 33): faila= 21(D) failb= 33(D) OK [12689.820330] raid6test: test_disks(21, 34): faila= 21(D) failb= 34(D) OK [12689.820875] raid6test: test_disks(21, 35): faila= 21(D) failb= 35(D) OK [12689.821373] raid6test: test_disks(21, 36): faila= 21(D) failb= 36(D) OK [12689.821915] raid6test: test_disks(21, 37): faila= 21(D) failb= 37(D) OK [12689.822408] raid6test: test_disks(21, 38): faila= 21(D) failb= 38(D) OK [12689.822946] raid6test: test_disks(21, 39): faila= 21(D) failb= 
39(D) OK [12689.823432] raid6test: test_disks(21, 40): faila= 21(D) failb=ila= 21(D) failb= 43(D) OK [12690.324303] raid6test: test_disks(21, 44): faila= 21(D) failb= 44(D) OK [12690.324827] raid6test: test_disks(21, 45): faila= 21(D) failb= 45(D) OK [12690.325335] raid6test: test_disks(21, 46): faila= 21(D) failb= 46(D) OK [12690.325887] raid6test: test_disks(21, 47): faila= 21(D) failb= 47(D) OK [12690.326407] raid6test: test_disks(21, 48): faila= 21(D) failb= 48(D) OK [12690.326921] raid6test: test_disks(21, 49): faila= 21(D) failb= 49(D) OK [12690.327454] raid6test: test_disks(21, 50): faila= 21(D) failb= 50(D) OK [12690.327959] raid6test: test_disks(21, 51): faila= 21(D) failb= 51(D) OK [12690.328461] raid6test: test_disks(21, 52): faila= 21(D) failb= 52(D) OK [12690.328962] raid6test: test_disks(21, 53): faila= 21(D) failb= 53(D) OK [12690.329471] raid6test: test_disks(21, 54): faila= 21(D) failb= 54(D) OK [12690.330000] raid6test: test_disks(21, 55): faila= 21(D) failb= 55(D) OK [12690.330509] raid6test: test_disks(21, 56): faila= 21(D) failb= 56(D) OK [12690.331022] raid6test: test_disks(21, 57): faila= 21(D) failb= 57(D) OK [12690.331561] raid6test: test_disks(21, 58): faila= 21(D) failb= 58(D) OK [12690.332028] raid6test: test_disks(21, 59): faila= 21(D) failb= 59(D) OK [12690.332552] raid6test: test_disks(21, 60): faila=isks(21, 63): faila= 21(D) failb= 63(Q) OK [12690.833565] raid6test: test_disks(22, 23): faila= 22(D) failb= 23(D) OK [12690.834098] raid6test: test_disks(22, 24): faila= 22(D) failb= 24(D) OK [12690.834614] raid6test: test_disks(22, 25): faila= 22(D) failb= 25(D) OK [12690.835129] raid6test: test_disks(22, 26): faila= 22(D) failb= 26(D) OK [12690.835630] raid6test: test_disks(22, 27): faila= 22(D) failb= 27(D) OK [12690.836163] raid6test: test_disks(22, 28): faila= 22(D) failb= 28(D) OK [12690.836687] raid6test: test_disks(22, 29): faila= 22(D) failb= 29(D) OK [12690.837203] raid6test: test_disks(22, 30): faila= 22(D) failb= 30(D) OK [12690.837720] raid6test: test_disks(22, 31): faila= 22(D) failb= 31(D) OK [12690.838196] raid6test: test_disks(22, 32): faila= 22(D) failb= 32(D) OK [12690.838704] raid6test: test_disks(22, 33): faila= 22(D) failb= 33(D) OK [12690.839177] raid6test: test_disks(22, 34): faila= 22(D) failb= 34(D) OK [12690.839663] raid6test: test_disks(22, 35): faila= 22(D) failb= 35(D) OK [12690.840138] raid6test: test_disks(22, 36): faila= 22(D) failb= 36(D) OK [12690.840628] raid6test: test_disks(22, 37): faila= 22(D) failb= 37(D) OK [12690.841093] raid6test: test_disks(22, 38): faila= 22(D) failb= 38(D) OK [12690.841616] raid6test: test_disks(22, 39): faila= 22(D) failb= 39(D) OK [12690.842122] raid6test: test_disks(22, 40): faila= 22(D) failbila= 22(D) failb= 43(D) OK [12691.343176] raid6test: test_disks(22, 44): faila= 22(D) failb= 44(D) OK [12691.343708] raid6test: test_disks(22, 45): faila= 22(D) failb= 45(D) OK [12691.344184] raid6test: test_disks(22, 46): faila= 22(D) failb= 46(D) OK [12691.344699] raid6test: test_disks(22, 47): faila= 22(D) failb= 47(D) OK [12691.345169] raid6test: test_disks(22, 48): faila= 22(D) failb= 48(D) OK [12691.345693] raid6test: test_disks(22, 49): faila= 22(D) failb= 49(D) OK [12691.346184] raid6test: test_disks(22, 50): faila= 22(D) failb= 50(D) OK [12691.346669] raid6test: test_disks(22, 51): faila= 22(D) failb= 51(D) OK [12691.347144] raid6test: test_disks(22, 52): faila= 22(D) failb= 52(D) OK [12691.347635] raid6test: test_disks(22, 53): faila= 22(D) failb= 53(D) OK [12691.348133] raid6test: test_disks(22, 
54): faila= 22(D) failb= 54(D) OK [12691.348625] raid6test: test_disks(22, 55): faila= 22(D) failb= 55(D) OK [12691.349128] raid6test: test_disks(22, 56): faila= 22(D) failb= 56(D) OK [12691.349622] raid6test: test_disks(22, 57): faila= 22(D) failb= 57(D) OK [12691.350125] raid6test: test_disks(22, 58): faila= 22(D) failb= 58(D) OK [12691.350615] raid6test: test_disks(22, 59): faila= 22(D) failb= 59(D) OK [12691.351110] raid6test: test_disks(22, 60): faila=isks(22, 63): faila= 22(D) failb= 63(Q) OK [12691.851982] raid6test: test_disks(23, 24): faila= 23(D) failb= 24(D) OK [12691.852489] raid6test: test_disks(23, 25): faila= 23(D) failb= 25(D) OK [12691.852997] raid6test: test_disks(23, 26): faila= 23(D) failb= 26(D) OK [12691.853493] raid6test: test_disks(23, 27): faila= 23(D) failb= 27(D) OK [12691.853981] raid6test: test_disks(23, 28): faila= 23(D) failb= 28(D) OK [12691.854479] raid6test: test_disks(23, 29): faila= 23(D) failb= 29(D) OK [12691.854967] raid6test: test_disks(23, 30): faila= 23(D) failb= 30(D) OK [12691.855434] raid6test: test_disks(23, 31): faila= 23(D) failb= 31(D) OK [12691.855972] raid6test: test_disks(23, 32): faila= 23(D) failb= 32(D) OK [12691.856487] raid6test: test_disks(23, 33): faila= 23(D) failb= 33(D) OK [12691.856970] raid6test: test_disks(23, 34): faila= 23(D) failb= 34(D) OK [12691.857433] raid6test: test_disks(23, 35): faila= 23(D) failb= 35(D) OK [12691.857929] raid6test: test_disks(23, 36): faila= 23(D) failb= 36(D) OK [12691.858426] raid6test: test_disks(23, 37): faila= 23(D) failb= 37(D) OK [12691.858928] raid6test: test_disks(23, 38): faila= 23(D) failb= 38(D) OK [12691.859434] raid6test: test_diaid6test: test_disks(23, 42): faila= 23(D) failb= 42(D) OK [12692.360344] raid6test: test_disks(23, 43): faila= 23(D) failb= 43(D) OK [12692.360875] raid6test: test_disks(23, 44): faila= 23(D) failb= 44(D) OK [12692.361377] raid6test: test_disks(23, 45): faila= 23(D) failb= 45(D) OK [12692.361904] raid6test: test_disks(23, 46): faila= 23(D) failb= 46(D) OK [12692.362426] raid6test: test_disks(23, 47): faila= 23(D) failb= 47(D) OK [12692.362956] raid6test: test_disks(23, 48): faila= 23(D) failb= 48(D) OK [12692.363471] raid6test: test_disks(23, 49): faila= 23(D) failb= 49(D) OK [12692.364000] raid6test: test_disks(23, 50): faila= 23(D) failb= 50(D) OK [12692.364498] raid6test: test_disks(23, 51): faila= 23(D) failb= 51(D) OK [12692.365016] raid6test: test_disks(23, 52): faila= 23(D) failb= 52(D) OK [12692.365588] raid6test: test_disks(23, 53): faila= 23(D) failb= 53(D) OK [12692.366133] raid6test: test_disks(23, 54): faila= 23(D) failb= 54(D) OK [12692.366658] raid6test: test_disks(23, 55): faila= 23(D) failb= 55(D) OK [12692.367164] raid6test: test_disks(23, 56): faila= 23(D) failb= 56(D) OK [12692.367683] raid6test: test_disks(23, 57): faila= 23(D) failb= 57(D) OK [12692.368192] raid6test: test_disks(23, 58): aid6test: test_disks(23, 61): faila= 23(D) failb= 61(D) OK [12692.869084] raid6test: test_disks(23, 62): faila= 23(D) failb= 62(P) OK [12692.869610] raid6test: test_disks(23, 63): faila= 23(D) failb= 63(Q) OK [12692.870094] raid6test: test_disks(24, 25): faila= 24(D) failb= 25(D) OK [12692.870639] raid6test: test_disks(24, 26): faila= 24(D) failb= 26(D) OK [12692.871130] raid6test: test_disks(24, 27): faila= 24(D) failb= 27(D) OK [12692.871667] raid6test: test_disks(24, 28): faila= 24(D) failb= 28(D) OK [12692.872154] raid6test: test_disks(24, 29): faila= 24(D) failb= 29(D) OK [12692.872668] raid6test: test_disks(24, 30): faila= 24(D) failb= 30(D) OK 
[12692.873152] raid6test: test_disks(24, 31): faila= 24(D) failb= 31(D) OK [12692.873667] raid6test: test_disks(24, 32): faila= 24(D) failb= 32(D) OK [12692.874161] raid6test: test_disks(24, 33): faila= 24(D) failb= 33(D) OK [12692.874685] raid6test: test_disks(24, 34): faila= 24(D) failb= 34(D) OK [12692.875186] raid6test: test_disks(24, 35): faila= 24(D) failb= 35(D) OK [12692.875666] raid6test: test_disks(24, 36): faila= 24(D) failb= 36(D) OK [12692.876181] raid6test: test_disks(24, 37): faila= 24(D) failb= 37(D) OK [12692.876693] ra[12693.351547] raid6test: test_disks(24, 41): faila= 24(D) failb= 41(D) OK [12693.377500] raid6test: test_disks(24, 42): faila= 24(D) failb= 42(D) OK [12693.378021] raid6test: test_disks(24, 43): faila= 24(D) failb= 43(D) OK [12693.378529] raid6test: test_disks(24, 44): faila= 24(D) failb= 44(D) OK [12693.378997] raid6test: test_disks(24, 45): faila= 24(D) failb= 45(D) OK [12693.379501] raid6test: test_disks(24, 46): faila= 24(D) failb= 46(D) OK [12693.379978] raid6test: test_disks(24, 47): faila= 24(D) failb= 47(D) OK [12693.380528] raid6test: test_disks(24, 48): faila= 24(D) failb= 48(D) OK [12693.381041] raid6test: test_disks(24, 49): faila= 24(D) failb= 49(D) OK [12693.381541] raid6test: test_disks(24, 50): faila= 24(D) failb= 50(D) OK [12693.382025] raid6test: test_disks(24, 51): faila= 24(D) failb= 51(D) OK [12693.382537] raid6test: test_disks(24, 52): faila= 24(D) failb= 52(D) OK [12693.383008] raid6test: test_disks(24, 53): faila= 24(D) failb= 53(D) OK [12693.383486] raid6test: test_disks(24, 54): faila= 24(D) failb= 54(D) OK [12693.384000] raid6test: test_disks(24, 55): faila= 24(D) failb= 55(D) OK [12693.384507] raid6test: test_disks(24, 56): faila= 24(D) failb= 56(D) OK [12693.384928] raid6test: test_disks(24, 57): faila= 24(D) failb= 57(D) OK [12693.385445] raid6test: test_daid6test: test_disks(24, 61): faila= 24(D) failb= 61(D) OK [12693.886301] raid6test: test_disks(24, 62): faila= 24(D) failb= 62(P) OK [12693.886825] raid6test: test_disks(24, 63): faila= 24(D) failb= 63(Q) OK [12693.887335] raid6test: test_disks(25, 26): faila= 25(D) failb= 26(D) OK [12693.887858] raid6test: test_disks(25, 27): faila= 25(D) failb= 27(D) OK [12693.888378] raid6test: test_disks(25, 28): faila= 25(D) failb= 28(D) OK [12693.888898] raid6test: test_disks(25, 29): faila= 25(D) failb= 29(D) OK [12693.889407] raid6test: test_disks(25, 30): faila= 25(D) failb= 30(D) OK [12693.889926] raid6test: test_disks(25, 31): faila= 25(D) failb= 31(D) OK [12693.890443] raid6test: test_disks(25, 32): faila= 25(D) failb= 32(D) OK [12693.890958] raid6test: test_disks(25, 33): faila= 25(D) failb= 33(D) OK [12693.891461] raid6test: test_disks(25, 34): faila= 25(D) failb= 34(D) OK [12693.891931] raid6test: test_disks(25, 35): faila= 25(D) failb= 35(D) OK [12693.892439] raid6test: test_disks(25, 36): faila= 25(D) failb= 36(D) OK [12693.892913] raid6test: test_disks(25, 37): faila= 25(D) failb= 37(D) OK [12693.921178] b= 40(D) OK [12694.393791] raid6test: test_disks(25, 41): faila= 25(D) failb= 41(D) OK [12694.394251] raid6test: test_disks(25, 42): faila= 25(D) failb= 42(D) OK [12694.394765] raid6test: test_disks(25, 43): faila= 25(D) failb= 43(D) OK [12694.395252] raid6test: test_disks(25, 44): faila= 25(D) failb= 44(D) OK [12694.395768] raid6test: test_disks(25, 45): faila= 25(D) failb= 45(D) OK [12694.396263] raid6test: test_disks(25, 46): faila= 25(D) failb= 46(D) OK [12694.396772] raid6test: test_disks(25, 47): faila= 25(D) failb= 47(D) OK [12694.397261] raid6test: test_disks(25, 
48): faila= 25(D) failb= 48(D) OK [12694.397775] raid6test: test_disks(25, 49): faila= 25(D) failb= 49(D) OK [12694.398265] raid6test: test_disks(25, 50): faila= 25(D) failb= 50(D) OK [12694.398774] raid6test: test_disks(25, 51): faila= 25(D) failb= 51(D) OK [12694.399269] raid6test: test_disks(25, 52): faila= 25(D) failb= 52(D) OK [12694.399786] raid6test: test_disks(25, 53): faila= 25(D) failb= 53(D) OK [12694.400276] raid6test: test_disks(25, 54): faila= 25(D) failb= 54(D) OK [12694.400779] raid6test: test_disks(25, 55): faila= 25(D) failb= 55(D) OK [12694.401266] raid6test: test_disks(25, 56): faila= 25(D) failb= 56(D) OK [12694.401772] raid6test: test_disks(25, 57): faila= 25(D) failb= 5ila= 25(D) failb= 60(D) OK [12694.902711] raid6test: test_disks(25, 61): faila= 25(D) failb= 61(D) OK [12694.903206] raid6test: test_disks(25, 62): faila= 25(D) failb= 62(P) OK [12694.903691] raid6test: test_disks(25, 63): faila= 25(D) failb= 63(Q) OK [12694.904178] raid6test: test_disks(26, 27): faila= 26(D) failb= 27(D) OK [12694.904709] raid6test: test_disks(26, 28): faila= 26(D) failb= 28(D) OK [12694.905204] raid6test: test_disks(26, 29): faila= 26(D) failb= 29(D) OK [12694.905682] raid6test: test_disks(26, 30): faila= 26(D) failb= 30(D) OK [12694.906180] raid6test: test_disks(26, 31): faila= 26(D) failb= 31(D) OK [12694.906699] raid6test: test_disks(26, 32): faila= 26(D) failb= 32(D) OK [12694.907191] raid6test: test_disks(26, 33): faila= 26(D) failb= 33(D) OK [12694.907606] raid6test: test_disks(26, 34): faila= 26(D) failb= 34(D) OK [12694.908097] raid6test: test_disks(26, 35): faila= 26(D) failb= 35(D) OK [12694.908634] raid6test: test_disks(26, 36): faila= 26(D) failb= 36(D) OK [12694.909126] raid6test: test_disks(26, 37): faila= 26(D) failb= 37(D) OK [12694.909657] raid6test: test_disks(26, 38): faila= 26(D) failb= 38(D) Oila= 26(D) failb= 41(D) OK [12695.410522] raid6test: test_disks(26, 42): faila= 26(D) failb= 42(D) OK [12695.411050] raid6test: test_disks(26, 43): faila= 26(D) failb= 43(D) OK [12695.411562] raid6test: test_disks(26, 44): faila= 26(D) failb= 44(D) OK [12695.412067] raid6test: test_disks(26, 45): faila= 26(D) failb= 45(D) OK [12695.412596] raid6test: test_disks(26, 46): faila= 26(D) failb= 46(D) OK [12695.413082] raid6test: test_disks(26, 47): faila= 26(D) failb= 47(D) OK [12695.413618] raid6test: test_disks(26, 48): faila= 26(D) failb= 48(D) OK [12695.414103] raid6test: test_disks(26, 49): faila= 26(D) failb= 49(D) OK [12695.414635] raid6test: test_disks(26, 50): faila= 26(D) failb= 50(D) OK [12695.415127] raid6test: test_disks(26, 51): faila= 26(D) failb= 51(D) OK [12695.415675] raid6test: test_disks(26, 52): faila= 26(D) failb= 52(D) OK [12695.416183] raid6test: test_disks(26, 53): faila= 26(D) failb= 53(D) OK [12695.416715] raid6test: test_disks(26, 54): faila= 26(D) failb= 54(D) OK [12695.417204] raid6test: test_disks(26, 55): faila= 26(D) failb= 55(D) OK [12695.417731] raid6test: test_disks(26, 56): faila= 26(D) failb= 56(D) OK [12695.418220] raid6test: test_disks(26, 57): failisks(26, 60): faila= 26(D) failb= 60(D) OK [12695.919079] raid6test: test_disks(26, 61): faila= 26(D) failb= 61(D) OK [12695.919558] raid6test: test_disks(26, 62): faila= 26(D) failb= 62(P) OK [12695.920042] raid6test: test_disks(26, 63): faila= 26(D) failb= 63(Q) OK [12695.920550] raid6test: test_disks(27, 28): faila= 27(D) failb= 28(D) OK [12695.921066] raid6test: test_disks(27, 29): faila= 27(D) failb= 29(D) OK [12695.921603] raid6test: test_disks(27, 30): faila= 27(D) failb= 30(D) OK 
[12695.922093] raid6test: test_disks(27, 31): faila= 27(D) failb= 31(D) OK [12695.922638] raid6test: test_disks(27, 32): faila= 27(D) failb= 32(D) OK [12695.923124] raid6test: test_disks(27, 33): faila= 27(D) failb= 33(D) OK [12695.923619] raid6test: test_disks(27, 34): faila= 27(D) failb= 34(D) OK [12695.924107] raid6test: test_disks(27, 35): faila= 27(D) failb= 35(D) OK [12695.924629] raid6test: test_disks(27, 36): faila= 27(D) failb= 36(D) OK [12695.925119] raid6test: test_disks(27, 37): faila= 27(D) failb= 37(D) OK [12695.925658] raid6test: test_disks(27, 38): faila= 27(D) failb= 38(D) OK [12695.926165] raid6test: test_disks(27, 39): faila= 27(D) isks(27, 42): faila= 27(D) failb= 42(D) OK [12696.427081] raid6test: test_disks(27, 43): faila= 27(D) failb= 43(D) OK [12696.427565] raid6test: test_disks(27, 44): faila= 27(D) failb= 44(D) OK [12696.428075] raid6test: test_disks(27, 45): faila= 27(D) failb= 45(D) OK [12696.428626] raid6test: test_disks(27, 46): faila= 27(D) failb= 46(D) OK [12696.429111] raid6test: test_disks(27, 47): faila= 27(D) failb= 47(D) OK [12696.429657] raid6test: test_disks(27, 48): faila= 27(D) failb= 48(D) OK [12696.430142] raid6test: test_disks(27, 49): faila= 27(D) failb= 49(D) OK [12696.430641] raid6test: test_disks(27, 50): faila= 27(D) failb= 50(D) OK [12696.431129] raid6test: test_disks(27, 51): faila= 27(D) failb= 51(D) OK [12696.431663] raid6test: test_disks(27, 52): faila= 27(D) failb= 52(D) OK [12696.432155] raid6test: test_disks(27, 53): faila= 27(D) failb= 53(D) OK [12696.432683] raid6test: test_disks(27, 54): faila= 27(D) failb= 54(D) OK [12696.433172] raid6test: test_disks(27, 55): faila= 27(D) failb= 55(D) OK [12696.433708] raid6test: test_disks(27, 56): faila= 27(D) failb= 56(D) OK [12696.434197] raid6test: test_disks(27, 57): faila= 27(D) failb= 57(D) OK [12696.434734] raid6test: test_disaid6test: test_disks(27, 61): faila= 27(D) failb= 61(D) OK [12696.935612] raid6test: test_disks(27, 62): faila= 27(D) failb= 62(P) OK [12696.936144] raid6test: test_disks(27, 63): faila= 27(D) failb= 63(Q) OK [12696.936655] raid6test: test_disks(28, 29): faila= 28(D) failb= 29(D) OK [12696.937493] raid6test: test_disks(28, 30): faila= 28(D) failb= 30(D) OK [12696.937999] raid6test: test_disks(28, 31): faila= 28(D) failb= 31(D) OK [12696.938514] raid6test: test_disks(28, 32): faila= 28(D) failb= 32(D) OK [12696.939033] raid6test: test_disks(28, 33): faila= 28(D) failb= 33(D) OK [12696.939529] raid6test: test_disks(28, 34): faila= 28(D) failb= 34(D) OK [12696.940045] raid6test: test_disks(28, 35): faila= 28(D) failb= 35(D) OK [12696.940546] raid6test: test_disks(28, 36): faila= 28(D) failb= 36(D) OK [12696.941058] raid6test: test_disks(28, 37): faila= 28(D) failb= 37(D) OK [12696.941565] raid6test: test_disks(28, 38): faila= 28(D) failb= 38(D) OK [12696.942077] raid6test: test_disks(28, 39): faila= 28(D) failb= 39(D) OK [12696.942552] raid6test: test_disks(28, 40): faila= 28(D) failb= 40(D) OK [12696.943019] raid6test: test_disks(28, 41): faila= 28(D) failb= 41(D) OK [12696.943517] rai[12697.418019] raid6test: test_disks(28, 45): faila= 28(D) failb= 45(D) OK [12697.444338] raid6test: test_disks(28, 46): faila= 28(D) failb= 46(D) OK [12697.444852] raid6test: test_disks(28, 47): faila= 28(D) failb= 47(D) OK [12697.445361] raid6test: test_disks(28, 48): faila= 28(D) failb= 48(D) OK [12697.445838] raid6test: test_disks(28, 49): faila= 28(D) failb= 49(D) OK [12697.446333] raid6test: test_disks(28, 50): faila= 28(D) failb= 50(D) OK [12697.446850] raid6test: test_disks(28, 
51): faila= 28(D) failb= 51(D) OK [12697.447360] raid6test: test_disks(28, 52): faila= 28(D) failb= 52(D) OK [12697.447871] raid6test: test_disks(28, 53): faila= 28(D) failb= 53(D) OK [12697.448378] raid6test: test_disks(28, 54): faila= 28(D) failb= 54(D) OK [12697.448857] raid6test: test_disks(28, 55): faila= 28(D) failb= 55(D) OK [12697.449371] raid6test: test_disks(28, 56): faila= 28(D) failb= 56(D) OK [12697.449844] raid6test: test_disks(28, 57): faila= 28(D) failb= 57(D) OK [12697.450360] raid6test: test_disks(28, 58): faila= 28(D) failb= 58(D) OK [12697.450885] raid6test: test_disks(28, 59): faila= 28(D) failb= 59(D) OK [12697.451395] raid6test: test_disks(28, 60): faila= 28(D) failb= 60(D) OK [12697.451937] raid6test: test_disks(28, 61): faila= 28(D) failb= 61(D) OK [12697.452471] raid6test: test_[12697.927007] raid6test: test_disks(29, 31): faila= 29(D) failb= 31(D) OK [12697.953332] raid6test: test_disks(29, 32): faila= 29(D) failb= 32(D) OK [12697.953869] raid6test: test_disks(29, 33): faila= 29(D) failb= 33(D) OK [12697.954398] raid6test: test_disks(29, 34): faila= 29(D) failb= 34(D) OK [12697.954922] raid6test: test_disks(29, 35): faila= 29(D) failb= 35(D) OK [12697.955400] raid6test: test_disks(29, 36): faila= 29(D) failb= 36(D) OK [12697.955951] raid6test: test_disks(29, 37): faila= 29(D) failb= 37(D) OK [12697.956466] raid6test: test_disks(29, 38): faila= 29(D) failb= 38(D) OK [12697.957014] raid6test: test_disks(29, 39): faila= 29(D) failb= 39(D) OK [12697.957547] raid6test: test_disks(29, 40): faila= 29(D) failb= 40(D) OK [12697.958020] raid6test: test_disks(29, 41): faila= 29(D) failb= 41(D) OK [12697.958462] raid6test: test_disks(29, 42): faila= 29(D) failb= 42(D) OK [12697.958933] raid6test: test_disks(29, 43): faila= 29(D) failb= 43(D) OK [12697.959397] raid6test: test_disks(29, 44): faila= 29(D) failb= 44(D) OK [12697.959866] raid6test: test_disks(29, 45): faila= 29(D) failb= 45(D) OK [12697.960311] raid6test: test_disks(29, 46): faila= 29(D) failb= 46(D) OK [1b= 49(D) OK [12698.461203] raid6test: test_disks(29, 50): faila= 29(D) failb= 50(D) OK [12698.461751] raid6test: test_disks(29, 51): faila= 29(D) failb= 51(D) OK [12698.462242] raid6test: test_disks(29, 52): faila= 29(D) failb= 52(D) OK [12698.462761] raid6test: test_disks(29, 53): faila= 29(D) failb= 53(D) OK [12698.463246] raid6test: test_disks(29, 54): faila= 29(D) failb= 54(D) OK [12698.463770] raid6test: test_disks(29, 55): faila= 29(D) failb= 55(D) OK [12698.464256] raid6test: test_disks(29, 56): faila= 29(D) failb= 56(D) OK [12698.464757] raid6test: test_disks(29, 57): faila= 29(D) failb= 57(D) OK [12698.465325] raid6test: test_disks(29, 58): faila= 29(D) failb= 58(D) OK [12698.465836] raid6test: test_disks(29, 59): faila= 29(D) failb= 59(D) OK [12698.466378] raid6test: test_disks(29, 60): faila= 29(D) failb= 60(D) OK [12698.466911] raid6test: test_disks(29, 61): faila= 29(D) failb= 61(D) OK [12698.467404] raid6test: test_disks(29, 62): faila= 29(D) failb= 62(P) OK [12698.467945] raid6test: test_disks(29, 63): faila= 29(D) failb= 63(Q) OK [12698.468466] raid6test: test_disks(30, 31): faila= 30(D) failb= 31(D) OK [12698.468968] raid6test: test_disks(30, 32): faila= 30(D) failbila= 30(D) failb= 35(D) OK [12698.969976] raid6test: test_disks(30, 36): faila= 30(D) failb= 36(D) OK [12698.970492] raid6test: test_disks(30, 37): faila= 30(D) failb= 37(D) OK [12698.971014] raid6test: test_disks(30, 38): faila= 30(D) failb= 38(D) OK [12698.971490] raid6test: test_disks(30, 39): faila= 30(D) failb= 39(D) OK 
[12698.972016] raid6test: test_disks(30, 40): faila= 30(D) failb= 40(D) OK [12698.972531] raid6test: test_disks(30, 41): faila= 30(D) failb= 41(D) OK [12698.973002] raid6test: test_disks(30, 42): faila= 30(D) failb= 42(D) OK [12698.973512] raid6test: test_disks(30, 43): faila= 30(D) failb= 43(D) OK [12698.974024] raid6test: test_disks(30, 44): faila= 30(D) failb= 44(D) OK [12698.974550] raid6test: test_disks(30, 45): faila= 30(D) failb= 45(D) OK [12698.975071] raid6test: test_disks(30, 46): faila= 30(D) failb= 46(D) OK [12698.975593] raid6test: test_disks(30, 47): faila= 30(D) failb= 47(D) OK [12698.976108] raid6test: test_disks(30, 48): faila= 30(D) failb= 48(D) OK [12698.976590] raid6test: test_disks(30, 49): faila= 30(D) failb= 49(D) OK [12698.977105] raid6test: test_disks(30, 50): faila= 30(D) failb= 50(D) OK [12698.977576] raid6test: test_disks(30, 51): faila= 30(D) failb= 51(D) OK ila= 30(D) failb= 54(D) OK [12699.478396] raid6test: test_disks(30, 55): faila= 30(D) failb= 55(D) OK [12699.478891] raid6test: test_disks(30, 56): faila= 30(D) failb= 56(D) OK [12699.479340] raid6test: test_disks(30, 57): faila= 30(D) failb= 57(D) OK [12699.479819] raid6test: test_disks(30, 58): faila= 30(D) failb= 58(D) OK [12699.480273] raid6test: test_disks(30, 59): faila= 30(D) failb= 59(D) OK [12699.480758] raid6test: test_disks(30, 60): faila= 30(D) failb= 60(D) OK [12699.481273] raid6test: test_disks(30, 61): faila= 30(D) failb= 61(D) OK [12699.481758] raid6test: test_disks(30, 62): faila= 30(D) failb= 62(P) OK [12699.482272] raid6test: test_disks(30, 63): faila= 30(D) failb= 63(Q) OK [12699.482750] raid6test: test_disks(31, 32): faila= 31(D) failb= 32(D) OK [12699.483253] raid6test: test_disks(31, 33): faila= 31(D) failb= 33(D) OK [12699.483738] raid6test: test_disks(31, 34): faila= 31(D) failb= 34(D) OK [12699.484248] raid6test: test_disks(31, 35): faila= 31(D) failb= 35(D) OK [12699.484748] raid6test: test_disks(31, 36): faila= 31(D) failb= 36(D) OK [12699.485251] raid6test: test_disks(31, 37): faid6test: test_disks(31, 40): faila= 31(D) failb= 40(D) OK [12699.986115] raid6test: test_disks(31, 41): faila= 31(D) failb= 41(D) OK [12699.986602] raid6test: test_disks(31, 42): faila= 31(D) failb= 42(D) OK [12699.987122] raid6test: test_disks(31, 43): faila= 31(D) failb= 43(D) OK [12699.987595] raid6test: test_disks(31, 44): faila= 31(D) failb= 44(D) OK [12699.988111] raid6test: test_disks(31, 45): faila= 31(D) failb= 45(D) OK [12699.988639] raid6test: test_disks(31, 46): faila= 31(D) failb= 46(D) OK [12699.989123] raid6test: test_disks(31, 47): faila= 31(D) failb= 47(D) OK [12699.989655] raid6test: test_disks(31, 48): faila= 31(D) failb= 48(D) OK [12699.990148] raid6test: test_disks(31, 49): faila= 31(D) failb= 49(D) OK [12699.990693] raid6test: test_disks(31, 50): faila= 31(D) failb= 50(D) OK [12699.991182] raid6test: test_disks(31, 51): faila= 31(D) failb= 51(D) OK [12699.991712] raid6test: test_disks(31, 52): faila= 31(D) failb= 52(D) OK [12699.992198] raid6test: test_disks(31, 53): faila= 31(D) failb= 53(D) OK [12699.992699] raid6test: test_disks(31, 54): faila= 31(D) failb= 54(D) OK [12699.993183] raid6test: test_disks(31, 55): faila= 31(D) failb= 55(D) OK [12699.993716] raid6test: test_disks(31, 56): faila= 31(D) failb= 56(D) OK [12699.994205] raid6test: test_disks(31, 57): faila= 31(D) failb= 57(D) OK [12699.994682] raid6test: test_disks(31, 58): failisks(31, 61): faila= 31(D) failb= 61(D) OK [12700.495527] raid6test: test_disks(31, 62): faila= 31(D) failb= 62(P) OK [12700.496083] raid6test: 
test_disks(31, 63): faila= 31(D) failb= 63(Q) OK [12700.496598] raid6test: test_disks(32, 33): faila= 32(D) failb= 33(D) OK [12700.497120] raid6test: test_disks(32, 34): faila= 32(D) failb= 34(D) OK [12700.497651] raid6test: test_disks(32, 35): faila= 32(D) failb= 35(D) OK [12700.498146] raid6test: test_disks(32, 36): faila= 32(D) failb= 36(D) OK [12700.498696] raid6test: test_disks(32, 37): faila= 32(D) failb= 37(D) OK [12700.499182] raid6test: test_disks(32, 38): faila= 32(D) failb= 38(D) OK [12700.499714] raid6test: test_disks(32, 39): faila= 32(D) failb= 39(D) OK [12700.500204] raid6test: test_disks(32, 40): faila= 32(D) failb= 40(D) OK [12700.500745] raid6test: test_disks(32, 41): faila= 32(D) failb= 41(D) OK [12700.501236] raid6test: test_disks(32, 42): faila= 32(D) failb= 42(D) OK [12700.501779] raid6test: test_disks(32, 43): faila= 32(D) failb= 43(D) OK [12700.502269] raid6test: test_disks(32, 44): faila= 32(D) failb= 44(D) OK [12700.502799] raid6test: test_disks(32, 45): faila= 32(D) failb= 45(D) OK [12700.503293] raid6test: test_diaid6test: test_disks(32, 49): faila= 32(D) failb= 49(D) OK [12701.004141] raid6test: test_disks(32, 50): faila= 32(D) failb= 50(D) OK [12701.004612] raid6test: test_disks(32, 51): faila= 32(D) failb= 51(D) OK [12701.005134] raid6test: test_disks(32, 52): faila= 32(D) failb= 52(D) OK [12701.005669] raid6test: test_disks(32, 53): faila= 32(D) failb= 53(D) OK [12701.006182] raid6test: test_disks(32, 54): faila= 32(D) failb= 54(D) OK [12701.006725] raid6test: test_disks(32, 55): faila= 32(D) failb= 55(D) OK [12701.007214] raid6test: test_disks(32, 56): faila= 32(D) failb= 56(D) OK [12701.007713] raid6test: test_disks(32, 57): faila= 32(D) failb= 57(D) OK [12701.008200] raid6test: test_disks(32, 58): faila= 32(D) failb= 58(D) OK [12701.008727] raid6test: test_disks(32, 59): faila= 32(D) failb= 59(D) OK [12701.009212] raid6test: test_disks(32, 60): faila= 32(D) failb= 60(D) OK [12701.009747] raid6test: test_disks(32, 61): faila= 32(D) failb= 61(D) OK [12701.010230] raid6test: test_disks(32, 62): faila= 32(D) failb= 62(P) OK [12701.010774] raid6test: test_disks(32, 63): faila= 32(D) failb= 63(Q) OK [12701.011263] raid6test: test_disks(33, 34): faila= 33(D) failb= 34(D) OK [12701.011803] raid6test: test_disks(33, 35): faila= 33(D) failb= 35(D) OK [12701.012287] raid6[12701.487046] raid6test: test_disks(33, 39): faila= 33(D) failb= 39(D) OK [12701.513137] raid6test: test_disks(33, 40): faila= 33(D) failb= 40(D) OK [12701.513673] raid6test: test_disks(33, 41): faila= 33(D) failb= 41(D) OK [12701.514160] raid6test: test_disks(33, 42): faila= 33(D) failb= 42(D) OK [12701.514706] raid6test: test_disks(33, 43): faila= 33(D) failb= 43(D) OK [12701.515200] raid6test: test_disks(33, 44): faila= 33(D) failb= 44(D) OK [12701.515737] raid6test: test_disks(33, 45): faila= 33(D) failb= 45(D) OK [12701.516248] raid6test: test_disks(33, 46): faila= 33(D) failb= 46(D) OK [12701.516797] raid6test: test_disks(33, 47): faila= 33(D) failb= 47(D) OK [12701.517267] raid6test: test_disks(33, 48): faila= 33(D) failb= 48(D) OK [12701.517820] raid6test: test_disks(33, 49): faila= 33(D) failb= 49(D) OK [12701.518327] raid6test: test_disks(33, 50): faila= 33(D) failb= 50(D) OK [12701.518810] raid6test: test_disks(33, 51): faila= 33(D) failb= 51(D) OK [12701.519311] raid6test: test_disks(33, 52): faila= 33(D) failb= 52(D) OK [12701.519819] raid6test: test_disks(33, 53): faila= 33(D) failb= 53(D) OK [12701.520331] raid6test: test_disks(33, 54): faila= 33(D) failb= 54(D) OK [12701.520823] 
raid6test: tes[12701.995713] raid6test: test_disks(33, 58): faila= 33(D) failb= 58(D) OK [12702.021788] raid6test: test_disks(33, 59): faila= 33(D) failb= 59(D) OK [12702.022281] raid6test: test_disks(33, 60): faila= 33(D) failb= 60(D) OK [12702.022813] raid6test: test_disks(33, 61): faila= 33(D) failb= 61(D) OK [12702.023308] raid6test: test_disks(33, 62): faila= 33(D) failb= 62(P) OK [12702.023829] raid6test: test_disks(33, 63): faila= 33(D) failb= 63(Q) OK [12702.024322] raid6test: test_disks(34, 35): faila= 34(D) failb= 35(D) OK [12702.024838] raid6test: test_disks(34, 36): faila= 34(D) failb= 36(D) OK [12702.025332] raid6test: test_disks(34, 37): faila= 34(D) failb= 37(D) OK [12702.025850] raid6test: test_disks(34, 38): faila= 34(D) failb= 38(D) OK [12702.026348] raid6test: test_disks(34, 39): faila= 34(D) failb= 39(D) OK [12702.026868] raid6test: test_disks(34, 40): faila= 34(D) failb= 40(D) OK [12702.027355] raid6test: test_disks(34, 41): faila= 34(D) failb= 41(D) OK [12702.027868] raid6test: test_disks(34, 42): faila= 34(D) failb= 42(D) OK [12702.028352] raid6test: test_disks(34, 43): faila= 34(D) failb= 43(D) OK ila= 34(D) failb= 46(D) OK [12702.529161] raid6test: test_disks(34, 47): faila= 34(D) failb= 47(D) OK [12702.529697] raid6test: test_disks(34, 48): faila= 34(D) failb= 48(D) OK [12702.530195] raid6test: test_disks(34, 49): faila= 34(D) failb= 49(D) OK [12702.530751] raid6test: test_disks(34, 50): faila= 34(D) failb= 50(D) OK [12702.531254] raid6test: test_disks(34, 51): faila= 34(D) failb= 51(D) OK [12702.531793] raid6test: test_disks(34, 52): faila= 34(D) failb= 52(D) OK [12702.532279] raid6test: test_disks(34, 53): faila= 34(D) failb= 53(D) OK [12702.532821] raid6test: test_disks(34, 54): faila= 34(D) failb= 54(D) OK [12702.533329] raid6test: test_disks(34, 55): faila= 34(D) failb= 55(D) OK [12702.533849] raid6test: test_disks(34, 56): faila= 34(D) failb= 56(D) OK [12702.534352] raid6test: test_disks(34, 57): faila= 34(D) failb= 57(D) OK [12702.534866] raid6test: test_disks(34, 58): faila= 34(D) failb= 58(D) OK [12702.535350] raid6test: test_disks(34, 59): faila= 34(D) failb= 59(D) OK [12702.535864] raid6test: test_disks(34, 60): faila= 34(D) failb= 60(D) OK [12702.536364] raid6test: test_disks(34, 61): faila= 34(D) failb= 61(D) OK [12702.536883] raid6test: test_disks(34, 62): faiisks(35, 37): faila= 35(D) failb= 37(D) OK [12703.037969] raid6test: test_disks(35, 38): faila= 35(D) failb= 38(D) OK [12703.038465] raid6test: test_disks(35, 39): faila= 35(D) failb= 39(D) OK [12703.038967] raid6test: test_disks(35, 40): faila= 35(D) failb= 40(D) OK [12703.039513] raid6test: test_disks(35, 41): faila= 35(D) failb= 41(D) OK [12703.040015] raid6test: test_disks(35, 42): faila= 35(D) failb= 42(D) OK [12703.040493] raid6test: test_disks(35, 43): faila= 35(D) failb= 43(D) OK [12703.040985] raid6test: test_disks(35, 44): faila= 35(D) failb= 44(D) OK [12703.041493] raid6test: test_disks(35, 45): faila= 35(D) failb= 45(D) OK [12703.042026] raid6test: test_disks(35, 46): faila= 35(D) failb= 46(D) OK [12703.042563] raid6test: test_disks(35, 47): faila= 35(D) failb= 47(D) OK [12703.043089] raid6test: test_disks(35, 48): faila= 35(D) failb= 48(D) OK [12703.043629] raid6test: test_disks(35, 49): faila= 35(D) failb= 49(D) OK [12703.044182] raid6test: test_disks(35, 50): faila= 35(D) failb= 50(D) OK [12703.044720] raid6test: test_disks(35, 51): faila= 35(D) failb= 51(D) OK [12703.045235] raid6test: test_disks(35, 52): faila= 35(D) failb= 52(D) OK [12703.045755] raid6test: test_disks(35, 53): 
faila= 35(D) failb= 53(D) OK [12703.046223] raid6test: test_disks(35, 54): faila= 35(D) failila= 35(D) failb= 57(D) OK [12703.547308] raid6test: test_disks(35, 58): faila= 35(D) failb= 58(D) OK [12703.547901] raid6test: test_disks(35, 59): faila= 35(D) failb= 59(D) OK [12703.548381] raid6test: test_disks(35, 60): faila= 35(D) failb= 60(D) OK [12703.548923] raid6test: test_disks(35, 61): faila= 35(D) failb= 61(D) OK [12703.549407] raid6test: test_disks(35, 62): faila= 35(D) failb= 62(P) OK [12703.550057] raid6test: test_disks(35, 63): faila= 35(D) failb= 63(Q) OK [12703.550594] raid6test: test_disks(36, 37): faila= 36(D) failb= 37(D) OK [12703.551094] raid6test: test_disks(36, 38): faila= 36(D) failb= 38(D) OK [12703.551619] raid6test: test_disks(36, 39): faila= 36(D) failb= 39(D) OK [12703.552117] raid6test: test_disks(36, 40): faila= 36(D) failb= 40(D) OK [12703.552577] raid6test: test_disks(36, 41): faila= 36(D) failb= 41(D) OK [12703.553071] raid6test: test_disks(36, 42): faila= 36(D) failb= 42(D) OK [12703.553559] raid6test: test_disks(36, 43): faila= 36(D) failb= 43(D) OK [12703.554040] raid6test: test_disks(36, 44): aid6test: test_disks(36, 47): faila= 36(D) failb= 47(D) OK [12704.054989] raid6test: test_disks(36, 48): faila= 36(D) failb= 48(D) OK [12704.055501] raid6test: test_disks(36, 49): faila= 36(D) failb= 49(D) OK [12704.056053] raid6test: test_disks(36, 50): faila= 36(D) failb= 50(D) OK [12704.056584] raid6test: test_disks(36, 51): faila= 36(D) failb= 51(D) OK [12704.057123] raid6test: test_disks(36, 52): faila= 36(D) failb= 52(D) OK [12704.057674] raid6test: test_disks(36, 53): faila= 36(D) failb= 53(D) OK [12704.058186] raid6test: test_disks(36, 54): faila= 36(D) failb= 54(D) OK [12704.058665] raid6test: test_disks(36, 55): faila= 36(D) failb= 55(D) OK [12704.059179] raid6test: test_disks(36, 56): faila= 36(D) failb= 56(D) OK [12704.059734] raid6test: test_disks(36, 57): faila= 36(D) failb= 57(D) OK [12704.060240] raid6test: test_disks(36, 58): faila= 36(D) failb= 58(D) OK [12704.060810] raid6test: test_disks(36, 59): faila= 36(D) failb= 59(D) OK [12704.061322] raid6test: test_disks(36, 60): faila= 36(D) failb= 60(D) OK [12704.061869] raid6test: test_disks(36, 61): faila= 36(D) failb= 61(D) OK [12704.062799] raid6test: test_disks(36, 62): faila= 36(D) failb= 62(P) OK [12704.063323] raid6test: test_disks(36, 63):aid6test: test_disks(37, 40): faila= 37(D) failb= 40(D) OK [12704.564291] raid6test: test_disks(37, 41): faila= 37(D) failb= 41(D) OK [12704.564865] raid6test: test_disks(37, 42): faila= 37(D) failb= 42(D) OK [12704.565373] raid6test: test_disks(37, 43): faila= 37(D) failb= 43(D) OK [12704.565899] raid6test: test_disks(37, 44): faila= 37(D) failb= 44(D) OK [12704.566405] raid6test: test_disks(37, 45): faila= 37(D) failb= 45(D) OK [12704.566932] raid6test: test_disks(37, 46): faila= 37(D) failb= 46(D) OK [12704.567449] raid6test: test_disks(37, 47): faila= 37(D) failb= 47(D) OK [12704.567969] raid6test: test_disks(37, 48): faila= 37(D) failb= 48(D) OK [12704.568509] raid6test: test_disks(37, 49): faila= 37(D) failb= 49(D) OK [12704.569049] raid6test: test_disks(37, 50): faila= 37(D) failb= 50(D) OK [12704.569586] raid6test: test_disks(37, 51): faila= 37(D) failb= 51(D) OK [12704.570118] raid6test: test_disks(37, 52): faila= 37(D) failb= 52(D) OK [12704.570682] raid6test: test_disks(37, 53): faila= 37(D) failb= 53(D) OK [12704.571197] raid6test: test_disks(37, 54): faila= 37(D) failb= 54(D) OK [12704.571753] raid6test: test_disks(37, 55): faila= 37(D) failb= 55(D) OK 
[12704.572261] raid6test: test_disks(37, 56): faila= 37(D) failb= 56(D) OK [12704.572822] raid6[12705.051358] raid6test: test_disks(37, 60): faila= 37(D) failb= 60(D) OK [12705.073793] raid6test: test_disks(37, 61): faila= 37(D) failb= 61(D) OK [12705.074284] raid6test: test_disks(37, 62): faila= 37(D) failb= 62(P) OK [12705.074841] raid6test: test_disks(37, 63): faila= 37(D) failb= 63(Q) OK [12705.075315] raid6test: test_disks(38, 39): faila= 38(D) failb= 39(D) OK [12705.075888] raid6test: test_disks(38, 40): faila= 38(D) failb= 40(D) OK [12705.076395] raid6test: test_disks(38, 41): faila= 38(D) failb= 41(D) OK [12705.076926] raid6test: test_disks(38, 42): faila= 38(D) failb= 42(D) OK [12705.077447] raid6test: test_disks(38, 43): faila= 38(D) failb= 43(D) OK [12705.077997] raid6test: test_disks(38, 44): faila= 38(D) failb= 44(D) OK [12705.078501] raid6test: test_disks(38, 45): faila= 38(D) failb= 45(D) OK [12705.079053] raid6test: test_disks(38, 46): faila= 38(D) failb= 46(D) OK [12705.079603] raid6test: test_disks(38, 47): faila= 38(D) failb= 47(D) OK [12705.080160] raid6test: test_disks(38, 48): faila= 38(D) failb= 48(D) OK [12705.080748] raid6test: test_disks(38, 49): faila= 38(D) failb= 49(D) OK [12705.081259] raid6test: test_disks(38, 50): faila= 38(D) failb= 50(D) OK [ila= 38(D) failb= 53(D) OK [12705.582212] raid6test: test_disks(38, 54): faila= 38(D) failb= 54(D) OK [12705.582739] raid6test: test_disks(38, 55): faila= 38(D) failb= 55(D) OK [12705.583232] raid6test: test_disks(38, 56): faila= 38(D) failb= 56(D) OK [12705.583798] raid6test: test_disks(38, 57): faila= 38(D) failb= 57(D) OK [12705.584328] raid6test: test_disks(38, 58): faila= 38(D) failb= 58(D) OK [12705.584858] raid6test: test_disks(38, 59): faila= 38(D) failb= 59(D) OK [12705.585345] raid6test: test_disks(38, 60): faila= 38(D) failb= 60(D) OK [12705.585875] raid6test: test_disks(38, 61): faila= 38(D) failb= 61(D) OK [12705.586439] raid6test: test_disks(38, 62): faila= 38(D) failb= 62(P) OK [12705.586969] raid6test: test_disks(38, 63): faila= 38(D) failb= 63(Q) OK [12705.587473] raid6test: test_disks(39, 40): faila= 39(D) failb= 40(D) OK [12705.588021] raid6test: test_disks(39, 41): faila= 39(D) failb= 41(D) OK [12705.588630] raid6test: test_disks(39, 42): faila= 39(D) failb= 42(D) OK [12705.589166] raid6test: test_disks(39, 43): faila= 39(D) failb= 43(D) OK [12705.589739] raid6test: test_disks(39, 44): faila= 39(D) failb= 44(D) OK [12705.590211] raid6test: test_disks(39, 45): faila= 39(D) failb= 45(D) OK ila= 39(D) failb= 48(D) OK [12706.091223] raid6test: test_disks(39, 49): faila= 39(D) failb= 49(D) OK [12706.091731] raid6test: test_disks(39, 50): faila= 39(D) failb= 50(D) OK [12706.092204] raid6test: test_disks(39, 51): faila= 39(D) failb= 51(D) OK [12706.092748] raid6test: test_disks(39, 52): faila= 39(D) failb= 52(D) OK [12706.093276] raid6test: test_disks(39, 53): faila= 39(D) failb= 53(D) OK [12706.093838] raid6test: test_disks(39, 54): faila= 39(D) failb= 54(D) OK [12706.094332] raid6test: test_disks(39, 55): faila= 39(D) failb= 55(D) OK [12706.094889] raid6test: test_disks(39, 56): faila= 39(D) failb= 56(D) OK [12706.095356] raid6test: test_disks(39, 57): faila= 39(D) failb= 57(D) OK [12706.095882] raid6test: test_disks(39, 58): faila= 39(D) failb= 58(D) OK [12706.096396] raid6test: test_disks(39, 59): faila= 39(D) failb= 59(D) OK [12706.096884] raid6test: test_disks(39, 60): faila= 39(D) failb= 60(D) OK [12706.097387] raid6test: test_disks(39, 61): faila= 39(D) failb= 61(D) OK [12706.097873] raid6test: 
test_disks(39, 62): faila= 39(D) failb= 62(P) OK [12706.098377] raid6test: test_disks(39, 63): faila= 39(D) failb= 63(Q) OK [12706.098877] raid6test: test_disks(40, 41): faila= 40(D) failb= 41(D) OK [12706.099384] raid6test: test_disks(40, 42): failaisks(40, 45): faila= 40(D) failb= 45(D) OK [12706.600283] raid6test: test_disks(40, 46): faila= 40(D) failb= 46(D) OK [12706.600807] raid6test: test_disks(40, 47): faila= 40(D) failb= 47(D) OK [12706.601278] raid6test: test_disks(40, 48): faila= 40(D) failb= 48(D) OK [12706.601813] raid6test: test_disks(40, 49): faila= 40(D) failb= 49(D) OK [12706.602283] raid6test: test_disks(40, 50): faila= 40(D) failb= 50(D) OK [12706.602801] raid6test: test_disks(40, 51): faila= 40(D) failb= 51(D) OK [12706.603309] raid6test: test_disks(40, 52): faila= 40(D) failb= 52(D) OK [12706.603784] raid6test: test_disks(40, 53): faila= 40(D) failb= 53(D) OK [12706.604296] raid6test: test_disks(40, 54): faila= 40(D) failb= 54(D) OK [12706.604850] raid6test: test_disks(40, 55): faila= 40(D) failb= 55(D) OK [12706.605352] raid6test: test_disks(40, 56): faila= 40(D) failb= 56(D) OK [12706.605906] raid6test: test_disks(40, 57): faila= 40(D) failb= 57(D) OK [12706.606412] raid6test: test_disks(40, 58): faila= 40(D) failb= 58(D) OK [12706.606947] raid6test: test_disks(40, 59): faila= 40(D) failb= 59(D) OK [12706.607461] raid6test: test_disks(40, 60): faila= 40(D) isks(40, 63): faila= 40(D) failb= 63(Q) OK [12707.108416] raid6test: test_disks(41, 42): faila= 41(D) failb= 42(D) OK [12707.108900] raid6test: test_disks(41, 43): faila= 41(D) failb= 43(D) OK [12707.109399] raid6test: test_disks(41, 44): faila= 41(D) failb= 44(D) OK [12707.109882] raid6test: test_disks(41, 45): faila= 41(D) failb= 45(D) OK [12707.110414] raid6test: test_disks(41, 46): faila= 41(D) failb= 46(D) OK [12707.110894] raid6test: test_disks(41, 47): faila= 41(D) failb= 47(D) OK [12707.111388] raid6test: test_disks(41, 48): faila= 41(D) failb= 48(D) OK [12707.111904] raid6test: test_disks(41, 49): faila= 41(D) failb= 49(D) OK [12707.112475] raid6test: test_disks(41, 50): faila= 41(D) failb= 50(D) OK [12707.113010] raid6test: test_disks(41, 51): faila= 41(D) failb= 51(D) OK [12707.113534] raid6test: test_disks(41, 52): faila= 41(D) failb= 52(D) OK [12707.114009] raid6test: test_disks(41, 53): faila= 41(D) failb= 53(D) OK [12707.114459] raid6test: test_disks(41, 54): faila= 41(D) failb= 54(D) OK [12707.114971] raid6test: test_disks(41, 55): faila= 41(D) failb= 55(D) OK [12707.115421] raid6test: test_disks(41, 56): faila= 41(D) failb= 56(D) OK [12707.115904] raid6test: test_diaid6test: test_disks(41, 60): faila= 41(D) failb= 60(D) OK [12707.616992] raid6test: test_disks(41, 61): faila= 41(D) failb= 61(D) OK [12707.617507] raid6test: test_disks(41, 62): faila= 41(D) failb= 62(P) OK [12707.618005] raid6test: test_disks(41, 63): faila= 41(D) failb= 63(Q) OK [12707.618560] raid6test: test_disks(42, 43): faila= 42(D) failb= 43(D) OK [12707.619083] raid6test: test_disks(42, 44): faila= 42(D) failb= 44(D) OK [12707.619580] raid6test: test_disks(42, 45): faila= 42(D) failb= 45(D) OK [12707.620105] raid6test: test_disks(42, 46): faila= 42(D) failb= 46(D) OK [12707.620628] raid6test: test_disks(42, 47): faila= 42(D) failb= 47(D) OK [12707.621151] raid6test: test_disks(42, 48): faila= 42(D) failb= 48(D) OK [12707.621641] raid6test: test_disks(42, 49): faila= 42(D) failb= 49(D) OK [12707.622167] raid6test: test_disks(42, 50): faila= 42(D) failb= 50(D) OK [12707.622665] raid6test: test_disks(42, 51): faila= 42(D) failb= 
51(D) OK [12707.623177] raid6test: test_disks(42, 52): faila= 42(D) failb= 52(D) OK [12707.623644] raid6test: test_disks(42, 53): faila= 42(D) failb= 53(D) OK [12707.624137] raid6test: test_disks(42, 54): faila= 42(D) failb= 54(D) OK [12707.624635] rai[12708.099332] raid6test: test_disks(42, 58): faila= 42(D) failb= 58(D) OK [12708.125494] raid6test: test_disks(42, 59): faila= 42(D) failb= 59(D) OK [12708.126019] raid6test: test_disks(42, 60): faila= 42(D) failb= 60(D) OK [12708.126478] raid6test: test_disks(42, 61): faila= 42(D) failb= 61(D) OK [12708.127003] raid6test: test_disks(42, 62): faila= 42(D) failb= 62(P) OK [12708.127483] raid6test: test_disks(42, 63): faila= 42(D) failb= 63(Q) OK [12708.128006] raid6test: test_disks(43, 44): faila= 43(D) failb= 44(D) OK [12708.128473] raid6test: test_disks(43, 45): faila= 43(D) failb= 45(D) OK [12708.128986] raid6test: test_disks(43, 46): faila= 43(D) failb= 46(D) OK [12708.129463] raid6test: test_disks(43, 47): faila= 43(D) failb= 47(D) OK [12708.129982] raid6test: test_disks(43, 48): faila= 43(D) failb= 48(D) OK [12708.130486] raid6test: test_disks(43, 49): faila= 43(D) failb= 49(D) OK [12708.130970] raid6test: test_disks(43, 50): faila= 43(D) failb= 50(D) OK [12708.131440] raid6test: test_disks(43, 51): faila= 43(D) failb= 51(D) OK [12708.131926] raid6test: test_disks(43, 52): faila= 43(D) failb= 52(D) OK [12708.132421] raid6test: t[12708.607283] raid6test: test_disks(43, 56): faila= 43(D) failb= 56(D) OK [12708.633331] raid6test: test_disks(43, 57): faila= 43(D) failb= 57(D) OK [12708.633846] raid6test: test_disks(43, 58): faila= 43(D) failb= 58(D) OK [12708.634304] raid6test: test_disks(43, 59): faila= 43(D) failb= 59(D) OK [12708.634814] raid6test: test_disks(43, 60): faila= 43(D) failb= 60(D) OK [12708.635270] raid6test: test_disks(43, 61): faila= 43(D) failb= 61(D) OK [12708.635780] raid6test: test_disks(43, 62): faila= 43(D) failb= 62(P) OK [12708.636249] raid6test: test_disks(43, 63): faila= 43(D) failb= 63(Q) OK [12708.636764] raid6test: test_disks(44, 45): faila= 44(D) failb= 45(D) OK [12708.637223] raid6test: test_disks(44, 46): faila= 44(D) failb= 46(D) OK [12708.637744] raid6test: test_disks(44, 47): faila= 44(D) failb= 47(D) OK [12708.638198] raid6test: test_disks(44, 48): faila= 44(D) failb= 48(D) OK [12708.638708] raid6test: test_disks(44, 49): faila= 44(D) failb= 49(D) OK [12708.639168] raid6test: test_disks(44, 50): faila= 44(D) failb= 50(D) OK [12708.639647] raid6test: test_disks(44, 51): faila= 44(D) failb= 51(D) OK [12708.640132] raid6test: test_disks(44, 52): faila= 44(D) failb= 52(D) OK [1b= 55(D) OK [12709.141103] raid6test: test_disks(44, 56): faila= 44(D) failb= 56(D) OK [12709.141596] raid6test: test_disks(44, 57): faila= 44(D) failb= 57(D) OK [12709.142095] raid6test: test_disks(44, 58): faila= 44(D) failb= 58(D) OK [12709.142583] raid6test: test_disks(44, 59): faila= 44(D) failb= 59(D) OK [12709.143071] raid6test: test_disks(44, 60): faila= 44(D) failb= 60(D) OK [12709.143543] raid6test: test_disks(44, 61): faila= 44(D) failb= 61(D) OK [12709.144033] raid6test: test_disks(44, 62): faila= 44(D) failb= 62(P) OK [12709.144519] raid6test: test_disks(44, 63): faila= 44(D) failb= 63(Q) OK [12709.145061] raid6test: test_disks(45, 46): faila= 45(D) failb= 46(D) OK [12709.145542] raid6test: test_disks(45, 47): faila= 45(D) failb= 47(D) OK [12709.146039] raid6test: test_disks(45, 48): faila= 45(D) failb= 48(D) OK [12709.146516] raid6test: test_disks(45, 49): faila= 45(D) failb= 49(D) OK [12709.147009] raid6test: 
test_disks(45, 50): faila= 45(D) failb= 50(D) OK [12709.147471] raid6test: test_disks(45, 51): faila= 45(D) failb= 51(D) OK [12709.147966] raid6test: test_disks(45, 52): faila= 45(D) failb= 52(D) OK [12709.148473] raid6test: test_disks(45, 53): faila= 45(D) failb= 53(D) OK [12709.149001] raid6test: test_disks(45, 54): faila= 45(D) failb= ila= 45(D) failb= 57(D) OK [12709.650058] raid6test: test_disks(45, 58): faila= 45(D) failb= 58(D) OK [12709.650519] raid6test: test_disks(45, 59): faila= 45(D) failb= 59(D) OK [12709.651034] raid6test: test_disks(45, 60): faila= 45(D) failb= 60(D) OK [12709.651524] raid6test: test_disks(45, 61): faila= 45(D) failb= 61(D) OK [12709.652033] raid6test: test_disks(45, 62): faila= 45(D) failb= 62(P) OK [12709.652533] raid6test: test_disks(45, 63): faila= 45(D) failb= 63(Q) OK [12709.653011] raid6test: test_disks(46, 47): faila= 46(D) failb= 47(D) OK [12709.653503] raid6test: test_disks(46, 48): faila= 46(D) failb= 48(D) OK [12709.654017] raid6test: test_disks(46, 49): faila= 46(D) failb= 49(D) OK [12709.654510] raid6test: test_disks(46, 50): faila= 46(D) failb= 50(D) OK [12709.655012] raid6test: test_disks(46, 51): faila= 46(D) failb= 51(D) OK [12709.655497] raid6test: test_disks(46, 52): faila= 46(D) failb= 52(D) OK [12709.656009] raid6test: test_disks(46, 53): faila= 46(D) failb= 53(D) OK [12709.656509] raid6test: test_disks(46, 54): faila= 46(D) failb= 54(D) OK [12709.657026] raid6test: test_disks(46, 55): faila= 46(D) failb= 55(D) OK [12709.657518] raid6test: test_disks(46, 56): faila= 46(D) failb= 56(D) OK [12709.658027] raid6test: test_disks(46, 57): faila= 46(D) failb= 57(D) OK [1b= 60(D) OK [12710.158926] raid6test: test_disks(46, 61): faila= 46(D) failb= 61(D) OK [12710.159385] raid6test: test_disks(46, 62): faila= 46(D) failb= 62(P) OK [12710.159931] raid6test: test_disks(46, 63): faila= 46(D) failb= 63(Q) OK [12710.160497] raid6test: test_disks(47, 48): faila= 47(D) failb= 48(D) OK [12710.161131] raid6test: test_disks(47, 49): faila= 47(D) failb= 49(D) OK [12710.162026] raid6test: test_disks(47, 50): faila= 47(D) failb= 50(D) OK [12710.162510] raid6test: test_disks(47, 51): faila= 47(D) failb= 51(D) OK [12710.163065] raid6test: test_disks(47, 52): faila= 47(D) failb= 52(D) OK [12710.163560] raid6test: test_disks(47, 53): faila= 47(D) failb= 53(D) OK [12710.164090] raid6test: test_disks(47, 54): faila= 47(D) failb= 54(D) OK [12710.164590] raid6test: test_disks(47, 55): faila= 47(D) failb= 55(D) OK [12710.165084] raid6test: test_disks(47, 56): faila= 47(D) failb= 56(D) OK [12710.165544] raid6test: test_disks(47, 57): faila= 47(D) failb= 57(D) OK [12710.166056] raid6test: test_disks(47, 58): faila= 47(D) failb= 58(D) OK [12710.166521] raid6test: test_disks(47, 59): faila= 47(D) failila= 47(D) failb= 62(P) OK [12710.667378] raid6test: test_disks(47, 63): faila= 47(D) failb= 63(Q) OK [12710.667883] raid6test: test_disks(48, 49): faila= 48(D) failb= 49(D) OK [12710.668372] raid6test: test_disks(48, 50): faila= 48(D) failb= 50(D) OK [12710.668934] raid6test: test_disks(48, 51): faila= 48(D) failb= 51(D) OK [12710.669411] raid6test: test_disks(48, 52): faila= 48(D) failb= 52(D) OK [12710.669937] raid6test: test_disks(48, 53): faila= 48(D) failb= 53(D) OK [12710.670439] raid6test: test_disks(48, 54): faila= 48(D) failb= 54(D) OK [12710.670996] raid6test: test_disks(48, 55): faila= 48(D) failb= 55(D) OK [12710.671531] raid6test: test_disks(48, 56): faila= 48(D) failb= 56(D) OK [12710.672085] raid6test: test_disks(48, 57): faila= 48(D) failb= 57(D) OK 
[12710.672597] raid6test: test_disks(48, 58): faila= 48(D) failb= 58(D) OK [12710.673102] raid6test: test_disks(48, 59): faila= 48(D) failb= 59(D) OK [12710.673601] raid6test: test_disks(48, 60): faila= 48(D) failb= 60(D) OK [12710.674094] raid6test: test_disks(48, 61): faila= 48(D) failb= 61(D) OK [12710.674611] raid6test: test_disks(48, 62): faila= 48(D) failb= 62(P) OK [12710.675153] raid6test: test_disks(48, 63): faila= 48(D) failb= 63(Q) OK ila= 49(D) failb= 52(D) OK [12711.176061] raid6test: test_disks(49, 53): faila= 49(D) failb= 53(D) OK [12711.176563] raid6test: test_disks(49, 54): faila= 49(D) failb= 54(D) OK [12711.177072] raid6test: test_disks(49, 55): faila= 49(D) failb= 55(D) OK [12711.177559] raid6test: test_disks(49, 56): faila= 49(D) failb= 56(D) OK [12711.178064] raid6test: test_disks(49, 57): faila= 49(D) failb= 57(D) OK [12711.178556] raid6test: test_disks(49, 58): faila= 49(D) failb= 58(D) OK [12711.179072] raid6test: test_disks(49, 59): faila= 49(D) failb= 59(D) OK [12711.179568] raid6test: test_disks(49, 60): faila= 49(D) failb= 60(D) OK [12711.180077] raid6test: test_disks(49, 61): faila= 49(D) failb= 61(D) OK [12711.180566] raid6test: test_disks(49, 62): faila= 49(D) failb= 62(P) OK [12711.181095] raid6test: test_disks(49, 63): faila= 49(D) failb= 63(Q) OK [12711.181613] raid6test: test_disks(50, 51): faila= 50(D) failb= 51(D) OK [12711.182093] raid6test: test_disks(50, 52): faila= 50(D) failb= 52(D) OK [12711.182606] raid6test: test_disks(50, 53): faila= 50(D) failb= 53(D) OK [12711.183085] raid6test: test_disks(50, 54): faila= 50(D) failb= 54(D) OK [12711.183575] raid6test: test_disks(50, 55): faila= 50(D) failb= 55(D) OK [12711.184110] raid6test: test_disks(50, 56): faila=isks(50, 59): faila= 50(D) failb= 59(D) OK [12711.684946] raid6test: test_disks(50, 60): faila= 50(D) failb= 60(D) OK [12711.685440] raid6test: test_disks(50, 61): faila= 50(D) failb= 61(D) OK [12711.685977] raid6test: test_disks(50, 62): faila= 50(D) failb= 62(P) OK [12711.686496] raid6test: test_disks(50, 63): faila= 50(D) failb= 63(Q) OK [12711.687012] raid6test: test_disks(51, 52): faila= 51(D) failb= 52(D) OK [12711.688252] raid6test: test_disks(51, 53): faila= 51(D) failb= 53(D) OK [12711.688747] raid6test: test_disks(51, 54): faila= 51(D) failb= 54(D) OK [12711.689234] raid6test: test_disks(51, 55): faila= 51(D) failb= 55(D) OK [12711.689735] raid6test: test_disks(51, 56): faila= 51(D) failb= 56(D) OK [12711.690230] raid6test: test_disks(51, 57): faila= 51(D) failb= 57(D) OK [12711.690730] raid6test: test_disks(51, 58): faila= 51(D) failb= 58(D) OK [12711.691223] raid6test: test_disks(51, 59): faila= 51(D) failb= 59(D) OK [12711.691758] raid6test: test_disks(51, 60): faila= 51(D) failb= 60(D) OK [12711.692248] raid6test: test_disks(51, 61): faila= 51(D) failb= 61(D) OK [12711.692789] raid6test: test_disks(51, 62): faila= 51(D) failb= 62(P) OK [12711.693288] raid6test: test_diaid6test: test_disks(52, 55): faila= 52(D) failb= 55(D) OK [12712.194155] raid6test: test_disks(52, 56): faila= 52(D) failb= 56(D) OK [12712.194662] raid6test: test_disks(52, 57): faila= 52(D) failb= 57(D) OK [12712.195181] raid6test: test_disks(52, 58): faila= 52(D) failb= 58(D) OK [12712.195739] raid6test: test_disks(52, 59): faila= 52(D) failb= 59(D) OK [12712.196255] raid6test: test_disks(52, 60): faila= 52(D) failb= 60(D) OK [12712.196806] raid6test: test_disks(52, 61): faila= 52(D) failb= 61(D) OK [12712.197294] raid6test: test_disks(52, 62): faila= 52(D) failb= 62(P) OK [12712.197844] raid6test: test_disks(52, 
63): faila= 52(D) failb= 63(Q) OK [12712.198327] raid6test: test_disks(53, 54): faila= 53(D) failb= 54(D) OK [12712.198861] raid6test: test_disks(53, 55): faila= 53(D) failb= 55(D) OK [12712.199351] raid6test: test_disks(53, 56): faila= 53(D) failb= 56(D) OK [12712.199893] raid6test: test_disks(53, 57): faila= 53(D) failb= 57(D) OK [12712.200381] raid6test: test_disks(53, 58): faila= 53(D) failb= 58(D) OK [12712.200917] raid6test: test_disks(53, 59): faila= 53(D) failb= 59(D) OK [12712.201402] raid6test: test_disks(53, 60)aid6test: test_disks(53, 63): faila= 53(D) failb= 63(Q) OK [12712.702264] raid6test: test_disks(54, 55): faila= 54(D) failb= 55(D) OK [12712.702808] raid6test: test_disks(54, 56): faila= 54(D) failb= 56(D) OK [12712.703299] raid6test: test_disks(54, 57): faila= 54(D) failb= 57(D) OK [12712.703831] raid6test: test_disks(54, 58): faila= 54(D) failb= 58(D) OK [12712.704321] raid6test: test_disks(54, 59): faila= 54(D) failb= 59(D) OK [12712.704864] raid6test: test_disks(54, 60): faila= 54(D) failb= 60(D) OK [12712.705351] raid6test: test_disks(54, 61): faila= 54(D) failb= 61(D) OK [12712.705850] raid6test: test_disks(54, 62): faila= 54(D) failb= 62(P) OK [12712.706358] raid6test: test_disks(54, 63): faila= 54(D) failb= 63(Q) OK [12712.706913] raid6test: test_disks(55, 56): faila= 55(D) failb= 56(D) OK [12712.707405] raid6test: test_disks(55, 57): faila= 55(D) failb= 57(D) OK [12712.707937] raid6test: test_disks(55, 58): faila= 55(D) failb= 58(D) OK [12712.708429] raid6test: test_disks(55, 59): faila= 55(D) failb= 59(D) OK [12712.708930] raid6test: test_disks(55, 60): faila= 55(D) failb= 60(D) OK [12712.737175] b= 63(Q) OK [12713.209824] raid6test: test_disks(56, 57): faila= 56(D) failb= 57(D) OK [12713.210326] raid6test: test_disks(56, 58): faila= 56(D) failb= 58(D) OK [12713.210870] raid6test: test_disks(56, 59): faila= 56(D) failb= 59(D) OK [12713.211362] raid6test: test_disks(56, 60): faila= 56(D) failb= 60(D) OK [12713.211859] raid6test: test_disks(56, 61): faila= 56(D) failb= 61(D) OK [12713.212342] raid6test: test_disks(56, 62): faila= 56(D) failb= 62(P) OK [12713.212883] raid6test: test_disks(56, 63): faila= 56(D) failb= 63(Q) OK [12713.213370] raid6test: test_disks(57, 58): faila= 57(D) failb= 58(D) OK [12713.213911] raid6test: test_disks(57, 59): faila= 57(D) failb= 59(D) OK [12713.214401] raid6test: test_disks(57, 60): faila= 57(D) failb= 60(D) OK [12713.214938] raid6test: test_disks(57, 61): faila= 57(D) failb= 61(D) OK [12713.215431] raid6test: test_disks(57, 62): faila= 57(D) failb= 62(P) OK [12713.215977] raid6test: test_disks(57, 63): faila= 57(D) failb= 63(Q) OK [12713.216455] raid6test: test_disks(58, 59): faila= 58(D) failb= 59(D) OK [12713.216959] raid6test: test_disks(58, 60): faila= 58(D) failb= 60(D) OK [12713.217446] raid6test: test_disks(58, 61): faila= 58(D) failb= 61(D) OK [12713.217979] r[12713.692755] raid6test: test_disks(59, 61): faila= 59(D) failb= 61(D) OK [12713.718808] raid6test: test_disks(59, 62): faila= 59(D) failb= 62(P) OK [12713.719313] raid6test: test_disks(59, 63): faila= 59(D) failb= 63(Q) OK [12713.719839] raid6test: test_disks(60, 61): faila= 60(D) failb= 61(D) OK [12713.720325] raid6test: test_disks(60, 62): faila= 60(D) failb= 62(P) OK [12713.720869] raid6test: test_disks(60, 63): faila= 60(D) failb= 63(Q) OK [12713.721359] raid6test: test_disks(61, 62): faila= 61(D) failb= 62(P) OK [12713.721903] raid6test: test_disks(61, 63): faila= 61(D) failb= 63(Q) OK [12713.722388] raid6test: test_disks(62, 63): faila= 62(P) failb= 63(Q) OK 
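The summary entry just below closes the raid6test run (2429 tests, 0 failures), after which the harness resumes its alphabetical module load/unload sweep (raid_class, ramoops, rbd, ... through the xt_* netfilter modules) until the per-test timeout fires at [13039.997126] and the run is aborted; memory and task-state dumps follow. A hypothetical sketch of the kind of driver loop and SysRq triggers that would produce this output is below; the actual test harness is not part of this log, and the loop itself and the module_list.txt input file are assumptions (only the "** Attempting ... **" markers such a loop would print actually appear here), while the two SysRq writes are the standard triggers for the "sysrq: Show Memory" and "sysrq: Show State" sections seen later:

    # Assumed harness loop (hypothetical; module_list.txt is an assumed input file).
    while read -r mod; do
        echo "** Attempting to load ${mod}... **"
        modprobe "${mod}"
        echo "** Attempting to unload ${mod}... **"
        modprobe -r "${mod}"
    done < module_list.txt

    # Run as root; the kernel.sysrq setting must permit these operations.
    echo m > /proc/sysrq-trigger   # "Show Memory": Mem-Info, per-zone stats, buddy/hugepage summaries
    echo t > /proc/sysrq-trigger   # "Show State": every task's state and kernel stack trace

Because markers like these are written to the same serial console that carries kernel messages, the two streams interleave, which is consistent with the entries in this capture that appear spliced mid-word (for example the xt_tcpmss load/unload markers woven into the task dump further down).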
[12713.722828] raid6test: [12713.722982] raid6test: complete (2429 tests, 0 failures) ** Attempting to unload raid6test... ** ** Attempting to load raid_class... ** ** Attempting to unload raid_class... ** ** Attempting to load ramoops... ** ** Attempting to unload ramoops... ** ** Attempting to load rbd... ** [12718.772285] Key type ceph registered [12718.774803] libceph: loaded (mon/osd proto 15/24) [12718.816441] rbd: loaded (major 252) ** Attempting to unload rbd... ** [12719.444100] Key type ceph unregistered ** Attempting to load rdma_cm... ** ** Attempting to unload rdma_cm... ** ** Attempting to load rdma_ucm... ** ** Attempting to unload rdma_ucm... ** ** Attempting to load reed_solomon... ** ** Attempting to unload reed_solomon... ** ** Attempting to load rfcomm... ** [12727.859590] Bluetooth: Core ver 2.22 [12727.860328] NET: Registered PF_BLUETOOTH protocol family [12727.860658] Bluetooth: HCI device and connection manager initialized [12727.861389] Bluetooth: HCI socket layer initialized [12727.862684] Bluetooth: L2CAP socket layer initialized [12727.863244] Bluetooth: SCO socket layer initialized [12727.895964] Bluetooth: RFCOMM TTY layer initialized [12727.896695] Bluetooth: RFCOMM socket layer initialized [12727.897192] Bluetooth: RFCOMM ver 1.11 ** Attempting to unload rfcomm... ** [12728.473081] NET: Unregistered PF_BLUETOOTH protocol family ** Attempting to load ring_buffer_benchmark... ** ** Attempting to unload ring_buffer_benchmark... ** ** Attempting to load rmd160... ** ** Attempting to unload rmd160... ** ** Attempting to load rpcrdma... ** [12736.712806] RPC: Registered rdma transport module. [12736.713603] RPC: Registered rdma backchannel transport module. ** Attempting to unload rpcrdma... ** [12737.225053] RPC: Unregistered rdma transport module. [12737.225437] RPC: Unregistered rdma backchannel transport module. ** Attempting to load sch_cake... ** ** Attempting to unload sch_cake... ** ** Attempting to load sch_cbs... ** ** Attempting to unload sch_cbs... ** ** Attempting to load sch_etf... ** ** Attempting to unload sch_etf... ** ** Attempting to load sch_ets... ** ** Attempting to unload sch_ets... ** ** Attempting to load sch_fq... ** ** Attempting to unload sch_fq... ** ** Attempting to load sch_hfsc... ** ** Attempting to unload sch_hfsc... ** ** Attempting to load sch_htb... ** ** Attempting to unload sch_htb... ** ** Attempting to load sch_ingress... ** ** Attempting to unload sch_ingress... ** ** Attempting to load sch_prio... ** ** Attempting to unload sch_prio... ** ** Attempting to load sch_sfq... ** ** Attempting to unload sch_sfq... ** ** Attempting to load sch_taprio... ** ** Attempting to unload sch_taprio... ** ** Attempting to load sch_tbf... ** ** Attempting to unload sch_tbf... ** ** Attempting to load scsi_transport_iscsi... ** [12758.804369] Loading iSCSI transport class v2.0-870. ** Attempting to unload scsi_transport_iscsi... ** ** Attempting to load serpent_generic... ** ** Attempting to unload serpent_generic... ** ** Attempting to load serport... ** ** Attempting to unload serport... ** ** Attempting to load sit... ** [12765.379778] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver ** Attempting to unload sit... ** ** Attempting to load snd... ** ** Attempting to unload snd... ** ** Attempting to load snd_hda_codec_hdmi... ** ** Attempting to unload snd_hda_codec_hdmi... ** ** Attempting to load snd_hrtimer... ** ** Attempting to unload snd_hrtimer... ** ** Attempting to load snd_seq_dummy... 
** ** Attempting to unload snd_seq_dummy... ** ** Attempting to load snd_timer... ** ** Attempting to unload snd_timer... ** ** Attempting to load softdog... ** [12779.126891] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0) [12779.127889] softdog: soft_reboot_cmd= soft_active_on_boot=0 ** Attempting to unload softdog... ** ** Attempting to load soundcore... ** ** Attempting to unload soundcore... ** ** Attempting to load sr_mod... ** ** Attempting to unload sr_mod... ** [12783.263057] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load tap... ** ** Attempting to unload tap... ** ** Attempting to load target_core_file... ** [12787.032740] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12787.034524] db_root: cannot open: /etc/target ** Attempting to unload target_core_file... ** ** Attempting to load target_core_iblock... ** [12789.027298] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12789.028892] db_root: cannot open: /etc/target ** Attempting to unload target_core_iblock... ** ** Attempting to load target_core_mod... ** [12791.272579] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12791.274136] db_root: cannot open: /etc/target ** Attempting to unload target_core_mod... ** ** Attempting to load target_core_pscsi... ** [12793.215690] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12793.217433] db_root: cannot open: /etc/target ** Attempting to unload target_core_pscsi... ** ** Attempting to load target_core_user... ** [12795.234550] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12795.236060] db_root: cannot open: /etc/target ** Attempting to unload target_core_user... ** ** Attempting to load tcm_fc... ** [12797.832637] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12797.834375] db_root: cannot open: /etc/target ** Attempting to unload tcm_fc... ** ** Attempting to load tcm_loop... ** [12799.961047] Rounding down aligned max_sectors from 4294967295 to 4294967288 [12799.962644] db_root: cannot open: /etc/target ** Attempting to unload tcm_loop... ** ** Attempting to load tcp_bbr... ** ** Attempting to unload tcp_bbr... ** ** Attempting to load tcp_dctcp... ** ** Attempting to unload tcp_dctcp... ** ** Attempting to load tcp_nv... ** ** Attempting to unload tcp_nv... ** ** Attempting to load team... ** [12806.535166] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team... ** ** Attempting to load team_mode_activebackup... ** [12808.140963] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_activebackup... ** ** Attempting to load team_mode_broadcast... ** [12809.775377] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_broadcast... ** ** Attempting to load team_mode_loadbalance... ** [12811.402317] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_loadbalance... ** ** Attempting to load team_mode_random... ** [12813.101098] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_random... ** ** Attempting to load team_mode_roundrobin... 
** [12814.777863] Warning: Deprecated Driver is detected: team will not be maintained in a future major release and may be disabled ** Attempting to unload team_mode_roundrobin... ** ** Attempting to load tipc... ** [12817.282715] tipc: Activated (version 2.0.0) [12817.285335] NET: Registered PF_TIPC protocol family [12817.287759] tipc: Started in single node mode ** Attempting to unload tipc... ** [12817.864724] NET: Unregistered PF_TIPC protocol family [12818.040994] tipc: Deactivated ** Attempting to load ts_bm... ** ** Attempting to unload ts_bm... ** ** Attempting to load ts_fsm... ** ** Attempting to unload ts_fsm... ** ** Attempting to load tun... ** [12823.307193] tun: Universal TUN/TAP device driver, 1.6 ** Attempting to unload tun... ** ** Attempting to load tunnel4... ** ** Attempting to unload tunnel4... ** ** Attempting to load tunnel6... ** ** Attempting to unload tunnel6... ** ** Attempting to load twofish_common... ** ** Attempting to unload twofish_common... ** ** Attempting to load twofish_generic... ** ** Attempting to unload twofish_generic... ** ** Attempting to load ubi... ** ** Attempting to unload ubi... ** ** Attempting to load udf... ** ** Attempting to unload udf... ** [12833.711464] cdrom: Uniform CD-ROM driver unloaded ** Attempting to load udp_tunnel... ** ** Attempting to unload udp_tunnel... ** ** Attempting to load uhid... ** ** Attempting to unload uhid... ** ** Attempting to load uinput... ** ** Attempting to unload uinput... ** ** Attempting to load uio... ** ** Attempting to unload uio... ** ** Attempting to load uio_pci_generic... ** ** Attempting to unload uio_pci_generic... ** ** Attempting to load usb_wwan... ** ** Attempting to unload usb_wwan... ** ** Attempting to load usbnet... ** ** Attempting to unload usbnet... ** ** Attempting to load veth... ** ** Attempting to unload veth... ** ** Attempting to load vhost... ** ** Attempting to unload vhost... ** ** Attempting to load vhost_iotlb... ** ** Attempting to unload vhost_iotlb... ** ** Attempting to load vhost_net... ** [12852.101588] tun: Universal TUN/TAP device driver, 1.6 ** Attempting to unload vhost_net... ** ** Attempting to load vhost_vdpa... ** ** Attempting to unload vhost_vdpa... ** ** Attempting to load vhost_vsock... ** [12855.515481] NET: Registered PF_VSOCK protocol family ** Attempting to unload vhost_vsock... ** [12856.229091] NET: Unregistered PF_VSOCK protocol family ** Attempting to load videodev... ** [12857.441139] mc: Linux media interface: v0.10 [12857.578036] videodev: Linux video capture interface: v2.00 ** Attempting to unload videodev... ** ** Attempting to load virtio_gpu... ** ** Attempting to unload virtio_gpu... ** ** Attempting to load virtio_balloon... ** ** Attempting to unload virtio_balloon... ** ** Attempting to load virtio_blk... ** ** Attempting to unload virtio_blk... ** ** Attempting to load virtio_dma_buf... ** ** Attempting to unload virtio_dma_buf... ** ** Attempting to load virtio_input... ** ** Attempting to unload virtio_input... ** ** Attempting to load virtio_net... ** ** Attempting to unload virtio_net... ** ** Attempting to load virtio_scsi... ** ** Attempting to unload virtio_scsi... ** ** Attempting to load virtio_vdpa... ** ** Attempting to unload virtio_vdpa... ** ** Attempting to load virtiofs... ** ** Attempting to unload virtiofs... ** ** Attempting to load vmac... ** ** Attempting to unload vmac... ** ** Attempting to load vport_geneve... 
** [12875.953414] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_geneve... ** ** Attempting to load vport_gre... ** [12879.009727] gre: GRE over IPv4 demultiplexor driver [12879.461818] openvswitch: Open vSwitch switching datapath [12879.482640] ip_gre: GRE over IPv4 tunneling driver ** Attempting to unload vport_gre... ** ** Attempting to load vport_vxlan... ** [12883.139314] openvswitch: Open vSwitch switching datapath ** Attempting to unload vport_vxlan... ** ** Attempting to load vringh... ** ** Attempting to unload vringh... ** ** Attempting to load vsock_diag... ** [12887.883150] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsock_diag... ** [12888.400357] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vsockmon... ** [12889.462619] NET: Registered PF_VSOCK protocol family ** Attempting to unload vsockmon... ** [12890.021338] NET: Unregistered PF_VSOCK protocol family ** Attempting to load vxlan... ** ** Attempting to unload vxlan... ** ** Attempting to load wireguard... ** [12892.967195] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. [12892.967711] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. [12892.968250] TECH PREVIEW: WireGuard may not be fully supported. [12892.968250] Please review provided documentation for limitations. ** Attempting to unload wireguard... ** ** Attempting to load wp512... ** ** Attempting to unload wp512... ** ** Attempting to load xcbc... ** ** Attempting to unload xcbc... ** ** Attempting to load xfrm4_tunnel... ** ** Attempting to unload xfrm4_tunnel... ** ** Attempting to load xfrm6_tunnel... ** ** Attempting to unload xfrm6_tunnel... ** ** Attempting to load xfrm_interface... ** [12904.609387] IPsec XFRM device driver ** Attempting to unload xfrm_interface... ** ** Attempting to load xfrm_ipcomp... ** ** Attempting to unload xfrm_ipcomp... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xsk_diag... ** ** Attempting to unload xsk_diag... ** ** Attempting to load xt_AUDIT... ** ** Attempting to unload xt_AUDIT... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_CHECKSUM... ** ** Attempting to unload xt_CHECKSUM... ** ** Attempting to load xt_CLASSIFY... ** ** Attempting to unload xt_CLASSIFY... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** ** Attempting to load xt_DSCP... ** ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... ** [-- MARK -- Fri Feb 3 09:20:00 2023] ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_MASQUERADE... ** ** Attempting to unload xt_MASQUERADE... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG... ** ** Attempting to unload xt_NFLOG... ** ** Attempting to load xt_NFQUEUE... ** ** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... 
** ** Attempting to load xt_SECMARK... ** ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_TCPMSS... ** ** Attempting to unload xt_TCPMSS... ** ** Attempting to load xt_TCPOPTSTRIP... ** ** Attempting to unload xt_TCPOPTSTRIP... ** ** Attempting to load xt_TEE... ** ** Attempting to unload xt_TEE... ** ** Attempting to load xt_TPROXY... ** ** Attempting to unload xt_TPROXY... ** ** Attempting to load xt_TRACE... ** ** Attempting to unload xt_TRACE... ** ** Attempting to load xt_addrtype... ** ** Attempting to unload xt_addrtype... ** ** Attempting to load xt_bpf... ** ** Attempting to unload xt_bpf... ** ** Attempting to load xt_cgroup... ** ** Attempting to unload xt_cgroup... ** ** Attempting to load xt_cluster... ** ** Attempting to unload xt_cluster... ** ** Attempting to load xt_comment... ** ** Attempting to unload xt_comment... ** ** Attempting to load xt_connbytes... ** ** Attempting to unload xt_connbytes... ** ** Attempting to load xt_connlabel... ** ** Attempting to unload xt_connlabel... ** ** Attempting to load xt_connlimit... ** ** Attempting to unload xt_connlimit... ** ** Attempting to load xt_connmark... ** ** Attempting to unload xt_connmark... ** ** Attempting to load xt_CONNSECMARK... ** ** Attempting to unload xt_CONNSECMARK... ** ** Attempting to load xt_conntrack... ** ** Attempting to unload xt_conntrack... ** ** Attempting to load xt_cpu... ** ** Attempting to unload xt_cpu... ** ** Attempting to load xt_CT... ** ** Attempting to unload xt_CT... ** ** Attempting to load xt_dccp... ** ** Attempting to unload xt_dccp... ** ** Attempting to load xt_devgroup... ** ** Attempting to unload xt_devgroup... ** ** Attempting to load xt_dscp... ** ** Attempting to unload xt_dscp... ** ** Attempting to load xt_DSCP... ** ** Attempting to unload xt_DSCP... ** ** Attempting to load xt_ecn... ** ** Attempting to unload xt_ecn... ** ** Attempting to load xt_esp... ** ** Attempting to unload xt_esp... ** ** Attempting to load xt_hashlimit... ** ** Attempting to unload xt_hashlimit... ** ** Attempting to load xt_helper... ** ** Attempting to unload xt_helper... ** ** Attempting to load xt_hl... ** ** Attempting to unload xt_hl... ** ** Attempting to load xt_HL... ** ** Attempting to unload xt_HL... ** ** Attempting to load xt_HMARK... ** ** Attempting to unload xt_HMARK... ** ** Attempting to load xt_IDLETIMER... ** ** Attempting to unload xt_IDLETIMER... ** ** Attempting to load xt_iprange... ** ** Attempting to unload xt_iprange... ** ** Attempting to load xt_ipvs... ** [12999.610683] IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) [12999.612790] IPVS: Connection hash table configured (size=4096, memory=32Kbytes) [12999.613677] IPVS: Each connection entry needs 416 bytes at least [12999.615721] IPVS: ipvs loaded. ** Attempting to unload xt_ipvs... ** [13000.185055] IPVS: ipvs unloaded. ** Attempting to load xt_length... ** ** Attempting to unload xt_length... ** ** Attempting to load xt_limit... ** ** Attempting to unload xt_limit... ** ** Attempting to load xt_LOG... ** ** Attempting to unload xt_LOG... ** ** Attempting to load xt_mac... ** ** Attempting to unload xt_mac... ** ** Attempting to load xt_mark... ** ** Attempting to unload xt_mark... ** ** Attempting to load xt_multiport... ** ** Attempting to unload xt_multiport... ** ** Attempting to load xt_nat... ** ** Attempting to unload xt_nat... ** ** Attempting to load xt_NETMAP... ** ** Attempting to unload xt_NETMAP... ** ** Attempting to load xt_NFLOG... ** ** Attempting to unload xt_NFLOG... 
** ** Attempting to load xt_NFQUEUE... ** ** Attempting to unload xt_NFQUEUE... ** ** Attempting to load xt_osf... ** ** Attempting to unload xt_osf... ** ** Attempting to load xt_owner... ** ** Attempting to unload xt_owner... ** ** Attempting to load xt_physdev... ** ** Attempting to unload xt_physdev... ** ** Attempting to load xt_pkttype... ** ** Attempting to unload xt_pkttype... ** ** Attempting to load xt_policy... ** ** Attempting to unload xt_policy... ** ** Attempting to load xt_quota... ** ** Attempting to unload xt_quota... ** ** Attempting to load xt_rateest... ** ** Attempting to unload xt_rateest... ** ** Attempting to load xt_RATEEST... ** ** Attempting to unload xt_RATEEST... ** ** Attempting to load xt_realm... ** ** Attempting to unload xt_realm... ** ** Attempting to load xt_recent... ** ** Attempting to unload xt_recent... ** ** Attempting to load xt_REDIRECT... ** ** Attempting to unload xt_REDIRECT... ** ** Attempting to load xt_SECMARK... ** [13039.997126] load/unload kernel module test - bare_metal hit test timeout, aborting it... ** Attempting to unload xt_SECMARK... ** ** Attempting to load xt_set... ** ** Attempting to unload xt_set... ** [13041.810929] sysrq: Show Memory [13041.812167] Mem-Info: [13041.812390] active_anon:672 inactive_anon:30470 isolated_anon:0 [13041.812390] active_file:71322 inactive_file:40526 isolated_file:0 [13041.812390] unevictable:768 dirty:2 writeback:0 [13041.812390] slab_reclaimable:97354 slab_unreclaimable:287962 [13041.812390] mapped:12299 shmem:8414 pagetables:727 bounce:0 [13041.812390] kernel_misc_reclaimable:0 [13041.812390] free:5893676 free_pcp:17325 free_cma:0 [13041.815020] Node 0 active_anon:2592kB inactive_anon:74956kB active_file:97660kB inactive_file:78656kB unevictable:3072kB isolated(anon):0kB isolated(file):0kB mapped:36280kB dirty:4kB writeback:0kB shmem:33120kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 6144kB writeback_tmp:0kB kernel_stack:6448kB pagetables:1900kB all_unreclaimable? yes [13041.816746] Node 1 active_anon:96kB inactive_anon:46924kB active_file:187628kB inactive_file:83448kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:12916kB dirty:4kB writeback:0kB shmem:536kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 10240kB writeback_tmp:0kB kernel_stack:6352kB pagetables:1008kB all_unreclaimable? 
no [13041.819159] Node 0 DMA free:11264kB boost:0kB min:52kB low:64kB high:76kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB [13041.820633] lowmem_reserve[]: 0 909 9558 9558 9558 [13041.821389] Node 0 DMA32 free:919940kB boost:0kB min:3092kB low:3984kB high:4876kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:3094192kB managed:931504kB mlocked:0kB bounce:0kB free_pcp:4476kB local_pcp:0kB free_cma:0kB [13041.822972] lowmem_reserve[]: 0 0 8648 8648 8648 [13041.824103] Node 0 Normal free:7566548kB boost:72448kB min:103044kB low:111896kB high:120748kB reserved_highatomic:0KB active_anon:2592kB inactive_anon:74956kB active_file:97660kB inactive_file:78656kB unevictable:3072kB writepending:4kB present:13631488kB managed:8855984kB mlocked:0kB bounce:0kB free_pcp:21496kB local_pcp:844kB free_cma:0kB [13041.855191] 0 0 0 0 0 [13041.925941] Node 1 Normal free:15077196kB boost:0kB min:56364kB low:72676kB high:88988kB reserved_highatomic:0KB active_anon:96kB inactive_anon:46896kB active_file:187628kB inactive_file:83448kB unevictable:0kB writepending:4kB present:16777212kB managed:16325676kB mlocked:0kB bounce:0kB free_pcp:43328kB local_pcp:0kB free_cma:0kB [13041.928001] lowmem_reserve[]: 0 0 0 0 0 [13041.928307] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 2*4096kB (M) = 11264kB [13041.929375] Node 0 DMA32: 25*4kB (M) 18*8kB (M) 21*16kB (M) 22*32kB (M) 20*64kB (M) 19*128kB (M) 20*256kB (M) 7*512kB (UM) 5*1024kB (M) 2*2048kB (M) 219*4096kB (M) = 919940kB [13041.930912] Node 0 Normal: 276*4kB (UME) 1760*8kB (UME) 1650*16kB (UME) 1919*32kB (UME) 2543*64kB (UME) 1761*128kB (UME) 1805*256kB (UME) 1156*512kB (UM) 835*1024kB (UM) 163*2048kB (UME) 1180*4096kB (UME) = 7567248kB [13041.932279] Node 1 Normal: 793*4kB (UME) 3067total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB [13042.36058total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB [13042.434817] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB [13042.435695] Node 1 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB [13042.436557] 120262 total pagecache pages [13042.436791] 0 pages in swap cache [13042.437345] Swap cache stats: add 0, delete 0, find 0/0 [13042.437690] Free swap = 16502780kB [13042.438256] Total swap = 16502780kB [13042.438865] 8379718 pages RAM [13042.439469] 0 pages HighMem/MovableOnly [13042.439694] 1847587 pages reserved [13042.440252] 0 pages cma reserved [13042.440825] 887 pages hwpoisoned ** Attempting to load xt_socket... ** ** Attempting to unload xt_socket... ** ** Attempting to load xt_state... ** ** Attempting to unload xt_state... ** ** Attempting to load xt_statistic... ** ** Attempting to unload xt_statistic... ** ** Attempting to load xt_string... ** ** Attempting to unload xt_string... ** [13049.788724] sysrq: Show State [13049.789378] task:systemd state:S stack:22280 pid: 1 ppid: 0 flags:0x00000002 [13049.791962] Call Trace: [13049.792153] [13049.792773] __schedule+0x72e/0x1570 [13049.793122] ? io_schedule_timeout+0x160/0x160 [13049.794110] ? __lock_acquire+0xb72/0x1870 [13049.794555] schedule+0x128/0x220 [13049.795182] schedule_hrtimeout_range_clock+0x2b8/0x300 [13049.795564] ? 
hrtimer_nanosleep_restart+0x160/0x160 [13049.795895] ? lock_downgrade+0x130/0x130 [13049.796143] ? do_wait+0xb10/0xb10 [13049.796855] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13049.797706] ep_poll+0x7d2/0xa80 [13049.798359] ? ep_send_events+0x9f0/0x9f0 [13049.798691] ? __lock_release+0x4c1/0xa00 [13049.798970] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13049.799371] do_epoll_wait+0x12f/0x160 [13049.799738] __x64_sys_epoll_wait+0x12e/0x250 [13049.800412] ? lockdep_hardirqs_on+0x79/0x100 [13049.801255] ? __x64_sys_epoll_pwait2+0x240/0x240 [13049.802052] do_syscall_64+0x5c/0x90 [13049.802340] ? asm_exc_page_fault+0x22/0x30 [13049.802645] ? lockdep_hardirqs_on+0x79/0x100 [13049.803339] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13049.803733] RIP: 0033:0x7ff1fa34eb0e [13049.804003] RSP: 002b:00007ffd9e7ae7f0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8 [13049.805075] RAX: ffffffffffffffda RBX: 000056526f9091d0 RCX: 00007ff1fa34eb0e [13049.806032] RDX: 000000000000005c RSI: 000056526fa22820 RDI: 0000000000000004 [13049.806884] RBP: 000056526f909360 R08: 0000000000000000 R09: 0000000000000010 [13049.807729] R10: 00000000ffffffff R11: 0000000000000293 R12: 00000000000001e0 [13049.808649] R13: 000056526f9091d0 R14: 000000000000005c R15: 0000000000000030 [13049.809633] [13049.809838] task:kthreadd state:S stack:27936 pid: 2 ppid: 0 flags:0x00004000 [13049.810354] Call Trace: [13049.810565] [13049.811119] __schedule+0x72e/0x1570 [13049.811393] ? io_schedule_timeout+0x160/0x160 [13049.812221] ? lock_downgrade+0x130/0x130 [13049.812495] ? lockdep_hardirqs_on+0x79/0x100 [13049.813259] schedule+0x128/0x220 [13049.813973] kthreadd+0x9fb/0xd60 [13049.815068] ? kthread_is_per_cpu+0xc0/0xc0 [13049.815351] ret_from_fork+0x22/0x30 [13049.815890] [13049.816080] task:rcu_gp state:I stack:30328 pid: 3 ppid: 2 flags:0x00004000 [13049.816624] Call Trace: [13049.816806] [13049.817354] __schedule+0x72e/0x1570 [13049.817700] ? io_schedule_timeout+0x160/0x160 [13049.818783] ? lock_downgrade+0x130/0x130 [13049.819063] ? wait_for_completion_io_timeout+0x20/0x20 [13049.819448] schedule+0x128/0x220 [13049.820189] rescuer_thread+0x679/0xbb0 [13049.820553] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13049.820917] ? worker_thread+0xf90/0xf90 [13049.821171] ? __kthread_parkme+0xcc/0x200 [13049.850935][13050.322741] [13050.322964] task:rcu_par_gp state:I stack:29104 pid: 4 ppid: 2 flags:0x00004000 ** Attempting to[13050.323422] Workqueue: 0x0 (rcu_par_gp) load xt_tcpmss.[13050.323791] Call Trace: .. ** [13050.323962] [13050.325206] __schedule+0x72e/0x1570 [13050.325462] ? io_schedule_timeout+0x160/0x160 [13050.326309] ? lock_downgrade+0x130/0x130 [13050.326614] ? pwq_dec_nr_in_flight+0x230/0x230 [13050.327316] schedule+0x128/0x220 [13050.327980] rescuer_thread+0x679/0xbb0 [13050.328257] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13050.328616] ? worker_thread+0xf90/0xf90 [13050.328861] ? __kthread_parkme+0xcc/0x200 [13050.329093] ? worker_thread+0xf90/0xf90 [13050.329325] kthread+0x2a7/0x350 [13050.329953] ? kthread_complete_and_exit+0x20/0x20 [13050.330683] ret_from_fork+0x22/0x30 [13050.330961] [13050.331126] task:slub_flushwq state:I stack:31112 pid: 5 ppid: 2 flags:0x00004000 [13050.331630] Call Trace: [13050.331788] [13050.332310] __schedule+0x72e/0x1570 [13050.332602] ? io_schedule_timeout+0x160/0x160 [13050.333252] ? lock_downgrade+0x130/0x130 [13050.333490] ? wait_for_completion_io_timeout+0x20/0x20 [13050.333874] schedule+0x128/0x220 [13050.334567] rescuer_thread+0x679/0xbb0 [13050.334840] ? 
_raw_spin_unlock_irqrestore+0x59/[13050.826419] ? kthread_complete_and_exit+0x20/0x20 [13050.836135] ret_from_fork+0x22/0x30 [13050.836433] [13050.836652] task:netns state:I stack:31112 pid: 6 ppid: 2 flags:0x00004000 [13050.837140] Call Trace: [13050.837292] [13050.837846] __schedule+0x72e/0x1570 [13050.838108] ? io_schedule_timeout+0x160/0x160 [13050.838778] ? lock_downgrade+0x130/0x130 [13050.839013] ? wait_for_completion_io_timeout+0x20/0x20 [13050.839343] schedule+0x128/0x220 [13050.840089] rescuer_thread+0x679/0xbb0 [13050.840360] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13050.840798] ? worker_thread+0xf90/0xf90 [13050.841034] ? __kthread_parkme+0xcc/0x200 [13050.841265] ? worker_thread+0xf90/0xf90 [13050.841497] kthread+0x2a7/0x350 [13050.842133] ? kthread_complete_and_exit+0x20/0x20 [13050.842819] ret_from_fork+0x22/0x30 [13050.843128] [13050.843299] task:kworker/0:0H state:I stack:30048 pid: 8 ppid: 2 flags:0x00004000 [13050.843772] Workqueue: 0x0 (events_highpri) [13050.844448] Call Trace: [13050.844663] [13050.845361] __schedule+0x72e/0x1570 [13050.845761] ? io_schedule_timeout+0x160/0x160 [13050.846467] ? lock_downgrade+0x130/0x130 [13050.8739[13051.347065] kthread+0x2a7/0x350 [13051.347113] ? kthread_complete_and_exit+0x20/0x20 [13051.347133] ret_from_fork+0x22/0x30 [13051.347187] [13051.349567] task:kworker/0:1H state:I stack:27832 pid: 10 ppid: 2 flags:0x00004000 ** Attempting to[13051.350047] Workqueue: 0x0 (events_highpri) unload xt_tcpms[13051.350779] Call Trace: s... ** [13051.350965] [13051.351613] __schedule+0x72e/0x1570 [13051.351884] ? io_schedule_timeout+0x160/0x160 [13051.352668] ? lock_downgrade+0x130/0x130 [13051.353112] ? pwq_dec_nr_in_flight+0x230/0x230 [13051.353943] schedule+0x128/0x220 [13051.354599] worker_thread+0x152/0xf90 [13051.354898] ? process_one_work+0x1520/0x1520 [13051.355649] kthread+0x2a7/0x350 [13051.356258] ? kthread_complete_and_exit+0x20/0x20 [13051.357070] ret_from_fork+0x22/0x30 [13051.357474] [13051.357699] task:mm_percpu_wq state:I stack:31112 pid: 12 ppid: 2 flags:0x00004000 [13051.358184] Call Trace: [13051.358333] [13051.358888] __schedule+0x72e/0x1570 [13051.359145] ? io_schedule_timeout+0x160/0x160 [13051.359814] ? lock_downgrade+0x130/0x130 [13051.360051] ? wait_for_completion_io_timeout+0x20/0x20 [13051.360378] schedule+0x? __kthread_parkme+0xcc/0x200 [13051.860944] ? worker_thread+0xf90/0xf90 [13051.861215] kthread+0x2a7/0x350 [13051.862139] ? kthread_complete_and_exit+0x20/0x20 [13051.863015] ret_from_fork+0x22/0x30 [13051.863320] [13051.863486] task:rcu_tasks_kthre state:I stack:27928 pid: 13 ppid: 2 flags:0x00004000 [13051.864014] Call Trace: [13051.864295] [13051.864837] __schedule+0x72e/0x1570 [13051.865095] ? io_schedule_timeout+0x160/0x160 [13051.865908] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13051.866239] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13051.866575] ? rcu_tasks_need_gpcb+0x2e1/0x5e0 [13051.867324] ? rcu_tasks_kthread+0x152/0x550 [13051.867997] schedule+0x128/0x220 [13051.868673] rcu_tasks_kthread+0x152/0x550 [13051.868942] ? rcu_tasks_invoke_cbs_wq+0x60/0x60 [13051.869639] kthread+0x2a7/0x350 [13051.870231] ? kthread_complete_and_exit+0x20/0x20 [13051.870924] ret_from_fork+0x22/0x30 [13051.871208] [13051.871364] task:rcu_tasks_rude_ state:I stack:30056 pid: 14 ppid: 2 flags:0x00004000 [13051.871907] Call Trace: [13051.872099] [13051.872685] __schedule+0x72e/0x1570 [13051.872944] ? io_sched[13052.373385] ? 
rcu_tasks_kthread+0x152/0x550 [13052.374095] schedule+0x128/0x220 [13052.374747] rcu_tasks_kthread+0x152/0x550 [13052.375015] ? rcu_tasks_invoke_cbs_wq+0x60/0x60 [13052.375792] kthread+0x2a7/0x350 [13052.376388] ? kthread_complete_and_exit+0x20/0x20 [13052.377096] ret_from_fork+0x22/0x30 [13052.377381] [13052.377569] task:rcu_tasks_trace state:I stack:30168 pid: 15 ppid: 2 flags:0x00004000 [13052.378064] Call Trace: [13052.378252] [13052.378938] __schedule+0x72e/0x1570 [13052.379200] ? io_schedule_timeout+0x160/0x160 [13052.379857] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13052.380192] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13052.380490] ? rcu_tasks_need_gpcb+0x2e1/0x5e0 [13052.381191] ? rcu_tasks_kthread+0x152/0x550 [13052.381879] schedule+0x128/0x220 [13052.382477] rcu_tasks_kthread+0x152/0x550 [13052.382804] ? rcu_tasks_invoke_cbs_wq+0x60/0x60 [13052.383470] kthread+0x2a7/0x350 [13052.384088] ? kthread_complete_and_exit+0x20/0x20 [13052.384791] ret_from_fork+0x22/0x30 [13052.385078] [13052.385237] task:ksoftirqd/0 state:S stack:29616 pid: 17 ppid: [13052.877674] ? smpboot_thread_fn+0x6b/0x910 [13052.886078] schedule+0x128/0x220 [13052.886698] ? __local_bh_enable+0x90/0x90 [13052.886942] smpboot_thread_fn+0x253/0x910 [13052.887179] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13052.887484] kthread+0x2a7/0x350 [13052.888121] ? kthread_complete_and_exit+0x20/0x20 [13052.888811] ret_from_fork+0x22/0x30 [13052.889104] [13052.889263] task:rcu_preempt state:I stack:28536 pid: 18 ppid: 2 flags:0x00004000 [13052.889763] Call Trace: [13052.889933] [13052.890435] __schedule+0x72e/0x1570 [13052.890752] ? io_schedule_timeout+0x160/0x160 [13052.891408] ? timer_fixup_activate+0x2e0/0x2e0 [13052.892126] ? debug_object_deactivate+0x320/0x320 [13052.892882] schedule+0x128/0x220 [13052.893487] schedule_timeout+0x125/0x260 [13052.893754] ? usleep_range_state+0x190/0x190 [13052.894415] ? destroy_timer_on_stack+0x20/0x20 [13052.895124] ? lockdep_hardirqs_on+0x79/0x100 [13052.895818] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13052.896164] ? prepare_to_swait_event+0xf3/0x470 [13052.896857] rcu_gp_fqs_loop+0x18a/0x840 [13052.897114] ? force_qs_rnp+0x6c0/0x6c0 [13052.897361] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13052.898127] rcu_gp_kthreadp+0x840/0x840 [13053.398811] kthread+0x2a7/0x350 [13053.399437] ? kthread_complete_and_exit+0x20/0x20 [13053.400133] ret_from_fork+0x22/0x30 [13053.400422] [13053.400621] task:migration/0 state:S stack:30096 pid: 19 ppid: 2 flags:0x00004000 [13053.401115] Stopper: 0x0 <- 0x0 [13053.401726] Call Trace: [13053.401915] [13053.402441] __schedule+0x72e/0x1570 [13053.402767] ? io_schedule_timeout+0x160/0x160 [13053.403481] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13053.404260] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13053.404688] ? smpboot_thread_fn+0x6b/0x910 [13053.404938] schedule+0x128/0x220 [13053.405519] ? reboot_pid_ns+0xf0/0xf0 [13053.405819] smpboot_thread_fn+0x253/0x910 [13053.406068] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13053.406397] kthread+0x2a7/0x350 [13053.407010] ? kthread_complete_and_exit+0x20/0x20 [13053.407758] ret_from_fork+0x22/0x30 [13053.408045] [13053.408204] task:cpuhp/0 state:S stack:27320 pid: 20 ppid: 2 flags:0x00004000 [13053.408758] Call Trace: [13053.408929] [13053.409454] __schedule+0x72e/0x1570 [13053.409748] ? io_schedule_timeout+0x160/0x160 [13053.410417] ? lockdep_hardirqs_on+0x79/0x100 [13053.411127] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13053.411484] ? cpuhp_invoke_callback+0x830/0x830 [13053.412224] ? 
smpboot_thread_fn+0x6b/+0x2e0/0x2e0 [13053.913298] kthread+0x2a7/0x350 [13053.914042] ? kthread_complete_and_exit+0x20/0x20 [13053.914777] ret_from_fork+0x22/0x30 [13053.915110] [13053.915283] task:cpuhp/1 state:S stack:28424 pid: 21 ppid: 2 flags:0x00004000 [13053.915772] Call Trace: [13053.915953] [13053.916482] __schedule+0x72e/0x1570 [13053.916786] ? io_schedule_timeout+0x160/0x160 [13053.917626] ? lockdep_hardirqs_on+0x79/0x100 [13053.918395] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13053.918792] ? cpuhp_invoke_callback+0x830/0x830 [13053.919497] ? smpboot_thread_fn+0x6b/0x910 [13053.919829] schedule+0x128/0x220 [13053.920445] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13053.921218] smpboot_thread_fn+0x253/0x910 [13053.921480] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13053.921843] kthread+0x2a7/0x350 [13053.922436] ? kthread_complete_and_exit+0x20/0x20 [13053.923174] ret_from_fork+0x22/0x30 [13053.923479] [13053.923696] task:migration/1 state:S stack:29880 pid: 22 ppid: 2 flags:0x00004000 [13053.924209] Stopper: 0x0 <- 0x0 [13053.924815] Call Trace: [13053.924982] [13053.925527] __schedule+0x72e/0x1570 [13053.925853] ? io_schedule_timeout+0x160[13054.418138] schedule+0x128/0x220 [13054.426922] ? reboot_pid_ns+0xf0/0xf0 [13054.427182] smpboot_thread_fn+0x253/0x910 [13054.427436] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13054.427821] kthread+0x2a7/0x350 [13054.428447] ? kthread_complete_and_exit+0x20/0x20 [13054.429145] ret_from_fork+0x22/0x30 [13054.429438] [13054.429632] task:ksoftirqd/1 state:S stack:27336 pid: 23 ppid: 2 flags:0x00004000 [13054.430150] Call Trace: [13054.430305] [13054.430867] __schedule+0x72e/0x1570 [13054.431131] ? io_schedule_timeout+0x160/0x160 [13054.431847] ? smpboot_thread_fn+0x6b/0x910 [13054.432091] schedule+0x128/0x220 [13054.432739] ? __local_bh_enable+0x90/0x90 [13054.433013] smpboot_thread_fn+0x253/0x910 [13054.433282] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13054.433641] kthread+0x2a7/0x350 [13054.434236] ? kthread_complete_and_exit+0x20/0x20 [13054.434936] ret_from_fork+0x22/0x30 [13054.435226] [13054.435384] task:kworker/1:0H state:I stack:30048 pid: 25 ppid: 2 flags:0x00004000 [13054.435911] Workqueue: 0x0 (events_highpri) [13054.436626] Call Trace: [13054.436796] [13054.437309] __schedule+0x72e/0x1570 [13054.437615] ? io_schedule_timeout+0x160/0x160 [13054.438304] ? lock_downgrade+0x130/0x130 [13054.438594] ? pwq_dec_nr_in_fli350 [13054.939516] ? kthread_complete_and_exit+0x20/0x20 [13054.940231] ret_from_fork+0x22/0x30 [13054.940522] [13054.940737] task:cpuhp/2 state:S stack:28560 pid: 26 ppid: 2 flags:0x00004000 [13054.941204] Call Trace: [13054.941362] [13054.941919] __schedule+0x72e/0x1570 [13054.942246] ? io_schedule_timeout+0x160/0x160 [13054.942981] ? lockdep_hardirqs_on+0x79/0x100 [13054.943785] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13054.944164] ? cpuhp_invoke_callback+0x830/0x830 [13054.944888] ? smpboot_thread_fn+0x6b/0x910 [13054.945158] schedule+0x128/0x220 [13054.945788] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13054.946472] smpboot_thread_fn+0x253/0x910 [13054.946813] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13054.947143] kthread+0x2a7/0x350 [13054.947756] ? kthread_complete_and_exit+0x20/0x20 [13054.948482] ret_from_fork+0x22/0x30 [13054.948870] [13054.949052] task:migration/2 state:R running task stack:30000 pid: 27 ppid: 2 flags:0x00004000 [13054.950041] Stopper: 0x0 <- 0x0 [13054.950733] Call Trace: [13054.950941] [13054.951470] __schedule+0x72e/0x1570 [13054.951820] ? 
io_schedule_timeout+0x160/0x160 [13054.952575] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13054.953334] ? _raw_spin_unlock[13055.453853] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13055.454211] kthread+0x2a7/0x350 [13055.454826] ? kthread_complete_and_exit+0x20/0x20 [13055.455592] ret_from_fork+0x22/0x30 [13055.455909] [13055.456089] task:ksoftirqd/2 state:R running task stack:24760 pid: 28 ppid: 2 flags:0x00004000 [13055.457169] Call Trace: [13055.457367] [13055.457933] __schedule+0x72e/0x1570 [13055.458199] ? io_schedule_timeout+0x160/0x160 [13055.458989] ? smpboot_thread_fn+0x6b/0x910 [13055.459278] schedule+0x128/0x220 [13055.459921] ? __local_bh_enable+0x90/0x90 [13055.460188] smpboot_thread_fn+0x253/0x910 [13055.460463] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13055.460796] kthread+0x2a7/0x350 [13055.461377] ? kthread_complete_and_exit+0x20/0x20 [13055.462170] ret_from_fork+0x22/0x30 [13055.462479] [13055.462703] task:kworker/2:0H state:I stack:29656 pid: 30 ppid: 2 flags:0x00004000 [13055.463190] Workqueue: 0x0 (events_highpri) [13055.463892] Call Trace: [13055.464070] [13055.464663] __schedule+0x72e/0x1570 [13055.464941] ? io_schedule_timeout+0x160/0x160 [13055.465662] ? lock_downgrade+0x130/0x130 [13055.465922] ? pwq_dec_nr_in_flight+0x230/0x230 [13055.466702] schedule+0x128/0x220 [13055.467313] worker_thrret_from_fork+0x22/0x30 [13055.967954] [13055.968127] task:cpuhp/3 state:S stack:27896 pid: 31 ppid: 2 flags:0x00004000 [13055.968658] Call Trace: [13055.968823] [13055.969335] __schedule+0x72e/0x1570 [13055.969632] ? io_schedule_timeout+0x160/0x160 [13055.970333] ? lockdep_hardirqs_on+0x79/0x100 [13055.971040] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13055.971410] ? cpuhp_invoke_callback+0x830/0x830 [13055.972148] ? smpboot_thread_fn+0x6b/0x910 [13055.972406] schedule+0x128/0x220 [13055.973013] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13055.973761] smpboot_thread_fn+0x253/0x910 [13055.974024] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13055.974332] kthread+0x2a7/0x350 [13055.974938] ? kthread_complete_and_exit+0x20/0x20 [13055.975662] ret_from_fork+0x22/0x30 [13055.975944] [13055.976135] task:migration/3 state:S stack:30144 pid: 32 ppid: 2 flags:0x00004000 [13055.976795] Stopper: 0x0 <- 0x0 [13055.977424] Call Trace: [13055.977619] [13055.978170] __schedule+0x72e/0x1570 [13055.978419] ? io_schedule_timeout+0x160/0x160 [13055.979239] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13055.980023] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13055.980370] ? sm+0x2e0/0x2e0 [13056.480906] kthread+0x2a7/0x350 [13056.481550] ? kthread_complete_and_exit+0x20/0x20 [13056.482312] ret_from_fork+0x22/0x30 [13056.482660] [13056.482834] task:ksoftirqd/3 state:S stack:25176 pid: 33 ppid: 2 flags:0x00004000 [13056.483297] Call Trace: [13056.483449] [13056.484005] __schedule+0x72e/0x1570 [13056.484275] ? io_schedule_timeout+0x160/0x160 [13056.485013] ? smpboot_thread_fn+0x6b/0x910 [13056.485287] schedule+0x128/0x220 [13056.485885] ? __local_bh_enable+0x90/0x90 [13056.486142] smpboot_thread_fn+0x253/0x910 [13056.486390] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13056.486752] kthread+0x2a7/0x350 [13056.487345] ? kthread_complete_and_exit+0x20/0x20 [13056.488041] ret_from_fork+0x22/0x30 [13056.488334] [13056.488496] task:kworker/3:0H state:I stack:30048 pid: 35 ppid: 2 flags:0x00004000 [13056.489029] Workqueue: 0x0 (events_highpri) [13056.489759] Call Trace: [13056.489928] [13056.490454] __schedule+0x72e/0x1570 [13056.490790] ? io_schedule_timeout+0x160/0x160 [13056.491474] ? 
lock_downgrade+0x130/0x130 [13056.491766] ? pwq_dec_nr_in_flight+0x230/0x230 [13056.492479] schedule+0x128/0x220 [13056.51995[13056.993477] ret_from_fork+0x22/0x30 [13056.993836] [13056.994005] task:cpuhp/4 state:S stack:28560 pid: 36 ppid: 2 flags:0x00004000 [13056.994476] Call Trace: [13056.994658] [13056.995182] __schedule+0x72e/0x1570 [13056.995427] ? io_schedule_timeout+0x160/0x160 [13056.996161] ? lockdep_hardirqs_on+0x79/0x100 [13056.996880] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13056.997220] ? cpuhp_invoke_callback+0x830/0x830 [13056.997967] ? smpboot_thread_fn+0x6b/0x910 [13056.998222] schedule+0x128/0x220 [13056.998843] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13056.999543] smpboot_thread_fn+0x253/0x910 [13056.999855] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13057.000212] kthread+0x2a7/0x350 [13057.000819] ? kthread_complete_and_exit+0x20/0x20 [13057.001511] ret_from_fork+0x22/0x30 [13057.001879] [13057.002059] task:migration/4 state:S stack:29864 pid: 37 ppid: 2 flags:0x00004000 [13057.002552] Stopper: 0x0 <- 0x0 [13057.003191] Call Trace: [13057.003346] [13057.003902] __schedule+0x72e/0x1570 [13057.004163] ? io_schedule_timeout+0x160/0x160 [13057.004857] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13057.005634] ? _raw_spin_unlock_irqrestore+0x59/0x70 [130[13057.498048] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13057.506634] kthread+0x2a7/0x350 [13057.507248] ? kthread_complete_and_exit+0x20/0x20 [13057.507938] ret_from_fork+0x22/0x30 [13057.508236] [13057.508392] task:ksoftirqd/4 state:S stack:26984 pid: 38 ppid: 2 flags:0x00004000 [13057.508897] Call Trace: [13057.509059] [13057.509631] __schedule+0x72e/0x1570 [13057.509903] ? io_schedule_timeout+0x160/0x160 [13057.510663] ? smpboot_thread_fn+0x6b/0x910 [13057.510952] schedule+0x128/0x220 [13057.511527] ? __local_bh_enable+0x90/0x90 [13057.511833] smpboot_thread_fn+0x253/0x910 [13057.512092] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13057.512427] kthread+0x2a7/0x350 [13057.513098] ? kthread_complete_and_exit+0x20/0x20 [13057.513819] ret_from_fork+0x22/0x30 [13057.514121] [13057.514285] task:kworker/4:0H state:I stack:29208 pid: 40 ppid: 2 flags:0x00004000 [13057.514762] Workqueue: 0x0 (events_highpri) [13057.515497] Call Trace: [13057.515692] [13057.516212] __schedule+0x72e/0x1570 [13057.516464] ? io_schedule_timeout+0x160/0x160 [13057.517169] ? lock_downgrade+0x130/0x130 [13057.517430] ? pwq_dec_nr_in_flight+0x230/0x230 [13057.518175] schedule+0x128/0x220 [13057.518815] worker_thread+0x152/0xf90 [22/0x30 [13058.019670] [13058.019855] task:cpuhp/5 state:S stack:27544 pid: 41 ppid: 2 flags:0x00004000 [13058.020328] Call Trace: [13058.020477] [13058.021031] __schedule+0x72e/0x1570 [13058.021291] ? io_schedule_timeout+0x160/0x160 [13058.021978] ? lockdep_hardirqs_on+0x79/0x100 [13058.022808] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13058.023140] ? cpuhp_invoke_callback+0x830/0x830 [13058.023870] ? smpboot_thread_fn+0x6b/0x910 [13058.024128] schedule+0x128/0x220 [13058.024745] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13058.025465] smpboot_thread_fn+0x253/0x910 [13058.025778] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13058.026135] kthread+0x2a7/0x350 [13058.026766] ? kthread_complete_and_exit+0x20/0x20 [13058.027449] ret_from_fork+0x22/0x30 [13058.027820] [13058.027999] task:migration/5 state:S stack:30144 pid: 42 ppid: 2 flags:0x00004000 [13058.028458] Stopper: 0x0 <- 0x0 [13058.029081] Call Trace: [13058.029276] [13058.029848] __schedule+0x72e/0x1570 [13058.030129] ? 
io_schedule_timeout+0x160/0x160 [13058.030864] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13058.031640] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13058.0? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13058.532607] kthread+0x2a7/0x350 [13058.533234] ? kthread_complete_and_exit+0x20/0x20 [13058.533931] ret_from_fork+0x22/0x30 [13058.534246] [13058.534416] task:ksoftirqd/5 state:S stack:25592 pid: 43 ppid: 2 flags:0x00004000 [13058.534913] Call Trace: [13058.535086] [13058.535661] __schedule+0x72e/0x1570 [13058.535942] ? io_schedule_timeout+0x160/0x160 [13058.536698] ? smpboot_thread_fn+0x6b/0x910 [13058.536975] schedule+0x128/0x220 [13058.537552] ? __local_bh_enable+0x90/0x90 [13058.537863] smpboot_thread_fn+0x253/0x910 [13058.538121] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13058.538426] kthread+0x2a7/0x350 [13058.539023] ? kthread_complete_and_exit+0x20/0x20 [13058.539724] ret_from_fork+0x22/0x30 [13058.540016] [13058.540173] task:kworker/5:0H state:I stack:29656 pid: 45 ppid: 2 flags:0x00004000 [13058.540654] Workqueue: 0x0 (events_highpri) [13058.541366] Call Trace: [13058.541525] [13058.542088] __schedule+0x72e/0x1570 [13058.542410] ? io_schedule_timeout+0x160/0x160 [13058.543122] ? lock_downgrade+0x130/0x130 [13058.543359] ? kthread+0x2a7/0x350 [13059.044373] ? kthread_complete_and_exit+0x20/0x20 [13059.045095] ret_from_fork+0x22/0x30 [13059.045369] [13059.045529] task:cpuhp/6 state:S stack:27912 pid: 46 ppid: 2 flags:0x00004000 [13059.046057] Call Trace: [13059.046218] [13059.046786] __schedule+0x72e/0x1570 [13059.047055] ? io_schedule_timeout+0x160/0x160 [13059.047756] ? lockdep_hardirqs_on+0x79/0x100 [13059.048411] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13059.048762] ? cpuhp_invoke_callback+0x830/0x830 [13059.049435] ? smpboot_thread_fn+0x6b/0x910 [13059.049700] schedule+0x128/0x220 [13059.050291] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13059.050988] smpboot_thread_fn+0x253/0x910 [13059.051246] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13059.051600] kthread+0x2a7/0x350 [13059.052162] ? kthread_complete_and_exit+0x20/0x20 [13059.052884] ret_from_fork+0x22/0x30 [13059.053212] [13059.053396] task:migration/6 state:R running task stack:30144 pid: 47 ppid: 2 flags:0x00004000 [13059.054407] Stopper: multi_cpu_stop+0x0/0x370 <- migrate_swap+0x2db/0x520 [13059.054851] Call Trace: [13059.055007] [13059.055518] ? multi_cpu_stop+0x15c/0x370 [13059.08[13059.556209] ? cpu_stopper_thread+0x1f6/0x410 [13059.556949] ? cpu_stop_queue_two_works+0x650/0x650 [13059.557642] ? smpboot_thread_fn+0x6b/0x910 [13059.557918] ? smpboot_thread_fn+0x559/0x910 [13059.558621] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13059.558992] ? kthread+0x2a7/0x350 [13059.559556] ? kthread_complete_and_exit+0x20/0x20 [13059.560278] ? ret_from_fork+0x22/0x30 [13059.560654] [13059.560859] task:ksoftirqd/6 state:S stack:27024 pid: 48 ppid: 2 flags:0x00004000 [13059.561400] Call Trace: [13059.561546] [13059.562124] __schedule+0x72e/0x1570 [13059.562432] ? io_schedule_timeout+0x160/0x160 [13059.563185] ? smpboot_thread_fn+0x6b/0x910 [13059.563430] schedule+0x128/0x220 [13059.564083] ? __local_bh_enable+0x90/0x90 [13059.564334] smpboot_thread_fn+0x253/0x910 [13059.564571] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13059.565145] kthread+0x2a7/0x350 [13059.565804] ? 
kthread_complete_and_exit+0x20/0x20 [13059.566494] ret_from_fork+0x22/0x30 [13059.566872] [13059.567046] task:kworker/6:0H state:I stack:29456 pid: 50 ppid: 2 flags:0x00004000 [13059.567511] Workqueue: 0x0 (events_highpri) [13059.568266] Call Trace: [13059.568444] [13059.596112[13060.069661] schedule+0x128/0x220 [13060.070314] worker_thread+0x152/0xf90 [13060.070568] ? process_one_work+0x1520/0x1520 [13060.071288] kthread+0x2a7/0x350 [13060.071926] ? kthread_complete_and_exit+0x20/0x20 [13060.072646] ret_from_fork+0x22/0x30 [13060.072962] [13060.073124] task:cpuhp/7 state:S stack:28560 pid: 58 ppid: 2 flags:0x00004000 [13060.073573] Call Trace: [13060.073782] [13060.074300] __schedule+0x72e/0x1570 [13060.074543] ? io_schedule_timeout+0x160/0x160 [13060.075225] ? lockdep_hardirqs_on+0x79/0x100 [13060.075916] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13060.076267] ? cpuhp_invoke_callback+0x830/0x830 [13060.076977] ? smpboot_thread_fn+0x6b/0x910 [13060.077241] schedule+0x128/0x220 [13060.077837] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13060.078497] smpboot_thread_fn+0x253/0x910 [13060.078794] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13060.079146] kthread+0x2a7/0x350 [13060.079803] ? kthread_complete_and_exit+0x20/0x20 [13060.080474] ret_from_fork+0x22/0x30 [13060.080873] [13060.081047] task:migration/7 state:S stack:30144 pid: 59 ppid: 2 flags:0x00004000 [13060.081507] Stopper: 0x0 <- 0x0 [13060.082107] Call Trace: [13060.082304] [13060.082896] __schedule+0x72e/0x1570 [13060.? smpboot_thread_fn+0x6b/0x910 [13060.583541] schedule+0x128/0x220 [13060.584157] ? reboot_pid_ns+0xf0/0xf0 [13060.584398] smpboot_thread_fn+0x253/0x910 [13060.584675] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13060.585006] kthread+0x2a7/0x350 [13060.585562] ? kthread_complete_and_exit+0x20/0x20 [13060.586246] ret_from_fork+0x22/0x30 [13060.586561] [13060.586749] task:ksoftirqd/7 state:S stack:29256 pid: 60 ppid: 2 flags:0x00004000 [13060.587230] Call Trace: [13060.587383] [13060.587954] __schedule+0x72e/0x1570 [13060.588218] ? io_schedule_timeout+0x160/0x160 [13060.588913] ? smpboot_thread_fn+0x6b/0x910 [13060.589149] schedule+0x128/0x220 [13060.589753] ? __local_bh_enable+0x90/0x90 [13060.589997] smpboot_thread_fn+0x253/0x910 [13060.590244] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13060.590576] kthread+0x2a7/0x350 [13060.591185] ? kthread_complete_and_exit+0x20/0x20 [13060.591887] ret_from_fork+0x22/0x30 [13060.592172] [13060.592353] task:kworker/7:0H state:I stack:30048 pid: 62 ppid: 2 flags:0x00004000 [13060.592881] Workqueue: 0x0 (events_highpri) [13060.593548] Call Trace: [13060.593730] [13060.594235] __schedule+0x72e/x220 [13061.095103] worker_thread+0x152/0xf90 [13061.095371] ? process_one_work+0x1520/0x1520 [13061.096083] kthread+0x2a7/0x350 [13061.096709] ? kthread_complete_and_exit+0x20/0x20 [13061.097363] ret_from_fork+0x22/0x30 [13061.097716] [13061.097878] task:cpuhp/8 state:S stack:28448 pid: 63 ppid: 2 flags:0x00004000 [13061.098357] Call Trace: [13061.098504] [13061.099067] __schedule+0x72e/0x1570 [13061.099352] ? io_schedule_timeout+0x160/0x160 [13061.100031] ? lockdep_hardirqs_on+0x79/0x100 [13061.100689] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13061.101089] ? cpuhp_invoke_callback+0x830/0x830 [13061.101931] ? smpboot_thread_fn+0x6b/0x910 [13061.102182] schedule+0x128/0x220 [13061.102846] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13061.103514] smpboot_thread_fn+0x253/0x910 [13061.103819] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13061.104171] kthread+0x2a7/0x350 [13061.104799] ? 
kthread_complete_and_exit+0x20/0x20 [13061.105481] ret_from_fork+0x22/0x30 [13061.105816] [13061.105980] task:migration/8 state:S stack:30144 pid: 64 ppid: 2 flags:0x00004000 [13061.106450] Stopper: 0x0 <- 0x0 [13061.581376] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13061.607722] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13061.608080] ? smpboot_thread_fn+0x6b/0x910 [13061.608311] schedule+0x128/0x220 [13061.608918] ? reboot_pid_ns+0xf0/0xf0 [13061.609189] smpboot_thread_fn+0x253/0x910 [13061.609426] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13061.609748] kthread+0x2a7/0x350 [13061.610316] ? kthread_complete_and_exit+0x20/0x20 [13061.611031] ret_from_fork+0x22/0x30 [13061.611318] [13061.611478] task:ksoftirqd/8 state:S stack:29608 pid: 65 ppid: 2 flags:0x00004000 [13061.612021] Call Trace: [13061.612188] [13061.612788] __schedule+0x72e/0x1570 [13061.613053] ? io_schedule_timeout+0x160/0x160 [13061.613797] ? smpboot_thread_fn+0x6b/0x910 [13061.614061] schedule+0x128/0x220 [13061.614664] ? __local_bh_enable+0x90/0x90 [13061.614935] smpboot_thread_fn+0x253/0x910 [13061.615174] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13061.615476] kthread+0x2a7/0x350 [13061.616076] ? kthread_complete_and_exit+0x20/0x20 [13061.616887] ret_from_fork+0x22/0x30 [13061.617194] [13061.6445all Trace: [13062.017760] [13062.018287] __schedule+0x72e/0x1570 [13062.018566] ? io_schedule_timeout+0x160/0x160 [13062.019276] ? lock_downgrade+0x130/0x130 [13062.019522] ? pwq_dec_nr_in_flight+0x230/0x230 [13062.020240] schedule+0x128/0x220 [13062.021248] worker_thread+0x152/0xf90 [13062.021523] ? process_one_work+0x1520/0x1520 [13062.022253] kthread+0x2a7/0x350 [13062.022937] ? kthread_complete_and_exit+0x20/0x20 [13062.023713] ret_from_fork+0x22/0x30 [13062.024281] [13062.024528] task:cpuhp/9 state:S stack:28400 pid: 68 ppid: 2 flags:0x00004000 [13062.025058] Call Trace: [13062.025227] [13062.025826] __schedule+0x72e/0x1570 [13062.026082] ? io_schedule_timeout+0x160/0x160 [13062.026770] ? lockdep_hardirqs_on+0x79/0x100 [13062.027466] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13062.027859] ? cpuhp_invoke_callback+0x830/0x830 [13062.028742] ? smpboot_thread_fn+0x6b/0x910 [13062.029093] schedule+0x128/0x220 [13062.029712] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13062.030405] smpboot_thread_fn+0x253/0x910 [13062.030692] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13062.031062] kthread+0x2a7/0x350 [13062.031680] ? kthread_complete_and_exit+0xtopper: 0x0 <- 0x0 [13062.532762] Call Trace: [13062.532934] [13062.533449] __schedule+0x72e/0x1570 [13062.533752] ? io_schedule_timeout+0x160/0x160 [13062.534451] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13062.535369] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13062.535846] ? smpboot_thread_fn+0x6b/0x910 [13062.536093] schedule+0x128/0x220 [13062.536752] ? reboot_pid_ns+0xf0/0xf0 [13062.537009] smpboot_thread_fn+0x253/0x910 [13062.537240] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13062.537543] kthread+0x2a7/0x350 [13062.538175] ? kthread_complete_and_exit+0x20/0x20 [13062.538906] ret_from_fork+0x22/0x30 [13062.539200] [13062.539363] task:ksoftirqd/9 state:S stack:28968 pid: 70 ppid: 2 flags:0x00004000 [13062.539828] Call Trace: [13062.539987] [13062.540732] __schedule+0x72e/0x1570 [13062.541037] ? io_schedule_timeout+0x160/0x160 [13062.541954] ? smpboot_thread_fn+0x6b/0x910 [13062.542340] schedule+0x128/0x220 [13062.542987] ? __local_bh_enable+0x90/0x90 [13062.543247] smpboot_thread_fn+0x253/0x910 [13062.543484] ? 
__smpboot_create_thread.part.0+0x2e0/0x2e0 [13062.543837] kthread+0x2a7/0x350 [13062.544437] ? kthread_complete_and_exit+0x20/0x20 [13062.545297] ret_f[13063.037426] Call Trace: [13063.045870] [13063.046489] __schedule+0x72e/0x1570 [13063.046823] ? io_schedule_timeout+0x160/0x160 [13063.047514] ? lock_downgrade+0x130/0x130 [13063.047790] ? pwq_dec_nr_in_flight+0x230/0x230 [13063.048500] schedule+0x128/0x220 [13063.049212] worker_thread+0x152/0xf90 [13063.049501] ? process_one_work+0x1520/0x1520 [13063.050410] kthread+0x2a7/0x350 [13063.051178] ? kthread_complete_and_exit+0x20/0x20 [13063.051905] ret_from_fork+0x22/0x30 [13063.052245] [13063.052423] task:cpuhp/10 state:S stack:28272 pid: 73 ppid: 2 flags:0x00004000 [13063.052953] Call Trace: [13063.053134] [13063.053715] __schedule+0x72e/0x1570 [13063.054006] ? io_schedule_timeout+0x160/0x160 [13063.054878] ? lockdep_hardirqs_on+0x79/0x100 [13063.055729] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13063.056082] ? cpuhp_invoke_callback+0x830/0x830 [13063.056799] ? smpboot_thread_fn+0x6b/0x910 [13063.057061] schedule+0x128/0x220 [13063.057680] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13063.058655] smpboot_thread_fn+0x253/0x910 [13063.059000] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13063.059311] kthread+0x2a7/0x350 [13063.059910] ? kthread_compl[13063.552020] Stopper: 0x0 <- 0x0 [13063.560885] Call Trace: [13063.561067] [13063.561705] __schedule+0x72e/0x1570 [13063.562018] ? io_schedule_timeout+0x160/0x160 [13063.562840] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13063.563577] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13063.563997] ? smpboot_thread_fn+0x6b/0x910 [13063.564260] schedule+0x128/0x220 [13063.564908] ? reboot_pid_ns+0xf0/0xf0 [13063.565188] smpboot_thread_fn+0x253/0x910 [13063.565443] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13063.565800] kthread+0x2a7/0x350 [13063.566661] ? kthread_complete_and_exit+0x20/0x20 [13063.567448] ret_from_fork+0x22/0x30 [13063.567792] [13063.567965] task:ksoftirqd/10 state:S stack:29256 pid: 75 ppid: 2 flags:0x00004000 [13063.568452] Call Trace: [13063.568939] [13063.569489] __schedule+0x72e/0x1570 [13063.569786] ? io_schedule_timeout+0x160/0x160 [13063.570498] ? smpboot_thread_fn+0x6b/0x910 [13063.570782] schedule+0x128/0x220 [13063.571472] ? __local_bh_enable+0x90/0x90 [13063.571758] smpboot_thread_fn+0x253/0x910 [13063.572019] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13063.572356] kthread+0x2a7/0x350 [13063.573028] ? kthread_complete_and_exit+0x20/0x20 [13063.573762] ret_from_fork+0x22/0x30 [13063.574074] [13063.574249] tas/0x1570 [13064.074818] ? io_schedule_timeout+0x160/0x160 [13064.075485] ? lock_downgrade+0x130/0x130 [13064.075757] ? pwq_dec_nr_in_flight+0x230/0x230 [13064.076464] schedule+0x128/0x220 [13064.077115] worker_thread+0x152/0xf90 [13064.077405] ? process_one_work+0x1520/0x1520 [13064.078112] kthread+0x2a7/0x350 [13064.078965] ? kthread_complete_and_exit+0x20/0x20 [13064.079695] ret_from_fork+0x22/0x30 [13064.080024] [13064.080334] task:cpuhp/11 state:S stack:28448 pid: 78 ppid: 2 flags:0x00004000 [13064.080822] Call Trace: [13064.080986] [13064.081504] __schedule+0x72e/0x1570 [13064.081776] ? io_schedule_timeout+0x160/0x160 [13064.082456] ? lockdep_hardirqs_on+0x79/0x100 [13064.083178] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13064.083526] ? cpuhp_invoke_callback+0x830/0x830 [13064.084271] ? smpboot_thread_fn+0x6b/0x910 [13064.084605] schedule+0x128/0x220 [13064.085249] ? 
cpu_mitigations_auto_nosmt+0x20/0x20 [13064.085998] smpboot_thread_fn+0x253/0x910 [13064.086256] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13064.086604] kthread+0x2a7/0x350 [13064.087219] ? kthread_complete_and_exit+0x20/0x20 [13064.087923] ret_from_fork+0x22/0x30 [13064.088207] [13064.088366] task:migration/11 state:S stack:30144 pid: 79 ppid: ? io_schedule_timeout+0x160/0x160 [13064.589612] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13064.590511] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13064.590903] ? smpboot_thread_fn+0x6b/0x910 [13064.591201] schedule+0x128/0x220 [13064.591865] ? reboot_pid_ns+0xf0/0xf0 [13064.592120] smpboot_thread_fn+0x253/0x910 [13064.592382] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13064.592762] kthread+0x2a7/0x350 [13064.593367] ? kthread_complete_and_exit+0x20/0x20 [13064.594075] ret_from_fork+0x22/0x30 [13064.594374] [13064.594544] task:ksoftirqd/11 state:S stack:29256 pid: 80 ppid: 2 flags:0x00004000 [13064.595079] Call Trace: [13064.595257] [13064.595800] __schedule+0x72e/0x1570 [13064.596047] ? io_schedule_timeout+0x160/0x160 [13064.596806] ? smpboot_thread_fn+0x6b/0x910 [13064.597054] schedule+0x128/0x220 [13064.597677] ? __local_bh_enable+0x90/0x90 [13064.597936] smpboot_thread_fn+0x253/0x910 [13064.598178] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13064.598480] kthread+0x2a7/0x350 [13064.599086] ? kthread_complete_and_exit+0x20/0x20 [13064.599795] ret_from_fork+0x22/0x30 all Trace: [13065.100334] [13065.100939] __schedule+0x72e/0x1570 [13065.101232] ? io_schedule_timeout+0x160/0x160 [13065.101955] ? lock_downgrade+0x130/0x130 [13065.102230] ? pwq_dec_nr_in_flight+0x230/0x230 [13065.103036] schedule+0x128/0x220 [13065.103659] worker_thread+0x152/0xf90 [13065.103962] ? process_one_work+0x1520/0x1520 [13065.104600] kthread+0x2a7/0x350 [13065.105197] ? kthread_complete_and_exit+0x20/0x20 [13065.105916] ret_from_fork+0x22/0x30 [13065.106205] [13065.106359] task:cpuhp/12 state:S stack:28520 pid: 83 ppid: 2 flags:0x00004000 [13065.106905] Call Trace: [13065.107074] [13065.107577] __schedule+0x72e/0x1570 [13065.107884] ? io_schedule_timeout+0x160/0x160 [13065.108537] ? lockdep_hardirqs_on+0x79/0x100 [13065.109213] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13065.109565] ? cpuhp_invoke_callback+0x830/0x830 [13065.110248] ? smpboot_thread_fn+0x6b/0x910 [13065.110484] schedule+0x128/0x220 [13065.111095] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13065.111780] smpboot_thread_fn+0x253/0x910 [13065.112026] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13065.112358] kthread+0x2a7/0x350 [13065.112995] ? kthread_complete_and_exit+0x20/0x20 [13065.113726] ret_from_fork+0x22/0x30 [13065.114054] [13065.114223] task:migration/12 [13065.614869] ? io_schedule_timeout+0x160/0x160 [13065.615576] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13065.616330] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13065.616743] ? smpboot_thread_fn+0x6b/0x910 [13065.617034] schedule+0x128/0x220 [13065.617613] ? reboot_pid_ns+0xf0/0xf0 [13065.617912] smpboot_thread_fn+0x253/0x910 [13065.618178] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13065.618535] kthread+0x2a7/0x350 [13065.619248] ? kthread_complete_and_exit+0x20/0x20 [13065.619974] ret_from_fork+0x22/0x30 [13065.620272] [13065.620448] task:ksoftirqd/12 state:S stack:29616 pid: 85 ppid: 2 flags:0x00004000 [13065.620968] Call Trace: [13065.621133] [13065.621736] __schedule+0x72e/0x1570 [13065.622026] ? io_schedule_timeout+0x160/0x160 [13065.622784] ? 
smpboot_thread_fn+0x6b/0x910 [13065.623026] schedule+0x128/0x220 [13065.623583] ? __local_bh_enable+0x90/0x90 [13065.623884] smpboot_thread_fn+0x253/0x910 [13065.624134] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13065.624439] kthread+0x2a7/0x350 [13065.625021] ? kthread_complete_and_exit+0x20/0x20 [13065.625717] ret_from_fork+0x22/0x30 [13065.625985] [13065.626141] task:kworker/12:0H state:I stack:29656 pid: 87 ppid: 2 flags:0x00004000 [13065.626589] Workqueue: 0x0 (events_highpri) [13065.627301] Call Trace: [13065.627476] [130 [13066.128306] schedule+0x128/0x220 [13066.128960] worker_thread+0x152/0xf90 [13066.129239] ? process_one_work+0x1520/0x1520 [13066.129940] kthread+0x2a7/0x350 [13066.130531] ? kthread_complete_and_exit+0x20/0x20 [13066.131221] ret_from_fork+0x22/0x30 [13066.131507] [13066.131699] task:cpuhp/13 state:S stack:28560 pid: 88 ppid: 2 flags:0x00004000 [13066.132187] Call Trace: [13066.132359] [13066.132951] __schedule+0x72e/0x1570 [13066.133208] ? io_schedule_timeout+0x160/0x160 [13066.133877] ? lockdep_hardirqs_on+0x79/0x100 [13066.134533] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13066.134903] ? cpuhp_invoke_callback+0x830/0x830 [13066.135580] ? smpboot_thread_fn+0x6b/0x910 [13066.135844] schedule+0x128/0x220 [13066.136433] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13066.137145] smpboot_thread_fn+0x253/0x910 [13066.137425] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13066.137760] kthread+0x2a7/0x350 [13066.138366] ? kthread_complete_and_exit+0x20/0x20 [13066.139041] ret_from_fork+0x22/0x30 [13066.139329] [13066.139484] task:migration/13 state:S stack:30144 pid: 89 ppid: 2 flags:0x00004000 [13066.139988] Stopper: 0x0 <- 0x0 [13066.140554] Call Trace: [13066.140724] [13066.141238] __schedule+0x72e/0x1570 [1[13066.633492] ? smpboot_thread_fn+0x6b/0x910 [13066.641938] schedule+0x128/0x220 [13066.642544] ? reboot_pid_ns+0xf0/0xf0 [13066.642813] smpboot_thread_fn+0x253/0x910 [13066.643054] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13066.643362] kthread+0x2a7/0x350 [13066.643967] ? kthread_complete_and_exit+0x20/0x20 [13066.644632] ret_from_fork+0x22/0x30 [13066.644983] [13066.645149] task:ksoftirqd/13 state:S stack:27248 pid: 90 ppid: 2 flags:0x00004000 [13066.645621] Call Trace: [13066.645807] [13066.646332] __schedule+0x72e/0x1570 [13066.646600] ? io_schedule_timeout+0x160/0x160 [13066.647334] ? smpboot_thread_fn+0x6b/0x910 [13066.647595] schedule+0x128/0x220 [13066.648191] ? __local_bh_enable+0x90/0x90 [13066.648443] smpboot_thread_fn+0x253/0x910 [13066.648732] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13066.649090] kthread+0x2a7/0x350 [13066.649679] ? kthread_complete_and_exit+0x20/0x20 [13066.650368] ret_from_fork+0x22/0x30 [13066.650673] [13066.650861] task:kworker/13:0H state:I stack:28856 pid: 92 ppid: 2 flags:0x00004000 [13066.651326] Workqueue: 0x0 (events_highpri) [13066.651998] Call Trace: [13066.652165] [13066.652738] __schedule+0x72e/0x1570 [130x220 [13067.153696] worker_thread+0x152/0xf90 [13067.154001] ? process_one_work+0x1520/0x1520 [13067.154634] kthread+0x2a7/0x350 [13067.155283] ? kthread_complete_and_exit+0x20/0x20 [13067.156105] ret_from_fork+0x22/0x30 [13067.156395] [13067.156558] task:cpuhp/14 state:S stack:28560 pid: 93 ppid: 2 flags:0x00004000 [13067.157105] Call Trace: [13067.157272] [13067.157822] __schedule+0x72e/0x1570 [13067.158082] ? io_schedule_timeout+0x160/0x160 [13067.158745] ? lockdep_hardirqs_on+0x79/0x100 [13067.159316] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13067.159630] ? 
cpuhp_invoke_callback+0x830/0x830 [13067.160353] ? smpboot_thread_fn+0x6b/0x910 [13067.160621] schedule+0x128/0x220 [13067.161205] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13067.161935] smpboot_thread_fn+0x253/0x910 [13067.162190] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13067.162534] kthread+0x2a7/0x350 [13067.163154] ? kthread_complete_and_exit+0x20/0x20 [13067.163840] ret_from_fork+0x22/0x30 [13067.164116] [13067.164277] task:migration/14 state:S stack:30000 pid: 94 ppid: 2 flags:0x00004000 [13067.164775] Stopper: 0x0 <- 0x0 [13067.165396] Call Trace: [13067.165546] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13067.666164] ? smpboot_thread_fn+0x6b/0x910 [13067.666402] schedule+0x128/0x220 [13067.667016] ? reboot_pid_ns+0xf0/0xf0 [13067.667268] smpboot_thread_fn+0x253/0x910 [13067.667510] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13067.667864] kthread+0x2a7/0x350 [13067.668438] ? kthread_complete_and_exit+0x20/0x20 [13067.669138] ret_from_fork+0x22/0x30 [13067.669422] [13067.669579] task:ksoftirqd/14 state:S stack:27168 pid: 95 ppid: 2 flags:0x00004000 [13067.670123] Call Trace: [13067.670299] [13067.670848] __schedule+0x72e/0x1570 [13067.671107] ? io_schedule_timeout+0x160/0x160 [13067.671865] ? smpboot_thread_fn+0x6b/0x910 [13067.672131] schedule+0x128/0x220 [13067.672725] ? __local_bh_enable+0x90/0x90 [13067.672997] smpboot_thread_fn+0x253/0x910 [13067.673237] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13067.673539] kthread+0x2a7/0x350 [13067.674148] ? kthread_complete_and_exit+0x20/0x20 [13067.674815] ret_from_fork+0x22/0x30 [13067.675106] [13067.675264] task:kworker/14:0H state:I stack:29656 pid: 97 ppid: 2 flags:0x00004000 [13067.675783] Workqueue:[13068.176183] ? lock_downgrade+0x130/0x130 [13068.176471] ? pwq_dec_nr_in_flight+0x230/0x230 [13068.177201] schedule+0x128/0x220 [13068.177832] worker_thread+0x152/0xf90 [13068.178119] ? process_one_work+0x1520/0x1520 [13068.178799] kthread+0x2a7/0x350 [13068.179409] ? kthread_complete_and_exit+0x20/0x20 [13068.180099] ret_from_fork+0x22/0x30 [13068.180385] [13068.180543] task:cpuhp/15 state:S stack:28560 pid: 98 ppid: 2 flags:0x00004000 [13068.181091] Call Trace: [13068.181267] [13068.181836] __schedule+0x72e/0x1570 [13068.182100] ? io_schedule_timeout+0x160/0x160 [13068.182788] ? lockdep_hardirqs_on+0x79/0x100 [13068.183473] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13068.183834] ? cpuhp_invoke_callback+0x830/0x830 [13068.184516] ? smpboot_thread_fn+0x6b/0x910 [13068.184786] schedule+0x128/0x220 [13068.185401] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13068.186085] smpboot_thread_fn+0x253/0x910 [13068.186336] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13068.186648] kthread+0x2a7/0x350 [13068.187257] ? kthread_complete_and_exit+0x20/0x20 [13068.187980] ret_from_fork+0x22/0x30 [13068.188265] [13068.188423] task:migration/15 state:S stack:30144 pid: 99 ppid: 2 flags:0x00004000 [13068.188968] Stopper: 0x0 <- 0x0 [? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13068.690021] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13068.690362] ? smpboot_thread_fn+0x6b/0x910 [13068.690594] schedule+0x128/0x220 [13068.691191] ? reboot_pid_ns+0xf0/0xf0 [13068.691418] smpboot_thread_fn+0x253/0x910 [13068.691648] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13068.692029] kthread+0x2a7/0x350 [13068.692593] ? 
kthread_complete_and_exit+0x20/0x20 [13068.693279] ret_from_fork+0x22/0x30 [13068.693565] [13068.694066] task:ksoftirqd/15 state:S stack:26152 pid: 100 ppid: 2 flags:0x00004000 [13068.694578] Call Trace: [13068.694763] [13068.695305] __schedule+0x72e/0x1570 [13068.695588] ? io_schedule_timeout+0x160/0x160 [13068.696764] ? smpboot_thread_fn+0x6b/0x910 [13068.697088] schedule+0x128/0x220 [13068.697710] ? __local_bh_enable+0x90/0x90 [13068.697989] smpboot_thread_fn+0x253/0x910 [13068.698220] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13068.698527] kthread+0x2a7/0x350 [13068.699116] ? kthread_complete_and_exit+0x20/0x20 [13068.699797] ret_from_fork+0x22/0x30 [13068.700103] [13068.700261] task:kworker__schedule+0x72e/0x1570 [13069.200930] ? io_schedule_timeout+0x160/0x160 [13069.201582] ? lock_downgrade+0x130/0x130 [13069.201858] ? pwq_dec_nr_in_flight+0x230/0x230 [13069.202549] schedule+0x128/0x220 [13069.203176] worker_thread+0x152/0xf90 [13069.203450] ? process_one_work+0x1520/0x1520 [13069.204126] kthread+0x2a7/0x350 [13069.204753] ? kthread_complete_and_exit+0x20/0x20 [13069.205429] ret_from_fork+0x22/0x30 [13069.205761] [13069.205944] task:cpuhp/16 state:S stack:28448 pid: 103 ppid: 2 flags:0x00004000 [13069.206383] Call Trace: [13069.206532] [13069.207083] __schedule+0x72e/0x1570 [13069.207375] ? io_schedule_timeout+0x160/0x160 [13069.208046] ? lockdep_hardirqs_on+0x79/0x100 [13069.208764] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13069.209105] ? cpuhp_invoke_callback+0x830/0x830 [13069.209803] ? smpboot_thread_fn+0x6b/0x910 [13069.210068] schedule+0x128/0x220 [13069.210626] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13069.211302] smpboot_thread_fn+0x253/0x910 [13069.211552] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13069.211878] kthread+0x2a7/0x350 [13069.212480] ? kthread_complete_and_exit+0x20/0x20 [13069.213178] ret_from_fork+0x22/0x30 [13069.213465] [13069.213620] task:migration/16 state:S stack:30144 pid: 104 ppid: 2 flags:0meout+0x160/0x160 [13069.714936] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13069.715654] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13069.716042] ? smpboot_thread_fn+0x6b/0x910 [13069.716275] schedule+0x128/0x220 [13069.716875] ? reboot_pid_ns+0xf0/0xf0 [13069.717130] smpboot_thread_fn+0x253/0x910 [13069.717364] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13069.717687] kthread+0x2a7/0x350 [13069.718263] ? kthread_complete_and_exit+0x20/0x20 [13069.718994] ret_from_fork+0x22/0x30 [13069.719290] [13069.719479] task:ksoftirqd/16 state:S stack:25096 pid: 105 ppid: 2 flags:0x00004000 [13069.719984] Call Trace: [13069.720177] [13069.720731] __schedule+0x72e/0x1570 [13069.721028] ? io_schedule_timeout+0x160/0x160 [13069.721786] ? smpboot_thread_fn+0x6b/0x910 [13069.722064] schedule+0x128/0x220 [13069.722634] ? __local_bh_enable+0x90/0x90 [13069.722905] smpboot_thread_fn+0x253/0x910 [13069.723164] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13069.723471] kthread+0x2a7/0x350 [13069.724074] ? kthread_complete_and_exit+0x20/0x20 [13069.724823] ret_from_fork+0x22/0x30 [13069.725142] < [13070.226067] __schedule+0x72e/0x1570 [13070.226337] ? io_schedule_timeout+0x160/0x160 [13070.227022] ? lock_downgrade+0x130/0x130 [13070.227285] ? pwq_dec_nr_in_flight+0x230/0x230 [13070.228022] schedule+0x128/0x220 [13070.228648] worker_thread+0x152/0xf90 [13070.228975] ? process_one_work+0x1520/0x1520 [13070.229661] kthread+0x2a7/0x350 [13070.230287] ? 
kthread_complete_and_exit+0x20/0x20 [13070.231030] ret_from_fork+0x22/0x30 [13070.231324] [13070.231483] task:cpuhp/17 state:S stack:28560 pid: 108 ppid: 2 flags:0x00004000 [13070.232034] Call Trace: [13070.232210] [13070.232780] __schedule+0x72e/0x1570 [13070.233063] ? io_schedule_timeout+0x160/0x160 [13070.233741] ? lockdep_hardirqs_on+0x79/0x100 [13070.234455] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13070.234820] ? cpuhp_invoke_callback+0x830/0x830 [13070.235548] ? smpboot_thread_fn+0x6b/0x910 [13070.235817] schedule+0x128/0x220 [13070.236461] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13070.237156] smpboot_thread_fn+0x253/0x910 [13070.237413] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13070.237767] kthread+0x2a7/0x350 [13070.238408] ? kthr[13070.730646] Stopper: 0x0 <- 0x0 [13070.739429] Call Trace: [13070.739607] [13070.740152] __schedule+0x72e/0x1570 [13070.740418] ? io_schedule_timeout+0x160/0x160 [13070.741138] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13070.741915] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13070.742228] ? smpboot_thread_fn+0x6b/0x910 [13070.742478] schedule+0x128/0x220 [13070.743116] ? reboot_pid_ns+0xf0/0xf0 [13070.743374] smpboot_thread_fn+0x253/0x910 [13070.743614] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13070.743983] kthread+0x2a7/0x350 [13070.744607] ? kthread_complete_and_exit+0x20/0x20 [13070.745299] ret_from_fork+0x22/0x30 [13070.745597] [13070.745790] task:ksoftirqd/17 state:S stack:24760 pid: 110 ppid: 2 flags:0x00004000 [13070.746267] Call Trace: [13070.746414] [13070.746961] __schedule+0x72e/0x1570 [13070.747228] ? io_schedule_timeout+0x160/0x160 [13070.747972] ? smpboot_thread_fn+0x6b/0x910 [13070.748247] schedule+0x128/0x220 [13070.748853] ? __local_bh_enable+0x90/0x90 [13070.749132] smpboot_thread_fn+0x253/0x910 [13070.749374] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13070.749705] kthread+0x2a7/0x350 [13070.750293] ? kthread_compleorkqueue: 0x0 (events_highpri) [13071.251408] Call Trace: [13071.251604] [13071.252148] __schedule+0x72e/0x1570 [13071.252395] ? io_schedule_timeout+0x160/0x160 [13071.253114] ? lock_downgrade+0x130/0x130 [13071.253396] ? pwq_dec_nr_in_flight+0x230/0x230 [13071.254104] schedule+0x128/0x220 [13071.254769] worker_thread+0x152/0xf90 [13071.255080] ? process_one_work+0x1520/0x1520 [13071.255793] kthread+0x2a7/0x350 [13071.256415] ? kthread_complete_and_exit+0x20/0x20 [13071.257117] ret_from_fork+0x22/0x30 [13071.257413] [13071.257578] task:cpuhp/18 state:S stack:28560 pid: 113 ppid: 2 flags:0x00004000 [13071.258132] Call Trace: [13071.258313] [13071.258882] __schedule+0x72e/0x1570 [13071.259147] ? io_schedule_timeout+0x160/0x160 [13071.259816] ? lockdep_hardirqs_on+0x79/0x100 [13071.260551] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13071.260920] ? cpuhp_invoke_callback+0x830/0x830 [13071.261636] ? smpboot_thread_fn+0x6b/0x910 [13071.261905] schedule+0x128/0x220 [13071.262527] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13071.263239] smpboot_thread_fn+0x253/0x910 [13071.263498] ?[13071.738346] [13071.764120] task:migration/18 state:S stack:30144 pid: 114 ppid: 2 flags:0x00004000 [13071.764648] Stopper: 0x0 <- 0x0 [13071.765255] Call Trace: [13071.765427] [13071.766035] __schedule+0x72e/0x1570 [13071.766302] ? io_schedule_timeout+0x160/0x160 [13071.767048] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13071.767845] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13071.768209] ? smpboot_thread_fn+0x6b/0x910 [13071.768436] schedule+0x128/0x220 [13071.769047] ? 
reboot_pid_ns+0xf0/0xf0 [13071.769318] smpboot_thread_fn+0x253/0x910 [13071.769569] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13071.769906] kthread+0x2a7/0x350 [13071.770509] ? kthread_complete_and_exit+0x20/0x20 [13071.771238] ret_from_fork+0x22/0x30 [13071.771539] [13071.771732] task:ksoftirqd/18 state:S stack:29776 pid: 115 ppid: 2 flags:0x00004000 [13071.772279] Call Trace: [13071.772453] [13071.773048] __schedule+0x72e/0x1570 [13071.773341] ? io_schedule_timeout+0x160/0x160 [13071.774094] ? smpboot_thread_fn+0x6b/0x910 [13071.774341] schedule+0x128/0x220 [13071.775013] ? __local_bh_enable+0x90/0x90 [130[13072.275654] ret_from_fork+0x22/0x30 [13072.275982] [13072.276147] task:kworker/18:0H state:I stack:30048 pid: 117 ppid: 2 flags:0x00004000 [13072.276633] Workqueue: 0x0 (events_highpri) [13072.277307] Call Trace: [13072.277467] [13072.278030] __schedule+0x72e/0x1570 [13072.278292] ? io_schedule_timeout+0x160/0x160 [13072.278999] ? lock_downgrade+0x130/0x130 [13072.279261] ? pwq_dec_nr_in_flight+0x230/0x230 [13072.279978] schedule+0x128/0x220 [13072.280565] worker_thread+0x152/0xf90 [13072.280892] ? process_one_work+0x1520/0x1520 [13072.281634] kthread+0x2a7/0x350 [13072.282248] ? kthread_complete_and_exit+0x20/0x20 [13072.283027] ret_from_fork+0x22/0x30 [13072.283325] [13072.283484] task:cpuhp/19 state:S stack:28272 pid: 118 ppid: 2 flags:0x00004000 [13072.283990] Call Trace: [13072.284159] [13072.284708] __schedule+0x72e/0x1570 [13072.285007] ? io_schedule_timeout+0x160/0x160 [13072.285648] ? lockdep_hardirqs_on+0x79/0x100 [13072.286350] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13072.286767] ? cpuhp_invoke_callback+0x830/0x830 [13072.287571] ? smpboot_thread_fn+0x6b/0x910 [13072.287839] schedule+0x128/0x220 [13072.288499] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13072.289204] smpboot_thread_fn+0x253/0x910 [13072.289470] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13072.289812] kthread+0x2a7/0x350 [13072.290455] ? kthread_complete_and_exit+0x20/0x20 [1307x0 [13072.791522] Call Trace: [13072.791741] [13072.792303] __schedule+0x72e/0x1570 [13072.792591] ? io_schedule_timeout+0x160/0x160 [13072.793351] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13072.794108] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13072.794459] ? smpboot_thread_fn+0x6b/0x910 [13072.794713] schedule+0x128/0x220 [13072.795318] ? reboot_pid_ns+0xf0/0xf0 [13072.795559] smpboot_thread_fn+0x253/0x910 [13072.795835] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13072.796162] kthread+0x2a7/0x350 [13072.796771] ? kthread_complete_and_exit+0x20/0x20 [13072.797471] ret_from_fork+0x22/0x30 [13072.797817] [13072.798025] task:ksoftirqd/19 state:S stack:28520 pid: 120 ppid: 2 flags:0x00004000 [13072.798480] Call Trace: [13072.798629] [13072.799196] __schedule+0x72e/0x1570 [13072.799472] ? io_schedule_timeout+0x160/0x160 [13072.800230] ? smpboot_thread_fn+0x6b/0x910 [13072.800491] schedule+0x128/0x220 [13072.801110] ? __local_bh_enable+0x90/0x90 [13072.801362] smpboot_thread_fn+0x253/0x910 [13072.801598] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13072.801945] kthread+0x2a7/0x350 [13072.802563] ? kthread_complete_and_exit+0x20/0x20 [13072.8events_highpri) [13073.303752] Call Trace: [13073.303944] [13073.304470] __schedule+0x72e/0x1570 [13073.304759] ? io_schedule_timeout+0x160/0x160 [13073.305437] ? lock_downgrade+0x130/0x130 [13073.305671] ? pwq_dec_nr_in_flight+0x230/0x230 [13073.306415] schedule+0x128/0x220 [13073.307073] worker_thread+0x152/0xf90 [13073.307376] ? 
process_one_work+0x1520/0x1520 [13073.308095] kthread+0x2a7/0x350 [13073.308720] ? kthread_complete_and_exit+0x20/0x20 [13073.309413] ret_from_fork+0x22/0x30 [13073.309658] [13073.309857] task:cpuhp/20 state:S stack:28272 pid: 123 ppid: 2 flags:0x00004000 [13073.310411] Call Trace: [13073.310561] [13073.311156] __schedule+0x72e/0x1570 [13073.311429] ? io_schedule_timeout+0x160/0x160 [13073.312094] ? lockdep_hardirqs_on+0x79/0x100 [13073.312776] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13073.313140] ? cpuhp_invoke_callback+0x830/0x830 [13073.313895] ? smpboot_thread_fn+0x6b/0x910 [13073.314191] schedule+0x128/0x220 [13073.314799] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13073.315512] smpboot_thread_fn+0x253/0x910 [13073.315833] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13073.316184] kthread+0x2a7/0x350 [13073.316826] ? kthread_complete_and_exitopper: 0x0 <- 0x0 [13073.818049] Call Trace: [13073.818245] [13073.819124] __schedule+0x72e/0x1570 [13073.819425] ? io_schedule_timeout+0x160/0x160 [13073.820149] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13073.820939] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13073.821268] ? smpboot_thread_fn+0x6b/0x910 [13073.821497] schedule+0x128/0x220 [13073.822105] ? reboot_pid_ns+0xf0/0xf0 [13073.822370] smpboot_thread_fn+0x253/0x910 [13073.822650] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13073.823048] kthread+0x2a7/0x350 [13073.823677] ? kthread_complete_and_exit+0x20/0x20 [13073.824393] ret_from_fork+0x22/0x30 [13073.824686] [13073.824873] task:ksoftirqd/20 state:S stack:29960 pid: 125 ppid: 2 flags:0x00004000 [13073.825354] Call Trace: [13073.825503] [13073.826081] __schedule+0x72e/0x1570 [13073.826348] ? io_schedule_timeout+0x160/0x160 [13073.827096] ? smpboot_thread_fn+0x6b/0x910 [13073.827351] schedule+0x128/0x220 [13073.827981] ? __local_bh_enable+0x90/0x90 [13073.828235] smpboot_thread_fn+0x253/0x910 [13073.828474] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13073.828813] kthread+0x2a7/0x350 [13073.829450] ? kthread_complete_and_exit+0x2orkqueue: 0x0 (events_highpri) [13074.330561] Call Trace: [13074.330753] [13074.331275] __schedule+0x72e/0x1570 [13074.331526] ? io_schedule_timeout+0x160/0x160 [13074.332226] ? lock_downgrade+0x130/0x130 [13074.332534] ? pwq_dec_nr_in_flight+0x230/0x230 [13074.333318] schedule+0x128/0x220 [13074.333943] worker_thread+0x152/0xf90 [13074.334236] ? process_one_work+0x1520/0x1520 [13074.334985] kthread+0x2a7/0x350 [13074.335570] ? kthread_complete_and_exit+0x20/0x20 [13074.336273] ret_from_fork+0x22/0x30 [13074.336563] [13074.336762] task:cpuhp/21 state:S stack:28272 pid: 128 ppid: 2 flags:0x00004000 [13074.337240] Call Trace: [13074.337393] [13074.337955] __schedule+0x72e/0x1570 [13074.338221] ? io_schedule_timeout+0x160/0x160 [13074.338876] ? lockdep_hardirqs_on+0x79/0x100 [13074.339522] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13074.339922] ? cpuhp_invoke_callback+0x830/0x830 [13074.340645] ? smpboot_thread_fn+0x6b/0x910 [13074.340917] schedule+0x128/0x220 [13074.341523] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13074.342221] smpboot_thread_fn+0x253/0x910 [13074.342523] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13074.342887] kthread+0x2a7/0x350 [13074.343499] ? kthread_k:29960 pid: 129 ppid: 2 flags:0x00004000 [13074.744171] Stopper: 0x0 <- 0x0 [13074.744811] Call Trace: [13074.745011] [13074.745558] __schedule+0x72e/0x1570 [13074.745981] ? io_schedule_timeout+0x160/0x160 [13074.746740] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13074.747520] ? 
_raw_spin_unlock_irqrestore+0x59/0x70 [13074.747892] ? smpboot_thread_fn+0x6b/0x910 [13074.748202] schedule+0x128/0x220 [13074.748896] ? reboot_pid_ns+0xf0/0xf0 [13074.749210] smpboot_thread_fn+0x253/0x910 [13074.749447] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13074.749801] kthread+0x2a7/0x350 [13074.750446] ? kthread_complete_and_exit+0x20/0x20 [13074.751412] ret_from_fork+0x22/0x30 [13074.751828] [13074.752036] task:ksoftirqd/21 state:S stack:29944 pid: 130 ppid: 2 flags:0x00004000 [13074.752510] Call Trace: [13074.752655] [13074.753249] __schedule+0x72e/0x1570 [13074.753696] ? io_schedule_timeout+0x160/0x160 [13074.754457] ? smpboot_thread_fn+0x6b/0x910 [13074.754743] schedule+0x128/0x220 [13074.755372] ? __local_bh_enable+0x90/0x90 [13074.755611] smpboot_thread_fn+0x253/0x910 [13074.755891] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13074.756259] kthread+0x2a7/0x350 [13074.756962] ? kthread_complete_and_exit+0x20/0x20 [13074.8kblockd) [13075.257777] Call Trace: [13075.257959] [13075.258597] __schedule+0x72e/0x1570 [13075.259012] ? io_schedule_timeout+0x160/0x160 [13075.259701] ? lock_downgrade+0x130/0x130 [13075.259990] ? pwq_dec_nr_in_flight+0x230/0x230 [13075.260698] schedule+0x128/0x220 [13075.261371] worker_thread+0x152/0xf90 [13075.261642] ? process_one_work+0x1520/0x1520 [13075.262355] kthread+0x2a7/0x350 [13075.263054] ? kthread_complete_and_exit+0x20/0x20 [13075.263844] ret_from_fork+0x22/0x30 [13075.264261] [13075.264437] task:cpuhp/22 state:S stack:28560 pid: 133 ppid: 2 flags:0x00004000 [13075.264920] Call Trace: [13075.265114] [13075.265627] __schedule+0x72e/0x1570 [13075.265922] ? io_schedule_timeout+0x160/0x160 [13075.266632] ? lockdep_hardirqs_on+0x79/0x100 [13075.267400] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13075.267779] ? cpuhp_invoke_callback+0x830/0x830 [13075.268692] ? smpboot_thread_fn+0x6b/0x910 [13075.268978] schedule+0x128/0x220 [13075.269673] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13075.270416] smpboot_thread_fn+0x253/0x910 [13075.270689] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13075.271017] kthread+0x2a7/0x350 [13075.271600] ? kth[13075.763571] Stopper: 0x0 <- 0x0 [13075.772865] Call Trace: [13075.773153] [13075.773663] __schedule+0x72e/0x1570 [13075.773957] ? io_schedule_timeout+0x160/0x160 [13075.774778] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13075.775541] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13075.775893] ? smpboot_thread_fn+0x6b/0x910 [13075.776191] schedule+0x128/0x220 [13075.776817] ? reboot_pid_ns+0xf0/0xf0 [13075.777238] smpboot_thread_fn+0x253/0x910 [13075.777666] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13075.778170] kthread+0x2a7/0x350 [13075.778812] ? kthread_complete_and_exit+0x20/0x20 [13075.779506] ret_from_fork+0x22/0x30 [13075.779855] [13075.780133] task:ksoftirqd/22 state:S stack:29864 pid: 135 ppid: 2 flags:0x00004000 [13075.780593] Call Trace: [13075.780784] [13075.781320] __schedule+0x72e/0x1570 [13075.781578] ? io_schedule_timeout+0x160/0x160 [13075.782338] ? smpboot_thread_fn+0x6b/0x910 [13075.782681] schedule+0x128/0x220 [13075.783337] ? __local_bh_enable+0x90/0x90 [13075.783585] smpboot_thread_fn+0x253/0x910 [13075.783859] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13075.784229] kthread+0x2a7/0x350 [13075.784927] ? kthread_complete_and_exit+0x20/0x20 [130kblockd) [13076.285694] Call Trace: [13076.285915] [13076.286471] __schedule+0x72e/0x1570 [13076.286769] ? io_schedule_timeout+0x160/0x160 [13076.287485] ? lock_downgrade+0x130/0x130 [13076.287799] ? 
pwq_dec_nr_in_flight+0x230/0x230 [13076.288544] schedule+0x128/0x220 [13076.289208] worker_thread+0x152/0xf90 [13076.289863] ? process_one_work+0x1520/0x1520 [13076.290591] kthread+0x2a7/0x350 [13076.291211] ? kthread_complete_and_exit+0x20/0x20 [13076.291964] ret_from_fork+0x22/0x30 [13076.292279] [13076.292449] task:cpuhp/23 state:S stack:28560 pid: 138 ppid: 2 flags:0x00004000 [13076.292997] Call Trace: [13076.293187] [13076.293993] __schedule+0x72e/0x1570 [13076.294317] ? io_schedule_timeout+0x160/0x160 [13076.295156] ? lockdep_hardirqs_on+0x79/0x100 [13076.295863] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13076.296243] ? cpuhp_invoke_callback+0x830/0x830 [13076.296978] ? smpboot_thread_fn+0x6b/0x910 [13076.297239] schedule+0x128/0x220 [13076.298149] ? cpu_mitigations_auto_nosmt+0x20/0x20 [13076.298967] smpboot_thread_fn+0x253/0x910 [13076.299248] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13076.299562] kthread+0x2a7/0x350 [13076.300168] ? kthread_complete_and_exit+0x20/0x20 [13076.300883] ret_from_fork+0x22/0x30 [13076.301192] [13076.301362] task:migration/23 state:S stack:30144 pid:[13076.793551] ? io_schedule_timeout+0x160/0x160 [13076.802886] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13076.803660] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13076.804021] ? smpboot_thread_fn+0x6b/0x910 [13076.804288] schedule+0x128/0x220 [13076.804920] ? reboot_pid_ns+0xf0/0xf0 [13076.805210] smpboot_thread_fn+0x253/0x910 [13076.805447] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13076.805792] kthread+0x2a7/0x350 [13076.806408] ? kthread_complete_and_exit+0x20/0x20 [13076.807129] ret_from_fork+0x22/0x30 [13076.807639] [13076.807912] task:ksoftirqd/23 state:S stack:29808 pid: 140 ppid: 2 flags:0x00004000 [13076.808464] Call Trace: [13076.808615] [13076.809301] __schedule+0x72e/0x1570 [13076.809571] ? io_schedule_timeout+0x160/0x160 [13076.810296] ? smpboot_thread_fn+0x6b/0x910 [13076.810584] schedule+0x128/0x220 [13076.811202] ? __local_bh_enable+0x90/0x90 [13076.811492] smpboot_thread_fn+0x253/0x910 [13076.811748] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13076.812102] kthread+0x2a7/0x350 [13076.812678] ? kthread_complete_and_exit+0x20/0x20 [13076.813393] ret_from_fork+0x22/0x30 [13076.813838] [13076.814012] task:kworker/23:0H state:I stack:29656 pid: 142 ppid: 2 flags:0x00004000 [13076.814488] Workqueue: 0x0 (events_highpri) [13076.815156] Call Trace: [13076.815318] [130 [13077.316214] schedule+0x128/0x220 [13077.316893] worker_thread+0x152/0xf90 [13077.317209] ? process_one_work+0x1520/0x1520 [13077.318039] kthread+0x2a7/0x350 [13077.318770] ? kthread_complete_and_exit+0x20/0x20 [13077.319837] ret_from_fork+0x22/0x30 [13077.320177] [13077.320344] task:kdevtmpfs state:S stack:27688 pid: 167 ppid: 2 flags:0x00004000 [13077.320839] Call Trace: [13077.321016] [13077.321789] __schedule+0x72e/0x1570 [13077.322111] ? io_schedule_timeout+0x160/0x160 [13077.322833] ? lock_downgrade+0x130/0x130 [13077.323122] schedule+0x128/0x220 [13077.323761] devtmpfs_work_loop+0x579/0x680 [13077.324068] ? public_dev_mount+0xe0/0xe0 [13077.324303] ? __lock_release+0x4c1/0xa00 [13077.324572] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13077.325351] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13077.325667] ? dmar_validate_one_drhd+0x1db/0x1db [13077.326373] devtmpfsd+0x2a/0x35 [13077.327214] kthread+0x2a7/0x350 [13077.327869] ? kthread_complete_and_exit+0x20/0x20 [13077.328593] ret_from_fork+0x22/0x30 [13077.328968] [13077.329181] task:inet_frag_wq state:I stack:30728 pid: 168 ppi[13077.821531] ? 
lock_downgrade+0x130/0x130 [13077.829987] ? wait_for_completion_io_timeout+0x20/0x20 [13077.830386] schedule+0x128/0x220 [13077.831014] rescuer_thread+0x679/0xbb0 [13077.831289] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13077.831625] ? worker_thread+0xf90/0xf90 [13077.831894] ? __kthread_parkme+0xcc/0x200 [13077.832173] ? worker_thread+0xf90/0xf90 [13077.832454] kthread+0x2a7/0x350 [13077.833211] ? kthread_complete_and_exit+0x20/0x20 [13077.833898] ret_from_fork+0x22/0x30 [13077.834207] [13077.834365] task:kauditd state:S stack:29832 pid: 182 ppid: 2 flags:0x00004000 [13077.834884] Call Trace: [13077.835075] [13077.835583] __schedule+0x72e/0x1570 [13077.835876] ? io_schedule_timeout+0x160/0x160 [13077.836616] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13077.837354] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13077.837696] ? lockdep_hardirqs_on+0x79/0x100 [13077.838356] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13077.838693] schedule+0x128/0x220 [13077.839308] kauditd_thread+0x461/0x840 [13077.839563] ? auditd_reset+0xf0/0xf0 [13077.839835] ? lockdep_hardirqs_kthread+0x2a7/0x350 [13078.340825] ? kthread_complete_and_exit+0x20/0x20 [13078.341588] ret_from_fork+0x22/0x30 [13078.341895] [13078.342092] task:khungtaskd state:S stack:29896 pid: 183 ppid: 2 flags:0x00004000 [13078.342597] Call Trace: [13078.342828] [13078.343367] __schedule+0x72e/0x1570 [13078.343625] ? io_schedule_timeout+0x160/0x160 [13078.344437] ? timer_fixup_activate+0x2e0/0x2e0 [13078.345157] ? find_held_lock+0x33/0x120 [13078.345414] ? debug_object_deactivate+0x320/0x320 [13078.346159] schedule+0x128/0x220 [13078.346793] schedule_timeout+0x125/0x260 [13078.347104] ? usleep_range_state+0x190/0x190 [13078.347802] ? destroy_timer_on_stack+0x20/0x20 [13078.348583] ? check_hung_uninterruptible_tasks+0x620/0x890 [13078.348934] ? lockdep_hardirqs_on+0x79/0x100 [13078.349595] watchdog+0xac/0x120 [13078.350203] ? check_hung_uninterruptible_tasks+0x890/0x890 [13078.350549] kthread+0x2a7/0x350 [13078.351137] ? kthread_complete_and_exit+0x20/0x20 [13078.351843] ret_from_fork+0x22/0x30 [13078.352149] [13078.352307] task:oom_reaper state:S stack:30680 pid: 184 ppid: 2 flags:0x00004000 [13078.352860] Call Trace: [13078.353024] [13078.353534] __schedule+0x72e/0x15[13078.845791] ? lockdep_hardirqs_on+0x79/0x100 [13078.854587] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13078.854969] schedule+0x128/0x220 [13078.855589] oom_reaper+0xc15/0xec0 [13078.856189] ? __lock_contended+0x980/0x980 [13078.856453] ? __oom_reap_task_mm+0x380/0x380 [13078.857136] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13078.857918] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13078.858285] ? __kthread_parkme+0xcc/0x200 [13078.858519] ? __oom_reap_task_mm+0x380/0x380 [13078.859199] kthread+0x2a7/0x350 [13078.859837] ? kthread_complete_and_exit+0x20/0x20 [13078.860524] ret_from_fork+0x22/0x30 [13078.860862] [13078.861027] task:writeback state:I stack:30728 pid: 185 ppid: 2 flags:0x00004000 [13078.861490] Call Trace: [13078.861636] [13078.862171] __schedule+0x72e/0x1570 [13078.862424] ? io_schedule_timeout+0x160/0x160 [13078.863157] ? lock_downgrade+0x130/0x130 [13078.863412] ? wait_for_completion_io_timeout+0x20/0x20 [13078.863821] schedule+0x128/0x220 [13078.864424] rescuer_thread+0x679/0xbb0 [13078.864685] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13078.865053] ? worker_thread+0xf90/0xf90 [13078.865313] ? 
__kthread_parkme+0xcc/0x200 [13078.8928[13079.357855] [13079.366009] task:kcompactd0 state:S stack:28016 pid: 186 ppid: 2 flags:0x00004000 [13079.366510] Call Trace: [13079.366656] [13079.367203] __schedule+0x72e/0x1570 [13079.367463] ? io_schedule_timeout+0x160/0x160 [13079.368144] ? timer_fixup_activate+0x2e0/0x2e0 [13079.368855] ? debug_object_deactivate+0x320/0x320 [13079.369545] schedule+0x128/0x220 [13079.370183] schedule_timeout+0x125/0x260 [13079.370457] ? usleep_range_state+0x190/0x190 [13079.371142] ? destroy_timer_on_stack+0x20/0x20 [13079.371837] ? lockdep_hardirqs_on+0x79/0x100 [13079.372530] ? prepare_to_wait_event+0xcd/0x690 [13079.373253] kcompactd+0x8bc/0xc80 [13079.373902] ? kcompactd_do_work+0x940/0x940 [13079.374586] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13079.375321] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13079.375657] ? __kthread_parkme+0xcc/0x200 [13079.375929] ? kcompactd_do_work+0x940/0x940 [13079.376627] kthread+0x2a7/0x350 [13079.377237] ? kthread_complete_and_exit+0x20/0x20 [13079.377962] ret_from_fork+0x22/0x30 [13079.378235] [13079.378394] task:kcompactd1 state:S stack:29416 pid: 187 ppid: 2 flags:0x00004000 [13079.378929] Call Trace: [[13079.879445] ? debug_object_deactivate+0x320/0x320 [13079.880203] schedule+0x128/0x220 [13079.880852] schedule_timeout+0x125/0x260 [13079.881167] ? usleep_range_state+0x190/0x190 [13079.881881] ? destroy_timer_on_stack+0x20/0x20 [13079.882572] ? lockdep_hardirqs_on+0x79/0x100 [13079.883283] ? prepare_to_wait_event+0xcd/0x690 [13079.884007] kcompactd+0x8bc/0xc80 [13079.884650] ? kcompactd_do_work+0x940/0x940 [13079.885376] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13079.886161] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13079.886508] ? __kthread_parkme+0xcc/0x200 [13079.886745] ? kcompactd_do_work+0x940/0x940 [13079.887451] kthread+0x2a7/0x350 [13079.888055] ? kthread_complete_and_exit+0x20/0x20 [13079.888743] ret_from_fork+0x22/0x30 [13079.889057] [13079.889222] task:ksmd state:S stack:28840 pid: 188 ppid: 2 flags:0x00004000 [13079.889667] Call Trace: [13079.889851] [13079.890361] __schedule+0x72e/0x1570 [13079.890629] ? io_schedule_timeout+0x160/0x160 [13079.891308] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13079.892065] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13079.892404] ? lockdep_hardirqs_on+0x79/0x100 [13079.893080] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13079.893410] schedule+0x128/0x220 [13079.894021] ksm_scan_thread+0x654/0x8[13080.386159] ? __kthread_parkme+0xcc/0x200 [13080.394737] ? cmp_and_merge_page+0x1190/0x1190 [13080.395470] kthread+0x2a7/0x350 [13080.396064] ? kthread_complete_and_exit+0x20/0x20 [13080.396742] ret_from_fork+0x22/0x30 [13080.397073] [13080.397230] task:khugepaged state:D stack:28432 pid: 189 ppid: 2 flags:0x00004000 [13080.397681] Call Trace: [13080.397859] [13080.398361] __schedule+0x72e/0x1570 [13080.398632] ? io_schedule_timeout+0x160/0x160 [13080.399319] ? __lock_acquire+0xb72/0x1870 [13080.399624] schedule+0x128/0x220 [13080.400249] schedule_timeout+0x1a9/0x260 [13080.400504] ? usleep_range_state+0x190/0x190 [13080.401187] ? lock_downgrade+0x130/0x130 [13080.401478] ? mark_held_locks+0xa5/0xf0 [13080.401718] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13080.402457] ? _raw_spin_unlock_irq+0x24/0x50 [13080.403209] __wait_for_common+0x37c/0x530 [13080.403452] ? usleep_range_state+0x190/0x190 [13080.404172] ? out_of_line_wait_on_bit_timeout+0x170/0x170 [13080.404523] ? start_flush_work+0x45e/0x8d0 [13080.404800] __flush_work+0x164/0x1a0 [13080.405039] ? 
start_flush_work+0x8d0/0x8d0 [13080.405300] ? flush_workqueue_prep_pwqs+0x3f0/0x3f0 [13080.432[13080.905880] khugepaged+0xe8/0x960 [13080.906529] ? khugepaged_scan_mm_slot+0xb30/0xb30 [13080.907230] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13080.907974] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13080.908332] ? lockdep_hardirqs_on+0x79/0x100 [13080.909010] ? __kthread_parkme+0xcc/0x200 [13080.909307] ? khugepaged_scan_mm_slot+0xb30/0xb30 [13080.909971] kthread+0x2a7/0x350 [13080.910575] ? kthread_complete_and_exit+0x20/0x20 [13080.911241] ret_from_fork+0x22/0x30 [13080.911515] [13080.911677] task:cryptd state:I stack:30728 pid: 190 ppid: 2 flags:0x00004000 [13080.912232] Call Trace: [13080.912407] [13080.912941] __schedule+0x72e/0x1570 [13080.913226] ? io_schedule_timeout+0x160/0x160 [13080.913882] ? lock_downgrade+0x130/0x130 [13080.914154] ? wait_for_completion_io_timeout+0x20/0x20 [13080.914488] schedule+0x128/0x220 [13080.915112] rescuer_thread+0x679/0xbb0 [13080.915390] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13080.915720] ? worker_thread+0xf90/0xf90 [13080.915975] ? __kthread_parkme+0xcc/0x200 [13080.916238] ? worker_thread+0xf90/0xf90 [13080.916481] kthrk:30728 pid: 191 ppid: 2 flags:0x00004000 [13081.417042] Call Trace: [13081.417238] [13081.417744] __schedule+0x72e/0x1570 [13081.418038] ? io_schedule_timeout+0x160/0x160 [13081.418727] ? lock_downgrade+0x130/0x130 [13081.418997] ? wait_for_completion_io_timeout+0x20/0x20 [13081.419364] schedule+0x128/0x220 [13081.419987] rescuer_thread+0x679/0xbb0 [13081.420252] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13081.420568] ? worker_thread+0xf90/0xf90 [13081.420844] ? __kthread_parkme+0xcc/0x200 [13081.421094] ? worker_thread+0xf90/0xf90 [13081.421315] kthread+0x2a7/0x350 [13081.421892] ? kthread_complete_and_exit+0x20/0x20 [13081.422575] ret_from_fork+0x22/0x30 [13081.422912] [13081.423078] task:kblockd state:I stack:30728 pid: 192 ppid: 2 flags:0x00004000 [13081.423554] Call Trace: [13081.423704] [13081.424249] __schedule+0x72e/0x1570 [13081.424507] ? io_schedule_timeout+0x160/0x160 [13081.425181] ? lock_downgrade+0x130/0x130 [13081.425423] ? wait_for_completion_io_timeout+0x20/0x20 [13081.425795] schedule+0x128/0x220 [13081.426370] rescuer_thread+0x679/0xbb0 [13081.426641] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13081.427019] ? worker_thread+0xf90/0xf90 [13081.427296] ? __kthread_parkme+0xcc/0x200 [13081.427536] ? worker_thread+0xf90/0xf90 [13081.427801] kthread+0x2a7/0x350 [13081.428365] ? kthread_complete_and_exit+0x20/0x20 [13081.429086] ret_from_fork+0x22/0x30 [13081.429398] [13081/0x1570 [13081.929912] ? io_schedule_timeout+0x160/0x160 [13081.930610] ? lock_downgrade+0x130/0x130 [13081.930885] ? wait_for_completion_io_timeout+0x20/0x20 [13081.931255] schedule+0x128/0x220 [13081.931878] rescuer_thread+0x679/0xbb0 [13081.932181] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13081.932484] ? worker_thread+0xf90/0xf90 [13081.932739] ? __kthread_parkme+0xcc/0x200 [13081.933036] ? worker_thread+0xf90/0xf90 [13081.933310] kthread+0x2a7/0x350 [13081.933896] ? kthread_complete_and_exit+0x20/0x20 [13081.934586] ret_from_fork+0x22/0x30 [13081.934924] [13081.935084] task:tpm_dev_wq state:I stack:30728 pid: 200 ppid: 2 flags:0x00004000 [13081.935550] Call Trace: [13081.935697] [13081.936242] __schedule+0x72e/0x1570 [13081.936498] ? io_schedule_timeout+0x160/0x160 [13081.937174] ? lock_downgrade+0x130/0x130 [13081.937410] ? 
wait_for_completion_io_timeout+0x20/0x20 [13081.937759] schedule+0x128/0x220 [13081.938386] rescuer_thread+0x679/0xbb0 [13081.938647] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13081.939027] ? worker_thread+0xf90/0xf90 [13081.939304] ? __kthread_parkme+0xcc/0x200 [13081.939546] ? worker_thread+0xf90/0xf90 [13081.939801] kthread+0x2k:30728 pid: 201 ppid: 2 flags:0x00004000 [13082.440366] Call Trace: [13082.440540] [13082.441071] __schedule+0x72e/0x1570 [13082.441337] ? io_schedule_timeout+0x160/0x160 [13082.441990] ? lock_downgrade+0x130/0x130 [13082.442261] ? wait_for_completion_io_timeout+0x20/0x20 [13082.442606] schedule+0x128/0x220 [13082.443239] rescuer_thread+0x679/0xbb0 [13082.443506] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13082.443845] ? worker_thread+0xf90/0xf90 [13082.444809] ? __kthread_parkme+0xcc/0x200 [13082.445058] ? worker_thread+0xf90/0xf90 [13082.445329] kthread+0x2a7/0x350 [13082.445924] ? kthread_complete_and_exit+0x20/0x20 [13082.446605] ret_from_fork+0x22/0x30 [13082.446939] [13082.447097] task:edac-poller state:I stack:30728 pid: 202 ppid: 2 flags:0x00004000 [13082.447564] Call Trace: [13082.447715] [13082.448265] __schedule+0x72e/0x1570 [13082.448521] ? io_schedule_timeout+0x160/0x160 [13082.449191] ? lock_downgrade+0x130/0x130 [13082.449433] ? wait_for_completion_io_timeout+0x20/0x20 [13082.449857] schedule+0x128/0x220 [13082.450460] rescuer_thread+0x679/0xbb0 [13082.450723] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13082.451092] ? worker_thread+[13082.951566] ret_from_fork+0x22/0x30 [13082.951926] [13082.952092] task:watchdogd state:S stack:30928 pid: 203 ppid: 2 flags:0x00004000 [13082.952562] Call Trace: [13082.952735] [13082.953313] __schedule+0x72e/0x1570 [13082.953575] ? io_schedule_timeout+0x160/0x160 [13082.954229] ? lock_downgrade+0x130/0x130 [13082.954521] schedule+0x128/0x220 [13082.955142] kthread_worker_fn+0x524/0xb00 [13082.955426] ? kthread_freezable_should_stop+0x1c0/0x1c0 [13082.955734] kthread+0x2a7/0x350 [13082.956332] ? kthread_complete_and_exit+0x20/0x20 [13082.957059] ret_from_fork+0x22/0x30 [13082.957386] [13082.957550] task:kworker/6:1H state:I stack:27832 pid: 204 ppid: 2 flags:0x00004000 [13082.958056] Workqueue: 0x0 (kblockd) [13082.958331] Call Trace: [13082.958464] [13082.959020] __schedule+0x72e/0x1570 [13082.959287] ? io_schedule_timeout+0x160/0x160 [13082.959956] ? lock_downgrade+0x130/0x130 [13082.960227] ? pwq_dec_nr_in_flight+0x230/0x230 [13082.960960] schedule+0x128/0x220 [13082.961563] worker_thread+0x152/0xf90 [13082.961884] ? process_one_work+0x1520/0x1520 [13082.962571] kthread+0x2a7/0k:27976 pid: 205 ppid: 2 flags:0x00004000 [13083.463251] Call Trace: [13083.463431] [13083.463984] __schedule+0x72e/0x1570 [13083.464291] ? io_schedule_timeout+0x160/0x160 [13083.464953] ? lock_downgrade+0x130/0x130 [13083.465205] ? cpumask_next+0x59/0x80 [13083.465459] schedule+0x128/0x220 [13083.466067] kswapd_try_to_sleep+0x468/0x520 [13083.466752] ? get_scan_count+0xc00/0xc00 [13083.467036] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13083.467394] kswapd+0x26d/0x6d0 [13083.468036] ? balance_pgdat+0x1090/0x1090 [13083.468315] kthread+0x2a7/0x350 [13083.468908] ? kthread_complete_and_exit+0x20/0x20 [13083.469595] ret_from_fork+0x22/0x30 [13083.469933] [13083.470092] task:kswapd1 state:S stack:30392 pid: 206 ppid: 2 flags:0x00004000 [13083.470563] Call Trace: [13083.470713] [13083.471261] __schedule+0x72e/0x1570 [13083.471516] ? io_schedule_timeout+0x160/0x160 [13083.472193] ? __zone_watermark_ok+0x288/0x420 [13083.472903] ? 
cpumask_next+0x59/0x80 [13083.473175] schedule+0x128/0x220 [13083.5d0 [13083.974157] ? balance_pgdat+0x1090/0x1090 [13083.974416] kthread+0x2a7/0x350 [13083.975012] ? kthread_complete_and_exit+0x20/0x20 [13083.975697] ret_from_fork+0x22/0x30 [13083.976031] [13083.976212] task:kthrotld state:I stack:29928 pid: 213 ppid: 2 flags:0x00004000 [13083.976664] Call Trace: [13083.976851] [13083.977363] __schedule+0x72e/0x1570 [13083.977635] ? io_schedule_timeout+0x160/0x160 [13083.978303] ? lock_downgrade+0x130/0x130 [13083.978547] ? wait_for_completion_io_timeout+0x20/0x20 [13083.978920] schedule+0x128/0x220 [13083.979513] rescuer_thread+0x679/0xbb0 [13083.979772] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13083.980130] ? worker_thread+0xf90/0xf90 [13083.980367] ? __kthread_parkme+0xcc/0x200 [13083.980600] ? worker_thread+0xf90/0xf90 [13083.980899] kthread+0x2a7/0x350 [13083.981506] ? kthread_complete_and_exit+0x20/0x20 [13083.982225] ret_from_fork+0x22/0x30 [13083.982510] [13083.982690] task:acpi_thermal_pm state:I stack:30728 pid: 222 ppid: 2 flags:0x00004000 [13083.983217] Call Trace: [13083.983390] [13083.983937] __schedule+0x72e/0x1570 [13083.984215] ? io_schedule_timeout+0x160/0x160 [13083.984884] ? lock_downgrade+0x130/0x130 [1[13084.485425] ? worker_thread+0xf90/0xf90 [13084.485704] ? __kthread_parkme+0xcc/0x200 [13084.486004] ? worker_thread+0xf90/0xf90 [13084.486283] kthread+0x2a7/0x350 [13084.486878] ? kthread_complete_and_exit+0x20/0x20 [13084.487566] ret_from_fork+0x22/0x30 [13084.487905] [13084.488067] task:kmpath_rdacd state:I stack:30728 pid: 223 ppid: 2 flags:0x00004000 [13084.488554] Call Trace: [13084.488704] [13084.489248] __schedule+0x72e/0x1570 [13084.489493] ? io_schedule_timeout+0x160/0x160 [13084.490158] ? lock_downgrade+0x130/0x130 [13084.490403] ? wait_for_completion_io_timeout+0x20/0x20 [13084.490731] schedule+0x128/0x220 [13084.491348] rescuer_thread+0x679/0xbb0 [13084.491630] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13084.491964] ? worker_thread+0xf90/0xf90 [13084.492236] ? __kthread_parkme+0xcc/0x200 [13084.492478] ? worker_thread+0xf90/0xf90 [13084.492738] kthread+0x2a7/0x350 [13084.493356] ? kthread_complete_and_exit+0x20/0x20 [13084.494115] ret_from_fork+0x22/0x30 [13084.494421] [13084.494591] task:kaluad state:I stack:30728 pid: 224 ppid: 2 flags:0x00004000 [13084.495108] Call Trace: [13084.495311] [13084.495852] __schedule+schedule+0x128/0x220 [13084.996724] rescuer_thread+0x679/0xbb0 [13084.997025] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13084.997360] ? worker_thread+0xf90/0xf90 [13084.997593] ? __kthread_parkme+0xcc/0x200 [13084.997883] ? worker_thread+0xf90/0xf90 [13084.998135] kthread+0x2a7/0x350 [13084.998698] ? kthread_complete_and_exit+0x20/0x20 [13084.999362] ret_from_fork+0x22/0x30 [13084.999663] [13084.999860] task:kworker/2:1H state:R running task stack:27240 pid: 225 ppid: 2 flags:0x00004000 [13085.000859] Workqueue: 0x0 (kblockd) [13085.001121] Call Trace: [13085.001310] [13085.001857] __schedule+0x72e/0x1570 [13085.002113] ? io_schedule_timeout+0x160/0x160 [13085.002774] ? lock_downgrade+0x130/0x130 [13085.003051] ? pwq_dec_nr_in_flight+0x230/0x230 [13085.003728] schedule+0x128/0x220 [13085.004359] worker_thread+0x152/0xf90 [13085.004644] ? process_one_work+0x1520/0x1520 [13085.005312] kthread+0x2a7/0x350 [13085.005910] ? kthread_complete_and_exit+0x20/0x20 [13085.006564] ret_from_fork+0x22/0x30 [13085.006904] [13085.007061] task:kworker/3:1H state:I stack:27280 pid: 226 ppid: 2 flags:0x000040? io_schedule_timeout+0x160/0x160 [13085.508373] ? 
lock_downgrade+0x130/0x130 [13085.508623] ? pwq_dec_nr_in_flight+0x230/0x230 [13085.509322] schedule+0x128/0x220 [13085.509962] worker_thread+0x152/0xf90 [13085.510262] ? process_one_work+0x1520/0x1520 [13085.510936] kthread+0x2a7/0x350 [13085.511528] ? kthread_complete_and_exit+0x20/0x20 [13085.512218] ret_from_fork+0x22/0x30 [13085.512497] [13085.512657] task:kworker/5:1H state:I stack:28240 pid: 227 ppid: 2 flags:0x00004000 [13085.513226] Workqueue: 0x0 (events_highpri) [13085.513913] Call Trace: [13085.514086] [13085.514605] __schedule+0x72e/0x1570 [13085.514901] ? io_schedule_timeout+0x160/0x160 [13085.515563] ? lock_downgrade+0x130/0x130 [13085.515790] ? pwq_dec_nr_in_flight+0x230/0x230 [13085.516512] schedule+0x128/0x220 [13085.517142] worker_thread+0x152/0xf90 [13085.517420] ? process_one_work+0x1520/0x1520 [13085.518114] kthread+0x2a7/0x350 [13085.518730] ? kthread_complete_and_exit+0x20/0x20 [13085.519434] ret_from_fork+0x22/0x30 [13085.519743] [13085.519936] task:kworker/12:1H state:I stack:27832 pid: 229 ppid: [13085.902903] __schedule+0x72e/0x1570 [13085.920853] ? io_schedule_timeout+0[13086.003010] ? lock_downgrade+0x130/0x130 [13086.021421] ? pwq_dec_nr_in_flight+0x230/0x230 [13086.022156] schedule+0x128/0x220 [13086.022790] worker_thread+0x152/0xf90 [13086.023088] ? process_one_work+0x1520/0x1520 [13086.023768] kthread+0x2a7/0x350 [13086.024397] ? kthread_complete_and_exit+0x20/0x20 [13086.025128] ret_from_fork+0x22/0x30 [13086.025434] [13086.025605] task:kworker/13:1H state:I stack:27832 pid: 230 ppid: 2 flags:0x00004000 [13086.026119] Workqueue: 0x0 (events_highpri) [13086.026826] Call Trace: [13086.026996] [13086.027544] __schedule+0x72e/0x1570 [13086.027843] ? io_schedule_timeout+0x160/0x160 [13086.028495] ? lock_downgrade+0x130/0x130 [13086.028754] ? pwq_dec_nr_in_flight+0x230/0x230 [13086.029491] schedule+0x128/0x220 [13086.030108] worker_thread+0x152/0xf90 [13086.030403] ? process_one_work+0x1520/0x1520 [13086.031101] kthread+0x2a7/0x350 [13086.031713] ? kthread_complete_and_exit+0x20/0x20 [13086.032421] ret_from_fork+0x22/0x30 [13086.032762] [13086.032949] task:mld state:I stack:30728 pid: 231 ppid: 2 flags:0x00004000 [13086.033443] Call Trace: [13086.033590] [13086.034151] __schedule+0x72e/0x1570 [13086.034416] ? io_schedule_timeout+0x160/0x160 [13086.035076] ? lock_downx679/0xbb0 [13086.435693] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13086.46370xf90/0xf90 [13086.536351] ? __kthread_parkme+0xcc/0x200 [13086.536633] ? worker_thread+0xf90/0xf90 [13086.536930] kthread+0x2a7/0x350 [13086.537554] ? kthread_complete_and_exit+0x20/0x20 [13086.538275] ret_from_fork+0x22/0x30 [13086.538575] [13086.538734] task:kworker/1:1H state:I stack:27112 pid: 232 ppid: 2 flags:0x00004000 [13086.539213] Workqueue: 0x0 (events_highpri) [13086.539927] Call Trace: [13086.540112] [13086.540664] __schedule+0x72e/0x1570 [13086.540962] ? io_schedule_timeout+0x160/0x160 [13086.541684] ? lock_downgrade+0x130/0x130 [13086.541960] ? pwq_dec_nr_in_flight+0x230/0x230 [13086.542712] schedule+0x128/0x220 [13086.543380] worker_thread+0x152/0xf90 [13086.543666] ? process_one_work+0x1520/0x1520 [13086.544359] kthread+0x2a7/0x350 [13086.545024] ? kthread_complete_and_exit+0x20/0x20 [13086.545745] ret_from_fork+0x22/0x30 [13086.546085] [13086.546248] task:ipv6_addrconf state:I stack:30728 pid: 233 ppid: 2 flags:0x00004000 [13086.546726] Call Trace: [13086.546916] [13086.547448] __schedule+0x72e/0x1570 [13086.547725] ? io_schedule_timeout+0x160/0x160 [13086.548403] ? 
lock_downgrade+0x130/0x130 [13086.548663] ? wait_for_completion_io_timeout+0x2[13086.949179] ? worker_thread+0xf90/0xf90 [13086.949452] ? __kthread_parkme+0xcc/0x200 [13086.949689] ? worker_thread+0xf90/0xf90 [13086.949951] kthread+0x2a7/0x350 [13086.950530] ? kthread_complete_and_exit+0x20/0x20 [13086.951318] ret_from_fork+0x22/0x30 [13086.951633] [13086.951798] task:kstrp state:I stack:30728 pid: 234 ppid: 2 flags:0x00004000 [13086.952327] Call Trace: [13086.952480] [13086.953052] __schedule+0x72e/0x1570 [13086.953349] ? io_schedule_timeout+0x160/0x160 [13086.954049] ? lock_downgrade+0x130/0x130 [13086.954347] ? wait_for_completion_io_timeout+0x20/0x20 [13086.954681] schedule+0x128/0x220 [13086.955472] rescuer_thread+0x679/0xbb0 [13086.955793] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13086.956281] ? worker_thread+0xf90/0xf90 [13086.956548] ? __kthread_parkme+0xcc/0x200 [13086.956801] ? worker_thread+0xf90/0xf90 [13086.957100] kthread+0x2a7/0x350 [13086.957703] ? kthread_complete_and_exit+0x20/0x20 [13086.958410] ret_from_fork+0x22/0x30 [13086.958749] [13086.958946] task:zswap-shrink state:I stack:30728 pid: 246 ppid: 2 flags:0x00004000 [13086.959455] Call Trace: [13086.959607] [13086.960182] __schedule+schedule+0x128/0x220 [13087.461140] rescuer_thread+0x679/0xbb0 [13087.461436] ? __kthread_parkme+0x65/0x200 [13087.461703] ? worker_thread+0xf90/0xf90 [13087.462003] ? __kthread_parkme+0xcc/0x200 [13087.462287] ? worker_thread+0xf90/0xf90 [13087.462529] kthread+0x2a7/0x350 [13087.463184] ? kthread_complete_and_exit+0x20/0x20 [13087.464055] ret_from_fork+0x22/0x30 [13087.464474] [13087.464653] task:kworker/u131:0 state:I stack:30832 pid: 360 ppid: 2 flags:0x00004000 [13087.465133] Call Trace: [13087.465330] [13087.465905] __schedule+0x72e/0x1570 [13087.466349] ? io_schedule_timeout+0x160/0x160 [13087.467036] ? lock_downgrade+0x130/0x130 [13087.467332] ? wait_for_completion_io_timeout+0x20/0x20 [13087.467666] schedule+0x128/0x220 [13087.468328] worker_thread+0x152/0xf90 [13087.468574] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13087.469324] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13087.469848] ? process_one_work+0x1520/0x1520 [13087.470526] kthread+0x2a7/0x350 [13087.471147] ? kthread_complete_and_exit+0x20/0x20 [13087.471939] ret_from_fork+0x22/0x30 [13087.472240] [13087.472405] task:kworker/u132:0 state:I stack:30032 pid: 362 ppid: 2 flags? lock_downgrade+0x130/0x130 [13087.973306] ? wait_for_completion_io_timeout+0x20/0x20 [13087.973667] schedule+0x128/0x220 [13087.974578] worker_thread+0x152/0xf90 [13087.974861] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13087.975598] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13087.976019] ? process_one_work+0x1520/0x1520 [13087.976741] kthread+0x2a7/0x350 [13087.977376] ? kthread_complete_and_exit+0x20/0x20 [13087.978150] ret_from_fork+0x22/0x30 [13087.978454] [13087.978621] task:kworker/u133:0 state:I stack:30832 pid: 363 ppid: 2 flags:0x00004000 [13087.979116] Call Trace: [13087.979272] [13087.979793] __schedule+0x72e/0x1570 [13087.980077] ? io_schedule_timeout+0x160/0x160 [13087.980803] ? lock_downgrade+0x130/0x130 [13087.981064] ? wait_for_completion_io_timeout+0x20/0x20 [13087.981450] schedule+0x128/0x220 [13087.982110] worker_thread+0x152/0xf90 [13087.982403] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13087.983210] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13087.983552] ? process_one_work+0x1520/0x1520 [13087.984234] kthread+0x2a7/0x350 [13087.984804] ? 
kthread_complete_and_exit+0x20/0x20 [13087.985521] ret_from_fork+0x22/0x30 [13087.985840] [13087.986015] tas4 ppid: 2 flags:0x00004000 [13088.086522] Workqueue: 0x0 (events_highpri) [13088.087212] Call Trace: [13088.087375] [13088.087936] __schedule+0x72e/0x1570 [13088.088206] ? io_schedule_timeout+0x160/0x160 [13088.115707] [13088.580534] ? process_one_work+0x1520/0x1520 [13088.590053] kthread+0x2a7/0x350 [13088.590704] ? kthread_complete_and_exit+0x20/0x20 [13088.591394] ret_from_fork+0x22/0x30 [13088.591690] [13088.591896] task:kworker/19:1H state:I stack:27832 pid: 412 ppid: 2 flags:0x00004000 [13088.592447] Workqueue: 0x0 (kblockd) [13088.592685] Call Trace: [13088.592937] [13088.593478] __schedule+0x72e/0x1570 [13088.593754] ? io_schedule_timeout+0x160/0x160 [13088.594435] ? lock_downgrade+0x130/0x130 [13088.594698] ? pwq_dec_nr_in_flight+0x230/0x230 [13088.595410] schedule+0x128/0x220 [13088.596082] worker_thread+0x152/0xf90 [13088.596392] ? process_one_work+0x1520/0x1520 [13088.597079] kthread+0x2a7/0x350 [13088.597731] ? kthread_complete_and_exit+0x20/0x20 [13088.598436] ret_from_fork+0x22/0x30 [13088.598734] [13088.598928] task:kworker/9:1H state:I stack:27736 pid: 537 ppid: 2 flags:0x00004000 [13088.599447] Workqueue: 0x0 (events_highpri) [13088.600123] Call Trace: [13088.600338] [13088.600918] __schedule+0x72e/0x1570 [13088.601268] ? io_schedule_timeout+0x160/0x160 [13[13089.001902] worker_thread+0x152/0xf90 [13089.002217] ? process_one_work+0x1520/0x1520 [13089.002937] kthread+0x2a7/0x350 [13089.003535] ? kthread_complete_and_exit+0x20/0x20 [13089.004251] ret_from_fork+0x22/0x30 [13089.004545] [13089.004718] task:ata_sff state:I stack:30728 pid: 621 ppid: 2 flags:0x00004000 [13089.005235] Call Trace: [13089.005397] [13089.005966] __schedule+0x72e/0x1570 [13089.006269] ? io_schedule_timeout+0x160/0x160 [13089.007166] ? lock_downgrade+0x130/0x130 [13089.007557] ? wait_for_completion_io_timeout+0x20/0x20 [13089.007948] schedule+0x128/0x220 [13089.008550] rescuer_thread+0x679/0xbb0 [13089.008853] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13089.009171] ? worker_thread+0xf90/0xf90 [13089.009457] ? __kthread_parkme+0xcc/0x200 [13089.009690] ? worker_thread+0xf90/0xf90 [13089.009966] kthread+0x2a7/0x350 [13089.010707] ? kthread_complete_and_exit+0x20ret_from_fork+0x22/0x30 [13089.111301] [13089.111479] task:kworker/8:1H state:I stack:27832 pid: 625 ppid: 2 flags:0x00004000 [13089.111974] Workqueue: 0x0 (kblockd) [13089.112278] Call Trace: [13089.112431] [13089.112988] __schedule+0x72e/0x1570 [13089.113274] ? io_schedule_timeout+0x160/0x160 [13089.113949] ? lock_downgrade+0x130/ [13089.614936] kthread+0x2a7/0x350 [13089.615516] ? kthread_complete_and_exit+0x20/0x20 [13089.616237] ret_from_fork+0x22/0x30 [13089.616532] [13089.616710] task:scsi_eh_0 state:S stack:30128 pid: 628 ppid: 2 flags:0x00004000 [13089.617230] Call Trace: [13089.617397] [13089.617950] __schedule+0x72e/0x1570 [13089.618209] ? io_schedule_timeout+0x160/0x160 [13089.618896] ? lock_downgrade+0x130/0x130 [13089.619143] ? __lock_contended+0x980/0x980 [13089.619430] schedule+0x128/0x220 [13089.620077] scsi_error_handler+0x29d/0x5a0 [13089.620365] ? scsi_unjam_host+0x6a0/0x6a0 [13089.620602] kthread+0x2a7/0x350 [13089.621198] ? kthread_complete_and_exit+0x20/0x20 [13089.621916] ret_from_fork+0x22/0x30 [13089.622217] [13089.622387] task:scsi_tmf_0 state:I stack:30208 pid: 629 ppid: 2 flags:0x00004000 [13089.622915] Call Trace: [13089.623085] [13089.623587] __schedule+0x72e/0x1570 [13089.623861] ? 
io_schedule_timeout+0x160/0x160 [13089.624525] ? lock_downgrade+0x130/0x130 [13089.624779] ? wait_for_completion_io_timeout+0x20/0x20 [13089.652338][13090.007758] ? worker_thread+0xf90/0xf90 [13090.025781] ? __kthread_parkme+? worker_thread+0xf90/0xf90 [13090.126191] kthread+0x2a7/0x350 [13090.126784] ? kthread_complete_and_exit+0x20/0x20 [13090.127491] ret_from_fork+0x22/0x30 [13090.127816] [13090.128008] task:kworker/17:1H state:I stack:28040 pid: 633 ppid: 2 flags:0x00004000 [13090.128515] Workqueue: 0x0 (events_highpri) [13090.129206] Call Trace: [13090.129436] [13090.130098] __schedule+0x72e/0x1570 [13090.130401] ? io_schedule_timeout+0x160/0x160 [13090.131066] ? lock_downgrade+0x130/0x130 [13090.131307] ? pwq_dec_nr_in_flight+0x230/0x230 [13090.132065] schedule+0x128/0x220 [13090.132689] worker_thread+0x152/0xf90 [13090.133042] ? process_one_work+0x1520/0x1520 [13090.133765] kthread+0x2a7/0x350 [13090.134379] ? kthread_complete_and_exit+0x20/0x20 [13090.135120] ret_from_fork+0x22/0x30 [13090.135436] [13090.135594] task:scsi_eh_1 state:S stack:27888 pid: 634 ppid: 2 flags:0x00004000 [13090.136072] Call Trace: [13090.136239] [13090.136780] __schedule+0x72e/0x1570 [13090.164[13090.519700] scsi_error_handler+0x29d/0x5a0 [13090.537576] ? scsi_unjam_host+0x6a0/0x6a0 [13090.537905] kthread+0x2a7/0x350 [13090.538543] ? kthread_complete_and_exit+0x20/0x20 [13090.539253] ret_from_fork+0x22/0x30 [13090.539555] [13090.539727] task:scsi_tmf_1 state:I stack:30024 pid: 635 ppid: 2 flags:0x00004000 [13090.540211] Call Trace: [13090.540395] [13090.540998] __schedule+0x72e/0x1570 [13090.541271] ? io_schedule_timeout+0x160/0x160 [13090.542234] ? lock_downgrade+0x130/0x130 [13090.542602] ? wait_for_completion_io_timeout+0x20/0x20 [13090.543037] schedule+0x128/0x220 [13090.543671] rescuer_thread+0x679/0xbb0 [13090.543999] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13090.544345] ? worker_thread+0xf90/0xf90 [13090.544579] ? __kthread_parkme+0xcc/0x200 [13090.545016] ? worker_thread+0xf90/0xf90 [13090.545267] kthread+0x2a7/0x350 [13090.545920] ? kthread_complete_and_exit+0x20/0x20 [13090.546644] ret_from_fork+0x22/0x30 [13090.546984] [13090.547450] task:scsi_eh_2 state:S stack:28184 pid: 637 ppid: 2 flags:0x00004000 [13090.548086] Call Trace: [13090.548287] [13090.548816] __schedule+0x72e/0x1570 [13[13091.049463] ? scsi_unjam_host+0x6a0/0x6a0 [13091.049738] kthread+0x2a7/0x350 [13091.050375] ? kthread_complete_and_exit+0x20/0x20 [13091.051174] ret_from_fork+0x22/0x30 [13091.051508] [13091.051677] task:scsi_tmf_2 state:I stack:29928 pid: 638 ppid: 2 flags:0x00004000 [13091.052169] Call Trace: [13091.052325] [13091.052937] __schedule+0x72e/0x1570 [13091.053207] ? io_schedule_timeout+0x160/0x160 [13091.053938] ? lock_downgrade+0x130/0x130 [13091.054579] ? wait_for_completion_io_timeout+0x20/0x20 [13091.054986] schedule+0x128/0x220 [13091.055609] rescuer_thread+0x679/0xbb0 [13091.055975] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13091.056356] ? worker_thread+0xf90/0xf90 [13091.056592] ? __kthread_parkme+0xcc/0x200 [13091.056822] ? worker_thread+0xf90/0xf90 [13091.057103] kthread+0x2a7/0x350 [13091.057724] ? kthread_complete_and_exit+0x20/0x20 [13091.058487] ret_from_fork+0x22/0x30 [13091.058791] [13091.058991] task:kworker/4:1H state:I stack:27640 pid: 639 ppid: 2 flags:0x00004000 [13091.059512] Workqueue: 0x0 (events_highpri) [13091.060191] Call Trace: [13091.060383] [13091.060941] __schedule+0x72e/0x1570 [13091.061208] ? io_schedule_timeout+0x160/0x160 [13091.061906] ? 
lock_downgrk+0x1520/0x1520 [13091.562918] kthread+0x2a7/0x350 [13091.563517] ? kthread_complete_and_exit+0x20/0x20 [13091.564235] ret_from_fork+0x22/0x30 [13091.564543] [13091.564721] task:scsi_eh_3 state:S stack:30928 pid: 641 ppid: 2 flags:0x00004000 [13091.565194] Call Trace: [13091.565378] [13091.565935] __schedule+0x72e/0x1570 [13091.566195] ? io_schedule_timeout+0x160/0x160 [13091.566904] ? lock_downgrade+0x130/0x130 [13091.567156] ? __lock_contended+0x980/0x980 [13091.567461] schedule+0x128/0x220 [13091.59522er+0x29d/0x5a0 [13091.668197] ? scsi_unjam_host+0x6a0/0x6a0 [13091.668479] kthread+0x2a7/0x350 [13091.669094] ? kthread_complete_and_exit+0x20/0x20 [13091.669813] ret_from_fork+0x22/0x30 [13091.670158] [13091.670358] task:scsi_tmf_3 state:I stack:30728 pid: 642 ppid: 2 flags:0x00004000 [13091.670808] Call Trace: [13091.670991] [13091.671527] __schedule+0x72e/0x1570 [13091.671798] ? io_schedule_timeout+0x160/0x160 [13091.672481] ? lock_downgrade+0x130/0x130 [13091.672729] ? wait_for_completion_io_timeout+0x20/0x20 [13091.673181] schedule+0x128/0x220 [13091.673810] rescuer_thread+0x679/0xbb0 [13091.674133] kthread+0x2a7/0x350 [13092.175191] ? kthread_complete_and_exit+0x20/0x20 [13092.175958] ret_from_fork+0x22/0x30 [13092.176254] [13092.176443] task:kworker/7:1H state:I stack:27832 pid: 653 ppid: 2 flags:0x00004000 [13092.176947] Workqueue: 0x0 (kblockd) [13092.177216] Call Trace: [13092.177393] [13092.177952] __schedule+0x72e/0x1570 [13092.178219] ? io_schedule_timeout+0x160/0x160 [13092.178920] ? lock_downgrade+0x130/0x130 [13092.179169] ? pwq_dec_nr_in_flight+0x230/0x230 [13092.179928] schedule+0x128/0x220 [13092.180563] worker_thread+0x152/0xf90 [13092.180879] ? process_one_work+0x1520/0x1520 [13092.181592] kthread+0x2a7/0x350 [13092.182211] ? kthread_complete_and_exit+0x20/0x20 [13092.183006] ret_from_fork+0x22/0x30 [13092.183314] [13092.183475] task:kworker/11:1H state:I stack:27344 pid: 655 ppid: 2 flags:0x00004000 [13092.183967] Workqueue: 0x0 (events_highpri) [13092.184666] Call Trace: [13092.184824] [13092.185409] __schedule+0x72e/0x1570 [13092.185690] ? io_schedule_timeout+0x160/0x160 [13092.186401] ? lock_downgrade+0x130/0x130 [13092.186668] ? pwq_dec_nr_in_flight+0x230/0x230 [13092.187426] schedule+0x128/0x220 [13092.188077] worker_thread+0x152/0xf90 [13092.188427] ? process_ret_from_fork+0x22/0x30 [13092.588990] [13092.589166] task:kworker/18:1H state:I stack:27960 pid: 656 ppid: 2 flags:0x00004000 [13092.589650] Workqueue: 0x0 (events_highpri) [13092.590393] Call Trace: [13092.590592] [13092.591162] __schedule+0x72e/0x1570 [13092.591447] ? io_schedule_timeout+0x160/0x160 [13092.592178] ? lock_downgrade+0x130/0x130 [13092.592431] ? pwq_dec_nr_in_flight+0x230/0x230 [13092.593208] schedule+0x128/0x220 [13092.594069] worker_thread+0x152/0xf90 [13092.594454] ? process_one_work+0x1520/0x1520 [13092.595213] kthread+0x2a7/0x350 [13092.595889] ? kthread_complete_and_exit+0x20/0x20 [13092.596602] ret_from_fork+0x22/0x30 [13092.596969] [13092.597157] task:kworker/15:1H state:I stack:28040 pid: 657 ppid: 2 flags:0x00004000 [13092.597637] Workqueue: 0x0 (events_highpri) [13092.598405] Call Trace: [13092.598686] [13092.599313] __schedule+0x72e/0x1570 [13092.599582] ? io_schedule_timeout+0x160/0x160 [13092.600269] ? lock_downgrade+0x130/0x130 [13092.600512] ? 
pwq_dec_nr_in_flight+0x230/0x230 [13092.601228] schedule+0x128/0x220 [13092.602109] wret_from_fork+0x22/0x30 [13093.102904] [13093.103093] task:kworker/10:1H state:I stack:27960 pid: 718 ppid: 2 flags:0x00004000 [13093.103578] Workqueue: 0x0 (kblockd) [13093.103817] Call Trace: [13093.104005] [13093.104559] __schedule+0x72e/0x1570 [13093.104812] ? io_schedule_timeout+0x160/0x160 [13093.105510] ? lock_downgrade+0x130/0x130 [13093.105762] ? pwq_dec_nr_in_flight+0x230/0x230 [13093.106545] schedule+0x128/0x220 [13093.107216] worker_thread+0x152/0xf90 [13093.107525] ? process_one_work+0x1520/0x1520 [13093.108222] kthread+0x2a7/0x350 [13093.108904] ? kthread_complete_and_exit+0x20/0x20 [13093.109628] ret_from_fork+0x22/0x30 [13093.109971] [13093.110156] task:kdmflush/253:0 state:I stack:30208 pid: 721 ppid: 2 flags:0x00004000 [13093.110643] Call Trace: [13093.110793] [13093.111378] __schedule+0x72e/0x1570 [13093.111637] ? io_schedule_timeout+0x160/0x160 [13093.112442] ? lock_downgrade+0x130/0x130 [13093.112790] ? wait_for_completion_io_timeout+0x20/0x20 [13093.113162] schedule+0x128/0x220 [13093.113836] rescuer_thread+0x679/0xbb0 [13093.114161] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13093.114512] ? worker_thread+0xf90/0xf90 [13093.114740] ? __kthread_parkme+0xcc/0x200 [13093.114987] ? worker_thread+0xf90/0xf90 [13093.115392] ktk:30728 pid: 728 ppid: 2 flags:0x00004000 [13093.615996] Call Trace: [13093.616170] [13093.616723] __schedule+0x72e/0x1570 [13093.617025] ? io_schedule_timeout+0x160/0x160 [13093.617936] ? lock_downgrade+0x130/0x130 [13093.618188] ? wait_for_completion_io_timeout+0x20/0x20 [13093.618544] schedule+0x128/0x220 [13093.619205] rescuer_thread+0x679/0xbb0 [13093.619490] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13093.619806] ? worker_thread+0xf90/0xf90 [13093.620058] ? __kthread_parkme+0xcc/0x200 [13093.620309] ? worker_thread+0xf90/0xf90 [13093.620553] kthread+0x2a7/0x350 [13093.621237] ? kthread_complete_and_exit+0x20/0x20 [13093.622023] ret_from_fork+0x22/0x30 [13093.622320] [13093.622488] task:xfsalloc state:I stack:30728 pid: 747 ppid: 2 flags:0x00004000 [13093.622984] Call Trace: [13093.623154] [13093.623699] __schedule+0x72e/0x1570 [13093.623988] ? io_schedule_timeout+0x160/0x160 [13093.624797] ? lock_downgrade+0x130/0x130 [13093.625103] ? wait_for_completion_io_timeout+0x20/0x20 [13093.625493] schedule+0x128/0x220 [13093.626153] rescuer_thread+0x679/0xbb0 [13093.6ck_irqrestore+0x59/0x70 [13093.726686] ? worker_thread+0xf90/0xf90 [13093.726985] ? __kthread_parkme+0xcc/0x200 [13093.727224] ? worker_thread+0xf90/0xf90 [13093.727446] kthread+0x2a7/0x350 [13093.728049] ? kthread_complete_and_exit+0x20/0x20 [13093.728695] ret_from_fork+0x22/0x30 [13093.729036] [13094.229451] ? io_schedule_timeout+0x160/0x160 [13094.230273] ? lock_downgrade+0x130/0x130 [13094.230558] ? wait_for_completion_io_timeout+0x20/0x20 [13094.230977] schedule+0x128/0x220 [13094.231596] rescuer_thread+0x679/0xbb0 [13094.231950] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13094.232283] ? worker_thread+0xf90/0xf90 [13094.232544] ? __kthread_parkme+0xcc/0x200 [13094.232775] ? worker_thread+0xf90/0xf90 [13094.233083] kthread+0x2a7/0x350 [13094.233716] ? kthread_complete_and_exit+0x20/0x20 [13094.234447] ret_from_fork+0x22/0x30 [13094.234748] [13094.234950] task:xfs-buf/dm-0 state:I stack:30728 pid: 749 ppid: 2 flags:0x00004000 [13094.235495] Call Trace: [13094.235645] [13094.236247] __schedule+0x72e/0x1570 [13094.236543] ? io_schedule_timeout+0x160/0x160 [13094.237239] ? 
lock_downgrade+0x130/0x130 [13094.237517] ? wait_for_completion_io_timeout+0x20/0x20 [13094.237848] schedule+0x128/0x220 [13094.238478] rescuer_thread+0x679/0xbb0 [13094.238801] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13094.239147] ? worker_thread+0xf90/0xf90 [13094.239383] ? __kthread_parkme+0xcc/0x200 [13094.239615] ? worker_thread+0xf90/0xf90 [13094.239848] kthread+0x2a7/0x350 [13094.622968] task:xfs-conv/dm-0 state:I stack:30728 pid: 750 ppid: 2 flags:0x00004000 [13094.640738] Call Trace: [13094.640960] [13094.66832/0x1570 [13094.741684] ? io_schedule_timeout+0x160/0x160 [13094.742405] ? lock_downgrade+0x130/0x130 [13094.742655] ? wait_for_completion_io_timeout+0x20/0x20 [13094.743070] schedule+0x128/0x220 [13094.743657] rescuer_thread+0x679/0xbb0 [13094.743999] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13094.744334] ? worker_thread+0xf90/0xf90 [13094.744570] ? __kthread_parkme+0xcc/0x200 [13094.744800] ? worker_thread+0xf90/0xf90 [13094.745051] kthread+0x2a7/0x350 [13094.745620] ? kthread_complete_and_exit+0x20/0x20 [13094.746323] ret_from_fork+0x22/0x30 [13094.746612] [13094.746769] task:xfs-reclaim/dm- state:I stack:30728 pid: 751 ppid: 2 flags:0x00004000 [13094.747264] Call Trace: [13094.747456] [13094.748024] __schedule+0x72e/0x1570 [13094.748282] ? io_schedule_timeout+0x160/0x160 [13094.748962] ? lock_downgrade+0x130/0x130 [13094.749211] ? wait_for_completion_io_timeout+0x20/0x20 [13094.749580] schedule+0x128/0x220 [13094.750200] rescuer_thread+0x679/0xbb0 [13094.750505] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13094.750807] ? worker_thread+0xf90/0xf90 [13094.751075] ? __kthread_parkme+0xcc/0x200 [13094.751322] ? worker_thread+0xf90/0xf90 [13094.751571] kthread+0x2a7/0x350 [13094.752163] ? kthread_complete_and_exit+0x20/0x20 [13094.752914] ret_from_fork+0x22/0x30 [13[13095.153460] [13095.154210] __schedule+0x72e/0x1570 [13095.154505] ? io_schedule_timeout+0x160/0x160 [13095.155196] ? lock_downgrade+0x130/0x130 [13095.155482] ? wait_for_completion_io_timeout+0x20/0x20 [13095.155819] schedule+0x128/0x220 [13095.156467] rescuer_thread+0x679/0xbb0 [13095.156757] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13095.157101] ? worker_thread+0xf90/0xf90 [13095.157375] ? __kthread_parkme+0xcc/0x200 [13095.157620] ? worker_thread+0xf90/0xf90 [13095.157854] kthread+0x2a7/0x350 [13095.158560] ? kthread_complete_and_exit+0x20/0x20 [13095.159342] ret_from_fork+0x22/0x30 [13095.159638] [13095.159814] task:xfs-inodegc/dags:0x00004000 [13095.260135] Call Trace: [13095.260332] [13095.260848] __schedule+0x72e/0x1570 [13095.261132] ? io_schedule_timeout+0x160/0x160 [13095.261821] ? lock_downgrade+0x130/0x130 [13095.262091] ? wait_for_completion_io_timeout+0x20/0x20 [13095.262477] schedule+0x128/0x220 [13095.263109] rescuer_thread+0x679/0xbb0 [13095.263420] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13095.263722] ? worker_thread+0xf90/0xf90 [13095.263995] ? __kthread_pa/0x20 [13095.664920] ret_from_fork+0x22/0x30 [13095.665259] [13095. state:I stack:30728 pid: 754 ppid: 2 flags:0x00004000 [13095.765808] Call Trace: [13095.765996] [13095.766506] __schedule+0x72e/0x1570 [13095.766757] ? io_schedule_timeout+0x160/0x160 [13095.767449] ? lock_downgrade+0x130/0x130 [13095.767725] ? wait_for_completion_io_timeout+0x20/0x20 [13095.768076] schedule+0x128/0x220 [13095.768653] rescuer_thread+0x679/0xbb0 [13095.768985] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13095.769322] ? worker_thread+0xf90/0xf90 [13095.769565] ? __kthread_parkme+0xcc/0x200 [13095.769798] ? 
worker_thread+0xf90/0xf90 [13095.770075] kthread+0x2a7/0x350 [13095.770648] ? kthread_complete_and_exit+0x20/0x20 [13095.771349] ret_from_fork+0x22/0x30 [13095.771636] [13095.771796] task:xfs-cil/dm-0 state:I stack:30728 pid: 755 ppid: 2 flags:0x00004000 [13095.772324] Call Trace: [13095.772488] [13095.773035] __schedule+0x72e/0x1570 [13095.773296] ? io_schedule_timeout+0x160/0x160 [13095.773981] ? lock_downgrade+0x130/0x130 [13095.774225] ? wait_for_completion_io_timeout+0x20/0x20 [13095.774597] schedule+0x128/0x220 [13095.775220] rescuer_thread+0x679/0xbb0 [13095.775524] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13095.775826] [13096.176140] ? kthread_complete_and_exit+0x20/0x20 [13096.176970] ret_from_fork+0x22/0x30 [13096.177274] [13096.177469] task:xfsaild/dm-0 state:S stack:26728 pid: 756 ppid: 2 flags:0x00004000 [13096.177959] Call Trace: [13096.178129] [13096.178790] __schedule+0x72e/0x1570 [13096.179095] ? io_schedule_timeout+0x160/0x160 [13096.179788] ? timer_fixup_activate+0x2e0/0x2e0 [13096.180523] ? debug_object_deactivate+0x320/0x320 [13096.181267] schedule+0x128/0x220 [13096.181947] schedule_timeout+0x125/0x260 [13096.182652] ? usleep_range_state+0x190/0x190 [13096.183375] ? lock_downgrade+0x130/0x130 [13096.183661] ? destroy_timer_on_stack+0x20/0x20 [13096.184484] ? xfsaild+0x1f6/0x960 [xfs] [13096.185042] ? do_raw_spin_unlock+0x55/0x1f0 [13096.185761] xfsaild+0x485/0x960 [xfs] [13096.186277] ? xfsaild_push+0x1c30/0x1c30 [xfs] [13096.187230] kthread+0x2a7/0x350 [13096.188103] ? kthread_complete_and_exit+0x20/0x20 [13096.188825] ret_from_fork+0x22/0x30 [13096.189192] [13096.189407] task:kworker/22:1H state:I stack:27832 pid: 757 ppid: 2 flags:0x00004000 [13096.189864] Workqueue: 0x0 (kblockd) [13096.190154] Call Trace: [13096.190306] [13096.190853] __schedule+0x72e/0x157x220 [13096.691765] worker_thread+0x152/0xf90 [13096.692102] ? process_one_work+0x1520/0x1520 [13096.692854] kthread+0x2a7/0x350 [13096.693512] ? kthread_complete_and_exit+0x20/0x20 [13096.694548] ret_from_fork+0x22/0x30 [13096.695173] [13096.695354] task:kworker/21:1H state:I stack:27960 pid: 758 ppid: 2 flags:0x00004000 [13096.695802] Workqueue: 0x0 (events_highpri) [13096.696495] Call Trace: [13096.696668] [13096.697216] __schedule+0x72e/0x1570 [13096.697533] ? io_schedule_timeout+0x160/0x160 [13096.698218] ? lock_downgrade+0x130/0x130 [13096.698505] ? pwq_dec_nr_in_flight+0x230/0x230 [13096.699429] schedule+0x128/0x220 [13096.700106] worker_thread+0x152/0xf90 [13096.700399] ? process_one_work+0x1520/0x1520 [13096.701102] kthread+0x2a7/0x350 [13096.701776] ? kthread_complete_and_exit+0x20/0x20 [13096.702764] ret_from_fork+0x22/0x30 [13096.703231] [13096.703421] task:kworker/16:1H state:I stack:27832 pid: 789 ppid: 2 flags:0x00004000 [13096.703937] Workqueue: 0x0 (events_highpri) [13096.704621] Call Trace: [13096.704776] [13096.705347] __schedule+0x72e/0x1570 [x220 [13097.206515] worker_thread+0x152/0xf90 [13097.206811] ? process_one_work+0x1520/0x1520 [13097.207536] kthread+0x2a7/0x350 [13097.208181] ? kthread_complete_and_exit+0x20/0x20 [13097.208884] ret_from_fork+0x22/0x30 [13097.209202] [13097.209376] task:kworker/23:1H state:I stack:27832 pid: 800 ppid: 2 flags:0x00004000 [13097.209821] Workqueue: 0x0 (kblockd) [13097.210097] Call Trace: [13097.210264] [13097.210995] __schedule+0x72e/0x1570 [13097.211380] ? io_schedule_timeout+0x160/0x160 [13097.212082] ? lock_downgrade+0x130/0x130 [13097.212339] ? 
pwq_dec_nr_in_flight+0x230/0x230 [13097.213133] schedule+0x128/0x220 [13097.213792] worker_thread+0x152/0xf90 [13097.214124] ? process_one_work+0x1520/0x1520 [13097.215079] kthread+0x2a7/0x350 [13097.215804] ? kthread_complete_and_exit+0x20/0x20 [13097.216520] ret_from_fork+0x22/0x30 [13097.216854] [13097.217092] task:systemd-journal state:R running task stack:24616 pid: 830 ppid: 1 flags:0x00000000 [13097.218163] Call Trace: [13097.218338] [13097.219096] ? __schedule+0x72e/0x1570 [13097.219423] ? io_schedule_timeout+0x160/0x160 [13097.220180] ? check_prev_add+0x20f0/0x20f0 [13097.220442] ? walk_component+0x5b0/0x+0x2b8/0x300 [13097.721023] ? schedule+0x128/0x220 [13097.721656] ? schedule_hrtimeout_range_clock+0x2b8/0x300 [13097.722092] ? strncpy_from_user+0x6f/0x2d0 [13097.722356] ? kasan_set_free_info+0x20/0x40 [13097.723078] ? ep_send_events+0x9f0/0x9f0 [13097.723360] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13097.723676] ? sched_clock_cpu+0x15/0x1b0 [13097.723945] ? find_held_lock+0x33/0x120 [13097.724199] ? ksys_read+0xf9/0x1d0 [13097.724789] ? getname_flags.part.0+0x8e/0x450 [13097.725499] ? do_epoll_wait+0x12f/0x160 [13097.725748] ? do_syscall_64+0x5c/0x90 [13097.726024] ? do_syscall_64+0x69/0x90 [13097.726275] ? stream_open+0x70/0x70 [13097.726532] ? do_syscall_64+0x5c/0x90 [13097.726768] ? do_syscall_64+0x69/0x90 [13097.727031] ? lockdep_hardirqs_on+0x79/0x100 [13097.727734] ? do_syscall_64+0x5c/0x90 [13097.728008] ? asm_exc_page_fault+0x22/0x30 [13097.728260] ? lockdep_hardirqs_on+0x79/0x100 [13097.728985] ? entry_SYSCALL_64_after_hwframe+0x63/0xcd [13097.729357] [13097.729543] task:kworker/20:1H state:I stack:27960 pid: 842 ppid: 2 flags:0x00004000 [13097.730034] Workqueue: 0x0 (events_highpri) [13097.730727] Call Trace: [13097.730882] [13097.731529] __schedschedule+0x128/0x220 [13098.232584] worker_thread+0x152/0xf90 [13098.232863] ? process_one_work+0x1520/0x1520 [13098.233607] kthread+0x2a7/0x350 [13098.234245] ? kthread_complete_and_exit+0x20/0x20 [13098.235035] ret_from_fork+0x22/0x30 [13098.235427] [13098.235593] task:systemd-udevd state:S stack:24416 pid: 845 ppid: 1 flags:0x00000002 [13098.236082] Call Trace: [13098.236243] [13098.236790] __schedule+0x72e/0x1570 [13098.237061] ? io_schedule_timeout+0x160/0x160 [13098.237754] ? __lock_acquire+0xb72/0x1870 [13098.238091] schedule+0x128/0x220 [13098.238720] schedule_hrtimeout_range_clock+0x2b8/0x300 [13098.239087] ? hrtimer_nanosleep_restart+0x160/0x160 [13098.239437] ? lock_downgrade+0x130/0x130 [13098.239699] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13098.240531] ep_poll+0x7d2/0xa80 [13098.241424] ? ep_send_events+0x9f0/0x9f0 [13098.241670] ? lock_downgrade+0x130/0x130 [13098.241935] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13098.242275] do_epoll_wait+0x12f/0x160 [13098.242543] __x64_sys_epoll_wait+0x12e/0x250 [13098.243248] ? __x64_sys_epoll_pwait2+0x240/0x240 [13098.244064] do_syscall_64+0x5c/0x90 [13098.244319] ? 
do_syscall_64+0x69/0x90 [13098: 00000246 ORIG_RAX: 00000000000000e8 [13098.745514] RAX: ffffffffffffffda RBX: 0000558c15413d50 RCX: 00007f1a3e54eaca [13098.746382] RDX: 0000000000000132 RSI: 0000558c154ff5c0 RDI: 0000000000000009 [13098.747620] RBP: 0000558c15413ee0 R08: 0000000000000132 R09: 0000000000000000 [13098.748504] R10: 00000000ffffffff R11: 0000000000000246 R12: 000000000000005a [13098.749531] R13: 0000558c15413d50 R14: 0000000000000132 R15: 0000000000000009 [13098.750582] [13098.750769] task:ipmi-msghandler state:I stack:30728 pid: 931 ppid: 2 flags:0x00004000 [13098.751292] Call Trace: [13098.751486] [13098.752041] __schedule+0x72e/0x1570 [13098.752306] ? io_schedule_timeout+0x160/0x160 [13098.753289] ? lock_downgrade+0x130/0x130 [13098.753684] ? wait_for_completion_io_timeout+0x20/0x20 [13098.754083] schedule+0x128/0x220 [13098.754693] rescuer_thread+0x679/0xbb0 [13098.755042] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13098.755529] ? worker_thread+0xf90/0xf90 [13098.755769] ? __kthread_parkme+0xcc/0x200 [13098.756043] ? worker_thread+0xf90/0xf90 [13098.756295] kthread+0x2a7/0x350 [13098.756945] ? kthread_complete_and_exit+0x20/0x20 [13098.757637] ret_from_fork+0x22/0x30 [13098.758011] [13098.758198] task:kipmeout+0x160/0x160 [13099.259665] ? lock_downgrade+0x130/0x130 [13099.259991] schedule+0x128/0x220 [13099.260756] ipmi_thread+0x3b5/0x470 [ipmi_si] [13099.261589] ? flush_messages+0x40/0x40 [ipmi_si] [13099.262340] kthread+0x2a7/0x350 [13099.263015] ? kthread_complete_and_exit+0x20/0x20 [13099.263676] ret_from_fork+0x22/0x30 [13099.264015] [13099.264181] task:xfs-buf/sda1 state:I stack:30728 pid: 938 ppid: 2 flags:0x00004000 [13099.264658] Call Trace: [13099.264808] [13099.265351] __schedule+0x72e/0x1570 [13099.265659] ? io_schedule_timeout+0x160/0x160 [13099.266317] ? lock_downgrade+0x130/0x130 [13099.266537] ? wait_for_completion_io_timeout+0x20/0x20 [13099.266895] schedule+0x128/0x220 [13099.267528] rescuer_thread+0x679/0xbb0 [13099.267814] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13099.268138] ? worker_thread+0xf90/0xf90 [13099.268379] ? __kthread_parkme+0xcc/0x200 [13099.268646] ? worker_thread+0xf90/0xf90 [13099.268883] kthread+0x2a7/0x350 [13099.269494] ? kthread_complete_and_exit+0x20/0x20 [13099.270196] ret_from_fork+0x22/0x30 [13099.270501] [13099.270657] task:xfs-conv/sda1 state:I stack:30728 pid: 939 ppid: 2 flags:0x00004000 [13099.2+0x130/0x130 [13099.771447] ? wait_for_completion_io_timeout+0x20/0x20 [13099.771833] schedule+0x128/0x220 [13099.772510] rescuer_thread+0x679/0xbb0 [13099.772784] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13099.773156] ? worker_thread+0xf90/0xf90 [13099.773398] ? __kthread_parkme+0xcc/0x200 [13099.773637] ? worker_thread+0xf90/0xf90 [13099.773867] kthread+0x2a7/0x350 [13099.774527] ? kthread_complete_and_exit+0x20/0x20 [13099.775543] ret_from_fork+0x22/0x30 [13099.775831] [13099.776034] task:xfs-reclaim/sda state:I stack:30728 pid: 940 ppid: 2 flags:0x00004000 [13099.776692] Call Trace: [13099.776844] [13099.777413] __schedule+0x72e/0x1570 [13099.777681] ? io_schedule_timeout+0x160/0x160 [13099.778353] ? lock_downgrade+0x130/0x130 [13099.778633] ? wait_for_completion_io_timeout+0x20/0x20 [13099.779029] schedule+0x128/0x220 [13099.779674] rescuer_thread+0x679/0xbb0 [13099.779958] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13099.780326] ? worker_thread+0xf90/0xf90 [13099.780584] ? __kthread_parkme+0xcc/0x200 [13099.780813] ? 
worker_thread+0xf90/0xf90 [13099.781097] kthread+0x2a7/0x350 [11 ppid: 2 flags:0x00004000 [13100.281719] Call Trace: [13100.281893] [13100.282448] __schedule+0x72e/0x1570 [13100.282738] ? io_schedule_timeout+0x160/0x160 [13100.283442] ? lock_downgrade+0x130/0x130 [13100.283697] ? wait_for_completion_io_timeout+0x20/0x20 [13100.284091] schedule+0x128/0x220 [13100.284702] rescuer_thread+0x679/0xbb0 [13100.285054] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13100.285391] ? worker_thread+0xf90/0xf90 [13100.285650] ? __kthread_parkme+0xcc/0x200 [13100.285883] ? worker_thread+0xf90/0xf90 [13100.286159] kthread+0x2a7/0x350 [13100.286732] ? kthread_complete_and_exit+0x20/0x20 [13100.287436] ret_from_fork+0x22/0x30 [13100.287729] [13100.287894] task:xfs-inodegc/sda state:I stack:30728 pid: 942 ppid: 2 flags:0x00004000 [13100.288418] Call Trace: [13100.288580] [13100.289128] __schedule+0x72e/0x1570 [13100.289384] ? io_schedule_timeout+0x160/0x160 [13100.290066] ? lock_downgrade+0x130/0x130 [13100.290313] ? wait_for_completion_io_timeout+0x20/0x20 [13100.290668] schedule+0x128/0x220 [13100.291290] rescuer_thread+0x679/0xbb0 [13100.291591] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13100.291888] ? worker_thread+0xf90/0xf90 [13100.292143] ? __kthread_park[13100.792663] [13100.792839] task:xfs-log/sda1 state:I stack:30728 pid: 943 ppid: 2 flags:0x00004000 [13100.793362] Call Trace: [13100.793543] [13100.794068] __schedule+0x72e/0x1570 [13100.794311] ? io_schedule_timeout+0x160/0x160 [13100.795012] ? lock_downgrade+0x130/0x130 [13100.795258] ? wait_for_completion_io_timeout+0x20/0x20 [13100.795626] schedule+0x128/0x220 [13100.796251] rescuer_thread+0x679/0xbb0 [13100.796547] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13100.796847] ? worker_thread+0xf90/0xf90 [13100.797095] ? __kthread_parkme+0xcc/0x200 [13100.797339] ? worker_thread+0xf90/0xf90 [13100.797623] kthread+0x2a7/0x350 [13100.798210] ? kthread_complete_and_exit+0x20/0x20 [13100.798888] ret_from_fork+0x22/0x30 [13100.799223] [13100.799385] task:xfs-cil/sda1 state:I stack:30728 pid: 944 ppid: 2 flags:0x00004000 [13100.799861] Call Trace: [13100.800039] [13100.800602] __schedule+0x72e/0x1570 [13100.800872] ? io_schedule_timeout+0x160/0x160 [13100.801543] ? lock_downgrade+0x130/0x130 [13100.801794] ? wait_for_completion_io_timeout+0x20/0x20 [13100.802156] schme+0xcc/0x200 [13101.302619] ? worker_thread+0xf90/0xf90 [13101.302881] kthread+0x2a7/0x350 [13101.303543] ? kthread_complete_and_exit+0x20/0x20 [13101.304255] ret_from_fork+0x22/0x30 [13101.304559] [13101.304719] task:xfsaild/sda1 state:S stack:27864 pid: 945 ppid: 2 flags:0x00004000 [13101.305249] Call Trace: [13101.305416] [13101.305961] __schedule+0x72e/0x1570 [13101.306223] ? io_schedule_timeout+0x160/0x160 [13101.306879] ? lock_downgrade+0x130/0x130 [13101.307212] schedule+0x128/0x220 [13101.307806] xfsaild+0x657/0x960 [xfs] [13101.308314] ? xfsaild_push+0x1c30/0x1c30 [xfs] [13101.309229] kthread+0x2a7/0x350 [13101.309828] ? kthread_complete_and_exit+0x20/0x20 [13101.310536] ret_from_fork+0x22/0x30 [13101.310821] [13101.311008] task:kdmflush/253:2 state:I stack:29928 pid: 947 ppid: 2 flags:0x00004000 [13101.311500] Call Trace: [13101.311653] [13101.312198] __schedule+0x72e/0x1570 [13101.312458] ? io_schedule_timeout+0x160/0x160 [13101.313141] ? lock_downgrade+0x130/0x130 [13101.313386] ? wait_for_completion_io_timeout+0x20/0x20 [13101.313750] schedule+0x128/0x220 [13101.314375] rescuer_thread+0x679/0xbb0 [13101.314676] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13101.315025] ? 
worker_thread+0xf90/0xf90 [13101.315265] ? __kthread_park[13101.815778] [13101.815989] task:xfs-buf/dm-2 state:I stack:29928 pid: 967 ppid: 2 flags:0x00004000 [13101.816460] Call Trace: [13101.816623] [13101.817186] __schedule+0x72e/0x1570 [13101.817472] ? io_schedule_timeout+0x160/0x160 [13101.818132] ? lock_downgrade+0x130/0x130 [13101.818378] ? wait_for_completion_io_timeout+0x20/0x20 [13101.818750] schedule+0x128/0x220 [13101.819704] rescuer_thread+0x679/0xbb0 [13101.820123] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13101.820441] ? worker_thread+0xf90/0xf90 [13101.820697] ? __kthread_parkme+0xcc/0x200 [13101.820993] ? worker_thread+0xf90/0xf90 [13101.821250] kthread+0x2a7/0x350 [13101.821825] ? kthread_complete_and_exit+0x20/0x20 [13101.822568] ret_from_fork+0x22/0x30 [13101.822917] [13101.823159] task:xfs-conv/dm-2 state:I stack:29928 pid: 968 ppid: 2 flags:0x00004000 [13101.823732] Call Trace: [13101.823893] [13101.824490] __schedule+0x72e/0x1570 [13101.824809] ? io_schedule_timeout+0x160/0x160 [13101.825602] ? lock_downgrade+0x130/0x130 [13101.825912] ? wait_for_completion_io_timeout+0x20/0x20 [13101.826357] schedule+0x128/0x220 [13101.827053] rescuer_thread+0x679/0xbb0 [13101.827403] ?kthread+0x2a7/0x350 [13102.328280] ? kthread_complete_and_exit+0x20/0x20 [13102.328999] ret_from_fork+0x22/0x30 [13102.329286] [13102.329443] task:xfs-reclaim/dm- state:I stack:29928 pid: 969 ppid: 2 flags:0x00004000 [13102.329894] Call Trace: [13102.330078] [13102.330587] __schedule+0x72e/0x1570 [13102.330856] ? io_schedule_timeout+0x160/0x160 [13102.331539] ? lock_downgrade+0x130/0x130 [13102.331785] ? wait_for_completion_io_timeout+0x20/0x20 [13102.332182] schedule+0x128/0x220 [13102.332778] rescuer_thread+0x679/0xbb0 [13102.333127] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13102.333464] ? worker_thread+0xf90/0xf90 [13102.333707] ? __kthread_parkme+0xcc/0x200 [13102.333966] ? worker_thread+0xf90/0xf90 [13102.334212] kthread+0x2a7/0x350 [13102.334795] ? kthread_complete_and_exit+0x20/0x20 [13102.335492] ret_from_fork+0x22/0x30 [13102.335805] [13102.336007] task:xfs-blockgc/dm- state:I stack:29928 pid: 970 ppid: 2 flags:0x00004000 [13102.336514] Call Trace: [13102.336666] [13102.337204] __schedule+0x72e/0x1570 [13102.337463] ? io_schedule_timeout+0x160/0x160 [13102.338143] ? lock_downgrade+0x130/0x130 [13102.338392] ? wait_for_completion_io_0xf90/0xf90 [13102.839010] ? __kthread_parkme+0xcc/0x200 [13102.839273] ? worker_thread+0xf90/0xf90 [13102.839540] kthread+0x2a7/0x350 [13102.840140] ? kthread_complete_and_exit+0x20/0x20 [13102.840847] ret_from_fork+0x22/0x30 [13102.841182] [13102.841352] task:xfs-inodegc/dm- state:I stack:29928 pid: 971 ppid: 2 flags:0x00004000 [13102.841843] Call Trace: [13102.842024] [13102.842572] __schedule+0x72e/0x1570 [13102.842850] ? io_schedule_timeout+0x160/0x160 [13102.843592] ? lock_downgrade+0x130/0x130 [13102.843848] ? wait_for_completion_io_timeout+0x20/0x20 [13102.844221] schedule+0x128/0x220 [13102.844842] rescuer_thread+0x679/0xbb0 [13102.845169] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13102.845542] ? worker_thread+0xf90/0xf90 [13102.845776] ? __kthread_parkme+0xcc/0x200 [13102.846050] ? worker_thread+0xf90/0xf90 [13102.846302] kthread+0x2a7/0x350 [13102.846888] ? kthread_complete_and_exit+0x20/0x20 [13102.847605] ret_from_fork+0x22/0x30 [13102.847897] [13102.848085] task:xfs-log/dm-2 state:I stack:30728 pid: 972 ppid: 2 flags:0x00004000 [13102.848590] Call Trace: [13102.848745] [13102.849296] __schedule+0x72e/0x1570 [13102.849549] ? 
io_schedule_timeout+0x160/0xx679/0xbb0 [13103.350160] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13103.350555] ? worker_thread+0xf90/0xf90 [13103.350796] ? __kthread_parkme+0xcc/0x200 [13103.351073] ? worker_thread+0xf90/0xf90 [13103.351339] kthread+0x2a7/0x350 [13103.351963] ? kthread_complete_and_exit+0x20/0x20 [13103.352626] ret_from_fork+0x22/0x30 [13103.352902] [13103.353134] task:xfs-cil/dm-2 state:I stack:29928 pid: 973 ppid: 2 flags:0x00004000 [13103.353819] Call Trace: [13103.354009] [13103.354571] __schedule+0x72e/0x1570 [13103.354845] ? io_schedule_timeout+0x160/0x160 [13103.355520] ? lock_downgrade+0x130/0x130 [13103.355771] ? wait_for_completion_io_timeout+0x20/0x20 [13103.356162] schedule+0x128/0x220 [13103.356783] rescuer_thread+0x679/0xbb0 [13103.357118] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13103.357471] ? worker_thread+0xf90/0xf90 [13103.357704] ? __kthread_parkme+0xcc/0x200 [13103.357938] ? worker_thread+0xf90/0xf90 [13103.358223] kthread+0x2a7/0x350 [13103.358804] ? kthread_complete_and_exit+0x20/0x20 [13103.359503] ret_from_fork+0x22/0x30 [13103.359807] [13103.359991] task:xfsaild/dm-2 state:S stack:28024 pid: 974 ppid: 2 flags:0x00004000 [13103.38+0x130/0x130 [13103.861064] schedule+0x128/0x220 [13103.861709] xfsaild+0x657/0x960 [xfs] [13103.862239] ? xfsaild_push+0x1c30/0x1c30 [xfs] [13103.863226] kthread+0x2a7/0x350 [13103.863861] ? kthread_complete_and_exit+0x20/0x20 [13103.864584] ret_from_fork+0x22/0x30 [13103.864880] [13103.865069] task:rpcbind state:S stack:24320 pid: 1004 ppid: 1 flags:0x00000002 [13103.865549] Call Trace: [13103.865698] [13103.866250] __schedule+0x72e/0x1570 [13103.866564] ? io_schedule_timeout+0x160/0x160 [13103.867239] ? lock_downgrade+0x130/0x130 [13103.867534] schedule+0x128/0x220 [13103.868152] schedule_hrtimeout_range_clock+0x143/0x300 [13103.868535] ? hrtimer_nanosleep_restart+0x160/0x160 [13103.868844] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13103.869192] ? datagram_poll+0x236/0x410 [13103.869451] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13103.869754] ? tcp_shutdown+0xc0/0xc0 [13103.870043] poll_schedule_timeout.constprop.0+0xa6/0x170 [13103.870420] do_poll.constprop.0+0x459/0x860 [13103.871175] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13103.872007] ? __might_fault+0xbc/0x160 [13103.872279] do_sys_poll+0x367/0x570 [13103.872556] ? do_poll.constprop.0+0x860/0x860 [13103.873289] ? validate_chain+0x154/0xdf0 [13103.873608] ? validate_chain+0x154/0xdf0 [1310[13104.374310] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.375246] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.376049] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.376813] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.377608] ? sched_clock_cpu+0x15/0x1b0 [13104.377859] ? find_held_lock+0x33/0x120 [13104.378125] ? __lock_release+0x4c1/0xa00 [13104.378373] ? lock_downgrade+0x130/0x130 [13104.378619] ? rcu_read_unlock+0x40/0x40 [13104.378847] ? __might_fault+0xbc/0x160 [13104.379151] ? nsec_to_clock_t+0x30/0x30 [13104.379393] ? ktime_get_ts64+0x1eb/0x270 [13104.379656] ? restore_fpregs_from_fpstate+0x9c/0x180 [13104.380002] ? kernel_fpu_begin_mask+0x1d0/0x1d0 [13104.380693] __x64_sys_poll+0x15d/0x430 [13104.381001] ? __ia32_sys_poll+0x430/0x430 [13104.381267] ? syscall_enter_from_user_mode+0x21/0x70 [13104.381581] do_syscall_64+0x5c/0x90 [13104.381822] ? 
lockdep_hardirqs_on+0x79/0x100 [13104.382499] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13104.382820] RIP: 0033:0x7fe855542987 [13104.383100] RSP: 002b:00007fffac279138 EFLAGS: 00000246 ORIG_RAX: 0000000000000007 [13104.41e1f34a9830 R09: 0000000000000000 [13104.884883] R10: 00007fffac2789b0 R11: 0000000000000246 R12: 00007fffac279140 [13104.885739] R13: 00007fffac2791e0 R14: 000055e1f22197ec R15: 0000000000000020 [13104.886651] [13104.886829] task:auditd state:S stack:25832 pid: 1011 ppid: 1 flags:0x00000002 [13104.887323] Call Trace: [13104.887492] [13104.888081] __schedule+0x72e/0x1570 [13104.888341] ? io_schedule_timeout+0x160/0x160 [13104.889038] ? lock_downgrade+0x130/0x130 [13104.889347] schedule+0x128/0x220 [13104.889939] schedule_hrtimeout_range_clock+0x143/0x300 [13104.890282] ? hrtimer_nanosleep_restart+0x160/0x160 [13104.890652] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13104.891016] poll_schedule_timeout.constprop.0+0xa6/0x170 [13104.891329] do_select+0x9e4/0xd20 [13104.892052] ? select_estimate_accuracy+0x2a0/0x2a0 [13104.892729] ? validate_chain+0x154/0xdf0 [13104.893041] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.893788] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.894555] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13104.895373] ? __lock_acquire+0xb72/0x1870 [13104.895677] ? sched_clock_cpu+0x15/0x1b0 [13104.895903] ? find_held_lock+0x33/0x120 [13104.896178] ? __lock_release+0x4c1/0xa00 [13104.896438] ? lock_do[13105.396885] ? static_obj+0x62/0xc0 [13105.397524] ? sched_clock_cpu+0x15/0x1b0 [13105.397785] ? __lock_release+0x4c1/0xa00 [13105.398074] ? lock_downgrade+0x130/0x130 [13105.398329] ? rcu_read_unlock+0x40/0x40 [13105.398606] ? nsec_to_clock_t+0x30/0x30 [13105.398846] ? ktime_get_ts64+0x1eb/0x270 [13105.399107] ? __set_current_blocked+0xf0/0xf0 [13105.399781] do_pselect.constprop.0+0x117/0x1e0 [13105.400509] ? __ia32_sys_select+0x150/0x150 [13105.401271] __x64_sys_pselect6+0x138/0x250 [13105.401519] ? syscall_trace_enter.constprop.0+0x9e/0x280 [13105.401833] do_syscall_64+0x5c/0x90 [13105.402099] ? do_syscall_64+0x69/0x90 [13105.402346] ? do_syscall_64+0x69/0x90 [13105.402592] ? do_syscall_64+0x69/0x90 [13105.402818] ? lockdep_hardirqs_on+0x79/0x100 [13105.403519] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13105.403832] RIP: 0033:0x7ff42474511d [13105.404105] RSP: 002b:00007fff7b21f540 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13105.404914] RAX: ffffffffffffffda RBX: 00007fff7b21f5d0 RCX: 00007ff42474511d [13105.405717] RDX: 000055dda6a61140 RSI: 000055dda6a[13105.897831] R13: 00007fff7b21f550 R14: 0000000000000000 R15: 000055dda6a61140 [13105.907235] [13105.907398] task:auditd state:S stack:22456 pid: 1012 ppid: 1 flags:0x00000002 [13105.907872] Call Trace: [13105.908054] [13105.908579] __schedule+0x72e/0x1570 [13105.908827] ? io_schedule_timeout+0x160/0x160 [13105.909486] ? lock_downgrade+0x130/0x130 [13105.909782] schedule+0x128/0x220 [13105.910556] futex_wait_queue+0x135/0x360 [13105.910823] futex_wait+0x28f/0x600 [13105.911426] ? futex_wait_setup+0x1b0/0x1b0 [13105.911710] ? __lock_acquire+0xb72/0x1870 [13105.912006] ? find_held_lock+0x33/0x120 [13105.912250] ? sched_clock_cpu+0x15/0x1b0 [13105.912475] ? find_held_lock+0x33/0x120 [13105.912733] do_futex+0x20b/0x340 [13105.913343] ? __ia32_sys_get_robust_list+0x310/0x310 [13105.913681] ? __seccomp_filter+0x92/0x8d0 [13105.913926] __x64_sys_futex+0x174/0x440 [13105.914211] ? __x64_sys_futex_time32+0x440/0x440 [13105.914940] do_syscall_64+0x5c/0x90 [13105.915252] ? 
asm_sysvec_apic_timer_interrupt+0x16/0x20 [13105.915615] ? lockdep_hardirqs_on+0x79/0x100 [13105.916292] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13105.916635] RIP: 0033:0x7ff42469c39a [13105.916859] RSP: 002b:00007ff423ffeb70 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [10000 R08: 0000000000000000 R09: 00000000ffffffff [13106.418210] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13106.419050] R13: 000055dda666110c R14: 0000000000000001 R15: 0000000000000000 [13106.419885] [13106.420085] task:rpciod state:I stack:30728 pid: 1015 ppid: 2 flags:0x00004000 [13106.420656] Call Trace: [13106.420811] [13106.421369] __schedule+0x72e/0x1570 [13106.421661] ? io_schedule_timeout+0x160/0x160 [13106.422343] ? lock_downgrade+0x130/0x130 [13106.422599] ? wait_for_completion_io_timeout+0x20/0x20 [13106.422926] schedule+0x128/0x220 [13106.423626] rescuer_thread+0x679/0xbb0 [13106.423905] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13106.424235] ? worker_thread+0xf90/0xf90 [13106.424477] ? __kthread_parkme+0xcc/0x200 [13106.424732] ? worker_thread+0xf90/0xf90 [13106.424964] kthread+0x2a7/0x350 [13106.425584] ? kthread_complete_and_exit+0x20/0x20 [13106.426323] ret_from_fork+0x22/0x30 [13106.426640] [13106.426804] task:xprtiod state:I stack:30728 pid: 1016 ppid: 2 flags:0x00004000 [13106.427291] Call Trace: [13106.427461] [13106.428050] __schedule+0x72e/0x1570 [13106.428322] ? io_schedule_timeout+0x160/0x160 [13106.429061] ? lock_downgrade+0x130/0x130 [13106.429321] ? wait_f? worker_thread+0xf90/0xf90 [13106.930118] ? __kthread_parkme+0xcc/0x200 [13106.930388] ? worker_thread+0xf90/0xf90 [13106.930682] kthread+0x2a7/0x350 [13106.931307] ? kthread_complete_and_exit+0x20/0x20 [13106.932062] ret_from_fork+0x22/0x30 [13106.932399] [13106.932592] task:NetworkManager state:D stack:20600 pid: 1055 ppid: 1 flags:0x00000002 [13106.933087] Call Trace: [13106.933279] [13106.933816] __schedule+0x72e/0x1570 [13106.934115] ? io_schedule_timeout+0x160/0x160 [13106.934792] ? default_device_exit_batch+0xed/0x370 [13106.935559] schedule+0x128/0x220 [13106.936195] schedule_preempt_disabled+0x14/0x30 [13106.936892] __mutex_lock+0xadd/0x1470 [13106.937188] ? rtnetlink_rcv_msg+0x2d7/0x880 [13106.937889] ? mutex_lock_io_nested+0x12d0/0x12d0 [13106.938638] ? lock_downgrade+0x130/0x130 [13106.938943] ? rtnetlink_rcv_msg+0x2d7/0x880 [13106.939656] rtnetlink_rcv_msg+0x2d7/0x880 [13106.939891] ? rtnl_link_fill+0x870/0x870 [13106.940181] ? sched_clock_cpu+0x15/0x1b0 [13106.940434] netlink_rcv_skb+0x120/0x380 [13106.940702] ? rtnl_link_fill+0x870/0x870 [13106.940943] ? netlink_ack+0x9c0/0x9c0 [13106.941283] netlink_unicast+0x439/0x710 [13106.941542] ? netlink_attachskb+0x750/0x750 [13106.942261] netlink_sendmsg+0x72a/0xc80 [13106.942563] ? netlink_unicast+0x710[13107.435062] ? __ia32_sys_recvmmsg+0x210/0x210 [13107.443702] ? __lock_acquire+0xb72/0x1870 [13107.444051] ___sys_sendmsg+0xe9/0x160 [13107.444313] ? sendmsg_copy_msghdr+0x110/0x110 [13107.445381] ? lock_downgrade+0x130/0x130 [13107.445648] ? __fget_files+0x1cc/0x400 [13107.445916] ? __fget_files+0x1e4/0x400 [13107.446225] ? __fget_light+0xc3/0x240 [13107.446493] __sys_sendmsg+0xb7/0x150 [13107.446759] ? __sys_sendmsg_sock+0x20/0x20 [13107.447044] ? ip6_rcv_core+0xd20/0x1c60 [13107.447328] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13107.447688] do_syscall_64+0x5c/0x90 [13107.447934] ? do_syscall_64+0x69/0x90 [13107.448212] ? lockdep_hardirqs_on+0x79/0x100 [13107.448874] ? do_syscall_64+0x69/0x90 [13107.449146] ? 
asm_common_interrupt+0x22/0x40 [13107.449815] ? lockdep_hardirqs_on+0x79/0x100 [13107.450540] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13107.450859] RIP: 0033:0x7f500d54fa7d [13107.451107] RSP: 002b:00007ffd609318d0 EFLAGS: 00000293 ORIG_RAX: 000000000000002e [13107.451950] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f500d54fa7d [13107.452768] RDX: 0000000000000000 RSI: 00007ffd6093191[13107.944909] R13: 00007ffd60931a88 R14: 0000000000000000 R15: 000055be9786f040 [13107.954185] [13107.954358] task:gmain state:S stack:24504 pid: 1105 ppid: 1 flags:0x00000002 [13107.954854] Call Trace: [13107.955041] [13107.955563] __schedule+0x72e/0x1570 [13107.955810] ? io_schedule_timeout+0x160/0x160 [13107.956529] ? lock_downgrade+0x130/0x130 [13107.956829] schedule+0x128/0x220 [13107.957448] schedule_hrtimeout_range_clock+0x143/0x300 [13107.957767] ? hrtimer_nanosleep_restart+0x160/0x160 [13107.958118] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13107.958452] ? inotify_poll+0xf4/0x150 [13107.958729] poll_schedule_timeout.constprop.0+0xa6/0x170 [13107.959093] do_poll.constprop.0+0x459/0x860 [13107.959782] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13107.960640] ? __might_fault+0xbc/0x160 [13107.960914] do_sys_poll+0x367/0x570 [13107.961180] ? do_poll.constprop.0+0x860/0x860 [13107.961845] ? arch_stack_walk+0x9e/0xf0 [13107.962183] ? kmem_cache_free+0x152/0x400 [13107.962452] ? stack_trace_save+0x91/0xd0 [13107.962706] ? filter_irq_stacks+0xa0/0xa0 [13107.962936] ? kmem_cache_free+0x152/0x400 [131[13108.455283] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13108.464228] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13108.465064] ? __lock_acquire+0xb72/0x1870 [13108.465338] ? sched_clock_cpu+0x15/0x1b0 [13108.465613] ? find_held_lock+0x33/0x120 [13108.465850] ? __lock_release+0x4c1/0xa00 [13108.466138] ? lock_downgrade+0x130/0x130 [13108.466382] ? rcu_read_unlock+0x40/0x40 [13108.466636] ? sched_clock_cpu+0x15/0x1b0 [13108.466889] ? nsec_to_clock_t+0x30/0x30 [13108.467163] ? ktime_get_ts64+0x1eb/0x270 [13108.467454] __x64_sys_poll+0x15d/0x430 [13108.467716] ? __ia32_sys_poll+0x430/0x430 [13108.467938] ? ktime_get_coarse_real_ts64+0x130/0x170 [13108.468315] do_syscall_64+0x5c/0x90 [13108.468557] ? do_syscall_64+0x69/0x90 [13108.468786] ? do_syscall_64+0x69/0x90 [13108.469053] ? do_syscall_64+0x69/0x90 [13108.469307] ? do_syscall_64+0x69/0x90 [13108.469534] ? lockdep_hardirqs_on+0x79/0x100 [13108.470209] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13108.470526] RIP: 0033:0x7f500d5429bf [13108.470773] RSP: 002b:00007f500bffdf80 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13108.471586] RAX: ffffffffffffffda RBX: 00007f500d6ff071 RCX: 00007f500d5429bf [13108.472403] RDX: 0000000000000f9c RSI: 0000000000000002 RDI: 000055be978561d0 [13108.473221] RBP: 000055be978561d0 R08: 0000000000000000 R09: 0000000000000000 [13108.474090] R10: 00007ffd609f9080 R11: 0000000000000293 R12: 0000000000000002 [13108.474878] R13: 0[13108.975350] [13108.975880] __schedule+0x72e/0x1570 [13108.976175] ? io_schedule_timeout+0x160/0x160 [13108.976841] ? find_held_lock+0x33/0x120 [13108.977133] ? __lock_release+0x4c1/0xa00 [13108.977388] schedule+0x128/0x220 [13108.977958] schedule_hrtimeout_range_clock+0x2b8/0x300 [13108.978299] ? hrtimer_nanosleep_restart+0x160/0x160 [13108.978671] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13108.979404] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13108.979754] ? lockdep_hardirqs_on+0x79/0x100 [13108.980428] ? 
unix_poll+0x26a/0x3b0 [13108.980739] poll_schedule_timeout.constprop.0+0xa6/0x170 [13108.981094] do_poll.constprop.0+0x459/0x860 [13108.981782] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13108.982572] ? __might_fault+0xbc/0x160 [13108.982838] do_sys_poll+0x367/0x570 [13108.983143] ? do_poll.constprop.0+0x860/0x860 [13108.983839] ? copyout+0x83/0xa0 [13108.984481] ? _copy_to_iter+0x279/0x10c0 [13108.984737] ? lock_downgrade+0x130/0x130 [13108.984971] ? copy_page_from_iter+0x7b0/0x7b0 [13108.985707] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13108.986502] ? poll_schedule_timeo[13109.478651] ? sched_clock_cpu+0x15/0x1b0 [13109.487270] ? find_held_lock+0x33/0x120 [13109.487536] ? __lock_acquire+0xb72/0x1870 [13109.487859] ? sched_clock_cpu+0x15/0x1b0 [13109.488140] ? find_held_lock+0x33/0x120 [13109.488403] ? __lock_release+0x4c1/0xa00 [13109.488663] ? lock_downgrade+0x130/0x130 [13109.488904] ? rcu_read_unlock+0x40/0x40 [13109.489169] ? fsnotify_perm.part.0+0x14a/0x4c0 [13109.489887] __x64_sys_poll+0xd2/0x430 [13109.490182] ? __ia32_sys_poll+0x430/0x430 [13109.490420] ? ktime_get_coarse_real_ts64+0x130/0x170 [13109.490762] do_syscall_64+0x5c/0x90 [13109.491037] ? do_syscall_64+0x69/0x90 [13109.491281] ? lockdep_hardirqs_on+0x79/0x100 [13109.491912] ? do_syscall_64+0x69/0x90 [13109.492198] ? lockdep_hardirqs_on+0x79/0x100 [13109.492859] ? do_syscall_64+0x69/0x90 [13109.493128] ? asm_sysvec_apic_timer_interrupt+0x16/0x20 [13109.493443] ? lockdep_hardirqs_on+0x79/0x100 [13109.494094] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13109.494433] RIP: 0033:0x7f500d5429bf [13109.494643] RSP: 002b:00007f500b7fcf80 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13109.495453] RAX: ffffffffffffffda RBX: 00007f500d6ff071 RCX: 10: 00007ffd609f9080 R11: 0000000000000293 R12: 0000000000000003 [13109.996728] R13: 0000000000000003 R14: 00007f500b7fcff0 R15: 000055be97868ee0 [13109.997603] [13109.997771] task:irqbalance state:S stack:24944 pid: 1066 ppid: 1 flags:0x00000002 [13109.998281] Call Trace: [13109.998448] [13109.998963] __schedule+0x72e/0x1570 [13109.999233] ? io_schedule_timeout+0x160/0x160 [13109.999877] ? lock_downgrade+0x130/0x130 [13110.000216] schedule+0x128/0x220 [13110.000802] schedule_hrtimeout_range_clock+0x143/0x300 [13110.001146] ? hrtimer_nanosleep_restart+0x160/0x160 [13110.001498] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13110.001827] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13110.002164] ? unix_poll+0x2f9/0x3b0 [13110.002419] poll_schedule_timeout.constprop.0+0xa6/0x170 [13110.002743] do_poll.constprop.0+0x459/0x860 [13110.003429] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13110.004228] ? __might_fault+0xbc/0x160 [13110.004489] do_sys_poll+0x367/0x570 [13110.004758] ? do_poll.constprop.0+0x860/0x860 [13110.005479] ? is_bpf_text_address+0x52/0xe0 [13110.006201] ? is_bpf_text_address+0x6a/0xe0 [13110.006852] ? kernel_text_address+0x11e/0x140 [13110.007526] ? __kernel_text_address+0xe/0x40 [13110.008243] ? unwind_get_return_address+0x5ap.0+0x170/0x170 [13110.509293] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13110.510104] ? __lock_acquire+0xb72/0x1870 [13110.510390] ? sched_clock_cpu+0x15/0x1b0 [13110.510644] ? find_held_lock+0x33/0x120 [13110.510885] ? __lock_release+0x4c1/0xa00 [13110.511181] ? lock_downgrade+0x130/0x130 [13110.511423] ? rcu_read_unlock+0x40/0x40 [13110.511672] ? sched_clock_cpu+0x15/0x1b0 [13110.511921] ? nsec_to_clock_t+0x30/0x30 [13110.512177] ? ktime_get_ts64+0x1eb/0x270 [13110.512458] __x64_sys_poll+0x15d/0x430 [13110.512725] ? 
__ia32_sys_poll+0x430/0x430 [13110.512958] ? ktime_get_coarse_real_ts64+0x130/0x170 [13110.513322] do_syscall_64+0x5c/0x90 [13110.513596] ? do_syscall_64+0x69/0x90 [13110.513839] ? lockdep_hardirqs_on+0x79/0x100 [13110.514498] ? do_syscall_64+0x69/0x90 [13110.514764] ? do_syscall_64+0x69/0x90 [13110.515022] ? lockdep_hardirqs_on+0x79/0x100 [13110.515695] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13110.515993] RIP: 0033:0x7fe915b429bf [13110.516270] RSP: 002b:00007ffd85621550 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13110.517105] RAX: ffffffffffffffda RBX: 00007fe915eee071 RCX: 00007fe915b429bf [13110.517910] RDX: 00000000000026f8 RSI: 0000000000000002 RDI: 000013: 0000000000000002 R14: 00007ffd856215c0 R15: 000055b4a4d9b4a0 [13111.019246] [13111.019406] task:gmain state:S stack:28800 pid: 1117 ppid: 1 flags:0x00000002 [13111.019940] Call Trace: [13111.020130] [13111.020683] __schedule+0x72e/0x1570 [13111.020953] ? io_schedule_timeout+0x160/0x160 [13111.021662] ? find_held_lock+0x33/0x120 [13111.021909] ? sched_clock_cpu+0x15/0x1b0 [13111.022205] schedule+0x128/0x220 [13111.022813] schedule_hrtimeout_range_clock+0x2b8/0x300 [13111.023153] ? hrtimer_nanosleep_restart+0x160/0x160 [13111.023496] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13111.024280] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13111.024629] poll_schedule_timeout.constprop.0+0xa6/0x170 [13111.024939] do_poll.constprop.0+0x459/0x860 [13111.025660] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13111.026429] ? __might_fault+0xbc/0x160 [13111.026703] do_sys_poll+0x367/0x570 [13111.026954] ? do_poll.constprop.0+0x860/0x860 [13111.027628] ? sched_clock_cpu+0x15/0x1b0 [13111.027919] ? __lock_release+0x4c1/0xa00 [13111.028179] ? lock_downgrade+0x130/0x130 [13111.028452] ? copyout+0x83/0xa0 [13111.029043] ? _copy_to_iter+0x279/0x10c0 [13111.029301] ? poll_schedule_timeout.const+0xb72/0x1870 [13111.529844] ? sched_clock_cpu+0x15/0x1b0 [13111.530147] ? find_held_lock+0x33/0x120 [13111.530399] ? __lock_release+0x4c1/0xa00 [13111.530662] ? lock_downgrade+0x130/0x130 [13111.530889] ? rcu_read_unlock+0x40/0x40 [13111.531138] ? __seccomp_filter+0x92/0x8d0 [13111.531415] __x64_sys_poll+0xd2/0x430 [13111.531666] ? __ia32_sys_poll+0x430/0x430 [13111.531897] ? ktime_get_coarse_real_ts64+0x130/0x170 [13111.532273] do_syscall_64+0x5c/0x90 [13111.532533] ? do_syscall_64+0x69/0x90 [13111.532784] ? lockdep_hardirqs_on+0x79/0x100 [13111.533452] ? do_syscall_64+0x69/0x90 [13111.533697] ? do_syscall_64+0x69/0x90 [13111.533935] ? lockdep_hardirqs_on+0x79/0x100 [13111.534587] ? do_syscall_64+0x69/0x90 [13111.534844] ? do_syscall_64+0x69/0x90 [13111.535126] ? asm_exc_page_fault+0x22/0x30 [13111.535359] ? lockdep_hardirqs_on+0x79/0x100 [13111.536043] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13111.536383] RIP: 0033:0x7fe915b429bf [13111.536607] RSP: 002b:00007fe9159fed00 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13111.537414] RAX: ffffffffffffffda RBX: 00007fe915eee071 RCX: 00007fe915b429bf [13111.538248] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 000055b4a4d8c600 [13111.539112] RBP: 000055b4a4d8c600 R08: 0000000000000000 R09: 0000000000000000 [13111[13112.039844] task:rsyslogd state:S stack:24408 pid: 1068 ppid: 1 flags:0x00000002 [13112.040415] Call Trace: [13112.040597] [13112.041234] __schedule+0x72e/0x1570 [13112.041546] ? io_schedule_timeout+0x160/0x160 [13112.042283] ? lock_downgrade+0x130/0x130 [13112.042628] schedule+0x128/0x220 [13112.043366] schedule_hrtimeout_range_clock+0x143/0x300 [13112.043763] ? 
hrtimer_nanosleep_restart+0x160/0x160 [13112.044168] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13112.044549] ? poll_select_finish+0x480/0x480 [13112.045374] poll_schedule_timeout.constprop.0+0xa6/0x170 [13112.045772] do_select+0x9e4/0xd20 [13112.046568] ? sched_clock_cpu+0x15/0x1b0 [13112.046869] ? find_held_lock+0x33/0x120 [13112.047204] ? select_estimate_accuracy+0x2a0/0x2a0 [13112.047916] ? task_numa_fault+0xab/0xd00 [13112.048299] ? task_numa_free+0x550/0x550 [13112.048646] ? do_numa_page+0x731/0xfd0 [13112.048955] ? mark_lock.part.0+0xca/0xa40 [13112.049280] ? check_prev_add+0x20f0/0x20f0 [13112.049568] ? numa_migrate_prep+0x210/0x210 [13112.050341] ? validate_chain+0x154/0xdf0 [13112.050678] ? __lock_acquire+0xb72/0x1870 [13112.050981] ? sched_clock_cpu+0x15/0x1b0 [13112.051311] ? find_held_lock+0x33/0x120 [13112.051593] ? __lock_release+0x4c1/0xa00 [13112.051868] ? lock_downgrade+0x130/0x130 [13? __lock_release+0x4c1/0xa00 [13112.552545] ? lock_downgrade+0x130/0x130 [13112.552815] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13112.553631] ? _raw_spin_unlock_irq+0x24/0x50 [13112.554313] ? lockdep_hardirqs_on+0x79/0x100 [13112.554961] ? _raw_spin_unlock_irq+0x2f/0x50 [13112.555621] ? set_user_sigmask+0x1be/0x250 [13112.555883] ? __set_current_blocked+0xf0/0xf0 [13112.556543] ? __lock_release+0x4c1/0xa00 [13112.556812] do_pselect.constprop.0+0x117/0x1e0 [13112.557494] ? __ia32_sys_select+0x150/0x150 [13112.558237] ? ktime_get_coarse_real_ts64+0x130/0x170 [13112.558578] ? lockdep_hardirqs_on+0x79/0x100 [13112.559284] __x64_sys_pselect6+0x138/0x250 [13112.559530] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13112.559867] do_syscall_64+0x5c/0x90 [13112.560155] ? do_syscall_64+0x69/0x90 [13112.560397] ? lockdep_hardirqs_on+0x79/0x100 [13112.561080] ? do_syscall_64+0x69/0x90 [13112.561323] ? asm_exc_page_fault+0x22/0x30 [13112.561567] ? lockdep_hardirqs_on+0x79/0x100 [13112.562224] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13112.562561] RIP: 0033:0x7f4c34145292 [13112.562777] RSP: 002b:00007ffdb32651[13113.063253] RBP: 00007ffdb32651f0 R08: 00007ffdb3265170 R09: 00007ffdb3265180 [13113.064095] R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000000 [13113.064902] R13: 0000562ebe937240 R14: 0000000000000000 R15: 00007f4c3445c000 [13113.065768] [13113.065931] task:in:imjournal state:D stack:23496 pid: 1079 ppid: 1 flags:0x00004002 [13113.066433] Call Trace: [13113.066600] [13113.067174] __schedule+0x72e/0x1570 [13113.067434] ? io_schedule_timeout+0x160/0x160 [13113.068127] ? __lock_acquire+0xb72/0x1870 [13113.068410] schedule+0x128/0x220 [13113.068996] schedule_timeout+0x1a9/0x260 [13113.069294] ? usleep_range_state+0x190/0x190 [13113.070278] ? lock_downgrade+0x130/0x130 [13113.070542] ? mark_held_locks+0xa5/0xf0 [13113.070813] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13113.071543] ? _raw_spin_unlock_irq+0x24/0x50 [13113.072274] __wait_for_common+0x37c/0x530 [13113.072515] ? usleep_range_state+0x190/0x190 [13113.073264] ? out_of_line_wait_on_bit_timeout+0x170/0x170 [13113.073614] ? lockdep_init_map_type+0x2ff/0x820 [13113.074359] stop_two_cpus+0x1d3/0x250 [13113.074625] ? multi_cpu_stop+0x370/0x370 [13113.074863] ? __lock_acquire+0xb72/0x1870 [13113.075121] ? __migrate_swap_task.part.0+0x520/0x520 [13113.075478] ? stop_machine_yield+0x10/0x10 [13113.075725] ? migrate_swap+0x2db/0x520 [13113.1031? __wait_for_common+0x9e/0x530 [13113.576458] migrate_swap+0x2db/0x520 [13113.576751] ? default_wake_function+0x60/0x60 [13113.577433] ? 
cpumask_next+0x59/0x80 [13113.577720] ? task_numa_find_cpu+0x152/0x460 [13113.578425] task_numa_migrate.isra.0+0xbab/0x1630 [13113.579155] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13113.579895] ? memset+0x20/0x50 [13113.580510] ? task_numa_find_cpu+0x460/0x460 [13113.581307] ? place_entity+0x1d0/0x1d0 [13113.581604] task_numa_fault+0x788/0xd00 [13113.581905] ? task_numa_free+0x550/0x550 [13113.582179] ? do_numa_page+0x731/0xfd0 [13113.582448] do_numa_page+0x990/0xfd0 [13113.582720] ? numa_migrate_prep+0x210/0x210 [13113.583444] __handle_mm_fault+0xdbc/0x1400 [13113.583761] ? vm_iomap_memory+0x150/0x150 [13113.583998] ? find_held_lock+0x33/0x120 [13113.584354] handle_mm_fault+0x16b/0x5e0 [13113.584589] do_user_addr_fault+0x34b/0xd90 [13113.584856] ? rcu_read_lock_sched_held+0x43/0x80 [13113.585551] exc_page_fault+0x5a/0xe0 [13113.585844] asm_exc_page_fault+0x22/0x30 [13113.586104] RIP: 0010:__get_user_8+0x1c/0x30 [13113.586758] Code: 00 c3 cc cc cc cc 0f 1f 84 00 00 00 00 00 48 ba f9 ef ff ff ff 7f 00 00 48 39 d0 0f 83 a0 00 00 00 48 19 d2 48 21 d0 0f 1f 00 <48> 8b 10 31DX: ffffffffffffffff RSI: ffffffff908e1860 RDI: ffff888442249d60 [13114.088566] RBP: ffffc90006ebfda0 R08: 0000000000000000 R09: ffffffff93b7f19f [13114.089427] R10: fffffbfff276fe33 R11: 0000000000000001 R12: ffff888442248000 [13114.090255] R13: ffffc90006ebff58 R14: ffffc90006ebffd8 R15: 00007f4c3413ec1f [13114.091173] rseq_get_rseq_cs+0x5a/0x5d0 [13114.091449] rseq_ip_fixup+0xa7/0x5a0 [13114.091697] ? rseq_update_cpu_id+0x2d0/0x2d0 [13114.092364] ? __blkcg_punt_bio_submit+0x1b0/0x1b0 [13114.093084] ? unlock_page_memcg+0x230/0x230 [13114.093797] __rseq_handle_notify_resume+0x58/0xd0 [13114.094495] exit_to_user_mode_loop+0xe8/0x160 [13114.095225] exit_to_user_mode_prepare+0x103/0x160 [13114.095886] syscall_exit_to_user_mode+0x19/0x50 [13114.096556] do_syscall_64+0x69/0x90 [13114.096821] ? do_syscall_64+0x69/0x90 [13114.097094] ? lockdep_hardirqs_on+0x79/0x100 [13114.097760] ? do_syscall_64+0x69/0x90 [13114.097990] ? do_syscall_64+0x69/0x90 [13114.098251] ? lockdep_hardirqs_on+0x79/0x100 [13114.098900] ? do_syscall_64+0x69/0x90 [13114.099158] ? do_syscall_64+0x69/0x90 [13114.099407] ? do_syscall_64+0x69/0x90 [13c33124a80 EFLAGS: 00000293 ORIG_RAX: 0000000000000001 [13114.600531] RAX: 000000000000007b RBX: 000000000000007b RCX: 00007f4c3413ec1f [13114.601369] RDX: 000000000000007b RSI: 00007f4c240352e0 RDI: 0000000000000008 [13114.602172] RBP: 00007f4c240352e0 R08: 0000000000000000 R09: 0000000000000000 [13114.602979] R10: 0000000000000010 R11: 0000000000000293 R12: 000000000000007b [13114.603813] R13: 00007f4c2400b890 R14: 000000000000007b R15: 00007f4c341f69e0 [13114.604721] [13114.604888] task:rs:main Q:Reg state:S stack:24912 pid: 1080 ppid: 1 flags:0x00000002 [13114.605374] Call Trace: [13114.605544] [13114.606086] __schedule+0x72e/0x1570 [13114.606340] ? io_schedule_timeout+0x160/0x160 [13114.606977] ? lock_downgrade+0x130/0x130 [13114.607310] schedule+0x128/0x220 [13114.607890] futex_wait_queue+0x135/0x360 [13114.608171] futex_wait+0x28f/0x600 [13114.608782] ? futex_wait_setup+0x1b0/0x1b0 [13114.609017] ? mark_lock.part.0+0xca/0xa40 [13114.609283] ? xfs_file_buffered_write+0x6f9/0x900 [xfs] [13114.609861] ? check_prev_add+0x20f0/0x20f0 [13114.610121] ? xfs_iunlock+0x316/0x490 [xfs] [13114.610999] ? xfs_file_buffered_write+0x6f9/0x900 [xfs] [13114.611562] ? sched_clock_cpu+0[13115.112088] ? rcu_read_unlock+0x40/0x40 [13115.112348] ? 
ksys_write+0x18a/0x1d0 [13115.112588] __x64_sys_futex+0x174/0x440 [13115.112847] ? __x64_sys_futex_time32+0x440/0x440 [13115.113528] ? lockdep_hardirqs_on+0x79/0x100 [13115.114226] ? ktime_get_coarse_real_ts64+0x130/0x170 [13115.114552] do_syscall_64+0x5c/0x90 [13115.114837] ? do_syscall_64+0x69/0x90 [13115.115112] ? lockdep_hardirqs_on+0x79/0x100 [13115.115762] ? do_syscall_64+0x69/0x90 [13115.115986] ? do_syscall_64+0x69/0x90 [13115.116242] ? lockdep_hardirqs_on+0x79/0x100 [13115.116899] ? do_syscall_64+0x69/0x90 [13115.117190] ? do_syscall_64+0x69/0x90 [13115.117425] ? lockdep_hardirqs_on+0x79/0x100 [13115.118117] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13115.118427] RIP: 0033:0x7f4c3409c39a [13115.118651] RSP: 002b:00007f4c32d24b00 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13115.119481] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4c3409c39a [13115.120309] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 0000562ec019eab0 [13115.121110] RBP: 0000000000000001 R08: 0000000000000000 R09: 00000000ffffffff [13115.121890] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13115.122715] R13: 0000562ec019eab0 R14: 0000000000000000 R15: [13115.623350] __schedule+0x72e/0x1570 [13115.623621] ? io_schedule_timeout+0x160/0x160 [13115.624271] ? __lock_acquire+0xb72/0x1870 [13115.624555] schedule+0x128/0x220 [13115.625204] schedule_hrtimeout_range_clock+0x2b8/0x300 [13115.625551] ? hrtimer_nanosleep_restart+0x160/0x160 [13115.625873] ? lock_downgrade+0x130/0x130 [13115.626199] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13115.626971] ep_poll+0x7d2/0xa80 [13115.627633] ? ep_send_events+0x9f0/0x9f0 [13115.627886] ? sched_clock_cpu+0x15/0x1b0 [13115.628150] ? find_held_lock+0x33/0x120 [13115.628396] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13115.628735] ? __lock_release+0x4c1/0xa00 [13115.628986] do_epoll_wait+0x12f/0x160 [13115.629292] __x64_sys_epoll_wait+0x12e/0x250 [13115.629965] ? __x64_sys_epoll_pwait2+0x240/0x240 [13115.630642] ? lockdep_hardirqs_on+0x79/0x100 [13115.631360] ? ktime_get_coarse_real_ts64+0x130/0x170 [13115.631732] do_syscall_64+0x5c/0x90 [13115.631981] ? do_syscall_64+0x69/0x90 [13115.632242] ? lockdep_hardirqs_on+0x79/0x100 [13115.632898] ? ? lockdep_hardirqs_on+0x79/0x100 [13116.133846] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13116.134218] RIP: 0033:0x7f1930d4eaca [13116.134466] RSP: 002b:00007fff0df10b78 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8 [13116.135323] RAX: ffffffffffffffda RBX: 000055915d699d30 RCX: 00007f1930d4eaca [13116.136187] RDX: 000000000000001c RSI: 000055915d69a890 RDI: 0000000000000004 [13116.136999] RBP: 000055915d699ec0 R08: 000000000000001c R09: b854ebeb898a0a77 [13116.137809] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000000000a0 [13116.138659] R13: 000055915d699d30 R14: 000000000000001c R15: 0000000000000010 [13116.139583] [13116.139767] task:dbus-broker-lau state:S stack:24728 pid: 1093 ppid: 1 flags:0x00000002 [13116.140235] Call Trace: [13116.140389] [13116.140894] __schedule+0x72e/0x1570 [13116.141208] ? io_schedule_timeout+0x160/0x160 [13116.141861] ? __lock_acquire+0xb72/0x1870 [13116.142196] schedule+0x128/0x220 [13116.142789] schedule_hrtimeout_range_clock+0x2b8/0x300 [13116.143133] ? hrtimer_nanosleep_restart+0x160/0x160 [13116.143441] ? lock_downgrade+0x130/0x130 [13116.143710] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13116.144494] ep_poll+0x7d2/0xa80 [13116.145175] ? ep_send_events+0x9f0/0x9f0 [13116.145419] ? sched_clock_cpu+0x15/0x1b0 [13116.145653] ? 
fwait+0x12e/0x250 [13116.646516] ? __x64_sys_epoll_pwait2+0x240/0x240 [13116.647259] ? lockdep_hardirqs_on+0x79/0x100 [13116.647915] ? ktime_get_coarse_real_ts64+0x130/0x170 [13116.648305] do_syscall_64+0x5c/0x90 [13116.648569] ? lockdep_hardirqs_on+0x79/0x100 [13116.649264] ? do_syscall_64+0x69/0x90 [13116.649508] ? do_syscall_64+0x69/0x90 [13116.649771] ? lockdep_hardirqs_on+0x79/0x100 [13116.650440] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13116.650802] RIP: 0033:0x7f8772f4eaca [13116.651022] RSP: 002b:00007ffd0d7afee8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8 [13116.651817] RAX: ffffffffffffffda RBX: 000055d59fba89a0 RCX: 00007f8772f4eaca [13116.652620] RDX: 0000000000000016 RSI: 000055d59fbf68e0 RDI: 0000000000000005 [13116.653438] RBP: 000055d59fba8b30 R08: 0000000000000016 R09: 0000000000000000 [13116.654296] R10: 00000000ffffffff R11: 0000000000000246 R12: 000000000000006e [13116.655140] R13: 000055d59fba89a0 R14: 0000000000000016 R15: 000000000000000b [13116.655995] [13116.656195] task:dbus-broker state:S stack:25080 pid: 1121 ppid: 1093 flags:0x00000002 [13116.656672] Call Trace: [13116.656838] [13117.171652] task:sshd state:S stack:24296 pid: 1126 ppid: 1 flags:0x00000002 ? __lock_release+0x4c1/0xa00 [13117.672484] ? lock_downgrade+0x130/0x130 [13117.672750] schedule+0x128/0x220 [13117.673379] schedule_hrtimeout_range_clock+0x2b8/0x300 [13117.673861] ? hrtimer_nanosleep_restart+0x160/0x160 [13117.674218] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13117.674543] ? tcp_poll+0x5bd/0xd20 [13117.675146] ? lock_downgrade+0x130/0x130 [13117.675426] ? tcp_shutdown+0xc0/0xc0 [13117.675692] poll_schedule_timeout.constprop.0+0xa6/0x170 [13117.676031] do_select+0x9e4/0xd20 [13117.676756] ? select_estimate_accuracy+0x2a0/0x2a0 [13117.677527] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13117.678376] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13117.679151] ? check_prev_add+0x20f0/0x20f0 [13117.679459] ? mark_lock.part.0+0xca/0xa40 [13117.679717] ? __lock_acquire+0xb72/0x1870 [13117.679997] ? sched_clock_cpu+0x15/0x1b0 [13117.680270] ? find_held_lock+0x33/0x120 [13117.680549] ? __lock_release+0x4c1/0xa00 [13117.680814] ? lock_downgrade+0x130/0x130 [13117.681099] ? __might_fault+0xbc/0x160 [13117.681356] ? core_sys_select+0x30c/0x710 [13117.681608] core_sys_select+0x30c/0x710 [13117.681888] ? __x64_sys_poll+0ck_irq+0x24/0x50 [13118.082851] ? lockdep_hardirqs_on+0x79/0x100 [13118.083550] ? _raw_spin_unlock_irq+0x2f/0x50 [13118.084288] ? set_user_sigmask+0x1be/0? __set_current_blocked+0xf0/0xf0 [13118.185128] ? __lock_release+0x4c1/0xa00 [13118.185386] do_pselect.constprop.0+0x117/0x1e0 [13118.186022] ? __ia32_sys_select+0x150/0x150 [13118.186720] ? ktime_get_coarse_real_ts64+0x130/0x170 [13118.187086] ? lockdep_hardirqs_on+0x79/0x100 [13118.187757] __x64_sys_pselect6+0x138/0x250 [13118.187996] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13118.188342] do_syscall_64+0x5c/0x90 [13118.188588] ? lockdep_hardirqs_on+0x79/0x100 [13118.189321] ? do_syscall_64+0x69/0x90 [13118.189565] ? do_syscall_64+0x69/0x90 [13118.189825] ? do_syscall_64+0x69/0x90 [13118.190109] ? asm_exc_page_fault+0x22/0x30 [13118.190357] ? 
lockdep_hardirqs_on+0x79/0x100 [13118.190984] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13118.191298] RIP: 0033:0x7f033fb45224 [13118.191533] RSP: 002b:00007ffe2bfae5d0 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13118.192388] RAX: ffffffffffffffda RBX: 0000000000000010 RCX: 00007f033fb45224 [13118.193240] RDX: 0000000000000000 RSI: 000055ae5eb30280 RDI: 0000000000000007 [13118.194034] RBP: 13: 00000000000002d8 R14: 0000000000000000 R15: 000055ae5e2bbfe0 [13118.595591] [13118.595795] task:systemd state:S stack:24352 pid: 1132 ppid: 1 flags:0x00000002 [13118.596288] Call Trace: [13118.596455] [13118.597053] __schedule+0x72e/0x1570 [13118.597367] ? io_schedule_timeout+0x160/0x160 [13118.598042] ? __lock_acquire+0xb72/0x1870 [13118.598420] schedule+0x128/0x220 [13118.599047] schedule_hrtimeout_range_clock+0x2b8/0x300 [13118.599397] ? hrtimer_nanosleep_restart+0x160/0x160 [13118.599699] ? lock_downgrade+0x130/0x130 [13118.599989] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13118.600880] ep_poll+0x7d2/0xa80 [13118.601773] ? ep_send_events+0x9f0/0x9f0 [13118.602020] ? sched_clock_cpu+0x15/0x1b0 [13118.602297] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13118.602631] ? __lock_release+0x4c1/0xa00 [13118.602915] do_epoll_wait+0x12f/0x160 [13118.603210] __x64_sys_epoll_wait+0x12e/0x250 [13118.604056] ? __x64_sys_epoll_pwait2+0x240/0x240 [13118.604820] ? ktime_get_coarse_real_ts64+0x130/0x170 [13118.605164] ? lockdep_hardirqs_on+0x79/0x100 [13118.605903] ? ktime_get_coarse_real_ts64+0x130/0x170 [13118.606419] do_syscall_64+0x5c/0x90 [13118.606687] ? 0x69/0x90 [13119.107368] ? do_syscall_64+0x69/0x90 [13119.107632] ? lockdep_hardirqs_on+0x79/0x100 [13119.108376] ? do_syscall_64+0x69/0x90 [13119.108634] ? asm_sysvec_apic_timer_interrupt+0x16/0x20 [13119.108970] ? lockdep_hardirqs_on+0x79/0x100 [13119.109656] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13119.110003] RIP: 0033:0x7f3300d4eaca [13119.110279] RSP: 002b:00007fff39d6c938 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8 [13119.111174] RAX: ffffffffffffffda RBX: 000055afd76c8040 RCX: 00007f3300d4eaca [13119.112113] RDX: 0000000000000020 RSI: 000055afd76dd020 RDI: 0000000000000004 [13119.112993] RBP: 000055afd76c81d0 R08: 0000000000000020 R09: 0000000000000005 [13119.113895] R10: 00000000ffffffff R11: 0000000000000246 R12: 00000000000000be [13119.114719] R13: 000055afd76c8040 R14: 0000000000000020 R15: 0000000000000013 [13119.115848] [13119.116103] task:gssproxy state:S stack:24968 pid: 1133 ppid: 1 flags:0x00000002 [13119.116616] Call Trace: [13119.116793] [13119.117387] __schedule+0x72e/0x1570 [13119.117842] ? io_schedule_timeout+0x160/0x160 [13119.118518] ? lock_downgrade+0x130/0x130 [13119.118802] schedule+0x128/0x220 [13119.119450] schedule_hrtimeout_range_clock+0x143/0x300 [13119.147027] rt.0+0x18c/0x370 [13119.620879] ep_poll+0x7d2/0xa80 [13119.621562] ? ep_send_events+0x9f0/0x9f0 [13119.622045] ? __fget_files+0x1cc/0x400 [13119.622528] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13119.622944] do_epoll_wait+0x12f/0x160 [13119.623218] __x64_sys_epoll_wait+0x12e/0x250 [13119.623904] ? __x64_sys_epoll_pwait2+0x240/0x240 [13119.624614] ? ktime_get_coarse_real_ts64+0x130/0x170 [13119.625031] do_syscall_64+0x5c/0x90 [13119.625317] ? do_syscall_64+0x69/0x90 [13119.625579] ? lockdep_hardirqs_on+0x79/0x100 [13119.626305] ? do_syscall_64+0x69/0x90 [13119.626570] ? do_syscall_64+0x69/0x90 [13119.626833] ? do_syscall_64+0x69/0x90 [13119.627059] ? lockdep_hardirqs_on+0x79/0x100 [13119.627929] ? do_syscall_64+0x69/0x90 [13119.628276] ? 
do_syscall_64+0x69/0x90 [13119.628513] ? do_syscall_64+0x69/0x90 [13119.628744] ? lockdep_hardirqs_on+0x79/0x100 [13119.629430] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13119.629749] RIP: 0033:0x7fbbf834eb0e [13119.629986] RSP: 002b:00007ffe3dc32c70 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8 [13119.630843] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbf834eb0e [13119.631683] RDX: 0000000000000040 RSI: 00005577a39d9020 RDI: 0000000000000005 [fffe R14: 0000000000000008 R15: 0000000000000000 [13120.133445] [13120.133623] task:gssproxy state:S stack:29952 pid: 1134 ppid: 1 flags:0x00000002 [13120.134157] Call Trace: [13120.134334] [13120.134887] __schedule+0x72e/0x1570 [13120.135202] ? io_schedule_timeout+0x160/0x160 [13120.135892] ? lock_downgrade+0x130/0x130 [13120.136241] schedule+0x128/0x220 [13120.136883] futex_wait_queue+0x135/0x360 [13120.137375] futex_wait+0x28f/0x600 [13120.138054] ? futex_wait_setup+0x1b0/0x1b0 [13120.138343] ? mark_lock.part.0+0xca/0xa40 [13120.138606] ? check_prev_add+0x20f0/0x20f0 [13120.138858] ? sched_clock_cpu+0x15/0x1b0 [13120.139129] ? find_held_lock+0x33/0x120 [13120.139579] ? sched_clock_cpu+0x15/0x1b0 [13120.139862] do_futex+0x20b/0x340 [13120.140490] ? __lock_release+0x4c1/0xa00 [13120.140750] ? __ia32_sys_get_robust_list+0x310/0x310 [13120.141069] ? lock_downgrade+0x130/0x130 [13120.141320] ? rcu_read_unlock+0x40/0x40 [13120.141576] ? __might_fault+0xbc/0x160 [13120.141842] __x64_sys_futex+0x174/0x440 [13120.142117] ? __x64_sys_futex_time32+0x440/0x440 [13120.142892] ? lockdep_hardirqs_on+0x79/0x100 [13120.143616] ? ktime_get_coarse_real_ts64+0x130/0x170 [13120.144031] do_syscall_60x69/0x90 [13120.644735] ? ret_from_fork+0x15/0x30 [13120.645024] ? lockdep_hardirqs_on+0x79/0x100 [13120.645795] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13120.646174] RIP: 0033:0x7fbbf829c39a [13120.646431] RSP: 002b:00007fbbf77fe770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13120.647294] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbf829c39a [13120.648209] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005577a39d4b18 [13120.649033] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff [13120.649932] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13120.650968] R13: 00005577a39d4b18 R14: 0000000000000000 R15: 0000000000000000 [13120.651847] [13120.652049] task:gssproxy state:S stack:30000 pid: 1135 ppid: 1 flags:0x00000002 [13120.652563] Call Trace: [13120.652741] [13120.653316] __schedule+0x72e/0x1570 [13120.653572] ? io_schedule_timeout+0x160/0x160 [13120.654561] ? lock_downgrade+0x130/0x130 [13120.654975] schedule+0x128/0x220 [13120.655625] futex_wait_queue+0x135/0x360 [13120.655930] futex_wait+0x28f/0x600 [13120.656725] ? futex_wait_setup+[13121.149160] ? __lock_release+0x4c1/0xa00 [13121.157583] ? __ia32_sys_get_robust_list+0x310/0x310 [13121.158140] ? lock_downgrade+0x130/0x130 [13121.158394] ? rcu_read_unlock+0x40/0x40 [13121.158650] ? _raw_spin_unlock_irq+0x2f/0x50 [13121.159407] __x64_sys_futex+0x174/0x440 [13121.159697] ? __x64_sys_futex_time32+0x440/0x440 [13121.160675] ? lockdep_hardirqs_on+0x79/0x100 [13121.161524] ? ktime_get_coarse_real_ts64+0x130/0x170 [13121.161907] do_syscall_64+0x5c/0x90 [13121.162190] ? do_syscall_64+0x69/0x90 [13121.162586] ? lockdep_hardirqs_on+0x79/0x100 [13121.163329] ? do_syscall_64+0x69/0x90 [13121.163612] ? do_syscall_64+0x69/0x90 [13121.163884] ? asm_exc_page_fault+0x22/0x30 [13121.164161] ? 
lockdep_hardirqs_on+0x79/0x100 [13121.164883] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13121.165233] RIP: 0033:0x7fbbf829c39a [13121.165486] RSP: 002b:00007fbbf6ffd770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13121.166479] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbf829c39a [13121.167456] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005577a39d4cf8 [13121.168330] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff [13121.169224] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13121.170036] R13: 00005577a39d4cf8 R14: 000[13121.662221] [13121.671053] __schedule+0x72e/0x1570 [13121.671346] ? io_schedule_timeout+0x160/0x160 [13121.671992] ? lock_downgrade+0x130/0x130 [13121.672302] schedule+0x128/0x220 [13121.672932] futex_wait_queue+0x135/0x360 [13121.673208] futex_wait+0x28f/0x600 [13121.673863] ? futex_wait_setup+0x1b0/0x1b0 [13121.674156] ? mark_lock.part.0+0xca/0xa40 [13121.674418] ? check_prev_add+0x20f0/0x20f0 [13121.674657] ? __lock_release+0x4c1/0xa00 [13121.674872] ? lock_downgrade+0x130/0x130 [13121.675172] ? sched_clock_cpu+0x15/0x1b0 [13121.675428] do_futex+0x20b/0x340 [13121.676004] ? __lock_release+0x4c1/0xa00 [13121.676285] ? __ia32_sys_get_robust_list+0x310/0x310 [13121.676638] ? lock_downgrade+0x130/0x130 [13121.676887] ? rcu_read_unlock+0x40/0x40 [13121.677153] ? __x64_sys_rt_sigprocmask+0x166/0x230 [13121.677868] __x64_sys_futex+0x174/0x440 [13121.678166] ? __x64_sys_futex_time32+0x440/0x440 [13121.678876] ? lockdep_hardirqs_on+0x79/0x100 [13121.679564] ? ktime_get_coarse_real_ts64+0x130/0x170 [13121.679939] do_syscall_64+0x5c/0x90 [13121.680204] ? lockdep_hardirqs_on+0x79/0x100 [13121.680892] ? do_syscall_64+0x69/0x90 [IP: 0033:0x7fbbf829c39a [13122.181526] RSP: 002b:00007fbbf67fc770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13122.182424] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbf829c39a [13122.183242] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005577a39c5d58 [13122.184072] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff [13122.184928] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13122.185796] R13: 00005577a39c5d58 R14: 0000000000000000 R15: 0000000000000000 [13122.186696] [13122.186900] task:gssproxy state:S stack:30064 pid: 1137 ppid: 1 flags:0x00000002 [13122.187386] Call Trace: [13122.187543] [13122.188065] __schedule+0x72e/0x1570 [13122.188373] ? io_schedule_timeout+0x160/0x160 [13122.189050] ? lock_downgrade+0x130/0x130 [13122.189348] schedule+0x128/0x220 [13122.189963] futex_wait_queue+0x135/0x360 [13122.190265] futex_wait+0x28f/0x600 [13122.190896] ? futex_wait_setup+0x1b0/0x1b0 [13122.191218] ? mark_lock.part.0+0xca/0xa40 [13122.191480] ? __lock_acquire+0xb72/0x1870 [13122.191711] ? check_prev_add+0x20f0/0x20f0 [13122.192024] ? sched_clock_cpu+0x15/0x1b0 [13122.192307] do_futex+0x20b/0x340 [13122.192907] ? 0x174/0x440 [13122.693428] ? __x64_sys_futex_time32+0x440/0x440 [13122.694093] ? lockdep_hardirqs_on+0x79/0x100 [13122.694949] ? ktime_get_coarse_real_ts64+0x130/0x170 [13122.695358] do_syscall_64+0x5c/0x90 [13122.695614] ? do_syscall_64+0x69/0x90 [13122.695860] ? lockdep_hardirqs_on+0x79/0x100 [13122.696533] ? do_syscall_64+0x69/0x90 [13122.696785] ? asm_sysvec_apic_timer_interrupt+0x16/0x20 [13122.697147] ? 
lockdep_hardirqs_on+0x79/0x100 [13122.697801] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13122.698145] RIP: 0033:0x7fbbf829c39a [13122.698400] RSP: 002b:00007fbbf5ffb770 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13122.699227] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbbf829c39a [13122.700006] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005577a39c5f38 [13122.700836] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff [13122.701640] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13122.702488] R13: 00005577a39c5f38 R14: 0000000000000000 R15: 0000000000000000 [13122.703405] [13122.703575] task:gssproxy state:S stack:29928 pid: 1138 ppid: 1 flags:0x00000002 [13122.704030] Call Trace: [13122.704215] [13122.704740] __schedule+0x72e/0x1570 [1312[13123.205317] futex_wait+0x28f/0x600 [13123.205937] ? futex_wait_setup+0x1b0/0x1b0 [13123.206209] ? mark_lock.part.0+0xca/0xa40 [13123.206465] ? check_prev_add+0x20f0/0x20f0 [13123.206708] ? sched_clock_cpu+0x15/0x1b0 [13123.206969] ? find_held_lock+0x33/0x120 [13123.207272] ? sched_clock_cpu+0x15/0x1b0 [13123.207523] do_futex+0x20b/0x340 [13123.208168] ? __lock_release+0x4c1/0xa00 [13123.208416] ? __ia32_sys_get_robust_list+0x310/0x310 [13123.208712] ? lock_downgrade+0x130/0x130 [13123.208958] ? rcu_read_unlock+0x40/0x40 [13123.209255] __x64_sys_futex+0x174/0x440 [13123.209520] ? __x64_sys_futex_time32+0x440/0x440 [13123.210214] ? lockdep_hardirqs_on+0x79/0x100 [13123.210909] ? ktime_get_coarse_real_ts64+0x130/0x170 [13123.211305] do_syscall_64+0x5c/0x90 [13123.211575] ? ktime_get_coarse_real_ts64+0x130/0x170 [13123.211934] ? do_syscall_64+0x69/0x90 [13123.212200] ? lockdep_hardirqs_on+0x79/0x100 [13123.212922] ? do_syscall_64+0x69/0x90 [13123.213194] ? asm_exc_page_fault+0x22/0x30 [13123.213475] ? lockdep_hardirqs_on+0x79/0x100 [13123.214192] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13123.214509] RIPDX: 0000000000000000 RSI: 0000000000000189 RDI: 00005577a39c6118 [13123.715754] RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000ffffffff [13123.716631] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13123.717477] R13: 00005577a39c6118 R14: 0000000000000000 R15: 0000000000000000 [13123.718398] [13123.718569] task:agetty state:S stack:24992 pid: 1142 ppid: 1 flags:0x00000002 [13123.719056] Call Trace: [13123.719242] [13123.719756] __schedule+0x72e/0x1570 [13123.719991] ? io_schedule_timeout+0x160/0x160 [13123.720662] ? __lock_acquire+0xb72/0x1870 [13123.720949] ? sched_clock_cpu+0x15/0x1b0 [13123.721264] schedule+0x128/0x220 [13123.721888] schedule_hrtimeout_range_clock+0x2b8/0x300 [13123.722237] ? hrtimer_nanosleep_restart+0x160/0x160 [13123.722615] poll_schedule_timeout.constprop.0+0xa6/0x170 [13123.722984] do_select+0x9e4/0xd20 [13123.723888] ? select_estimate_accuracy+0x2a0/0x2a0 [13123.724651] ? sched_clock_cpu+0x15/0x1b0 [13123.724944] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13123.725846] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13123.726638] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13123.727488] ? __lock_acquire+0xb72/0x1870 [13123.727787] ? sched_clock_cpu+0x15/0x1b0 [13123.755288]? core_sys_select+0x30c/0x710 [13124.228697] core_sys_select+0x30c/0x710 [13124.228988] ? __x64_sys_poll+0x430/0x430 [13124.229291] ? check_prev_add+0x20f0/0x20f0 [13124.229538] ? xfs_iunlock+0x316/0x490 [xfs] [13124.230521] ? __lock_acquire+0xb72/0x1870 [13124.230811] ? sched_clock_cpu+0x15/0x1b0 [13124.231078] ? find_held_lock+0x33/0x120 [13124.231345] ? 
__set_current_blocked+0xf0/0xf0 [13124.232019] ? __lock_release+0x4c1/0xa00 [13124.232350] do_pselect.constprop.0+0x117/0x1e0 [13124.233036] ? __ia32_sys_select+0x150/0x150 [13124.233779] ? ktime_get_coarse_real_ts64+0x130/0x170 [13124.234182] ? lockdep_hardirqs_on+0x79/0x100 [13124.234861] __x64_sys_pselect6+0x138/0x250 [13124.235167] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13124.235498] do_syscall_64+0x5c/0x90 [13124.235761] ? asm_exc_page_fault+0x22/0x30 [13124.236017] ? lockdep_hardirqs_on+0x79/0x100 [13124.236745] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13124.237082] RIP: 0033:0x7fe26a345089 [13124.237334] RSP: 002b:00007ffdb179d500 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13124.238190] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fe26a345089 [13124.238992] RDX: 0000000000000000 RSI: 00007ffdb179d5f0 RDI: 0000000000000005 [13124.266633]0000000015 R15: 0000000000000000 [13124.740787] [13124.740996] task:agetty state:S stack:26296 pid: 1143 ppid: 1 flags:0x00000002 [13124.741476] Call Trace: [13124.741652] [13124.742226] __schedule+0x72e/0x1570 [13124.742489] ? io_schedule_timeout+0x160/0x160 [13124.743169] ? __lock_acquire+0xb72/0x1870 [13124.743477] ? sched_clock_cpu+0x15/0x1b0 [13124.743761] schedule+0x128/0x220 [13124.744404] schedule_hrtimeout_range_clock+0x2b8/0x300 [13124.744750] ? hrtimer_nanosleep_restart+0x160/0x160 [13124.745144] poll_schedule_timeout.constprop.0+0xa6/0x170 [13124.745459] do_select+0x9e4/0xd20 [13124.746109] ? select_estimate_accuracy+0x2a0/0x2a0 [13124.746886] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13124.747671] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13124.748521] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13124.749401] ? __lock_acquire+0xb72/0x1870 [13124.749671] ? sched_clock_cpu+0x15/0x1b0 [13124.749929] ? find_held_lock+0x33/0x120 [13124.750217] ? __lock_release+0x4c1/0xa00 [13124.750471] ? lock_downgrade+0x130/0x130 [13124.750735] ? __might_fault+0xbc/0x16[13125.243411] ? __lock_acquire+0xb72/0x1870 [13125.251601] ? __handle_mm_fault+0xd72/0x1400 [13125.252332] ? sched_clock_cpu+0x15/0x1b0 [13125.252578] ? find_held_lock+0x33/0x120 [13125.252815] ? __set_current_blocked+0xf0/0xf0 [13125.253537] ? __lock_release+0x4c1/0xa00 [13125.253854] do_pselect.constprop.0+0x117/0x1e0 [13125.254563] ? __ia32_sys_select+0x150/0x150 [13125.255302] ? ktime_get_coarse_real_ts64+0x130/0x170 [13125.255642] ? lockdep_hardirqs_on+0x79/0x100 [13125.256334] __x64_sys_pselect6+0x138/0x250 [13125.256577] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13125.256908] do_syscall_64+0x5c/0x90 [13125.257178] ? asm_exc_page_fault+0x22/0x30 [13125.257414] ? lockdep_hardirqs_on+0x79/0x100 [13125.258059] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13125.258394] RIP: 0033:0x7f5a10545089 [13125.258636] RSP: 002b:00007ffd85140c40 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13125.259486] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f5a10545089 [13125.260349] RDX: 0000000000000000 RSI: 00007ffd85140d30 RDI: 0000000000000005 [13125.261195] RBP: 00007ffd85140d30 R08: 0000000000000000 R09: 0000000000000000 [13125.261988] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000005 [13125.262789] R13:[13125.763212] [13125.763733] __schedule+0x72e/0x1570 [13125.764027] ? io_schedule_timeout+0x160/0x160 [13125.764686] ? __lock_acquire+0xb72/0x1870 [13125.764988] schedule+0x128/0x220 [13125.765598] schedule_hrtimeout_range_clock+0x2b8/0x300 [13125.765952] ? hrtimer_nanosleep_restart+0x160/0x160 [13125.766279] ? 
lock_downgrade+0x130/0x130 [13125.766553] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13125.767321] do_sigtimedwait+0x42a/0x720 [13125.767579] ? __set_task_blocked+0x170/0x170 [13125.768281] ? __might_fault+0xbc/0x160 [13125.768565] __x64_sys_rt_sigtimedwait+0x15a/0x230 [13125.769253] ? __ia32_sys_rt_sigtimedwait_time32+0x230/0x230 [13125.770006] ? ktime_get_coarse_real_ts64+0x130/0x170 [13125.770377] ? lockdep_hardirqs_on+0x79/0x100 [13125.771033] ? ktime_get_coarse_real_ts64+0x130/0x170 [13125.771417] do_syscall_64+0x5c/0x90 [13125.771671] ? asm_exc_page_fault+0x22/0x30 [13125.771917] ? lockdep_hardirqs_on+0x79/0x100 [13125.772583] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13125.772936] RIP: 0033:0x7ff1fa255aa8 [13125.773201] RSP: 002b:00007ffd9e7add10 EFLAGS: 00000246 ORIG_RAX: 00000000BP: 00007ffd9e7add40 R08: 0000000000000001 R09: 0000000000000000 [13126.274529] R10: 0000000000000008 R11: 0000000000000246 R12: 00007ffd9e7ade28 [13126.275389] R13: 00007ffd9e7aded0 R14: 0000000000000000 R15: 00007ffd9e7ade30 [13126.276301] [13126.276469] task:crond state:S stack:24240 pid: 1492 ppid: 1 flags:0x00000002 [13126.276950] Call Trace: [13126.277102] [13126.277650] __schedule+0x72e/0x1570 [13126.277904] ? io_schedule_timeout+0x160/0x160 [13126.278574] ? lock_downgrade+0x130/0x130 [13126.278864] schedule+0x128/0x220 [13126.279498] do_nanosleep+0x212/0x5c0 [13126.279782] ? schedule_timeout_idle+0x90/0x90 [13126.280480] ? memset+0x20/0x50 [13126.281054] ? __hrtimer_init+0x3a/0x1c0 [13126.281341] hrtimer_nanosleep+0x1a4/0x3b0 [13126.281590] ? nanosleep_copyout+0xd0/0xd0 [13126.281831] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13126.282221] ? get_timespec64+0x70/0x160 [13126.282472] ? __ia32_compat_sys_gettimeofday+0x190/0x190 [13126.282797] common_nsleep+0x79/0xc0 [13126.283023] __x64_sys_clock_nanosleep+0x251/0x3a0 [13126.283703] ? ktime_get_coarse_real_ts64+0x130/0x170 [130x69/0x90 [13126.784469] ? lockdep_hardirqs_on+0x79/0x100 [13126.785163] ? do_syscall_64+0x69/0x90 [13126.785412] ? do_syscall_64+0x69/0x90 [13126.785655] ? do_syscall_64+0x69/0x90 [13126.785878] ? do_syscall_64+0x69/0x90 [13126.786104] ? lockdep_hardirqs_on+0x79/0x100 [13126.786776] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13126.787101] RIP: 0033:0x7f28ef51395a [13126.787355] RSP: 002b:00007ffd3558dbb8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e6 [13126.788211] RAX: ffffffffffffffda RBX: fffffffffffffe98 RCX: 00007f28ef51395a [13126.789015] RDX: 00007ffd3558dbd0 RSI: 0000000000000000 RDI: 0000000000000000 [13126.789805] RBP: 000000000000000a R08: 000000000000ffff R09: 0000000063dc8c60 [13126.790630] R10: 00007ffd3558dbd0 R11: 0000000000000246 R12: 000000000000003c [13126.791490] R13: 0000000001aa1368 R14: 000000000000003b R15: 0000559a952c3a58 [13126.792417] [13126.792581] task:restraintd state:S stack:23920 pid: 1494 ppid: 1 flags:0x00000002 [13126.793073] Call Trace: [13126.793261] [13126.793782] __schedule+0x72e/0x1570 [13126.794040] ? io_schedule_timeout+0x160/0x160 [13126.794691] ? lock_downgrade+0x130/0x130 [13126.794993] schedule+0x128/0x220 [13126.795597] schedule_hrtimeout_range_clock+0x[13127.287968] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13127.296410] ? lockdep_hardirqs_on+0x79/0x100 [13127.297084] poll_schedule_timeout.constprop.0+0xa6/0x170 [13127.297436] do_poll.constprop.0+0x459/0x860 [13127.298122] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13127.298900] ? __might_fault+0xbc/0x160 [13127.299214] do_sys_poll+0x367/0x570 [13127.299465] ? kernel_text_address+0x11e/0x140 [13127.300130] ? 
do_poll.constprop.0+0x860/0x860 [13127.300825] ? validate_chain+0x154/0xdf0 [13127.301061] ? mark_lock.part.0+0xca/0xa40 [13127.301345] ? filter_irq_stacks+0xa0/0xa0 [13127.301594] ? check_prev_add+0x20f0/0x20f0 [13127.301820] ? __stack_depot_save+0x35/0x4d0 [13127.302571] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13127.303378] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13127.304227] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13127.304984] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13127.305747] ? sched_clock_cpu+0x15/0x1b0 [13127.305974] ? find_held_lock+0x33/0x120 [13127.306249] ? __lock_release+0x4c1/0xa00 [13127.306493] ? lock_downgrade+0x130/0x130 [13127.306728] ? rcu_read_unlock+0x40/0x40 [13127.306975] ? sched_clock_cpu+0x15/0x1b0 [13127.33? ktime_get_coarse_real_ts64+0x130/0x170 [13127.807740] do_syscall_64+0x5c/0x90 [13127.807973] ? do_syscall_64+0x69/0x90 [13127.808244] ? lockdep_hardirqs_on+0x79/0x100 [13127.808902] ? do_syscall_64+0x69/0x90 [13127.809141] ? do_syscall_64+0x69/0x90 [13127.809421] ? lockdep_hardirqs_on+0x79/0x100 [13127.810071] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13127.810405] RIP: 0033:0x7f669ab429bf [13127.810749] RSP: 002b:00007ffd7fca4480 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13127.811580] RAX: ffffffffffffffda RBX: 00000000005e1c10 RCX: 00007f669ab429bf [13127.812426] RDX: 000000000000398f RSI: 0000000000000004 RDI: 00007f668c001cc0 [13127.813267] RBP: 00007f668c001cc0 R08: 0000000000000000 R09: 0000000000000003 [13127.814087] R10: 00007ffd7fd78080 R11: 0000000000000293 R12: 0000000000000004 [13127.814882] R13: 000000000000398f R14: 000000007fffffff R15: 0000000001d03810 [13127.815762] [13127.815924] task:gmain state:S stack:28848 pid: 1496 ppid: 1 flags:0x00000002 [13127.816435] Call Trace: [13127.816604] [13127.817103] __schedule+0x72e/0x1570 [13127.817366] ? io_schedule_timeout+0x160/0x160 [13127.818038] ? find_held_lock+0x33/0x120 [13127.818306] ? sched_clock_cpu+0x15/0x1b0 [13127.846190rt.0+0x18c/0x370 [13128.319272] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13128.319589] poll_schedule_timeout.constprop.0+0xa6/0x170 [13128.320229] do_poll.constprop.0+0x459/0x860 [13128.320919] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13128.321669] ? __might_fault+0xbc/0x160 [13128.321931] do_sys_poll+0x367/0x570 [13128.322220] ? do_poll.constprop.0+0x860/0x860 [13128.322899] ? copyout+0x83/0xa0 [13128.323557] ? _copy_to_iter+0x279/0x10c0 [13128.323860] ? lock_downgrade+0x130/0x130 [13128.324119] ? copy_page_from_iter+0x7b0/0x7b0 [13128.324815] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13128.325611] ? eventfd_read+0x6a4/0x850 [13128.325860] ? validate_chain+0x154/0xdf0 [13128.326120] ? eventfd_ctx_remove_wait_queue+0x2b0/0x2b0 [13128.326462] ? mark_lock.part.0+0xca/0xa40 [13128.326710] ? check_prev_add+0x20f0/0x20f0 [13128.326957] ? wake_up_q+0xf0/0xf0 [13128.327566] ? sched_clock_cpu+0x15/0x1b0 [13128.327811] ? find_held_lock+0x33/0x120 [13128.328089] ? __lock_acquire+0xb72/0x1870 [13128.328365] ? sched_clock_cpu+0x15/0x1b0 [13128.328601] ? find_held_lock+0x33/0x120 [13128.328847] ? __lock_release+0x4c1/0xa00 [13128.329067] ? lock_downgrade+0x130/0x130 [13128.35649[13128.821390] ? ktime_get_coarse_real_ts64+0x130/0x170 [13128.829914] do_syscall_64+0x5c/0x90 [13128.830224] ? lockdep_hardirqs_on+0x79/0x100 [13128.830896] ? do_syscall_64+0x69/0x90 [13128.831217] ? asm_exc_page_fault+0x22/0x30 [13128.831481] ? 
lockdep_hardirqs_on+0x79/0x100 [13128.832127] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13128.832458] RIP: 0033:0x7f669ab429bf [13128.832734] RSP: 002b:00007f669a9fece0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13128.833636] RAX: ffffffffffffffda RBX: 00000000005e1c10 RCX: 00007f669ab429bf [13128.834533] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 0000000001d1d360 [13128.835450] RBP: 0000000001d1d360 R08: 0000000000000000 R09: 0000000000000000 [13128.836294] R10: 0000000000000360 R11: 0000000000000293 R12: 0000000000000001 [13128.837101] R13: 00000000ffffffff R14: 000000007fffffff R15: 0000000001d1e400 [13128.838004] [13128.838227] task:pool-restraintd state:S stack:27056 pid: 1497 ppid: 1 flags:0x00000002 [13128.838723] Call Trace: [13128.838879] [13128.839469] __schedule+0x72e/0x1570 [13128.839736] ? io_schedule_timeout+0x160/0x160 [13128.840452] ? l[13129.340896] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13129.341391] ? lock_downgrade+0x130/0x130 [13129.341635] ? rcu_read_unlock+0x40/0x40 [13129.341874] ? __might_fault+0xbc/0x160 [13129.342139] do_futex+0x20b/0x340 [13129.342766] ? __ia32_sys_get_robust_list+0x310/0x310 [13129.343133] ? ktime_get+0x14e/0x180 [13129.343420] ? lockdep_hardirqs_on+0x79/0x100 [13129.344116] __x64_sys_futex+0x174/0x440 [13129.344432] ? __x64_sys_futex_time32+0x440/0x440 [13129.345182] ? ktime_get_coarse_real_ts64+0x130/0x170 [13129.345557] do_syscall_64+0x5c/0x90 [13129.345816] ? do_syscall_64+0x69/0x90 [13129.346072] ? lockdep_hardirqs_on+0x79/0x100 [13129.346752] ? do_syscall_64+0x69/0x90 [13129.347025] ? do_syscall_64+0x69/0x90 [13129.347283] ? do_syscall_64+0x69/0x90 [13129.347531] ? do_syscall_64+0x69/0x90 [13129.347773] ? do_syscall_64+0x69/0x90 [13129.348018] ? lockdep_hardirqs_on+0x79/0x100 [13129.348677] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13129.349027] RIP: 0033:0x7f669aa3ee5d [13129.349286] RSP: 002b:00007f669a1fdd08 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca [13129.350111] RAX: ffffffffffffffda RBX: 0000000001d1d210 RCX: 00007f669aa3ee5d [13129.350917] RDX: 0000000000000365 RSI: 0000000000000080 RDI: 0000000013: 000000001eb05a90 R14: 00007f669a1fdd10 R15: 0000000000000365 [13129.852276] [13129.852440] task:10_bash_login state:S stack:24728 pid: 1511 ppid: 1494 flags:0x00004002 [13129.852908] Call Trace: [13129.853076] [13129.853648] __schedule+0x72e/0x1570 [13129.853911] ? io_schedule_timeout+0x160/0x160 [13129.854581] ? lock_downgrade+0x130/0x130 [13129.854860] schedule+0x128/0x220 [13129.855491] do_wait+0x501/0xb10 [13129.856140] kernel_wait4+0xf3/0x1d0 [13129.856416] ? __wake_up_parent+0x60/0x60 [13129.856681] ? kill_orphaned_pgrp+0x2f0/0x2f0 [13129.857387] ? sched_clock_cpu+0x15/0x1b0 [13129.857649] ? find_held_lock+0x33/0x120 [13129.857906] __do_sys_wait4+0xf4/0x100 [13129.858208] ? kernel_wait4+0x1d0/0x1d0 [13129.858452] ? _copy_to_user+0x96/0xc0 [13129.858717] ? ktime_get_coarse_real_ts64+0x130/0x170 [13129.859077] ? lockdep_hardirqs_on+0x79/0x100 [13129.859736] ? ktime_get_coarse_real_ts64+0x130/0x170 [13129.860096] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13129.860450] do_syscall_64+0x5c/0x90 [13129.860693] ? do_syscall_64+0x69/0x90 [13129.860934] ? asm_exc_page_fault+0x22/0x30 [13129.861188] ? 
lockdep_hardirqs_on+0x79/0x100 [13129.861831] entry_SYSCALL000000064d RCX: 00007faaa63182ea [13130.362755] RDX: 0000000000000000 RSI: 00007ffeab43c370 RDI: 00000000ffffffff [13130.363630] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000 [13130.364497] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13130.365331] R13: 00007ffeab43c3d0 R14: 0000000000000000 R15: 0000000000000000 [13130.366212] [13130.366382] task:runtest.sh state:S stack:24288 pid: 1613 ppid: 1511 flags:0x00000002 [13130.366850] Call Trace: [13130.367024] [13130.367571] __schedule+0x72e/0x1570 [13130.367829] ? io_schedule_timeout+0x160/0x160 [13130.368509] ? lock_downgrade+0x130/0x130 [13130.368786] schedule+0x128/0x220 [13130.369409] do_wait+0x501/0xb10 [13130.370053] kernel_wait4+0xf3/0x1d0 [13130.370327] ? __wake_up_parent+0x60/0x60 [13130.370567] ? __lock_acquire+0xb72/0x1870 [13130.370814] ? kill_orphaned_pgrp+0x2f0/0x2f0 [13130.371523] ? sched_clock_cpu+0x15/0x1b0 [13130.371768] ? find_held_lock+0x33/0x120 [13130.372044] __do_sys_wait4+0xf4/0x100 [13130.372314] ? kernel_wait4+0x1d0/0x1d0 [13130.372551] ? ktime_get_coarse_real_ts64+0x130/0x170 [13130.372887] ? ktime_get_coarse_real_ts64+0x130/0x170 [13130.373271] ? lockdep_hardirqs_on+0x79/0x100 [13130.373964] ? ktime_get_coarse_real_ts64+0x130/0x170 [13130.374364] do_syscall_64+0x5c/0x90 [13130.374629] ? asm_exc_page_faudbc3bf158 EFLAGS: 00000246 ORIG_RAX: 000000000000003d [13130.875729] RAX: ffffffffffffffda RBX: 00000000000421ce RCX: 00007f59223182ea [13130.876549] RDX: 0000000000000000 RSI: 00007ffdbc3bf180 RDI: 00000000ffffffff [13130.877406] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000001 [13130.878246] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13130.879055] R13: 00007ffdbc3bf1e0 R14: 0000000000000000 R15: 0000000000000000 [13130.879924] [13130.880111] task:sshd state:S stack:26296 pid: 1819 ppid: 1126 flags:0x00000002 [13130.880579] Call Trace: [13130.880752] [13130.881300] __schedule+0x72e/0x1570 [13130.881557] ? io_schedule_timeout+0x160/0x160 [13130.882252] ? find_held_lock+0x33/0x120 [13130.882508] ? __lock_release+0x4c1/0xa00 [13130.882750] schedule+0x128/0x220 [13130.883375] schedule_hrtimeout_range_clock+0x2b8/0x300 [13130.883714] ? hrtimer_nanosleep_restart+0x160/0x160 [13130.884042] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13130.884776] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13130.885134] ? lockdep_hardirqs_on+0x79/0x100 [13130.885819] ? unix_poll+0x26a/0x3b0 [13130.9? __might_fault+0xbc/0x160 [13131.386615] do_sys_poll+0x367/0x570 [13131.386871] ? do_poll.constprop.0+0x860/0x860 [13131.387568] ? is_bpf_text_address+0x6a/0xe0 [13131.388285] ? __kernel_text_address+0xe/0x40 [13131.388937] ? unwind_get_return_address+0x5a/0xa0 [13131.389627] ? create_prof_cpu_mask+0x20/0x20 [13131.390298] ? arch_stack_walk+0x9e/0xf0 [13131.390540] ? validate_chain+0x154/0xdf0 [13131.390777] ? validate_chain+0x154/0xdf0 [13131.391047] ? mark_lock.part.0+0xca/0xa40 [13131.391331] ? check_prev_add+0x20f0/0x20f0 [13131.391598] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13131.392343] ? __lock_acquire+0xb72/0x1870 [13131.392595] ? validate_chain+0x154/0xdf0 [13131.392839] ? mark_lock.part.0+0xca/0xa40 [13131.393097] ? check_prev_add+0x20f0/0x20f0 [13131.393358] ? find_held_lock+0x33/0x120 [13131.393684] ? __lock_acquire+0xb72/0x1870 [13131.393944] ? sched_clock_cpu+0x15/0x1b0 [13131.394252] ? find_held_lock+0x33/0x120 [13131.394506] ? __lock_release+0x4c1/0xa00 [13131.394753] ? 
lock_downgrade+0x130/0x130 [13131.395021] ? rcu_read_unlock+0x40/0x40 [13131.395278] ? rseq_update_cpu_id+0x230/0x170 [13131.795842] do_syscall_64+0x5c/0x90 [13131.796138] ? do_syscall_[13131.878292] ? lockdep_hardirqs_on+0x79/0x100 [13131.897093] ? do_syscall_64+0x69/0x90 [13131.897387] ? do_syscall_64+0x69/0x90 [13131.897629] ? lockdep_hardirqs_on+0x79/0x100 [13131.898303] ? do_syscall_64+0x69/0x90 [13131.898548] ? lockdep_hardirqs_on+0x79/0x100 [13131.899175] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13131.899555] RIP: 0033:0x7f7327f42987 [13131.899783] RSP: 002b:00007ffde9c6d6c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000007 [13131.900614] RAX: ffffffffffffffda RBX: 00007ffde9c6d6e0 RCX: 00007f7327f42987 [13131.901415] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00007ffde9c6d6e0 [13131.902206] RBP: 0000557a26a480c0 R08: 0000000000000001 R09: 00007ffde9c669f8 [13131.902971] R10: 0000000000000040 R11: 0000000000000246 R12: 0000557a26a480c0 [13131.903786] R13: 0000557a285a6480 R14: 0000557a2858b500 R15: 00000000ffffffff [13131.904662] [13131.904821] task:sshd state:S stack:27072 pid: 1822 ppid: 1819 flags:0x00000002 [13131.905289] Call Trace: [13131.905451] [13131.905946] __schedule+0x72e/0x1570 [13131.906259] ? io_schedule_timeout+0x160/0x160 [13131.906928] ? find_held_lock+0x33/0x120 [13131.907219] ? sched_clock_cpu+0x15/0x1b0 [13131.907457] ? find_held_lock+0x33/0x120 [13131.907698] schedule+0x128/0x220 [13131.908316] schedule_hrtimeout_range_clodo_select+0x9e4/0xd20 [13132.309390] ? select_estimate_accuracy+0x2a0/0x2a0 [13132.310095] ? sk_reset_timer+0x15/0x70 [13132.310372] ? mark_lock.part.0+0xca/0xa40 [13132.310647] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13132.311413] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13132.312172] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13132.312988] ? __lock_acquire+0xb72/0x1870 [13132.313330] ? sched_clock_cpu+0x15/0x1b0 [13132.313632] ? find_held_lock+0x33/0x120 [13132.313905] ? __lock_release+0x4c1/0xa00 [13132.314167] ? lock_downgrade+0x130/0x130 [13132.314498] ? __might_fault+0xbc/0x160 [13132.314780] ? core_sys_select+0x30c/0x710 [130x30c/0x710 [13132.415255] ? __x64_sys_poll+0x430/0x430 [13132.415571] ? lock_downgrade+0x130/0x130 [13132.415846] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13132.416615] ? _raw_spin_unlock_irq+0x24/0x50 [13132.417303] ? lockdep_hardirqs_on+0x79/0x100 [13132.417960] ? _raw_spin_unlock_irq+0x2f/0x50 [13132.418640] ? set_user_sigmask+0x1be/0x250 [13132.418891] ? __set_current_blocked+0xf0/0xf0 [13132.419577] ? __lock_release+0x4c1/0xa00 [13132.419830] do_pselect.constprop.0+0x117/0x1e0 [13132.420563] ? __ia32_sys_select+0x150/0x150 [13132.421281] ? ktime_get_coarse_real_ts64+0x130/0x170 [13132.421590] ? lockdep_hardirqs_on+0x79/0x100 [13132.422280] __x64_sys_pselect6+0x138/0x25 [13132.923221] ? do_syscall_64+0x69/0x90 [13132.923471] ? do_syscall_64+0x69/0x90 [13132.923718] ? 
lockdep_hardirqs_on+0x79/0x100 [13132.924401] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13132.924731] RIP: 0033:0x7f7327f45224 [13132.924953] RSP: 002b:00007ffde9c6d380 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13132.925776] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f7327f45224 [13132.926596] RDX: 0000557a2857e430 RSI: 0000557a285aabf0 RDI: 000000000000000f [13132.927443] RBP: 0000557a285aabf0 R08: 0000000000000000 R09: 00007ffde9c6d3c0 [13132.928288] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000008 [13132.929097] R13: 0000557a285a7a40 R14: 0000557a2857e430 R15: 0000557a285a6480 [13132.929970] [13132.930150] task:restraintd state:S stack:23448 pid: 1823 ppid: 1822 flags:0x00000002 [13132.930621] Call Trace: [13132.930777] [13132.931326] __schedule+0x72e/0x1570 [13132.931611] ? io_schedule_timeout+0x160/0x160 [13132.932320] ? find_held_lock+0x33/0x120 [13132.932585] ? sched_clock_cpu+0x15/0x1b0 [13132.932808] ? find_held_lock+0x33/0x120 [13132.933066] schedule+0x128/0x220 [13132.933678] sche[13133.334051] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13133.334398] ? lockdep_hardirqs_on+0x79/0x100 [13133.335089] poll_schedule_timeout.constprop.0+0xa6/0x170 [13133.335439] do_poll.constprop.0+0x459/0x860 [13133.336154] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13133.337237] ? __might_fault+0xbc/0x160 [13133.337550] do_sys_poll+0x367/0x570 [13133.337783] ? __lock_release+0x4c1/0xa00 [13133.338028] ? do_poll.constprop.0+0x860/0x860 [13133.338755] ? copyout+0x83/0xa0 [13133.339563] ? _copy_to_iter+0x279/0x10c0 [13133.339833] ? lock_downgrade+0x130/0x130 [13133.340095] ? copy_page_from_iter+0x7b0/0x7b0 [13133.340845] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13133.341848] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13133.342741] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13133.343628] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13133.344453] ? sched_clock_cpu+0x15/0x1b0 [13133.344715] ? find_held_lock+0x33/0x120 [13133.344968] ? __lock_acquire+0xb72/0x1870 [13133.345313] ? sched_clock_cpu+0x15/0x1b0 [13133.345600] ? find_held_lock+0x33/0x120 [13133.345841] ? __lock_release+0x4c1/0[13133.838147] ? ktime_get_coarse_real_ts64+0x130/0x170 [13133.846741] do_syscall_64+0x5c/0x90 [13133.846997] ? lockdep_hardirqs_on+0x79/0x100 [13133.847710] ? do_syscall_64+0x69/0x90 [13133.847982] ? do_syscall_64+0x69/0x90 [13133.848277] ? lockdep_hardirqs_on+0x79/0x100 [13133.848941] ? do_syscall_64+0x69/0x90 [13133.849165] ? do_syscall_64+0x69/0x90 [13133.849435] ? lockdep_hardirqs_on+0x79/0x100 [13133.850122] ? do_syscall_64+0x69/0x90 [13133.850606] ? lockdep_hardirqs_on+0x79/0x100 [13133.851395] ? do_syscall_64+0x69/0x90 [13133.851648] ? asm_common_interrupt+0x22/0x40 [13133.852326] ? lockdep_hardirqs_on+0x79/0x100 [13133.853002] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13133.853356] RIP: 0033:0x7f41f5d429bf [13133.853651] RSP: 002b:00007ffedeb8f420 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13133.854733] RAX: ffffffffffffffda RBX: 00000000005e1c10 RCX: 00007f41f5d429bf [13133.855748] RDX: 00000000ffffffff RSI: 0000000000000004 RDI: 000000000144d740 [13133.856622] RBP: 000000000144d740 R08: 0000000000000000 R09: 0000000000000000 [13133.857531] R10: 000000000000001b R11: 0000000000000293 R12: 0000000000000004 [13133.858358] R13: 00[13134.358852] [13134.359667] __schedule+0x72e/0x1570 [13134.360316] ? io_schedule_timeout+0x160/0x160 [13134.361036] ? find_held_lock+0x33/0x120 [13134.361308] ? 
sched_clock_cpu+0x15/0x1b0 [13134.361578] schedule+0x128/0x220 [13134.362258] schedule_hrtimeout_range_clock+0x2b8/0x300 [13134.362590] ? hrtimer_nanosleep_restart+0x160/0x160 [13134.362917] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13134.363720] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13134.364088] poll_schedule_timeout.constprop.0+0xa6/0x170 [13134.364428] do_poll.constprop.0+0x459/0x860 [13134.365156] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13134.366287] ? __might_fault+0xbc/0x160 [13134.366574] do_sys_poll+0x367/0x570 [13134.366828] ? do_poll.constprop.0+0x860/0x860 [13134.367588] ? try_to_wake_up+0x62d/0x1010 [13134.367850] ? do_raw_spin_unlock+0x55/0x1f0 [13134.368563] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13134.368885] ? try_to_wake_up+0x110/0x1010 [13134.369144] ? __lock_acquire+0xb72/0x1870 [13134.369417] ? sched_core_balance+0x420/0x420 [13134.370128] ? sched_clock_cpu+0x15/0x1b0 [13134.370554] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13134.371386] ? lock_downgrade+0x130/0x130 [13134.371657] ? validate_chain+0x154/0xdf0 [13134.371901] ? mark_lock.part.0+0x0x197/0x770 [13134.872505] ? __lock_acquire+0xb72/0x1870 [13134.872800] ? wake_up_q+0x42/0xf0 [13134.873523] ? sched_clock_cpu+0x15/0x1b0 [13134.873847] ? find_held_lock+0x33/0x120 [13134.874115] ? __lock_release+0x4c1/0xa00 [13134.874396] ? lock_downgrade+0x130/0x130 [13134.874642] ? rcu_read_unlock+0x40/0x40 [13134.874918] __x64_sys_poll+0xd2/0x430 [13134.875183] ? __ia32_sys_poll+0x430/0x430 [13134.875431] ? ktime_get_coarse_real_ts64+0x130/0x170 [13134.875766] do_syscall_64+0x5c/0x90 [13134.876021] ? asm_exc_page_fault+0x22/0x30 [13134.876298] ? lockdep_hardirqs_on+0x79/0x100 [13134.876976] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13134.877350] RIP: 0033:0x7f41f5d429bf [13134.877605] RSP: 002b:00007f41f5bfece0 EFLAGS: 00000293 ORIG_RAX: 0000000000000007 [13134.878468] RAX: ffffffffffffffda RBX: 00000000005e1c10 RCX: 00007f41f5d429bf [13134.879745] RDX: 00000000ffffffff RSI: 0000000000000001 RDI: 00000000013ffed0 [13134.880614] RBP: 00000000013ffed0 R08: 0000000000000000 R09: 00000 [13135.381439] task:nfsiod state:I stack:30728 pid: 2587 ppid: 2 flags:0x00004000 [13135.381937] Call Trace: [13135.382123] [13135.382705] __schedule+0x72e/0x1570 [13135.382982] ? io_schedule_timeout+0x160/0x160 [13135.383892] ? lock_downgrade+0x130/0x130 [13135.384301] ? wait_for_completion_io_timeout+0x20/0x20 [13135.384686] schedule+0x128/0x220 [13135.385360] rescuer_thread+0x679/0xbb0 [13135.385645] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13135.385953] ? worker_thread+0xf90/0xf90 [13135.386227] ? __kthread_parkme+0xcc/0x200 [13135.386468] ? worker_thread+0xf90/0xf90 [13135.386695] kthread+0x2a7/0x350 [13135.387320] ? kthread_complete_and_exit+0x20/0x20 [13135.388032] ret_from_fork+0x22/0x30 [13135.388414] [13135.388602] task:NFSv4 callback state:S stack:30632 pid: 2593 ppid: 2 flags:0x00004000 [13135.389081] Call Trace: [13135.389261] [13135.389823] __schedule+0x72e/0x1570 [13135.390291] ? io_schedule_timeout+0x160/0x160 [13135.390994] ? lock_downgrade+0x130/0x130 [13135.391361] schedule+0x128/0x220 [13135.392108] nfs41_callback_svc0/0x2c0 [13135.892945] ? __kthread_parkme+0xcc/0x200 [13135.893239] ? nfs_callback_authenticate+0x180/0x180 [nfsv4] [13135.894146] kthread+0x2a7/0x350 [13135.894878] ? 
kthread_complete_and_exit+0x20/0x20 [13135.895595] ret_from_fork+0x22/0x30 [13135.895918] [13135.896116] task:rngd state:S stack:27816 pid:57661 ppid: 1 flags:0x00000002 [13135.896605] Call Trace: [13135.896785] [13135.897342] __schedule+0x72e/0x1570 [13135.897633] ? io_schedule_timeout+0x160/0x160 [13135.898373] ? find_held_lock+0x33/0x120 [13135.898643] schedule+0x128/0x220 [13135.899279] schedule_hrtimeout_range_clock+0x2b8/0x300 [13135.899718] ? hrtimer_nanosleep_restart+0x160/0x160 [13135.900059] ? lock_downgrade+0x130/0x130 [13135.900392] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13135.901187] poll_schedule_timeout.constprop.0+0xa6/0x170 [13135.901542] do_poll.constprop.0+0x459/0x860 [13135.902321] ? __ia32_compat_sys_pselect6_time32+0x250/0x250 [13135.903120] ? __might_fault+0xbc/0x160 [13135.903433] do_sys_poll+0x367/0x570 [13135.903750] ? do_poll.constprop.0+0x860/0x860 [13135.904696] ? find_held_lock+0x33/0x120 [13135.905046] ? __lock_release? poll_schedule_timeout.constprop.0+0x170/0x170 [13136.406185] ? mark_lock.part.0+0xca/0xa40 [13136.406468] ? check_prev_add+0x20f0/0x20f0 [13136.406702] ? mark_lock.part.0+0xca/0xa40 [13136.406931] ? check_prev_add+0x20f0/0x20f0 [13136.407208] ? __lock_acquire+0xb72/0x1870 [13136.407489] ? sched_clock_cpu+0x15/0x1b0 [13136.407717] ? find_held_lock+0x33/0x120 [13136.407948] ? __lock_release+0x4c1/0xa00 [13136.408205] ? lock_downgrade+0x130/0x130 [13136.408470] ? rcu_read_unlock+0x40/0x40 [13136.408696] ? __lock_release+0x4c1/0xa00 [13136.408946] __x64_sys_poll+0xd2/0x430 [13136.409191] ? __ia32_sys_poll+0x430/0x430 [13136.409435] ? ktime_get_coarse_real_ts64+0x130/0x170 [13136.409755] do_syscall_64+0x5c/0x90 [13136.409977] ? do_syscall_64+0x69/0x90 [13136.410240] ? lockdep_hardirqs_on+0x79/0x100 [13136.410887] ? do_syscall_64+0x69/0x90 [13136.411196] ? do_syscall_64+0x69/0x90 [13136.411473] ? lockdep_hardirqs_on+0x79/0x100 [13136.412174] ? do_syscall_64+0x69/0x90 [13136.412451] ? lockdep_hardirqs_on+0x79/0x100 [13136.413142] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13136.413822] RIP: 0033:0x7f9c377429bf [13136.414195] RSP: 002b:00007fffbb95a940 EFLAGS: 0000029[13136.906395] RBP: 00007fffbb95b230 R08: 0000000000000000 R09: 0000000000000100 [13136.915600] R10: 00007fffbb95a9d0 R11: 0000000000000293 R12: 0000000000000040 [13136.916664] R13: 00007fffbb95b270 R14: 00007fffbb95a980 R15: 0000559640ade193 [13136.917722] [13136.917904] task:kworker/7:2 state:I stack:26496 pid:199954 ppid: 2 flags:0x00004000 [13136.918858] Workqueue: 0x0 (events) [13136.919153] Call Trace: [13136.919335] [13136.919871] __schedule+0x72e/0x1570 [13136.920181] ? io_schedule_timeout+0x160/0x160 [13136.920889] ? lock_downgrade+0x130/0x130 [13136.921208] ? pwq_dec_nr_in_flight+0x230/0x230 [13136.921930] schedule+0x128/0x220 [13136.922620] worker_thread+0x152/0xf90 [13136.922904] ? process_one_work+0x1520/0x1520 [13136.923616] kthread+0x2a7/0x350 [13136.924268] ? kthread_complete_and_exit+0x20/0x20 [13136.924985] ret_from_fork+0x22/0x30 [13136.925381] [13136.925561] task:kworker/16:0 state:I stack:27344 pid:200047 ppid: 2 flags:0x00004000 [13136.926495] Workqueue: 0x0 (events) [13136.926764] Call Trace: [13136.926917] [13136.954[13137.427857] schedule+0x128/0x220 [13137.428555] worker_thread+0x152/0xf90 [13137.428844] ? process_one_work+0x1520/0x1520 [13137.429570] kthread+0x2a7/0x350 [13137.430179] ? 
kthread_complete_and_exit+0x20/0x20 [13137.430902] ret_from_fork+0x22/0x30 [13137.431282] [13137.431465] task:dio/dm-0 state:I stack:30728 pid:200799 ppid: 2 flags:0x00004000 [13137.432405] Call Trace: [13137.432591] [13137.433151] __schedule+0x72e/0x1570 [13137.433449] ? io_schedule_timeout+0x160/0x160 [13137.434179] ? lock_downgrade+0x130/0x130 [13137.434450] ? wait_for_completion_io_timeout+0x20/0x20 [13137.434830] schedule+0x128/0x220 [13137.435480] rescuer_thread+0x679/0xbb0 [13137.435778] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13137.436089] ? worker_thread+0xf90/0xf90 [13137.436377] ? __kthread_parkme+0xcc/0x200 [13137.436627] ? worker_thread+0xf90/0xf90 [13137.436869] kthread+0x2a7/0x350 [13137.437477] ? kthread_complete_and_exit+0x20/0x20 [13137.438198] ret_from_fork+0x22/0x30 [13137.438546] [13137.438717] task:kworker/u130:3 state:I stack:23632 pid:200816 ppid: 2 flags:0x00004000 [13137.439645] Workqueue: 0x0 (flush-253:0) [13137.467477]+0x130/0x130 [13137.940366] ? pwq_dec_nr_in_flight+0x230/0x230 [13137.941075] schedule+0x128/0x220 [13137.941714] worker_thread+0x152/0xf90 [13137.942004] ? process_one_work+0x1520/0x1520 [13137.942704] kthread+0x2a7/0x350 [13137.943343] ? kthread_complete_and_exit+0x20/0x20 [13137.944042] ret_from_fork+0x22/0x30 [13137.944413] [13137.944597] task:kworker/15:3 state:I stack:27592 pid:200894 ppid: 2 flags:0x00004000 [13137.945841] Workqueue: 0x0 (rcu_gp) [13137.946099] Call Trace: [13137.946286] [13137.946831] __schedule+0x72e/0x1570 [13137.947115] ? io_schedule_timeout+0x160/0x160 [13137.947788] ? lock_downgrade+0x130/0x130 [13137.948050] ? pwq_dec_nr_in_flight+0x230/0x230 [13137.948797] schedule+0x128/0x220 [13137.949447] worker_thread+0x152/0xf90 [13137.949726] ? process_one_work+0x1520/0x1520 [13137.950448] kthread+0x2a7/0x350 [13137.951078] ? kthread_complete_and_exit+0x20/0x20 [13137.951795] ret_from_fork+0x22/0x30 [13137.952090] [13137.952277] task:chronyd state:S stack:24760 pid:201490 ppid: 1 flags:0x00000002 [13137.953201] Call Trace: [13137.953392] [13137.953927] __schedule+0x72e/0x1570 [13138x143/0x300 [13138.454535] ? hrtimer_nanosleep_restart+0x160/0x160 [13138.454865] ? hrtimer_init_sleeper_on_stack+0x90/0x90 [13138.455225] poll_schedule_timeout.constprop.0+0xa6/0x170 [13138.455641] do_select+0x9e4/0xd20 [13138.456367] ? select_estimate_accuracy+0x2a0/0x2a0 [13138.457091] ? __lock_release+0x4c1/0xa00 [13138.457388] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13138.458151] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13138.458938] ? poll_schedule_timeout.constprop.0+0x170/0x170 [13138.459801] ? __lock_acquire+0xb72/0x1870 [13138.460082] ? sched_clock_cpu+0x15/0x1b0 [13138.460406] ? find_held_lock+0x33/0x120 [13138.460672] ? __lock_release+0x4c1/0xa00 [13138.460918] ? lock_downgrade+0x130/0x130 [13138.461212] ? __might_fault+0xbc/0x160 [13138.461549] ? core_sys_select+0x30c/0x710 [13138.461839] core_sys_select+0x30c/0x710 [13138.462090] ? __x64_sys_poll+0x430/0x430 [13138.462372] ? static_obj+0x62/0xc0 [13138.462981] ? sched_clock_cpu+0x15/0x1b0 [13138.463336] ? __lock_release+0x4c1/0xa00 [13138.463587] ? lock_downgrade+0x130/0x130 [13138.463832] ? rcu_read_unlock+0x40/0x40 [13138.464131] ? nsec_to_clock_t+0x30/0x30 [13138.464397] ? ktime_get_ts64+0x1eb/0x270 [13138.464648] ? __set_current_blocked+0xf0/0xf0 [13138.465340] ? __lock_release+0x4c1/0xa00 [13138.465609] do_pselect.constprop.0[13138.957867] __x64_sys_pselect6+0x138/0x250 [13138.966399] ? 
syscall_trace_enter.constprop.0+0x19c/0x280 [13138.966749] do_syscall_64+0x5c/0x90 [13138.966981] ? asm_sysvec_apic_timer_interrupt+0x16/0x20 [13138.967343] ? lockdep_hardirqs_on+0x79/0x100 [13138.968022] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13138.968355] RIP: 0033:0x7ff9b2945089 [13138.968604] RSP: 002b:00007fffdbf35f10 EFLAGS: 00000246 ORIG_RAX: 000000000000010e [13138.969451] RAX: ffffffffffffffda RBX: 00007fffdbf36000 RCX: 00007ff9b2945089 [13138.970312] RDX: 0000000000000000 RSI: 00007fffdbf36120 RDI: 0000000000000008 [13138.971169] RBP: 00007fffdbf36120 R08: 00007fffdbf35f20 R09: 0000000000000000 [13138.971981] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000008 [13138.972849] R13: 00007fffdbf35f20 R14: 00007fffdbf35ff0 R15: 0000000000000000 [13138.973853] [13138.974032] task:kworker/12:39 state:I stack:27760 pid:202025 ppid: 2 flags:0x00004000 [13138.975015] Workqueue: 0x0 (events) [13138.975375] Call Trace: [13138.975610] [13138.976122] __schedule+0x72e/0x1570 [13138.976424] ? io_schedule_timeout+0x160/0x160 [13138.977119] ? lock_downgrade+0x130/0x130 [13138.977415] ? pwq_dec_nr_in_flight+0x230/0x230 [13138.978125] schedule+0x128/0x220 [13138.978752] worker_threadret_from_fork+0x22/0x30 [13139.479625] [13139.479818] task:bash state:S stack:27400 pid:203856 ppid: 1 flags:0x00000002 [13139.480824] Call Trace: [13139.481084] [13139.481666] __schedule+0x72e/0x1570 [13139.481937] ? io_schedule_timeout+0x160/0x160 [13139.482657] ? lock_downgrade+0x130/0x130 [13139.482981] schedule+0x128/0x220 [13139.483664] do_wait+0x501/0xb10 [13139.484408] kernel_wait4+0xf3/0x1d0 [13139.484660] ? __wake_up_parent+0x60/0x60 [13139.484910] ? kill_orphaned_pgrp+0x2f0/0x2f0 [13139.485618] ? sched_clock_cpu+0x15/0x1b0 [13139.485869] ? find_held_lock+0x33/0x120 [13139.486126] __do_sys_wait4+0xf4/0x100 [13139.486375] ? kernel_wait4+0x1d0/0x1d0 [13139.486660] ? ktime_get_coarse_real_ts64+0x130/0x170 [13139.486963] ? lockdep_hardirqs_on+0x79/0x100 [13139.487654] ? ktime_get_coarse_real_ts64+0x130/0x170 [13139.487978] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13139.488397] do_syscall_64+0x5c/0x90 [13139.488640] ? lockdep_hardirqs_on+0x79/0x100 [13139.489357] ? do_syscall_64+0x69/0x90 [13139.489613] ? asm_exc_page_fault+0x22/0x30 [13139.489862] ? lockdep_hardirqs_on+0x79/0x100 [13139.490559] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13139.490909] RIP: 0033:0x7ff7b93182ea [13139.491155] RSP: 002b:00007ffdc88bc118 EFLAGS: 00000246 ORIG_RAX: 000000000000003d [13139.491980] RAX: ffffffffffffff[13139.992633] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [13139.993512] R13: 00007ffdc88bc1a0 R14: 0000000000000000 R15: 0000000000000000 [13139.994502] [13139.994674] task:kworker/20:2 state:I stack:28528 pid:217977 ppid: 2 flags:0x00004000 [13139.995629] Workqueue: 0x0 (rcu_par_gp) [13139.995906] Call Trace: [13139.996058] [13139.996648] __schedule+0x72e/0x1570 [13139.996910] ? io_schedule_timeout+0x160/0x160 [13139.997600] ? lock_downgrade+0x130/0x130 [13139.997849] ? pwq_dec_nr_in_flight+0x230/0x230 [13139.998616] schedule+0x128/0x220 [13139.999231] worker_thread+0x152/0xf90 [13139.999592] ? process_one_work+0x1520/0x1520 [13140.000338] kthread+0x2a7/0x350 [13140.000967] ? 
kthread_complete_and_exit+0x20/0x20 [13140.001667] ret_from_fork+0x22/0x30 [13140.001960] [13140.002118] task:kworker/2:1 state:I stack:27304 pid:218521 ppid: 2 flags:0x00004000 [13140.002999] Workqueue: 0x0 (rcu_par_gp) [13140.003312] Call Trace: [13140.003474] [13140.003985] __schedule+0x72e/0x1570 [13140.004305] ? io_schedule_timeout+0x160/0x160 [13140.004991] ? lock_downgrade+0x130/0x130 [1314[13140.505583] kthread+0x2a7/0x350 [13140.506187] ? kthread_complete_and_exit+0x20/0x20 [13140.506889] ret_from_fork+0x22/0x30 [13140.507181] [13140.507390] task:kworker/u129:3 state:I stack:24080 pid:221363 ppid: 2 flags:0x00004000 [13140.508301] Workqueue: 0x0 (flush-253:0) [13140.508593] Call Trace: [13140.508745] [13140.509246] __schedule+0x72e/0x1570 [13140.509540] ? io_schedule_timeout+0x160/0x160 [13140.510238] ? lock_downgrade+0x130/0x130 [13140.510512] ? pwq_dec_nr_in_flight+0x230/0x230 [13140.511218] schedule+0x128/0x220 [13140.511875] worker_thread+0x152/0xf90 [13140.512157] ? process_one_work+0x1520/0x1520 [13140.512833] kthread+0x2a7/0x350 [13140.513437] ? kthread_complete_and_exit+0x20/0x20 [13140.514154] ret_from_fork+0x22/0x30 [13140.514494] [13140.514682] task:kworker/5:1 state:I stack:28480 pid:221550 ppid: 2 flags:0x00004000 [13140.515577] Workqueue: 0x0 (rcu_gp) [13140.515854] Call Trace: [13140.516006] [13140.516578] __schedule+0x72e/0x1570 [13140.516866] ? io_schedule_timeout+0x160/0x160 [13140.517561] ? lock_downgrade+0x130/0x130 [13140.54[13141.018289] kthread+0x2a7/0x350 [13141.018902] ? kthread_complete_and_exit+0x20/0x20 [13141.019601] ret_from_fork+0x22/0x30 [13141.019894] [13141.020065] task:kworker/18:2 state:I stack:27816 pid:224773 ppid: 2 flags:0x00004000 [13141.020996] Workqueue: 0x0 (mm_percpu_wq) [13141.021332] Call Trace: [13141.021500] [13141.022034] __schedule+0x72e/0x1570 [13141.022352] ? io_schedule_timeout+0x160/0x160 [13141.023205] ? lock_downgrade+0x130/0x130 [13141.023488] ? pwq_dec_nr_in_flight+0x230/0x230 [13141.024290] schedule+0x128/0x220 [13141.024924] worker_thread+0x152/0xf90 [13141.025188] ? process_one_work+0x1520/0x1520 [13141.025882] kthread+0x2a7/0x350 [13141.026500] ? kthread_complete_and_exit+0x20/0x20 [13141.027197] ret_from_fork+0x22/0x30 [13141.027539] [13141.027716] task:kworker/9:2 state:D stack:27816 pid:233066 ppid: 2 flags:0x00004000 [13141.028631] Workqueue: events do_free_init [13141.028897] Call Trace: [13141.029071] [13141.029646] __schedule+0x72e/0x1570 [13141.029910] ? io_schedule_timeout+0x160/0x160 [13141.030618] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13141.031437] schedule+0x128/0x220 [13141.032079] schedule_timeout+0x1a9/0x260 [13141.032383] ? usleep_range_state+0x190/0x190 [13141.033061] ? lock_downgrade+0x13__wait_for_common+0x37c/0x530 [13141.533669] ? usleep_range_state+0x190/0x190 [13141.534405] ? out_of_line_wait_on_bit_timeout+0x170/0x170 [13141.534723] ? lockdep_init_map_type+0x2ff/0x820 [13141.535452] __wait_rcu_gp+0x254/0x390 [13141.535738] synchronize_rcu+0x15f/0x190 [13141.535977] ? synchronize_rcu_expedited+0x360/0x360 [13141.536336] ? rcu_gp_init+0x12c0/0x12c0 [13141.536581] ? rcu_tasks_pregp_step+0x10/0x10 [13141.537240] ? __wait_for_common+0x9e/0x530 [13141.537550] do_free_init+0x32/0xa0 [13141.538148] process_one_work+0x8e5/0x1520 [13141.538460] ? pwq_dec_nr_in_flight+0x230/0x230 [13141.539103] ? __lock_contended+0x980/0x980 [13141.539435] ? worker_thread+0x15a/0xf90 [13141.539716] worker_thread+0x59e/0xf90 [13141.539974] ? 
process_one_work+0x1520/0x1520 [13141.540693] kthread+0x2a7/0x350 [13141.541328] ? kthread_complete_and_exit+0x20/0x20 [13141.541994] ret_from_fork+0x22/0x30 [13141.542339] [13141.542511] task:kworker/3:3 state:I stack:26720 pid:233152 ppid: 2 flags:0x00004000 [13141.543418] Workqueue: 0x0 (rcu_gp) [13141.543727] Call Trace: [13141.543876] [13141.544457] __schedule+0x72e/0x1570 [13141.544716] ? io_schedule_timeout+0x160/0x160 [13142.045241] ? process_one_work+0x1520/0x1520 [13142.045947] kthread+0x2a7/0x350 [13142.046544] ? kthread_complete_and_exit+0x20/0x20 [13142.047225] ret_from_fork+0x22/0x30 [13142.047581] [13142.047746] task:kworker/23:0 state:I stack:29344 pid:233530 ppid: 2 flags:0x00004000 [13142.048662] Workqueue: 0x0 (events) [13142.048924] Call Trace: [13142.049090] [13142.049640] __schedule+0x72e/0x1570 [13142.049898] ? io_schedule_timeout+0x160/0x160 [13142.050552] ? lock_downgrade+0x130/0x130 [13142.050803] ? pwq_dec_nr_in_flight+0x230/0x230 [13142.051552] schedule+0x128/0x220 [13142.052142] worker_thread+0x152/0xf90 [13142.052477] ? process_one_work+0x1520/0x1520 [13142.053143] kthread+0x2a7/0x350 [13142.053766] ? kthread_complete_and_exit+0x20/0x20 [13142.054501] ret_from_fork+0x22/0x30 [13142.054789] [13142.054943] task:kworker/u128:2 state:D stack:27128 pid:233778 ppid: 2 flags:0x00004000 [13142.055881] Workqueue: netns cleanup_net [13142.056135] Call Trace: [13142.056359] [13142.056868] __schedule+0x72e/0x1570 [13142.057171] ? io_schedule_timeout+0x160/0x160 [13142.057856] ? mark_held_locks+0xa5/0xf0 [13142.058141] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13142.058890] ? _raw_spin_unlock_irqrestore+0x59/0x70synchronize_rcu_expedited+0x2a3/0x360 [13142.559995] ? wait_rcu_exp_gp+0x40/0x40 [13142.560346] ? prepare_to_wait_exclusive+0x2c0/0x2c0 [13142.560708] ? rcu_exp_wait_wake+0x170/0x170 [13142.561414] ? __mutex_unlock_slowpath+0x161/0x5e0 [13142.562063] ? mutex_lock_io_nested+0x12c3/0x12d0 [13142.562772] ? wait_for_completion_io_timeout+0x20/0x20 [13142.563121] ipv6_mc_down+0xf0/0x360 [13142.563414] addrconf_ifdown.isra.0+0xe36/0x1150 [13142.564105] ? sit_add_v4_addrs+0x6a0/0x6a0 [13142.564473] addrconf_notify+0xc8/0x1060 [13142.564714] ? sel_netif_netdev_notifier_handler+0x164/0x2c0 [13142.565473] notifier_call_chain+0x9e/0x180 [13142.565742] dev_close_many+0x28d/0x550 [13142.565991] ? dev_get_by_napi_id+0x120/0x120 [13142.566688] ? find_held_lock+0x33/0x120 [13142.566909] ? sched_clock_cpu+0x15/0x1b0 [13142.567146] unregister_netdevice_many+0x374/0x1210 [13142.567842] ? lock_downgrade+0x130/0x130 [13142.568075] ? netdev_pick_tx+0x620/0x620 [13142.568356] ? unregister_netdevice_queue+0x142/0x340 [13142.568699] ? unregister_netdevice_many+0x1210/0x1210 [13142.569000] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13142.569755] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13142.570425] default_device_exit_batch+0x2bb/0x9a0 [13143.070965] ? register_pernet_device+0x60/0x60 [13143.071761] process_one_work+0x8e5/0x1520 [13143.072049] ? pwq_dec_nr_in_flight+0x230/0x230 [13143.072755] ? __lock_contended+0x980/0x980 [13143.073016] ? worker_thread+0x15a/0xf90 [13143.073253] worker_thread+0x59e/0xf90 [13143.073573] ? process_one_work+0x1520/0x1520 [13143.074312] kthread+0x2a7/0x350 [13143.074890] ? 
kthread_complete_and_exit+0x20/0x20 [13143.075624] ret_from_fork+0x22/0x30 [13143.075900] [13143.076066] task:kworker/13:1 state:I stack:28320 pid:235504 ppid: 2 flags:0x00004000 [13143.076985] Workqueue: 0x0 (rcu_par_gp) [13143.077267] Call Trace: [13143.077439] [13143.077938] __schedule+0x72e/0x1570 [13143.078192] ? io_schedule_timeout+0x160/0x160 [13143.078874] ? lock_downgrade+0x130/0x130 [13143.079126] ? pwq_dec_nr_in_flight+0x230/0x230 [13143.079884] schedule+0x128/0x220 [13143.080512] worker_thread+0x152/0xf90 [13143.080791] ? process_one_work+0x1520/0x1520 [13143.081474] kthread+0x2a7/0x350 [13143.082067] ? kthread_complete_and_exit+0x20/0x20 [13143.082800] ret_from_fork+0x22/0x30 [13143.083087] [13143.083265] task:kworker/17:2 state:I stack:28480 pid:237245 ppid:[13143.575509] ? io_schedule_timeout+0x160/0x160 [13143.584624] ? lock_downgrade+0x130/0x130 [13143.584867] ? pwq_dec_nr_in_flight+0x230/0x230 [13143.585599] schedule+0x128/0x220 [13143.586236] worker_thread+0x152/0xf90 [13143.586554] ? process_one_work+0x1520/0x1520 [13143.587251] kthread+0x2a7/0x350 [13143.587865] ? kthread_complete_and_exit+0x20/0x20 [13143.588583] ret_from_fork+0x22/0x30 [13143.588870] [13143.589033] task:kworker/10:0 state:I stack:29240 pid:237518 ppid: 2 flags:0x00004000 [13143.589972] Workqueue: 0x0 (rcu_gp) [13143.590305] Call Trace: [13143.590458] [13143.590962] __schedule+0x72e/0x1570 [13143.591251] ? io_schedule_timeout+0x160/0x160 [13143.591921] ? lock_downgrade+0x130/0x130 [13143.592168] ? pwq_dec_nr_in_flight+0x230/0x230 [13143.592883] schedule+0x128/0x220 [13143.593525] worker_thread+0x152/0xf90 [13143.593795] ? process_one_work+0x1520/0x1520 [13143.594507] kthread+0x2a7/0x350 [13143.595102] ? kthread_complete_and_exit+0x20/0x20 [13143.595814] ret_from_fork+0x22/0x30 [13143.596101] [13143.596301] task:kworker/1:1 state:I stack:27048 pid:238691 ppid: 2 flags:0x0000? io_schedule_timeout+0x160/0x160 [13144.097581] ? lock_downgrade+0x130/0x130 [13144.097828] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.098575] schedule+0x128/0x220 [13144.099170] worker_thread+0x152/0xf90 [13144.099517] ? process_one_work+0x1520/0x1520 [13144.100185] kthread+0x2a7/0x350 [13144.100785] ? kthread_complete_and_exit+0x20/0x20 [13144.101487] ret_from_fork+0x22/0x30 [13144.101772] [13144.101933] task:kworker/19:1 state:I stack:27760 pid:238944 ppid: 2 flags:0x00004000 [13144.102869] Workqueue: 0x0 (mm_percpu_wq) [13144.103133] Call Trace: [13144.103362] [13144.103880] __schedule+0x72e/0x1570 [13144.104147] ? io_schedule_timeout+0x160/0x160 [13144.104832] ? lock_downgrade+0x130/0x130 [13144.105063] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.105826] schedule+0x128/0x220 [13144.106457] worker_thread+0x152/0xf90 [13144.106740] ? process_one_work+0x1520/0x1520 [13144.107423] kthread+0x2a7/0x350 [13144.108015] ? kthread_complete_and_exit+0x20/0x20 [13144.108730] ret_from_fork+0x22/0x30 [13144.109012] [13144.109169] task:kworker/11:1 state:I stack:29344 pid:239680 ppid: 2 flags:0x00004000 [13144.110055] Workqueue: 0x0 (mm_percpu_wq) [13144.1+0x130/0x130 [13144.610731] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.611499] schedule+0x128/0x220 [13144.612121] worker_thread+0x152/0xf90 [13144.612438] ? process_one_work+0x1520/0x1520 [13144.613086] kthread+0x2a7/0x350 [13144.613715] ? 
kthread_complete_and_exit+0x20/0x20 [13144.614468] ret_from_fork+0x22/0x30 [13144.614770] [13144.614941] task:kworker/14:1 state:I stack:27760 pid:241561 ppid: 2 flags:0x00004000 [13144.615994] Workqueue: 0x0 (rcu_par_gp) [13144.616412] Call Trace: [13144.616569] [13144.617090] __schedule+0x72e/0x1570 [13144.617413] ? io_schedule_timeout+0x160/0x160 [13144.618090] ? lock_downgrade+0x130/0x130 [13144.618391] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.619099] schedule+0x128/0x220 [13144.619768] worker_thread+0x152/0xf90 [13144.620054] ? process_one_work+0x1520/0x1520 [13144.620762] kthread+0x2a7/0x350 [13144.621410] ? kthread_complete_and_exit+0x20/0x20 [13144.622089] ret_from_fork+0x22/0x30 [13144.622456] [13144.622628] task:kworker/22:2 state:I stack:28624 pid:241683 ppid: 2 flags:0x00004000 [13144.623522] Workqueue: 0x0 (rcu_par_gp) [13144.623830] Call Trace: [13144.624026] [13144.624593] __schedule+0x72e/0x1570 [13144.624858] ? io_schedule_timeout+0x160/0x160 [13144.625545] ? lock_downgrade+0x130/0x130 [13144.625805] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.626551] schedule+0x128/0x220 [13144.627178] worker_thread+0x152/0xf90 [13144.627536] ? process_one_work+0x1520/0x1520 [13144.628200] kthread+0x2a7/0x350 [13144.628824] ? kthread_complete_and_exit+0x20/0x20 [13144.629548] ret_from_fork+0x22/0x30 [13144.629855] [13144.630028] task:kworker/6:3 state:R running task stack:27648 pid:242864 ppid: 2 flags:0x00004000 [13144.631106] Workqueue: 0x0 (mm_percpu_wq) [13144.631475] Call Trace: [13144.631631] [13144.632170] __schedule+0x72e/0x1570 [13144.632462] ? io_schedule_timeout+0x160/0x160 [13144.633124] ? lock_downgrade+0x130/0x130 [13144.633421] ? pwq_dec_nr_in_flight+0x230/0x230 [13144.634137] schedule+0x128/0x220 [13144.634799] worker_thread+0x152/0xf90 [13144.635089] ? process_one_work+0x1520/0x1520 [13144.635788] kthread+0x2a7/0x350 [13144.636405] ? kthread_complete_and_exit+0x20/0x20 [13144.637187] ret_from_fork+0x22/0x30 [13144.637541] [13144.637722] task:kworker/21:0 state:I stack:28624 pid:244590 ppid: 2 flags:0x00004000 [13144.638615] Workqueue: 0x0 (mm_percpu_wq) [13144.638890] Call Trace: [13144.639042] [13145.143416] task:kworker/23:1 state:I stack:29344 pid:250831 ppid: 2 flags:0x00004000 [13145.144372] Workqueue: 0x0 (events_power_efficient) [13145.144738] Call Trace: [13145.144887] [13145.145596] __schedule+0x72e/0x1570 [13145.145858] ? io_schedule_timeout+0x160/0x160 [13145.146605] ? lock_downgrade+0x130/0x130 [13145.147010] ? pwq_dec_nr_in_flight+0x230/0x230 [13145.147788] schedule+0x128/0x220 [13145.148470] worker_thread+0x152/0xf90 [13145.148782] ? process_one_work+0x1520/0x1520 [13145.149680] kthread+0x2a7/0x350 [13145.150429] ? kthread_complete_and_exit+0x20/0x20 [13145.151107] ret_from_fork+0x22/0x30 [13145.151496] [13145.151683] task:kworker/16:2 state:I stack:29656 pid:251999 ppid: 2 flags:0x00004000 [13145.207213meout+0x160/0x160 [13145.653143] ? lock_downgrade+0x130/0x130 [13145.653560] schedule+0x128/0x220 [13145.654238] worker_thread+0x152/0xf90 [13145.654576] ? process_one_work+0x1520/0x1520 [13145.655284] kthread+0x2a7/0x350 [13145.655910] ? kthread_complete_and_exit+0x20/0x20 [13145.656658] ret_from_fork+0x22/0x30 [13145.657188] [13145.657425] task:kworker/22:1 state:D stack:29088 pid:254498 ppid: 2 flags:0x00004000 [13145.658392] Workqueue: rcu_gp wait_rcu_exp_gp [13145.659100] Call Trace: [13145.659353] [13145.659897] __schedule+0x72e/0x1570 [13145.660143] ? io_schedule_timeout+0x160/0x160 [13145.661091] ? 
lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13145.662010] schedule+0x128/0x220 [13145.662659] schedule_timeout+0x125/0x260 [13145.662924] ? usleep_range_state+0x190/0x190 [13145.663628] ? destroy_timer_on_stack+0x20/0x20 [13145.664423] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13145.665168] ? lockdep_hardirqs_on+0x79/0x100 [13145.665923] ? _raw_spin_unlock_irqrestore+0x42/0x70 [13145.666413] synchronize_rcu_expedited_wait_once+0x115/0x190 [13145.667203] ? sync_rcu_exp_done_unlocked+0x1f0/0x1f0 [13145.667589] ? __wait_for_common+0x387/0x530 [13145.72276qs_on+0x79/0x100 [13146.168897] rcu_exp_wait_wake+0x17/0x170 [13146.169216] process_one_work+0x8e5/0x1520 [13146.169558] ? pwq_dec_nr_in_flight+0x230/0x230 [13146.170228] ? __lock_contended+0x980/0x980 [13146.170546] ? worker_thread+0x15a/0xf90 [13146.170823] worker_thread+0x59e/0xf90 [13146.171090] ? process_one_work+0x1520/0x1520 [13146.171813] kthread+0x2a7/0x350 [13146.172438] ? kthread_complete_and_exit+0x20/0x20 [13146.173119] ret_from_fork+0x22/0x30 [13146.173483] [13146.173644] task:kworker/20:0 state:I stack:28160 pid:254579 ppid: 2 flags:0x00004000 [13146.174585] Workqueue: 0x0 (events) [13146.174847] Call Trace: [13146.175010] [13146.175556] __schedule+0x72e/0x1570 [13146.175809] ? io_schedule_timeout+0x160/0x160 [13146.176471] ? lock_downgrade+0x130/0x130 [13146.176721] ? pwq_dec_nr_in_flight+0x230/0x230 [13146.177451] schedule+0x128/0x220 [13146.178043] worker_thread+0x152/0xf90 [13146.178396] ? process_one_work+0x1520/0x1520 [13146.179066] kthread+0x2a7/0x350 [13146.179661] ? kthread_complete_and_exit+0x20/0x20 [13146.180388] ret_from_fork+0x22/0x30 [13146.180674] [13146.180833] ta__schedule+0x72e/0x1570 [13146.681459] ? io_schedule_timeout+0x160/0x160 [13146.682121] ? lock_downgrade+0x130/0x130 [13146.682426] ? pwq_dec_nr_in_flight+0x230/0x230 [13146.683141] schedule+0x128/0x220 [13146.683834] worker_thread+0x152/0xf90 [13146.684136] ? process_one_work+0x1520/0x1520 [13146.684827] kthread+0x2a7/0x350 [13146.685427] ? kthread_complete_and_exit+0x20/0x20 [13146.686110] ret_from_fork+0x22/0x30 [13146.686476] [13146.686643] task:kworker/8:0 state:I stack:29344 pid:254821 ppid: 2 flags:0x00004000 [13146.687525] Workqueue: 0x0 (mm_percpu_wq) [13146.687791] Call Trace: [13146.687952] [13146.688503] __schedule+0x72e/0x1570 [13146.688762] ? io_schedule_timeout+0x160/0x160 [13146.689422] ? lock_downgrade+0x130/0x130 [13146.689670] ? pwq_dec_nr_in_flight+0x230/0x230 [13146.690419] schedule+0x128/0x220 [13146.691007] worker_thread+0x152/0xf90 [13146.691268] ? process_one_work+0x1520/0x1520 [13146.691953] kthread+0x2a7/0x350 [13146.692575] ? kthread_complete_and_exit+0x20/0x20 [13146.693245] ret_from_fork+0x22/0x30 [13146.693577] [13146.693735] task:kworker/0:2 state:I stack:26720 pid:254834 ppid: 2 flags:0x00004000 [13146.694658] Workqueue: 0x0 (events_freezable) [13146.695431] Call Trace: [13146.722832] ? pwq_dec_nr_in_flight+0x230/0x230 [13147.196457] schedule+0x128/0x220 [13147.197057] worker_thread+0x152/0xf90 [13147.197409] ? process_one_work+0x1520/0x1520 [13147.198075] kthread+0x2a7/0x350 [13147.198666] ? kthread_complete_and_exit+0x20/0x20 [13147.199396] ret_from_fork+0x22/0x30 [13147.199682] [13147.199842] task:kworker/14:0 state:I stack:26720 pid:255552 ppid: 2 flags:0x00004000 [13147.200756] Workqueue: 0x0 (mm_percpu_wq) [13147.201019] Call Trace: [13147.201183] [13147.201709] __schedule+0x72e/0x1570 [13147.201964] ? io_schedule_timeout+0x160/0x160 [13147.202645] ? lock_downgrade+0x130/0x130 [13147.202897] ? 
pwq_dec_nr_in_flight+0x230/0x230 [13147.203641] schedule+0x128/0x220 [13147.204297] worker_thread+0x152/0xf90 [13147.204608] ? process_one_work+0x1520/0x1520 [13147.205240] kthread+0x2a7/0x350 [13147.205858] ? kthread_complete_and_exit+0x20/0x20 [13147.206545] ret_from_fork+0x22/0x30 [13147.206834] [13147.206987] task:kworker/15:1 state:I stack:27376 pid:255553 ppid: 2 flags:0x00004000 [13147.207902] Workqueue: 0x0 (mm_percpu_wq) [13147.208185] Call Trace: [13147.208348] [13147.208869] __schedule+0x72e/0x1570 [13147.209137] ? io_schedule_timeout+0x1152/0xf90 [13147.709751] ? process_one_work+0x1520/0x1520 [13147.710466] kthread+0x2a7/0x350 [13147.711064] ? kthread_complete_and_exit+0x20/0x20 [13147.711778] ret_from_fork+0x22/0x30 [13147.712063] [13147.712222] task:kworker/u128:1 state:I stack:28856 pid:256840 ppid: 2 flags:0x00004000 [13147.713164] Workqueue: 0x0 (monitor_1_hpsa) [13147.713926] Call Trace: [13147.714088] [13147.714634] __schedule+0x72e/0x1570 [13147.714915] ? io_schedule_timeout+0x160/0x160 [13147.715581] ? lock_downgrade+0x130/0x130 [13147.715825] ? pwq_dec_nr_in_flight+0x230/0x230 [13147.716562] schedule+0x128/0x220 [13147.717146] worker_thread+0x152/0xf90 [13147.717494] ? process_one_work+0x1520/0x1520 [13147.718187] kthread+0x2a7/0x350 [13147.718774] ? kthread_complete_and_exit+0x20/0x20 [13147.719493] ret_from_fork+0x22/0x30 [13147.719781] [13147.719967] task:kworker/13:0 state:I stack:27456 pid:258198 ppid: 2 flags:0x00004000 [13147.720899] Workqueue: 0x0 (rcu_par_gp) [13147.721186] Call Trace: [13147.721341] [13147.721881] __schedule+0x72e/0x1570 [13147.722131] ? io_schedule_timeout+152/0xf90 [13148.222733] ? process_one_work+0x1520/0x1520 [13148.223426] kthread+0x2a7/0x350 [13148.224002] ? kthread_complete_and_exit+0x20/0x20 [13148.224711] ret_from_fork+0x22/0x30 [13148.224996] [13148.225154] task:kworker/12:1 state:I stack:27816 pid:258645 ppid: 2 flags:0x00004000 [13148.226088] Workqueue: 0x0 (mm_percpu_wq) [13148.226375] Call Trace: [13148.226534] [13148.227030] __schedule+0x72e/0x1570 [13148.227274] ? io_schedule_timeout+0x160/0x160 [13148.227944] ? lock_downgrade+0x130/0x130 [13148.228215] ? pwq_dec_nr_in_flight+0x230/0x230 [13148.228933] schedule+0x128/0x220 [13148.229567] worker_thread+0x152/0xf90 [13148.229842] ? process_one_work+0x1520/0x1520 [13148.230520] kthread+0x2a7/0x350 [13148.231115] ? kthread_complete_and_exit+0x20/0x20 [13148.231825] ret_from_fork+0x22/0x30 [13148.232110] [13148.232263] task:kworker/17:1 state:I stack:30032 pid:259368 ppid: 2 flags:0x00004000 [13148.233139] Workqueue: 0x0 (rcu_par_gp) [13148.233443] Call Trace: [13148.233602] [13148.234117] __schedule+0x72e/0x1570 [13148.234418] ? io_schedule_timeout+0x160/0x160 [13148.235066] ? lock_downgrade+0x130/0x130 [13148.235352] ? pwq_dec_nr_in_flight+0x230/0x230 [13148.735885] ? kthread_complete_and_exit+0x20/0x20 [13148.736611] ret_from_fork+0x22/0x30 [13148.736900] [13148.737058] task:kworker/6:0 state:I stack:29704 pid:259508 ppid: 2 flags:0x00004000 [13148.737996] Workqueue: 0x0 (events) [13148.738258] Call Trace: [13148.738464] [13148.738972] __schedule+0x72e/0x1570 [13148.739237] ? io_schedule_timeout+0x160/0x160 [13148.739938] ? lock_downgrade+0x130/0x130 [13148.740177] ? pwq_dec_nr_in_flight+0x230/0x230 [13148.740974] schedule+0x128/0x220 [13148.741661] worker_thread+0x152/0xf90 [13148.741972] ? process_one_work+0x1520/0x1520 [13148.742708] kthread+0x2a7/0x350 [13148.743362] ? 
kthread_complete_and_exit+0x20/0x20 [13148.744051] ret_from_fork+0x22/0x30 [13148.744363] [13148.744534] task:kworker/2:2 state:I stack:30032 pid:259532 ppid: 2 flags:0x00004000 [13148.745438] Workqueue: 0x0 (rcu_par_gp) [13148.745702] Call Trace: [13148.745865] [13148.746415] __schedule+0x72e/0x1570 [13148.746674] ? io_schedule_timeout+0x160/0x160 [13148.747317] ? lock_downgrade+0x130/0x130 [13148.747595] ? pwq_dec_nr_in_flight+0x230/0x230 [13148.748275] schedule+0x128/0x220 [13148.748901] worker_thread+0x152/022/0x30 [13149.249468] [13149.249658] task:kworker/4:1 state:I stack:27304 pid:259648 ppid: 2 flags:0x00004000 [13149.250535] Workqueue: 0x0 (rcu_par_gp) [13149.250797] Call Trace: [13149.250961] [13149.251484] __schedule+0x72e/0x1570 [13149.251736] ? io_schedule_timeout+0x160/0x160 [13149.252403] ? lock_downgrade+0x130/0x130 [13149.252661] ? pwq_dec_nr_in_flight+0x230/0x230 [13149.253362] schedule+0x128/0x220 [13149.253942] worker_thread+0x152/0xf90 [13149.254211] ? process_one_work+0x1520/0x1520 [13149.254935] kthread+0x2a7/0x350 [13149.255555] ? kthread_complete_and_exit+0x20/0x20 [13149.256203] ret_from_fork+0x22/0x30 [13149.256557] [13149.256721] task:kworker/11:0 state:I stack:28816 pid:260189 ppid: 2 flags:0x00004000 [13149.257618] Workqueue: 0x0 (mm_percpu_wq) [13149.257886] Call Trace: [13149.258051] [13149.258611] __schedule+0x72e/0x1570 [13149.258894] ? io_schedule_timeout+0x160/0x160 [13149.259581] ? lock_downgrade+0x130/0x130 [13149.259854] ? pwq_dec_nr_in_flight+0x230/0x230 [13149.260579] schedule+0x128/0x220 [13149.261161] worker_thread+0x152/0xf90 [13149.261511] ? process_one_work+0x1520/0x1520 [13149.262177] kthread+0xk:30048 pid:260406 ppid: 2 flags:0x00004000 [13149.763153] Call Trace: [13149.763387] [13149.763938] __schedule+0x72e/0x1570 [13149.764185] ? io_schedule_timeout+0x160/0x160 [13149.764889] ? lock_downgrade+0x130/0x130 [13149.765202] schedule+0x128/0x220 [13149.765847] worker_thread+0x152/0xf90 [13149.766119] ? process_one_work+0x1520/0x1520 [13149.766829] kthread+0x2a7/0x350 [13149.767452] ? kthread_complete_and_exit+0x20/0x20 [13149.768137] ret_from_fork+0x22/0x30 [13149.768495] [13149.768676] task:kworker/21:2 state:I stack:30032 pid:260440 ppid: 2 flags:0x00004000 [13149.769590] Workqueue: 0x0 (events) [13149.769854] Call Trace: [13149.770020] [13149.770567] __schedule+0x72e/0x1570 [13149.770821] ? io_schedule_timeout+0x160/0x160 [13149.771467] ? lock_downgrade+0x130/0x130 [13149.771706] ? pwq_dec_nr_in_flight+0x230/0x230 [13149.772466] schedule+0x128/0x220 [13149.773057] worker_thread+0x152/0xf90 [13149.773315] ? process_one_work+0x1520/0x1520 [13149.774000] kthread+0x2a7/0x350 [13149.774596] ? kthread_complete_and_exit+0x20/0x20 [13149.775242] ret_from_fork+0x22/0x30 [13149.775602] [13149.775762] task:kworker/7:1 state:I stack:27760 pid:260477 ppid: 2 flags:0x00004000 [13149.776641] Workqueue: 0x0 (mm_percpu_wq) [13149.776909] Call Trace: [13149.777072] [13149.777617] __schedule+0x72e/0x1570 [13149.777870] ? io_schedule_timeout+0x160/0x160 [13149.778536] ? lock_downgrade [13150.279452] kthread+0x2a7/0x350 [13150.280032] ? kthread_complete_and_exit+0x20/0x20 [13150.280744] ret_from_fork+0x22/0x30 [13150.281032] [13150.281188] task:kworker/u130:1 state:I stack:24248 pid:260585 ppid: 2 flags:0x00004000 [13150.282122] Workqueue: 0x0 (xfs-cil/dm-0) [13150.282410] Call Trace: [13150.282572] [13150.283097] __schedule+0x72e/0x1570 [13150.283419] ? io_schedule_timeout+0x160/0x160 [13150.284092] ? lock_downgrade+0x130/0x130 [13150.284388] ? 
pwq_dec_nr_in_flight+0x230/0x230 [13150.285076] schedule+0x128/0x220 [13150.285700] worker_thread+0x152/0xf90 [13150.285983] ? process_one_work+0x1520/0x1520 [13150.286668] kthread+0x2a7/0x350 [13150.287263] ? kthread_complete_and_exit+0x20/0x20 [13150.287968] ret_from_fork+0x22/0x30 [13150.288250] [13150.288463] task:tls-strp state:I stack:30728 pid:260723 ppid: 2 flags:0x00004000 [13150.289430] Call Trace: [13150.289600] [13150.290102] __schedule+0x72e/0x1570 [13150.290446] ? io_schedule_timeout+0x160/0x160 [13150.291167] ? lock_downgrade+0x130/0x130 [13150.291468] ? wait_for_completion_io_timeout[13150.792024] ? __kthread_parkme+0xcc/0x200 [13150.792287] ? worker_thread+0xf90/0xf90 [13150.792590] kthread+0x2a7/0x350 [13150.793185] ? kthread_complete_and_exit+0x20/0x20 [13150.793925] ret_from_fork+0x22/0x30 [13150.794221] [13150.794444] task:kworker/9:1 state:I stack:28856 pid:260779 ppid: 2 flags:0x00004000 [13150.795318] Workqueue: 0x0 (mm_percpu_wq) [13150.795619] Call Trace: [13150.795796] [13150.796295] __schedule+0x72e/0x1570 [13150.796577] ? io_schedule_timeout+0x160/0x160 [13150.797218] ? lock_downgrade+0x130/0x130 [13150.797503] ? pwq_dec_nr_in_flight+0x230/0x230 [13150.798189] schedule+0x128/0x220 [13150.798858] worker_thread+0x152/0xf90 [13150.799140] ? process_one_work+0x1520/0x1520 [13150.799830] kthread+0x2a7/0x350 [13150.800428] ? kthread_complete_and_exit+0x20/0x20 [13150.801094] ret_from_fork+0x22/0x30 [13150.801422] [13150.801612] task:kworker/u129:1 state:I stack:24248 pid:260891 ppid: 2 flags:0x00004000 [13150.802548] Workqueue: 0x0 (flush-253:0) [13150.802842] Call Trace: [13150.802994] [13150.803548] __schedule+0x72e/0x1570 [13150.8[13151.304188] worker_thread+0x152/0xf90 [13151.304545] ? process_one_work+0x1520/0x1520 [13151.305188] kthread+0x2a7/0x350 [13151.305802] ? kthread_complete_and_exit+0x20/0x20 [13151.306507] ret_from_fork+0x22/0x30 [13151.306792] [13151.306950] task:kworker/10:2 state:I stack:30032 pid:261418 ppid: 2 flags:0x00004000 [13151.307889] Workqueue: 0x0 (rcu_par_gp) [13151.308152] Call Trace: [13151.308318] [13151.308848] __schedule+0x72e/0x1570 [13151.309093] ? io_schedule_timeout+0x160/0x160 [13151.309778] ? lock_downgrade+0x130/0x130 [13151.310029] ? pwq_dec_nr_in_flight+0x230/0x230 [13151.310776] schedule+0x128/0x220 [13151.311424] worker_thread+0x152/0xf90 [13151.311699] ? process_one_work+0x1520/0x1520 [13151.312379] kthread+0x2a7/0x350 [13151.312950] ? kthread_complete_and_exit+0x20/0x20 [13151.313652] ret_from_fork+0x22/0x30 [13151.313983] [13151.314148] task:kworker/u128:0 state:I stack:29928 pid:262374 ppid: 2 flags:0x00004000 [13151.315057] Workqueue: 0x0 (monitor_0_hpsa) [13151.315725] Call Trace: [13151.315896] [13151.316458] __schedule+0x72e/0x1570 [13151.316718] ? io_schedule_timeout+0x160/0x160 [13151.317338] ? lock_downgrade+0x130/0x130 [13151.317612] ? kthread+0x2a7/0x350 [13151.818651] ? kthread_complete_and_exit+0x20/0x20 [13151.819393] ret_from_fork+0x22/0x30 [13151.819688] [13151.819853] task:kworker/1:0 state:D stack:27592 pid:263892 ppid: 2 flags:0x00004000 [13151.821143] Workqueue: ipv6_addrconf addrconf_verify_work [13151.821513] Call Trace: [13151.821669] [13151.822175] __schedule+0x72e/0x1570 [13151.822489] ? io_schedule_timeout+0x160/0x160 [13151.823212] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13151.824045] schedule+0x128/0x220 [13151.824711] schedule_preempt_disabled+0x14/0x30 [13151.825458] __mutex_lock+0xadd/0x1470 [13151.825756] ? addrconf_verify_work+0xa/0x20 [13151.826483] ? 
mutex_lock_io_nested+0x12d0/0x12d0 [13151.827193] ? rcu_read_unlock+0x40/0x40 [13151.827566] ? addrconf_verify_work+0xa/0x20 [13151.828247] addrconf_verify_work+0xa/0x20 [13151.828532] process_one_work+0x8e5/0x1520 [13151.828838] ? pwq_dec_nr_in_flight+0x230/0x230 [13151.829558] ? __lock_contended+0x980/0x980 [13151.829828] ? worker_thread+0x15a/0xf90 [13151.830074] worker_thread+0x59e/0xf90 [13151.830334] ? process_one_work+0x1520/0x1520 [13151.831026] kthread+0x2a7/0x350 [13151.831673] ? kthread_complete_and[13152.332632] Workqueue: 0x0 (mm_percpu_wq) [13152.332893] Call Trace: [13152.333046] [13152.333592] __schedule+0x72e/0x1570 [13152.333901] ? io_schedule_timeout+0x160/0x160 [13152.334594] ? lock_downgrade+0x130/0x130 [13152.334851] ? pwq_dec_nr_in_flight+0x230/0x230 [13152.335641] schedule+0x128/0x220 [13152.336280] worker_thread+0x152/0xf90 [13152.336638] ? process_one_work+0x1520/0x1520 [13152.337341] kthread+0x2a7/0x350 [13152.337960] ? kthread_complete_and_exit+0x20/0x20 [13152.338698] ret_from_fork+0x22/0x30 [13152.339010] [13152.339167] task:kworker/5:2 state:I stack:27760 pid:265040 ppid: 2 flags:0x00004000 [13152.340083] Workqueue: 0x0 (mm_percpu_wq) [13152.340330] Call Trace: [13152.340517] [13152.341074] __schedule+0x72e/0x1570 [13152.341320] ? io_schedule_timeout+0x160/0x160 [13152.342020] ? lock_downgrade+0x130/0x130 [13152.342280] ? pwq_dec_nr_in_flight+0x230/0x230 [13152.343066] schedule+0x128/0x220 [13152.343715] worker_thread+0x152/0xf90 [13152.344025] ? process_one_work+0x1520/0x1520 [13152.344759] kthread+0x2a7/0x350 [13152.345392] ? kthread_complete_and_exit+0x20/0x20 [13152.3729 [13152.746847] Workqueue: 0x0 (xfs-sync/dm-0) [13152.747102] Call Trace: [13152.747254] [13152.747820] __schedule+0x72e/0x1570 [13152.748088] ? io_schedule_timeout+0x160/0x160 [13152.748802] ? lock_downgrade+0x130/0x130 [13152.749070] ? pwq_dec_nr_in_flight+0x230/0x230 [13152.749834] schedule+0x128/0x220 [13152.750478] worker_thread+0x152/0xf90 [13152.750766] ? process_one_work+0x1520/0x1520 [13152.751589] kthread+0x2a7/0x350 [13152.752234] ? kthread_complete_and_exit+0x20/0x20 [13152.752984] ret_from_fork+0x22/0x30 [13152.753291] [13152.753514] task:kworker/u129:0 state:I stack:24248 pid:265825 ppid: 2 flags:0x00004000 [13152.754480] Workqueue: 0x0 (events_unbound) [13152.755315] Call Trace: [13152.755539] [13152.756087] __schedule+0x72e/0x1570 [13152.756340] ? io_schedule_timeout+0x160/0x160 [13152.757126] ? lock_downgrade+0x130/0x130 [13152.757463] ? pwq_dec_nr_in_flight+0x230/0x230 [13152.758152] schedule+0x128/0x220 [13152.758803] worker_thread+0x152/0xf90 [13152.759080] ? process_one_work+0x1520/0x1520 [13152.759777] kthread+0x2a7/0x350 [13152.760493] ? kthread_complete_and_exit+0x20/0x20 [13152.761289] ret_from_fork+0x22/0x30 [13152.761654] < [13153.262558] __schedule+0x72e/0x1570 [13153.262942] ? io_schedule_timeout+0x160/0x160 [13153.263637] ? lock_downgrade+0x130/0x130 [13153.263926] ? pwq_dec_nr_in_flight+0x230/0x230 [13153.264700] schedule+0x128/0x220 [13153.265315] worker_thread+0x152/0xf90 [13153.265692] ? process_one_work+0x1520/0x1520 [13153.266468] kthread+0x2a7/0x350 [13153.267151] ? kthread_complete_and_exit+0x20/0x20 [13153.268043] ret_from_fork+0x22/0x30 [13153.268337] [13153.268566] task:kworker/8:1 state:I stack:30208 pid:265994 ppid: 2 flags:0x00004000 [13153.269694] Call Trace: [13153.269866] [13153.270470] __schedule+0x72e/0x1570 [13153.270751] ? io_schedule_timeout+0x160/0x160 [13153.271510] ? lock_downgrade+0x130/0x130 [13153.271854] ? 
wait_for_completion_io_timeout+0x20/0x20 [13153.272188] schedule+0x128/0x220 [13153.272840] worker_thread+0x152/0xf90 [13153.273083] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13153.273874] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13153.274233] ? process_one_work+0x1520/0x1520 [13153.274971] kthread+0x2a7/0x350 [13153.275612] ? kthread_complete_and_exit+0[13153.776189] Workqueue: 0x0 (events) [13153.776617] Call Trace: [13153.776789] [13153.777309] __schedule+0x72e/0x1570 [13153.777650] ? io_schedule_timeout+0x160/0x160 [13153.778416] ? lock_downgrade+0x130/0x130 [13153.778672] ? pwq_dec_nr_in_flight+0x230/0x230 [13153.779355] schedule+0x128/0x220 [13153.780040] worker_thread+0x152/0xf90 [13153.780313] ? process_one_work+0x1520/0x1520 [13153.781268] kthread+0x2a7/0x350 [13153.782022] ? kthread_complete_and_exit+0x20/0x20 [13153.782834] ret_from_fork+0x22/0x30 [13153.783136] [13153.783306] task:kworker/6:2 state:R running task stack:30032 pid:266087 ppid: 2 flags:0x00004000 [13153.784327] Call Trace: [13153.784540] [13153.785510] __schedule+0x72e/0x1570 [13153.785833] ? io_schedule_timeout+0x160/0x160 [13153.786520] ? lock_downgrade+0x130/0x130 [13153.786794] ? wait_for_completion_io_timeout+0x20/0x20 [13153.787141] schedule+0x128/0x220 [13153.787765] worker_thread+0x152/0xf90 [13153.788025] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13153.788801] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13153.789150] ? process_one_work+0x1520/0x1520 [13153.790167] kthread+0x2a7/0x350 [13153.790982] ? kthread_complete_and_exit+0x20/0x20 [13153.791710] ret_from_fork+0x22/0x30 [13153.792008] [13153.792178] task:kworker/10:1 state:I stack:28312 pid:266384 ppid: 2 flags:0x0? io_schedule_timeout+0x160/0x160 [13154.293637] ? lock_downgrade+0x130/0x130 [13154.293889] ? pwq_dec_nr_in_flight+0x230/0x230 [13154.294647] schedule+0x128/0x220 [13154.295606] worker_thread+0x152/0xf90 [13154.295913] ? process_one_work+0x1520/0x1520 [13154.296614] kthread+0x2a7/0x350 [13154.297221] ? kthread_complete_and_exit+0x20/0x20 [13154.298000] ret_from_fork+0x22/0x30 [13154.298305] [13154.298558] task:kworker/18:0 state:I stack:30208 pid:266850 ppid: 2 flags:0x00004000 [13154.299543] Call Trace: [13154.299902] [13154.300525] __schedule+0x72e/0x1570 [13154.300835] ? io_schedule_timeout+0x160/0x160 [13154.301532] ? lock_downgrade+0x130/0x130 [13154.301780] ? pwq_dec_nr_in_flight+0x230/0x230 [13154.302561] schedule+0x128/0x220 [13154.303442] worker_thread+0x152/0xf90 [13154.303763] ? process_one_work+0x1520/0x1520 [13154.304559] kthread+0x2a7/0x350 [13154.305166] ? kthread_complete_and_exit+0x20/0x20 [13154.305940] ret_from_fork+0x22/0x30 [13154.306249] [13154.306480] task:kworker/u129:2 state:I stack:24080 pid:266923 ppid: 2 flags:0x00004000 [13154.307495] Workqueu[13154.808052] ? lock_downgrade+0x130/0x130 [13154.808320] ? pwq_dec_nr_in_flight+0x230/0x230 [13154.809081] schedule+0x128/0x220 [13154.809745] worker_thread+0x152/0xf90 [13154.810029] ? process_one_work+0x1520/0x1520 [13154.810765] kthread+0x2a7/0x350 [13154.811511] ? kthread_complete_and_exit+0x20/0x20 [13154.812365] ret_from_fork+0x22/0x30 [13154.812693] [13154.812862] task:kworker/u130:0 state:I stack:30832 pid:267029 ppid: 2 flags:0x00004000 [13154.813803] Call Trace: [13154.813960] [13154.814565] __schedule+0x72e/0x1570 [13154.814835] ? io_schedule_timeout+0x160/0x160 [13154.815543] ? lock_downgrade+0x130/0x130 [13154.815803] ? 
wait_for_completion_io_timeout+0x20/0x20 [13154.816151] schedule+0x128/0x220 [13154.816814] worker_thread+0x152/0xf90 [13154.817061] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13154.817818] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13154.818193] ? process_one_work+0x1520/0x1520 [13154.818943] kthread+0x2a7/0x350 [13154.819737] ? kthread_complete_and_exit+0x20/0x20 [13154.820585] ret_from_fork+0x22/0x30 [13154.820993] [13154.821156] task:kworker/2:0 state:R running task stack:29424 pid:267344 ppid: 2 flags:0x00004000 [13154.822179] Workqueue: 0x0 (mm_percpu_wq) [13154.822512] Call T[13155.314258] ? pwq_dec_nr_in_flight+0x230/0x230 [13155.323904] schedule+0x128/0x220 [13155.324556] worker_thread+0x152/0xf90 [13155.324952] ? process_one_work+0x1520/0x1520 [13155.325664] kthread+0x2a7/0x350 [13155.326270] ? kthread_complete_and_exit+0x20/0x20 [13155.327239] ret_from_fork+0x22/0x30 [13155.327690] [13155.327859] task:kworker/7:0 state:I stack:30208 pid:267438 ppid: 2 flags:0x00004000 [13155.328810] Call Trace: [13155.328993] [13155.329529] __schedule+0x72e/0x1570 [13155.329782] ? io_schedule_timeout+0x160/0x160 [13155.330470] ? lock_downgrade+0x130/0x130 [13155.330722] ? wait_for_completion_io_timeout+0x20/0x20 [13155.331058] schedule+0x128/0x220 [13155.331721] worker_thread+0x152/0xf90 [13155.331976] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13155.332728] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13155.333077] ? process_one_work+0x1520/0x1520 [13155.333799] kthread+0x2a7/0x350 [13155.334441] ? kthread_complete_and_exit+0x20/0x20 [13155.335109] ret_from_fork+0x22/0x30 [13155.335442] [13155.335633] task:kworker/20:1 state:I stack:29344 pid:267579 ppid: 2 flags:0x00004000 [13155.336545] Workqueue: 0x0 (mm_percpu_wq) [13155.336822] Call Trace: [13155.336990] [13155.337550] __schedule+0x72e/0x1570 [13155.337819] ? io_worker_thread+0x152/0xf90 [13155.838640] ? process_one_work+0x1520/0x1520 [13155.839346] kthread+0x2a7/0x350 [13155.839991] ? kthread_complete_and_exit+0x20/0x20 [13155.840732] ret_from_fork+0x22/0x30 [13155.841027] [13155.841196] task:kworker/3:1 state:I stack:30208 pid:268541 ppid: 2 flags:0x00004000 [13155.842411] Call Trace: [13155.842691] [13155.843209] __schedule+0x72e/0x1570 [13155.843525] ? io_schedule_timeout+0x160/0x160 [13155.844184] ? lock_downgrade+0x130/0x130 [13155.844521] ? wait_for_completion_io_timeout+0x20/0x20 [13155.844886] schedule+0x128/0x220 [13155.845567] worker_thread+0x152/0xf90 [13155.845828] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13155.846581] ? _raw_spin_unlock_irqrestore+0x59/0x70 [13155.846932] ? process_one_work+0x1520/0x1520 [13155.848094] kthread+0x2a7/0x350 [13155.848740] ? kthread_complete_and_exit+0x20/0x20 [13155.849540] ret_from_fork+0x22/0x30 [13155.849844] [13155.850013] task:10_bash_login state:S stack:28048 pid:270013 ppid: 1823 flags:0x00000002 [13155.851052] Call Trace: [13155.851243] [13155.851926] __schedule+0x72e/0x1570 [13155.852184] ? io_schedule_timeout+0x160/0x160 [13155.879687] ? __wake_up_parent+0x60/0x60 [13156.353367] ? kill_orphaned_pgrp+0x2f0/0x2f0 [13156.354239] ? sched_clock_cpu+0x15/0x1b0 [13156.354644] ? find_held_lock+0x33/0x120 [13156.354924] __do_sys_wait4+0xf4/0x100 [13156.355172] ? kernel_wait4+0x1d0/0x1d0 [13156.355466] ? ktime_get_coarse_real_ts64+0x130/0x170 [13156.355780] ? lockdep_hardirqs_on+0x79/0x100 [13156.356444] ? ktime_get_coarse_real_ts64+0x130/0x170 [13156.356759] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13156.357087] do_syscall_64+0x5c/0x90 [13156.357314] ? 
do_syscall_64+0x69/0x90 [13156.357590] ? lockdep_hardirqs_on+0x79/0x100 [13156.358241] ? do_syscall_64+0x69/0x90 [13156.358540] ? asm_exc_page_fault+0x22/0x30 [13156.358792] ? lockdep_hardirqs_on+0x79/0x100 [13156.359497] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13156.359808] RIP: 0033:0x7ff85b5182ea [13156.360031] RSP: 002b:00007ffce325f828 EFLAGS: 00000246 ORIG_RAX: 000000000000003d [13156.360840] RAX: ffffffffffffffda RBX: 0000000000041f14 RCX: 00007ff85b5182ea [13156.361650] RDX: 0000000000000000 RSI: 00007ffce325f850 RDI: 00000000ffffffff [13156.362562] RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000 [13[13156.863342] task:kworker/13:2 state:I stack:29704 pid:270037 ppid: 2 flags:0x00004000 [13156.864361] Workqueue: 0x0 (mm_percpu_wq) [13156.864676] Call Trace: [13156.864835] [13156.865365] __schedule+0x72e/0x1570 [13156.865673] ? io_schedule_timeout+0x160/0x160 [13156.866354] ? lock_downgrade+0x130/0x130 [13156.866718] ? pwq_dec_nr_in_flight+0x230/0x230 [13156.867450] schedule+0x128/0x220 [13156.868048] worker_thread+0x152/0xf90 [13156.868320] ? process_one_work+0x1520/0x1520 [13156.869044] kthread+0x2a7/0x350 [13156.869657] ? kthread_complete_and_exit+0x20/0x20 [13156.870363] ret_from_fork+0x22/0x30 [13156.870749] [13156.870941] task:kworker/17:0 state:I stack:29128 pid:270038 ppid: 2 flags:0x00004000 [13156.871869] Workqueue: 0x0 (mm_percpu_wq) [13156.872127] Call Trace: [13156.872279] [13156.872852] __schedule+0x72e/0x1570 [13156.873116] ? io_schedule_timeout+0x160/0x160 [13156.873836] ? lock_downgrade+0x130/0x130 [13156.874074] ? pwq_dec_nr_in_flight+0x230/0x230 [13156.874862] schedule+0x128/0x220 [13156.875546] worker_thread+0x152/0xf90 [13156.875966] ? process_one_work+0x1520/0x1520 [13156.876653] kthread+0x2a7/0x350 [13156.877245] ? kth[13157.377752] Call Trace: [13157.377936] [13157.378477] __schedule+0x72e/0x1570 [13157.378730] ? io_schedule_timeout+0x160/0x160 [13157.379448] ? lock_downgrade+0x130/0x130 [13157.379738] schedule+0x128/0x220 [13157.380467] do_wait+0x501/0xb10 [13157.381187] kernel_wait4+0xf3/0x1d0 [13157.381463] ? __wake_up_parent+0x60/0x60 [13157.381710] ? kill_orphaned_pgrp+0x2f0/0x2f0 [13157.382375] ? sched_clock_cpu+0x15/0x1b0 [13157.382714] ? find_held_lock+0x33/0x120 [13157.382977] __do_sys_wait4+0xf4/0x100 [13157.383212] ? kernel_wait4+0x1d0/0x1d0 [13157.383489] ? ktime_get_coarse_real_ts64+0x130/0x170 [13157.383810] ? lockdep_hardirqs_on+0x79/0x100 [13157.384552] ? ktime_get_coarse_real_ts64+0x130/0x170 [13157.384884] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13157.385198] do_syscall_64+0x5c/0x90 [13157.385478] ? asm_exc_page_fault+0x22/0x30 [13157.385718] ? lockdep_hardirqs_on+0x79/0x100 [13157.386440] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13157.386751] RIP: 0033:0x7f60063182ea [13157.386976] RSP: 002b:00007ffdc42ea028 EFLAGS: 00000246 ORIG_RAX: 000000000000003d [13157.387832] RAX: ffffffffffffffda RBX: 00000000000420c4 RCX: 00007f60063182ea [13157.388663] RDX: 0000000000000[13157.889168] R13: 00007ffdc42ea0b0 R14: 0000000000000000 R15: 0000000000000000 [13157.890100] [13157.890280] task:20_sysinfo state:R running task stack:26320 pid:270532 ppid:270100 flags:0x0000000a [13157.891453] Call Trace: [13157.891723] [13157.892242] sched_show_task.part.0+0x388/0x396 [13157.892962] sched_show_task.cold+0x8/0xd [13157.893246] ? trace_event_raw_event_sched_switch+0x440/0x440 [13157.894059] ? cpumask_next+0x59/0x80 [13157.894313] ? 
touch_all_softlockup_watchdogs+0x7d/0xd0 [13157.894680] show_state_filter+0x143/0x320 [13157.894948] sysrq_handle_showstate+0xc/0x20 [13157.895859] __handle_sysrq.cold+0x11c/0x37f [13157.896661] write_sysrq_trigger+0x43/0x50 [13157.896906] proc_reg_write+0x1ac/0x280 [13157.897150] vfs_write+0x1c1/0x920 [13157.897772] ksys_write+0xf9/0x1d0 [13157.898372] ? __ia32_sys_read+0xa0/0xa0 [13157.898659] ? ktime_get_coarse_real_ts64+0x130/0x170 [13157.898998] do_syscall_64+0x5c/0x90 [13157.899233] ? lockdep_hardirqs_on+0x79/0x100 [13157.900121] ? do_syscall_64+0x69/0x90 [13157.900435] ? filp_close+0xf9/0x130 [13157.900816] ? do_syscall_64+0x69/0x90 [13157.901061] ? lockdep_hardirqs_on+0x79/0x100 [13157.901759] ? do_syscall_64+0x69/0x90 [13157.902009] ? lockdep_hardirqs_on+0x79/0x100 [13157.902677] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13157.903024] RIP: 0033:0x7fc6ec13ebe7 [13157.903256] Code: RAX: 0000000000000001 [13158.404218] RAX: ffffffffffffffda RBX: 0000000000000002 RCX: 00007fc6ec13ebe7 [13158.405071] RDX: 0000000000000002 RSI: 000055a10b8696e0 RDI: 0000000000000001 [13158.405890] RBP: 000055a10b8696e0 R08: 0000000000000000 R09: 00007fc6ec1b14e0 [13158.406696] R10: 00007fc6ec1b13e0 R11: 0000000000000246 R12: 0000000000000002 [13158.407537] R13: 00007fc6ec1fb780 R14: 0000000000000002 R15: 00007fc6ec1f69e0 [13158.408377] [13158.408552] task:kworker/16:1 state:I stack:28520 pid:270654 ppid: 2 flags:0x00004000 [13158.409387] Workqueue: 0x0 (mm_percpu_wq) [13158.409687] Call Trace: [13158.409853] [13158.410351] __schedule+0x72e/0x1570 [13158.410663] ? io_schedule_timeout+0x160/0x160 [13158.411298] ? lock_downgrade+0x130/0x130 [13158.411559] ? pwq_dec_nr_in_flight+0x230/0x230 [13158.412238] schedule+0x128/0x220 [13158.412870] worker_thread+0x152/0xf90 [13158.413144] ? process_one_work+0x1520/0x1520 [13158.413813] kthread+0x2a7/0x350 [13158.414469] ? kthread_complete_and_exit+0x20/0x20 [13158.415133] ret_from_fork+0x22/0x30 [13158.415470] [13158.415630] task:kworker/22:0 state:I stack:30032 pid:270739 ppid: 2 flags:0x00004000 [13158.416538] Work[13158.907915] ? lock_downgrade+0x130/0x130 [13158.917158] ? pwq_dec_nr_in_flight+0x230/0x230 [13158.917888] schedule+0x128/0x220 [13158.918535] worker_thread+0x152/0xf90 [13158.918817] ? process_one_work+0x1520/0x1520 [13158.919478] kthread+0x2a7/0x350 [13158.920060] ? kthread_complete_and_exit+0x20/0x20 [13158.920749] ret_from_fork+0x22/0x30 [13158.921019] [13158.921176] task:kworker/9:0 state:I stack:30032 pid:270757 ppid: 2 flags:0x00004000 [13158.922105] Workqueue: 0x0 (xfs-buf/dm-0) [13158.922352] Call Trace: [13158.922567] [13158.923077] __schedule+0x72e/0x1570 [13158.923342] ? io_schedule_timeout+0x160/0x160 [13158.924051] ? lock_downgrade+0x130/0x130 [13158.924290] ? pwq_dec_nr_in_flight+0x230/0x230 [13158.925012] schedule+0x128/0x220 [13158.925638] worker_thread+0x152/0xf90 [13158.925909] ? process_one_work+0x1520/0x1520 [13158.926581] kthread+0x2a7/0x350 [13158.927152] ? kthread_complete_and_exit+0x20/0x20 [13158.927839] ret_from_fork+0x22/0x30 [13158.928114] [13158.928270] task:modprobe state:D stack:28320 pid:270765 ppid:203856 flags:0x00000002 [13158.929210] Call Trace: [13158.929377] [13158.929922] __schedule+0x72e/0x157[13159.430456] ? usleep_range_state+0x190/0x190 [13159.431107] ? lock_downgrade+0x130/0x130 [13159.431358] ? mark_held_locks+0xa5/0xf0 [13159.431666] ? lockdep_hardirqs_on_prepare.part.0+0x18c/0x370 [13159.432377] ? _raw_spin_unlock_irq+0x24/0x50 [13159.433083] __wait_for_common+0x37c/0x530 [13159.433320] ? 
usleep_range_state+0x190/0x190 [13159.434055] ? out_of_line_wait_on_bit_timeout+0x170/0x170 [13159.434376] ? lockdep_init_map_type+0x2ff/0x820 [13159.435096] __wait_rcu_gp+0x254/0x390 [13159.435385] synchronize_rcu+0x15f/0x190 [13159.435642] ? synchronize_rcu_expedited+0x360/0x360 [13159.435965] ? rcu_gp_init+0x12c0/0x12c0 [13159.436191] ? wait_for_completion_io_timeout+0x20/0x20 [13159.436531] ? rcu_tasks_pregp_step+0x10/0x10 [13159.437215] ? __wait_for_common+0x9e/0x530 [13159.437494] ? module_bug_cleanup+0x1f/0x120 [13159.438144] free_module+0x2c0/0x750 [13159.438384] __do_sys_delete_module.constprop.0+0x37e/0x4e0 [13159.438734] ? free_module+0x750/0x750 [13159.438970] ? lockdep_hardirqs_on+0x79/0x100 [13159.439658] ? ktime_get_coarse_real_ts64+0x130/0x170 [13159.439997] ? syscall_trace_enter.constprop.0+0x19c/0x280 [13159.440305] do_syscall_64+0x5c/0x90 [13159.440576] ? lockdep_hardirqs_on+0x79/0x100 [x63/0xcd [13159.941138] RIP: 0033:0x7f201123f5ab [13159.941407] RSP: 002b:00007ffd5d26b3c8 EFLAGS: 00000206 ORIG_RAX: 00000000000000b0 [13159.942279] RAX: ffffffffffffffda RBX: 000055dcab61bd70 RCX: 00007f201123f5ab [13159.943104] RDX: 0000000000000000 RSI: 0000000000000800 RDI: 000055dcab61bdd8 [13159.943939] RBP: 000055dcab61bd70 R08: 0000000000000000 R09: 0000000000000000 [13159.944759] R10: 00007f201139eac0 R11: 0000000000000206 R12: 000055dcab61bdd8 [13159.945665] R13: 0000000000000000 R14: 000055dcab61bdd8 R15: 00007ffd5d26d6f8 [13159.946628] [13159.946797] task:kworker/u128:3 state:I stack:30032 pid:270792 ppid: 2 flags:0x00004000 [13159.947743] Workqueue: 0x0 (rescan_0_hpsa) [13159.948017] Call Trace: [13159.948169] [13159.948694] __schedule+0x72e/0x1570 [13159.948946] ? io_schedule_timeout+0x160/0x160 [13159.949613] ? lock_downgrade+0x130/0x130 [13159.949857] ? pwq_dec_nr_in_flight+0x230/0x230 [13159.950592] schedule+0x128/0x220 [13159.951188] worker_thread+0x152/0xf90 [13159.951485] ? process_one_work+0x1520/0x1520 [13160.442624] task:kworker/1:2 state:I stack:30032 pid:270807 ppid: 2 flags:0x00004000 [13160.452876] Workqueue: 0x0 (events) [13160.453125] Call Trace: [13160.453298] [13160.453817] __schedule+0x72e/0x1570 [13160.454089] ? io_schedule_timeout+0x160/0x160 [13160.454756] ? lock_downgrade+0x130/0x130 [13160.455012] ? pwq_dec_nr_in_flight+0x230/0x230 [13160.455725] schedule+0x128/0x220 [13160.456321] worker_thread+0x152/0xf90 [13160.456625] ? process_one_work+0x1520/0x1520 [13160.457287] kthread+0x2a7/0x350 [13160.457881] ? kthread_complete_and_exit+0x20/0x20 [13160.458590] ret_from_fork+0x22/0x30 [13160.458882] [13160.459038] task:sleep state:S stack:28240 pid:270811 ppid: 1613 flags:0x00000002 [13160.459973] Call Trace: [13160.460145] [13160.460677] __schedule+0x72e/0x1570 [13160.460932] ? io_schedule_timeout+0x160/0x160 [13160.461602] ? lock_downgrade+0x130/0x130 [13160.461879] schedule+0x128/0x220 [13160.462505] do_nanosleep+0x212/0x5c0 [13160.462758] ? schedule_timeout_idle+0x90/0x90 [13160.463464] ? memset+0x20/0x50 [13160.464089] ? __hrtimer_init+0x3a/0x1c0 [13160.464366] hrtimer_nanossys_gettimeofday+0x190/0x190 [13160.964909] common_nsleep+0x79/0xc0 [13160.965155] __x64_sys_clock_nanosleep+0x251/0x3a0 [13160.965849] ? ktime_get_coarse_real_ts64+0x130/0x170 [13160.966165] ? __ia32_sys_clock_adjtime+0x70/0x70 [13160.966854] ? ktime_get_coarse_real_ts64+0x130/0x170 [13160.967193] do_syscall_64+0x5c/0x90 [13160.967516] ? asm_exc_page_fault+0x22/0x30 [13160.967756] ? 
lockdep_hardirqs_on+0x79/0x100 [13160.968399] entry_SYSCALL_64_after_hwframe+0x63/0xcd [13160.968765] RIP: 0033:0x7fb6f9b1395a [13160.968999] RSP: 002b:00007ffe43b50248 EFLAGS: 00000246 ORIG_RAX: 00000000000000e6 [13160.969849] RAX: ffffffffffffffda RBX: 00007fb6f9dee6c0 RCX: 00007fb6f9b1395a [13160.970726] RDX: 00007ffe43b502a0 RSI: 0000000000000000 RDI: 0000000000000000 [13160.971589] RBP: 0000000000000005 R08: 0000000000000000 R09: 0000000000000000 [13160.972398] R10: 00007ffe43b50290 R11: 0000000000000246 R12: 00007ffe43b50290 [13160.973258] R13: 00007ffe43b502a0 R14: 00007ffe43b50418 R15: 000055d678ed5040 [13160.974153] [13160.974346] Sched Debug Version: v0.11, 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13160.974863] ktime : 13154289.510978 [13160.975248] sched_clk ched_clock_stable() : 1 [13161.475851] [13161.476346] sysctl_sched [13161.476540] .sysctl_sched_latency : 24.000000 [13161.477242] .sysctl_sched_min_granularity : 3.000000 [13161.478000] .sysctl_sched_idle_min_granularity : 0.750000 [13161.478751] .sysctl_sched_wakeup_granularity : 4.000000 [13161.479581] .sysctl_sched_child_runs_first : 0 [13161.479926] .sysctl_sched_features : 58611259 [13161.480662] .sysctl_sched_tunable_scaling : 1 (logarithmic) [13161.481035] [13161.481580] cpu#0, 2095.096 MHz [13161.482155] .nr_running : 0 [13161.482837] .nr_switches : 5201731 [13161.483180] .nr_uninterruptible : 1683 [13161.483841] .next_balance : 4307.818633 [13161.484161] .curr->pid : 0 [13161.484871] .clock : 13161484.470474 [13161.485697] .clock_task : 12719872.818578 [13161.486482] .avg_idle : 1000000 [13161.486821] .max_idle_balance_cost : 500000 [13161.487115] [13161.487889] rt_rq[0]: [13161.488046] .rt_nr_running : 0 [13161.488721] .rt_nr_migratory : 0 [13161.5161[13161.989632] dl_rq[0]: [13161.989797] .dl_nr_running : 0 [13161.990471] .dl_nr_migratory : 0 [13161.991137] .dl_bw->bw : 996147 [13161.991420] .dl_bw->total_bw : 0 [13161.992101] [13161.992614] runnable tasks: [13161.992792] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13161.993352] ------------------------------------------------------------------------------------------------------------- [13161.993970] I rcu_gp 3 10.321550 2 100 0.000000 0.080838 0.000000 0.000000 0 0 / [13161.995403] I slub_flushwq 5 13.850736 2 100 0.000000 0.027148 0.000000 0.000000 0 0 / [13161.996622] I netns 6 15.864752 2 100 0.000000 0.027425 0.000000 0.000000 0 0 / [13161.997754] I kworker/0:0H 8 19.885026 4 100 0.000000 0.148498 0.000000 0.000000 0 0 / [13161.998889] I kworker/0:1H 10 4799874.051139 4 / [13162.499828] I rcu_tasks_kthre 13 1273.695988 35 120 0.000000 2.431596 0.000000 0.000000 0 0 / [13162.500998] I rcu_tasks_rude_ 14 1064.408881 8 120 0.000000 0.700980 0.000000 0.000000 0 0 / [13162.502166] I rcu_tasks_trace 15 46.484284 6 120 0.000000 1.127146 0.000000 0.000000 0 0 / [13162.503328] S ksoftirqd/0 17 4802903.851833 114761 120 0.000000 9812.541855 0.000000 0.000000 0 0 / [13162.504560] S migration/0 19 0.000000 4866 0 0.000000 184.624564 0.000000 0.000000 0 0 / [13162.505670] S cpuhp/0 20 23432.596774 28 120 0.000000 16.614517 0.000000 0.000000 0 0 / [13162.506832] I kworker/u132:0 362 5872.344478 2 100 0.000000 0.061534 0.000000 0.000000 0 0 / [13162.507951] I xfs_mru_cache 748 13064.229245 2 100 0.000000 0.150959 0.000000 2 100 0.000000 0.068173 0.000000 0.000000 0 0 / [13163.009642] I xfs-reclaim/dm- 751 13100.480869 2 100 0.000000 0.147835 0.000000 0.000000 0 0 / [13163.010780] I xfs-blockgc/dm- 752 13112.596627 2 100 0.000000 0.125934 0.000000 
0.000000 0 0 / [13163.011924] I xfs-inodegc/dm- 753 13124.652199 2 100 0.000000 0.065742 0.000000 0.000000 0 0 / [13163.013084] I xfs-log/dm-0 754 13136.745131 2 100 0.000000 0.110880 0.000000 0.000000 0 0 / [13163.014245] I xfs-cil/dm-0 755 13148.900772 2 100 0.000000 0.176330 0.000000 0.000000 0 0 / [13163.015417] S gdbus 1122 2705865.801245 861 120 0.000000 390.904760 0.000000 0.000000 0 1122 /system.slice [13163.016690] S NFSv4 callback 2593 40457.741167 2 120 0.000000 0.504436 0.000000 0.000000 0 0 / [13163.017824] S rngd 57661 4609.541870 45726 120 0.000000 1570.711213 0.000000 0.000000 051.992351 9 120 0.000000 0.787287 0.000000 0.000000 0 0 / [13163.519633] [13163.520121] cpu#1, 2095.096 MHz [13163.520693] .nr_running : 0 [13163.521339] .nr_switches : 2535895 [13163.521689] .nr_uninterruptible : 134 [13163.522352] .next_balance : 4307.820676 [13163.522702] .curr->pid : 0 [13163.523368] .clock : 13163522.471048 [13163.524136] .clock_task : 12922340.657788 [13163.524867] .avg_idle : 1000000 [13163.525188] .max_idle_balance_cost : 500000 [13163.525512] [13163.526024] cfs_rq[1]:/ [13163.526353] .exec_clock : 0.000000 [13163.526775] .MIN_vruntime : 0.000001 [13163.527102] .min_vruntime : 4925969.588777 [13163.527824] .max_vruntime : 0.000001 [13163.528151] .spread : 0.000000 [13163.528440] .spread0 : 122981.738763 [13163.529220] .nr_spread_over : 0 [13163.529880] .nr_running : 0 [13163.530548] .h_nr_running : 0 [13163.531174] .idle_nr_running : 0 [13163.558619] : 0 [13164.032556] .util_avg : 0 [13164.033187] .util_est_enqueued : 0 [13164.033853] .removed.load_avg : 0 [13164.034566] .removed.util_avg : 0 [13164.035198] .removed.runnable_avg : 0 [13164.035852] .tg_load_avg_contrib : 0 [13164.036507] .tg_load_avg : 0 [13164.037172] .throttled : 0 [13164.037827] .throttle_count : 0 [13164.038511] [13164.039010] rt_rq[1]: [13164.039145] .rt_nr_running : 0 [13164.039803] .rt_nr_migratory : 0 [13164.040503] .rt_throttled : 0 [13164.041168] .rt_time : 0.000000 [13164.041500] .rt_runtime : 950.000000 [13164.041842] [13164.042316] dl_rq[1]: [13164.042503] .dl_nr_running : 0 [13164.043166] .dl_nr_migratory : 0 [13164.043822] .dl_bw->bw : 996147 [13164.044159] .dl_bw->total_bw : 0 [13164.044853] [13164.045338] runnable tasks: [13164.045540] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13164.046112] ---------------------------------------------------------1 22 0.000000 4503 0 0.000000 3924.037082 0.000000 0.000000 0 0 / [13164.547639] S ksoftirqd/1 23 4925527.461508 55835 120 0.000000 6951.757555 0.000000 0.000000 0 0 / [13164.548773] I kworker/1:0H 25 931.426957 4 100 0.000000 0.244547 0.000000 0.000000 0 0 / [13164.549920] D khugepaged 189 4925957.407941 3341 139 0.000000 7242.956240 0.000000 0.000000 0 0 / [13164.551057] I kworker/1:1H 232 4923049.021687 34863 100 0.000000 4572.229685 0.000000 0.000000 0 0 / [13164.552219] S scsi_eh_1 634 6526.034701 31 120 0.000000 14.309359 0.000000 0.000000 0 0 / [13164.553383] I kdmflush/253:1 728 7242.801417 2 100 0.000000 0.089702 0.000000 0.000000 0 0 / [13164.554592] I xfs-conv/dm-2 968 23633.264904 2 100 0.000000 0.100290 0.000000 0.000000 0 0 / [13164.555736] I xfs-blockgc/dm- 970 23 /system.slice [13165.056293] S chronyd 201490 2174812.545795 201 120 0.000000 177.814716 0.000000 0.000000 0 0 /system.slice [13165.057529] I kworker/1:1 238691 4925957.556812 3984 120 0.000000 1175.824377 0.000000 0.000000 0 0 / [13165.058661] D kworker/1:0 263892 4925947.720133 1294 120 0.000000 108.664550 0.000000 0.000000 0 0 / [13165.059805] I 
kworker/u129:2 266923 4925957.845722 154 120 0.000000 53.921899 0.000000 0.000000 0 0 / [13165.060923] I kworker/1:2 270807 4925969.593908 18 120 0.000000 0.185973 0.000000 0.000000 0 0 / [13165.062086] [13165.062605] cpu#2, 2095.096 MHz [13165.063196] .nr_running : 5 [13165.063874] .nr_switches : 2409745 [13165.064206] .nr_uninterruptible : 144 [13165.064900] .next_balance : 4307.712089 [13165.065226] .curr->pid : 270532 [13165.065540] .clock : 13165065.525507 [13165.066272] .clock_task : 13003804.070643 [e/session-2.scope [13165.567378] .exec_clock : 0.000000 [13165.567743] .MIN_vruntime : 0.000001 [13165.568064] .min_vruntime : 341598.477469 [13165.568783] .max_vruntime : 0.000001 [13165.569115] .spread : 0.000000 [13165.569404] .spread0 : -4461389.372545 [13165.570148] .nr_spread_over : 0 [13165.571142] .nr_running : 1 [13165.571838] .h_nr_running : 1 [13165.572511] .idle_nr_running : 0 [13165.573189] .idle_h_nr_running : 0 [13165.573854] .load : 1048576 [13165.574162] .load_avg : 1024 [13165.574855] .runnable_avg : 1024 [13165.575536] .util_avg : 1024 [13165.576202] .util_est_enqueued : 0 [13165.576847] .removed.load_avg : 0 [13165.577528] .removed.util_avg : 0 [13165.578185] .removed.runnable_avg : 0 [13165.578854] .tg_load_avg_contrib : 1022 [13165.579530] .tg_load_avg : 1022 [13165.580194] .throttled : 0 [13165.580870] .throttle_count : 0 [13165.581548] .se->exec_start t : 1048576 [13166.082154] .se->avg.load_avg : 1023 [13166.082854] .se->avg.util_avg : 1023 [13166.083557] .se->avg.runnable_avg : 1024 [13166.084219] [13166.084733] cfs_rq[2]:/user.slice/user-0.slice [13166.085378] .exec_clock : 0.000000 [13166.085734] .MIN_vruntime : 0.000001 [13166.086071] .min_vruntime : 477237.089620 [13166.086785] .max_vruntime : 0.000001 [13166.087120] .spread : 0.000000 [13166.087409] .spread0 : -4325750.760394 [13166.088152] .nr_spread_over : 0 [13166.088833] .nr_running : 1 [13166.089521] .h_nr_running : 1 [13166.090202] .idle_nr_running : 0 [13166.090866] .idle_h_nr_running : 0 [13166.091543] .load : 1048576 [13166.091886] .load_avg : 1024 [13166.092547] .runnable_avg : 1024 [13166.093205] .util_avg : 1024 [13166.093869] .util_est_enqueued [13166.585225] .tg_load_avg_contrib : 1023 [13166.594940] .tg_load_avg : 1023 [13166.595619] .throttled : 0 [13166.596243] .throttle_count : 0 [13166.596901] .se->exec_start : 13005333.522272 [13166.597617] .se->vruntime : 476310.395543 [13166.598302] .se->sum_exec_runtime : 315940.587088 [13166.599014] .se->load.weight : 1048576 [13166.599315] .se->avg.load_avg : 1023 [13166.599974] .se->avg.util_avg : 1024 [13166.600625] .se->avg.runnable_avg : 1024 [13166.601262] [13166.601767] cfs_rq[2]:/user.slice [13166.602328] .exec_clock : 0.000000 [13166.602673] .MIN_vruntime : 0.000001 [13166.603002] .min_vruntime : 476315.317077 [13166.603721] .max_vruntime : 0.000001 [13166.604054] .spread : 0.000000 [13166.604367] .spread0 : -4326672.532937 [13166.605144] .nr_spread_over : 0 [13166.605837] .nr_running .load : 1048576 [13167.106563] .load_avg : 1024 [13167.107239] .runnable_avg : 1024 [13167.107897] .util_avg : 1024 [13167.108582] .util_est_enqueued : 0 [13167.109237] .removed.load_avg : 0 [13167.109922] .removed.util_avg : 0 [13167.110594] .removed.runnable_avg : 0 [13167.111230] .tg_load_avg_contrib : 1009 [13167.111895] .tg_load_avg : 1009 [13167.112554] .throttled : 0 [13167.113196] .throttle_count : 0 [13167.113864] .se->exec_start : 13005849.916957 [13167.114687] .se->vruntime : 5094426.944673 [13167.115403] .se->sum_exec_runtime : 316457.630430 
[13167.116169] .se->load.weight : 1048576 [13167.116531] .se->avg.load_avg : 1023 [13167.117161] .se->avg.util_avg : 1024 [13167.117838] .se->avg.runnable_avg : 1024 [13167.118530] [13167.119041] cfs_rq[2]:/ [13167.119222] .exec_clock : 0.000000 [13167.119550] .MIN_vrun .spread : 0.000000 [13167.620193] .spread0 : 174946.701866 [13167.620946] .nr_spread_over : 0 [13167.621628] .nr_running : 4 [13167.622255] .h_nr_running : 4 [13167.622913] .idle_nr_running : 0 [13167.623582] .idle_h_nr_running : 0 [13167.624259] .load : 94036992 [13167.624577] .load_avg : 91012 [13167.624935] .runnable_avg : 4096 [13167.625593] .util_avg : 1024 [13167.626263] .util_est_enqueued : 124 [13167.626925] .removed.load_avg : 0 [13167.627599] .removed.util_avg : 0 [13167.628241] .removed.runnable_avg : 0 [13167.628893] .tg_load_avg_contrib : 0 [13167.629569] .tg_load_avg : 0 [13167.630232] .throttled : 0 [13167.630887] .throttle_count : 0 [13167.631566] [13167.632075] rt_rq[2]: [13167.632219] .rt_nr_running : 0 [13167.632879] .rt_nr_migratory : 0 [13167.633549] .rt_throttled : 0 [13167.634219] .r[13168.034569] .dl_nr_running : 0 [13168.035291] .dl_nr_migratory : 0 [13168.036009] .dl_bw->bw : 996147 [13168.036332] .dl_bw->total_bw : 0 [13168.037039] [13168.037593] runnable tasks: [13168.037778] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13168.038347] ------------------------------------------------------------------------------------------------------------- [13168.038973] S cpuhp/2 26 24753.400129 25 120 0.000000 7.518406 0.000000 0.000000 0 0 / [13168.040131] R migration/2 27 0.000000 3971 0 0.000000 3886.661491 0.000000 0.000000 0 0 / [13168.041412] R ksoftirqd/2 28 4977922.551880 48327 120 0.000000 4851.367923 0.000000 0.000000 0 0 / [13168.042587] I kworker/2:0H 30 436.751266 4 100 0.000000 0.143799 0.000000 0.000000 0 0 / [13168.043760] S kauditd 182 4975453.143713 721 120 0.000000 108.185472 0.000000 0.000000 0 0 / [13168.045043] R 0.000000 0 0 / [13168.546199] I xfs-reclaim/dm- 969 24083.007937 2 100 0.000000 0.097456 0.000000 0.000000 0 0 / [13168.547392] I xfs-inodegc/dm- 971 24095.092353 2 100 0.000000 0.107005 0.000000 0.000000 0 0 / [13168.548580] I xfs-log/dm-2 972 24107.152548 2 100 0.000000 0.070459 0.000000 0.000000 0 0 / [13168.549758] S systemd 1132 2683.085552 3070 120 0.000000 31651.832612 0.000000 0.000000 0 0 /user.slice/user-0.slice/user@0.service/init.scope [13168.551166] I kworker/2:1 218521 4957319.543179 3275 120 0.000000 834.571129 0.000000 0.000000 0 0 / [13168.552550] I kworker/2:2 259532 4957307.188571 5 120 0.000000 0.425910 0.000000 0.000000 0 0 / [13168.553764] R kworker/2:0 267344 4977922.551880 289 120 0.000000 518.891043 0.000000 0.000000 0 0 / [13168.554919] >R 20_sysinfo 270532 344582 .nr_running : 0 [13169.056181] .nr_switches : 2810013 [13169.056488] .nr_uninterruptible : 4294966073 [13169.056868] .next_balance : 4307.826208 [13169.057150] .curr->pid : 0 [13169.057834] .clock : 13169057.517523 [13169.058727] .clock_task : 13022957.623642 [13169.059584] .avg_idle : 1000000 [13169.059921] .max_idle_balance_cost : 500000 [13169.060212] [13169.060757] rt_rq[3]: [13169.060999] .rt_nr_running : 0 [13169.061681] .rt_nr_migratory : 0 [13169.062346] .rt_throttled : 0 [13169.063219] .rt_time : 0.000000 [13169.063556] .rt_runtime : 950.000000 [13169.064050] [13169.064598] dl_rq[3]: [13169.064789] .dl_nr_running : 0 [13169.065437] .dl_nr_migratory : 0 [13169.066161] .dl_bw->bw : 996147 [13169.066473] .dl_bw->total_bw : 0 [13169.067164] [13169.067754] runnable 
tasks: [13169.068087] S S cpuhp/3 31 30898.668796 24 120 0.000000 8.440259 0.000000 0.000000 0 0 / [13169.569727] S migration/3 32 119.104442 3914 0 0.000000 3887.426429 0.000000 0.000000 0 0 / [13169.571233] S ksoftirqd/3 33 4624433.825625 29795 120 0.000000 3493.875381 0.000000 0.000000 0 0 / [13169.572602] I kworker/3:0H 35 1192.012492 4 100 0.000000 0.152290 0.000000 0.000000 0 0 / [13169.573835] I kworker/3:1H 226 4624433.825625 13776 100 0.000000 2101.370338 0.000000 0.000000 0 0 / [13169.575064] I kdmflush/253:0 721 13087.971447 2 100 0.000000 0.169956 0.000000 0.000000 0 0 / [13169.576257] I xfsalloc 747 13187.207544 2 100 0.000000 0.151808 0.000000 0.000000 0 0 / [13169.577633] S irqbalance 1066 1563307.448416 1481 120 0.000000 28889.641036 0.000000 0.000000 0 0 /system.slice [13169.578405] I kworker/3:3 233152 4610695.072390 1580 120 0.0I kworker/3:1 268541 4610706.991562 2 120 0.000000 0.155047 0.000000 0.000000 0 0 / [13170.080267] [13170.081003] cpu#4, 2095.096 MHz [13170.081734] .nr_running : 0 [13170.082393] .nr_switches : 2702494 [13170.082757] .nr_uninterruptible : 4294966319 [13170.083060] .next_balance : 4307.827232 [13170.083344] .curr->pid : 0 [13170.084059] .clock : 13170083.549004 [13170.084848] .clock_task : 13054491.103200 [13170.085781] .avg_idle : 1000000 [13170.086176] .max_idle_balance_cost : 500000 [13170.086560] [13170.087157] rt_rq[4]: [13170.087314] .rt_nr_running : 0 [13170.087979] .rt_nr_migratory : 0 [13170.088750] .rt_throttled : 0 [13170.089839] .rt_time : 0.000000 [13170.090156] .rt_runtime : 950.000000 [13170.090443] [13170.091025] dl_rq[4]: [13170.091194] .dl_nr_running : 0 [13170.091882] .dl_nr_migratory : 0 [13170.092615] .dl_bw->bw : 996147 [13170.092949] .dl_bw->total_bw : 0 [13170.093813] [13170.148262]------------------------------------------------------------ [13170.594564] S cpuhp/4 36 24175.275786 25 120 0.000000 7.120675 0.000000 0.000000 0 0 / [13170.595839] S migration/4 37 150.181208 3823 0 0.000000 3885.588270 0.000000 0.000000 0 0 / [13170.596995] S ksoftirqd/4 38 3951549.134509 13115 120 0.000000 2278.219603 0.000000 0.000000 0 0 / [13170.598356] I kworker/4:0H 40 5421.512397 4 100 0.000000 0.478943 0.000000 0.000000 0 0 / [13170.599606] S kdevtmpfs 167 3930175.918949 687003 120 0.000000 132825.021202 0.000000 0.000000 0 0 / [13170.600808] I kworker/4:1H 639 3952108.513733 12008 100 0.000000 1397.289918 0.000000 0.000000 0 0 / [13170.601998] S auditd 1012 1313159.742501 85 116 0.000000 29.048261 0.000000 0.000000 0 0 /system.slice [13170.602813] S dbus-broker-lau 1093 1324272.054669 164 120 0.000000 332.449407 0.000000 0.000000 0 0 /system.sl 0.000000 0.152595 0.000000 0.000000 0 0 /system.slice [13171.104351] S gssproxy 1136 66.125106 2 120 0.000000 0.119026 0.000000 0.000000 0 0 /system.slice [13171.105169] S gssproxy 1137 78.243342 2 120 0.000000 0.118243 0.000000 0.000000 0 0 /system.slice [13171.105995] S gssproxy 1138 90.334671 1 120 0.000000 0.091336 0.000000 0.000000 0 0 /system.slice [13171.106939] S pool-restraintd 1497 1324313.461814 1758 120 0.000000 202.366939 0.000000 0.000000 0 0 /system.slice [13171.107794] I kworker/4:1 259648 3925926.530965 345 120 0.000000 59.019536 0.000000 0.000000 0 0 / [13171.108956] I tls-strp 260723 3911212.871503 2 100 0.000000 0.174517 0.000000 0.000000 0 0 / [13171.110117] I kworker/4:0 265039 3953106.848336 318 120 0.000000 512.280764 0.000000 0.000000 0 0 / [13171.111686] I kworker/u128:3 270792 3953107.138172 8 120 0.000000 [13171.603273] .nr_switches : 2449801 [13171.612787] 
.nr_uninterruptible : 4294966810 [13171.613098] .next_balance : 4307.828764 [13171.613385] .curr->pid : 0 [13171.614044] .clock : 13171613.542556 [13171.614811] .clock_task : 13065558.173503 [13171.615580] .avg_idle : 1000000 [13171.615913] .max_idle_balance_cost : 500000 [13171.616201] [13171.616719] rt_rq[5]: [13171.616877] .rt_nr_running : 0 [13171.617588] .rt_nr_migratory : 0 [13171.618250] .rt_throttled : 0 [13171.618975] .rt_time : 0.000000 [13171.619283] .rt_runtime : 950.000000 [13171.619593] [13171.620091] dl_rq[5]: [13171.620247] .dl_nr_running : 0 [13171.620906] .dl_nr_migratory : 0 [13171.621579] .dl_bw->bw : 996147 [13171.621921] .dl_bw->total_bw : 0 [13171.622591] [13171.623098] runnable tasks: [13171.623254] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13171.623871] --------------------------------------------------------------------2.257998 3888 0 0.000000 3898.237042 0.000000 0.000000 0 0 / [13172.125582] S ksoftirqd/5 43 3667206.189996 10205 120 0.000000 2088.960924 0.000000 0.000000 0 0 / [13172.126931] I kworker/5:0H 45 390.709177 4 100 0.000000 0.175530 0.000000 0.000000 0 0 / [13172.128072] I kworker/5:1H 227 3667085.034860 8070 100 0.000000 881.884373 0.000000 0.000000 0 0 / [13172.129359] S restraintd 1823 112969.323436 12161 120 0.000000 17669.241245 0.000000 0.000000 0 1823 /user.slice/user-0.slice/session-2.scope [13172.130337] I kworker/5:1 221550 3658839.962640 2267 120 0.000000 3623.375137 0.000000 0.000000 0 0 / [13172.131921] I kworker/5:2 265040 3668967.150823 336 120 0.000000 24.488400 0.000000 0.000000 0 0 / [13172.133131] [13172.133668] cpu#6, 2095.096 MHz [13172.134338] .nr_running : 4 [13172.135031] .nr_switches : 1320796 [13172.135347] .nr_uninterruptible : 4294966154 [13172.135663] .next_balance : 4307.829291 [13172.135993] .curr->pid : 47 [13172.136805] .clock : 13172136.52: 500000 [13172.637592] [13172.638089] cfs_rq[6]:/ [13172.638249] .exec_clock : 0.000000 [13172.638587] .MIN_vruntime : 5064361.621975 [13172.639332] .min_vruntime : 5064373.621975 [13172.640266] .max_vruntime : 5064361.621975 [13172.641027] .spread : 0.000000 [13172.641328] .spread0 : 261385.771961 [13172.642059] .nr_spread_over : 0 [13172.642732] .nr_running : 3 [13172.643423] .h_nr_running : 3 [13172.644120] .idle_nr_running : 0 [13172.644820] .idle_h_nr_running : 0 [13172.645905] .load : 3145728 [13172.646213] .load_avg : 3071 [13172.646881] .runnable_avg : 3071 [13172.647619] .util_avg : 0 [13172.648265] .util_est_enqueued : 796 [13172.648947] .removed.load_avg : 0 [13172.649670] .removed.util_avg : 0 [13172.650582] .removed.runnable_avg : 0 [13172.651381] .tg_load_avg_contrib : 0 [13172.652061] .tg_load_avg : 0 [13[13173.143406] .rt_nr_running : 0 [13173.153369] .rt_nr_migratory : 0 [13173.154363] .rt_throttled : 0 [13173.155213] .rt_time : 0.000000 [13173.155518] .rt_runtime : 950.000000 [13173.155913] [13173.156413] dl_rq[6]: [13173.156594] .dl_nr_running : 0 [13173.157379] .dl_nr_migratory : 0 [13173.158177] .dl_bw->bw : 996147 [13173.158484] .dl_bw->total_bw : 0 [13173.159172] [13173.159709] runnable tasks: [13173.159874] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13173.160447] ------------------------------------------------------------------------------------------------------------- [13173.161066] S cpuhp/6 46 30147.237258 28 120 0.000000 18.227855 0.000000 0.000000 1 0 / [13173.162633] >R migration/6 47 228.333806 3837 0 0.000000 3967.097662 0.000000 0.000000 1 0 / [13173.163837] S ksoftirqd/6 48 5059013.828988 25538 120 
0.000000 3420.723470 0.000000 0.000000 1 0 / [13173.165025] I kworker/6:0H 50 / [13173.666029] S watchdogd 203 956.314705 2 49 0.000000 0.000000 0.000000 0.000000 1 0 / [13173.667233] I kworker/6:1H 204 5062953.106913 30151 100 0.000000 3725.493121 0.000000 0.000000 1 0 / [13173.668464] I scsi_tmf_1 635 8490.248330 2 100 0.000000 0.184306 0.000000 0.000000 1 0 / [13173.669582] S xfsaild/sda1 945 77638.907472 14 120 0.000000 3.103518 0.000000 0.000000 1 0 / [13173.670709] S gmain 1496 549.702604 2 120 0.000000 0.663566 0.000000 0.000000 1 0 /system.slice [13173.671471] R kworker/6:3 242864 5064361.621975 1327 120 0.000000 152.351609 0.000000 0.000000 1 0 / [13173.672682] I kworker/6:0 259508 5048208.595137 376 120 0.000000 27.824264 0.000000 0.000000 1 0 / [13173.673881] R kworker/6:1 266086 5064361.621975 3 120 0.000000 0.247100 0.000000 0.000000 1 0 / [13173.675067] R kworker/6:2 266087 5064361.621975 2 120 0.000000 0.143171 0.000000 0.000000 1 0 / [13173.676453] S 10_bash_login 270013 246213.938804 43 120 0.00: 0 [13174.177758] .nr_switches : 1149873 [13174.178054] .nr_uninterruptible : 1990 [13174.178715] .next_balance : 4307.831331 [13174.179022] .curr->pid : 0 [13174.179951] .clock : 13174179.565321 [13174.180695] .clock_task : 13042818.080393 [13174.181524] .avg_idle : 1000000 [13174.181891] .max_idle_balance_cost : 500000 [13174.182240] [13174.182762] cfs_rq[7]:/system.slice [13174.183380] .exec_clock : 0.000000 [13174.183714] .MIN_vruntime : 0.000001 [13174.184048] .min_vruntime : 1476416.415370 [13174.184868] .max_vruntime : 0.000001 [13174.185177] .spread : 0.000000 [13174.185467] .spread0 : -3326571.434644 [13174.186312] .nr_spread_over : 0 [13174.187089] .nr_running : 0 [13174.187760] .h_nr_running : 0 [13174.188398] .idle_nr_running : 0 [13174.189074] .idle_h_nr_running : 0 [13174.189761] .load : 0 [13174.217561]ued : 0 [13174.691280] .removed.load_avg : 0 [13174.691972] .removed.util_avg : 0 [13174.692767] .removed.runnable_avg : 0 [13174.693590] .tg_load_avg_contrib : 0 [13174.694317] .tg_load_avg : 18 [13174.695038] .throttled : 0 [13174.695904] .throttle_count : 0 [13174.696647] .se->exec_start : 13042188.372562 [13174.697593] .se->vruntime : 4077331.642281 [13174.698358] .se->sum_exec_runtime : 287089.790548 [13174.699169] .se->load.weight : 82037 [13174.699477] .se->avg.load_avg : 0 [13174.700173] .se->avg.util_avg : 0 [13174.700882] .se->avg.runnable_avg : 0 [13174.701776] [13174.702285] cfs_rq[7]:/ [13174.702436] .exec_clock : 0.000000 [13174.702775] .MIN_vruntime : 0.000001 [13174.703083] .min_vruntime : 4077331.642281 [13174.703848] .max_vruntime : 0.000001 [13174.704156] .spread : 0.000000 [13174.7316 .h_nr_running : 0 [13175.205413] .idle_nr_running : 0 [13175.206166] .idle_h_nr_running : 0 [13175.206884] .load : 0 [13175.207684] .load_avg : 0 [13175.208440] .runnable_avg : 0 [13175.209170] .util_avg : 0 [13175.209887] .util_est_enqueued : 0 [13175.210593] .removed.load_avg : 0 [13175.211435] .removed.util_avg : 0 [13175.212127] .removed.runnable_avg : 0 [13175.212851] .tg_load_avg_contrib : 0 [13175.213590] .tg_load_avg : 0 [13175.214595] .throttled : 0 [13175.215355] .throttle_count : 0 [13175.216086] [13175.216619] rt_rq[7]: [13175.216804] .rt_nr_running : 0 [13175.217452] .rt_nr_migratory : 0 [13175.218327] .rt_throttled : 0 [13175.219117] .rt_time : 0.000000 [13175.219430] .rt_runtime : 950.000000 [13175.219751] [13175.220251] dl_rq[7]: [13175.220408] .dl_nr_running : 0 [13175.221080] .dl_nr_migratory : 0 [13175.221882] .dl_bw->bw : 996147 [13175.222187] 
.dl_bw->total_bw : 0 [13175.222868] [13175.223343] runnable tasks: [13175.223490] S S cpuhp/7 58 27860.232018 24 120 0.000000 9.640767 0.000000 0.000000 1 0 / [13175.724884] S migration/7 59 0.000000 3768 0 0.000000 3977.528753 0.000000 0.000000 1 0 / [13175.726045] S ksoftirqd/7 60 4076875.132900 26554 120 0.000000 2806.078869 0.000000 0.000000 1 0 / [13175.727257] I kworker/7:0H 62 8661.496877 4 100 0.000000 0.263856 0.000000 0.000000 1 0 / [13175.728498] S oom_reaper 184 785.396304 2 120 0.000000 0.000000 0.000000 0.000000 1 0 / [13175.729646] I writeback 185 785.427753 2 100 0.000000 0.031449 0.000000 0.000000 1 0 / [13175.730902] S ksmd 188 1609267.275721 110 125 0.000000 19.498820 0.000000 0.000000 1 0 / [13175.732089] I cryptd 190 846.851815 2 100 0.000000 0.190813 0.000000 0.000000 1 0 / [13175.733287] I kintegrityd 191 858.933221 2 100 0.000000 0.081412 0.000000 0.000000 1 0 / [13175.734500] I kblockd 192 871.217487 2 100 0.000000 I tpm_dev_wq 200 895.976329 2 100 0.000000 0.313980 0.000000 0.000000 1 0 / [13176.236273] I md 201 908.318430 2 100 0.000000 0.342107 0.000000 0.000000 1 0 / [13176.237400] I edac-poller 202 920.318424 2 100 0.000000 0.000000 0.000000 0.000000 1 0 / [13176.238582] S kswapd1 206 1471.987531 3 120 0.000000 0.122319 0.000000 0.000000 1 0 / [13176.239753] I kworker/u131:0 360 1790.177511 2 100 0.000000 0.099235 0.000000 0.000000 1 0 / [13176.240871] I kworker/u133:0 363 1814.289015 2 100 0.000000 0.094473 0.000000 0.000000 1 0 / [13176.242004] I scsi_tmf_0 629 5879.956601 2 100 0.000000 0.106279 0.000000 0.000000 1 0 / [13176.243133] I kworker/7:1H 653 4076340.547890 28866 100 0.000000 3574.469457 0.000000 0.000000 1 0 / [ 0.000000 0.110789 0.000000 0.000000 1 0 / [13176.744968] S runtest.sh 1613 1476416.415370 5783 120 0.000000 25703.663796 0.000000 0.000000 1 0 /system.slice [13176.745756] I kworker/7:2 199954 4062577.164955 5938 120 0.000000 2505.340511 0.000000 0.000000 1 0 / [13176.746970] I kworker/7:1 260477 4077319.681751 634 120 0.000000 70.322862 0.000000 0.000000 1 0 / [13176.748128] I kworker/7:0 267438 4062589.050430 2 120 0.000000 0.165241 0.000000 0.000000 1 0 / [13176.749287] S run_plugins 270100 190398.592812 17 120 0.000000 418.886774 0.000000 0.000000 1 0 /user.slice/user-0.slice/session-2.scope [13176.750260] [13176.750783] cpu#8, 2095.096 MHz [13176.751351] .nr_running : 0 [13176.752011] .nr_switches : 1044418 [13176.752335] .nr_uninterruptible : 4294964354 [13176.752654] .next_balance : 4307.833904 [13176.752980] .curr->pid : 0 [13176.753637] .clock : 13176753.583346 [13176.754384] .clock_task : 13056797.426608 [13176.755110] .: 0 [13177.255921] .rt_nr_migratory : 0 [13177.256624] .rt_throttled : 0 [13177.257308] .rt_time : 0.000000 [13177.257627] .rt_runtime : 950.000000 [13177.257984] [13177.258458] dl_rq[8]: [13177.258639] .dl_nr_running : 0 [13177.259274] .dl_nr_migratory : 0 [13177.259936] .dl_bw->bw : 996147 [13177.260259] .dl_bw->total_bw : 0 [13177.260915] [13177.261392] runnable tasks: [13177.261545] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13177.262160] ------------------------------------------------------------------------------------------------------------- [13177.262789] S cpuhp/8 63 27558.649192 24 120 0.000000 9.453500 0.000000 0.000000 1 0 / [13177.263940] S migration/8 64 0.000000 3651 0 0.000000 3980.528458 0.000000 0.000000 1 0 / [13177.265130] S ksoftirqd/8 65 4209627.630173 20459 120 I kworker/8:1H 625 4213039.579748 22238 100 0.000000 2716.063899 0.000000 0.000000 1 0 / [13177.766872] I 
kworker/8:0 254821 4213039.999594 1121 120 0.000000 561.678927 0.000000 0.000000 1 0 / [13177.768057] I kworker/8:1 265994 4192869.495166 2 120 0.000000 0.157415 0.000000 0.000000 1 0 / [13177.769268] D modprobe 270765 187005.445443 1 120 0.000000 44.421711 0.000000 0.000000 1 0 /user.slice/user-0.slice/session-2.scope [13177.770235] [13177.770763] cpu#9, 2095.096 MHz [13177.771399] .nr_running : 0 [13177.772095] .nr_switches : 1067673 [13177.772390] .nr_uninterruptible : 4294966244 [13177.772697] .next_balance : 4307.834924 [13177.773017] .curr->pid : 0 [13177.773674] .clock : 13177773.584914 [13177.774441] .clock_task : 13055046.533873 [13177.775180] .avg_idle : 1000000 [13177.775517] .max_idle_balance_cost : 500000 [13177.775830] [13177.776373] cfs_rq[9]:/ [13177.776527] .exec_clock : 0.000000 [13177.776872] .MIN_vruntime [13178.277312] .spread0 : -405043.512519 [13178.278052] .nr_spread_over : 0 [13178.278750] .nr_running : 0 [13178.279408] .h_nr_running : 0 [13178.280092] .idle_nr_running : 0 [13178.280766] .idle_h_nr_running : 0 [13178.281428] .load : 0 [13178.282111] .load_avg : 0 [13178.282790] .runnable_avg : 0 [13178.283465] .util_avg : 0 [13178.284120] .util_est_enqueued : 0 [13178.284834] .removed.load_avg : 0 [13178.285461] .removed.util_avg : 0 [13178.286125] .removed.runnable_avg : 0 [13178.286803] .tg_load_avg_contrib : 0 [13178.287434] .tg_load_avg : 0 [13178.288115] .throttled : 0 [13178.288792] .throttle_count : 0 [13178.289468] [13178.289970] rt_rq[9]: [13178.290145] .rt_nr_running : 0 [13178.290825] .rt_nr_migratory : 0 [13178.291445] .rt_throttled : 0 [[13178.783207] .dl_nr_running : 0 [13178.792550] .dl_nr_migratory : 0 [13178.793259] .dl_bw->bw : 996147 [13178.793567] .dl_bw->total_bw : 0 [13178.794280] [13178.794831] runnable tasks: [13178.794990] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13178.795557] ------------------------------------------------------------------------------------------------------------- [13178.796180] S cpuhp/9 68 29981.874680 24 120 0.000000 8.894574 0.000000 0.000000 1 0 / [13178.797312] S migration/9 69 0.000000 3650 0 0.000000 3987.481867 0.000000 0.000000 1 0 / [13178.798446] S ksoftirqd/9 70 4391218.422917 18762 120 0.000000 1708.120003 0.000000 0.000000 1 0 / [13178.799619] I kworker/9:0H 72 2715.806697 4 100 0.000000 0.197124 0.000000 0.000000 1 0 / [13178.800754] S kcompactd1 187 4397932.558268 26125 120 0.000000 609.120808 0.000000 0.000000 1 0 / [13178.801675] I acpi_thermal_pm 222 975.557930 2 10[13179.293094] I kworker/9:1H 537 4397932.337495 21632 100 0.000000 2668.049321 0.000000 0.000000 1 0 / [13179.303463] S xfsaild/dm-0 756 4397944.337495 233321 120 0.000000 7438.727405 0.000000 0.000000 1 0 / [13179.304633] I xfs-cil/dm-2 973 28950.629028 2 100 0.000000 0.173834 0.000000 0.000000 1 0 / [13179.305824] S gmain 1852 206006.564274 47 120 0.000000 47.387444 0.000000 0.000000 1 0 /user.slice/user-0.slice/session-2.scope [13179.306768] D kworker/9:2 233066 4397901.126893 2552 120 0.000000 689.258567 0.000000 0.000000 1 0 / [13179.307944] I kworker/9:1 260779 4397915.116796 7 120 0.000000 0.410885 0.000000 0.000000 1 0 / [13179.309152] I kworker/9:0 270757 4397932.477698 26 120 0.000000 0.522875 0.000000 0.000000 1 0 / [13179.310330] [13179.310842] cpu#10, 2095.096 MHz [13179.311427] .nr_running : 0 [13179.339214 : 0 [13179.812922] .clock : 13179812.609945 [13179.813684] .clock_task : 13063274.752837 [13179.814482] .avg_idle : 1000000 [13179.814803] .max_idle_balance_cost : 500000 [13179.815147] 
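The per-CPU blocks in this Sched Debug dump each begin with rq-level counters (.nr_running, .nr_switches, .nr_uninterruptible and the clock fields) before the cfs_rq/rt_rq/dl_rq details and the runnable-tasks table. As a rough triage aid, the Python sketch below pulls just those rq counters out of a saved copy of a capture like this one; it is an assumption-laden helper, not a standard tool: console.log is a placeholder path, and treating .nr_uninterruptible values near 2^32 (such as 4294966073) as negative per-CPU deltas printed unsigned is an interpretation, since only the sum across CPUs is meaningful.

    #!/usr/bin/env python3
    # Rough triage helper for a saved copy of a "Sched Debug" console dump.
    # Assumptions (not part of the log above): console.log is a placeholder
    # path, and the fields keep the "[timestamp] .name : value" layout even
    # where the serial console dropped characters, so parsing stays tolerant.
    import re
    import sys

    CPU_RE = re.compile(r"cpu#(\d+),")
    FIELD_RE = re.compile(r"\.(nr_running|nr_switches|nr_uninterruptible)\s*:\s*(\d+)")

    def summarize(path):
        text = open(path, errors="replace").read()
        text = re.sub(r"\[\s*\d+\.\d+\]", " ", text)      # strip console timestamps
        heads = list(CPU_RE.finditer(text))
        for i, head in enumerate(heads):
            end = heads[i + 1].start() if i + 1 < len(heads) else len(text)
            fields = {}
            for name, value in FIELD_RE.findall(text[head.start():end]):
                fields.setdefault(name, int(value))       # first hit = rq-level counter
            unint = fields.get("nr_uninterruptible", 0)
            if unint > 2**31:                             # printed unsigned; assume a
                unint -= 2**32                            # negative per-CPU delta
            print("cpu#%s: nr_running=%s nr_switches=%s nr_uninterruptible=%d"
                  % (head.group(1), fields.get("nr_running", "?"),
                     fields.get("nr_switches", "?"), unint))

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Against this capture it would single out the few CPUs still carrying runnable tasks (cpu#2 with .nr_running : 5 above, cpu#6 and cpu#17 further down), subject to whatever fields were lost to console truncation.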
[13179.815667] rt_rq[10]: [13179.815825] .rt_nr_running : 0 [13179.816526] .rt_nr_migratory : 0 [13179.817191] .rt_throttled : 0 [13179.817854] .rt_time : 0.000000 [13179.818172] .rt_runtime : 950.000000 [13179.818458] [13179.818960] dl_rq[10]: [13179.819136] .dl_nr_running : 0 [13179.819816] .dl_nr_migratory : 0 [13179.820460] .dl_bw->bw : 996147 [13179.821120] .dl_bw->total_bw : 0 [13179.821808] [13179.822299] runnable tasks: [13179.822446] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13179.823061] ------------------------------------------------------------------------------------------------------------- [13179.850 9.600273 0.000000 0.000000 1 0 / [13180.324640] S migration/10 74 0.000000 3727 0 0.000000 3995.870967 0.000000 0.000000 1 0 / [13180.325776] S ksoftirqd/10 75 3785610.166299 19366 120 0.000000 1718.069183 0.000000 0.000000 1 0 / [13180.326942] I kworker/10:0H 77 8331.244040 4 100 0.000000 0.293117 0.000000 0.000000 1 0 / [13180.328079] I kthrotld 213 852.888219 2 100 0.000000 0.093188 0.000000 0.000000 1 0 / [13180.329230] I kworker/10:1H 718 3785720.993352 22915 100 0.000000 3126.468168 0.000000 0.000000 1 0 / [13180.330399] S systemd-udevd 845 1535785.102437 131179 120 0.000000 1263450.485169 0.000000 0.000000 1 0 /system.slice [13180.331226] D in:imjournal 1079 1535768.477626 20175 120 0.000000 18300.174492 0.000000 0.000000 1 1079 /system.slice [13180.332454] S agetty 1143 128.147333 9 120 0.I kworker/10:0 237518 3767934.396479 1664 120 0.000000 151.552724 0.000000 0.000000 1 0 / [13180.834133] I kworker/u130:1 260585 3785846.567041 208 120 0.000000 102.434351 0.000000 0.000000 1 0 / [13180.835375] I kworker/10:1 266384 3785845.969261 333 120 0.000000 33.235484 0.000000 0.000000 1 0 / [13180.836545] [13180.837065] cpu#11, 2095.096 MHz [13180.837655] .nr_running : 0 [13180.838316] .nr_switches : 1308204 [13180.838650] .nr_uninterruptible : 4294967245 [13180.838990] .next_balance : 4307.837988 [13180.839279] .curr->pid : 0 [13180.839944] .clock : 13180839.637546 [13180.840699] .clock_task : 13058415.798773 [13180.841451] .avg_idle : 1000000 [13180.841767] .max_idle_balance_cost : 500000 [13180.842079] [13180.842562] cfs_rq[11]:/system.slice [13180.842818] .exec_clock : 0.000000 [13180.843160] .MIN_vruntime : 0.000001 [13180.843444] .min_vruntime 9 [13181.344258] .nr_spread_over : 0 [13181.344968] .nr_running : 0 [13181.345656] .h_nr_running : 0 [13181.346329] .idle_nr_running : 0 [13181.346982] .idle_h_nr_running : 0 [13181.347673] .load : 0 [13181.348335] .load_avg : 0 [13181.348987] .runnable_avg : 0 [13181.349661] .util_avg : 0 [13181.350323] .util_est_enqueued : 0 [13181.350990] .removed.load_avg : 0 [13181.351649] .removed.util_avg : 0 [13181.352318] .removed.runnable_avg : 0 [13181.352965] .tg_load_avg_contrib : 0 [13181.353662] .tg_load_avg : 68 [13181.354330] .throttled : 0 [13181.355027] .throttle_count : 0 [13181.355694] .se->exec_start : 13058270.145523 [13181.356434] .se->vruntime : 3871435.428254 [13181.357155] .se->sum_exec_runtime : 267421.079298 [13181.357918] .se->load.weight : 2 [13181.358556] .se->avg.load_avg : 0 [13181.359228] .se->avg.util_avg : 0 [13181.359928] .se->avg.runnable_avg : 0 [13181.387396] .min_vruntime : 3871446.050247 [13181.861498] .max_vruntime : 0.000001 [13181.861860] .spread : 0.000000 [13181.862153] .spread0 : -931541.799767 [13181.862868] .nr_spread_over : 0 [13181.863502] .nr_running : 0 [13181.864171] .h_nr_running : 0 [13181.864857] .idle_nr_running : 0 [13181.865491] .idle_h_nr_running : 0 [13181.866153] 
.load : 0 [13181.866840] .load_avg : 0 [13181.867499] .runnable_avg : 0 [13181.868162] .util_avg : 0 [13181.868807] .util_est_enqueued : 0 [13181.869489] .removed.load_avg : 0 [13181.870148] .removed.util_avg : 0 [13181.870836] .removed.runnable_avg : 0 [13181.871500] .tg_load_avg_contrib : 0 [13181.872177] .tg_load_avg : 0 [13181.872858] .throttled : 0 [13181.873489] .throttle_count : 0 [13181.874149] [13181.874661] rt_rq[11]: [13181.874816] .rt_nr_running : 0 [13181.875460] .rt_nr_migratory [13182.366673] [13182.376464] dl_rq[11]: [13182.376661] .dl_nr_running : 0 [13182.377324] .dl_nr_migratory : 0 [13182.377978] .dl_bw->bw : 996147 [13182.378277] .dl_bw->total_bw : 0 [13182.378944] [13182.379439] runnable tasks: [13182.379588] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13182.380199] ------------------------------------------------------------------------------------------------------------- [13182.380835] I rcu_par_gp 4 1130.056874 4 100 0.000000 0.636479 0.000000 0.000000 1 0 / [13182.381982] S cpuhp/11 78 23568.260047 25 120 0.000000 9.565177 0.000000 0.000000 1 0 / [13182.383134] S migration/11 79 0.000000 3676 0 0.000000 4005.044988 0.000000 0.000000 1 0 / [13182.384289] S ksoftirqd/11 80 3869555.491614 17254 120 0.000000 1473.201726 0.000000 0.000000 1 0 / [13182.385530] I kworker/11:0H 82 8345.680706 4 100 0.000000 I xfs-conv/sda1 939 12507.569954 2 100 0.000000 0.176770 0.000000 0.000000 1 0 / [13182.887277] S gmain 1105 1946702.557133 3368 120 0.000000 859.085161 0.000000 0.000000 1 1105 /system.slice [13182.888521] I kworker/11:1 239680 3841066.955401 774 120 0.000000 63.542402 0.000000 0.000000 1 0 / [13182.889703] I kworker/11:0 260189 3871421.851834 697 120 0.000000 38.377139 0.000000 0.000000 1 0 / [13182.890882] [13182.891360]82.892671] .nr_switches : 2404252 [13182.893015] .nr_uninterruptible : 4294964227 [13182.893303] .next_balance : 4307.840044 [13182.893592] .curr->pid : 0 [13182.894241] .clock : 13182893.634006 [13182.895059] .clock_task : 13053156.095659 [13182.895781] .avg_idle : 1000000 [13182.896105] .max_idle_balance_cost : 500000 [13182.92 .rt_throttled : 0 [13183.397205] .rt_time : 0.000000 [13183.397535] .rt_runtime : 950.000000 [13183.397857] [13183.398365] dl_rq[12]: [13183.398535] .dl_nr_running : 0 [13183.399194] .dl_nr_migratory : 0 [13183.399863] .dl_bw->bw : 996147 [13183.400178] .dl_bw->total_bw : 0 [13183.400836] [13183.401353] runnable tasks: [13183.401500] S task PID tree-key switches prio wait-time sum-exec sum-sleep [1 cpu#12, 2095.096 MHz [13182.891974] .nr_running : 0 [131------------------------------------------ [13183.402750] S cpuhp/12 83 22274.069063 24 120 0.000000 10.515549 0.000000 0.000000 0 0 / [13183.403873] S migration/12 84 0.000000 4412 0 0.000000 4091.022804 0.000000 0.000000 0 0 / [13183.405060] S ksoftirqd/12 85 3563333.409873 11121 120 0.000000 3318.413836 0.000000 0.000000 0 0 / [13183.406196] I kworker/12:0H 87 482.344035 4 100 0.000000 0.194096 0.000000 0.000000 0 0 / [13183.407347] I 0.000000 0 0 / [13183.908239] S dbus-broker 1121 830643.476570 10282 120 0.000000 6229.520967 0.000000 0.000000 0 0 /system.slice [13183.909064] S sshd 1822 152754.867396 1776 120 0.000000 976.002517 0.000000 0.000000 0 0 /user.slice/user-0.slice/session-2.scope [13183.910010] I/12:1 258645 3565968.542717 659 120 0.000000 561.746557 0.000000 0.000000 0 0 / [13183.912320] [13183.912860] cpu#13, 2095.096 MHz [13183.913459] .nr_running : 0 [13183.914129] .nr_switches : 1977147 [13183.914490] .nr_uninterruptible : 3725 
[13183.915205] .next_balance : 4307.841067 [13183.915531] .curr->pid : 0 [13183.916205] .clock : 13183915.648891 [13183.916970] .clock_task : 13082403.063900 [13183.917726] .avg_idle : 1000000 [13183.918050] .max_idle_balance_cost : 500000 [13183.918346] [13183.918857] rt_rq[13]: [13183.919039] .rt_nr_running [13184.410241] .rt_runtime : 950.000000 [13184.419859] [13184.420364] dl_rq[13]: [13184.420514] .dl_nr_running : 0 [13184.421178] .dl_nr_migratory : 0 [13184.421873] .dl_bw->bw : 996147 [13184.422215] .dl_bw->total_bw : 0 [13184.422876] [13184.423376] runnable tasks: [13184.423529] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13184.424150] ------------------------------------------------------------------------------------------------------------- [13184.424781] S cpuhp/13 88 25851.050934 24 120 0.000000 7.378766 0.000000 0.000000 0 0 / [13184.425918] S migration/13 89 0.000000 3885 0 0.000000 4048.844502 0.000000 0.000000 0 0 / [13184.427031] S ksoftirqd/13 90 3196717.730288 12679 120 0.000000 2850.339575 0.000000 0.000000 0 0 / [13184.428198] I kworker/13:0H 92 1387[13189.432090] dl_rq[14]: [13189.432254] .dl_nr_running : 0 [13194.409653] .nr_running : 0 [13194.435810] .nr_switch: 0 [13195.937048] .rt_throttled : 0 [13195.937759] .rt_time : 0.000000 [13195.938058] .rt_runtime : 950.000000 [13195.938374] [13195.938878] dl_rq[15]: [13195.939036] .dl_nr_running : 0 [13195.939687] .dl_nr_migratory : 0 [13195.940367] .dl_bw->bw : 996147 [13195.940704] .dl_bw->total_bw : 0 [13195.941388] [13195.941916] runnable tasks: [13195.942108] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13195.942690] ------------------------------------------------------------------------------------------------------------- [13195.943272] S systemd 1 244.761052 18062 120 0.000000 62949.353524 0.000000 0.000000 0 0 /init.scope [13195.944121] S cpuhp/15 98 25512.042338 24 120 0.000000 7.610960 0.000000 0.000000 0 0 / [13195.945338] S migration/15 99 757.592191 3697 0 0.000000 4030.849330 0.000009.648311 4 100 0.000000 0.263229 0.000000 0.000000 0 0 / [13196.447112] S scsi_eh_0 628 6661.129188 2 120 0.000000 0.062630 0.000000 0.000000 0 0 / [13196.448264] I kworker/15:1H 657 3002185.502662 6475 100 0.000000 931.299274 0.000000 0.000000 0 0 / [13196.449405] S rpcbind 1004 758210.895949 469 120 0.000000 193.362804 0.000000 0.000000 0 0 /system.slice [13196.450231] I kworker/15:3 200894 2977103.091478 2155 120 0.000000 1369.601719 0.000000 0.000000 0 0 / [13196.451377] I kworker/15:1 255553 3002266.322482 665 120 0.000000 1983.061441 0.000000 0.000000 0 0 / [13196.452523] S sleep 270825 758350.340430 1 120 0.000000 11.144092 0.000000 0.000000 0 0 /system.slice [13196.453774] [13196.454254] cpu#16, 2095.096 MHz [13196.454880] .nr_running : 0 [13196.455536] .nr_switches [13196.946798] .clock : 13196946.776524 [13196.956692] .clock_task : 13120512.646830 [13196.957418] .avg_idle : 1000000 [13196.957772] .max_idle_balance_cost : 500000 [13196.958066] [13196.958546] rt_rq[16]: [13196.958697] .rt_nr_running : 0 [13196.959359] .rt_nr_migratory : 0 [13196.960018] .rt_throttled : 0 [13196.960677] .rt_time : 0.000000 [13196.960987] .rt_runtime : 950.000000 [13196.961271] [13196.961791] dl_rq[16]: [13196.961946] .dl_nr_running : 0 [13196.962589] .dl_nr_migratory : 0 [13196.963244] .dl_bw->bw : 996147 [13196.963543] .dl_bw->total_bw : 0 [13196.964214] [13196.964773] runnable tasks: [13196.964971] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13196.965559] 
------------------------------------------------------------------------------------------------------------- [13196.966158] S cpuhp/16 103 20943.962929 24 120 0.000000 7.691716 0.000000 0.000000 0 0 / [13196.967268] S migration/16 104 809.679319 3651 0 0.000000 4036.131218 0.000000 0.000000 0 0 / [131 0.000000 0.136570 0.000000 0.000000 0 0 / [13197.468990] I kworker/16:1H 789 3113130.159017 9742 100 0.000000 1033.270299 0.000000 0.000000 0 0 / [13197.470163] S xfsaild/dm-2 974 2994038.554885 631 120 0.000000 20.373265 0.000000 0.000000 0 0 / [13197.471342] I kworker/16:0 200047 3112619.350528 3027 120 0.000000 1720.366401 0.000000 0.000000 0 0 / [13197.472508] I kworker/16:2 251999 3112618.756337 7 120 0.000000 0.565447 0.000000 0.000000 0 0 / [13197.473686] I kworker/16:1 270654 3113130.463814 51 120 0.000000 1.849655 0.000000 0.000000 0 0 / [13197.474896] [13197.475391] cpu#17, 2095.096 MHz [13197.476004] .nr_running : 1 [13197.476668] .nr_switches : 2036289 [13197.476984] .nr_uninterruptible : 4294964778 [13197.477300] .next_balance : 4[13197.977798] .avg_idle : 1000000 [13197.978113] .max_idle_balance_cost : 500000 [13197.978432] [13197.978941] cfs_rq[17]:/system.slice [13197.979191] .exec_clock : 0.000000 [13197.979492] .MIN_vruntime : 0.000001 [13197.979802] .min_vruntime : 668323.061802 [13197.980528] .max_vruntime : 0.000001 [13197.980846] .spread : 0.000000 [13197.981145] .spread0 : -4134664.788212 [13197.981876] .nr_spread_over : 0 [13197.982540] .nr_running : 0 [13197.983244] .h_nr_running : 0 [13197.983944] .idle_nr_running : 0 [13197.984600] .idle_h_nr_running : 0 [13197.985292] .load : 0 [13197.986001] .load_avg : 51 [13197.986688] .runnable_avg : 51 [13197.987368] .util_avg : 51 [13197.988049] .util_est_enqueued : 0 [13197.988795] .removed.load_avg : 0 [13197.989449] .removed.util_avg : 0 [13197.990148] .removed.runnable_avg : 0 [13197.990816] .tg_load_avg_contrib .throttle_count : 0 [13198.391810] .se->exec_start [13198.492075] .se->vruntime : 3638554.381102 [13198.492831] .se->sum_exec_runtime : 90658.737056 [13198.493167] .se->load.weight : 1048576 [13198.493472] .se->avg.load_avg : 5 [13198.494125] .se->avg.util_avg : 5 [13198.494824] .se->avg.runnable_avg : 5 [13198.495471] [13198.495973] cfs_rq[17]:/ [13198.496127] .exec_clock : 0.000000 [13198.496441] .MIN_vruntime : 0.000001 [13198.496727] .min_vruntime : 3638555.626518 [13198.497443] .max_vruntime : 0.000001 [13198.497802] .spread : 0.000000 [13198.498104] .spread0 : -1164432.223496 [13198.498861] .nr_spread_over : 0 [13198.499513] .nr_running : 0 [13198.500189] .h_nr_running : 0 [13198.500857] .idle_nr_running : 0 [13198.501516] .idle_h_nr_running : 0 [13198.502172] .load : 0 [13198.502828] .load_avg : 69 [13198.503537] .runnable_avg : 68 [13198.504232] .util_avg : 72 [13198.504923] .util_est_enqueued : 72 [13198.505548] .removed.loa[13198.887606] .tg_load_avg_contrib : 0 [13198.906803] .tg_load_avg : 0 [13198.907587] .throttled : 0 [13198.908294] .throttle_count : 0 [13198.908977] [13198.909483] rt_rq[17]: [13198.909639] .rt_nr_running : 0 [13198.910429] .rt_nr_migratory : 0 [13198.911256] .rt_throttled : 0 [13198.911945] .rt_time : 0.000000 [13198.912315] .rt_runtime : 950.000000 [13198.912606] [13198.913130] dl_rq[17]: [13198.913358] .dl_nr_running : 0 [13198.914023] .dl_nr_migratory : 0 [13198.914989] .dl_bw->bw : 996147 [13198.915326] .dl_bw->total_bw : 0 [13198.915995] [13198.916536] runnable tasks: [13198.916696] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13198.917320] 
----------------------------------------------------------------------------------------------------- 0.000000 4048.986130 0.000000 0.000000 0 0 / [13199.418921] S ksoftirqd/17 110 3638527.060061 12802 120 0.000000 3447.884916 0.000000 0.000000 0 0 / [13199.420087] I kworker/17:0H 112 4371.154691 4 100 0.000000 0.158909 0.000000 0.000000 0 0 / [13199.421293] S kcompactd0 186 3638552.624434 26190 120 0.000000 633.335178 0.000000 0.000000 0 0 / [13199.422822] I kworker/17:1H 633 3638548.237646 18088 100 0.000000 1251.604050 0.000000 0.000000 0 0 / [13199.423997] >R systemd-journal 830 668340.389239 19634 120 0.000000 25080.435747 0.000000 0.000000 0 0 /system.slice [13199.424867] D NetworkManager 1055 668007.520859 22361 120 0.000000 12958.759114 0.000000 0.000000 0 1055 /system.slice [13199.426120] S rsyslogd 1068 665416.848780 52 120 0.000000 119.094148 0.000000 0.000000 0 0 /system.slice [13199.426930] S rs:main Q:Reg 1080 667997.570997 32022 120 0.000000 3605.058931 0.000000 0.000.824000 7 120 0.000000 0.595621 0.000000 0.000000 0 0 / [13199.928861] I kworker/17:0 270038 3638555.463881 175 120 0.000000 17.758695 0.000000 0.000000 0 0 / [13199.930037] [13199.930574] cpu#18, 2095.096 MHz [13199.931303] .nr_running : 0 [13199.932268] .nr_switches : 2007843 [13199.932585] .nr_uninterruptible : 4294966406 [13199.932901] .next_balance : 4307.857082 [13199.933225] .curr->pid : 0 [13199.933903] .clock : 13199933.761464 [13199.934639] .clock_task : 13087602.033802 [13199.935423] .avg_idle : 1000000 [13199.935806] .max_idle_balance_cost : 500000 [13199.936105] [13199.936822] rt_rq[18]: [13199.936996] .rt_nr_running : 0 [13199.937819] .rt_nr_migratory : 0 [13199.938489] .rt_throttled : 0 [13199.939166] .rt_time : 0.000000 [13199.939495] .rt_runtime : 950.000000 [13199.939807] [13199.940304] dl_rq[18]: [13199.940463] .dl_nr_running : 0 [13199.941332] .dl_nr_migratory : 0 [13199.99[13200.433387] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13200.442710] ------------------------------------------------------------------------------------------------------------- [13200.443341] S cpuhp/18 113 24021.954650 24 120 0.000000 9.930873 0.000000 0.000000 1 0 / [13200.444523] S migration/18 114 921.872322 3665 0 0.000000 4041.940531 0.000000 0.000000 1 0 / [13200.445797] S ksoftirqd/18 115 3919620.457825 27804 120 0.000000 2626.750786 0.000000 0.000000 1 0 / [13200.447312] I kworker/18:0H 117 6822.063014 4 100 0.000000 0.134038 0.000000 0.000000 1 0 / [13200.448493] I kworker/18:1H 656 3920501.146471 34962 100 0.000000 3739.177346 0.000000 0.000000 1 0 / [13200.449724] I xfs-blockgc/sda 941 12799.243659 2 100 0.000000 0.163173 0.000000 0.000000 1 0 / [13200.451095] I kworker/u130:3 200816 3921890.120943 [13200.951957] I kworker/18:0 266850 3917591.525050 3 120 0.000000 0.196095 0.000000 0.000000 1 0 / [13200.953323] [13200.953845] cpu#19, 2095.096 MHz [13200.954430] .nr_running : 0 [13200.955154] .nr_switches : 1168324 [13200.955494] .nr_uninterruptible : 4294965837 [13200.955827] .next_balance : 4307.858106 [13200.956137] .curr->pid : 0 [13200.956910] .clock : 13200956.785926 [13200.957821] .clock_task : 13082154.282252 [13200.958587] .avg_idle : 1000000 [13200.958912] .max_idle_balance_cost : 500000 [13200.959274] [13200.959811] rt_rq[19]: [13200.959979] .rt_nr_running : 0 [13200.960960] .rt_nr_migratory : 0 [13200.961827] .rt_throttled : 0 [13200.962485] .rt_time : 0.000000 [13200.962844] .rt_runtime : 950.000000 [13200.963141] [13200.963727] dl_rq[19]: [13200.963920] .dl_nr_running : 0 
[13200.964577] .dl_nr_migratory : 0 [13200.965331] .dl_bw->bw : 996147 [13200.965641] .dl_bw->total_bw : 0 [13200.966319] [13200.967153] runnable tasks: [13200.967352] S tS cpuhp/19 118 24555.503980 24 120 0.000000 9.131089 0.000000 0.000000 1 0 / [13201.468875] S migration/19 119 0.000000 3612 0 0.000000 4047.197998 0.000000 0.000000 1 0 / [13201.470089] S ksoftirqd/19 120 4129095.555363 33297 120 0.000000 2733.768915 0.000000 0.000000 1 0 / [13201.471351] I kworker/19:0H 122 1904.542648 4 100 0.000000 0.221488 0.000000 0.000000 1 0 / [13201.472681] I kmpath_rdacd 223 827.727234 2 100 0.000000 0.188917 0.000000 0.000000 1 0 / [13201.473866] I kworker/19:1H 412 4130874.904636 30565 100 0.000000 5028.376926 0.000000 0.000000 1 0 / [13201.475040] I ipmi-msghandler 931 11997.560369 2 100 0.000000 0.116697 0.000000 0.000000 1 0 / [13201.476331] I xfs-buf/sda1 938 12765.391398 2 100 0.000000 0.171206 0.000000 0.000000 1 0 / [13201.477733] S gmain 1117 -1.048576 1 120 0.000000 0.449563 0.000000 0.000000 1 0 /system.slice [10345.023398 0.000000 0.000000 1 0 /user.slice/user-0.slice/session-2.scope [13201.979690] I kworker/19:1 238944 4126562.313801 1067 120 0.000000 622.700172 0.000000 0.000000 1 0 / [13201.980926] I kworker/19:0 265166 4130876.514700 340 120 0.000000 38.387454 0.000000 0.000000 1 0 / [13201.982270] [13201.982898] cpu#20, 2095.096 MHz [13201.983486] .nr_running : 0 [13201.984200] .nr_switches : 950504 [13201.984534] .nr_uninterruptible : 4294967270 [13201.984877] .next_balance : 4307.859135 [13201.985187] .curr->pid : 0 [13201.985869] .clock : 13201985.795218 [13201.986579] .clock_task : 13096550.934073 [13201.987335] .avg_idle : 1000000 [13201.987638] .max_idle_balance_cost : 500000 [13201.987955] [13201.988451] rt_rq[20]: [13201.988634] .rt_nr_running : 0 [13201.989325] .rt_nr_migratory : 0 [13201.989992] .rt_throttled : 0 [13202.044[13202.481807] .dl_nr_running : 0 [13202.491239] .dl_nr_migratory : 0 [13202.491918] .dl_bw->bw : 996147 [13202.492214] .dl_bw->total_bw : 0 [13202.492883] [13202.493369] runnable tasks: [13202.493530] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13202.494125] ------------------------------------------------------------------------------------------------------------- [13202.494823] S cpuhp/20 123 21912.834117 24 120 0.000000 9.247929 0.000000 0.000000 1 0 / [13202.496019] S migration/20 124 1032.368632 3571 0 0.000000 4053.670320 0.000000 0.000000 1 0 / [13202.497173] S ksoftirqd/20 125 3634117.596760 28112 120 0.000000 2058.715473 0.000000 0.000000 1 0 / [13202.498349] I kworker/20:0H 127 7773.983862 4 100 0.000000 0.027646 0.000000 0.000000 1 0 / [13202.499519] I kworker/20:1H 842 3631548.175104 / [13203.000549] I kdmflush/253:2 947 11618.450725 2 100 0.000000 0.105631 0.000000 0.000000 1 0 / [13203.001752] S sshd 1126 2354.657984 23 120 0.000000 212.170300 0.000000 0.000000 1 0 /system.slice [13203.002638] I kworker/20:2 217977 3634115.808309 1798 120 0.000000 158.180881 0.000000 0.000000 1 0 / [13203.003839] I kworker/20:0 254579 3621921.368613 655 120 0.000000 58.914740 0.000000 0.000000 1 0 / [13203.005015] I kworker/20:1 267579 3634264.385247 191 120 0.000000 11.368299 0.000000 0.000000 1 0 / [13203.006212] [13203.006731] cpu#21, 2095.096 MHz [13203.007354] .nr_running : 0 [13203.008028] .nr_switches : 844758 [13203.008387] .nr_uninterruptible : 5344 [13203.009057] .next_balance : 4307.860160 [13203.009376] .curr->pid : 0 [13203.010039] .clock : 13203007.841316 [13203[13203.501382] [13203.511262] rt_rq[21]: [13203.511554] 
.rt_nr_running : 0 [13203.512231] .rt_nr_migratory : 0 [13203.512928] .rt_throttled : 0 [13203.513572] .rt_time : 0.000000 [13203.513894] .rt_runtime : 950.000000 [13203.514218] [13203.514725] dl_rq[21]: [13203.514941] .dl_nr_running : 0 [13203.515648] .dl_nr_migratory : 0 [13203.516352] .dl_bw->bw : 996147 [13203.516658] .dl_bw->total_bw : 0 [13203.517343] [13203.517862] runnable tasks: [13203.518025] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13203.518620] ------------------------------------------------------------------------------------------------------------- [13203.519206] S cpuhp/21 128 22832.637380 25 120 0.000000 9.242261 0.000000 0.000000 1 0 / [13203.520350] S migration/21 129 0.000000 3481 0 0.000000 4049.879799 0.000000 0.000000 1 0 / [13203.521508] S ksoftirqd/21 130 3382915.925322 23195 120 0.000000 1783.102697 0.000000 0.000000 1 0 / [13203.522690] I kworker 0.000000 1 0 / [13204.023610] I kworker/21:1H 758 3383574.690895 16723 100 0.000000 1880.154010 0.000000 0.000000 1 0 / [13204.024764] I xfs-inodegc/sda 942 10455.172957 2 100 0.000000 0.151581 0.000000 0.000000 1 0 / [13204.025969] I rpciod 1015 23125.333487 2 100 0.000000 0.145877 0.000000 0.000000 1 0 / [13204.027116] I kworker/21:0 244590 3385868.904242 899 120 0.000000 115.438300 0.000000 0.000000 1 0 / [13204.028277] I kworker/21:2 260440 3357325.733164 6 120 0.000000 0.183358 0.000000 0.000000 1 0 / [13204.029396] I kworker/u128:0 262374 3385869.113956 39 120 0.000000 5.647820 0.000000 0.000000 1 0 / [13204.030576] [13204.031113] cpu#22, 2095.096 MHz [13204.031716] .nr_running : 0 [13204.032375] .nr_switches : 1454506 [13204.032678] .nr_uninterruptible : 4294966760 [13204.032986] .next_balance : 13092282.474181 [13204.534055] .avg_idle : 1000000 [13204.534385] .max_idle_balance_cost : 500000 [13204.534676] [13204.535219] rt_rq[22]: [13204.535403] .rt_nr_running : 0 [13204.536065] .rt_nr_migratory : 0 [13204.536733] .rt_throttled : 0 [13204.537406] .rt_time : 0.000000 [13204.537709] .rt_runtime : 950.000000 [13204.538023] [13204.538514] dl_rq[22]: [13204.538694] .dl_nr_running : 0 [13204.539375] .dl_nr_migratory : 0 [13204.540048] .dl_bw->bw : 996147 [13204.540409] .dl_bw->total_bw : 0 [13204.541069] [13204.541556] runnable tasks: [13204.541711] S task PID tree-key switches prio wait-time sum-exec sum-sleep [13204.542300] ------------------------------------------------------------------------------------------------------------- [13204.542893] S cpuhp/22 133 24172.449758 24 120 0.000000 8.889252 0.000000 0.000000 1 0 / [13204.544066] S migration/22 134 1140.490607 3593 0 0.000000 4142.388277 0.000000 0.000000 1 0 / [13204.545250] S ksoftirqd/22 13 / [13205.046212] I kworker/22:1H 757 3743027.788083 27126 100 0.000000 3203.966101 0.000000 0.000000 1 0 / [13205.047373] I xprtiod 1016 24353.225967 2 100 0.000000 0.114284 0.000000 0.000000 1 0 / [13205.048505] S gssproxy 1133 1439320.030793 229 120 0.000000 65.614665 0.000000 0.000000 1 0 /system.slice [13205.049303] S (sd-pam) 1149 -1.048576 1 120 0.000000 2.780117 0.000000 0.000000 1 0 /user.slice/user-0.slice/user@0.service/init.scope [13205.050664] I nfsiod 2587 32220.746951 2 100 0.000000 0.147214 0.000000 0.000000 1 0 / [13205.051832] D kworker/u128:2 233778 3744007.630101 249 120 0.000000 30.629141 0.000000 0.000000 1 0 / [13205.052993] I kworker/22:2 241683 3744004.701613 1148 120 0.000000 185.937413 0.000000 0.000000 1 0 / [13205.054144] D kworker/22:1 254498 3744024.767050 314[13205.545696] I kworker/22:0 270739 3744024.829405 10 120 
0.000000 0.574665 0.000000 0.000000 1 0 / [13205.555951] [13205.556456] cpu#23, 2095.096 MHz [13205.557054] .nr_running : 0 [13205.557733] .nr_switches : 1115787 [13205.558059] .nr_uninterruptible : 4294966758 [13205.558353] .next_balance : 4307.862707 [13205.558643] .curr->pid : 0 [13205.559298] .clock : 13205558.843676 [13205.560036] .clock_task : 13102140.901786 [13205.560858] .avg_idle : 1000000 [13205.561165] .max_idle_balance_cost : 500000 [13205.561485] [13205.562002] cfs_rq[23]:/ [13205.562170] .exec_clock : 0.000000 [13205.562495] .MIN_vruntime : 0.000001 [13205.562787] .min_vruntime : 3645218.633747 [13205.563605] .max_vruntime : 0.000001 [13205.563945] .spread : 0.000000 [13205.564244] .spread0 : -1157769.216267 [13205.565034] .nr_spread_over : 0 [13205.565683] .nr_running [13206.056988] .load : 0 [13206.066851] .load_avg : 2 [13206.067525] .runnable_avg : 3 [13206.068197] .util_avg : 2 [13206.068901] .util_est_enqueued : 0 [13206.069545] .removed.load_avg : 0 [13206.070238] .removed.util_avg : 0 [13206.070940] .removed.runnable_avg : 0 [13206.071939] .tg_load_avg_contrib : 0 [13206.072597] .tg_load_avg : 0 [13206.073297] .throttled : 0 [13206.073973] .throttle_count : 0 [13206.074653] [13206.075213] rt_rq[23]: [13206.075362] .rt_nr_running : 0 [13206.076026] .rt_nr_migratory : 0 [13206.076681] .rt_throttled : 0 [13206.077418] .rt_time : 0.000000 [13206.077720] .rt_runtime : 950.000000 [13206.078032] [13206.078522] dl_rq[23]: [13206.078677] .dl_nr_running : 0 [13206.079366] .dl_nr_migratory : 0 [13206.080035] .dl_bw->bw : 996147 [13206.080355] .dl_bw->total_bw : 0 [13206.081008] [13206.081495] runnable tasks: [13206.081680] S task PID tree-key switches prio wait-time 0.000000 170115.771287 0.000000 0.000000 1 0 / [13206.583136] S cpuhp/23 138 25384.642401 24 120 0.000000 9.390167 0.000000 0.000000 1 0 / [13206.584309] S migration/23 139 1192.530406 3559 0 0.000000 4067.463001 0.000000 0.000000 1 0 / [13206.585472] S ksoftirqd/23 140 3627481.167152 26459 120 0.000000 1924.159082 0.000000 0.000000 1 0 / [13206.586650] I kworker/23:0H 142 8197.046236 4 100 0.000000 0.027419 0.000000 0.000000 1 0 / [13206.587797] I mld 231 775.927727 2 100 0.000000 0.178664 0.000000 0.000000 1 0 / [13206.588950] I ipv6_addrconf 233 800.231583 2 100 0.000000 0.193188 0.000000 0.000000 1 0 / [13206.590129] I kstrp 234 812.394618 2 100 0.000000 0.181560 0.000000 0.000000 1 0 / [13206.591287] I zswap-shrink 246 1023.998431 [13207.082854] S kipmi0 934 3369434.470069 869 139 0.000000 12.709550 0.000000 0.000000 1 0 / [13207.093022] I xfs-cil/sda1 944 12527.812964 2 100 0.000000 0.163667 0.000000 0.000000 1 0 / [13207.094181] S crond 1492 1350010.676295 280 120 0.000000 317.919542 0.000000 0.000000 1 0 /system.slice [13207.095039] I kworker/23:0 233530 3636877.370585 1565 120 0.000000 620.579402 0.000000 0.000000 1 0 / [13207.096229] I kworker/23:1 250831 3645169.379759 877 120 0.000000 593.826207 0.000000 0.000000 1 0 / [13207.097422] [13207.097939] [13207.097939] Showing all locks held in the system: [13207.098675] 1 lock held by khugepaged/189: [13207.098934] #0: ffffffff91ed6930 (lock#4){+.+.}-{3:3}, at: __lru_add_drain_all+0x57/0x5f0 [13207.099448] 2 locks held by systemd-journal/830: [13207.100096] #0: ffff8883da5f7458 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x30/0x120 [13207.101061] #1: ffff888490395d78 (&ep->lock){.-.-}-{2[13207.592508] 1 lock held by in:imjournal/1079: [13207.602310] #0: ffff8881bec57938 (&mm->mmap_lock#2){++++}-{3:3}, at: do_user_addr_fault+0x1fd/0xd90 
[13207.602921] 2 locks held by kworker/9:2/233066: [13207.603568] #0: ffff88810005c148 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7c5/0x1520 [13207.604172] #1: ffffc9000639fdc0 (init_free_wq){+.+.}-{0:0}, at: process_one_work+0x7f4/0x1520 [13207.605182] 5 locks held by kworker/u128:2/233778: [13207.605927] #0: ffff888440027948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7c5/0x1520 [13207.606513] #1: ffffc900066afdc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7f4/0x1520 [13207.607465] #2: ffffffff923a7058 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9c/0x9a0 [13207.608473] #3: ffffffff923b3bd0 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xed/0x370 [13207.609053] #4: ffffffff91dec350 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock+0x250/0x360 [13207.609627] 2 locks held by kworker/22:1/254498: [13207.610329] #0: ffff88810005d948 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7c5/0x1520 [13207.610926] #1: ffffc90003e77dc0 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process#1: ffffc90003f17dc0 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x7f4/0x1520 [13208.112164] #2: ffffffff923b3bd0 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 [13208.113122] 3 locks held by 20_sysinfo/270532: [13208.113787] #0: ffff888110f24470 (sb_writers#3){.+.+}-{0:0}, at: ksys_write+0xf9/0x1d0 [13208.114308] #1: ffffffff91d2a660 (rcu_read_lock){....}-{1:2}, at: __handle_sysrq+0x4a/0xe0 [13208.114772] #2: ffffffff91d2a660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire.constprop.0+0x0/0x30 [13208.115445] 1 lock held by modprobe/270765: [13208.115693] #0: ffffffff91e23ed0 (module_mutex){+.+.}-{3:3}, at: free_module+0x188/0x750 [13208.116191] [13208.116664] ============================================= [13208.116664] [13208.117051] Showing busy workqueues and worker pools: [13208.117402] workqueue events: flags=0x0 [13208.117745] pwq 18: cpus=9 node=1 flags=0x0 nice=0 active=1/256 refcnt=2 [13208.117759] in-flight: 233066:do_free_init [13208.117789] workqueue events_highpri: flags=0x10 [13208.119601] pwq 5: cpus=2 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2 [13208.119611] pending: mix_interrupt_randomness [13208.119651] workqueue events_power_efficient: flags= pwq 44: cpus=22 node=1 flags=0x0 nice=0 active=1/256 refcnt=2 [13208.585011] in-flight: 254498:wait_rcu_exp_gp [13208.585071] workqueue netns: flags=0xe000a [13208.622456] pwq 128: cpus=0-63 flags=0x4 nice=0 active=1/1 refcnt=4 [13208.622468] in-flight: 233778:cleanup_net [13208.622490] workqueue mm_percpu_wq: flags=0x8 [13208.624273] pwq 12: cpus=6 node=1 flags=0x0 nice=0 active=2/256 refcnt=4 [13208.624284] pending: vmstat_update, lru_add_drain_per_cpu BAR(189) [13208.624304] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=1/256 refcnt=2 [13208.624313] pending: vmstat_update [13208.624576] workqueue ipv6_addrconf: flags=0x40008 [13208.626706] pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/1 refcnt=2 [13208.626716] in-flight: 263892:addrconf_verify_work [13208.627752] workqueue xfs-sync/dm-2: flags=0x4 [13208.628503] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=1/256 refcnt=2 [13208.628513] pending: xfs_log_worker [xfs] [13208.629919] pool 2: cpus=1 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 270807 238691 [13208.629955] pool 18: cpus=9 node=1 flags=0x0 nice=0 hung=13s workers=3 idle: 270757 260779 [13208.630001] pool 44: cpus=22 node=1 flags=0x0 nice=0 hung=100s workers=3 idle: 270739 241683 [13208.63cu: 2-....: (4820 ticks this GP) 
idle=0df/1/0x4000000000000000 softirq=1055858/1055858 fqs=37040 [13209.134127] (detected by 6, t=159342 jiffies, g=6650153, q=14090 ncpus=24) [13209.134541] Sending NMI from CPU 6 to CPUs 0: [13209.135211] NMI backtrace for cpu 0 [13209.135221] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13209.135229] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13209.135233] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13209.135246] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13209.135252] RSP: 0018:ffffffff91807d68 EFLAGS: 00000046 [13209.135258] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13209.135261] RDX: 1ffffffff2307968 RSI: 0000000000000008 RDI: ffffffff9183cb40 [13209.135265] RBP: 0000000000000020 R08: 0000000000000000 R09: ffffffff9183cb47 [13209.135268] R10: fffffbfff2307968 R11: 0000000000000001 R12: ffffffff9183cb40 [13209.135271] R13: ffffe8fba880517c R14: ffffffff920f9c78 R15: ffffe8fba8805178 5250 CR3: 00000005a4c2c001 CR4: 00000000001706f0 [13209.135285] Call Trace: [13209.135289] [13209.135294] intel_idle+0x4e/0x70 [13209.135306] cpuidle_enter_state+0x161/0x9b0 [13209.135319] cpuidle_enter+0x4a/0xa0 [13209.135326] cpuidle_idle_call+0x27d/0x3f0 [13209.135335] ? arch_cpu_idle_exit+0x40/0x40 [13209.135343] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13209.135355] do_idle+0x12a/0x200 [13209.135363] cpu_startup_entry+0x19/0x20 [13209.135369] rest_init+0x145/0x1f0 [13209.135378] arch_call_rest_init+0xf/0x19 [13209.135388] start_kernel+0x3df/0x401 [13209.135394] secondary_startup_64_no_verify+0xe5/0xeb [13209.135412] [13209.136204] Sending NMI from CPU 6 to CPUs 1: [13209.650636] NMI backtrace for cpu 1 [13209.650644] CPU: 1 PID: 0 Comm: swapper/1 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13209.650651] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13209.650655] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13209.650664] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13209.650670] RSP: 0018:ffffc9000229fd38 EFLAGS: 00000046 BP: 0000000000000020 R08: 0000000000000000 R09: ffff8881027cba07 [13209.650685] R10: ffffed10204f9740 R11: 0000000000000001 R12: ffff8881027cba00 [13209.650687] R13: ffffe8fba8c0517c R14: ffffffff920f9c78 R15: ffffe8fba8c05178 [13209.650691] FS: 0000000000000000(0000) GS:ffff8883d8c00000(0000) knlGS:0000000000000000 [13209.650694] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13209.650698] CR2: 00005626ed5403f0 CR3: 00000005a4c2c003 CR4: 00000000001706e0 [13209.650701] Call Trace: [13209.650705] [13209.650710] intel_idle+0x4e/0x70 [13209.650719] cpuidle_enter_state+0x161/0x9b0 [13209.650729] cpuidle_enter+0x4a/0xa0 [13209.650736] cpuidle_idle_call+0x27d/0x3f0 [13209.650742] ? arch_cpu_idle_exit+0x40/0x40 [13209.650750] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13209.650759] do_idle+0x12a/0x200 [13209.650767] cpu_startup_entry+0x19/0x20 [13209.650772] start_secondary+0x22c/0x2b0 [13209.650778] ? set_cpu_sibling_map+0x2280/0x2280 [13209.650784] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13209.650791] ? 
start_cpu0+0xc/0xc [13209.650799] secondary_startup_64_no_verify+0xe5/0xeb [13209.650814] [13209.651632] Sending NMI from CPU 6 to CPUs 2: [13210.167089] NMI backtrace for cpu 2 [13210.167098] CPU: 2 PID: 27 Comm: migration/2 Kdump: loaded Tainted: G IOE X --------- --- [13210.167133] RIP: 0010:debug_smp_processor_id+0x0/0x20 [13210.167144] Code: 7f 48 c7 c7 a0 bd b2 90 83 e9 01 e8 c9 47 f5 ff 48 8b 74 24 20 48 c7 c7 00 be b2 90 e8 b8 47 f5 ff e8 4c 83 f7 ff eb af 66 90 <48> c7 c6 40 be b2 90 48 c7 c7 80 be b2 90 e9 1d ff ff ff 66 66 2e [13210.167151] RSP: 0018:ffffc9000370fdd8 EFLAGS: 00000002 [13210.167159] RAX: 0000000000000002 RBX: 0000000000000002 RCX: ffffffff8e5182ac [13210.167165] RDX: 0000000000000000 RSI: ffffffff9090a000 RDI: 0000000000000002 [13210.167171] RBP: 00000000001f85f8 R08: 0000000000000001 R09: ffffc90006ebf407 [13210.167176] R10: fffff52000dd7e80 R11: 0000000000000001 R12: fffff52000dd7e80 [13210.167182] R13: ffffffff908a2060 R14: 0000000000000002 R15: 0000000000000002 [13210.167188] FS: 0000000000000000(0000) GS:ffff8883d9000000(0000) knlGS:0000000000000000 [13210.167195] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13210.167202] CR2: 00007fc6ec1fb810 CR3: 00000001b6f6e002 CR4: 00000000001706e0 [13210.167208] Call Trace: [13210.167212] [13210.167216] rcu_dynticks_inc+0x10/0x30 [13210.167228] rcu_momentary_dyntick_idle+0x12/0x30 [13210.167238] multi_cpu_stop+0x1b_fn+0x6b/0x910 [13210.167303] smpboot_thread_fn+0x559/0x910 [13210.167315] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13210.167330] kthread+0x2a7/0x350 [13210.167337] ? kthread_complete_and_exit+0x20/0x20 [13210.167349] ret_from_fork+0x22/0x30 [13210.167379] [13210.168108] Sending NMI from CPU 6 to CPUs 3: [13211.182060] NMI backtrace for cpu 3 [13211.182067] CPU: 3 PID: 0 Comm: swapper/3 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13211.182075] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13211.182078] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13211.182088] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13211.182093] RSP: 0018:ffffc900022bfd38 EFLAGS: 00000046 [13211.182098] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13211.182102] RDX: 1ffff11020591740 RSI: 0000000000000008 RDI: ffff888102c8ba00 [13211.182105] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888102c8ba07 [13211.182108] R10: ffffed1020591740 R11: 0000000000000001 R12: ffff888102c8ba00 [1321S: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13211.182123] CR2: 00007f59224b5250 CR3: 00000005a4c2c004 CR4: 00000000001706e0 [13211.182127] Call Trace: [13211.182130] [13211.182135] intel_idle+0x4e/0x70 [13211.182144] cpuidle_enter_state+0x161/0x9b0 [13211.182155] cpuidle_enter+0x4a/0xa0 [13211.182161] cpuidle_idle_call+0x27d/0x3f0 [13211.182168] ? arch_cpu_idle_exit+0x40/0x40 [13211.182176] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13211.182185] do_idle+0x12a/0x200 [13211.182194] cpu_startup_entry+0x19/0x20 [13211.182200] start_secondary+0x22c/0x2b0 [13211.182206] ? set_cpu_sibling_map+0x2280/0x2280 [13211.182211] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13211.182220] ? 
start_cpu0+0xc/0xc [13211.182227] secondary_startup_64_no_verify+0xe5/0xeb [13211.182242] [13211.183078] Sending NMI from CPU 6 to CPUs 4: [13211.699302] NMI backtrace for cpu 4 [13211.699311] CPU: 4 PID: 0 Comm: swapper/4 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13211.699318] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13211.699321] RIP: 0010:mwait_8 89 df e8 78 da 78 ff [13211.699336] RSP: 0018:ffffc900022cfd38 EFLAGS: 00000046 [13211.699341] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13211.699345] RDX: 1ffff11020592740 RSI: 0000000000000008 RDI: ffff888102c93a00 [13211.699348] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888102c93a07 [13211.699351] R10: ffffed1020592740 R11: 0000000000000001 R12: ffff888102c93a00 [13211.699354] R13: ffffe8fba980517c R14: ffffffff920f9c78 R15: ffffe8fba9805178 [13211.699357] FS: 0000000000000000(0000) GS:ffff8883d9800000(0000) knlGS:0000000000000000 [13211.699361] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13211.699364] CR2: 00007f59224b5250 CR3: 00000005a4c2c004 CR4: 00000000001706e0 [13211.699367] Call Trace: [13211.699371] [13211.699376] intel_idle+0x4e/0x70 [13211.699385] cpuidle_enter_state+0x161/0x9b0 [13211.699395] cpuidle_enter+0x4a/0xa0 [13211.699402] cpuidle_idle_call+0x27d/0x3f0 [13211.699409] ? arch_cpu_idle_exit+0x40/0x40 [13211.699417] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13211.699425] do_idle+0x12a/0x200 [13211.699434] cpu_startup_entry+0x19/0x20 [13211.699440] start_sec[13211.699467] secondary_startup_64_no_verify+0xe5/0xeb [13211.699482] [13211.700297] Sending NMI from CPU 6 to CPUs 5: [13212.713926] NMI backtrace for cpu 5 [13212.713935] CPU: 5 PID: 0 Comm: swapper/5 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13212.713942] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13212.713945] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13212.713955] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13212.713960] RSP: 0018:ffffc900022dfd38 EFLAGS: 00000046 [13212.713966] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13212.713969] RDX: 1ffff11020592000 RSI: 0000000000000008 RDI: ffff888102c90000 [13212.713973] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888102c90007 [13212.713976] R10: ffffed1020592000 R11: 0000000000000001 R12: ffff888102c90000 [13212.713979] R13: ffffe8fba9c0517c R14: ffffffff920f9c78 R15: ffffe8fba9c05178 [13212.713982] FS: 0000000000000000(0000) GS:ffff8883d9c00000(0000) knlGS:0000000000000000 [13212.713987] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13212.713990] CR2: 00007f26cf579000 CR3: 000ate+0x161/0x9b0 [13212.714022] cpuidle_enter+0x4a/0xa0 [13212.714029] cpuidle_idle_call+0x27d/0x3f0 [13212.714036] ? arch_cpu_idle_exit+0x40/0x40 [13212.714043] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13212.714053] do_idle+0x12a/0x200 [13212.714061] cpu_startup_entry+0x19/0x20 [13212.714066] start_secondary+0x22c/0x2b0 [13212.714073] ? set_cpu_sibling_map+0x2280/0x2280 [13212.714078] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13212.714086] ? 
start_cpu0+0xc/0xc [13212.714093] secondary_startup_64_no_verify+0xe5/0xeb [13212.714108] [13212.714944] NMI backtrace for cpu 6 [13213.230395] CPU: 6 PID: 47 Comm: migration/6 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13213.231494] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13213.231864] Stopper: multi_cpu_stop+0x0/0x370 <- migrate_swap+0x2db/0x520 [13213.232283] Call Trace: [13213.232454] [13213.232949] dump_stack_lvl+0x57/0x81 [13213.233196] nmi_cpu_backtrace.cold+0x18/0xa3 [13213.233854] ? lapic_can_unplug_cpu+0x80/0x80 [13213.234541] nmi_trigger_cpumask_backtrace+0x212/0x2e0 [1rcu_pending+0xbc/0x530 [13213.735556] rcu_sched_clock_irq+0x141/0x310 [13213.736194] update_process_times+0x136/0x1a0 [13213.736835] tick_sched_handle+0x74/0x140 [13213.737070] tick_sched_timer+0xb3/0xd0 [13213.737296] ? tick_sched_do_timer+0x2c0/0x2c0 [13213.737951] __hrtimer_run_queues+0x1b1/0xd00 [13213.738602] ? enqueue_hrtimer+0x350/0x350 [13213.738835] ? recalibrate_cpu_khz+0x10/0x10 [13213.739494] ? ktime_get_update_offsets_now+0xe0/0x2c0 [13213.739816] hrtimer_interrupt+0x2e9/0x780 [13213.740080] __sysvec_apic_timer_interrupt+0x187/0x640 [13213.740413] sysvec_apic_timer_interrupt+0x8e/0xc0 [13213.741057] [13213.741554] [13213.742045] asm_sysvec_apic_timer_interrupt+0x16/0x20 [13213.742355] RIP: 0010:check_preemption_disabled+0x0/0xd0 [13213.742675] Code: d6 f8 ff ff 83 fd 01 75 dd 65 48 8b 3c 25 00 32 02 00 e8 d3 d5 06 fe e8 fe 1f 53 fe eb c8 cc cc cc cc cc cc cc cc cc cc cc cc <41> 54 55 53 48 83 ec 08 65 44 8b 25 68 74 d4 6f 65 8b 05 f9 cb d4 [13213.744025] RSP: 0018:ffffc90003867dd8 EFLAGS: 00000246 [13213.744322] RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffffffff8e5182ac [13213.745120] RDX: 0000000000000000 RSI: ffffffff90b2be40 RDI: ffffffff90b2be80 [13213.745943] RBP: 00000000001f85f8 R02060 R14: 0000000000000001 R15: 0000000000000001 [13214.147085] ? multi_cpu_stop+0x15c/0x370 [13214.147351] rcu_dynticks_inc+0x10/0x30 [13214.147581] rcu_momentary_dyntick_idle+0x12/0x30 [13214.148225] multi_cpu_stop+0x1b0/0x370 [13214.148485] ? stop_machine_yield+0x10/0x10 [13214.148741] cpu_stopper_thread+0x1f6/0x410 [13214.148991] ? cpu_stop_queue_two_[13214.234396] ? smpboot_thread_fn+0x6b/0x910 [13214.249566] smpboot_thread_fn+0x559/0x910 [13214.249800] ? __smpboot_create_thread.part.0+0x2e0/0x2e0 [13214.250098] kthread+0x2a7/0x350 [13214.250656] ? 
kthread_complete_and_exit+0x20/0x20 [13214.251307] ret_from_fork+0x22/0x30 [13214.251588] [13214.251753] Sending NMI from CPU 6 to CPUs 7: [13214.252420] NMI backtrace for cpu 7 [13214.252428] CPU: 7 PID: 0 Comm: swapper/7 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13214.252435] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13214.252438] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13214.252449] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13214.252454] RSP: 0018:ffffc900022ffd38 E[13214.252465] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888442488007 [13214.252468] R10: ffffed1088491000 R11: 0000000000000001 R12: ffff888442488000 [13214.252471] R13: ffffe8fff820517c R14: ffffffff920f9c78 R15: ffffe8fff8205178 [13214.252474] FS: 0000000000000000(0000) GS:ffff888828200000(0000) knlGS:0000000000000000 [13214.252478] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13214.252481] CR2: 0000557f1678a008 CR3: 00000005a4c2c005 CR4: 00000000001706e0 [13214.252485] DR0: 0000000000000001 DR1: 0000000000000000 DR2: 0000000000000000 [13214.252487] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 [13214.252490] Call Trace: [13214.252494] [13214.252499] intel_idle+0x4e/0x70 [13214.252509] cpuidle_enter_state+0x161/0x9b0 [13214.252520] cpuidle_enter+0x4a/0xa0 [13214.252527] cpuidle_idle_call+0x27d/0x3f0 [13214.252535] ? arch_cpu_idle_exit+0x40/0x40 [13214.252542] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13214.252551] do_idle+0x12a/0x200 [13214.252559] cpu_startup_entry+0x19/0x20 [13214.252565] start_secondary+0x22c/0x2b0 [13214.252571] ? set_cpu_sibling_map+0x2280/0x2280 [13214.252576] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13214.252584] ? start_c [13215.169452] NMI backtrace for cpu 8 [13215.169459] CPU: 8 PID: 0 Comm: swapper/8 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13215.169466] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13215.169469] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13215.169479] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13215.169484] RSP: 0018:ffffc9000230fd38 EFLAGS: 00000046 [13215.169490] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13215.169493] RDX: 1ffff11088492740 RSI: 0000000000000008 RDI: ffff888442493a00 [13215.169496] RBP: 00ffff888442493a07 [13215.169499] R10: ffffed1088492740 R11: 0000000000000001 R12: ffff888442493a00 [13215.169502] R13: ffffe8fff860517c R14: ffffffff920f9c78 R15: ffffe8fff8605178 [13215.169505] FS: 0000000000000000(0000) GS:ffff888828600000(0000) knlGS:0000000000000000 [13215.169509] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13215.169512] CR2: 0000558c15460b00 CR3: 00000005a4c2c006 CR4: 00000000001706e0 [13215.169515] DR0: 000000000042fbf0 DR1: 0000000000000000 DR2: 0000000000000000 [13215.169518] DR3: 0000000000000000 DR6: 00000000ffff0 [13215.169549] cpuidle_enter+0x4a/0xa0 [13215.169556] cpuidle_idle_call+0x27d/0x3f0 [13215.169563] ? arch_cpu_idle_exit+0x40/0x40 [13215.169570] ? 
tsc_verify_tsc_adjust+0x5d/0x2e0 [13215.169579] do_idle+0x12a/0x200 [13215.169587] cpu_startup_entry+0x19/0x20 [13215.169592] start_secondary+0x22c/0x2b0 [13215.169598] ? set_cpu_sibling_map+0x2280/0x2280 [13215.169603] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13215.169611] ? start_cpu0+0xc/0xc [13215.169619] secondary_startup_64_no_verify+0xe5/0xeb [13215.169633] [13215.170447] Sending NMI from CPU 6 to CPUs 9: [13215.787296] NMI backtrace for cpu 9 [13215.787305] CPU: 9 PID: 0 Comm: swapper/9 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13215.787311] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13215.787314] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13215.787324] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13215.787329] RSP: 0018:ffffc9000231fd38 EFLAGS: 0000BP: 0000000000000020 R08: 0000000000000000 R09: ffff888442490007 [13215.787343] R10: ffffed1088492000 R11: 0000000000000001 R12: ffff888442490000 [13215.787346] R13: ffffe8fff8a0517c R14: ffffffff920f9c78 R15: ffffe8fff8a05178 [13215.787349] FS: 0000000000000000(0000) GS:ffff888828a00000(0000) knlGS:0000000000000000 [13215.787353] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13215.787357] CR2: 00007ff7b958fe70 CR3: 00000005a4c2c005 CR4: 00000000001706e0 [13215.787360] Call Trace: [13215.787363] [13215.787368] intel_idle+0x4e/0x70 [13215.787377] cpuidle_enter_state+0x161/0x9b0 [13215.787387] cpuidle_enter+0x4a/0xa0 [13215.787394] cpuidle_idle_call+0x27d/0x3f0 [13215.787401] ? arch_cpu_idle_exit+0x40/0x40 [13215.787408] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13215.787417] do_idle+0x12a/0x200 [13215.787425] cpu_startup_entry+0x19/0x20 [13215.787431] start_secondary+0x22c/0x2b0 [13215.787437] ? set_cpu_sibling_map+0x2280/0x2280 [13215.787442] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13215.787450] ? start_cpu0+0xc/0xc [13215.787457] secondary_startup_64_no_verify+0xe5/0xeb [13215.787472] [13215.788290] Sending NMI from CPU 6 to CPUs 10: [13216.303562] NMI bac[13216.303583] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13216.303588] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13216.303600] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13216.303608] RSP: 0018:ffffc9000232fd38 EFLAGS: 00000046 [13216.303616] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13216.303622] RDX: 1ffff11088498740 RSI: 0000000000000008 RDI: ffff8884424c3a00 [13216.303628] RBP: 0000ffff8884424c3a07 [13216.303633] R10: ffffed1088498740 R11: 0000000000000001 R12: ffff8884424c3a00 [13216.303638] R13: ffffe8fff8e0517c R14: ffffffff920f9c78 R15: ffffe8fff8e05178 [13216.303644] FS: 0000000000000000(0000) GS:ffff888828e00000(0000) knlGS:0000000000000000 [13216.303651] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13216.303657] CR2: 0000558c15d4d860 CR3: 00000005a4c2c005 CR4: 00000000001706e0 [13216.303663] DR0: 0000000000000001 DR1: 0000000000000000 DR2: 0000000000000000 [13216.303668][13216.303698] cpuidle_enter_state+0x161/0x9b0 [13216.303714] cpuidle_enter+0x4a/0xa0 [13216.303726] cpuidle_idle_call+0x27d/0x3f0 [13216.303736] ? arch_cpu_idle_exit+0x40/0x40 [13216.303749] ? 
tsc_verify_tsc_adjust+0x5d/0x2e0 [13216.303764] do_idle+0x12a/0x200 [13216.303777] cpu_startup_entry+0x19/0x20 [13216.303785] start_secondary+0x22c/0x2b0 [13216.303795] ? set_cpu_sibling_map+0x2280/0x2280 [13216.303803] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13216.303815] ? start_cpu0+0xc/0xc [13216.303827] secondary_startup_64_no_verify+0xe5/0xeb [13216.303853] [13216.304579] Sending NMI from CPU 6 to CPUs 11: [13217.320228] NMI backtrace for cpu 11 [13217.320238] CPU: 11 PID: 0 Comm: swapper/11 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13217.320249] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13217.320253] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13217.320267] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13217.320276] RSP: 0018:ffffc9000233fd38 EFLAGS: 00000046 [13217.320284] RAX: 0BP: 0000000000000020 R08: 0000000000000000 R09: ffff8884424c0007 [13217.320301]8000 R11: 0000000000000001 R12: ffff8884424c0000 [13217.320306] R13: ffffe8fff920517c R14: ffffffff920f9c78 R15: ffffe8fff9205178 [13217.320313] FS: 0000000000000000(0000) GS:ffff888829200000(0000) knlGS:0000000000000000 [13217.320320] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13217.320325] CR2: 0000558c161d2ab0 CR3: 00000005a4c2c004 CR4: 00000000001706e0 [13217.320331] Call Trace: [13217.320335] [13217.320342] intel_idle+0x4e/0x70 [13217.320355] cpuidle_enter_state+0x161/0x9b0 [13217.320372] cpuidle_enter+0x4a/0xa0 [13217.320383] cpuidle_idle_call+0x27d/0x3f0 [13217.320394] ? arch_cpu_idle_exit+0x40/0x40 [13217.320406] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13217.320420] do_idle+0x12a/0x200 [13217.320435] cpu_startup_entry+0x19/0x20 [13217.320443] start_secondary+0x22c/0x2b0 [13217.320453] ? set_cpu_sibling_map+0x2280/0x2280 [13217.320461] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13217.320474] ? start_cpu0+0xc/0xc [13217.320486] secondary_startup_64_no_verify+0xe5/0xeb [13217.320514] [13217.321246] Sending NMI from CPU 6 to CPUs 12: [13217.837326] NMI backtrace for cpu 12 [13217.837335] CPU: 12 PID: 0 Comm: swapp[13217.837345] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13217.837356] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13217.837362] RSP: 0018:ffffc9000234fd38 EFLAGS: 00000046 [13217.837368] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13217.837372] RDX: 1ffff11020599000 RSI: 0000000000000008 RDI: ffff888102cc8000 [13217.837375] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888102cc8007 [13217.837378] R10: ffffed1020599000 R11: 0000000000000001 R12: ffff888102cc8000 [13217.837381] R13: ffffe8fbaa00517c R14: ffffffff920f9c78 R15: ffffe8fbaa005178 [13217.837384] FS: 0000000000000000(0000) GS:ffff8883da000000(0000) knlGS:0000000000000000 [13217.837389] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [13217.837392] CR2: 00007f4c312c2068 CR3: 00000005a4c2c001 CR4: 00000000001706e0 [13217.837396] Call Trace: [13217.837399] [13217.837404] intel_idle+0x4e/0x70 [13217.837414] cpuidle_enter_state+0x161/0x9b0 [13217.837424] cpuidle_enter+0do_idle+0x12a/0x200 [13217.837462] cpu_startup_entry+0x19/0x20 [13217.837468] start_secondary+0x22c/0x2b0 [13217.837475] ? 
set_cpu_sibling_map+0x2280/0x2280 [13217.837480] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13217.837488] ? start_cpu0+0xc/0xc [13217.837495] secondary_startup_64_no_verify+0xe5/0xeb [13217.837510] [13217.838323] Sending NMI from CPU 6 to CPUs 13: [13218.754281] NMI backtrace for cpu 13 [13218.754290] CPU: 13 PID: 0 Comm: swapper/13 Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13218.754297] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13218.754300] RIP: 0010:mwait_idle_with_hints.constprop.0+0x82/0x160 [13218.754309] Code: 48 c1 ea 03 80 3c 02 00 0f 85 d8 00 00 00 49 8b 04 24 a8 08 75 14 eb 07 0f 00 2d c9 cc 91 01 b9 01 00 00 00 48 89 e8 0f 01 c9 08 00 00 00 65 48 8b 1c 25 00 32 02 00 48 89 df e8 78 da 78 ff [13218.754315] RSP: 0018:ffffc9000235fd38 EFLAGS: 00000046 [13218.754320] RAX: 0000000000000020 RBX: ffffffff924a3880 RCX: 0000000000000001 [13218.754324] RDX: 1ffff11020599740 RSI: 0000000000000008 RDI: ffff888102ccba00 [13218.754328] RBP: 0000000000000020 R08: 0000000000000000 R09: ffff888102ccba07 [13218.754331] R10: ffffed1020599740 R11: 0000000000000001 R12: ffff888102ccba00 [13218.754334] R13: ffffe8fbaa40517c R14: ffffffff920f9c78 R15: ffR2: 00007f26ceb30000 CR3: 00000005a4c2c005 CR4: 00000000001706e0 [13218.754347] Call Trace: [13218.754351] [13218.754356] intel_idle+0x4e/0x70 [13218.754365] cpuidle_enter_state+0x161/0x9b0 [13218.754375] cpuidle_enter+0x4a/0xa0 [13218.754382] cpuidle_idle_call+0x27d/0x3f0 [13218.754389] ? arch_cpu_idle_exit+0x40/0x40 [13218.754397] ? tsc_verify_tsc_adjust+0x5d/0x2e0 [13218.754405] do_idle+0x12a/0x200 [13218.754414] cpu_startup_entry+0x19/0x20 [13218.754419] start_secondary+0x22c/0x2b0 [13218.754425] ? set_cpu_sibling_map+0x2280/0x2280 [13218.754431] ? set_bringup_idt_handler.constprop.0+0x98/0xc0 [13218.754438] ? start_cpu0+0xc/0xc [13218.754445] secondary_startup_64_no_verify+0xe5/0xeb [13218.754460] [13218.755278] Sending NMI from CPU 6 to CPUs 14: [13219.272044] NMI backtrace for cpu 14 [13219.272052] CPU: 14 PID: 270532 Comm: 20_sysinfo Kdump: loaded Tainted: G IOE X --------- --- 5.14.0-256.2009_766119311.el9.x86_64+debug #1 [13219.272063] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [13219.272068] RIP: 0010:smp_call_function_many_cond+0x2b8/0xd00 [13219.272081] Code: c0 03 38 c8 7c 08 84 c9 0f 85 c9 08 00 00 8b 45 08 a8 01 74 2f 49 89 d6 49 89 d5 49 c1 ee 03 41 83 e5 07 49 0[ 0.000000] [ 0.000000] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. [ 0.000000] Command line: elfcorehdr=0x3d000000 BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap console=ttyS1,115200n81 irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr hest_disable novmcoredd cma=0 hugetlb_cma=0 disable_cpu_apicid=0 hpwdt.pretimeout=0 hpwdt.kdumptimeout=0 [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
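The per-CPU backtraces above stop abruptly in the middle of CPU 14's dump: the console switches to a fresh boot whose command line carries elfcorehdr=0x3d000000, nr_cpus=1 and reset_devices, which marks it as the kdump capture kernel taking over. For a rough sense of how long the stalled grace period had been outstanding when the detector on CPU 6 began sending NMIs, the figure t=159342 jiffies from the stall report can be converted to seconds. The sketch below does only that arithmetic; the HZ value is an assumption (CONFIG_HZ=1000 is typical for RHEL x86_64 builds but is not stated anywhere in this capture).

```python
# Back-of-the-envelope age of the RCU stall reported above:
#   "(detected by 6, t=159342 jiffies, g=6650153, q=14090 ncpus=24)"
# HZ is assumed to be 1000 (the usual RHEL x86_64 setting); it is not
# recorded in this console capture.

HZ = 1000            # assumed scheduler tick rate, jiffies per second
t_jiffies = 159342   # stall age reported by the detector on CPU 6

print(f"grace period had been stalled for roughly {t_jiffies / HZ:.0f} s")
```

Roughly two and a half minutes under that assumption, which lines up with the hung=100s age shown for worker pool 44 in the workqueue summary earlier in the dump.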
[ 0.000000] signal: max sigframe size: 1776 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000001000-0x000000000009c7ff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009c800-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x000000003d001000-0x00000000bcffffff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 2094.962 MHz processor [ 0.003099] last_pfn = 0xbd000 max_arch_pfn = 0x400000000 [ 0.004199] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.025432] found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] [ 0.025542] Using GB pages for direct mapping [ 0.027172] RAMDISK: [mem 0xb2148000-0xb47fffff] [ 0.027258] ACPI: Early table checksum verification disabled [ 0.027278] ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP ) [ 0.027311] ACPI: XSDT 0x00000000BDDAED00 0000E4 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.027350] ACPI: FACP 0x00000000BDDAEE40 0000F4 (v03 HP ProLiant 00000002 ? 0000162E) [ 0.027384] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20211217/tbfadt-669) [ 0.027410] ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20211217/tbfadt-669) [ 0.027439] ACPI: DSDT 0x00000000BDDAEF40 0026DC (v01 HP DSDT 00000001 INTL 20030228) [ 0.027469] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.027494] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.027519] ACPI: SPCR 0x00000000BDDAC180 000050 (v01 HP SPCRRBSU 00000001 ? 0000162E) [ 0.027547] ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000) [ 0.027575] ACPI: HPET 0x00000000BDDAC240 000038 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.027603] ACPI: FFFF 0x00000000BDDAC280 000064 (v02 HP ProLiant 00000002 ? 0000162E) [ 0.027630] ACPI: SPMI 0x00000000BDDAC300 000040 (v05 HP ProLiant 00000001 ? 0000162E) [ 0.027658] ACPI: ERST 0x00000000BDDAC340 000230 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.027686] ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000) [ 0.027713] ACPI: SRAT 0x00000000BDDAC800 000750 (v01 HP Proliant 00000001 ? 0000162E) [ 0.027741] ACPI: FFFF 0x00000000BDDACF80 000176 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.027769] ACPI: BERT 0x00000000BDDAD100 000030 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.027796] ACPI: HEST 0x00000000BDDAD140 0000BC (v01 HP ProLiant 00000001 ? 0000162E) [ 0.027824] ACPI: DMAR 0x00000000BDDAD200 00051C (v01 HP ProLiant 00000001 ?
0000162E) [ 0.027851] ACPI: FFFF 0x00000000BDDAEC40 000030 (v01 HP ProLiant 00000001 00000000) [ 0.027879] ACPI: PCCT 0x00000000BDDAEC80 00006E (v01 HP Proliant 00000001 PH 0000504D) [ 0.027907] ACPI: SSDT 0x00000000BDDB1640 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503) [ 0.027935] ACPI: SSDT 0x00000000BDDB1E40 000103 (v03 HP CRSPCI0 00000002 HP 00000001) [ 0.027963] ACPI: SSDT 0x00000000BDDB1F80 000098 (v03 HP CRSPCI1 00000002 HP 00000001) [ 0.027991] ACPI: SSDT 0x00000000BDDB2040 00038A (v02 HP riser0 00000002 INTL 20030228) [ 0.028018] ACPI: SSDT 0x00000000BDDB2400 000385 (v03 HP riser1a 00000002 INTL 20030228) [ 0.028046] ACPI: SSDT 0x00000000BDDB27C0 000BB9 (v01 HP pcc 00000001 INTL 20120503) [ 0.028074] ACPI: SSDT 0x00000000BDDB3380 000377 (v01 HP pmab 00000001 INTL 20120503) [ 0.028101] ACPI: SSDT 0x00000000BDDB3700 005524 (v01 HP pcc2 00000001 INTL 20120503) [ 0.028129] ACPI: SSDT 0x00000000BDDB8C40 003AEC (v01 INTEL PPM RCM 00000001 INTL 20061109) [ 0.028154] ACPI: Reserving FACP table memory at [mem 0xbddaee40-0xbddaef33] [ 0.028164] ACPI: Reserving DSDT table memory at [mem 0xbddaef40-0xbddb161b] [ 0.028173] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.028181] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.028189] ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf] [ 0.028198] ACPI: Reserving MCFG table memory at [mem 0xbddac200-0xbddac23b] [ 0.028207] ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277] [ 0.028216] ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3] [ 0.028225] ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f] [ 0.028233] ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f] [ 0.028242] ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9] [ 0.028251] ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f] [ 0.028260] ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5] [ 0.028268] ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f] [ 0.028277] ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb] [ 0.028285] ACPI: Reserving DMAR table memory at [mem 0xbddad200-0xbddad71b] [ 0.028294] ACPI: Reserving FFFF table memory at [mem 0xbddaec40-0xbddaec6f] [ 0.028303] ACPI: Reserving PCCT table memory at [mem 0xbddaec80-0xbddaeced] [ 0.028312] ACPI: Reserving SSDT table memory at [mem 0xbddb1640-0xbddb1e29] [ 0.028321] ACPI: Reserving SSDT table memory at [mem 0xbddb1e40-0xbddb1f42] [ 0.028330] ACPI: Reserving SSDT table memory at [mem 0xbddb1f80-0xbddb2017] [ 0.028339] ACPI: Reserving SSDT table memory at [mem 0xbddb2040-0xbddb23c9] [ 0.028348] ACPI: Reserving SSDT table memory at [mem 0xbddb2400-0xbddb2784] [ 0.028357] ACPI: Reserving SSDT table memory at [mem 0xbddb27c0-0xbddb3378] [ 0.028366] ACPI: Reserving SSDT table memory at [mem 0xbddb3380-0xbddb36f6] [ 0.028376] ACPI: Reserving SSDT table memory at [mem 0xbddb3700-0xbddb8c23] [ 0.028385] ACPI: Reserving SSDT table memory at [mem 0xbddb8c40-0xbddbc72b] [ 0.028538] NUMA turned off [ 0.028547] Faking a node at [mem 0x0000000000000000-0x00000000bcffffff] [ 0.028604] NODE_DATA(0) allocated [mem 0xbcfd5000-0xbcffffff] [ 0.039203] Zone ranges: [ 0.039222] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.039243] DMA32 [mem 0x0000000001000000-0x00000000bcffffff] [ 0.039257] Normal empty [ 0.039269] Device empty [ 0.039282] Movable zone start for each node [ 0.039293] Early memory node ranges [ 0.039299] node 0: [mem
0x0000000000001000-0x000000000009bfff] [ 0.039309] node 0: [mem 0x000000003d001000-0x00000000bcffffff] [ 0.039323] Initmem setup node 0 [mem 0x0000000000001000-0x00000000bcffffff] [ 0.039370] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.052703] On node 0, zone DMA32: 53093 pages in unavailable ranges [ 0.054523] On node 0, zone DMA32: 12288 pages in unavailable ranges [ 0.206664] kasan: KernelAddressSanitizer initialized [ 0.207195] ACPI: PM-Timer IO Port: 0x908 [ 0.207236] APIC: Disabling requested cpu. Processor 0/0x0 ignored. [ 0.207246] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 1/0x2 ignored. [ 0.207257] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 2/0x4 ignored. [ 0.207266] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 3/0x6 ignored7276] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 4/0x8 ignored. [ 0.207284] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 5/0xa ignored. [ 0.207293] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 6/0x20 ignored. [ 0.207302] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 7/0x22 ignored. [ 0.207311] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 8/0x24 ignored. [ 0.207320] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 9/0x26 ignored. [ 0.207329] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 10/0x28 ignored. [ 0.207339] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 11/0x2a ignored. [ 0.207348] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 12/0x1 ignored. [ 0.207357] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 13/0x3 ignored. [ 0.207365] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 14/0x5 ignored. [ 0.207374] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Proces ignored. [ 0.207382] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 16/0x9 ignored. [ 0.207390] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 17/0xb ignored. [ 0.207399] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 18/0x21 ignored. [ 0.207407] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 19/0x23 ignored. [ 0.207415] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 20/0x25 ignored. [ 0.207424] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 21/0x27 ignored. [ 0.207433] APIC: NR_CPUS/possible_cpus limit of 1 almost reached. Keeping one slot for boot cpu. Processor 22/0x29 ignored. 
[ 0.207457] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.207539] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23 [ 0.207564] IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47 [ 0.207582] IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71 [ 0.207600] ACPI: INT 0 bus_irq 0 global_irq 2 high edge) [ 0.207614] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.207645] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.207655] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.207680] ACPI: SPCR: SPCR table version 1 [ 0.207688] ACPI: SPCR: Unexpected SPCR Access Width. Defaulting to byte size [ 0.207698] ACPI: SPCR: console: uart,mmio,0x0,9600 [ 0.207711] TSC deadline timer available [ 0.207720] smpboot: 64 Processors exceeds NR_CPUS limit of 1 [ 0.207729] smpboot: Allowing 1 CPUs, 0 hotplug CPUs [ 0.207873] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.207891] PM: hibernation: Registered nosave memory: [mem 0x0009c000-0x0009cfff] [ 0.207900] PM: hibernation: Registered nosave memory: [mem 0x0009d000-0x0009ffff] [ 0.207908] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.207917] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.207925] PM: hibernation: Registered nosave memory: [mem 0x00100000-0x3d000fff] [ 0.207949] [mem 0x00100000-0x3d000fff] avCI devices [ 0.207959] Booting paravirtualized kernel on bare hardware [ 0.207985] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [ 0.249858] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:1 nr_cpu_ids:1 nr_node_ids:1 [ 0.252103] percpu: Embedded 515 pages/cpu s2072576 r8192 d28672 u4194304 [ 0.252568] Fallback order for Node 0: 0 [ 0.252613] Built 1 zonelists, mobility grouping on. Total pages: 516247 [ 0.252623] Policy zone: DMA32 [ 0.252658] Kernel command line: elfcorehdr=0x3d000000 BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap console=ttyS1,115200n81 irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr hest_disable novmcoredd cma=0 hugetlb_cma=0 disable_cpu_apicid=0 hpwdt.pretimeout=0 hpwdt.kdumptimeout=0 [ 0.252918] Misrouted IRQ fixup and polling support enabled [ 0.252925] This may significantly impact system performance [ 0.253551] cgroup: Disabling memory control group subsystem [ 0.254196] Unknown kernel command line parameters "nokaslr BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug", will be passed to user space. 
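The command line echoed just above is the capture kernel's, with the usual kdump restrictions visible in it (nr_cpus=1, numa=off, cgroup_disable=memory, mce=off, irqpoll, panic=10), which is why this second boot reports a single CPU and a faked single memory node. When working from a saved copy of a console capture like this one, the two "Command line:" occurrences can be pulled out and compared mechanically; the sketch below is a minimal, illustrative way to do that, and the file name console.log is an assumption rather than anything referenced by the log itself.

```python
# Minimal sketch: list the kernel command lines found in a saved console
# capture and show which options are unique to each boot.  The path
# "console.log" is hypothetical.
import re
from pathlib import Path

text = Path("console.log").read_text(errors="replace")
cmdlines = re.findall(r"Command line: (.*)", text)

if len(cmdlines) >= 2:
    first, capture = (set(c.split()) for c in cmdlines[:2])
    print("options only in the capture (kdump) kernel:", sorted(capture - first))
    print("options only in the first kernel:", sorted(first - capture))
else:
    print(f"found {len(cmdlines)} 'Command line:' entries; need two to compare")
```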
[ 0.254879] Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, li0.255243] Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) [ 0.255395] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.255406] Stack Depot early init allocating hash table with memblock_alloc, 8388608 bytes [ 0.813598] Memory: 179964K/2097768K available (38920K kernel code, 13007K rwdata, 14984K rodata, 5300K init, 42020K bss, 735340K reserved, 0K cma-reserved) [ 0.813658] random: get_random_u64 called from kmem_cache_open+0x22/0x380 with crng_init=0 [ 0.818089] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1 [ 0.818104] kmemleak: Kernel memory leak detector disabled [ 0.824233] Kernel/User page tables isolation: enabled [ 0.824745] ftrace: allocating 45745 entries in 179 pages [ 0.904280] ftrace: allocated 179 pages with 5 groups [ 0.913033] Dynamic Preempt: voluntary [ 0.913884] Running RCU self tests [ 0.913934] rcu: Preemptible hierarchical RCU implementation. [ 0.913941] rcu: RCU lockdep checking is enabled. [ 0.913948] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=1. [ 0.913959] rcu: RCU callback double-/use-after-free debug i [ 0.913966] Trampoline variant of Tasks RCU enabled. [ 0.913972] Rude variant of Tasks RCU enabled. [ 0.913978] Tracing variant of Tasks RCU enabled. [ 0.913985] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. [ 0.913994] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1 [ 0.955229] NR_IRQS: 524544, nr_irqs: 256, preallocated irqs: 16 [ 0.956468] rcu: srcu_init: Setting srcu_struct sizes based on contention. [ 0.956607] random: crng init done (trusting CPU's manufacturer) [ 0.956943] Spurious LAPIC timer interrupt on cpu 0 [ 0.962157] Console: colour VGA+ 80x25 [ 6.659456] printk: console [ttyS1] enabled [ 6.660912] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar [ 6.663516] ... MAX_LOCKDEP_SUBCLASSES: 8 [ 6.664920] ... MAX_LOCK_DEPTH: 48 [ 6.666349] ... MAX_LOCKDEP_KEYS: 8192 [ 6.667889] ... CLASSHASH_SIZE: 4096 [ 6.669370] ... MAX_LOCKDEP_ENTRIES: 65536 [ 6.670899] ... MAX_LOCKDEP_CHAINS: 131072 [ 6.672432] ... 
CHAINHASH_SIZE: 65536 [ 6.673953] memory used by locknfo: 11641 kB [ 7.175737] memory used for stack traces: 4224 kB [ 7.177478] per task-struct memory footprint: 2688 bytes [ 7.179613] ACPI: Core revision 20211217 [ 7.183772] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns [ 7.187527] APIC: Switch to symmetric I/O mode setup [ 7.189348] DMAR: Host address width 46 [ 7.190779] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0 [ 7.193066] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 7.195878] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1 [ 7.197961] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 7.200741] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff [ 7.203036] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff [ 7.205267] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff [ 7.207595] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff [ 7.209839] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff [ 7.212166] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff [ 7.214400] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff [ 7.216682] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff [ 7.218926] DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes [ 7.223605] DMABug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] [ 7.223605] BIOS vendor: HP; Ver: P71; Product Version: [ 7.728700] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff [ 7.730991] DMAR: ATSR flags: 0x0 [ 7.732277] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0 [ 7.734555] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1 [ 7.736840] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1 [ 7.739139] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000 [ 7.741172] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 7.741182] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. [ 7.751816] DMAR-IR: Copied IR table for dmar0 from previous kernel [ 7.759519] DMAR-IR: Copied IR table for dmar1 from previous kernel [ 7.762031] DMAR-IR: Enabled IRQ remapping in xapic mode [ 7.763869] x2apic: IRQ remapping doesn't support X2APIC mode [ 7.769937] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 7.776129] clocksource: tsc-early: mask: 0xffffmax_cycles: 0x1e329968412, max_idle_ns: 440795305758 ns [ 8.179879] Calibrating delay loop (skipped), value calculated using timer frequency.. 4189.92 BogoMIPS (lpj=2094962) [ 8.180862] pid_max: default: 32768 minimum: 301 [ 8.182874] LSM: Security Framework initializing [ 8.184899] Yama: becoming mindful. [ 8.186003] SELinux: Initializing. 
[ 8.188235] LSM support for eBPF active [ 8.190992] Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) [ 8.191885] Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) [ 8.201370] CPU0: Thermal monitoring enabled (TM1) [ 8.201886] process: using mwait in idle threads [ 8.202882] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 [ 8.203855] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4 [ 8.204884] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 8.205863] Spectre V2 : Mitigation: Retpolines [ 8.206856] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 8.208855] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT [ 8.209856] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 8.211865] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 8.213856] Spectre V2 : User n: STIBP via prctl [ 8.215860] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [ 8.216882] MDS: Mitigation: Clear CPU buffers [ 8.218855] MMIO Stale Data: Unknown: No mitigations [ 8.329669] Freeing SMP alternatives memory: 32K [ 8.331918] smpboot: CPU 0 Converting physical 1 to logical package 0 [ 8.333163] smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1170 [ 8.334908] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (family: 0x6, model: 0x3e, stepping: 0x4) [ 8.340897] cblist_init_generic: Setting adjustable number of callback queues. [ 8.341825] cblist_init_generic: Setting shift to 0 and lim to 1. [ 8.343336] cblist_init_generic: Setting shift to 0 and lim to 1. [ 8.344379] cblist_init_generic: Setting shift to 0 and lim to 1. [ 8.345287] Running RCU-tasks wait API self tests [ 8.452253] Performance Events: PEBS fmt1+, IvyBridge events, 16-deep LBR, full-width counters, Broken BIOS detected, complain to your hardware vendor. [ 8.452828] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 3b0) [ 8.] Intel PMU driver. [ 8.454852] ... version: 3 [ 8.455827] ... bit width: 48 [ 8.456825] ... generic registers: 4 [ 8.457824] ... value mask: 0000ffffffffffff [ 8.458825] ... max period: 00007fffffffffff [ 8.459824] ... fixed-purpose events: 3 [ 8.460825] ... event mask: 000000070000000f [ 8.463752] rcu: Hierarchical SRCU implementation. [ 8.463827] rcu: Max phase no-delay instances is 400. [ 8.467953] Callback from call_rcu_tasks_trace() invoked. [ 8.476293] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 8.477613] smp: Bringing up secondary CPUs ... [ 8.477853] smp: Brought up 1 node, 1 CPU [ 8.478826] smpboot: Max logical packages: 64 [ 8.479827] smpboot: Total of 1 processors activated (4189.92 BogoMIPS) [ 8.557331] Callback from call_rcu_tasks_rude() invoked. [ 8.859865] node 0 deferred pages initialised in 376ms [ 8.862514] pgdatinit0 (20) used greatest stack depth: 29432 bytes left [ 8.863983] devtmpfs: initialized [ 8.866388] x86/mm: Memory block size: 128MB [ 8.867882] Callback from call_rcu_tasks() invoked. 
[ 8.902225] DMA-API: preallocated 65536 debug entries [ 8.902822] DMA-API: debugging enabled by kernel config [ 8.903822] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [ 8.904847] futex hash table entries: 256 (order: 3, 32768 bytes, linear) [ 8.907042] prandom: seed boundary self test passed [ 8.908972] prandom: 100 self tests passed [ 8.914535] prandom32: self test passed (less than 6 bits correlated) [ 8.914829] pinctrl core: initialized pinctrl subsystem [ 8.917127] [ 8.917767] ************************************************************* [ 8.917822] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 8.918819] ** ** [ 8.919819] ** IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL ** [ 8.920819] ** ** [ 8.921819] ** This means that this kernel is built to expose internal ** [ 8.922819] ** IOMMU data structures, which may compromise security on ** [ 8.923819] ** your system. ** [ 8.924819] ** ** [ 8.925819] ** If you see this message and you are not debugging the ** [ 8.926819] ** kernel, report this immediately to your** [ 8.927820] ** ** [ 8.928819] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 8.929819] ************************************************************* [ 8.930905] PM: RTC time: 04:25:06, date: 2023-02-03 [ 8.937331] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 8.941786] DMA: preallocated 256 KiB GFP_KERNEL pool for atomic allocations [ 8.941892] DMA: preallocated 256 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [ 8.943054] audit: initializing netlink subsys (disabled) [ 8.947352] thermal_sys: Registered thermal governor 'fair_share' [ 8.947363] thermal_sys: Registered thermal governor 'step_wise' [ 8.947865] audit: type=2000 audit(1675398297.812:1): state=initialized audit_enabled=0 res=1 [ 8.949862] thermal_sys: Registered thermal governor 'user_space' [ 8.949968] cpuidle: using governor menu [ 8.952895] Detected 1 PCC Subspaces [ 8.953822] Registering PCC driver as Mailbox controller [ 8.955585] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB [ 8.955854] ACPI FADT declares the system doesn't support PCIe ASPM, so d 8.956823] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 8.959163] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] se 0xc0000000) [ 8.959829] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820 [ 9.024873] PCI: Using configuration type 1 for base access [ 9.025868] PCI: HP ProLiant DL360 detected, enabling pci=bfsort. [ 9.027264] core: PMU erratum BJ122, BV98, HSD29 workaround disabled, HT off [ 9.090153] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
[ 9.137793] HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB [ 9.137844] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 9.138821] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 9.150798] cryptd: max_cpu_qlen set to 1000 [ 9.155025] ACPI: Added _OSI(Module Device) [ 9.155828] ACPI: Added _OSI(Processor Device) [ 9.156842] ACPI: Added _OSI(3.0 _SCP Extensions) [ 9.157821] ACPI: Added _OSI(Processor Aggregator Device) [ 9.158838] ACPI: Added _OSI(Linux-Dell-Video) [ 9.159832] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 9.160834] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 9.487448] ACPI: 10 ACPI AML tables successfully acquired and loaded [ 9.560513] ACPI: Interpreter enabled [ 9.561071] ACPI: PM: (supports S0 S4 S5) [ 9.561845] ACPI: Using IOAPIC for interrupt routing [ 9.563337] HEST: Table parsing disabled. [ 9.563825] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 9.564819] PCI: Using E820 reservations for host bridge windows [ 9.778993] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f]) [ 9.779872] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 9.783827] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 9.784822] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 9.794610] PCI host bridge to bus 0000:00 [ 9.794835] pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window] [ 9.795828] pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window] [ 9.796827] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 9.797827] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 9.7988 pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window] [ 9.799828] pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window] [ 9.800827] pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window] [ 9.801827] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 9.802832] pci_bus 0000:00: root bus resource [bus 00-1f] [ 9.804209] pci 0000:00:00.0: [8086:0e00] type 00 class 0x060000 [ 9.805175] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold [ 9.807038] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400 [ 9.808124] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold [ 9.812371] pci 0000:00:01.1: [8086:0e03] type 01 class 0x060400 [ 9.813123] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold [ 9.816620] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400 [ 9.817124] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold [ 9.820747] pci 0000:00:02.1: [8086:0e05] type 01 class 0x060400 [ 9.821125] pci 0000:00:02.1: PME# supported from D0 D3hot D3cold [ 9.824611] pci 0000:00:02.2: [8086:0e06] type 01 class 0x060400 [ 9.825124] pci 0000:00:02.2: PME# supported from D0 D3hot D3cold [ 9.828640] pci 0000:00:02.3: [8086:0e07] type 01 class 0x060400 [ 9.829122] pci PME# supported from D0 D3hot D3cold [ 9.832607] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400 [ 9.833133] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold [ 9.836734] pci 0000:00:03.1: [8086:0e09] type 01 class 0x060400 [ 9.837122] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold [ 9.840580] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400 [ 9.841124] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold [ 9.844956] pci 0000:00:03.3: [8086:0e0b] type 01 class 0x060400 [ 9.846122] pci 0000:00:03.3: PME# supported from D0 D3hot D3cold [ 9.849576] pci 
0000:00:04.0: [8086:0e20] type 00 class 0x088000 [ 9.849860] pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit] [ 9.851887] pci 0000:00:04.1: [8086:0e21] type 00 class 0x088000 [ 9.852855] pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit] [ 9.854895] pci 0000:00:04.2: [8086:0e22] type 00 class 0x088000 [ 9.855854] pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit] [ 9.857891] pci 0000:00:04.3: [8086:0e23] type 00 class 0x088000 [ 9.858855] pci 0000:00:04.3: reg 0x10: [mem 0xf6c64bit] [ 9.860887] pci 0000:00:04.4: [8086:0e24] type 00 class 0x088000 [ 9.861855] pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit] [ 9.863948] pci 0000:00:04.5: [8086:0e25] type 00 class 0x088000 [ 9.864855] pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit] [ 9.866891] pci 0000:00:04.6: [8086:0e26] type 00 class 0x088000 [ 9.867854] pci 0000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit] [ 9.869890] pci 0000:00:04.7: [8086:0e27] type 00 class 0x088000 [ 9.870854] pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit] [ 9.872901] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000 [ 9.874860] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000 [ 9.876871] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020 [ 9.877845] pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff] [ 9.879946] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400 [ 9.881121] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold [ 9.884631] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320 [ 9.884852] pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff] [ 9.886070] pci 0000:00:1a.0: PME# supported from D0 [ 9.887781] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400 [ 9.888098] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold [ 9.892286] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400 [ 9.893098] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold [ 9.896913] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320 [ 9.897852] pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503ff] [ 9.899069] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold [ 9.900740] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401 [ 9.901849] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100 [ 9.905687] pci 0000:00:1f.2: [8086:1d00] type 00 class 0x01018f [ 9.905850] pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007] [ 9.906834] pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b] [ 9.907834] pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017] [ 9.908834] pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b] [ 9.909833] pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x402f] [ 9.910833] pci 0000:00:1f.2: reg 0x24: [io 0x4030-0x403f] [ 9.930559] pci 0000:04:00.0: [103c:323b] type 00 class 0x010400 [ 9.930853] pci 0000:04:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff 64bit] [ 9.931837] pci 0000:04:00.0: reg 0x18: [mem 0xf7ef0000-0xf7ef03ff 64bit] [ 9.93200.0: reg 0x20: [io 0x6000-0x60ff] [ 9.933844] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 9.935102] pci 0000:04:00.0: PME# supported from D0 D1 D3hot [ 9.943059] pci 0000:00:01.0: PCI bridge to [bus 04] [ 9.943829] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 9.944824] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 9.946292] pci 0000:00:01.1: PCI bridge to [bus 11] [ 9.949632] pci 0000:03:00.0: [14e4:1657] type 00 class 0x020000 [ 9.949856] pci 0000:03:00.0: reg 0x10: [mem 0xf6bf0000-0xf6bfffff 64bit pref] [ 9.950840] pci 0000:03:00.0: reg 0x18: [mem 0xf6be0000-0xf6beffff 64bit pref] [ 9.951839] pci 0000:03:00.0: 
reg 0x20: [mem 0xf6bd0000-0xf6bdffff 64bit pref] [ 9.952833] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 9.954135] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold [ 9.961571] pci 0000:03:00.1: [14e4:1657] type 00 class 0x020000 [ 9.961857] pci 0000:03:00.1: reg 0x10: [mem 0xf6bc0000-0xf6bcffff 64bit pref] [ 9.962841] pci 0000:03:00.1: reg 0x18: [mem 0xf6bb0000-0xf6bbffff 64bit pref] [ 9.963840] pci 0000:03:00.1: reg 0x20: [mem 0xf6ba0000-0xf6baffff 64bit pref] [ 9.964833] pci 0000:03:00.1: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 9.966134] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold 7] pci 0000:03:00.2: [14e4:1657] type 00 class 0x020000 [ 9.973857] pci 0000:03:00.2: reg 0x10: [mem 0xf6b90000-0xf6b9ffff 64bit pref] [ 9.974840] pci 0000:03:00.2: reg 0x18: [mem 0xf6b80000-0xf6b8ffff 64bit pref] [ 9.975840] pci 0000:03:00.2: reg 0x20: [mem 0xf6b70000-0xf6b7ffff 64bit pref] [ 9.976833] pci 0000:03:00.2: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 9.978135] pci 0000:03:00.2: PME# supported from D0 D3hot D3cold [ 9.985558] pci 0000:03:00.3: [14e4:1657] type 00 class 0x020000 [ 9.985856] pci 0000:03:00.3: reg 0x10: [mem 0xf6b60000-0xf6b6ffff 64bit pref] [ 9.986840] pci 0000:03:00.3: reg 0x18: [mem 0xf6b50000-0xf6b5ffff 64bit pref] [ 9.987840] pci 0000:03:00.3: reg 0x20: [mem 0xf6b40000-0xf6b4ffff 64bit pref] [ 9.988833] pci 0000:03:00.3: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 9.990336] pci 0000:03:00.3: PME# supported from D0 D3hot D3cold [ 9.997529] pci 0000:00:02.0: PCI bridge to [bus 03] [ 9.997831] pci 0000:00:02.0: bridge window [mem 0xf4000000-0xf40fffff] [ 9.998827] pci 0000:00:02.0: br0xf6b00000-0xf6bfffff 64bit pref] [ 10.000296] pci 0000:00:02.1: PCI bridge to [bus 12] [ 10.001344] pci 0000:02:00.0: [103c:323b] type 00 class 0x010400 [ 10.001853] pci 0000:02:00.0: reg 0x10: [mem 0xf7d00000-0xf7dfffff 64bit] [ 10.002838] pci 0000:02:00.0: reg 0x18: [mem 0xf7cf0000-0xf7cf03ff 64bit] [ 10.003832] pci 0000:02:00.0: reg 0x20: [io 0x5000-0x50ff] [ 10.004843] pci 0000:02:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 10.006095] pci 0000:02:00.0: PME# supported from D0 D1 D3hot [ 10.007951] pci 0000:00:02.2: PCI bridge to [bus 02] [ 10.008826] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 10.009823] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 10.011294] pci 0000:00:02.3: PCI bridge to [bus 13] [ 10.029183] pci 0000:00:03.0: PCI bridge to [bus 07] [ 10.030351] pci 0000:00:03.1: PCI bridge to [bus 14] [ 10.031403] pci 0000:00:03.2: PCI bridge to [bus 15] [ 10.032293] pci 0000:00:03.3: PCI bridge to [bus 16] [ 10.033318] pci 0000:00:11.0: PCI bridge to [bus 18] [ 10.034298] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 10.035713] pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 [ 10.035865] pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] [ 10.036842] p0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] [ 10.037841] pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] [ 10.041742] pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 [ 10.041864] pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] [ 10.042841] pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] [ 10.043841] pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] [ 10.045104] pci 0000:01:00.1: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] [ 10.046847] pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 [ 10.047863] pci 0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] [ 10.048840] pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] [ 10.049841] pci 
0000:01:00.2: reg 0x18: [mem 0xf6e00000-0xf6efffff] [ 10.050841] pci 0000:01:00.2: reg 0x1c: [mem 0xf6d80000-0xf6dfffff] [ 10.051841] pci 0000:01:00.2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] [ 10.052841] pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] [ 10.053841] pci 0000:01:00.2: reg 0x30: [mem 0x00000000-0x0000ffff pref] [ 10.055219] pci 0000:01:00.2: PME# supported from D0 D3hot D3cold [ 10.056838] pci 0000:01:00.4: [103cass 0x0c0300 [ 10.057945] pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] [ 10.062941] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 10.063827] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 10.064824] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 10.065828] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 10.066870] pci_bus 0000:17: extended config space not accessible [ 10.068323] pci 0000:00:1e.0: PCI bridge to [bus 17] (subtractive decode) [ 10.068848] pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) [ 10.069828] pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] (subtractive decode) [ 10.070827] pci 0000:00:1e.0: bridge window [io 0x0000-0x03af window] (subtractive decode) [ 10.071827] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) [ 10.072827] pci 0000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) [ 10.073827] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) [ 10.074827] pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive decode) [ 10.075827] pci 0000:00:1e.0: bridge window [mem 0x000a000ow] (subtractive decode) [ 10.088707] ACPI: PCI: Interrupt link LNKA configured for IRQ 5 [ 10.088823] ACPI: PCI: Interrupt link LNKA disabled [ 10.091798] ACPI: PCI: Interrupt link LNKB configured for IRQ 7 [ 10.091822] ACPI: PCI: Interrupt link LNKB disabled [ 10.094731] ACPI: PCI: Interrupt link LNKC configured for IRQ 10 [ 10.094822] ACPI: PCI: Interrupt link LNKC disabled [ 10.097736] ACPI: PCI: Interrupt link LNKD configured for IRQ 10 [ 10.097822] ACPI: PCI: Interrupt link LNKD disabled [ 10.100717] ACPI: PCI: Interrupt link LNKE configured for IRQ 5 [ 10.100822] ACPI: PCI: Interrupt link LNKE disabled [ 10.103741] ACPI: PCI: Interrupt link LNKF configured for IRQ 7 [ 10.103822] ACPI: PCI: Interrupt link LNKF disabled [ 10.106705] ACPI: PCI: Interrupt link LNKG configured for IRQ 0 [ 10.106822] ACPI: PCI: Interrupt link LNKG disabled [ 10.109747] ACPI: PCI: Interrupt link LNKH configured for IRQ 0 [ 10.109821] ACPI: PCI: Interrupt link LNKH disabled [ 10.111412] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f]) [ 10.111869] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 10.115679] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 10.110A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 10.121263] PCI host bridge to bus 0000:20 [ 10.121830] pci_bus 0000:20: root bus resource [mem 0xfb000000-0xfbffffff window] [ 10.122827] pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] [ 10.123822] pci_bus 0000:20: root bus resource [bus 20-3f] [ 10.125077] pci 0000:20:00.0: [8086:0e01] type 01 class 0x060400 [ 10.126096] pci 0000:20:00.0: PME# supported from D0 D3hot D3cold [ 10.128007] pci 0000:20:01.0: [8086:0e02] type 01 class 0x060400 [ 10.129112] pci 0000:20:01.0: PME# supported from D0 D3hot D3cold [ 10.132606] pci 0000:20:01.1: [8086:0e03] type 
01 class 0x060400 [ 10.133127] pci 0000:20:01.1: PME# supported from D0 D3hot D3cold [ 10.136629] pci 0000:20:02.0: [8086:0e04] type 01 class 0x060400 [ 10.137111] pci 0000:20:02.0: PME# supported from D0 D3hot D3cold [ 10.140720] pci 0000:20:02.1: [8086:0e05] type 01 class 0x060400 [ 10.141111] pci 0000:20:02.1: PME# supported from D0 D3hot D3cold [ 10.144629] pci 0000:20:02.2: [8086:0e06] type 01 class 0x060400 [ 10.145110] pci 0000:20:02.2: PME# supported from D0 D3hot D3cold [ 10.148605] pci 0000:20:02.3: [8086:0e07] type 01 class 0x060400 [ 10.149110] pci 0000:20:02.3: PME# supported from D0 D3hot D3cold [ 10.152715] pci 0000:20:03.0: [8086:0e08] type 01 class 0x060400 [ 10.153119] pci 0000:20:03.0: PME# supported from D0 D3hot D3cold [ 10.156618] pci 0000:20:03.1: pe 01 class 0x060400 [ 10.157112] pci 0000:20:03.1: PME# supported from D0 D3hot D3cold [ 10.160658] pci 0000:20:03.2: [8086:0e0a] type 01 class 0x060400 [ 10.161110] pci 0000:20:03.2: PME# supported from D0 D3hot D3cold [ 10.164588] pci 0000:20:03.3: [8086:0e0b] type 01 class 0x060400 [ 10.165222] pci 0000:20:03.3: PME# supported from D0 D3hot D3cold [ 10.168566] pci 0000:20:04.0: [8086:0e20] type 00 class 0x088000 [ 10.168854] pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] [ 10.170903] pci 0000:20:04.1: [8086:0e21] type 00 class 0x088000 [ 10.171854] pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] [ 10.173868] pci 0000:20:04.2: [8086:0e22] type 00 class 0x088000 [ 10.174853] pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] [ 10.176889] pci 0000:20:04.3: [8086:0e23] type 00 class 0x088000 [ 10.177853] pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] [ 10.179893] pci 0000:20:04.4: [8086:0e24] type 00 class 0x088000 [ 10.180853] pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] [ 10.182867] pci 0000:20:04.5: [8086:0e25] type 00 class 0x088000 [ 10.183853] pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] [ 10.185874] pci 0000:26] type 00 class 0x088000 [ 10.186854] pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] [ 10.188997] pci 0000:20:04.7: [8086:0e27] type 00 class 0x088000 [ 10.189854] pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] [ 10.191888] pci 0000:20:05.0: [8086:0e28] type 00 class 0x088000 [ 10.193848] pci 0000:20:05.2: [8086:0e2a] type 00 class 0x088000 [ 10.195873] pci 0000:20:05.4: [8086:0e2c] type 00 class 0x080020 [ 10.196844] pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] [ 10.199310] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 10.200307] pci 0000:20:01.0: PCI bridge to [bus 21] [ 10.201301] pci 0000:20:01.1: PCI bridge to [bus 22] [ 10.202314] pci 0000:20:02.0: PCI bridge to [bus 23] [ 10.203390] pci 0000:20:02.1: PCI bridge to [bus 24] [ 10.204292] pci 0000:20:02.2: PCI bridge to [bus 25] [ 10.205309] pci 0000:20:02.3: PCI bridge to [bus 26] [ 10.206298] pci 0000:20:03.0: PCI bridge to [bus 27] [ 10.207291] pci 0000:20:03.1: PCI bridge to [bus 28] [ 10.208302] pci 0000:20:03.2: PCI bridge to [bus 29] [ 10.209320] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 10.216328] iommu: Default domain type: Translated [ 10.216821] iommu: DMA domain TLBpolicy: lazy mode [ 10.222355] SCSI subsystem initialized [ 10.223811] ACPI: bus type USB registered [ 10.225563] usbcore: registered new interface driver usbfs [ 10.226171] usbcore: registered new interface driver hub [ 10.227041] usbcore: registered new device driver usb [ 10.229207] pps_core: LinuxPPS API ver. 
1 registered [ 10.229825] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 10.230948] PTP clock support registered [ 10.232930] EDAC MC: Ver: 3.0.0 [ 10.242369] NetLabel: Initializing [ 10.242826] NetLabel: domain hash size = 128 [ 10.243824] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 10.245312] NetLabel: unlabeled traffic allowed by default [ 10.245829] PCI: Using ACPI for IRQ routing [ 10.248187] PCI: Discovered peer bus 1f [ 10.249790] PCI host bridge to bus 0000:1f [ 10.249840] pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] [ 10.250827] pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 10.251823] pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] [ 10.252821] pci_bus 0000:1f: binsert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 10.253869] pci 0000:1f:08.0: [8086:0e80] type 00 class 0x088000 [ 10.255590] pci 0000:1f:09.0: [8086:0e90] type 00 class 0x088000 [ 10.256530] pci 0000:1f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 10.257518] pci 0000:1f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 10.258520] pci 0000:1f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 10.259519] pci 0000:1f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 10.260594] pci 0000:1f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 10.261512] pci 0000:1f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 10.262524] pci 0000:1f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 10.263519] pci 0000:1f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 10.264506] pci 0000:1f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 10.265554] pci 0000:1f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 10.266503] pci 0000:1f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 10.267529] pci 0000:1f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 10.268515] pci 0000:1f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 10.269611] pci 0000:1f:0e.1: [8086:0e30] type 00 class 0x110100 [ 10.2705370f.0: [8086:0ea8] type 00 class 0x088000 [ 10.271645] pci 0000:1f:0f.1: [8086:0e71] type 00 class 0x088000 [ 10.272622] pci 0000:1f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 10.273651] pci 0000:1f:0f.3: [8086:0eab] type 00 class 0x088000 [ 10.274619] pci 0000:1f:0f.4: [8086:0eac] type 00 class 0x088000 [ 10.275633] pci 0000:1f:0f.5: [8086:0ead] type 00 class 0x088000 [ 10.276626] pci 0000:1f:10.0: [8086:0eb0] type 00 class 0x088000 [ 10.277716] pci 0000:1f:10.1: [8086:0eb1] type 00 class 0x088000 [ 10.278664] pci 0000:1f:10.2: [8086:0eb2] type 00 class 0x088000 [ 10.279641] pci 0000:1f:10.3: [8086:0eb3] type 00 class 0x088000 [ 10.280628] pci 0000:1f:10.4: [8086:0eb4] type 00 class 0x088000 [ 10.281631] pci 0000:1f:10.5: [8086:0eb5] type 00 class 0x088000 [ 10.282638] pci 0000:1f:10.6: [8086:0eb6] type 00 class 0x088000 [ 10.283642] pci 0000:1f:10.7: [8086:0eb7] type 00 class 0x088000 [ 10.284947] pci 0000:1f:13.0: [8086:0e1d] type 00 class 0x088000 [ 10.286530] pci 0000:1f:13.1: [8086:0e34] type 00 class 0x110100 [ 10.287587] pci 0000:1f:13.4: [8086:0e81] type 00 class 0x088000 [ 10.288534] pci 0000e36] type 00 class 0x110100 [ 10.289542] pci 0000:1f:16.0: [8086:0ec8] type 00 class 0x088000 [ 10.290509] pci 0000:1f:16.1: [8086:0ec9] type 00 class 0x088000 [ 10.291505] pci 0000:1f:16.2: [8086:0eca] type 00 class 0x088000 [ 10.292548] pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f [ 10.292823] pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 10.294777] PCI: Discovered peer bus 3f [ 10.295338] PCI host bridge to bus 0000:3f [ 
10.295827] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] [ 10.296825] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 10.297821] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff] [ 10.298820] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 10.299863] pci 0000:3f:08.0: [8086:0e80] type 00 class 0x088000 [ 10.301687] pci 0000:3f:09.0: [8086:0e90] type 00 class 0x088000 [ 10.302514] pci 0000:3f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 00:3f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 10.304505] pci 0000:3f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 10.305512] pci 0000:3f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 10.306523] pci 0000:3f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 10.307509] pci 0000:3f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 10.308520] pci 0000:3f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 10.309515] pci 0000:3f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 10.310566] pci 0000:3f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 10.311535] pci 0000:3f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 10.312543] pci 0000:3f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 10.313507] pci 0000:3f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 10.314510] pci 0000:3f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 10.315516] pci 0000:3f:0e.1: [8086:0e30] type 00 class 0x110100 [ 10.316523] pci 0000:3f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 10.317624] pci 0000:3f:0f.1: [8086:0e71] type 00 class 0x088000 [ 10.318709] pci 0000:3f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 10.319635] pci 0000:3f:0f.3: [8086:0eab] type 00 class 0x088000 [ 10.320602] pci 0000:3f:0f.4: [8086:0eac] type 00 class 0x088000 [ 10.321637] pci 0000:3f:0d] type 00 class 0x088000 [ 10.322619] pci 0000:3f:10.0: [8086:0eb0] type 00 class 0x088000 [ 10.323632] pci 0000:3f:10.1: [8086:0eb1] type 00 class 0x088000 [ 10.324624] pci 0000:3f:10.2: [8086:0eb2] type 00 class 0x088000 [ 10.325630] pci 0000:3f:10.3: [8086:0eb3] type 00 class 0x088000 [ 10.326600] pci 0000:3f:10.4: [8086:0eb4] type 00 class 0x088000 [ 10.327705] pci 0000:3f:10.5: [8086:0eb5] type 00 class 0x088000 [ 10.328611] pci 0000:3f:10.6: [8086:0eb6] type 00 class 0x088000 [ 10.329624] pci 0000:3f:10.7: [8086:0eb7] type 00 class 0x088000 [ 10.330605] pci 0000:3f:13.0: [8086:0e1d] type 00 class 0x088000 [ 10.331521] pci 0000:3f:13.1: [8086:0e34] type 00 class 0x110100 [ 10.332501] pci 0000:3f:13.4: [8086:0e81] type 00 class 0x088000 [ 10.333508] pci 0000:3f:13.5: [8086:0e36] type 00 class 0x110100 [ 10.334522] pci 0000:3f:16.0: [8086:0ec8] type 00 class 0x088000 [ 10.335582] pci 0000:3f:16.1: [8086:0ec9] type 00 class 0x088000 [ 10.336500] pci 0000:3f:16.2: [8086:0eca] type 00 class 0x088000 [ 10.337574] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f [ 10.337823] pci_bus 0000:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 10.352029] pci 0000:01:00.1: vgaarb: setting as boot VGA dev12] pci 0000:01:00.1: vgaarb: bridge control possible [ 10.352812] pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [ 10.353009] vgaarb: loaded [ 10.354648] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 [ 10.354821] hpet0: 8 comparators, 64-bit 14.318180 MHz counter [ 10.359484] clocksource: Switched to clocksource tsc-early [ 10.779726] VFS: Disk quotas dquot_6.6.0 [ 10.785764] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 10.793140] pnp: PnP ACPI init [ 
10.799435] system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved [ 10.806230] system 00:01: [io 0x0408-0x040f] has been reserved [ 10.808419] system 00:01: [io 0x04d0-0x04d1] has been reserved [ 10.810499] system 00:01: [io 0x0310-0x0315] has been reserved [ 10.812557] system 00:01: [io 0x0316-0x0317] has been reserved [ 10.814617] system 00:01: [io 0x0700-0x071f] has been reserved [ 10.816664] system 00:01: [io 0x0880-0x08ff] has been reserved [ 10.818788] system 00:01: [io 0x0900-0x097f] has been reserved [ 10.820856] system 00:01: [io 0x0cd4-0x0cd7] has been reserved [ 10.822920] system 00:01: [io 0x0cd0-0x0cd3] has been reserved [ 10.824978] system 00:01: [io 0x0f50-0x0f58] has been reserved [ 10.827183] system 00:01: [io 0x0ca0-0x0ca1] has been reserved [ 10.829274] system 00:01: [io 0x0ca4-0x0ca5] has been reserved [ 10.831325] system 00:01: [io 0x02f8-0x02ff] has been reserved [ 10.833596] system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved [ 10.836274] system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved [ 10.838789] system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved [ 10.841072] system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 10.843380] system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved [ 10.845671] system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved [ 10.848033] system 00:01: [mem 0xff800000-0xffffffff] has been reserved [ 10.865950] system 00:06: [mem 0xfbefe000-0xfbefffff] could not be reserved [ 10.869951] pnp: PnP ACPI: found 7 devices [ 10.909157] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 10.912812] NET: Registered PF_INET protocol family [ 10.914948] IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) [ 10.920482] tcp_listen_portaddr_hash hash table entries: 1024 (order: 4, 81920 bytes, linear) [ 10.923580] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) [ 10.926268] TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) [ 10.929634] TCP bind hash table entries: 16384 (order: 8, 1310720 bytes, linear) [ 10.933103] TCP: Hash tables configured (established 16384 bind 16384) [ 10.936204] MPTCP token hash table entries: 2048 (order: 5, 196608 bytes, linear) [ 10.939156] UDP hash table entries: 1024 (order: 5, 196608 bytes, linear) [ 10.941970] UDP-Lite hash table entries: 1024 (order: 5, 196608 bytes, linear) [ 10.945421] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 10.947599] NET: Registered PF_XDP protocol family [ 10.949369] pBAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] [ 11.451924] pci 0000:00:01.0: PCI bridge to [bus 04] [ 11.453701] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 11.455860] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 11.458274] pci 0000:00:01.1: PCI bridge to [bus 11] [ 11.460064] pci 0000:03:00.0: BAR 6: assigned [mem 0xf4000000-0xf403ffff pref] [ 11.462642] pci 0000:03:00.1: BAR 6: assigned [mem 0xf4040000-0xf407ffff pref] [ 11.465105] pci 0000:03:00.2: BAR 6: assigned [mem 0xf4080000-0xf40bffff pref] [ 11.467625] pci 0000:03:00.3: BAR 6: assigned [mem 0xf40c0000-0xf40fffff pref] [ 11.470083] pci 0000:00:02.0: PCI bridge to [bus 03] [ 11.471810] pci 0000:00:02.0: bridge window [mem 0xf4000000-0xf40fffff] [ 11.474141] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 11.476793] pci 0000:00:02.1: PCI bridge to [bus 12] [ 11.478605] pci 0000:02:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] [ 11.481076] pci 0000:00:02.2: 
PCI bridge to [bus 02] [ 11.482797] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 11.484885] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 11.487229] pci 0000:00:02.3: PCI bridge to [bus 13] [ 11.488991] pci 0000:00:03.0: PCI bridge to [bus 07] [ 11.49:00:03.1: PCI bridge to [bus 14] [ 11.892642] pci 0000:00:03.2: PCI bridge to [bus 15] [ 11.894428] pci 0000:00:03.3: PCI bridge to [bus 16] [ 11.896168] pci 0000:00:11.0: PCI bridge to [bus 18] [ 11.897994] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 11.900151] pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] [ 11.902661] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 11.904358] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 11.906504] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 11.908908] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 11.911564] pci 0000:00:1e.0: PCI bridge to [bus 17] [ 11.913271] pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 11.915636] pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] [ 11.917837] pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] [ 11.919947] pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] [ 11.922064] pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] [ 11.924167] pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] [ 11.926264] pci_bus 0000:00: resource 10 [io 0x03c0-0x03df window] [ 11.928495] pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bfff2.237391] pci_bus 0000:04: resource 0 [io 0x6000-0x6fff] [ 12.432589] pci_bus 0000:04: resource 1 [mem 0xf7e00000-0xf7ffffff] [ 12.434758] pci_bus 0000:03: resource 1 [mem 0xf4000000-0xf40fffff] [ 12.436954] pci_bus 0000:03: resource 2 [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 12.439490] pci_bus 0000:02: resource 0 [io 0x5000-0x5fff] [ 12.441380] pci_bus 0000:02: resource 1 [mem 0xf7c00000-0xf7dfffff] [ 12.443546] pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] [ 12.445480] pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] [ 12.447698] pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] [ 12.450142] pci_bus 0000:17: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 12.452486] pci_bus 0000:17: resource 5 [io 0x1000-0x7fff window] [ 12.454592] pci_bus 0000:17: resource 6 [io 0x0000-0x03af window] [ 12.456699] pci_bus 0000:17: resource 7 [io 0x03e0-0x0cf7 window] [ 12.458873] pci_bus 0000:17: resource 8 [io 0x0d00-0x0fff window] [ 12.460983] pci_bus 0000:17: resource 9 [io 0x03b0-0x03bb window] [ 12.463089] pci_bus 0000:17: resource 10 [io 0x03c0-0x03df window] [ 12.465229] pci_bus 0000:17: resource 11 [mem 0x000a0000-0x000bffff window] [ 12.468870] pci 0000:20:00 [bus 2b] [ 12.970528] pci 0000:20:01.0: PCI bridge to [bus 21] [ 12.972321] pci 0000:20:01.1: PCI bridge to [bus 22] [ 12.974118] pci 0000:20:02.0: PCI bridge to [bus 23] [ 12.975864] pci 0000:20:02.1: PCI bridge to [bus 24] [ 12.977666] pci 0000:20:02.2: PCI bridge to [bus 25] [ 12.979371] pci 0000:20:02.3: PCI bridge to [bus 26] [ 12.981147] pci 0000:20:03.0: PCI bridge to [bus 27] [ 12.982877] pci 0000:20:03.1: PCI bridge to [bus 28] [ 12.984648] pci 0000:20:03.2: PCI bridge to [bus 29] [ 12.986352] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 12.988164] pci_bus 0000:20: resource 4 [mem 0xfb000000-0xfbffffff window] [ 12.990530] pci_bus 0000:20: resource 5 [io 0x8000-0xffff window] [ 12.992950] pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] [ 12.994908] pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 12.997244] pci_bus 0000:3f: resource 4 [io 
0x0000-0xffff] [ 12.999208] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 13.001610] pci 0000:00:05.0: disabled boot interrupts on device [8086:0e28] [ 13.017246] pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x290 took 12886 usecs [ 13.032717] pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x290 took 12530 usecs [ 13.046106] pci 0000:01:00.4: quirk_usb_early_handoff+0x0/0x290 took 104913.300442] pci 0000:20:05.0: disabled boot interrupts on device [8086:0e28] [ 13.451053] pci 0000:20:05.0: quirk_disable_intel_boot_interrupt+0x0/0x1f0 took 147085 usecs [ 13.454240] PCI: CLS 64 bytes, default 64 [ 13.457367] Trying to unpack rootfs image as initramfs... [ 13.706763] Freeing initrd memory: 39648K [ 13.712671] ACPI: bus type thunderbolt registered [ 13.726802] Initialise system trusted keyrings [ 13.728572] Key type blacklist registered [ 13.730578] workingset: timestamp_bits=36 max_order=19 bucket_order=0 [ 13.797096] zbud: loaded [ 13.808034] integrity: Platform Keyring initialized [ 13.822163] NET: Registered PF_ALG protocol family [ 13.823963] xor: automatically using best checksumming function avx [ 13.826414] Key type asymmetric registered [ 13.827954] Asymmetric key parser 'x509' registered [ 13.829729] Running certificate verification selftests [ 13.885898] cryptomgr_test (42) used greatest stack depth: 28760 bytes left [ 13.941973] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [ 13.945709] cryptomgr_test (43) used greatest stack depth: 28584 bytes left [ 13.951056] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [ 13.953996] io scheduler mq-deadline registered [ 13.955623] io scheduler kyber registered [ 13.957981] io scheduler bfq registered [ 13.963430] atomic64_test: passed for x86-64 platform with CX8 and with SSE [ 14.521202] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 14.529607] ACPI: \_PR_.CP2B: Found 2 idle states [ 14.538489] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 14.544118] ACPI: button: Power Button [PWRF] [ 14.561699] thermal LNXTHERM:00: registered as thermal_zone0 [ 14.563811] ACPI: thermal: Thermal Zone [THM0] (8 C) [ 14.569411] ERST: Error Record Serialization Table (ERST) support is initialized. [ 14.572372] pstore: Registered erst as persistent store backend [ 14.574806] GHES: HEST is not enabled! 
[ 14.580138] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 14.584028] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 14.591698] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A [ 14.607881] Non-volatile memory driver v1.3 [ 14.619435] rdac: device handler registered [ 14.621767] hp_sw: device handler registered [ 14.623360] emc: device handler registered [ 14.625381] alua: device handler registered [ 14.632320] libphy: Fixed MDIO Bus: probed [ 14.635617] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 14.638225] ehci-pci: EHCI PCI platform driver [ 14.654346] ehci-pci 0000:00:1a.0: EHCI Host Controller [ 14.657995] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 [ 14.660735] ehci-pci 0000:00:1a.0: debug port 2 [ 14.667156] ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 [ 14.675894] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 [ 14.679998] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 14.682941] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 14.685435] usb usb1: Product: EHCI Host Controller [ 14.687181] usb usb1: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 14.690131] usb usb1: SerialNumber: 0000:00:1a.0 [ 14.695440] hub 1-0:1.0: USB hub found [ 14.697337] hub 1-0:1.0: 2 ports detected [ 14.712732] ehci-pci 0000:00:1d.0: EHCI Host Controller [ 14.715484] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 14.718411] ehci-pci 0000:00:1d.0: debug port 2 [ 14.725274] ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 [ 14.733873] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 [ 14.737079] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 14.739976] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 14.742473] usb usb2: Product: EHCI Host Controller [ 14.744147] usb usb2: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 14.747086] usb usb2: SerialNumber: 0000:00:1d.0 [ 14.751418] hub 2-0:1.0: USB hub found [ 14.753074] hub 2-0:1.0: 2 ports detected [ 14.756018] tsc: Refined TSC clocksource calibration: 2094.951 MHz [ 14.758554] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328ef1914, max_idle_ns: 440795263413 ns [ 14.765702] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 14.768072] ohci-pci: OHCI PCI platform driver [ 14.770106] uhci_hcd: USB Universal Host Controller Interface driver [ 14.772581] clocksource: Switched to clocksource tsc [ 14.779057] uhci_hcd 0000:01:00.4: UHCI Host Controller [ 14.782367] uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 [ 14.785441] uhci_hcd 0000:01:00.4: detected 8 ports [ 14.787242] uhci_hcd 0000:01:00.4: port count misdetected? 
forcing to 2 ports [ 14.790083] uhci_hcd 0000:01:00.4: irq 47, io port 0x00003c00 [ 14.793689] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [ 14.796859] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 14.799705] usb usb3: Product: UHCI Host Controller [ 14.801445] usb usb3: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug uhci_hcd [ 14.804577] usb usb3: SerialNumber: 0000:01:00.4 [ 14.809715] hub 3-0:1.0: USB hub found [ 14.811545] hub 3-0:1.0: 2 ports detected [ 14.819462] usbcore: registered new interface driver usbserial_generic [ 14.822278] usbserial: USB Serial support registered for generic [ 14.825589] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 [ 14.832382] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 14.834337] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 14.839647] mousedev: PS/2 mouse device common for all mice [ 14.844501] rtc_cmos 00:03: RTC can wake from S4 [ 14.850220] rtc_cmos 00:03: registered as rtc0 [ 14.851964] rtc_cmos 00:03: setting system clock to 2023-02-03T04:25:22 UTC (1675398322) [ 15.254704] hpet: Lost 25 RTC interrupts [ 15.258244] rtc_cmos 00:03: alarms up to one day, bytes nvram, hpet irqs [ 15.360937] usb 1-1: new high-speed USB device number 2 using ehci-pci [ 15.364968] intel_pstate: Intel P-state driver initializing [ 15.391971] hid: raw HID events driver (C) Jiri Kosina [ 15.394860] usbcore: registered new interface driver usbhid [ 15.396841] usbhid: USB HID core driver [ 15.399065] drop_monitor: Initializing network drop monitor service [ 15.416352] usb 2-1: new high-speed USB device number 2 using ehci-pci [ 15.436402] Initializing XFRM netlink socket [ 15.441891] NET: Registered PF_INET6 protocol family [ 15.449631] Segment Routing with IPv6 [ 15.451096] NET: Registered PF_PACKET protocol family [ 15.453521] mpls_gso: MPLS GSO support [ 15.454898] mce: Unable to init MCE device (rc: -5) [ 15.459006] microcode: sig=0x306e4, pf=0x1, revision=0x42e [ 15.461044] microcode: Microcode Update Driver: v2.2. [ 15.461066] IPI shorthand broadcast: enabled [ 15.464481] AVX version of gcm_enc/dec engaged.
[ 15.466175] AES CTR mode by8 optimization enabled [ 15.471483] sched_clock: Marking stable (8295536188, 7175291522)->(26590451596, -11119623886) [ 15.477418] registered taskstats version5.593334] Loading compiled-in X.509 certificates [ 15.784009] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 15.789732] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [ 15.795257] usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 15.798369] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 15.801727] usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 15.805029] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 15.808724] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [ 15.816770] hub 1-1:1.0: USB hub found [ 15.819143] hub 2-1:1.0: USB hub found [ 15.822571] hub 2-1:1.0: 8 ports detected [ 15.824145] hub 1-1:1.0: 6 ports detected [ 15.827105] zswap: loaded using pool lzo/zbud [ 15.829190] cryptomgr_test (67) used greatest stack depth: 27920 bytes left [ 15.834770] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table h85623] page_owner is disabled [ 16.345233] pstore: Using crash dump compression: deflate [ 16.347730] Key type big_key registered [ 16.367094] modprobe (70) used greatest stack depth: 27608 bytes left [ 16.384265] Key type encrypted registered [ 16.385986] ima: No TPM chip found, activating TPM-bypass! [ 16.388256] Loading compiled-in module X.509 certificates [ 16.391580] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 16.395761] ima: Allocated hash algorithm: sha256 [ 16.397703] ima: No architecture policies found [ 16.400200] evm: Initialising EVM extended attributes: [ 16.402013] evm: security.selinux [ 16.403186] evm: security.SMACK64 (disabled) [ 16.404687] evm: security.SMACK64EXEC (disabled) [ 16.406264] evm: security.SMACK64TRANSMUTE (disabled) [ 16.408094] evm: security.SMACK64MMAP (disabled) [ 16.409699] evm: security.apparmor (disabled) [ 16.411208] evm: security.ima [ 16.412257] evm: security.capability [ 16.413498] evm: HMAC attrs: 0x1 [ 16.443447] cryptomgr_test (75) used greatest stack depth: 27416 bytes left [ 16.509868] usb 2-1.3: new high-speed USB device number 3 using ehci-pci [ 16.595730] usb 2-1.3: New vice found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 [ 16.798789] usb 2-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 16.808383] hub 2-1.3:1.0: USB hub found [ 16.812916] hub 2-1.3:1.0: 2 ports detected [ 16.986442] cryptomgr_test (177) used greatest stack depth: 27032 bytes left [ 17.187121] PM: Magic number: 11:993:412 [ 17.219683] Freeing unused decrypted memory: 2036K [ 17.226657] Freeing unused kernel image (initmem) memory: 5300K [ 17.227780] Write protecting the kernel read-only data: 57344k [ 17.233957] Freeing unused kernel image (text/rodata gap) memory: 2036K [ 17.236854] Freeing unused kernel image (rodata/data gap) memory: 1400K [ 17.311975] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 17.312399] x86/mm: Checking user space page tables [ 17.383413] x86/mm: Checked W+X mappings: passed, no W+X pages found. 
[ 17.383998] Run /init as init process [ 17.446560] mkdir (187) used greatest stack depth: 27016 bytes left [ 17.560925] loop: module loaded [ 17.597600] squashfs: version 4.0 (2009/01/31) Phillip Lougher [ 17.699091] loop0: detected capacity change from 0 to 54376 [ 17.729425] overlayfs: upper fs does not support RENAME_WHITEOUT. [ 17.747412] mount (205) used greatest stack depth: 25128 bytes left [ 18.208272] systemd[1]: RTC configured in localtime, applying delta of -300 minutes to system time. [ 18.300404] modprobe (207) used greatest stack depth: 24312 bytes left [ 18.321418] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 18.339300] systemd[1]: Detected architecture x86-64. [ 18.339641] systemd[1]: Running in initrd. Welcome to CentOS Stream 9 dracut-057-20.git20221213.el9 (Initramfs) ! [ 18.350043] systemd[1]: Hostname set to . [ 18.950613] systemd[1]: /usr/lib/systemd/system/kdump-capture.service:23: Standard output type syslog is obsolete, automatically updating to journal. Please update your unit file, and consider removing the setting altogether. [ 18.952298] systemd[1]: /usr/lib/systemd/system/kdump-capture.service:24: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. [ 19.002194] systemd[1]: Queued start job for default target Initrd Default Target. [ 19.006013] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 19.011175] systemd[1]: Reached target Initrd Root Device. [ OK ] Reached target Initrd Root Device . [ 19.015135] systemd[1]: Reached target Initrd /usr File System. [ OK ] Reached target Initrd /usr File System . [ 19.019048] systemd[1]: Reached target Local File Systems. [ OK ] Reached target Local File Systems . [ 19.023041] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 19.028060] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 19.033047] systemd[1]: Reached target Swaps. [ OK ] Reached target Swaps . [ 19.037040] systemd[1]: Reached target Timer Units. [ OK ] Reached target Timer Units . [ 19.042705] systemd[1]: Listening on D-Bus System Message Bus Socket. [ OK ] Listening on D-Bus System Message Bus Socket . [ 19.046297] systemd[1]: Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket (/dev/log) . [ 19.052769] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket . [ 19.057187] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 19.062346] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 19.066043] systemd[1]: Reached target Socket Units. [ OK ] Reached target Socket Units . [ 19.070397] systemd[1]: Create List of Static Device Nodes was skipped because of an unmet condition check (ConditionFileNotEmpty=/lib/modules/5.14.0-256.2009_766119311.el9.x86_64+debug/modules.devname). [ 19.098175] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 19.111408] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. 
[ 19.129537] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 19.169796] systemd[1]: Starting Create System Users... Starting Create System Users ... [ 19.224738] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console ... [ 19.418168] systemd[1]: Finished Apply Kernel Variables. [ OK ] Finished Apply Kernel Variables . [ 19.501651] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . [ 12.353052] systemd-sysusers[232]: Creating group 'users' with GID 100. [ 12.391376] systemd-sysusers[232]: Creating group 'dbus' with GID 81. [ 12.419777] systemd-sysusers[232]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81. [ 12.468453] systemd[1]: Finished Create System Users. [ OK ] Finished Create System Users . [ 12.494956] systemd[1]: Starting Create Static Device Nodes in /dev... Starting Create Static Device Nodes in /dev ... [ 12.539773] systemd[1]: Starting Create Volatile Files and Directories... Starting Create Volatile Files and Directories ... [ 12.773017] systemd[1]: Finished Create Static Device Nodes in /dev. [ OK ] Finished Create Static Device Nodes in /dev . [ 12.915750] systemd[1]: Finished Create Volatile Files and Directories. [ OK ] Finished Create Volatile Files and Directories . [ 13.117174] systemd[1]: Finished Setup Virtual Console. [ OK ] Finished Setup Virtual Console . [ 13.136353] systemd[1]: Starting dracut ask for additional cmdline parameters... Starting dracut ask for additional cmdline parameters ... [ 13.254720] systemd[1]: Finished dracut ask for additional cmdline parameters. [ OK ] Finished dracut ask for additional cmdline parameters . [ 13.271335] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook ... [ 13.369254] dracut-cmdline[253]: dracut-9 dracut-057-20.git20221213.el9 [ 13.397958] dracut-cmdline[253]: Using kernel command line parameters: rd.neednet kdump_remote_ip=10.0.152.11 elfcorehdr=0x3d000000 BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap console=ttyS1,115200n81 irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 acpi_no_memhotplug transparent_hugepage=never nokaslr hest_disable novmcoredd cma=0 hugetlb_cma=0 disable_cpu_apicid=0 hpwdt.pretimeout=0 hpwdt.kdumptimeout=0 [ 21.060148] dracut-cmdline (253) used greatest stack depth: 24280 bytes left [ 13.892024] systemd[1]: Finished dracut cmdline hook. [ OK ] Finished dracut cmdline hook . [ 13.908649] systemd[1]: Starting dracut pre-udev hook... Starting dracut pre-udev hook ... [ 21.418654] RPC: Registered named UNIX socket transport module. [ 21.419608] RPC: Registered udp transport module. [ 21.420468] RPC: Registered tcp transport module. [ 21.421559] RPC: Registered tcp NFSv4.1 backchannel transport module. [ 14.352675] rpc.idmapd[336]: Setting log level to 0 [ 14.485319] systemd[1]: Finished dracut pre-udev hook. [ OK ] Finished dracut pre-udev hook . [ 14.503484] systemd[1]: Starting Rule-based Manager for Device Events and Files... Starting Rule-based Manage…for Device Events and Files ... [ 14.611907] systemd-udevd[349]: Using default interface naming scheme 'rhel-9.0'. [ OK ] Started Rule-based Manager for Device Events and Files . [ 14.659953] systemd[1]: Started Rule-based Manager for Device Events and Files. [ 14.667826] systemd[1]: dracut pre-trigger hook was skipped because no trigger condition checks were met. 
[ 14.683624] systemd[1]: Starting Coldplug All udev Devices... Starting Coldplug All udev Devices ... [ * ] A start job is running for Coldplug All udev Devices (4s / no limit) M [ * * ] A start job is running for Coldplug All udev Devices (5s / no limit) [ 17.256204] systemd[1]: Finished Coldplug All udev Devices. M [ OK ] Finished Coldplug All udev Devices . [ 17.262120] systemd[1]: Reached target System Initialization. [ OK ] Reached target System Initialization . [ OK ] Reached target Basic System . [ 17.270940] systemd[1]: Reached target Basic System. [ 17.275865] systemd[1]: System is tainted: local-hwclock [ 17.299135] systemd[1]: Starting nm-initrd.service... Starting nm-initrd.service ... [ 18.137180] NetworkManager[353]: [1675416332.9298] NetworkManager (version 1.41.90-1.el9) is starting... (boot:0502f8d8-6baf-4a35-926a-3566f89cf188) [ 18.142061] NetworkManager[353]: [1675416332.9613] Read config: /etc/NetworkManager/NetworkManager.conf (lib: initrd-no-auto-default.conf) (etc: 10-kdump-netif_allowlist.conf, 95-kdump-timeouts.conf) [ 18.189956] systemd[1]: Starting D-Bus System Message Bus... Starting D-Bus System Message Bus ... [ OK ] Started D-Bus System Message Bus . [ 18.377822] systemd[1]: Started D-Bus System Message Bus. [ 18.475031] dbus-broker-lau[355]: Ready [ 18.487679] NetworkManager[353]: [1675416333.3106] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" [ OK ] Started nm-initrd.service . [ 18.493049] systemd[1]: Started nm-initrd.service. [ 18.495292] systemd[1]: Reached target Network. [ OK ] Reached target Network . [ 18.536768] systemd[1]: Starting nm-wait-online-initrd.service... Starting nm-wait-online-initrd.service ... [ 18.583880] NetworkManager[353]: [1675416333.4069] manager[0x55cdf26e3080]: monitoring kernel firmware directory '/lib/firmware'. 
[ 18.613160] NetworkManager[353]: [1675416333.4363] hostname: hostname: couldn't get property from hostnamed [ 18.618431] NetworkManager[353]: [1675416333.4416] hostname: static hostname changed from (none) to "hpe-dl360pgen8-08.hpe2.lab.eng.bos.redhat.com" [ 18.641748] NetworkManager[353]: [1675416333.4638] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) [ 18.879928] NetworkManager[353]: [1675416333.7028] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.41.90-1.el9/libnm-device-plugin-team.so) [ 18.883879] NetworkManager[353]: [1675416333.7045] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file [ 18.888198] NetworkManager[353]: [1675416333.7060] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file [ 18.891844] NetworkManager[353]: [1675416333.7079] manager: Networking is enabled by state file [ 18.897882] NetworkManager[353]: [1675416333.7210] settings: Loaded settings plugin: keyfile (internal) [ 18.917098] NetworkManager[353]: [1675416333.7403] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.41.90-1.el9/libnm-settings-plugin-ifcfg-rh.so") [ 18.942936] NetworkManager[353]: [1675416333.7661] dhcp: init: Using DHCP client 'internal' [ 18.946820] NetworkManager[353]: [1675416333.7690] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) [ 18.967130] NetworkManager[353]: [1675416333.7903] ifcfg-rh: dbus: couldn't acquire D-Bus service: GDBus.Error:org.freedesktop.DBus.Error.AccessDenied: Request to own name refused by policy [ 26.691191] tg3 0000:03:00.0 eth0: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c4 [ 26.691873] tg3 0000:03:00.0 eth0: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 26.692846] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 26.693281] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 19.528268] NetworkManager[353]: [1675416334.3496] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) [ 26.811121] tg3 0000:03:00.1 eth1: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c5 [ 26.811741] tg3 0000:03:00.1 eth1: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 26.812684] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 26.813136] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 19.645916] NetworkManager[353]: [1675416334.4688] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) [ 26.928162] tg3 0000:03:00.2 eth2: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c6 [ 26.928752] tg3 0000:03:00.2 eth2: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 26.929704] tg3 0000:03:00.2 eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 26.930167] tg3 0000:03:00.2 eth2: dma_rwctrl[00000001] dma_mask[64-bit] [ 19.761874] NetworkManager[353]: [1675416334.5850] manager: (eth2): new Ethernet device (/org/freedesktop/NetworkManager/Devices/4) [ 27.044158] tg3 0000:03:00.3 eth3: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c7 [ 27.044786] tg3 0000:03:00.3 eth3: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 27.045795] tg3 0000:03:00.3 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 27.046261] tg3 0000:03:00.3 eth3: dma_rwctrl[00000001] dma_mask[64-bit] [ 19.880902] 
NetworkManager[353]: [1675416334.7040] manager: (eth3): new Ethernet device (/org/freedesktop/NetworkManager/Devices/5) [ 27.253536] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized [ 27.266102] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 [ 27.266489] hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) [ 27.267265] hpwdt 0000:01:00.0: pretimeout: off. [ 27.268465] hpwdt 0000:01:00.0: kdumptimeout: 0. [ 27.537513] ACPI: bus type drm_connector registered [ 27.731123] mgag200 0000:01:00.1: vgaarb: deactivate vga console [ 27.752688] Console: switching to colour dummy device 80x25 [ 27.805666] [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 [ 27.831344] fbcon: mgag200drmfb (fb0) is primary device [ 28.066603] Console: switching to colour frame buffer device 128x48 [ 28.211019] mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device [ * * * ] A start job is running for nm-wait-…ine-initrd.service (8s / no limit) M [ * * * ] A start job is running for nm-wait-…ine-initrd.service (9s / no limit) [ 28.620978] tg3 0000:03:00.0 eno1: renamed from eth0 [ 21.455888] NetworkManager[353]: [1675416336.2771] device (eth0): interface index 2 renamed iface from 'eth0' to 'eno1' [ 21.585457] NetworkManager[353]: [1675416336.4085] device (eno1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') [ 29.117812] tg3 0000:03:00.1 eno2: renamed from eth1 [ 21.951205] NetworkManager[353]: [1675416336.7742] device (eth1): interface index 3 renamed iface from 'eth1' to 'eno2' M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (10s / no limit) [ 29.410554] tg3 0000:03:00.2 eno3: renamed from eth2 [ 22.245212] NetworkManager[353]: [1675416337.0682] device (eth2): interface index 4 renamed iface from 'eth2' to 'eno3' M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (10s / no limit) [ 29.701779] tg3 0000:03:00.3 eno4: renamed from eth3 [ 22.538030] NetworkManager[353]: [1675416337.3592] device (eth3): interface index 5 renamed iface from 'eth3' to 'eno4' M [ * * ] A start job is running for nm-wait-…ne-initrd.service (11s / no limit) M [ * ] A start job is running for nm-wait-…ne-initrd.service (11s / no limit) M [ * * ] A start job is running for nm-wait-…ne-initrd.service (12s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (12s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (13s / no limit) [ 32.519825] tg3 0000:03:00.0 eno1: Link is up at 1000 Mbps, full duplex [ 32.520397] tg3 0000:03:00.0 eno1: Flow control is off for TX and off for RX [ 32.521230] tg3 0000:03:00.0 eno1: EEE is disabled [ 32.522299] IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready [ 25.355869] NetworkManager[353]: [1675416340.1755] device (eno1): carrier: link connected [ 25.365260] NetworkManager[353]: [1675416340.1882] device (eno1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') [ 25.380259] NetworkManager[353]: [1675416340.2032] policy: auto-activating connection 'eno1' (4cf9cca4-b39b-4e79-b1d1-e59527230a7f) [ 25.387576] NetworkManager[353]: [1675416340.2105] device (eno1): Activation: starting connection 'eno1' (4cf9cca4-b39b-4e79-b1d1-e59527230a7f) [ 25.391855] NetworkManager[353]: [1675416340.2147] device (eno1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') [ 25.402441] NetworkManager[353]: [1675416340.2254] manager: NetworkManager state is now 
CONNECTING [ 25.409445] NetworkManager[353]: [1675416340.2324] device (eno1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') [ 25.420642] NetworkManager[353]: [1675416340.2434] device (eno1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') [ 25.428959] NetworkManager[353]: [1675416340.2517] dhcp4 (eno1): activation: beginning transaction (timeout in 90 seconds) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (13s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (14s / no limit) M [ * * ] A start job is running for nm-wait-…ne-initrd.service (14s / no limit) M [ * ] A start job is running for nm-wait-…ne-initrd.service (15s / no limit) M [ * * ] A start job is running for nm-wait-…ne-initrd.service (16s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (16s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (17s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (17s / no limit) M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (18s / no limit) M [ * * ] A start job is running for nm-wait-…ne-initrd.service (19s / no limit) M [ * ] A start job is running for nm-wait-…ne-initrd.service (19s / no limit) M [ * * ] A start job is running for nm-wait-…ne-initrd.service (20s / no limit) [ 32.474398] NetworkManager[353]: [1675416347.2823] dhcp4 (eno1): state changed new lease, address=10.16.216.84 [ 32.478207] NetworkManager[353]: [1675416347.2912] policy: set 'eno1' (eno1) as default for IPv4 routing and DNS [ 32.508715] NetworkManager[353]: [1675416347.3315] device (eno1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed') [ 32.520046] NetworkManager[353]: [1675416347.3430] device (eno1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed') [ 32.526326] NetworkManager[353]: [1675416347.3493] device (eno1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed') [ 32.537409] NetworkManager[353]: [1675416347.3604] manager: NetworkManager state is now CONNECTED_SITE [ 32.548133] NetworkManager[353]: [1675416347.3711] device (eno1): Activation: successful, device activated. [ 32.556320] NetworkManager[353]: [1675416347.3793] manager: NetworkManager state is now CONNECTED_GLOBAL [ 32.565453] NetworkManager[353]: [1675416347.3884] manager: startup complete M [ * * * ] A start job is running for nm-wait-…ne-initrd.service (20s / no limit) [ 32.621994] systemd[1]: Finished nm-wait-online-initrd.service. M [ OK ] Finished nm-wait-online-initrd.service . [ 32.632432] systemd[1]: Reached target Network is Online. [ OK ] Reached target Network is Online . [ 32.653862] systemd[1]: Starting dracut initqueue hook... Starting dracut initqueue hook ... [ 33.273768] systemd[1]: Finished dracut initqueue hook. [ OK ] Finished dracut initqueue hook . [ 33.281508] systemd[1]: Reached target Preparation for Remote File Systems. [ OK ] Reached target Preparation for Remote File Systems . [ 33.303741] systemd[1]: Mounting /kdumproot/var/crash... Mounting /kdumproot/var/crash ... [ 33.310508] systemd[1]: dracut pre-mount hook was skipped because no trigger condition checks were met. [ 33.313841] systemd[1]: Reached target Initrd Root File System. [ OK ] Reached target Initrd Root File System . [ 33.342106] systemd[1]: Starting Mountpoints Configured in the Real Root... Starting Mountpoints Configured in the Real Root ... 
[ 33.610008] systemd[1]: initrd-parse-etc.service: Deactivated successfully. [ 33.618518] systemd[1]: Finished Mountpoints Configured in the Real Root. [ OK ] Finished Mountpoints Configured in the Real Root . [ 33.633119] systemd[1]: Reached target Initrd File Systems. [ OK ] Reached target Initrd File Systems . [ 33.641329] systemd[1]: Reached target Initrd Default Target. [ OK ] Reached target Initrd Default Target . [ 33.651363] systemd[1]: dracut mount hook was skipped because no trigger condition checks were met. [ 40.840390] FS-Cache: Loaded [ 41.165514] Key type dns_resolver registered [ 41.714538] NFS: Registering the id_resolver key type [ 41.715097] Key type id_resolver registered [ 41.715393] Key type id_legacy registered [ 42.713693] mount.nfs (433) used greatest stack depth: 23368 bytes left [ OK ] Mounted /kdumproot/var/crash . [ 35.553958] systemd[1]: Mounted /kdumproot/var/crash. [ 35.562017] systemd[1]: Reached target Remote File Systems. [ OK ] Reached target Remote File Systems . Starting dracut pre-pivot and cleanup hook ... [ 35.601169] systemd[1]: Starting dracut pre-pivot and cleanup hook... [ 35.941859] rpc.idmapd[336]: exiting on signal 15 [ 36.208451] systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully. [ 36.258836] systemd[1]: Finished dracut pre-pivot and cleanup hook. [ OK ] Finished dracut pre-pivot and cleanup hook . [ 36.277188] systemd[1]: Starting Kdump Vmcore Save Service... Starting Kdump Vmcore Save Service ... [ 36.448074] kdump[502]: Kdump is using the default log level(3). [ 38.029418] kdump[535]: saving to /kdumproot/var/crash/cki/5.14.0-256.2009_766119311.el9.x86_64+debug/7495000/hpe-dl360pgen8-08.hpe2.lab.eng.bos.redhat.com/10.16.216.84-2023-02-03-04:25:51/ [ 38.408959] kdump[540]: saving vmcore-dmesg.txt to /kdumproot/var/crash/cki/5.14.0-256.2009_766119311.el9.x86_64+debug/7495000/hpe-dl360pgen8-08.hpe2.lab.eng.bos.redhat.com/10.16.216.84-2023-02-03-04:25:51/ [ 39.381950] kdump[547]: saving vmcore-dmesg.txt complete [ 39.434189] kdump[549]: saving vmcore [ 223.484945] kdump.sh[550]: Checking for memory holes : [ 0.0 %] / Checking for memory holes : [100.0 %] | Excluding unnecessary pages : [100.0 %] \ Checking for memory holes : [100.0 %] - Checking for memory holes : [100.0 %] / Excluding unnecessary pages : [ 97.7 %] | Excluding unnecessary pages : [100.0 %] \ Copying data : [ 0.1 %] - eta: 16m38s Copying data : [ 1.4 %] / eta: 2m21s Copying data : [ 1.8 %] | eta: 2m44s Copying data : [ 2.1 %] \ eta: 3m6s Copying data : [ 2.4 %] - eta: 3m23s Copying data : [ 2.6 %] / eta: 3m46s Copying data : [ 2.8 %] | eta: 4m3s Copying data : [ 3.0 %] \ eta: 4m19s Copying data : [ 3.2 %] - eta: 4m32s Copying data : [ 3.4 %] / eta: 4m44s Copying data : [ 3.6 %] | eta: 4m54s Copying data : [ 3.8 %] \ eta: 5m4s Copying data : [ 4.1 %] - eta: 5m4s Copying data : [ 4.3 %] / eta: 5m11s Copying data : [ 4.5 %] | eta: 5m19s Copying data : [ 4.7 %] \ eta: 5m24s Copying data : [ 4.9 %] - eta: 5m30s Copying data : [ 5.1 %] / eta: 5m35s Copying data : [ 5.3 %] | eta: 5m40s Copying data : [ 5.5 %] \ eta: 5m43s Copying data : [ 5.6 %] - eta: 5m55s Copying data : [ 5.8 %] / eta: 5m57s Copying data : [ 6.0 %] | eta: 6m0s Copying data : [ 6.2 %] \ eta: 6m3s Copying data : [ 6.4 %] - eta: 6m6s Copying data : [ 6.6 %] / eta: 6m8s Copying data : [ 6.8 %] | eta: 6m10s Copying data : [ 7.0 %] \ eta: 6m12s Copying data : [ 7.2 %] - eta: 6m14s Copying data : [ 7.9 %] / eta: 5m49s Copying data : [ [ 230.98.1 %] | eta: 5m52s Copying data : [ 8.3 %] \ eta: 5m55s 
Copying data : [ 8.5 %] - eta: 5m55s Copying data : [ 8.7 %] / eta: 5m57s Copying data : [ 8.9 %] | eta: 5m58s Copying data : [ 9.1 %] \ eta: 6m0s Copying data : [ 9.3 %] - eta: 6m1s Copying data : [ 9.5 %] / eta: 6m2s Copying data : [ 9.7 %] | eta: 6m4s Copying data : [ 9.9 %] \ eta: 6m4s Copying data : [ 10.1 %] - eta: 6m5s Copying data : [ 10.4 %] / eta: 6m2s Copying data : [ 10.6 %] | eta: 6m3s Copying data : [ 10.7 %] \ eta: 6m7s Copying data : [ 10.9 %] - eta: 6m8s Copying data : [ 11.1 %] / eta: 6m9s Copying data : [ 11.4 %] | eta: 6m5s Copying data : [ 11.6 %] \ eta: 6m6s Copying data : [ 11.8 %] - eta: 6m7s Copying data : [ 12.0 %] / eta: 6m6s Copying data : [ 12.2 %] | eta: 6m7s Copying data : [ 12.4 %] \ eta: 6m7s Copying data : [ 12.6 %] - eta: 6m8s Copying data : [ 12.8 %] / eta: 6m7s Copying data : [ 13.0 %] | eta: 6m8s Copying data : [ 13.2 %] \ eta: 6m8s Copying data : [ 13.4 %] - eta: 6m9s Copying data : [ 13.6 %] / eta: 6m9s Copying data : [ 13.8 %] | eta: 6m8s Copying data : [ 14.3 %] \ eta: 6m0s Copying data [ 232.223375] g0: watchdog did not stop! : [ 15.5 %] - eta: 5m33s Copying data : [ 16.9 %] / eta: 5m5s Copying data : [ 18.3 %] | eta: 4m42s Copying data : [ 19.7 %] \ eta: 4m21s Copying data : [ 21.1 %] - eta: 4m3s Copying data : [ 22.2 %] / eta: 3m51s Copying data : [ 25.9 %] | eta: 3m12s Copying data : [ 30.3 %] \ eta: 2m37s Copying data : [ 31.6 %] - eta: 2m30s Copying data : [ 33.0 %] / eta: 2m23s Copying data : [ 34.4 %] | eta: 2m16s Copying data : [ 35.4 %] \ eta: 2m12s Copying data : [ 36.8 %] - eta: 2m6s Copying data : [ 38.3 %] / eta: 2m0s Copying data : [ 39.7 %] |[ 232.344 eta: 1m54s Copying data : [ 41.2 %] \ eta: 1m49s Copying data : [ 42.6 %] - eta: 1m44s Copying data : [ 43.9 %] / eta: 1m40s Copying data : [ 45.3 %] | eta: 96s Copying data : [ 46.7 %] \ eta: 92s Copying data : [ 48.1 %] - eta: 88s Copying data : [ 49.5 %] / eta: 83s Copying data : [ 52.7 %] | eta: 75s Copying data : [ 56.0 %] \ eta: 66s Copying data : [ 57.2 %] - eta: 64s Copying data : [ 57.4 %] / eta: 63s Copying data : [ 57.6 %] | eta: 64s Copying data : [ 57.8 %] \ eta: 64s Copying data : [ 58.5 %] - eta: 64s Copying data [ 232.87 : [ 59.3 %] / eta: 61s Copying data : [ 62.3 %] | eta: 56s Copying data : [ 65.6 %] \ eta: 48s Copying data : [ 66.9 %] - eta: 46s Copying data : [ 67.9 %] / eta: 44s Copying data : [ 69.2 %] | eta: 43s Copying data : [ 70.6 %] \ eta: 40s Copying data : [ 72.0 %] - eta: 38s Copying data : [ 76.2 %] / eta: 31s Copying data : [ 77.5 %] | eta: 29s Copying data : [ 77.8 %] \ eta: 29s Copying data : [ 78.1 %] - eta: 29s Copying data : [ 78.5 %] / eta: 28s Copying data : [ 78.8 %] | eta: 28s Copying data : [ 79.4 %] \ [ eta: 27s Copying data : [ 80.0 %] - eta: 26s Copying data : [ 80.2 %] / eta: 27s Copying data : [ 80.5 %] | eta: 26s Copying data : [ 80.9 %] \ eta: 25s Copying data : [ 81.3 %] - eta: 26s Copying data : [ 81.7 %] / eta: 25s Copying data : [ 81.9 %] | eta: 24s Copying data : [ 82.2 %] \ eta: 24s Copying data [ 233.817950] systemd-journald[229]: Received SIGTERM from PID 1 (systemd-shutdow). [ 226.647392] systemd[1]: Shutting down. [ 226.655414] kdump.sh[550]: The dumpfile is saved to /kdumproot/var/crash/cki/5.14.0-256.2009_766119311.el9.x86_64+debug/7495000/hpe-dl360pgen8-08.hpe2.lab.eng.bos.redhat.com/10.16.216.84-2023-02-03-04:25:51//vmcore-incomplete. [ 226.659947] kdump.sh[550]: makedumpfile Completed. 
[ 226.665837] kdump[558]: saving vmcore complete [ 226.670084] systemd[1]: Using hardware watchdog 'HPE iLO2+ HW Watchdog Timer', version 0, device /dev/watchdog0 [ 226.676460] kdump[560]: saving the /run/initramfs/kexec-dmesg.log to /kdumproot/var/crash/cki/5.14.0-256.2009_766119311.el9.x86_64+debug/7495000/hpe-dl360pgen8-08.hpe2.lab.eng.bos.redhat.com/10.16.216.84-2023-02-03-04:25:51// [ 227.082254] systemd[1]: Watchdog running with a timeout of 10min. [ 227.088503] kdump[566]: Executing final action systemctl reboot -f [ 227.092510] NetworkManager[353]: [1675416540.8938] caught SIGTERM, shutting down normally. [ 227.098828] dbus-broker[356]: Dispatched 459 messages @ 12(±9)μs / message. [ 227.103106] NetworkManager[353]: [1675416540.9134] dhcp4 (eno1): canceled DHCP transaction [ 227.107111] NetworkManager[353]: [1675416540.9135] dhcp4 (eno1): activation: beginning transaction (timeout in 90 seconds) [ 227.111107] NetworkManager[353]: [1675416540.9136] dhcp4 (eno1): state changed no lease [ 227.115073] NetworkManager[353]: [1675416540.9152] manager: NetworkManager state is now CONNECTED_SITE [ 227.119181] NetworkManager[353]: [1675416540.9334] exiting (success) [ 234.327430] systemd-shutdown[1]: Sending SIGKILL to remaining processes... [ 234.384277] systemd-shutdown[1]: Unmounting file systems. [ 234.398141] [569]: Unmounting '/sysroot/var/lib/nfs/rpc_pipefs'. [ 234.422526] [570]: [ 235.317639] [571]: Unmounting '/run/credentials/systemd-tmpfiles-setup.service'. [ 235.333208] [572]: Unmounting '/run/credentials/systemd-tmpfiles-setup-dev.service'. [ 235.347384] [573]: Unmounting '/run/credentials/systemd-sysusers.service'. [ 235.361539] [574]: Unmounting '/run/credentials/systemd-sysctl.service'. [ 235.375625] [575]: Remounting '/' read-only with options 'lowerdir=/squash/root,upperdir=/squash/overlay/upper,workdir=/squash/overlay/work/'. [ 235.496565] [576]: Unmounting '/squash/root'. [ 235.513993] [577]: Unmounting '/squash'. [ 235.522237] systemd-shutdown[1]: All filesystems unmounted. [ 235.522592] systemd-shutdown[1]: Deactivating swaps. [ 235.523627] systemd-shutdown[1]: All swaps deactivated. [ 235.524431] systemd-shutdown[1]: Detaching loop devices. [ 235.531327] systemd-shutdown[1]: Detaching loopback /dev/loop0. [ 235.533082] systemd-shutdown[1]: Could not detach loopback /dev/loop0: Device or resource busy [ 235.534178] systemd-shutdown[1]: Not all loop devices detached, 1 left. [ 235.534749] systemd-shutdown[1]: Stopping MD devices. [ 235.536204] systemd-shutdown[1]: All MD devices stopped. [ 235.536596] systemd-shutdown[1]: Detaching DM devices. [ 235.537768] systemd-shutdown[1]: All DM devices detached. [ 235.538647] systemd-shutdown[1]: Detaching loop devices. [ 235.544139] systemd-shutdown[1]: Detaching loopback /dev/loop0. [ 235.545726] systemd-shutdown[1]: Could not detach loopback /dev/loop0: Device or resource busy [ 235.547020] systemd-shutdown[1]: Not all loop devices detached, 1 left. [ 235.548001] systemd-shutdown[1]: Detaching loop devices. [ 235.554514] systemd-shutdown[1]: Detaching loopback /dev/loop0. [ 235.555601] systemd-shutdown[1]: Could not detach loopback /dev/loop0: Device or resource busy [ 235.556809] systemd-shutdown[1]: Not all loop devices detached, 1 left. [ 235.557399] systemd-shutdown[1]: Cannot finalize remaining loop devices, continuing. [ 235.557909] watchdog: watchdog0: watchdog did not stop! [ 235.569556] systemd-shutdown[1]: Failed to finalize loop devices, ignoring. 
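systemd-shutdown above makes a few passes at detaching /dev/loop0, hits "Device or resource busy" each time, and finally logs that it cannot finalize the remaining loop devices and carries on with the reboot. A rough Python sketch of that bounded retry-then-continue pattern; the detach() helper is hypothetical and simply pretends the device stays busy, so this is an illustration, not systemd's actual code.

    import errno
    import time

    def detach(dev: str) -> None:
        """Hypothetical stand-in for a loop-device detach; pretend the device stays busy."""
        raise OSError(errno.EBUSY, "Device or resource busy", dev)

    def finalize_loop_devices(devices, rounds: int = 3) -> bool:
        """Try a few detach rounds; give up (but keep shutting down) if some stay busy."""
        remaining = list(devices)
        for _ in range(rounds):
            still_busy = []
            for dev in remaining:
                try:
                    detach(dev)
                except OSError as err:
                    if err.errno != errno.EBUSY:
                        raise
                    still_busy.append(dev)
            remaining = still_busy
            if not remaining:
                return True
            time.sleep(0.1)  # brief pause between rounds
        print(f"Not all loop devices detached, {len(remaining)} left; continuing.")
        return False

    finalize_loop_devices(["/dev/loop0"])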
[ 235.574436] systemd-shutdown[1]: Syncing filesystems and block devices. [ 235.580295] systemd-shutdown[1]: Rebooting. [ 236.325158] reboot: Restarting system [ 236.325566] reboot: machine restart [-- MARK -- Fri Feb 3 09:30:00 2023]
ProLiant System BIOS - P71 (05/21/2018) Copyright 1982, 2018 Hewlett-Packard Development Company, L.P. 32 GB Installed 2 Processor(s) detected, 12 total cores enabled, Hyperthreading is enabled Proc 1: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz Proc 2: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz QPI Speed: 7.2 GT/s HP Power Profile Mode: Balanced Power and Performance Power Regulator Mode: Dynamic Power Savings Redundant ROM Detected - This system contains a valid backup System ROM. Inlet Ambient Temperature: 19C/66F Advanced Memory Protection Mode: Advanced ECC Support HP SmartMemory authenticated in all populated DIMM slots. SATA Option ROM ver 2.00.C02 Copyright 1982, 2011. Hewlett-Packard Development Company, L.P. iLO 4 Advanced press [F8] to configure iLO 4 v2.80 Jan 25 2022 10.16.216.85 Slot 0 HP Smart Array P420i Controller Initializing... (0 MB, v8.32) 1 Logical Drive Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Slot 1 HP Smart Array P421 Controller Initializing... (1 GB, v8.32) 0 Logical Drives 1785-Slot 1 Drive Array Not Configured No Drives Detected Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Broadcom NetXtreme Ethernet Boot Agent Copyright (C) 2000-2017 Broadcom Corporation All rights reserved. Press Ctrl-S to enter Configuration Menu Press "F9" key for ROM-Based Setup Utility Press "F10" key for Intelligent Provisioning Press "F11" key for Default Boot Override Options Press "F12" key for Network Boot For access via BIOS Serial Console Press "ESC+9" for ROM-Based Setup Utility Press "ESC+0" for Intelligent Provisioning Press "ESC+!" for Default Boot Override Options Press "ESC+@" for Network Boot
Attempting Boot From NIC Broadcom UNDI PXE-2.1 v20.6.50 Copyright (C) 2000-2017 Broadcom Corporation Copyright (C) 1997-2000 Intel Corporation All rights reserved. CLIENT MAC ADDR: 2C 44 FD 84 51 C4. GUID: 30343536-3138-5355-4534-303452355454 DHCP. CLIENT IP: 10.16.216.84 MASK: 255.255.254.0 DHCP IP: 10.19.43.29 GATEWAY IP: 10.16.217.254 TFTP. TFTP. PXELINUX 4.05 2011-12-09 Copyright (C) 1994-2011 H. Peter Anvin et al !PXE entry point found (we hope) at 95A1:00D6 via plan A UNDI code segment at 95A1 len 6B70 UNDI data segment at 91EA len 3B70 Getting cached packet 01 02 03 My IP address seems to be 0A10D854 10.16.216.84 ip=10.16.216.84:10.19.165.164:10.16.217.254:255.255.254.0 BOOTIF=01-2c-44-fd-84-51-c4 SYSUUID=36353430-3831-5553-4534-303452355454 TFTP prefix: Trying to load: pxelinux.cfg/36353430-3831-5553-4534-303452355454 Trying to load: pxelinux.cfg/01-2c-44-fd-84-51-c4 Trying to load: pxelinux.cfg/0A10D854 Trying to load: pxelinux.cfg/0A10D85 Trying to load: pxelinux.cfg/0A10D8 Trying to load: pxelinux.cfg/0A10D Trying to load: pxelinux.cfg/0A10 Trying to load: pxelinux.cfg/0A1 Trying to load: pxelinux.cfg/0A Trying to load: pxelinux.cfg/0 Trying to load: pxelinux.cfg/default ok ********************************************* Red Hat Engineering Labs Network Boot Press ENTER to boot from local disk Type "menu" at boot prompt to view install menu ********************************************* boot: Booting... ..
Use the ^ and v keys to change the selection. Press 'e' to edit the selected item, or 'c' for a command prompt. CentOS Stream (5.14.0-256.2009_766119311.el9.x86_64+debug) 9 with debugg> CentOS Stream (5.14.0-247.el9.x86_64) 9 CentOS Stream (0-rescue-99e1b32cbaf74173bd2789197e86723f) 9 The selected entry will be started automatically in 5s. The selected entry will be started automatically in 4s. The selected entry will be started automatically in 3s. The selected entry will be started automatically in 2s. The selected entry will be started automatically in 1s. The selected entry will be started automatically in 0s. Probing EDD (edd=off to disable)... ok
[ 0.000000] microcode: microcode updated early to revision 0x42e, date = 2019-03-14 [ 0.000000] [ 0.000000] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
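The PXELINUX output above follows its usual configuration search order: pxelinux.cfg/<system UUID>, then pxelinux.cfg/01-<MAC with dashes>, then the client IP rendered as uppercase hex (0A10D854) and truncated one character at a time, and finally pxelinux.cfg/default. A small Python sketch that reproduces that candidate list for the values shown in the log; generate_candidates is an illustrative helper, not part of PXELINUX.

    import ipaddress

    def generate_candidates(sysuuid: str, mac: str, ip: str) -> list[str]:
        """Reproduce PXELINUX's pxelinux.cfg/ search order for one client."""
        hex_ip = format(int(ipaddress.IPv4Address(ip)), "08X")  # e.g. 0A10D854
        names = [sysuuid, "01-" + mac.lower().replace(":", "-")]
        names += [hex_ip[:n] for n in range(len(hex_ip), 0, -1)]  # 0A10D854 ... 0
        names.append("default")
        return ["pxelinux.cfg/" + n for n in names]

    for name in generate_candidates("36353430-3831-5553-4534-303452355454",
                                    "2c:44:fd:84:51:c4",
                                    "10.16.216.84"):
        print("Trying to load:", name)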
[ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. [ 0.000000] signal: max sigframe size: 1776 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009c7ff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009c800-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bddabfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved [ 000000] BIOS-e820: [mem 0x0000000100000000-0x000000083fffefff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. [ 0.000000] DMI: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [ 0.000000] tsc: Fast TSC calibration using PIT [ 0.000000] tsc: Detected 2094.981 MHz processor [ 0.001551] last_pfn = 0x83ffff max_arch_pfn = 0x400000000 [ 0.002383] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.008428] last_pfn = 0xbddac max_arch_pfn = 0x400000000 [ 0.014925] found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] [ 0.014984] Using GB pages for direct mapping [ 0.016792] RAMDISK: [mem 0x33a57000-0x35d23fff] [ 0.016804] ACPI: Early table checksum verification disabled [ 0.016817] ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP ) [ 0.016834] ACPI: XSDT 0x00000000BDDAED00 0000E4 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.016856] ACPI: FACP 0x00000000BDDAEE40 0000F4 (v03 HP ProLiant 00000002 ? 0000162E) [ 0.016876] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20211217/tbfadt-669) [ 0.016888] ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20211217/tbfadt-669) [ 0.016903] ACPI: DSDT 0x000026DC (v01 HP DSDT 00000001 INTL 20030228) [ 0.016918] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016931] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.016944] ACPI: SPCR 0x00000000BDDAC180 000050 (v01 HP SPCRRBSU 00000001 ? 0000162E) [ 0.016958] ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000) [ 0.016972] ACPI: HPET 0x00000000BDDAC240 000038 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.016986] ACPI: FFFF 0x00000000BDDAC280 000064 (v02 HP ProLiant 00000002 ? 0000162E) [ 0.017000] ACPI: SPMI 0x00000000BDDAC300 000040 (v05 HP ProLiant 00000001 ? 0000162E) [ 0.017014] ACPI: ERST 0x00000000BDDAC340 000230 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017028] ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000) [ 0.017042] ACPI: SRAT 0x00000000BDDAC800 000750 (v01 HP Proliant 00000001 ? 
0000162E) [ 0.017056] ACPI: FFFF 0x00000000BDDACF80 000176 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017071] ACPI: BERT 0x00000000BDDAD100 000030 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017085] ACPI: HEST 0x00000000BDDAD140 0000BC (v01 HP ProLiant 02E) [ 0.017099] ACPI: DMAR 0x00000000BDDAD200 00051C (v01 HP ProLiant 00000001 ? 0000162E) [ 0.017113] ACPI: FFFF 0x00000000BDDAEC40 000030 (v01 HP ProLiant 00000001 00000000) [ 0.017127] ACPI: PCCT 0x00000000BDDAEC80 00006E (v01 HP Proliant 00000001 PH 0000504D) [ 0.017141] ACPI: SSDT 0x00000000BDDB1640 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503) [ 0.017155] ACPI: SSDT 0x00000000BDDB1E40 000103 (v03 HP CRSPCI0 00000002 HP 00000001) [ 0.017169] ACPI: SSDT 0x00000000BDDB1F80 000098 (v03 HP CRSPCI1 00000002 HP 00000001) [ 0.017183] ACPI: SSDT 0x00000000BDDB2040 00038A (v02 HP riser0 00000002 INTL 20030228) [ 0.017197] ACPI: SSDT 0x00000000BDDB2400 000385 (v03 HP riser1a 00000002 INTL 20030228) [ 0.017211] ACPI: SSDT 0x00000000BDDB27C0 000BB9 (v01 HP pcc 00000001 INTL 20120503) [ 0.017225] ACPI: SSDT 0x00000000BDDB3380 000377 (v01 HP pmab 00000001 INTL 20120503) [ 0.017239] ACPI: SSDT 0x00000000BDDB3700 005524 (v01 HP pcc2 00000001 INTL 20120503) [ 0.017254] ACPI: SSDT 0x00000000BDDB8C40 003AEC (v01 INTEL PPM RCM 00000001 INTL 20061109) [ 0.017266] ACPI: Reserving FACP table memory at [mem 0xbddaee40-0xbddaef33] [ 0.017272] ACPI: Reserving DSDT table memory at [mb161b] [ 0.017276] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017281] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.017285] ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf] [ 0.017290] ACPI: Reserving MCFG table memory at [mem 0xbddac200-0xbddac23b] [ 0.017294] ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277] [ 0.017298] ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3] [ 0.017303] ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f] [ 0.017307] ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f] [ 0.017312] ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9] [ 0.017316] ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f] [ 0.017321] ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5] [ 0.017325] ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f] [ 0.017329] ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb] [ 0.017334] ACPI: Reserving DMAR table memory at [mem 0xbddad200-0x 0.017338] ACPI: Reserving FFFF table memory at [mem 0xbddaec40-0xbddaec6f] [ 0.017343] ACPI: Reserving PCCT table memory at [mem 0xbddaec80-0xbddaeced] [ 0.017347] ACPI: Reserving SSDT table memory at [mem 0xbddb1640-0xbddb1e29] [ 0.017352] ACPI: Reserving SSDT table memory at [mem 0xbddb1e40-0xbddb1f42] [ 0.017356] ACPI: Reserving SSDT table memory at [mem 0xbddb1f80-0xbddb2017] [ 0.017361] ACPI: Reserving SSDT table memory at [mem 0xbddb2040-0xbddb23c9] [ 0.017366] ACPI: Reserving SSDT table memory at [mem 0xbddb2400-0xbddb2784] [ 0.017370] ACPI: Reserving SSDT table memory at [mem 0xbddb27c0-0xbddb3378] [ 0.017375] ACPI: Reserving SSDT table memory at [mem 0xbddb3380-0xbddb36f6] [ 0.017380] ACPI: Reserving SSDT table memory at [mem 0xbddb3700-0xbddb8c23] [ 0.017384] ACPI: Reserving SSDT table memory at [mem 0xbddb8c40-0xbddbc72b] [ 0.017478] SRAT: PXM 0 -> APIC 0x00 -> Node 0 [ 0.017485] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.017489] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.017493] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 0.017497] 
SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.017501] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.017505] SRAT: PXM 0 -> APIC 0x06 ->7508] SRAT: PXM 0 -> APIC 0x07 -> Node 0 [ 0.017512] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.017516] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.017519] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 0.017523] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.017527] SRAT: PXM 1 -> APIC 0x20 -> Node 1 [ 0.017531] SRAT: PXM 1 -> APIC 0x21 -> Node 1 [ 0.017535] SRAT: PXM 1 -> APIC 0x22 -> Node 1 [ 0.017539] SRAT: PXM 1 -> APIC 0x23 -> Node 1 [ 0.017543] SRAT: PXM 1 -> APIC 0x24 -> Node 1 [ 0.017547] SRAT: PXM 1 -> APIC 0x25 -> Node 1 [ 0.017550] SRAT: PXM 1 -> APIC 0x26 -> Node 1 [ 0.017554] SRAT: PXM 1 -> APIC 0x27 -> Node 1 [ 0.017558] SRAT: PXM 1 -> APIC 0x28 -> Node 1 [ 0.017561] SRAT: PXM 1 -> APIC 0x29 -> Node 1 [ 0.017565] SRAT: PXM 1 -> APIC 0x2a -> Node 1 [ 0.017569] SRAT: PXM 1 -> APIC 0x2b -> Node 1 [ 0.017580] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x43fffffff] [ 0.017587] ACPI: SRAT: Node 1 PXM 1 [mem 0x440000000-0x83fffffff] [ 0.017624] NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff] [ 0.017666] NODE_DATA(1) allocated [mem 0x83ffd4000-0x83fffefff] [ 0.018171] Reserving 2048MB of memory at 976MB for crashkernel (System RAM: 32733MB) [ 0.116189] Zone ranges: [ 0.116205] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.116221] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.116220x0000000100000000-0x000000083fffefff] [ 0.116236] Device empty [ 0.116242] Movable zone start for each node [ 0.116248] Early memory node ranges [ 0.116252] node 0: [mem 0x0000000000001000-0x000000000009bfff] [ 0.116258] node 0: [mem 0x0000000000100000-0x00000000bddabfff] [ 0.116263] node 0: [mem 0x0000000100000000-0x000000043fffffff] [ 0.116270] node 1: [mem 0x0000000440000000-0x000000083fffefff] [ 0.116281] Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff] [ 0.116298] Initmem setup node 1 [mem 0x0000000440000000-0x000000083fffefff] [ 0.116322] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.116563] On node 0, zone DMA: 100 pages in unavailable ranges [ 0.164857] On node 0, zone Normal: 8788 pages in unavailable ranges [ 0.166917] On node 1, zone Normal: 1 pages in unavailable ranges [ 0.899482] kasan: KernelAddressSanitizer initialized [ 0.899810] ACPI: PM-Timer IO Port: 0x908 [ 0.899849] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.899911] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23 [ 0.899926] IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47 [ 0.899937] IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71 [ 0.899946] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 globa [ 0.899956] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.899974] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.899979] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.899994] ACPI: SPCR: SPCR table version 1 [ 0.899998] ACPI: SPCR: Unexpected SPCR Access Width. 
Defaulting to byte size [ 0.900005] ACPI: SPCR: console: uart,mmio,0x0,9600 [ 0.900013] TSC deadline timer available [ 0.900018] smpboot: Allowing 64 CPUs, 40 hotplug CPUs [ 0.900107] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.900117] PM: hibernation: Registered nosave memory: [mem 0x0009c000-0x0009cfff] [ 0.900122] PM: hibernation: Registered nosave memory: [mem 0x0009d000-0x0009ffff] [ 0.900127] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.900131] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.900141] PM: hibernation: Registered nosave memory: [mem 0xbddac000-0xbddddfff] [ 0.900146] PM: hibernation: Registered nosave memory: [mem 0xbddde000-0xcfffffff] [ 0.900150] PM: hibernation: Registered nosave memory: [mem 0xd0000000-0xfebfffff] [ 0.900154] PM: hibernation: Registered nosave mec00000-0xfee0ffff] [ 0.900158] PM: hibernation: Registered nosave memory: [mem 0xfee10000-0xff7fffff] [ 0.900162] PM: hibernation: Registered nosave memory: [mem 0xff800000-0xffffffff] [ 0.900175] [mem 0xd0000000-0xfebfffff] available for PCI devices [ 0.900180] Booting paravirtualized kernel on bare hardware [ 0.900198] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [ 0.921920] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:2 [ 1.005053] percpu: Embedded 515 pages/cpu s2072576 r8192 d28672 u4194304 [ 1.005853] Fallback order for Node 0: 0 1 [ 1.005880] Fallback order for Node 1: 1 0 [ 1.005926] Built 2 zonelists, mobility grouping on. Total pages: 8248628 [ 1.005931] Policy zone: Normal [ 1.005953] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G [ 1.006158] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311 will be passed to user space. [ 1.007859] mem auto-init: stack:off, heap alloc:off, heap free:off [ 1.007866] Stack Depot early init allocating hash table with memblock_alloc, 8388608 bytes [ 1.009765] software IO TLB: area num 64. [ 3.359455] Memory: 1173116K/33518872K available (38920K kernel code, 13007K rwdata, 14984K rodata, 5300K init, 42020K bss, 7436800K reserved, 0K cma-reserved) [ 3.359496] random: get_random_u64 called from kmem_cache_open+0x22/0x380 with crng_init=0 [ 3.381014] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=64, Nodes=2 [ 3.381022] kmemleak: Kernel memory leak detector disabled [ 3.385337] Kernel/User page tables isolation: enabled [ 3.385938] ftrace: allocating 45745 entries in 179 pages [ 3.428317] ftrace: allocated 179 pages with 5 groups [ 3.434222] Dynamic Preempt: voluntary [ 3.438722] Running RCU self tests [ 3.440208] rcu: Preemptible hierarchical RCU implementation. [ 3.440213] rcu: RCU lockdep checking is enabled. [ 3.440216] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=64. [ 3.440222] rcu: RCU callback double-/use-after-free debug is enabled. [ 3.440226] Trampoline variant of Tasks RCU enabled. [ 3.440229] Rude variant of Tasks RCU enabled. [ 3.440233] Tracing variant of Tasks RCU enabled. [ 3.440237] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
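The command line above carries crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G, and the earlier "Reserving 2048MB of memory at 976MB for crashkernel (System RAM: 32733MB)" message shows the 16G-64G:2G range being selected for this ~32 GB machine. A simplified Python sketch of how such a range:size list can be evaluated; it illustrates the syntax only and is not the kernel's parser.

    def parse_size(s: str) -> int:
        """Turn '384M', '2G' or '' into bytes ('' means an open-ended upper bound)."""
        units = {"K": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
        if not s:
            return 1 << 62  # effectively "no limit"
        return int(s[:-1]) * units[s[-1].upper()]

    def crashkernel_for(ram_bytes: int, spec: str) -> int:
        """Pick the reservation size whose start-end range contains ram_bytes."""
        for entry in spec.split(","):
            rng, size = entry.split(":")
            start, end = (parse_size(p) for p in rng.split("-"))
            if start <= ram_bytes < end:
                return parse_size(size)
        return 0

    spec = "1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G"
    ram = 32733 << 20                        # "System RAM: 32733MB" from the log
    print(crashkernel_for(ram, spec) >> 20)  # -> 2048 (MB), matching the reported reservation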
[ usting geometry for rcu_fanout_leaf=16, nr_cpu_ids=64 [ 3.462478] NR_IRQS: 524544, nr_irqs: 1752, preallocated irqs: 16 [ 3.463411] rcu: srcu_init: Setting srcu_struct sizes based on contention. [ 3.463510] random: crng init done (trusting CPU's manufacturer) [ 3.470528] Console: colour VGA+ 80x25 [ 8.729949] printk: console [ttyS1] enabled [ 8.731400] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar [ 8.733995] ... MAX_LOCKDEP_SUBCLASSES: 8 [ 8.735416] ... MAX_LOCK_DEPTH: 48 [ 8.736927] ... MAX_LOCKDEP_KEYS: 8192 [ 8.738588] ... CLASSHASH_SIZE: 4096 [ 8.740088] ... MAX_LOCKDEP_ENTRIES: 65536 [ 8.741670] ... MAX_LOCKDEP_CHAINS: 131072 [ 8.743233] ... CHAINHASH_SIZE: 65536 [ 8.744752] memory used by lock dependency info: 11641 kB [ 8.746609] memory used for stack traces: 4224 kB [ 8.748325] per task-struct memory footprint: 2688 bytes [ 8.750532] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl [ 8.754406] ACPI: Core revision 20211217 [ 8.757331] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 13348 9.068363] APIC: Switch to symmetric I/O mode setup [ 9.162513] DMAR: Host address width 46 [ 9.163971] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0 [ 9.165944] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.168914] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1 [ 9.170810] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 9.173549] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff [ 9.175714] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff [ 9.178111] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff [ 9.180340] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff [ 9.182498] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff [ 9.184665] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff [ 9.186879] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff [ 9.189077] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff [ 9.191214] DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes [ 9.195774] DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] [ or: HP; Ver: P71; Product Version: [ 9.700568] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff [ 9.702782] DMAR: ATSR flags: 0x0 [ 9.703964] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0 [ 9.706284] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.708535] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1 [ 9.710920] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000 [ 9.712777] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 9.712782] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. [ 9.719091] DMAR-IR: Enabled IRQ remapping in xapic mode [ 9.720991] x2apic: IRQ remapping doesn't support X2APIC mode [ 9.722999] Switched APIC routing to physical flat. [ 9.726717] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 [ 9.733337] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e32ab70c81, max_idle_ns: 440795302416 ns [ 9.737038] Calibrating delay loop (skipped), value calculated using timer frequency.. 4189.96 BogoMIPS (lpj=2094981) [ 9.738035] pid_max: default: 65536 minimum: 512 [ 9.740712] LSM: Security Framework initializing [ 9.741168] Yama: becoming mindful. [ 9.742135] SELinux: Initializing. 
[ 9.743562] LSM support for eBPF active [ 9.757986] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, vmalloc hugepage) [ 9.765856] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc hugepage) [ 9.767555] Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.768347] Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 9.774679] CPU0: Thermal monitoring enabled (TM1) [ 9.775174] process: using mwait in idle threads [ 9.776050] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 [ 9.777031] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4 [ 9.778057] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization [ 9.779035] Spectre V2 : Mitigation: Retpolines [ 9.780031] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 9.781031] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT [ 9.782031] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 9.783039] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 9.784031] Spectre V2 : User space: Mitigation: STIBP via prctl [ 9.785033] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [ 9.786048] MDS: Mitigation: Clear CPU buffers [ 9.787031] MMIO Stale Data: Unknown: No mitigations [ 9.826786] Freeing SMP alternatives memory: 32K [ 9.829921] smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1170 [ 9.830077] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (family: 0x6, model: 0x3e, stepping: 0x4) [ 9.835089] cblist_init_generic: Setting adjustable number of callback queues. [ 9.836032] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.837662] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.838695] cblist_init_generic: Setting shift to 6 and lim to 1. [ 9.839267] Running RCU-tasks wait API self tests [ 9.942282] Performance Events: PEBS fmt1+, IvyBridge events, 16-deep LBR, full-width counters, Broken BIOS detected, complain to your hardware vendor. [ 9.943033] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330) [ 9.944032] Intel PMU driver. [ 9.945047] ... version: 3 [ 9.946032] ... bit width: 48 [ 9.947031] ... generic registers: 4 [ 9.948031] ... value mask: 0000ffffffffffff [ 9.949031] ... max period: 00007fffffffffff [ 9.950031] ... fixed-purpose events: 3 [ 9.951038] ... event mask: 000000070000000f [ 9.9...] rcu: Hierarchical SRCU implementation. [ 9.956034] rcu: Max phase no-delay instances is 400. [ 9.960092] Callback from call_rcu_tasks_trace() invoked. [ 9.976140] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 9.990512] smp: Bringing up secondary CPUs ... [ 9.993275] x86: Booting SMP configuration: [ 9.994036] .... node #0, CPUs: #1 [ 10.004273] #2 [ 10.010397] #3 [ 10.016517] #4 [ 10.022470] #5 [ 10.028533] [ 10.029034] .... node #1, CPUs: #6 [ 6.239566] smpboot: CPU 6 Converting physical 0 to logical die 1 [ 10.102112] Callback from call_rcu_tasks_rude() invoked. [ 10.104709] #7 [ 10.112788] #8 [ 10.120235] #9 [ 10.127251] #10 [ 10.134240] #11 [ 10.141061] [ 10.141664] .... node #0, CPUs: #12 [ 10.146137] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. [ 10.149012] #13 [ 10.154171] #14 [ 10.159286] #15 [ 10.164325] #16 [ 10.169148] #17 [ 10.174350] [ 10.174932] ....
node #1, CPUs: #18 [ 10.181104] #19 [ 10.187080] #20 [ 10.193291] #21 [ 10.199234] #22 [ 10.205507] #2 10.206763] Callback from call_rcu_tasks() invoked. [ 10.326316] smp: Brought up 2 nodes, 24 CPUs [ 10.327044] smpboot: Max logical packages: 6 [ 10.328037] smpboot: Total of 24 processors activated (101174.32 BogoMIPS) [ 10.903394] node 0 deferred pages initialised in 565ms [ 10.906581] pgdatinit0 (143) used greatest stack depth: 28672 bytes left [ 11.304587] node 1 deferred pages initialised in 965ms [ 11.318761] devtmpfs: initialized [ 11.322904] x86/mm: Memory block size: 128MB [ 11.498755] DMA-API: preallocated 65536 debug entries [ 11.501038] DMA-API: debugging enabled by kernel config [ 11.502037] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [ 11.507221] futex hash table entries: 16384 (order: 9, 2097152 bytes, vmalloc) [ 11.515474] prandom: seed boundary self test passed [ 11.518210] prandom: 100 self tests passed [ 11.524825] prandom32: self test passed (less than 6 bits correlated) [ 11.527045] pinctrl core: initialized pinctrl subsystem [ 11.530404] [ 11.531036] ************************************************************* [ 11.533035] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.536036] ** ** [ 11.538033] ** IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL ** [ 11.540035] ** ** [ 11.543036] ** This means that this kernel is built to expose internal ** [ 11.545033] ** IOMMU data structures, which may compromise security on ** [ 11.547034] ** your system. ** [ 11.550036] ** ** [ 11.552033] ** If you see this message and you are not debugging the ** [ 11.554034] ** kernel, report this immediately to your vendor! ** [ 11.557036] ** ** [ 11.559033] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 11.562036] ************************************************************* [ 11.564120] PM: RTC time: 04:33:59, date: 2023-02-03 [ 11.579677] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 11.589563] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations [ 11.592341] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [ 11.595362] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [ 11.598531] audit: initializing netlink subsys (disabled) [ 11.601575] audit: type=2000 audit(1675398830.899:1): state=initialized audit_enabled=0 res=1 [ 11.607043] thermal_sys: Registered thermal governor 'fair_share' [ 11.607064] thermal_sys: Registered thermal governor 'step_wise' [ 11.609048] thermal_sys: Registered thermal governor 'user_space' [ 11.612930] cpuidle: using governor menu [ 11.615375] Detected 1 PCC Subspaces [ 11.616043] Registering PCC driver as Mailbox controller [ 11.618200] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB [ 11.619096] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 11.620043] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 11.623980] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000) [ 11.624057] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820 [ 11.729759] PCI: Using configuration type 1 for base access [ 11.730087] PCI: HP ProLiant DL360 detected, enabling pci=bfsort. [ 11.731243] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 11.751492] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 12.001038] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
[ 12.009405] HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB [ 12.011104] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 12.014061] HugeTLB registered00 MiB page size, pre-allocated 0 pages [ 12.122120] cryptd: max_cpu_qlen set to 1000 [ 12.135792] ACPI: Added _OSI(Module Device) [ 12.137050] ACPI: Added _OSI(Processor Device) [ 12.139046] ACPI: Added _OSI(3.0 _SCP Extensions) [ 12.141045] ACPI: Added _OSI(Processor Aggregator Device) [ 12.143078] ACPI: Added _OSI(Linux-Dell-Video) [ 12.144065] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 12.146068] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 12.618754] ACPI: 10 ACPI AML tables successfully acquired and loaded [ 12.895025] ACPI: Interpreter enabled [ 12.896352] ACPI: PM: (supports S0 S4 S5) [ 12.898066] ACPI: Using IOAPIC for interrupt routing [ 12.900639] HEST: Table parsing has been initialized. [ 12.902040] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 12.906038] PCI: Using E820 reservations for host bridge windows [ 13.155702] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f]) [ 13.158095] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 13.164561] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 13.168041] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 13.181310] PCI host bridge to bus 0000:00 [ 13.183053] pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window] [ 13.185046] pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window] [ 13.187051] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 13.190047] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 13.193046] pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window] [ 13.195044] pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window] [ 13.197045] pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window] [ 13.200046] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 13.202067] pci_bus 0000:00: root bus resource [bus 00-1f] [ 13.204485] pci 0000:00:00.0: [8086:0e00] type 00 class 0x060000 [ 13.207428] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold [ 13.210455] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400 [ 13.213379] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold [ 13.222682] pci 0000:00:01.1: [8086:0e03] type 01 class 0x060400 [ 13.223574] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold [ 13.229930] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400 [ 13.230957] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold [ 13.237104] pci 0000:00:02.1: [8086:0e05] type 01 class 0x060400 [ 13.238586] pci 0000:00:02.1: PME# supported from D0 D3hot D3cold [ 13.244630] pci 0000:00:02.2: [8086:0e06] type 01 class 0x060400 [ 13.245611] pci 0000:00:02.2: PME# supported from D0 D3hot D3cold [ 13.251871] pci 0000:00:02.3: [8086:0e07] type 01 class 0x060400 [ 13.252565] pci 0000:00:02.3: PME# supported from D0 D3hot D3cold [ 13.258632] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400 [ 13.259169] pci 0000:00:03.0: enabling Extended Tags [ 13.260478] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold [ 13.266924] pci 0000:00:03.1: [8086:0e09] type 01 class 0x060400 [ 13.267580] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold [ 13.273622] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400 [ 13.274764] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold [ 13.280594] 
pci 0000:00:03.3: [8086:0e0b] type 01 class 0x060400 [ 13.281566] pci 0000:00:03.3: PME# supported from D0 D3hot D3cold [ 13.287584] pci 0000:00:04.0: [8086:0e20] type 00 class 0x088000 [ 13.288102] pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit] [ 13.291204] pci 0000:00:04.1: [8086:0e21] type 00 class 0x088000 [ 13.292099] pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit] [ 13.295132] pci 0000:00:04.2: [8086:0e22] type 00 class 0x088000 [ 13.296099] pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit] [ 13.299105] pci 0000:00:04.3: [8086:0e23] type 00 class 0x088000 [ 13.300099] pci 0000:00:04.3: reg 0x10: [mem 0xf6cc0000-0xf6cc3fff 64bit] [ 13.303105] pci 0000:00:04.4: [8086:0e24] type 00 class 0x088000 [ 13.304098] pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit] [ 13.307530] pci 0000:00:04.5: [8086:0e25] type 00 class 0x088000 [ 13.308117] pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit] [ 13.310932] pci 0000:00:04.6: [8086:0e26] type 00 class 0x088000 [ 13.311095] pci 0000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit] [ 13.313948] pci 0000:00:04.7: [8086:0e27] type 00 class 0x088000 5] pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit] [ 13.316227] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000 [ 13.318235] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000 [ 13.320208] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020 [ 13.321062] pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff] [ 13.323336] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400 [ 13.324363] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold [ 13.328235] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320 [ 13.329068] pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff] [ 13.330306] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold [ 13.332144] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400 [ 13.333334] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold [ 13.337693] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400 [ 13.338329] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold [ 13.342721] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320 [ 13.343069] pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503ff] [ 13.344306] pci 0000:00:1d.0: PME# supported from D0 D3hot46075] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401 [ 13.348158] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100 [ 13.352182] pci 0000:00:1f.2: [8086:1d00] type 00 class 0x01018f [ 13.353065] pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007] [ 13.354049] pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b] [ 13.355048] pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017] [ 13.356048] pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b] [ 13.357048] pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x402f] [ 13.358048] pci 0000:00:1f.2: reg 0x24: [io 0x4030-0x403f] [ 13.380497] pci 0000:04:00.0: [103c:323b] type 00 class 0x010400 [ 13.381069] pci 0000:04:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff 64bit] [ 13.382054] pci 0000:04:00.0: reg 0x18: [mem 0xf7ef0000-0xf7ef03ff 64bit] [ 13.383050] pci 0000:04:00.0: reg 0x20: [io 0x6000-0x60ff] [ 13.384060] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 13.385046] pci 0000:04:00.0: enabling Extended Tags [ 13.386458] pci 0000:04:00.0: PME# supported from D0 D1 D3hot [ 13.395429] pci 0000:00:01.0: PCI bridge to [bus 04] [ 13.396043] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 13.397039] pci 0000:00:01.0: bridge window 0-0xf7ffffff] [ 13.398525] pci 0000:00:01.1: PCI bridge to [bus 11] [ 
13.401749] pci 0000:03:00.0: [14e4:1657] type 00 class 0x020000 [ 13.402067] pci 0000:03:00.0: reg 0x10: [mem 0xf6bf0000-0xf6bfffff 64bit pref] [ 13.403053] pci 0000:03:00.0: reg 0x18: [mem 0xf6be0000-0xf6beffff 64bit pref] [ 13.404053] pci 0000:03:00.0: reg 0x20: [mem 0xf6bd0000-0xf6bdffff 64bit pref] [ 13.405046] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 13.406349] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold [ 13.413765] pci 0000:03:00.1: [14e4:1657] type 00 class 0x020000 [ 13.414069] pci 0000:03:00.1: reg 0x10: [mem 0xf6bc0000-0xf6bcffff 64bit pref] [ 13.415054] pci 0000:03:00.1: reg 0x18: [mem 0xf6bb0000-0xf6bbffff 64bit pref] [ 13.416052] pci 0000:03:00.1: reg 0x20: [mem 0xf6ba0000-0xf6baffff 64bit pref] [ 13.417046] pci 0000:03:00.1: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 13.418341] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold [ 13.425883] pci 0000:03:00.2: [14e4:1657] type 00 class 0x020000 [ 13.426068] pci 0000:03:00.2: reg 0x10: [mem 0xf6b90000-0xf6b9ffff 64bit pref] [ 13.427054] pci 0000:03:00.2: reg 0x18: [mem 0xf6b80000-0xf6b8ffff 64bit pref] [ 13.428052] pci 0000:03:00.2: reg 0x20:xf6b7ffff 64bit pref] [ 13.429048] pci 0000:03:00.2: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 13.430341] pci 0000:03:00.2: PME# supported from D0 D3hot D3cold [ 13.437758] pci 0000:03:00.3: [14e4:1657] type 00 class 0x020000 [ 13.438068] pci 0000:03:00.3: reg 0x10: [mem 0xf6b60000-0xf6b6ffff 64bit pref] [ 13.439052] pci 0000:03:00.3: reg 0x18: [mem 0xf6b50000-0xf6b5ffff 64bit pref] [ 13.440052] pci 0000:03:00.3: reg 0x20: [mem 0xf6b40000-0xf6b4ffff 64bit pref] [ 13.441046] pci 0000:03:00.3: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 13.442339] pci 0000:03:00.3: PME# supported from D0 D3hot D3cold [ 13.449868] pci 0000:00:02.0: PCI bridge to [bus 03] [ 13.450051] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 13.451495] pci 0000:00:02.1: PCI bridge to [bus 12] [ 13.452556] pci 0000:02:00.0: [103c:323b] type 00 class 0x010400 [ 13.453064] pci 0000:02:00.0: reg 0x10: [mem 0xf7d00000-0xf7dfffff 64bit] [ 13.454052] pci 0000:02:00.0: reg 0x18: [mem 0xf7cf0000-0xf7cf03ff 64bit] [ 13.455044] pci 0000:02:00.0: reg 0x20: [io 0x5000-0x50ff] [ 13.456055] pci 0000:02:00.0: reg 0x30000-0x0007ffff pref] [ 13.457045] pci 0000:02:00.0: enabling Extended Tags [ 13.458291] pci 0000:02:00.0: PME# supported from D0 D1 D3hot [ 13.460123] pci 0000:00:02.2: PCI bridge to [bus 02] [ 13.461039] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 13.462037] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 13.463622] pci 0000:00:02.3: PCI bridge to [bus 13] [ 13.481443] pci 0000:00:03.0: PCI bridge to [bus 07] [ 13.482501] pci 0000:00:03.1: PCI bridge to [bus 14] [ 13.483543] pci 0000:00:03.2: PCI bridge to [bus 15] [ 13.484506] pci 0000:00:03.3: PCI bridge to [bus 16] [ 13.485524] pci 0000:00:11.0: PCI bridge to [bus 18] [ 13.486521] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 13.487918] pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 [ 13.488074] pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] [ 13.489053] pci 0000:01:00.0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] [ 13.490052] pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] [ 13.494070] pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 [ 13.495072] pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] [ 13.496054] pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] [ 13.497063] pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] [ 13.498308] pci 0000:01:00.1: Video 
device with shadowed ROM at [mem 0x000c0000-0x000dffff] [ 13.500049] pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 [ 13.501073] pci 0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] [ 13.502052] pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] [ 13.503053] pci 0000:01:00.2: reg 0x18: [mem 0xf6e00000-0xf6efffff] [ 13.504053] pci 0000:01:00.2: reg 0x1c: [mem fff] [ 13.505054] pci 0000:01:00.2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] [ 13.506052] pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] [ 13.507053] pci 0000:01:00.2: reg 0x30: [mem 0x00000000-0x0000ffff pref] [ 13.508324] pci 0000:01:00.2: PME# supported from D0 D3hot D3cold [ 13.510073] pci 0000:01:00.4: [103c:3300] type 00 class 0x0c0300 [ 13.511147] pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] [ 13.516153] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 13.517039] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 13.518072] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 13.519042] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 13.520085] pci_bus 0000:17: extended config space not accessible [ 13.521507] pci 0000:00:1e.0: PCI bridge to [bus 17] (subtractive decode) [ 13.522062] pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) [ 13.523041] pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] (subtractive decode) [ 13.524041] pci 0000:00:1e.0: bridge window [io 0x0000-0x03af window] (subtractive decode) [ 13.525041] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) [ 13000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) [ 13.527043] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) [ 13.528041] pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive decode) [ 13.529041] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) [ 13.542077] ACPI: PCI: Interrupt link LNKA configured for IRQ 5 [ 13.545025] ACPI: PCI: Interrupt link LNKB configured for IRQ 7 [ 13.547996] ACPI: PCI: Interrupt link LNKC configured for IRQ 10 [ 13.549968] ACPI: PCI: Interrupt link LNKD configured for IRQ 10 [ 13.551963] ACPI: PCI: Interrupt link LNKE configured for IRQ 5 [ 13.553961] ACPI: PCI: Interrupt link LNKF configured for IRQ 7 [ 13.555995] ACPI: PCI: Interrupt link LNKG configured for IRQ 0 [ 13.556034] ACPI: PCI: Interrupt link LNKG disabled [ 13.558946] ACPI: PCI: Interrupt link LNKH configured for IRQ 0 [ 13.559034] ACPI: PCI: Interrupt link LNKH disabled [ 13.560650] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f]) [ 13.561072] acpi PNP0A08:01: _OSC: OS supports [ExtendedC Segments MSI EDR HPX-Type3] [ 13.564940] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 13.565036] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 13.570405] PCI host bridge to bus 0000:20 [ 13.571043] pci_bus 0000:20: root bus resource [mem 0xfb000000-0xfbffffff window] [ 13.572041] pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] [ 13.573035] pci_bus 0000:20: root bus resource [bus 20-3f] [ 13.574260] pci 0000:20:00.0: [8086:0e01] type 01 class 0x060400 [ 13.575329] pci 0000:20:00.0: PME# supported from D0 D3hot D3cold [ 13.577236] pci 0000:20:01.0: [8086:0e02] type 01 class 0x060400 [ 13.578348] pci 0000:20:01.0: PME# supported from D0 D3hot D3cold [ 13.581799] pci 0000:20:01.1: [8086:0e03] type 01 class 0x060400 [ 13.582425] pci 0000:20:01.1: PME# 
supported from D0 D3hot D3cold [ 13.585817] pci 0000:20:02.0: [8086:0e04] type 01 class 0x060400 [ 13.586348] pci 0000:20:02.0: PME# supported from D0 D3hot D3cold [ 13.589838] pci 0000:20:02.1: [8086:0e05] type 01 class 0x060400 [ 13.590334] pci 0000:20:02.1: PME# supported from D0 D3hot D3cold [ 13.593784] pci 0000:20:02.2: [8086:0e06] type 01 class 0x060400 [ 13.594335] pci 0000:20:02.2: PME# supported from D0 D3hot D3cold [ 0000:20:02.3: [8086:0e07] type 01 class 0x060400 [ 13.598359] pci 0000:20:02.3: PME# supported from D0 D3hot D3cold [ 13.601819] pci 0000:20:03.0: [8086:0e08] type 01 class 0x060400 [ 13.602112] pci 0000:20:03.0: enabling Extended Tags [ 13.603266] pci 0000:20:03.0: PME# supported from D0 D3hot D3cold [ 13.606843] pci 0000:20:03.1: [8086:0e09] type 01 class 0x060400 [ 13.607334] pci 0000:20:03.1: PME# supported from D0 D3hot D3cold [ 13.610886] pci 0000:20:03.2: [8086:0e0a] type 01 class 0x060400 [ 13.611333] pci 0000:20:03.2: PME# supported from D0 D3hot D3cold [ 13.614784] pci 0000:20:03.3: [8086:0e0b] type 01 class 0x060400 [ 13.615335] pci 0000:20:03.3: PME# supported from D0 D3hot D3cold [ 13.618748] pci 0000:20:04.0: [8086:0e20] type 00 class 0x088000 [ 13.619070] pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] [ 13.621076] pci 0000:20:04.1: [8086:0e21] type 00 class 0x088000 [ 13.622069] pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] [ 13.624095] pci 0000:20:04.2: [8086:0e22] type 00 class 0x088000 [ 13.625069] pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] [ 13.627095] pci 0000:20:04.3: [8086:0e23] type 00 class 0x088000 [ 13.628068] pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] [ 13.0:04.4: [8086:0e24] type 00 class 0x088000 [ 13.631070] pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] [ 13.633094] pci 0000:20:04.5: [8086:0e25] type 00 class 0x088000 [ 13.634069] pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] [ 13.636098] pci 0000:20:04.6: [8086:0e26] type 00 class 0x088000 [ 13.637068] pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] [ 13.639099] pci 0000:20:04.7: [8086:0e27] type 00 class 0x088000 [ 13.640079] pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] [ 13.642101] pci 0000:20:05.0: [8086:0e28] type 00 class 0x088000 [ 13.644065] pci 0000:20:05.2: [8086:0e2a] type 00 class 0x088000 [ 13.646078] pci 0000:20:05.4: [8086:0e2c] type 00 class 0x080020 [ 13.647059] pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] [ 13.649626] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 13.650548] pci 0000:20:01.0: PCI bridge to [bus 21] [ 13.651501] pci 0000:20:01.1: PCI bridge to [bus 22] [ 13.652539] pci 0000:20:02.0: PCI bridge to [bus 23] [ 13.653516] pci 0000:20:02.1: PCI bridge to [bus 24] [ 13.654520] pci 0000:20:02.2: PCI bridge [ 13.655519] pci 0000:20:02.3: PCI bridge to [bus 26] [ 13.656521] pci 0000:20:03.0: PCI bridge to [bus 27] [ 13.657513] pci 0000:20:03.1: PCI bridge to [bus 28] [ 13.658612] pci 0000:20:03.2: PCI bridge to [bus 29] [ 13.659539] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 13.667182] iommu: Default domain type: Translated [ 13.668035] iommu: DMA domain TLB invalidation policy: lazy mode [ 13.671905] SCSI subsystem initialized [ 13.672582] ACPI: bus type USB registered [ 13.673481] usbcore: registered new interface driver usbfs [ 13.674245] usbcore: registered new interface driver hub [ 13.675674] usbcore: registered new device driver usb [ 13.676836] pps_core: LinuxPPS API ver. 
1 registered [ 13.677034] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 13.678100] PTP clock support registered [ 13.681098] EDAC MC: Ver: 3.0.0 [ 13.687640] NetLabel: Initializing [ 13.688034] NetLabel: domain hash size = 128 [ 13.689032] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 13.690316] NetLabel: unlabeled traffic allowed by default [ 13.691035] PCI: Using ACPI for IRQ routing [ 13.692918] PCI: Discovered peer bus 1f [ 13.693686] PCI host bridge to bus 0000:1f [ 13.694036] pci_bus 0000:1f: Unknown NUMA node; performance will be red43] pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] [ 13.696041] pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 13.697036] pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] [ 13.698034] pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 13.699088] pci 0000:1f:08.0: [8086:0e80] type 00 class 0x088000 [ 13.700754] pci 0000:1f:09.0: [8086:0e90] type 00 class 0x088000 [ 13.701734] pci 0000:1f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 13.702710] pci 0000:1f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 13.703715] pci 0000:1f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 13.704744] pci 0000:1f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 13.705715] pci 0000:1f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 13.706704] pci 0000:1f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 13.707795] pci 0000:1f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 13.708693] pci 0000:1f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 13.709712] pci 0000:1f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 13.710710] pci 0000:1f:0d.0: [8086:0ee1] tx088000 [ 13.711723] pci 0000:1f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 13.712706] pci 0000:1f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 13.713712] pci 0000:1f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 13.714707] pci 0000:1f:0e.1: [8086:0e30] type 00 class 0x110100 [ 13.715740] pci 0000:1f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 13.716886] pci 0000:1f:0f.1: [8086:0e71] type 00 class 0x088000 [ 13.717828] pci 0000:1f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 13.718811] pci 0000:1f:0f.3: [8086:0eab] type 00 class 0x088000 [ 13.719811] pci 0000:1f:0f.4: [8086:0eac] type 00 class 0x088000 [ 13.720811] pci 0000:1f:0f.5: [8086:0ead] type 00 class 0x088000 [ 13.721840] pci 0000:1f:10.0: [8086:0eb0] type 00 class 0x088000 [ 13.722819] pci 0000:1f:10.1: [8086:0eb1] type 00 class 0x088000 [ 13.723819] pci 0000:1f:10.2: [8086:0eb2] type 00 class 0x088000 [ 13.724897] pci 0000:1f:10.3: [8086:0eb3] type 00 class 0x088000 [ 13.725810] pci 0000:1f:10.4: [8086:0eb4] type 00 class 0x088000 [ 13.726810] pci 0000:1f:10.5: [8086:0eb5] type 00 class 0x088000 [ 13.727815] pci 0000:1f:10.6: [8086:0eb6] type 00 class 0x088000 [ 13.728814] pci 0000:1f:10.7: [8086:0eb7] type 00 class 0x088000pci 0000:1f:13.0: [8086:0e1d] type 00 class 0x088000 [ 13.730773] pci 0000:1f:13.1: [8086:0e34] type 00 class 0x110100 [ 13.731715] pci 0000:1f:13.4: [8086:0e81] type 00 class 0x088000 [ 13.732706] pci 0000:1f:13.5: [8086:0e36] type 00 class 0x110100 [ 13.733795] pci 0000:1f:16.0: [8086:0ec8] type 00 class 0x088000 [ 13.734715] pci 0000:1f:16.1: [8086:0ec9] type 00 class 0x088000 [ 13.735726] pci 0000:1f:16.2: [8086:0eca] type 00 class 0x088000 [ 13.736719] pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f [ 13.737036] pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 13.739045] 
PCI: Discovered peer bus 3f [ 13.740558] PCI host bridge to bus 0000:3f [ 13.741035] pci_bus 0000:3f: Unknown NUMA node; performance will be reduced [ 13.742039] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] [ 13.743050] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 13.744035] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff] [ 13.745036] pci_bus 0000:3f: busn_res: can not insert [bus 3f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 13.746077] pci 0000:3f:08.0: [8086:0e80] type 00 class 13.747737] pci 0000:3f:09.0: [8086:0e90] type 00 class 0x088000 [ 13.748730] pci 0000:3f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 13.749788] pci 0000:3a.1: [8086:0ec1] type 00 class 0x088000 [ 13.750705] pci 0000:3f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 13.751729] pci 0000:3f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 13.752711] pci 0000:3f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 13.753731] pci 0000:3f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 13.754732] pci 0000:3f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 13.755714] pci 0000:3f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 13.756713] pci 0000:3f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 13.757731] pci 0000:3f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 13.758871] pci 0000:3f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 13.759719] pci 0000:3f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 13.760715] pci 0000:3f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 13.761728] pci 0000:3f:0e.1: [8086:0e30] type 00 class 0x110100 [ 13.762735] pci 0000:3f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 13.763838] pci 0000:3f:0f.1: [8086:0e71] type 00 class 0x088000 [ 13.764842] pci 0000:3f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 13.765831] pci 0000:3f:0f.3: [8086:0eab] type 00 class 0x088000 [ 13.766899] pci 0000:30eac] type 00 class 0x088000 [ 13.768417] pci 0000:3f:0f.5: [8086:0ead] type 00 class 0x088000 [ 13.770395] pci 0000:3f:10.0: [8086:0eb0] type 00 class 0x088000 [ 13.772424] pci 0000:3f:10.1: [8086:0eb1] type 00 class 0x088000 [ 13.774383] pci 0000:3f:10.2: [8086:0eb2] type 00 class 0x088000 [ 13.776448] pci 0000:3f:10.3: [8086:0eb3] type 00 class 0x088000 [ 13.778375] pci 0000:3f:10.4: [8086:0eb4] type 00 class 0x088000 [ 13.780392] pci 0000:3f:10.5: [8086:0eb5] type 00 class 0x088000 [ 13.782392] pci 0000:3f:10.6: [8086:0eb6] type 00 class 0x088000 [ 13.784547] pci 0000:3f:10.7: [8086:0eb7] type 00 class 0x088000 [ 13.786377] pci 0000:3f:13.0: [8086:0e1d] type 00 class 0x088000 [ 13.788254] pci 0000:3f:13.1: [8086:0e34] type 00 class 0x110100 [ 13.789722] pci 0000:3f:13.4: [8086:0e81] type 00 class 0x088000 [ 13.790734] pci 0000:3f:13.5: [8086:0e36] type 00 class 0x110100 [ 13.791717] pci 0000:3f:16.0: [8086:0ec8] type 00 class 0x088000 [ 13.792734] pci 0000:3f:16.1: [8086:0ec9] type 00 class 0x088000 [ 13.793708] pci 0000:3f:16.2: [8086:0eca] type 00 class 0x088000 [ 13.794819] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f [ 13.795036] pci_bus 0000:3f: busn_res: can not insert [bus 3f] u0-ff] (conflicts with (null) [bus 20-3f]) [ 13.809212] pci 0000:01:00.1: vgaarb: setting as boot VGA device [ 13.810025] pci 0000:01:00.1: vgaarb: bridge control possible [ 13.810025] pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [ 13.810234] vgaarb: loaded [ 13.811550] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 [ 13.813033] hpet0: 8 comparators, 64-bit 14.318180 MHz counter [ 13.817903] clocksource: Switched to clocksource tsc-early [ 
14.304148] VFS: Disk quotas dquot_6.6.0 [ 14.305893] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 14.310088] pnp: PnP ACPI init [ 14.317129] system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved [ 14.324499] system 00:01: [io 0x0408-0x040f] has been reserved [ 14.326694] system 00:01: [io 0x04d0-0x04d1] has been reserved [ 14.328896] system 00:01: [io 0x0310-0x0315] has been reserved [ 14.331116] system 00:01: [io 0x0316-0x0317] has been reserved [ 14.333261] system 00:01: [io 0x0700-0x071f] has been reserved [ 14.335386] system 00:01: [io 0x0880-0x08ff] has been reserved [ 14.337532] system 00:01: [io 0x0900-0x097f] has been reserved [ 14.339634] system 00:01: [io 0x0cd4-0x0cd7] has been reserved [ 14.341746] system 00:01: [io 0x0cd0-0x0cd3] has been reserved [ 14.343938] system 00:01: [io 0x0f50-0x0f58] has been reserved [ 14.346087] system 00:01: [io 0x0ca0-0x0ca1] has been reserved [ 14.348294] system 00:01: [io 0x0ca4-0x0ca5] has been reserved [ 14.350431] system 00:01: [io 0x02f8-0x02ff] has been reserved [ 14.352562] system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved [ 14.354920] system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved [ 14.357284] system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved [ 14.359666] system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 14.362579] system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved [ 14.365003] system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved [ 14.367414] system 00:01: [mem 0xff800000-0xffffffff] has been reserved [ 14.401546] system 00:06: [mem 0xfbefe000-0xfbefffff] could not be reserved [ 14.407117] pnp: PnP ACPI: found 7 devices [ 14.483641] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 14.488383] NET: Registered PF_INET protocol family [ 14.493248] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 14.512119] tcp_listen_portaddr_hash hash table entries: 16384 (order: 8, 1310720 bytes, vmalloc) [ 14.517784] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, vmalloc) [ 14.522649] TCP established hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 14.529346] TCP bind hash table entries: 65536 (order: 10, 5242880 bytes, vmalloc hugepage) [ 14.539197] TCP: Hash tables configured (established 262144 bind 65536) [ 14.557193] MPTCP token hash table entries: 32768 (order: 9, 3145728 bytes, vmalloc) [ 14.566230] UDP hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 14.575433] UDP-Lite hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 14.589391] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 14.591630] NET: Registered PF_XDP protocol family [ 14.593481] pci 0000:00:02.0: BAR 14: assigned [mem 0xf4000000-0xf40fffff] [ 14.595941] pci 0000:04:00.0: BAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] [ 14.598562] pci 0000:00:01.0: PCI bridge to [bus 04] [ 14.600346] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 14.602514] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 14.605063] pci 0000:00:01.1: PCI bridge to [bus 11] [ 14.606930] pci 0000:03:00.0: BAR 6: assigned [mem 0xf4000000-0xf403ffff pref] [94] pci 0000:03:00.1: BAR 6: assigned [mem 0xf4040000-0xf407ffff pref] [ 14.912180] pci 0000:03:00.2: BAR 6: assigned [mem 0xf4080000-0xf40bffff pref] [ 14.914736] pci 0000:03:00.3: BAR 6: assigned [mem 0xf40c0000-0xf40fffff pref] [ 14.917266] pci 0000:00:02.0: PCI bridge to [bus 03] [ 14.919048] pci 0000:00:02.0: bridge window [mem 
0xf4000000-0xf40fffff] [ 14.921376] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 14.924025] pci 0000:00:02.1: PCI bridge to [bus 12] [ 14.925801] pci 0000:02:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] [ 14.928363] pci 0000:00:02.2: PCI bridge to [bus 02] [ 14.930104] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 14.932200] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 14.934522] pci 0000:00:02.3: PCI bridge to [bus 13] [ 14.936236] pci 0000:00:03.0: PCI bridge to [bus 07] [ 14.938067] pci 0000:00:03.1: PCI bridge to [bus 14] [ 14.939821] pci 0000:00:03.2: PCI bridge to [bus 15] [ 14.941540] pci 0000:00:03.3: PCI bridge to [bus 16] [ 14.943250] pci 0000:00:11.0: PCI bridge to [bus 18] [ 14.945060] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 14.946823] pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] [ 14.949401] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 15.450943] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 15.453108] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 15.455440] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 15.458173] pci 0000:00:1e.0: PCI bridge to [bus 17] [ 15.459911] pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 15.462303] pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] [ 15.464412] pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] [ 15.466521] pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] [ 15.468704] pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] [ 15.470820] pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] [ 15.472934] pci_bus 0000:00: resource 10 [io 0x03c0-0x03df window] [ 15.475065] pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bffff window] [ 15.477456] pci_bus 0000:04: resource 0 [io 0x6000-0x6fff] [ 15.479403] pci_bus 0000:04: resource 1 [mem 0xf7e00000-0xf7ffffff] [ 15.481565] pci_bus 0000:03: resource 1 [mem 0xf4000000-0xf40fffff] [ 15.483722] pci_bus 0000:03: resource 2 [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 15.985986] pci_bus 0000:02: resource 0 [io 0x5000-0x5fff] [ 15.988010] pci_bus 0000:02: resource 1 [mem 0xf7c00000-0xf7dfffff] [ 15.990174] pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] [ 15.992095] pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] [ 15.994273] pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] [ 15.996741] pci_bus 0000:17: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 15.999159] pci_bus 0000:17: resource 5 [io 0x1000-0x7fff window] [ 16.001274] pci_bus 0000:17: resource 6 [io 0x0000-0x03af window] [ 16.003381] pci_bus 0000:17: resource 7 [io 0x03e0-0x0cf7 window] [ 16.005483] pci_bus 0000:17: resource 8 [io 0x0d00-0x0fff window] [ 16.007668] pci_bus 0000:17: resource 9 [io 0x03b0-0x03bb window] [ 16.009775] pci_bus 0000:17: resource 10 [io 0x03c0-0x03df window] [ 16.011923] pci_bus 0000:17: resource 11 [mem 0x000a0000-0x000bffff window] [ 16.016886] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 16.018820] pci 0000:20:01.0: PCI bridge to [bus 21] [ 16.020638] pci 0000:20:01.1: PCI bridge to [bus 22] [ 16.022449] pci 0000:20:02.0: PCI bridge to [bus 23] [ 16.024272] pci 0000:20:02.1: PCI bridge to [bus 24] [ 16.425966] pci 0000:20:02.2: PCI bridge to [bus 25] [ 16.427841] pci 0000:20:02.3: PCI bridge to [bus 26] [ 16.429602] pci 0000:20:03.0: PCI bridge to [bus 27] [ 16.431340] pci 0000:20:03.1: PCI bridge to [bus 28] [ 16.433098] pci 0000:20:03.2: PCI bridge to [bus 29] [ 16.434843] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 16.436600] pci_bus 0000:20: resource 4 [mem 
0xfb000000-0xfbffffff window] [ 16.439063] pci_bus 0000:20: resource 5 [io 0x8000-0xffff window] [ 16.441467] pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] [ 16.443446] pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 16.445766] pci_bus 0000:3f: resource 4 [io 0x0000-0xffff] [ 16.447769] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 16.450241] pci 0000:00:05.0: disabled boot interrupts on device [8086:0e28] [ 16.479648] pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x290 took 26232 usecs [ 16.509102] pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x290 took 26157 usecs [ 16.526371] pci 0000:01:00.4: quirk_usb_early_handoff+0x0/0x290 took 14253 usecs [ 16.529604] pci 0000:20:05.0: disabled boot interrupts on device [8086:0e28] [ 16.532402] PCI: CLS 64 bytes, default 64 [ 16.537281] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 16.537720] Trying to unpack rootfs image as initramfs... [ 16.539694] software IO TLB: mapped [mem 0x0000000039000000-0x000000003d000000] (64MB) [ 16.545pe thunderbolt registered [ 17.167639] Initialise system trusted keyrings [ 17.169624] Key type blacklist registered [ 17.172667] workingset: timestamp_bits=36 max_order=23 bucket_order=0 [ 17.289476] zbud: loaded [ 17.303646] integrity: Platform Keyring initialized [ 17.320122] NET: Registered PF_ALG protocol family [ 17.321975] xor: automatically using best checksumming function avx [ 17.324439] Key type asymmetric registered [ 17.325916] Asymmetric key parser 'x509' registered [ 17.327786] Running certificate verification selftests [ 17.438513] cryptomgr_test (211) used greatest stack depth: 28528 bytes left [ 17.544984] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [ 17.552181] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [ 17.555793] io scheduler mq-deadline registered [ 17.557725] io scheduler kyber registered [ 17.560266] io scheduler bfq registered [ 17.569429] atomic64_test: passed for x86-64 platform with CX8 and with SSE [ 17.824443] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 17.830122] ACPI: \_PR_.CP00: Found 2 idle states [ 17.882412] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 17.887265] ACPI: button: Power Button [PWRF] [ 17.932162] thermal LNXTHERM:00: registered as thermal_zone0 [ 17.934221] ACPI: thermal: Thermal Zone [THM0] (8 C) [ 17.937165] ERST: Error Record Serialization Table (ERST) support is initialized. [ 17.940065] pstore: Registered erst as persistent store backend [ 17.946218] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
[ 17.952633] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 17.955896] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 17.961820] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A [ 17.978212] Non-volatile memory driver v1.3 [ 18.045235] rdac: device handler registered [ 18.047839] hp_sw: device handler registered [ 18.049537] emc: device handler registered [ 18.052441] alua: device handler registered [ 18.054481] tsc: Refined TSC clocksource calibration: 2094.949 MHz [ 18.057257] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328cf0a17, max_idle_ns: 440795250041 ns [ 18.059419] libphy: Fixed MDIO Bus: probed [ 18.062395] clocksource: Switched to clocksource tsc [ 18.064309] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 18.066744] ehci-pci: EHCI PCI platform driver [ 18.084429] ehci-pci 0000:00:1a.0: EHCI Host Controller [ 18.089487] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 [ 18.092383] ehci-pci 0000:00:1a.0: debug port 2 [ 18.099364] ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 [ 18.108210] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 [ 18.113517] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 18.116530] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.119138] usb usb1: Product: EHCI Host Controller [ 18.120908] usb usb1: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 18.123848] usb usb1: SerialNumber: 0000:00:1a.0 [ 18.131591] hub 1-0:1.0: USB hub found [ 18.133655] hub 1-0:1.0: 2 ports detected [ 18.157698] ehci-pci 0000:00:1d.0: EHCI Host Controller [ 18.162810] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 18.165530] ehci-pci 0000:00:1d.0: debug port 2 [ 18.171828] ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 [ 18.181171] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 [ 18.185241] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 18.188286] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.190828] usb usb2: Product: EHCI Host Controller [ 18.192593] usb usb2: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 18.195503] usb usb2: SerialNumber: 0000:00:1d.0 [ 18.201519] hub 2-0:1.0: USB hub found [ 18.203250] hub 2-0:1.0: 2 ports detected [ 18.211617] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 18.214022] ohci-pci: OHCI PCI platform driver [ 18.216478] uhci_hcd: USB Universal Host Controller Interface driver [ 18.225431] uhci_hcd 0000:01:00.4: UHCI Host Controller [ 18.230195] uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 [ 18.233005] uhci_hcd 0000:01:00.4: detected 8 ports [ 18.234769] uhci_hcd 0000:01:00.4: port count misdetected? 
forcing to 2 ports [ 18.237789] uhci_hcd 0000:01:00.4: irq 47, io port 0x00003c00 [ 18.242103] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [ 18.245624] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 18.248396] usb usb3: Product: UHCI Host Controller [ 18.250265] usb usb3: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug uhci_hcd [ 18.253402] usb usb3: SerialNumber: 0000:01:00.4 [ 18.260972] hub 3-0:1.0: USB hub found [ 18.263394] hub 3-0:1.0: 2 ports detected [ 18.269926] usbcore: registered new interface driver usbserial_generic [ 18.272513] usbserial: USB Serial support registered for generic [ 18.275434] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 [ 18.281443] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 18.283539] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 18.287160] mousedev: PS/2 mouse device common for all mice [ 18.290941] rtc_cmos 00:03: RTC can wake from S4 [ 18.296620] rtc_cmos 00:03: registered as rtc0 [ 18.298482] rtc_cmos 00:03: setting system clock to 2023-02-03T04:34:14 UTC (1675398854) [ 18.301809] rtc_cmos 00:03: alarms up to one day, 114 bytes nvram, hpet irqs [ 18.314141] intel_pstate: Intel P-state driver initializing [ 18.349108] hid: raw HID events driver (C) Jiri Kosina [ 18.351932] usbcore: registered new interface driver usbhid [ 18.353922] usbhid: USB HID core driver [ 18.356332] drop_monitor: Initializing network drop monitor service [ 18.385136] usb 1-1: new high-speed USB device number 2 using ehci-pci [ 18.390872] Initializing XFRM netlink socket [ 18.396527] NET: Registered PF_INET6 protocol family [ 18.412185] Segment Routing with IPv6 [ 18.413646] NET: Registered PF_PACKET protocol family [ 18.416405] mpls_gso: MPLS GSO support [ 18.448214] usb 2-1: new high-speed USB device number 2 using ehci-pci [ 18.458624] microcode: sig=0x306e4, pf=0x1, revision=0x42e [ 18.463151] microcode: Microcode Update Driver: v2.2. [ 18.463175] IPI shorthand broadcast: enabled [ 18.466565] AVX version of gcm_enc/dec engaged. 
[ 18.468698] AES CTR mode by8 optimization enabled [ 18.476992] sched_clock: Marking stable (12238195448, 6238566841)->(28246512860, -9769750571) [ 18.515291] registered taskstats version 1 [ 18.522868] usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 18.525711] Loading compiled-in X.509 certificates [ 18.526159] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 18.533674] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 18.537619] hub 1-1:1.0: USB hub found [ 18.539336] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321770' [ 18.540114] hub 1-1:1.0: 6 ports detected [ 18.844526] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [ 18.861266] usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 18.864157] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 18.869228] hub 2-1:1.0: USB hub found [ 18.871060] hub 2-1:1.0: 8 ports detected [ 18.895981] Freeing initrd memory: 35636K [ 18.913789] cryptomgr_test (246) used greatest stack depth: 27672 bytes left [ 18.933240] zswap: loaded using pool lzo/zbud [ 18.940770] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table helpers [ 19.882597] page_owner is disabled [ 19.887788] pstore: Using crash dump compression: deflate [ 19.890644] Key type big_key registered [ 19.923315] modprobe (249) used greatest stack depth: 27240 bytes left [ 19.950157] usb 2-1.3: new high-speed USB device number 3 using ehci-pci [ 19.955228] Key type encrypted registered [ 19.956986] ima: No TPM chip found, activating TPM-bypass! [ 19.959157] Loading compiled-in module X.509 certificates [ 19.962517] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 19.966484] ima: Allocated hash algorithm: sha256 [ 19.968434] ima: No architecture policies found [ 19.970572] evm: Initialising EVM extended attributes: [ 19.972842] evm: security.selinux [ 19.974063] evm: security.SMACK64 (disabled) [ 19.975560] evm: security.SMACK64EXEC (disabled) [ 19.977187] evm: security.SMACK64TRANSMUTE (disabled) [ 19.978981] evm: security.SMACK64MMAP (disabled) [ 19.980633] evm: security.apparmor (disabled) [ 19.982136] evm: security.ima [ 19.983179] evm: security.capability [ 19.984469] evm: HMAC attrs: 0x1 [ 20.040680] usb 2-1.3: New USB device found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 [ 20.043696] usb 2-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 20.050488] hub 2-1.3:1.0: USB hub found [ 20.052957] hub 2-1.3:1.0: 2 ports detected [ 20.850767] cryptomgr_test (356) used greatest stack depth: 27032 bytes left [ 21.149236] PM: Magic number: 11:649:564 [ 21.211938] Freeing unused decrypted memory: 2036K [ 21.225201] Freeing unused kernel image (initmem) memory: 5300K [ 21.226880] Write protecting the kernel read-only data: 57344k [ 21.238390] Freeing unused kernel image (text/rodata gap) memory: 2036K [ 21.243819] Freeing unused kernel image (rodata/data gap) memory: 1400K [ 21.392853] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 21.393277] x86/mm: Checking user space page tables [ 21.477186] x86/mm: Checked W+X mappings: passed, no W+X pages found. 
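The certificate and keyring messages above (Platform Keyring initialized, the CentOS Stream kernel signing key and the Red Hat driver-update and kpatch certificates loaded) can be cross-checked on the running system. A minimal sketch, assuming /proc/keys is readable (this normally requires root) and Python 3 is available; the field layout follows the standard /proc/keys format:

    #!/usr/bin/env python3
    # List kernel keyrings and asymmetric keys, e.g. .builtin_trusted_keys and
    # .platform, which the X.509 certificates in the boot log are loaded into.
    with open("/proc/keys") as f:
        for line in f:
            fields = line.split(maxsplit=8)
            # serial, flags, usage, timeout, perms, uid, gid, type, description
            if len(fields) == 9 and fields[7] in ("keyring", "asymmetric"):
                print(fields[7], fields[8].strip())

Run as root, this lists the keyrings and the loaded certificates whose descriptions match the "Loaded X.509 cert" lines above.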
[ 21.477739] Run /init as init process [ 21.647006] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 21.668147] systemd[1]: Detected architecture x86-64. [ 21.668518] systemd[1]: Running in initrd. Welcome to CentOS Stream 9 dracut-057-20.git20221213.el9 (Initramfs) ! [ 21.676540] systemd[1]: Hostname set to . [ 22.928788] systemd[1]: Queued start job for default target Initrd Default Target. [ 22.949360] systemd[1]: Created slice Slice /system/systemd-hibernate-resume. [ OK ] Created slice Slice /system/systemd-hibernate-resume . [ 22.956626] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 22.960441] systemd[1]: Reached target Initrd /usr File System. [ OK ] Reached target Initrd /usr File System . [ 22.965394] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 22.967257] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 22.971309] systemd[1]: Reached target Swaps. [ OK ] Reached target Swaps . [ 22.975390] systemd[1]: Reached target Timer Units. [ OK ] Reached target Timer Units . [ 22.981711] systemd[1]: Listening on D-Bus System Message Bus Socket. [ OK ] Listening on D-Bus System Message Bus Socket . [ 22.988749] systemd[1]: Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket (/dev/log) . [ 22.995663] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket . [ 22.999684] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 23.004940] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 23.009399] systemd[1]: Reached target Socket Units. [ OK ] Reached target Socket Units . [ 23.040851] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 23.097620] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 23.106260] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 23.136966] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 23.169144] systemd[1]: Starting Create System Users... Starting Create System Users ... [ 23.201357] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console ... [ 23.241921] systemd[1]: Finished Create List of Static Device Nodes. [ OK ] Finished Create List of Static Device Nodes . [ 23.300469] systemd[1]: Finished Apply Kernel Variables. [ OK ] Finished Apply Kernel Variables . [ 23.436219] systemd[1]: Finished Create System Users. [ OK ] Finished Create System Users . [ 23.467663] systemd[1]: Starting Create Static Device Nodes in /dev... Starting Create Static Device Nodes in /dev ... [ 23.663709] systemd[1]: Finished Create Static Device Nodes in /dev. [ OK ] Finished Create Static Device Nodes in /dev . [ 23.866744] systemd[1]: Finished Setup Virtual Console. [ OK ] Finished Setup Virtual Console . [ 23.874800] systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met. [ 23.909637] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook ... 
[ 24.597794] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . Starting Create Volatile Files and Directories ... [ OK ] Finished Create Volatile Files and Directories . [ OK ] Finished dracut cmdline hook . Starting dracut pre-udev hook ... [ 26.278846] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. [ 26.280352] device-mapper: uevent: version 1.0.3 [ 26.283943] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com [ OK ] Finished dracut pre-udev hook . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . Starting Coldplug All udev Devices ... [ * ] (1 of 3) A start job is running for…l360pgen8--08-swap (6s / no limit) M [ * * ] (1 of 3) A start job is running for…l360pgen8--08-swap (6s / no limit) M [ * * * ] (1 of 3) A start job is running for…l360pgen8--08-swap (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…g All udev Devices (8s / no limit) M [ * * * ] (2 of 3) A start job is running for…g All udev Devices (8s / no limit) M [ OK ] Finished Coldplug All udev Devices . [ OK ] Reached target Network . Starting dracut initqueue hook ... [ 31.803485] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized [ 31.810135] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 [ 31.810563] hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) [ 31.811429] hpwdt 0000:01:00.0: pretimeout: on. [ 31.812735] hpwdt 0000:01:00.0: kdumptimeout: -1. [ 31.860639] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:02:00.0 [ 31.861193] HP HPSA Driver (v 3.4.20-200) [ 31.861587] hpsa 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control [ 31.931765] tg3 0000:03:00.0 eth0: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c4 [ 31.932592] tg3 0000:03:00.0 eth0: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 31.933970] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 31.934696] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 32.018400] hpsa 0000:02:00.0: Logical aborts not supported [ 32.018760] hpsa 0000:02:00.0: HP SSD Smart Path aborts not supported [ 32.019555] tg3 0000:03:00.1 eth1: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c5 [ 32.020250] tg3 0000:03:00.1 eth1: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 32.021500] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 32.022068] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 32.058128] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 32.097407] scsi host0: hpsa [ 32.103854] hpsa can't handle SMP requests [ 32.111304] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:04:00.0 [ 32.111926] tg3 0000:03:00.2 eth2: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c6 [ 32.112746] tg3 0000:03:00.2 eth2: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 32.113908] tg3 0000:03:00.2 eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 32.114696] tg3 0000:03:00.2 eth2: dma_rwctrl[00000001] dma_mask[64-bit] [ 32.115469] hpsa 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control [ 32.123234] hpsa 0000:02:00.0: scsi 0:0:0:0: added RAID HP P420i controller SSDSmartPathCap- En- Exp=1 [ 32.123995] hpsa 0000:02:00.0: scsi 0:0:1:0: 
masked Direct-Access ATA MM0500GBKAK PHYS DRV SSDSmartPathCap- En- Exp=0 [ 32.125003] hpsa 0000:02:00.0: scsi 0:1:0:0: added Direct-Access HP LOGICAL VOLUME RAID-0 SSDSmartPathCap- En- Exp=1 [ 32.128411] hpsa can't handle SMP requests [ 32.131472] scsi 0:0:0:0: RAID HP P420i 8.32 PQ: 0 ANSI: 5 [ 32.136008] scsi 0:1:0:0: Direct-Access HP LOGICAL VOLUME 8.32 PQ: 0 ANSI: 5 [ 32.165706] tg3 0000:03:00.3 eth3: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c7 [ 32.166447] tg3 0000:03:00.3 eth3: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 32.167514] tg3 0000:03:00.3 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 32.168012] tg3 0000:03:00.3 eth3: dma_rwctrl[00000001] dma_mask[64-bit] [ 32.243070] hpsa 0000:04:00.0: Logical aborts not supported [ 32.243425] hpsa 0000:04:00.0: HP SSD Smart Path aborts not supported [ 32.312149] scsi host2: ata_piix [ 32.345314] scsi host3: ata_piix [ 32.348531] ata1: SATA max UDMA/133 cmd 0x4000 ctl 0x4008 bmdma 0x4020 irq 17 [ 32.349486] ata2: SATA max UDMA/133 cmd 0x4010 ctl 0x4018 bmdma 0x4028 irq 17 [ 32.382617] scsi host1: hpsa [ 32.399889] hpsa can't handle SMP requests [ 32.401808] tg3 0000:03:00.0 eno1: renamed from eth0 [ 32.410911] hpsa 0000:04:00.0: scsi 1:0:0:0: added RAID HP P421 controller SSDSmartPathCap- En- Exp=1 [ 32.411605] hpsa 0000:04:00.0: scsi 1:0:1:0: masked Enclosure PMCSIERA SRCv8x6G enclosure SSDSmartPathCap- En- Exp=0 [ 32.415384] hpsa can't handle SMP requests [ 32.417430] scsi 1:0:0:0: RAID HP P421 8.32 PQ: 0 ANSI: 5 [ 32.420398] tg3 0000:03:00.1 eno2: renamed from eth1 [ 32.433014] tg3 0000:03:00.2 eno3: renamed from eth2 [ 32.449790] tg3 0000:03:00.3 eno4: renamed from eth3 [ 32.517487] scsi 0:0:0:0: Attached scsi generic sg0 type 12 [ 32.519372] scsi 0:1:0:0: Attached scsi generic sg1 type 0 [ 32.520945] scsi 1:0:0:0: Attached scsi generic sg2 type 12 [ 32.619091] sd 0:1:0:0: [sda] 976707632 512-byte logical blocks: (500 GB/466 GiB) [ 32.620737] sd 0:1:0:0: [sda] Write Protect is off [ 32.622425] sd 0:1:0:0: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA [ 32.623153] sd 0:1:0:0: [sda] Preferred minimum I/O size 262144 bytes [ 32.623645] sd 0:1:0:0: [sda] Optimal transfer size 262144 bytes [ 32.688717] sda: sda1 sda2 [ 32.693858] sd 0:1:0:0: [sda] Attached SCSI disk [ 33.390208] ata2.00: failed to resume link (SControl 0) [ 33.702221] ata1.01: failed to resume link (SControl 0) [ 33.713847] ata1.00: SATA link down (SStatus 0 SControl 300) [ 33.714687] ata1.01: SATA link down (SStatus 4 SControl 0) [ 34.430201] ata2.01: failed to resume link (SControl 0) [ 34.441736] ata2.00: SATA link down (SStatus 4 SControl 0) [ 34.442131] ata2.01: SATA link down (SStatus 4 SControl 0) [ 36.328770] cp (703) used greatest stack depth: 26392 bytes left [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-root . [ OK ] Reached target Initrd Root Device . [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-swap . Starting Resume from hiber…cs_hpe--dl360pgen8--08-swap ... [ OK ] Finished Resume from hiber…r/cs_hpe--dl360pgen8--08-swap . [ OK ] Reached target Preparation for Local File Systems . [ OK ] Reached target Local File Systems . [ OK ] Reached target System Initialization . [ OK ] Reached target Basic System . [ OK ] Finished dracut initqueue hook . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Reached target Remote File Systems . Starting File System Check…cs_hpe--dl360pgen8--08-root ... 
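The hpsa and sd messages above end with sda (partitions sda1 and sda2) attached from the P420i logical volume. A minimal sketch, assuming sysfs is mounted at /sys as usual, that enumerates block devices the same way:

    #!/usr/bin/env python3
    # Enumerate block devices via sysfs; sizes are reported in 512-byte sectors,
    # matching the "976707632 512-byte logical blocks" line for sda above.
    import os

    for dev in sorted(os.listdir("/sys/block")):
        with open(f"/sys/block/{dev}/size") as f:
            sectors = int(f.read().strip())
        model_path = f"/sys/block/{dev}/device/model"   # present for SCSI disks
        model = ""
        if os.path.exists(model_path):
            with open(model_path) as f:
                model = f.read().strip()
        print(f"{dev}: {sectors * 512 / 1e9:.1f} GB {model}".rstrip())

On this host it would be expected to print sda at roughly 500.1 GB (matching the 500 GB/466 GiB reported above) plus the dm-* devices backing the LVM volumes.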
[ 38.748114] fsck (738) used greatest stack depth: 25128 bytes left [ OK ] Finished File System Check…r/cs_hpe--dl360pgen8--08-root . Mounting /sysroot ... [ 40.237953] SGI XFS with ACLs, security attributes, scrub, verbose warnings, quota, no debug enabled [ 40.311380] XFS (dm-0): Mounting V5 Filesystem [ * * * ] A start job is running for /sysroot (18s / no limit) M [ * * ] A start job is running for /sysroot (18s / no limit) [ 41.530273] XFS (dm-0): Starting recovery (logdev: internal) [ 41.738568] XFS (dm-0): Ending recovery (logdev: internal) [ 41.789137] mount (740) used greatest stack depth: 24280 bytes left M [ OK ] Mounted /sysroot . [ OK ] Reached target Initrd Root File System . Starting Mountpoints Configured in the Real Root ... [ 41.955505] systemd-fstab-g (752) used greatest stack depth: 23976 bytes left [ OK ] Finished Mountpoints Configured in the Real Root . [ OK ] Reached target Initrd File Systems . [ OK ] Reached target Initrd Default Target . Starting dracut pre-pivot and cleanup hook ... [ OK ] Finished dracut pre-pivot and cleanup hook . Starting Cleaning Up and Shutting Down Daemons ... [ OK ] Stopped target Network . [ OK ] Stopped target Timer Units . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped dracut pre-pivot and cleanup hook . [ OK ] Stopped target Initrd Default Target . [ OK ] Stopped target Basic System . [ OK ] Stopped target Initrd Root Device . [ OK ] Stopped target Initrd /usr File System . [ OK ] Stopped target Path Units . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped target Remote File Systems . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target Slice Units . [ OK ] Stopped target Socket Units . [ OK ] Stopped target System Initialization . [ OK ] Stopped target Local File Systems . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Stopped target Swaps . [ OK ] Stopped dracut initqueue hook . [ OK ] Stopped Apply Kernel Variables . [ OK ] Stopped Create Volatile Files and Directories . [ OK ] Stopped Coldplug All udev Devices . Stopping Rule-based Manage…for Device Events and Files ... [ OK ] Stopped Setup Virtual Console . [ OK ] Finished Cleaning Up and Shutting Down Daemons . [ OK ] Stopped Rule-based Manager for Device Events and Files . [ OK ] Closed udev Control Socket . [ OK ] Closed udev Kernel Socket . [ OK ] Stopped dracut pre-udev hook . [ OK ] Stopped dracut cmdline hook . Starting Cleanup udev Database ... [ OK ] Stopped Create Static Device Nodes in /dev . [ OK ] Stopped Create List of Static Device Nodes . [ OK ] Stopped Create System Users . [ OK ] Finished Cleanup udev Database . [ OK ] Reached target Switch Root . Starting Switch Root ... [ 44.075531] systemd-journald[403]: Received SIGTERM from PID 1 (systemd). [ 47.737718] SELinux: policy capability network_peer_controls=1 [ 47.738639] SELinux: policy capability open_perms=1 [ 47.738971] SELinux: policy capability extended_socket_class=1 [ 47.740080] SELinux: policy capability always_check_network=0 [ 47.740922] SELinux: policy capability cgroup_seclabel=1 [ 47.741275] SELinux: policy capability nnp_nosuid_transition=1 [ 47.742043] SELinux: policy capability genfs_seclabel_symlinks=1 [ 48.270741] audit: type=1403 audit(1675398884.471:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 [ 48.287441] systemd[1]: Successfully loaded SELinux policy in 2.365254s. [ 48.437599] systemd[1]: RTC configured in localtime, applying delta of -300 minutes to system time. 
[ 48.839475] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 300.227ms. [ 48.907297] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 48.924762] systemd[1]: Detected architecture x86-64. Welcome to CentOS Stream 9 ! [ 49.507802] systemd-rc-local-generator[797]: /etc/rc.d/rc.local is not marked executable, skipping. [ 50.204128] grep (804) used greatest stack depth: 23784 bytes left [ 51.571326] systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. [ 52.347929] systemd[1]: initrd-switch-root.service: Deactivated successfully. [ 52.357232] systemd[1]: Stopped Switch Root. [ OK ] Stopped Switch Root . [ 52.372441] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [ 52.389347] systemd[1]: Created slice Slice /system/getty. [ OK ] Created slice Slice /system/getty . [ 52.405946] systemd[1]: Created slice Slice /system/modprobe. [ OK ] Created slice Slice /system/modprobe . [ 52.424387] systemd[1]: Created slice Slice /system/serial-getty. [ OK ] Created slice Slice /system/serial-getty . [ 52.441578] systemd[1]: Created slice Slice /system/sshd-keygen. [ OK ] Created slice Slice /system/sshd-keygen . [ 52.461716] systemd[1]: Created slice User and Session Slice. [ OK ] Created slice User and Session Slice . [ 52.470274] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 52.475890] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [ OK ] Started Forward Password R…uests to Wall Directory Watch . [ 52.488084] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [ OK ] Set up automount Arbitrary…s File System Automount Point . [ 52.490492] systemd[1]: Reached target Local Encrypted Volumes. [ OK ] Reached target Local Encrypted Volumes . [ 52.495735] systemd[1]: Stopped target Switch Root. [ OK ] Stopped target Switch Root . [ 52.498018] systemd[1]: Stopped target Initrd File Systems. [ OK ] Stopped target Initrd File Systems . [ 52.502596] systemd[1]: Stopped target Initrd Root File System. [ OK ] Stopped target Initrd Root File System . [ 52.504880] systemd[1]: Reached target Local Integrity Protected Volumes. [ OK ] Reached target Local Integrity Protected Volumes . [ 52.509588] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 52.514618] systemd[1]: Reached target System Time Set. [ OK ] Reached target System Time Set . [ 52.519630] systemd[1]: Reached target Local Verity Protected Volumes. [ OK ] Reached target Local Verity Protected Volumes . [ 52.529954] systemd[1]: Listening on Device-mapper event daemon FIFOs. [ OK ] Listening on Device-mapper event daemon FIFOs . [ 52.554320] systemd[1]: Listening on LVM2 poll daemon socket. [ OK ] Listening on LVM2 poll daemon socket . [ 52.746217] systemd[1]: Listening on RPCbind Server Activation Socket. [ OK ] Listening on RPCbind Server Activation Socket . [ 52.752797] systemd[1]: Reached target RPC Port Mapper. [ OK ] Reached target RPC Port Mapper . 
[ 52.787117] systemd[1]: Listening on Process Core Dump Socket. [ OK ] Listening on Process Core Dump Socket . [ 52.792426] systemd[1]: Listening on initctl Compatibility Named Pipe. [ OK ] Listening on initctl Compatibility Named Pipe . [ 52.824341] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 52.835176] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 52.872858] systemd[1]: Activating swap /dev/mapper/cs_hpe--dl360pgen8--08-swap... Activating swap /dev/mappe…cs_hpe--dl360pgen8--08-swap ... [ 52.924471] systemd[1]: Mounting Huge Pages File System... Mounting Huge Pages File System ... [ 52.969146] systemd[1]: Mounting POSIX Message Queue File System... Mounting POSIX Message Queue File System ... [ 53.015393] systemd[1]: Mounting Kernel Debug File System... Mounting Kernel Debug File System ... [ 53.044863] systemd[1]: Mounting Kernel Trace File System... Mounting Kernel Trace File System ... [ 53.050228] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [ 53.050847] Adding 16502780k swap on /dev/mapper/cs_hpe--dl360pgen8--08-swap. Priority:-2 extents:1 across:16502780k FS [ 53.097640] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 53.121723] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Starting Monitoring of LVM…meventd or progress polling ... [ 53.150630] systemd[1]: Starting Load Kernel Module configfs... Starting Load Kernel Module configfs ... [ 53.180908] systemd[1]: Starting Load Kernel Module drm... Starting Load Kernel Module drm ... [ 53.220573] systemd[1]: Starting Load Kernel Module fuse... Starting Load Kernel Module fuse ... [ 53.319610] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Read and set NIS …from /etc/sysconfig/network ... [ 53.325896] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [ 53.328799] systemd[1]: Stopped File System Check on Root Device. [ OK ] Stopped File System Check on Root Device . [ 53.332944] systemd[1]: Stopped Journal Service. [ OK ] Stopped Journal Service . [ 53.338944] systemd[1]: systemd-journald.service: Consumed 1.799s CPU time. [ 53.406194] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 53.451930] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 53.489256] systemd[1]: Starting Generate network units from Kernel command line... Starting Generate network …ts from Kernel command line ... [ 53.535993] systemd[1]: Starting Remount Root and Kernel File Systems... Starting Remount Root and Kernel File Systems ... [ 53.545328] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [ 53.582919] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 53.622363] systemd[1]: Starting Coldplug All udev Devices... Starting Coldplug All udev Devices ... [ 53.665248] systemd[1]: Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap. [ OK ] Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ 53.717652] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . [ OK ] Mounted Huge Pages File System . [ OK ] Mounted POSIX Message Queue File System . [ OK ] Mounted Kernel Debug File System . [ OK ] Mounted Kernel Trace File System . 
[ OK ] Finished Create List of Static Device Nodes . [ 53.773344] fuse: init (API version 7.36) [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Load Kernel Module fuse . [ OK ] Finished Read and set NIS …e from /etc/sysconfig/network . [ OK ] Finished Generate network units from Kernel command line . [ OK ] Finished Apply Kernel Variables . [ OK ] Reached target Preparation for Network . [ OK ] Reached target Swaps . [ 53.908420] ACPI: bus type drm_connector registered Mounting FUSE Control File System ... Mounting Kernel Configuration File System ... [ OK ] Finished Load Kernel Module drm . [ OK ] Finished Remount Root and Kernel File Systems . [ OK ] Finished Monitoring of LVM… dmeventd or progress polling . [ OK ] Mounted FUSE Control File System . [ OK ] Mounted Kernel Configuration File System . Starting Flush Journal to Persistent Storage ... Starting Load/Save Random Seed ... Starting Create Static Device Nodes in /dev ... [ 54.284809] systemd-journald[822]: Received client request to flush runtime journal. [ OK ] Finished Flush Journal to Persistent Storage . [ OK ] Finished Load/Save Random Seed . [ OK ] Finished Create Static Device Nodes in /dev . [ OK ] Reached target Preparation for Local File Systems . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . [ * ] (1 of 4) A start job is running for…l360pgen8--08-home (5s / no limit) M [ * * ] (1 of 4) A start job is running for…l360pgen8--08-home (5s / no limit) M Starting Load Kernel Module configfs ... [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Coldplug All udev Devices . [ 59.442863] power_meter ACPI000D:00: Found ACPI power meter. [ 59.445688] power_meter ACPI000D:00: Ignoring unsafe software power cap! [ 59.446348] power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). [ 59.566683] IPMI message handler: version 39.2 [ 59.645619] ipmi device interface [ 59.719202] dca service started, version 1.12.1 [ 59.731366] ipmi_si: IPMI System Interface driver [ 59.732370] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 59.732776] ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 [ 59.733828] ipmi_si: Adding SMBIOS-specified kcs state machine [ 59.737712] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 59.739730] ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2-0x0ca3] regsize 1 spacing 1 irq 0 [ 59.829097] ioatdma: Intel(R) QuickData Technology Driver 5.00 [ 59.871956] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 59.872981] ipmi_si: Adding ACPI-specified kcs state machine [ 59.875639] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 [ 59.966230] ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. [ 60.067849] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x00000b, prod_id: 0x2000, dev_id: 0x13) Mounting /boot ... [ 60.170252] XFS (sda1): Mounting V5 Filesystem [ 60.218142] ipmi_si IPI0001:00: IPMI kcs interface initialized [ OK ] Started /usr/sbin/lvm vgch…on event cs_hpe-dl360pgen8-08 . [ 60.267263] ipmi_ssif: IPMI SSIF Interface driver [ 60.785585] input: PC Speaker as /devices/platform/pcspkr/input/input4 [ 61.500342] XFS (sda1): Starting recovery (logdev: internal) [ 61.590064] XFS (sda1): Ending recovery (logdev: internal) [ OK ] Mounted /boot . 
[ 61.748448] mgag200 0000:01:00.1: vgaarb: deactivate vga console [ 61.759583] Console: switching to colour dummy device 80x25 [ 62.145671] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer [ 62.146166] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules [ 62.146935] RAPL PMU: hw unit of domain package 2^-16 Joules [ 62.632378] [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 [ 62.687465] fbcon: mgag200drmfb (fb0) is primary device [ 63.804882] iTCO_vendor_support: vendor-support=0 [ 63.822538] Console: switching to colour frame buffer device 128x48 [ 63.866824] mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device [ * * * ] A start job is running for /dev/map…360pgen8--08-home (11s / no limit)[ 63.873468] iTCO_wdt iTCO_wdt.1.auto: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS M [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-home . Mounting /home ... [ 63.986353] EDAC MC0: Giving out device to module sb_edac controller Ivy Bridge SrcID#0_Ha#0: DEV 0000:1f:0e.0 (INTERRUPT) [ 63.990388] EDAC MC1: Giving out device to module sb_edac controller Ivy Bridge SrcID#1_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT) [ 63.991080] EDAC sbridge: Ver: 1.1.2 [ 64.057946] XFS (dm-2): Mounting V5 Filesystem [-- MARK -- Fri Feb 3 09:35:00 2023] [ 64.309897] intel_rapl_common: Found RAPL domain package [ 64.310266] intel_rapl_common: Found RAPL domain core [ 64.313980] intel_rapl_common: Found RAPL domain package [ 64.314429] intel_rapl_common: Found RAPL domain core [ 65.188645] XFS (dm-2): Starting recovery (logdev: internal) [ 65.347977] XFS (dm-2): Ending recovery (logdev: internal) [ OK ] Mounted /home . [ OK ] Reached target Local File Systems . Starting Automatic Boot Loader Update ... Starting Create Volatile Files and Directories ... [ OK ] Finished Automatic Boot Loader Update . [ OK ] Finished Create Volatile Files and Directories . Mounting RPC Pipe File System ... Starting Security Auditing Service ... Starting RPC Bind ... [ OK ] Started RPC Bind . [ 68.129889] RPC: Registered named UNIX socket transport module. [ 68.130781] RPC: Registered udp transport module. [ 68.131575] RPC: Registered tcp transport module. [ 68.132536] RPC: Registered tcp NFSv4.1 backchannel transport module. [ OK ] Mounted RPC Pipe File System . [ OK ] Reached target rpc_pipefs.target . [ OK ] Started Security Auditing Service . Starting Record System Boot/Shutdown in UTMP ... [ OK ] Finished Record System Boot/Shutdown in UTMP . [ OK ] Reached target System Initialization . [ OK ] Started CUPS Scheduler . [ OK ] Started dnf makecache --timer . [ OK ] Started Daily Cleanup of Temporary Directories . [ OK ] Reached target Path Units . [ OK ] Listening on Avahi mDNS/DNS-SD Stack Activation Socket . [ OK ] Listening on CUPS Scheduler . [ OK ] Listening on D-Bus System Message Bus Socket . [ OK ] Listening on SSSD Kerberos…ache Manager responder socket . [ OK ] Reached target Socket Units . [ OK ] Reached target Basic System . Starting Network Manager ... Starting Avahi mDNS/DNS-SD Stack ... Starting NTP client/server ... Starting Restore /run/initramfs on shutdown ... [ OK ] Started irqbalance daemon . Starting Load CPU microcode update ... [ OK ] Started Hardware RNG Entropy Gatherer Daemon . Starting System Logging Service ... [ OK ] Reached target sshd-keygen.target . [ OK ] Reached target User and Group Name Lookups . Starting User Login Management ... [ OK ] Finished Restore /run/initramfs on shutdown . Starting D-Bus System Message Bus ... 
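The EDAC lines above register the two sb_edac memory controllers (MC0 and MC1) for the Ivy Bridge sockets. A minimal sketch, assuming the standard EDAC sysfs layout under /sys/devices/system/edac/mc, that reads their corrected/uncorrected error counters:

    #!/usr/bin/env python3
    # Print per-memory-controller error counts from the EDAC sysfs interface.
    import glob, os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc[0-9]*")):
        print(os.path.basename(mc), read(os.path.join(mc, "mc_name")),
              "CE:", read(os.path.join(mc, "ce_count")),
              "UE:", read(os.path.join(mc, "ue_count")))

Non-zero counts indicate corrected (CE) or uncorrected (UE) memory errors observed by these controllers.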
[ OK ] Started System Logging Service . [ OK ] Started NTP client/server . Starting Wait for chrony to synchronize system clock ... [ 71.281939] reload_microcod (1062) used greatest stack depth: 22280 bytes left [ OK ] Finished Load CPU microcode update . [ OK ] Started D-Bus System Message Bus . [ OK ] Started Avahi mDNS/DNS-SD Stack . [ OK ] Started User Login Management . [ OK ] Started Network Manager . [ OK ] Created slice User Slice of UID 0 . [ OK ] Reached target Network . Starting Network Manager Wait Online ... Starting CUPS Scheduler ... Starting GSSAPI Proxy Daemon ... Starting OpenSSH server daemon ... Starting User Runtime Directory /run/user/0 ... Starting Hostname Service ... [ OK ] Finished User Runtime Directory /run/user/0 . Starting User Manager for UID 0 ... [ OK ] Started OpenSSH server daemon . [ OK ] Started Hostname Service . [ OK ] Started CUPS Scheduler . [ OK ] Started GSSAPI Proxy Daemon . [ OK ] Reached target NFS client services . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Listening on Load/Save RF …itch Status /dev/rfkill Watch . Starting Network Manager Script Dispatcher Service ... [ OK ] Started Network Manager Script Dispatcher Service . [ * * * ] (2 of 3) A start job is running for… Manager for UID 0 (3s / no limit) M [ OK ] Started User Manager for UID 0 . [ 76.464044] tg3 0000:03:00.0 eno1: Link is up at 1000 Mbps, full duplex [ 76.464667] tg3 0000:03:00.0 eno1: Flow control is off for TX and off for RX [ 76.465484] tg3 0000:03:00.0 eno1: EEE is disabled [ 76.466691] IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready [ * * * ] (2 of 2) A start job is running for…nize system clock (24s / no limit) M [ * * * ] (2 of 2) A start job is running for…nize system clock (25s / no limit) M [ * * ] (1 of 2) A start job is running for…nager Wait Online (25s / no limit) M [ * ] (1 of 2) A start job is running for…nager Wait Online (26s / no limit) M [ * * ] (1 of 2) A start job is running for…nager Wait Online (26s / no limit) M [ * * * ] (2 of 2) A start job is running for…nize system clock (27s / no limit) M [ OK ] Finished Network Manager Wait Online . [ OK ] Reached target Network is Online . Mounting /var/crash ... [ OK ] Started Anaconda Monitorin…ost-boot notification program . Starting Notify NFS peers of a restart ... [ OK ] Started Notify NFS peers of a restart . [ FAILED ] Failed to mount /var/crash . See 'systemctl status var-crash.mount' for details. [ DEPEND ] Dependency failed for Remote File Systems . Starting Crash recovery kernel arming ... Starting Permit User Sessions ... [ OK ] Finished Permit User Sessions . [ OK ] Started Deferred execution scheduler . [ OK ] Started Getty on tty1 . [ OK ] Started Serial Getty on ttyS1 . [ OK ] Reached target Login Prompts . 
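The boot above reaches the login prompts with one failure: var-crash.mount could not be mounted, which in turn failed the Remote File Systems target, and the console points at 'systemctl status var-crash.mount' for details. A minimal sketch, assuming systemctl is on PATH, that lists failed units and dumps their status following that hint:

    #!/usr/bin/env python3
    # List failed systemd units (e.g. var-crash.mount from the log above) and
    # show their status, as the console message suggests.
    import subprocess

    failed = subprocess.run(
        ["systemctl", "--failed", "--no-legend", "--plain"],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in failed.splitlines():
        unit = line.split()[0]
        subprocess.run(["systemctl", "status", "--no-pager", unit], check=False)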
CentOS Stream 9 Kernel 5.14.0-256.2009_766119311.el9.x86_64+debug on an x86_64 hpe-dl360pgen8-08 login: [ 88.686041] PKCS7: Message signed outside of X.509 validity window [ 84.602451] restraintd[1420]: * Fetching recipe: http://lab-02.hosts.prod.psi.bos.redhat.com:8000//recipes/13330040/ [ 85.003673] restraintd[1420]: * Parsing recipe [ 85.046695] restraintd[1420]: * Running recipe [ 85.048816] restraintd[1420]: ** Continuing task: 155735207 [/mnt/tests/github.com/beaker-project/beaker-core-tasks/archive/master.tar.gz/reservesys] [ 85.112088] restraintd[1420]: ** Preparing metadata [ 86.292089] restraintd[1420]: ** Refreshing peer role hostnames: Retries 0 [ 86.427781] restraintd[1420]: ** Updating env vars [ 86.428672] restraintd[1420]: *** Current Time: Fri Feb 03 04:35:28 2023 Localwatchdog at: * Disabled! * [ 86.529561] restraintd[1420]: ** Running task: 155735207 [/distribution/reservesys] [ 98.660601] Running test [R:13330040 T:155735207 - /distribution/reservesys - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 152.865263] Running test [R:13330040 T:9 - Reboot test - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 153.582278] systemd-journald[822]: Received client request to flush runtime journal. Stopping Session 2 of User root ... [ OK ] Removed slice Slice /system/modprobe . [ OK ] Removed slice Slice /system/sshd-keygen . [ OK ] Removed slice Slice /system/systemd-hibernate-resume . [ OK ] Stopped target Multi-User System . [ OK ] Stopped target Login Prompts . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target NFS client services . [ OK ] Stopped target rpc_pipefs.target . [ OK ] Stopped target RPC Port Mapper . [ OK ] Stopped target Timer Units . [ OK ] Stopped dnf makecache --timer . [ OK ] Stopped Daily rotation of log files . [ OK ] Stopped Daily Cleanup of Temporary Directories . [ OK ] Closed LVM2 poll daemon socket . [ OK ] Closed Process Core Dump Socket . [ OK ] Closed Load/Save RF Kill Switch Status /dev/rfkill Watch . Unmounting RPC Pipe File System ... Stopping Deferred execution scheduler ... Stopping Avahi mDNS/DNS-SD Stack ... Stopping Command Scheduler ... Stopping CUPS Scheduler ... Stopping Restore /run/initramfs on shutdown ... Stopping Getty on tty1 ... Stopping GSSAPI Proxy Daemon ... Stopping irqbalance daemon ... Stopping The restraint harness. ... Stopping Hardware RNG Entropy Gatherer Daemon ... Stopping System Logging Service ... Stopping Serial Getty on ttyS1 ... Stopping OpenSSH server daemon ... Stopping Hostname Service ... Stopping Load/Save Random Seed ... [ OK ] Stopped Avahi mDNS/DNS-SD Stack . [ 154.875319] sda1: Can't mount, would change RO state [ OK ] Stopped irqbalance daemon . [ OK ] Stopped Hardware RNG Entropy Gatherer Daemon . [ OK ] Stopped CUPS Scheduler . [ OK ] Stopped OpenSSH server daemon . [ OK ] Stopped System Logging Service . [ OK ] Stopped GSSAPI Proxy Daemon . [ OK ] Stopped Deferred execution scheduler . [ OK ] Stopped Getty on tty1 . [ OK ] Stopped Serial Getty on ttyS1 . [ OK ] Stopped Command Scheduler . [ OK ] Stopped The restraint harness. . [ OK ] Stopped Hostname Service . [ OK ] Unmounted RPC Pipe File System . [ OK ] Stopped Load/Save Random Seed . [ OK ] Stopped Session 2 of User root . [ OK ] Removed slice Slice /system/getty . [ OK ] Removed slice Slice /system/serial-getty . [ OK ] Stopped target Network is Online . [ OK ] Stopped target sshd-keygen.target . [ OK ] Stopped target System Time Synchronized . [ OK ] Stopped target System Time Set . 
[ OK ] Stopped Network Manager Wait Online . [ OK ] Stopped Wait for chrony to synchronize system clock . Stopping NTP client/server ... Stopping User Login Management ... Stopping Permit User Sessions ... Stopping User Manager for UID 0 ... [ OK ] Stopped User Manager for UID 0 . Stopping User Runtime Directory /run/user/0 ... [ OK ] Stopped Permit User Sessions . [ OK ] Unmounted /run/user/0 . [ OK ] Stopped User Runtime Directory /run/user/0 . [ OK ] Removed slice User Slice of UID 0 . [ OK ] Stopped target Network . Stopping Network Manager ... [ 156.063360] NetworkManager (1057) used greatest stack depth: 21064 bytes left [ OK ] Stopped Network Manager . [ OK ] Stopped target Preparation for Network . [ OK ] Stopped Generate network units from Kernel command line . [ OK ] Stopped User Login Management . [ OK ] Stopped NTP client/server . [ OK ] Stopped target User and Group Name Lookups . [ * * * ] A stop job is running for Restore /…tramfs on shutdown (4s / no limit) [ OK ] Stopped Restore /run/initramfs on shutdown . [ OK ] Stopped target Basic System . [ OK ] Stopped target Path Units . [ OK ] Stopped CUPS Scheduler . [ OK ] Stopped target Slice Units . [ OK ] Removed slice User and Session Slice . [ OK ] Stopped target Socket Units . [ OK ] Closed Avahi mDNS/DNS-SD Stack Activation Socket . [ OK ] Closed CUPS Scheduler . [ OK ] Closed SSSD Kerberos Cache Manager responder socket . Stopping D-Bus System Message Bus ... [ OK ] Stopped D-Bus System Message Bus . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped target System Initialization . [ OK ] Unset automount Arbitrary …s File System Automount Point . [ OK ] Stopped target Local Encrypted Volumes . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped Forward Password R…uests to Wall Directory Watch . [ OK ] Stopped target Local Integrity Protected Volumes . [ OK ] Stopped target Swaps . [ OK ] Stopped target Local Verity Protected Volumes . Deactivating swap /dev/cs_hpe-dl360pgen8-08/swap ... [ OK ] Stopped Read and set NIS d…e from /etc/sysconfig/network . [ OK ] Stopped Automatic Boot Loader Update . [ OK ] Stopped Apply Kernel Variables . Stopping Record System Boot/Shutdown in UTMP ... [ OK ] Unmounted /run/credentials/systemd-sysctl.service . [ OK ] Deactivated swap /dev/disk…e-cs_hpe--dl360pgen8--08-swap . [ OK ] Deactivated swap /dev/cs_hpe-dl360pgen8-08/swap . [ OK ] Deactivated swap /dev/disk…9-6e40-4c96-b71b-8f5ccefa4a5f . [ OK ] Deactivated swap /dev/disk…VddQjM1zfKyIkLdc2MXslMhJMCGs5 . [ OK ] Deactivated swap /dev/dm-1 . [ OK ] Deactivated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ OK ] Stopped Record System Boot/Shutdown in UTMP . 
Stopping Security Auditing Service ... [ 164.238894] audit: type=1305 audit(1675417000.021:129): op=set audit_pid=0 old=1020 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1 [ OK ] Stopped Security Auditing Service . [ 164.279494] audit: type=1131 audit(1675417000.063:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auditd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Create Volatile Files and Directories . [ 164.288910] audit: type=1131 audit(1675417000.072:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped target Local File Systems . Unmounting /boot ... Unmounting /home ... Unmounting /run/credential…temd-tmpfiles-setup.service ... [ 164.390816] XFS (dm-2): Unmounting Filesystem Unmounting /run/credential…-tmpfiles-setup-dev.service ... [ OK ] Unmounted /run/credentials…ystemd-tmpfiles-setup.service . [ OK ] Unmounted /run/credentials…md-tmpfiles-setup-dev.service . [ 164.606943] XFS (sda1): Unmounting Filesystem [ OK ] Unmounted /home . [ OK ] Unmounted /boot . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Reached target Unmount All Filesystems . Stopping Monitoring of LVM…meventd or progress polling ... [ OK ] Stopped Remount Root and Kernel File Systems . [ 164.834853] audit: type=1131 audit(1675417000.618:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Create Static Device Nodes in /dev . [ 164.840928] audit: type=1131 audit(1675417000.624:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Stopped Monitoring of LVM2… dmeventd or progress polling . [ 165.126947] audit: type=1131 audit(1675417000.910:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvm2-monitor comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Reached target System Shutdown . [ OK ] Reached target Late Shutdown Services . [ OK ] Finished System Reboot . [ 165.147779] audit: type=1130 audit(1675417000.931:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ 165.149214] audit: type=1131 audit(1675417000.931:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-reboot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' [ OK ] Reached target System Reboot . [ 165.175722] audit: type=1334 audit(1675417000.959:137): prog-id=0 op=UNLOAD [ 165.176253] audit: type=1334 audit(1675417000.959:138): prog-id=0 op=UNLOAD [ 165.328502] watchdog: watchdog0: watchdog did not stop! [ 165.463373] systemd-shutdown[1]: Using hardware watchdog 'HPE iLO2+ HW Watchdog Timer', version 0, device /dev/watchdog0 [ 165.464273] systemd-shutdown[1]: Watchdog running with a timeout of 10min. [ 165.628738] systemd-shutdown[1]: Syncing filesystems and block devices. 
[ 165.777347] systemd-shutdown[1]: Sending SIGTERM to remaining processes... [ 165.969522] systemd-journald[822]: Received SIGTERM from PID 1 (systemd-shutdow). [ 166.112401] systemd-shutdown[1]: Sending SIGKILL to remaining processes... [ 166.295495] systemd-shutdown[1]: Unmounting file systems. [ 166.313194] [1703]: Remounting '/' read-only with options 'seclabel,attr2,inode64,logbufs=8,logbsize=256k,sunit=512,swidth=512,noquota'. [ 166.853598] systemd-shutdown[1]: All filesystems unmounted. [ 166.854154] systemd-shutdown[1]: Deactivating swaps. [ 166.855066] systemd-shutdown[1]: All swaps deactivated. [ 166.855765] systemd-shutdown[1]: Detaching loop devices. [ 166.857968] systemd-shutdown[1]: All loop devices detached. [ 166.858611] systemd-shutdown[1]: Stopping MD devices. [ 166.860359] systemd-shutdown[1]: All MD devices stopped. [ 166.860764] systemd-shutdown[1]: Detaching DM devices. [ 166.884173] systemd-shutdown[1]: Detaching DM /dev/dm-2 (253:2). [ 166.932931] systemd-shutdown[1]: Detaching DM /dev/dm-1 (253:1). [ 166.960369] systemd-shutdown[1]: Not all DM devices detached, 1 left. [ 166.961700] systemd-shutdown[1]: Detaching DM devices. [ 166.970261] systemd-shutdown[1]: Not all DM devices detached, 1 left. [ 166.970711] systemd-shutdown[1]: Cannot finalize remaining DM devices, continuing. [ 166.971657] watchdog: watchdog0: watchdog did not stop! [ 167.013371] systemd-shutdown[1]: Successfully changed into root pivot. [ 167.013838] systemd-shutdown[1]: Returning to initrd... [ 167.944008] dracut Warning: Killing all remaining processes [ 168.849978] XFS (dm-0): Unmounting Filesystem [ 169.274213] dracut Warning: Unmounted /oldroot. [ 169.448605] dracut: Disassembling device-mapper devices Rebooting. [ 170.078243] kvm: exiting hardware virtualization [ 171.465991] reboot: Restarting system [ 171.466574] reboot: machine restart ProLiant System BIOS - P71 (05/21/2018) Copyright 1982, 2018 Hewlett-Packard Development Company, L.P. 32 GB Installed 2 Processor(s) detected, 12 total cores enabled, Hyperthreading is enabled Proc 1: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz Proc 2: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz QPI Speed: 7.2 GT/s HP Power Profile Mode: Balanced Power and Performance Power Regulator Mode: Dynamic Power Savings Redundant ROM Detected - This system contains a valid backup System ROM. Inlet Ambient Temperature: 19C/66F Advanced Memory Protection Mode: Advanced ECC Support HP SmartMemory authenticated in all populated DIMM slots. SATA Option ROM ver 2.00.C02 Copyright 1982, 2011. Hewlett-Packard Development Company, L.P. iLO 4 Advanced press [F8] to configure iLO 4 v2.80 Jan 25 2022 10.16.216.85 Slot 0 HP Smart Array P420i Controller Initializing... (0 MB, v8.32) 1 Logical Drive Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Slot 1 HP Smart Array P421 Controller Initializing... (1 GB, v8.32) 0 Logical Drives 1785-Slot 1 Drive Array Not Configured No Drives Detected Press to run the HP Smart Storage Administrator (HP SSA) or ACU Press to run the Option ROM Configuration For Arrays Utility Press to Skip Configuration and Continue Broadcom NetXtreme Ethernet Boot Agent Copyright (C) 2000-2017 Broadcom Corporation All rights reserved. 
Press Ctrl-S to enter Configuration Menu Press "F9" key for ROM-Based Setup Utility Press "F10" key for Intelligent Provisioning Press "F11" key for Default Boot Override Options Press "F12" key for Network Boot For access via BIOS Serial Console Press "ESC+9" for ROM-Based Setup Utility Press "ESC+0" for Intelligent Provisioning Press "ESC+!" for Default Boot Override Options Press "ESC+@" for Network Boot Attempting Boot From NIC Broadcom UNDI PXE-2.1 v20.6.50 Copyright (C) 2000-2017 Broadcom Corporation Copyright (C) 1997-2000 Intel Corporation All rights reserved. CLIENT [-- MARK -- Fri Feb 3 09:40:00 2023] MAC ADDR: 2C 44 FD 84 51 C4. GUID: 30343536-3138-5355-4534-303452355454 DHCP. CLIENT IP: 10.16.216.84 MASK: 255.255.254.0 DHCP IP: 10.19.43.29 GATEWAY IP: 10.16.217.254 TFTP. PXELINUX 4.05 2011-12-09 Copyright (C) 1994-2011 H. Peter Anvin et al !PXE entry point found (we hope) at 95A1:00D6 via plan A UNDI code segment at 95A1 len 6B70 UNDI data segment at 91EA len 3B70 Getting cached packet 01 02 03 My IP address seems to be 0A10D854 10.16.216.84 ip=10.16.216.84:10.19.165.164:10.16.217.254:255.255.254.0 BOOTIF=01-2c-44-fd-84-51-c4 SYSUUID=36353430-3831-5553-4534-303452355454 TFTP prefix: Trying to load: pxelinux.cfg/36353430-3831-5553-4534-303452355454 Trying to load: pxelinux.cfg/01-2c-44-fd-84-51-c4 Trying to load: pxelinux.cfg/0A10D854 Trying to load: pxelinux.cfg/0A10D85 Trying to load: pxelinux.cfg/0A10D8 Trying to load: pxelinux.cfg/0A10D Trying to load: pxelinux.cfg/0A10 Trying to load: pxelinux.cfg/0A1 Trying to load: pxelinux.cfg/0A Trying to load: pxelinux.cfg/0 Trying to load: pxelinux.cfg/default ok ********************************************* Red Hat Engineering Labs Network Boot Press ENTER to boot from local disk Type "menu" at boot prompt to view install menu ********************************************* boot: Booting... Use the ^ and v keys to change the selection. Press 'e' to edit the selected item, or 'c' for a command prompt. CentOS Stream (5.14.0-256.2009_766119311.el9.x86_64+debug) 9 with debugg> CentOS Stream (5.14.0-247.el9.x86_64) 9 CentOS Stream (0-rescue-99e1b32cbaf74173bd2789197e86723f) 9 
The selected entry will be started automatically in 5s. The selected entry will be started automatically in 4s. The selected entry will be started automatically in 3s. The selected entry will be started automatically in 2s. The selected entry will be started automatically in 1s. The selected entry will be started automatically in 0s. Probing EDD (edd=off to disable)... ok [ 0.000000] microcode: microcode updated early to revision 0x42e, date = 2019-03-14 [ 0.000000] [ 0.000000] The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. [ 0.000000] Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G [ 0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' [ 0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' [ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 [ 0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. [ 0.000000] signal: max sigframe size: 1776 [ 0.000000] BIOS-provided physical RAM map: [ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009c7ff] usable [ 0.000000] BIOS-e820: [mem 0x000000000009c800-0x000000000009ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x00000000bddabfff] usable [ 0.000000] BIOS-e820: [mem 0x00000000bddac000-0x00000000bddddfff] ACPI data [ 0.000000] BIOS-e820: [mem 0x00000000bddde000-0x00000000cfffffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000fec00000-0x00000000fee0ffff] reserved [ 0.000000] BIOS-e820: [mem 0x00000000ff800000-0x00000000ffffffff] reserved [ 0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000083fffefff] usable [ 0.000000] NX (Execute Disable) protection: active [ 0.000000] SMBIOS 2.8 present. 
[ 0.000000] DMI: HP ProLiant DL360p Gen8, BIOS P71 05/21/2018 [ 0.000000] tsc: Fast TSC calibration failed [ 0.000000] last_pfn = 0x83ffff max_arch_pfn = 0x400000000 [ 0.000000] x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT [ 0.000000] last_pfn = 0xbddac max_arch_pfn = 0x400000000 [ 0.000000] found SMP MP-table at [mem 0x000f4f80-0x000f4f8f] [ 0.000000] Using GB pages for direct mapping [ 0.000000] RAMDISK: [mem 0x33a57000-0x35d23fff] [ 0.000000] ACPI: Early table checksum verification disabled [ 0.000000] ACPI: RSDP 0x00000000000F4F00 000024 (v02 HP ) [ 0.000000] ACPI: XSDT 0x00000000BDDAED00 0000E4 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.000000] ACPI: FACP 0x00000000BDDAEE40 0000F4 (v03 HP ProLiant 00000002 ? 0000162E) [ 0.000000] ACPI BIOS Warning (bug): Invalid length for FADT/Pm1aControlBlock: 32, using default 16 (20211217/tbfadt-669) [ 0.000000] ACPI BIOS Warning (bug): Invalid length for FADT/Pm2ControlBlock: 32, using default 8 (20211217/tbf] ACPI: DSDT 0x00000000BDDAEF40 0026DC (v01 HP DSDT 00000001 INTL 20030228) [ 0.000000] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.000000] ACPI: FACS 0x00000000BDDAC140 000040 [ 0.000000] ACPI: SPCR 0x00000000BDDAC180 000050 (v01 HP SPCRRBSU 00000001 ? 0000162E) [ 0.000000] ACPI: MCFG 0x00000000BDDAC200 00003C (v01 HP ProLiant 00000001 00000000) [ 0.000000] ACPI: HPET 0x00000000BDDAC240 000038 (v01 HP ProLiant 00000002 ? 0000162E) [ 0.000000] ACPI: FFFF 0x00000000BDDAC280 000064 (v02 HP ProLiant 00000002 ? 0000162E) [ 0.000000] ACPI: SPMI 0x00000000BDDAC300 000040 (v05 HP ProLiant 00000001 ? 0000162E) [ 0.000000] ACPI: ERST 0x00000000BDDAC340 000230 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.000000] ACPI: APIC 0x00000000BDDAC580 00026A (v01 HP ProLiant 00000002 00000000) [ 0.000000] ACPI: SRAT 0x00000000BDDAC800 000750 (v01 HP Proliant 00000001 ? 0000162E) [ 0.000000] ACPI: FFFF 0x00000000BDDACF80 000176 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.000000] ACPI: BERT 0x00000000BDDAD100 000030 (v01 HP ProLiant 00000001 ? 0000162E) [ 0.000000] ACPI: HEST 0x00000000BDDAD140 0000BC (v01 HP ProLiant 00000001 ? 0000162E) [ 0.000000] ACPI: DMAR 0x00000000BDDAD2000000001 ? 
0000162E) [ 0.000000] ACPI: FFFF 0x00000000BDDAEC40 000030 (v01 HP ProLiant 00000001 00000000) [ 0.000000] ACPI: PCCT 0x00000000BDDAEC80 00006E (v01 HP Proliant 00000001 PH 0000504D) [ 0.000000] ACPI: SSDT 0x00000000BDDB1640 0007EA (v01 HP DEV_PCI1 00000001 INTL 20120503) [ 0.000000] ACPI: SSDT 0x00000000BDDB1E40 000103 (v03 HP CRSPCI0 00000002 HP 00000001) [ 0.000000] ACPI: SSDT 0x00000000BDDB1F80 000098 (v03 HP CRSPCI1 00000002 HP 00000001) [ 0.000000] ACPI: SSDT 0x00000000BDDB2040 00038A (v02 HP riser0 00000002 INTL 20030228) [ 0.000000] ACPI: SSDT 0x00000000BDDB2400 000385 (v03 HP riser1a 00000002 INTL 20030228) [ 0.000000] ACPI: SSDT 0x00000000BDDB27C0 000BB9 (v01 HP pcc 00000001 INTL 20120503) [ 0.000000] ACPI: SSDT 0x00000000BDDB3380 000377 (v01 HP pmab 00000001 INTL 20120503) [ 0.000000] ACPI: SSDT 0x00000000BDDB3700 005524 (v01 HP pcc2 00000001 INTL 20120503) [ 0.000000] ACPI: SSDT 0x00000000BDDB8C40 003AEC (v01 INTEL PPM RCM 00000001 INTL 20061109) [ 0.000000] ACPI: Reserving Faee40-0xbddaef33] [ 0.000000] ACPI: Reserving DSDT table memory at [mem 0xbddaef40-0xbddb161b] [ 0.000000] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.000000] ACPI: Reserving FACS table memory at [mem 0xbddac140-0xbddac17f] [ 0.000000] ACPI: Reserving SPCR table memory at [mem 0xbddac180-0xbddac1cf] [ 0.000000] ACPI: Reserving MCFG table memory at [mem 0xbddac200-0xbddac23b] [ 0.000000] ACPI: Reserving HPET table memory at [mem 0xbddac240-0xbddac277] [ 0.000000] ACPI: Reserving FFFF table memory at [mem 0xbddac280-0xbddac2e3] [ 0.000000] ACPI: Reserving SPMI table memory at [mem 0xbddac300-0xbddac33f] [ 0.000000] ACPI: Reserving ERST table memory at [mem 0xbddac340-0xbddac56f] [ 0.000000] ACPI: Reserving APIC table memory at [mem 0xbddac580-0xbddac7e9] [ 0.000000] ACPI: Reserving SRAT table memory at [mem 0xbddac800-0xbddacf4f] [ 0.000000] ACPI: Reserving FFFF table memory at [mem 0xbddacf80-0xbddad0f5] [ 0.000000] ACPI: Reserving BERT table memory at [mem 0xbddad100-0xbddad12f] [ 0.000000] ACPI: Reserving HEST table memory at [mem 0xbddad140-0xbddad1fb] [ 0.000000] ACPI: Reserving DMAR table memory at [mem 0xbddad200-0xbddad71b]Reserving FFFF table memory at [mem 0xbddaec40-0xbddaec6f] [ 0.000000] ACPI: Reserving PCCT table memory at [mem 0xbddaec80-0xbddaeced] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb1640-0xbddb1e29] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb1e40-0xbddb1f42] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb1f80-0xbddb2017] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb2040-0xbddb23c9] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb2400-0xbddb2784] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb27c0-0xbddb3378] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb3380-0xbddb36f6] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb3700-0xbddb8c23] [ 0.000000] ACPI: Reserving SSDT table memory at [mem 0xbddb8c40-0xbddbc72b] [ 0.000000] SRAT: PXM 0 -> APIC 0x00 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x01 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x02 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x03 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x04 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x05 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x06AT: PXM 0 -> APIC 0x07 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x08 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x09 -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0a -> Node 0 [ 0.000000] SRAT: PXM 0 -> APIC 0x0b -> Node 0 [ 0.000000] 
SRAT: PXM 1 -> APIC 0x20 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x21 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x22 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x23 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x24 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x25 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x26 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x27 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x28 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x29 -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x2a -> Node 1 [ 0.000000] SRAT: PXM 1 -> APIC 0x2b -> Node 1 [ 0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x43fffffff] [ 0.000000] ACPI: SRAT: Node 1 PXM 1 [mem 0x440000000-0x83fffffff] [ 0.000000] NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff] [ 0.000000] NODE_DATA(1) alffefff] [ 0.000000] Reserving 2048MB of memory at 976MB for crashkernel (System RAM: 32733MB) [ 0.000000] Zone ranges: [ 0.000000] DMA [mem 0x0000000000001000-0x0000000000ffffff] [ 0.000000] DMA32 [mem 0x0000000001000000-0x00000000ffffffff] [ 0.000000] Normal [mem 0x0000000100000000-0x000000083fffefff] [ 0.000000] Device empty [ 0.000000] Movable zone start for each node [ 0.000000] Early memory node ranges [ 0.000000] node 0: [mem 0x0000000000001000-0x000000000009bfff] [ 0.000000] node 0: [mem 0x0000000000100000-0x00000000bddabfff] [ 0.000000] node 0: [mem 0x0000000100000000-0x000000043fffffff] [ 0.000000] node 1: [mem 0x0000000440000000-0x000000083fffefff] [ 0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff] [ 0.000000] Initmem setup node 1 [mem 0x0000000440000000-0x000000083fffefff] [ 0.000000] On node 0, zone DMA: 1 pages in unavailable ranges [ 0.000000] On node 0, zone DMA: 100 pages in unavailable ranges [ 0.000000] On node 0, zone Normal: 8788 pages in unavailable ranges [ 0.000000] On node 1, zone Normal: 1 pages in unavailable ranges [ 0.000000] kasan: KernelAddressSanitizer initialized [ 0.000000] ACPI: PM-Ti0000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) [ 0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23 [ 0.000000] IOAPIC[1]: apic_id 0, version 32, address 0xfec10000, GSI 24-47 [ 0.000000] IOAPIC[2]: apic_id 10, version 32, address 0xfec40000, GSI 48-71 [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) [ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) [ 0.000000] ACPI: Using ACPI (MADT) for SMP configuration information [ 0.000000] ACPI: HPET id: 0x8086a201 base: 0xfed00000 [ 0.000000] ACPI: SPCR: SPCR table version 1 [ 0.000000] ACPI: SPCR: Unexpected SPCR Access Width. 
Defaulting to byte size [ 0.000000] ACPI: SPCR: console: uart,mmio,0x0,9600 [ 0.000000] TSC deadline timer available [ 0.000000] smpboot: Allowing 64 CPUs, 40 hotplug CPUs [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x0009c000-0x0009cfff] [ 0.000000] PM: hibernation: Registered n9d000-0x0009ffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xbddac000-0xbddddfff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xbddde000-0xcfffffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xd0000000-0xfebfffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xfec00000-0xfee0ffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xfee10000-0xff7fffff] [ 0.000000] PM: hibernation: Registered nosave memory: [mem 0xff800000-0xffffffff] [ 0.000000] [mem 0xd0000000-0xfebfffff] available for PCI devices [ 0.000000] Booting paravirtualized kernel on bare hardware [ 0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns [ 0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:64 nr_cpu_ids:64 nr_node_ids:2 [ 0.000000] percpu: Embedded 515 pages/cpu s2072576 r8192 d28672 u4194304 [ 0.000000] Fallback order for Node 0: 0 1 [ 0.000000] Fallback order for Node 1: 1 0 [ 0.000000] Built 2 zonelists, mobility 8628 [ 0.000000] Policy zone: Normal [ 0.000000] Kernel command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug root=/dev/mapper/cs_hpe--dl360pgen8--08-root ro resume=/dev/mapper/cs_hpe--dl360pgen8--08-swap rd.lvm.lv=cs_hpe-dl360pgen8-08/root rd.lvm.lv=cs_hpe-dl360pgen8-08/swap console=ttyS1,115200n81 crashkernel=1G-2G:384M,2G-3G:512M,3G-4G:768M,4G-16G:1G,16G-64G:2G,64G-128G:2G,128G-:4G [ 0.000000] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-256.2009_766119311.el9.x86_64+debug", will be passed to user space. [ 0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off [ 0.000000] Stack Depot early init allocating hash table with memblock_alloc, 8388608 bytes [ 0.000000] software IO TLB: area num 64. [ 0.000000] Memory: 1173116K/33518872K available (38920K kernel code, 13007K rwdata, 14984K rodata, 5300K init, 42020K bss, 7436796K reserved, 0K cma-reserved) [ 0.000000] random: get_random_u64 called from kmem_cache_open+0x22/0x380 with crng_init=0 [ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=64, Nodes=2 [ 0.000000] kmemleak: Kernel memory leak detector disabled [ 0.000000] Kernel/User page tables isolation: enabled [ 0.000000] ftrace: allocating 45745 entries in 179 pages [ 0.000000] ftrace: allocated 179 pages with 5 groups [ 0.000000] Dynamic Pr00] Running RCU self tests [ 0.000000] rcu: Preemptible hierarchical RCU implementation. [ 0.000000] rcu: RCU lockdep checking is enabled. [ 0.000000] rcu: RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=64. [ 0.000000] rcu: RCU callback double-/use-after-free debug is enabled. [ 0.000000] Trampoline variant of Tasks RCU enabled. [ 0.000000] Rude variant of Tasks RCU enabled. [ 0.000000] Tracing variant of Tasks RCU enabled. [ 0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
[ 0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=64 [ 0.000000] NR_IRQS: 524544, nr_irqs: 1752, preallocated irqs: 16 [ 0.000000] rcu: srcu_init: Setting srcu_struct sizes based on contention. [ 0.000000] random: crng init done (trusting CPU's manufacturer) [ 0.000000] Console: colour VGA+ 80x25 [ 0.000000] printk: console [ttyS1] enabled [ 0.000000] Lock dependency validator: Copyright (c) 2006 Red Hat, Inc., Ingo Molnar [ 0.000000] ... MAX_LOCKDEP_SUBCLASSES: 8 [ 0.000000] ... MAX_LOCK_DEPTH: 48 [ 0.000000] ... MAX_LOCKDEP_KEYS: 8192 [ 0.000000] ... CLASSHASH_SIZE: 4096 [ 0.000000] ... MA5536 [ 0.000000] ... MAX_LOCKDEP_CHAINS: 131072 [ 0.000000] ... CHAINHASH_SIZE: 65536 [ 0.000000] memory used by lock dependency info: 11641 kB [ 0.000000] memory used for stack traces: 4224 kB [ 0.000000] per task-struct memory footprint: 2688 bytes [ 0.000000] mempolicy: Enabling automatic NUMA balancing. Configure with numa_balancing= or the kernel.numa_balancing sysctl [ 0.000000] ACPI: Core revision 20211217 [ 0.000000] clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns [ 0.000000] APIC: Switch to symmetric I/O mode setup [ 0.001000] DMAR: Host address width 46 [ 0.002000] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0 [ 0.003000] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 0.005000] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1 [ 0.006000] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0466 ecap f020de [ 0.007000] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff [ 0.008000] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff [ 0.009000] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff [ 0.010000] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff [ 0.011000] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff [ 0.012000] DMAR: RMRRx000000bdf6efff [ 0.013000] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff [ 0.014000] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff [ 0.015000] DMAR: [Firmware Bug]: No firmware reserved region can cover this RMRR [0x00000000000e8000-0x00000000000e8fff], contact BIOS vendor for fixes [ 0.016000] DMAR: [Firmware Bug]: Your BIOS is broken; bad RMRR [0x00000000000e8000-0x00000000000e8fff] [ 0.016000] BIOS vendor: HP; Ver: P71; Product Version: [ 0.017000] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff [ 0.018000] DMAR: ATSR flags: 0x0 [ 0.019000] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0 [ 0.020000] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1 [ 0.021000] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1 [ 0.022000] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000 [ 0.023000] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit. [ 0.023000] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting. [ 0.027000] DMAR-IR: Enabled IRQ remapping in xapic mode [ 0.027000] x2apic: IRQ remapping doesn't support X2APIC mode [ 0.028000] Switched APIC routing to physical flat. [ 0.029000] ..T=2 apic2=-1 pin2=-1 [ 0.035000] tsc: PIT calibration matches HPET. 1 loops [ 0.036000] tsc: Detected 2094.949 MHz processor [ 0.000030] clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x1e328cf0a17, max_idle_ns: 440795250041 ns [ 0.003730] Calibrating delay loop (skipped), value calculated using timer frequency.. 4189.89 BogoMIPS (lpj=2094949) [ 0.004726] pid_max: default: 65536 minimum: 512 [ 0.006877] LSM: Security Framework initializing [ 0.007859] Yama: becoming mindful. 
[ 0.008827] SELinux: Initializing. [ 0.010855] LSM support for eBPF active [ 0.026112] Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, vmalloc hugepage) [ 0.034157] Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, vmalloc hugepage) [ 0.035733] Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 0.037783] Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, vmalloc) [ 0.044175] CPU0: Thermal monitoring enabled (TM1) [ 0.044845] process: using mwait in idle threads [ 0.045740] Last level iTLB entries: 4KB 512, 2MB 8, 4MB 8 [ 0.046725] Last level dTLB entries: 4KB 512, 2MB 0, 4MB 0, 1GB 4 [ 0.047744] Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization28] Spectre V2 : Mitigation: Retpolines [ 0.049725] Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch [ 0.050725] Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT [ 0.051725] Spectre V2 : Enabling Restricted Speculation for firmware calls [ 0.052732] Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier [ 0.053725] Spectre V2 : User space: Mitigation: STIBP via prctl [ 0.054727] Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl [ 0.055738] MDS: Mitigation: Clear CPU buffers [ 0.056725] MMIO Stale Data: Unknown: No mitigations [ 0.092662] Freeing SMP alternatives memory: 32K [ 0.095676] smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1170 [ 0.095764] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz (family: 0x6, model: 0x3e, stepping: 0x4) [ 0.100713] cblist_init_generic: Setting adjustable number of callback queues. [ 0.100725] cblist_init_generic: Setting shift to 6 and lim to 1. [ 0.102342] cblist_init_generic: Setting shift to 6 and lim to 1. [ 0.103371] cblist_init_generic: Setting shift to 6 and lim to 1. [ 0.103955] Running RCU-tasks wait API self tests [ 0.214010] Performance Events: PEBS fmt1+, IvyBridge events, 16-deep LBdth counters, Broken BIOS detected, complain to your hardware vendor. [ 0.214728] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 38d is 330) [ 0.215726] Intel PMU driver. [ 0.216739] ... version: 3 [ 0.217727] ... bit width: 48 [ 0.218726] ... generic registers: 4 [ 0.219726] ... value mask: 0000ffffffffffff [ 0.220726] ... max period: 00007fffffffffff [ 0.221726] ... fixed-purpose events: 3 [ 0.222726] ... event mask: 000000070000000f [ 0.225382] rcu: Hierarchical SRCU implementation. [ 0.225727] rcu: Max phase no-delay instances is 400. [ 0.229800] Callback from call_rcu_tasks_trace() invoked. [ 0.245147] NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. [ 0.259032] smp: Bringing up secondary CPUs ... [ 0.261846] x86: Booting SMP configuration: [ 0.262730] .... node #0, CPUs: #1 [ 0.271639] #2 [ 0.277547] #3 [ 0.283761] #4 [ 0.289680] #5 [ 0.295694] [ 0.295745] .... node #1, CPUs: #6 [ 0.000000] smpboot: CPU 6 Converting physical 0 to logical die 1 [ 0.368769] Callback from call_rcu_tasks_rude() invoked. [ 0.371 [ 0.671832] Callback from call_rcu_tasks() invoked. [ 0.674525] #8 [ 0.682018] #9 [ 0.689073] #10 [ 0.696960] #11 [ 0.703934] [ 0.704485] .... node #0, CPUs: #12 [ 0.708568] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. [ 0.711528] #13 [ 0.716888] #14 [ 0.721749] #15 [ 0.726779] #16 [ 0.731827] #17 [ 0.736874] [ 0.737492] .... 
node #1, CPUs: #18 [ 0.743854] #19 [ 0.749813] #20 [ 0.754806] #21 [ 0.761016] #22 [ 0.766715] #23 [ 0.771220] smp: Brought up 2 nodes, 24 CPUs [ 0.771743] smpboot: Max logical packages: 6 [ 0.772736] smpboot: Total of 24 processors activated (101196.96 BogoMIPS) [ 1.318715] node 0 deferred pages initialised in 536ms [ 1.320226] pgdatinit0 (143) used greatest stack depth: 28216 bytes left [ 1.744778] node 1 deferred pages initialised in 962ms [ 1.758175] devtmpfs: initialized [ 1.762669] x86/mm: Memory block size: 128MB [ 1.980647] DMA-API: preallocated 65536 debug entries [ 1.982741] DMA-API: debugging enabled by kernel config [ 1.984739] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns [ 1.989746] futex hash table entries: 16384 (order: 9, 2097152 bytes, vmalloc) [ 2.001926] prandom: seed boundary self test passed [ 2.005783] prandom: 100 self tests passed [ 2.016118] prandom32: self test passed (less than 6 bits correlated) [ 2.017752] pinctrl core: initialized pinctrl subsystem [ 2.022020] [ 2.022679] ************************************************************* [ 2.024741] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 2.027738] ** ** [ 2.029734] ** IOMMU DebugFS SUPPORT HAS BEEN ENABLED IN THIS KERNEL ** [ 2.032737] ** ** [ 2.034734] ** This means that this kernel is built to expose internal ** [ 2.037738] ** IOMMU data structures, which may compromise security on ** [ 2.039734] ** your system. ** [ 2.042738] ** ** [ 2.044734] ** If you see this message and you are not debugging the ** [ 2.046736] ** kernel, report this immediately to your vendor! ** [ 2.049738] ** ** [ 2.051734] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE ** [ 2.054730] ************************************************************* [ 2.056862] PM: RTC time: 04:41:48, date: 2023-02-03 [ 2.075156] NET: Registered PF_NETLINK/PF_ROUTE protocol family [ 2.084819] DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations [ 2.088159] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations [ 2.091121] DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations [ 2.094360] audit: initializing netlink subsys (disabled) [ 2.097931] audit: type=2000 audit(1675399300.132:1): state=initialized audit_enabled=0 res=1 [ 2.103011] thermal_sys: Registered thermal governor 'fair_share' [ 2.103033] thermal_sys: Registered thermal governor 'step_wise' [ 2.105760] thermal_sys: Registered thermal governor 'user_space' [ 2.108325] cpuidle: using governor menu [ 2.113015] Detected 1 PCC Subspaces [ 2.114739] Registering PCC driver as Mailbox controller [ 2.117810] HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB [ 2.120796] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it [ 2.122737] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 [ 2.128823] PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xc0000000-0xcfffffff] (base 0xc0000000) [ 2.131754] PCI: MMCONFIG at [mem 0xc0000000-0xcfffffff] reserved in E820 [ 2.240922] PCI: Using configuration type 1 for base access [ 2.242786] PCI: HP ProLiant DL360 detected, enabling pci=bfsort. [ 2.244940] core: PMU erratum BJ122, BV98, HSD29 worked around, HT is on [ 2.267209] ENERGY_PERF_BIAS: Set to 'normal', was 'performance' [ 2.487128] kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
[ 2.496295] HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB [ 2.498793] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages [ 2.500738] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages [ 2.510100] cryptd: max_cpu_qlen set to 1000 [ 2.519777] ACPI: Added _OSI(Module Device) [ 2.520731] ACPI: Added _OSI(Processor Device) [ 2.522732] ACPI: Added _OSI(3.0 _SCP Extensions) [ 2.523729] ACPI: Added _OSI(Processor Aggregator Device) [ 2.525749] ACPI: Added _OSI(Linux-Dell-Video) [ 2.527744] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) [ 2.529760] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) [ 2.915575] ACPI: 10 ACPI AML tables successfully acquired and loaded [ 3.189152] ACPI: Interpreter enabled [ 3.191015] ACPI: PM: (supports S0 S4 S5) [ 3.192760] ACPI: Using IOAPIC for interrupt routing [ 3.194339] HEST: Table parsing has been initialized. [ 3.196735] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug [ 3.199731] PCI: Using E820 reservations for host bridge windows [ 3.536707] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-1f]) [ 3.539788] acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 3.546234] acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 3.549736] acpi PNP0A08:00: FADT indicates ASPM is unsupported, using BIOS configuration [ 3.568193] PCI host bridge to bus 0000:00 [ 3.569764] pci_bus 0000:00: root bus resource [mem 0xf4000000-0xf7ffffff window] [ 3.572758] pci_bus 0000:00: root bus resource [io 0x1000-0x7fff window] [ 3.574753] pci_bus 0000:00: root bus resource [io 0x0000-0x03af window] [ 3.577755] pci_bus 0000:00: root bus resource [io 0x03e0-0x0cf7 window] [ 3.579752] pci_bus 0000:00: root bus resource [io 0x0d00-0x0fff window] [ 3.582755] pci_bus 0000:00: root bus resource [io 0x03b0-0x03bb window] [ 3.584752] pci_bus 0000:00: root bus resource [io 0x03c0-0x03df window] [ 3.587774] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] [ 3.590764] pci_bus 0000:00: root bus resource [bus 00-1f] [ 3.593509] pci 0000:00:00.0: [8086:0e00] type 00 class 0x060000 [ 3.596398] pci 0000:00:00.0: PME# supported from D0 D3hot D3cold [ 3.601135] pci 0000:00:01.0: [8086:0e02] type 01 class 0x060400 [ 3.603273] pci 0000:00:01.0: PME# supported from D0 D3hot D3cold [ 3.612748] pci 0000:00:01.1: [8086:0e03] type 01 class 0x060400 [ 3.615319] pci 0000:00:01.1: PME# supported from D0 D3hot D3cold [ 3.623659] pci 0000:00:02.0: [8086:0e04] type 01 class 0x060400 [ 3.626293] pci 0000:00:02.0: PME# supported from D0 D3hot D3cold [ 3.634614] pci 0000:00:02.1: [8086:0e05] type 01 class 0x060400 [ 3.637279] pci 0000:00:02.1: PME# supported from D0 D3hot D3cold [ 3.645293] pci 0000:00:02.2: [8086:0e06] type 01 class 0x060400 [ 3.648031] pci 0000:00:02.2: PME# supported from D0 D3hot D3cold [ 3.653975] pci 0000:00:02.3: [8086:0e07] type 01 class 0x060400 [ 3.657495] pci 0000:00:02.3: PME# supported from D0 D3hot D3cold [ 3.665349] pci 0000:00:03.0: [8086:0e08] type 01 class 0x060400 [ 3.667887] pci 0000:00:03.0: enabling Extended Tags [ 3.669178] pci 0000:00:03.0: PME# supported from D0 D3hot D3cold [ 3.677792] pci 0000:00:03.1: [8086:0e09] type 01 class 0x060400 [ 3.680323] pci 0000:00:03.1: PME# supported from D0 D3hot D3cold [ 3.688461] pci 0000:00:03.2: [8086:0e0a] type 01 class 0x060400 [ 3.691297] pci 0000:00:03.2: PME# supported from D0 D3hot D3cold [ 3.699849] pci 0000:00:03.3: [8086:0e0b] type 01 class 0x060400 
[ 3.702302] pci 0000:00:03.3: PME# supported from D0 D3hot D3cold [ 3.710258] pci 0000:00:04.0: [8086:0e20] type 00 class 0x088000 [ 3.712805] pci 0000:00:04.0: reg 0x10: [mem 0xf6cf0000-0xf6cf3fff 64bit] [ 3.716874] pci 0000:00:04.1: [8086:0e21] type 00 class 0x088000 [ 3.719800] pci 0000:00:04.1: reg 0x10: [mem 0xf6ce0000-0xf6ce3fff 64bit] [ 3.723869] pci 0000:00:04.2: [8086:0e22] type 00 class 0x088000 [ 3.725798] pci 0000:00:04.2: reg 0x10: [mem 0xf6cd0000-0xf6cd3fff 64bit] [ 3.730844] pci 0000:00:04.3: [8086:0e23] type 00 class 0x088000 [ 3.732798] pci 0000:00:04.3: reg 0x10: [mem 0xf6cc0000-0xf6cc3fff 64bit] [ 3.736869] pci 0000:00:04.4: [8086:0e24] type 00 class 0x088000 [ 3.739800] pci 0000:00:04.4: reg 0x10: [mem 0xf6cb0000-0xf6cb3fff 64bit] [ 3.744108] pci 0000:00:04.5: [8086:0e25] type 00 class 0x088000 [ 3.746801] pci 0000:00:04.5: reg 0x10: [mem 0xf6ca0000-0xf6ca3fff 64bit] [ 3.751028] pci 0000:00:04.6: [8086:0e26] type 00 class 0x088000 [ 3.753801] pci 0000:00:04.6: reg 0x10: [mem 0xf6c90000-0xf6c93fff 64bit] [ 3.757873] pci 0000:00:04.7: [8086:0e27] type 00 class 0x088000 [ 3.759798] pci 0000:00:04.7: reg 0x10: [mem 0xf6c80000-0xf6c83fff 64bit] [ 3.765012] pci 0000:00:05.0: [8086:0e28] type 00 class 0x088000 [ 3.768778] pci 0000:00:05.2: [8086:0e2a] type 00 class 0x088000 [ 3.772817] pci 0000:00:05.4: [8086:0e2c] type 00 class 0x080020 [ 3.775785] pci 0000:00:05.4: reg 0x10: [mem 0xf6c70000-0xf6c70fff] [ 3.779997] pci 0000:00:11.0: [8086:1d3e] type 01 class 0x060400 [ 3.783277] pci 0000:00:11.0: PME# supported from D0 D3hot D3cold [ 3.790223] pci 0000:00:1a.0: [8086:1d2d] type 00 class 0x0c0320 [ 3.792806] pci 0000:00:1a.0: reg 0x10: [mem 0xf6c60000-0xf6c603ff] [ 3.795197] pci 0000:00:1a.0: PME# supported from D0 D3hot D3cold [ 3.799693] pci 0000:00:1c.0: [8086:1d10] type 01 class 0x060400 [ 3.802283] pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold [ 3.811539] pci 0000:00:1c.7: [8086:1d1e] type 01 class 0x060400 [ 3.814280] pci 0000:00:1c.7: PME# supported from D0 D3hot D3cold [ 3.823002] pci 0000:00:1d.0: [8086:1d26] type 00 class 0x0c0320 [ 3.824791] pci 0000:00:1d.0: reg 0x10: [mem 0xf6c50000-0xf6c503ff] [ 3.827196] pci 0000:00:1d.0: PME# supported from D0 D3hot D3cold [ 3.831612] pci 0000:00:1e.0: [8086:244e] type 01 class 0x060401 [ 3.835940] pci 0000:00:1f.0: [8086:1d41] type 00 class 0x060100 [ 3.843383] pci 0000:00:1f.2: [8086:1d00] type 00 class 0x01018f [ 3.845791] pci 0000:00:1f.2: reg 0x10: [io 0x4000-0x4007] [ 3.847762] pci 0000:00:1f.2: reg 0x14: [io 0x4008-0x400b] [ 3.849761] pci 0000:00:1f.2: reg 0x18: [io 0x4010-0x4017] [ 3.851762] pci 0000:00:1f.2: reg 0x1c: [io 0x4018-0x401b] [ 3.852758] pci 0000:00:1f.2: reg 0x20: [io 0x4020-0x402f] [ 3.854765] pci 0000:00:1f.2: reg 0x24-0x403f] [ 4.286388] pci 0000:04:00.0: [103c:323b] type 00 class 0x010400 [ 4.288799] pci 0000:04:00.0: reg 0x10: [mem 0xf7f00000-0xf7ffffff 64bit] [ 4.290769] pci 0000:04:00.0: reg 0x18: [mem 0xf7ef0000-0xf7ef03ff 64bit] [ 4.293762] pci 0000:04:00.0: reg 0x20: [io 0x6000-0x60ff] [ 4.295777] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 4.297758] pci 0000:04:00.0: enabling Extended Tags [ 4.300259] pci 0000:04:00.0: PME# supported from D0 D1 D3hot [ 4.316332] pci 0000:00:01.0: PCI bridge to [bus 04] [ 4.318751] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 4.320735] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 4.324720] pci 0000:00:01.1: PCI bridge to [bus 11] [ 4.331225] pci 0000:03:00.0: [14e4:1657] type 00 class 0x020000 [ 4.333801] pci 
0000:03:00.0: reg 0x10: [mem 0xf6bf0000-0xf6bfffff 64bit pref] [ 4.336776] pci 0000:03:00.0: reg 0x18: [mem 0xf6be0000-0xf6beffff 64bit pref] [ 4.338789] pci 0000:03:00.0: reg 0x20: [mem 0xf6bd0000-0xf6bdffff 64bit pref] [ 4.341782] pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 4.344079] pci 0000:03:00.0: PME# supported from D0 D3hot D3cold [ 4.354941] pci 0000:03:00.1: [14e4:1657] type 0000 [ 4.747316] pci 0000:03:00.1: reg 0x10: [mem 0xf6bc0000-0xf6bcffff 64bit pref] [ 4.749753] pci 0000:03:00.1: reg 0x18: [mem 0xf6bb0000-0xf6bbffff 64bit pref] [ 4.752754] pci 0000:03:00.1: reg 0x20: [mem 0xf6ba0000-0xf6baffff 64bit pref] [ 4.755747] pci 0000:03:00.1: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 4.758305] pci 0000:03:00.1: PME# supported from D0 D3hot D3cold [ 4.774763] pci 0000:03:00.2: [14e4:1657] type 00 class 0x020000 [ 4.776799] pci 0000:03:00.2: reg 0x10: [mem 0xf6b90000-0xf6b9ffff 64bit pref] [ 4.779777] pci 0000:03:00.2: reg 0x18: [mem 0xf6b80000-0xf6b8ffff 64bit pref] [ 4.781789] pci 0000:03:00.2: reg 0x20: [mem 0xf6b70000-0xf6b7ffff 64bit pref] [ 4.784764] pci 0000:03:00.2: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 4.788343] pci 0000:03:00.2: PME# supported from D0 D3hot D3cold [ 4.804058] pci 0000:03:00.3: [14e4:1657] type 00 class 0x020000 [ 4.805797] pci 0000:03:00.3: reg 0x10: [mem 0xf6b60000-0xf6b6ffff 64bit pref] [ 4.808777] pci 0000:03:00.3: reg 0x18: [mem 0xf6b50000-0xf6b5ffff 64bit pref] [ 4.810771] pci 0000:03:00.3: reg 0x20: [mem 0xf6b40000-0xf6b4ffff 5.213276] pci 0000:03:00.3: reg 0x30: [mem 0x00000000-0x0003ffff pref] [ 5.304219] pci 0000:03:00.3: PME# supported from D0 D3hot D3cold [ 5.319904] pci 0000:00:02.0: PCI bridge to [bus 03] [ 5.321769] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 5.325644] pci 0000:00:02.1: PCI bridge to [bus 12] [ 5.329020] pci 0000:02:00.0: [103c:323b] type 00 class 0x010400 [ 5.330793] pci 0000:02:00.0: reg 0x10: [mem 0xf7d00000-0xf7dfffff 64bit] [ 5.333767] pci 0000:02:00.0: reg 0x18: [mem 0xf7cf0000-0xf7cf03ff 64bit] [ 5.335758] pci 0000:02:00.0: reg 0x20: [io 0x5000-0x50ff] [ 5.337799] pci 0000:02:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] [ 5.340761] pci 0000:02:00.0: enabling Extended Tags [ 5.343251] pci 0000:02:00.0: PME# supported from D0 D1 D3hot [ 5.346919] pci 0000:00:02.2: PCI bridge to [bus 02] [ 5.348749] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 5.351748] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 5.354697] pci 0000:00:02.3: PCI bridge to [bus 13] [ 5.393618] pci 0000:00:03.0: PCI bridge to [bus 07] [ 5.396765] pci 0000:00:03.1: PCI bridg [ 5.789720] pci 0000:00:03.2: PCI bridge to [bus 15] [ 5.792261] pci 0000:00:03.3: PCI bridge to [bus 16] [ 5.794290] pci 0000:00:11.0: PCI bridge to [bus 18] [ 5.796296] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 5.799899] pci 0000:01:00.0: [103c:3306] type 00 class 0x088000 [ 5.801773] pci 0000:01:00.0: reg 0x10: [io 0x3000-0x30ff] [ 5.803752] pci 0000:01:00.0: reg 0x14: [mem 0xf7bf0000-0xf7bf01ff] [ 5.805750] pci 0000:01:00.0: reg 0x18: [io 0x3400-0x34ff] [ 5.811135] pci 0000:01:00.1: [102b:0533] type 00 class 0x030000 [ 5.812775] pci 0000:01:00.1: reg 0x10: [mem 0xf5000000-0xf5ffffff pref] [ 5.815753] pci 0000:01:00.1: reg 0x14: [mem 0xf7be0000-0xf7be3fff] [ 5.817750] pci 0000:01:00.1: reg 0x18: [mem 0xf7000000-0xf77fffff] [ 5.820045] pci 0000:01:00.1: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] [ 5.823869] pci 0000:01:00.2: [103c:3307] type 00 class 0x088000 [ 5.825773] pci 
0000:01:00.2: reg 0x10: [io 0x3800-0x38ff] [ 5.827751] pci 0000:01:00.2: reg 0x14: [mem 0xf6ff0000-0xf6ff00ff] [ 5.829750] pci 0000:01:00.2: reg 0x18: [mem 0xf6e00000-0xf6efffff] [ 5.832752] pci 0000:01:00.2: reg 0x1c: [mem 0xf6d80000-0xf6dfffff] [ 5.834751] .2: reg 0x20: [mem 0xf6d70000-0xf6d77fff] [ 6.227280] pci 0000:01:00.2: reg 0x24: [mem 0xf6d60000-0xf6d67fff] [ 6.229752] pci 0000:01:00.2: reg 0x30: [m0x00000000-0x0000ffff pref] [ 6.330006] pci 0000:01:00.2: PME# supported from D0 D3hot D3cold [ 6.334754] pci 0000:01:00.4: [103c:3300] type 00 class 0x0c0300 [ 6.336913] pci 0000:01:00.4: reg 0x20: [io 0x3c00-0x3c1f] [ 6.342233] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 6.343750] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 6.345746] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 6.347751] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 6.350841] pci_bus 0000:17: extended config space not accessible [ 6.353639] pci 0000:00:1e.0: PCI bridge to [bus 17] (subtractive decode) [ 6.356794] pci 0000:00:1e.0: bridge window [mem 0xf4000000-0xf7ffffff window] (subtractive decode) [ 6.359774] pci 0000:00:1e.0: bridge window [io 0x1000-0x7fff window] (subtractive decode) [ 6.362757] pci 0000:00:1e.0: bridge window [io 0x0000-0x03af window] (subtractive decode) [ 6.365757] pci 0000:00:1e.0: bridge window [io 0x03e0-0x0cf7 window] (subtractive decode) [ 6.368756] pci 0000:00:1e.0: bridge window [io 0x0d00-0x0fff window] (subtractive decode) [ 6.371757] pci 0000:00:1e.0: bridge window [io 0x03b0-0x03bb window] (subtractive decode) [ 6.374756] pci 0000:00:1e.0: bridge window [io 0x03c0-0x03df window] (subtractive 6.668175] pci 0000:00:1e.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) [ 6.793869] ACPI: PCI: Interrupt link LNKA configured for IRQ 5 [ 6.799838] ACPI: PCI: Interrupt link LNKB configured for IRQ 7 [ 6.806689] ACPI: PCI: Interrupt link LNKC configured for IRQ 10 [ 6.812720] ACPI: PCI: Interrupt link LNKD configured for IRQ 10 [ 6.818638] ACPI: PCI: Interrupt link LNKE configured for IRQ 5 [ 6.825080] ACPI: PCI: Interrupt link LNKF configured for IRQ 7 [ 6.831683] ACPI: PCI: Interrupt link LNKG configured for IRQ 0 [ 6.832744] ACPI: PCI: Interrupt link LNKG disabled [ 6.838781] ACPI: PCI: Interrupt link LNKH configured for IRQ 0 [ 6.840740] ACPI: PCI: Interrupt link LNKH disabled [ 6.844009] ACPI: PCI Root Bridge [PCI1] (domain 0000 [bus 20-3f]) [ 6.846818] acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] [ 6.855883] acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug SHPCHotplug PME AER PCIeCapability LTR DPC] [ 6.859747] acpi PNP0A08:01: FADT indicates ASPM is unsupported, using BIOS configuration [ 6.87181ridge to bus 0000:20 [ 7.263259] pci_bus 0000:20: root bus resource [mem 0xfb000000-0xfbffffff window] [ 7.265741] pci_bus 0000:20: root bus resource [io 0x8000-0xffff window] [ 7.268732] pci_bus 0000:20: root bus resource [bus 20-3f] [ 7.271008] pci 0000:20:00.0: [8086:0e01] type 01 class 0x060400 [ 7.273053] pci 0000:20:00.0: PME# supported from D0 D3hot D3cold [ 7.277096] pci 0000:20:01.0: [8086:0e02] type 01 class 0x060400 [ 7.279056] pci 0000:20:01.0: PME# supported from D0 D3hot D3cold [ 7.284056] pci 0000:20:01.1: [8086:0e03] type 01 class 0x060400 [ 7.287055] pci 0000:20:01.1: PME# supported from D0 D3hot D3cold [ 7.291905] pci 0000:20:02.0: [8086:0e04] type 01 class 0x060400 [ 7.295054] pci 0000:20:02.0: PME# supported from D0 D3hot D3cold [ 7.299964] 
pci 0000:20:02.1: [8086:0e05] type 01 class 0x060400 [ 7.302053] pci 0000:20:02.1: PME# supported from D0 D3hot D3cold [ 7.308012] pci 0000:20:02.2: [8086:0e06] type 01 class 0x060400 [ 7.310053] pci 0000:20:02.2: PME# supported from D0 D3hot D3cold [ 7.315912] pci 0000:20:02.3: [8086:0e07] type 01 class 0x060400 [ 7.318058] pcME# supported from D0 D3hot D3cold [ 7.813007] pci 0000:20:03.0: [8086:0e08] type 01 class 0x060400 [ 7.814811] pci 0000:20:03.0: enabling Extended Tags [ 7.816986] pci 0000:20:03.0: PME# supported from D0 D3hot D3cold [ 7.821902] pci 0000:20:03.1: [8086:0e09] type 01 class 0x060400 [ 7.824164] pci 0000:20:03.1: PME# supported from D0 D3hot D3cold [ 7.829913] pci 0000:20:03.2: [8086:0e0a] type 01 class 0x060400 [ 7.832061] pci 0000:20:03.2: PME# supported from D0 D3hot D3cold [ 7.836978] pci 0000:20:03.3: [8086:0e0b] type 01 class 0x060400 [ 7.840080] pci 0000:20:03.3: PME# supported from D0 D3hot D3cold [ 7.844871] pci 0000:20:04.0: [8086:0e20] type 00 class 0x088000 [ 7.846766] pci 0000:20:04.0: reg 0x10: [mem 0xfbff0000-0xfbff3fff 64bit] [ 7.850937] pci 0000:20:04.1: [8086:0e21] type 00 class 0x088000 [ 7.852766] pci 0000:20:04.1: reg 0x10: [mem 0xfbfe0000-0xfbfe3fff 64bit] [ 7.856057] pci 0000:20:04.2: [8086:0e22] type 00 class 0x088000 [ 7.858767] pci 0000:20:04.2: reg 0x10: [mem 0xfbfd0000-0xfbfd3fff 64bit] [ 7.861936] pci 0000:20:04.3: [8086:0e23] type 00 class 0x088000 [ 7.863766] pci 0000:20:04.3: reg 0x10: [mem 0xfbfc0000-0xfbfc3fff 64bit] [ 7.867931] pci 0000:20:04.4: [8086:0eass 0x088000 [ 8.260294] pci 0000:20:04.4: reg 0x10: [mem 0xfbfb0000-0xfbfb3fff 64bit] [ 8.265414] pci 0000:20:04.5: [8086:0e25] type 00 class 0x088000 [ 8.267802] pci 0000:20:04.5: reg 0x10: [mem 0xfbfa0000-0xfbfa3fff 64bit] [ 8.271861] pci 0000:20:04.6: [8086:0e26] type 00 class 0x088000 [ 8.273797] pci 0000:20:04.6: reg 0x10: [mem 0xfbf90000-0xfbf93fff 64bit] [ 8.278798] pci 0000:20:04.7: [8086:0e27] type 00 class 0x088000 [ 8.280787] pci 0000:20:04.7: reg 0x10: [mem 0xfbf80000-0xfbf83fff 64bit] [ 8.284824] pci 0000:20:05.0: [8086:0e28] type 00 class 0x088000 [ 8.290149] pci 0000:20:05.2: [8086:0e2a] type 00 class 0x088000 [ 8.293806] pci 0000:20:05.4: [8086:0e2c] type 00 class 0x080020 [ 8.295783] pci 0000:20:05.4: reg 0x10: [mem 0xfbf70000-0xfbf70fff] [ 8.301997] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 8.304715] pci 0000:20:01.0: PCI bridge to [bus 21] [ 8.307720] pci 0000:20:01.1: PCI bridge to [bus 22] [ 8.309682] pci 0000:20:02.0: PCI bridge to [bus 23] [ 8.312685] pci 0000:20:02.1: PCI bridge to [bu2720] pci 0000:20:02.2: PCI bridge to [bus 25] [ 8.805534] pci 0000:20:02.3: PCI bridge to [bus 26] [ 8.808703] pci 0000:20:03.0: PCI bridge to [bus 27] [ 8.810708] pci 0000:20:03.1: PCI bridge to [bus 28] [ 8.813658] pci 0000:20:03.2: PCI bridge to [bus 29] [ 8.816695] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 8.832823] iommu: Default domain type: Translated [ 8.833737] iommu: DMA domain TLB invalidation policy: lazy mode [ 8.842362] SCSI subsystem initialized [ 8.844793] ACPI: bus type USB registered [ 8.846670] usbcore: registered new interface driver usbfs [ 8.849183] usbcore: registered new interface driver hub [ 8.852006] usbcore: registered new device driver usb [ 8.855255] pps_core: LinuxPPS API ver. 1 registered [ 8.857740] pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti [ 8.860871] PTP clock support registered [ 8.866062] EDAC MC: Ver: 3.0.0 [ 8.878620] NetLabel: Initializing [ 8.879731] NetLabel: domain hash size = 128 [ 8.881737] NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO [ 8.884264] NetLabel: unlabeled traffic allowed by default [ 8.885740] PCI: Using ACPI for IRQ routing [ 8.889314] PCI: Discovered peer bu.066629] PCI host bridge to bus 0000:1f [ 9.282017] pci_bus 0000:1f: Unknown NUMA node; performance will be reduced [ 9.284889] pci_bus 0000:1f: root bus resource [io 0x0000-0xffff] [ 9.286775] pci_bus 0000:1f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 9.289746] pci_bus 0000:1f: No busn resource found for root bus, will use [bus 1f-ff] [ 9.292747] pci_bus 0000:1f: busn_res: can not insert [bus 1f-ff] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 9.296863] pci 0000:1f:08.0: [8086:0e80] type 00 class 0x088000 [ 9.300276] pci 0000:1f:09.0: [8086:0e90] type 00 class 0x088000 [ 9.304180] pci 0000:1f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 9.308119] pci 0000:1f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 9.311106] pci 0000:1f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 9.314114] pci 0000:1f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 9.318127] pci 0000:1f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 9.322146] pci 0000:1f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 9.325138] pci 0000:1f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 9.329221] pci 0000:1f:0c.1: [8086:0ee2] type 0000 [ 9.722720] pci 0000:1f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 9.725817] pci 0000:1f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 9.728538] pci 0000:1f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 9.731526] pci 0000:1f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 9.734547] pci 0000:1f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 9.737513] pci 0000:1f:0e.1: [8086:0e30] type 00 class 0x110100 [ 9.740569] pci 0000:1f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 9.743737] pci 0000:1f:0f.1: [8086:0e71] type 00 class 0x088000 [ 9.747661] pci 0000:1f:0f.2: [8086:0eaa] type 00 class 0x088000 [ 9.750664] pci 0000:1f:0f.3: [8086:0eab] type 00 class 0x088000 [ 9.753641] pci 0000:1f:0f.4: [8086:0eac] type 00 class 0x088000 [ 9.756645] pci 0000:1f:0f.5: [8086:0ead] type 00 class 0x088000 [ 9.759674] pci 0000:1f:10.0: [8086:0eb0] type 00 class 0x088000 [ 9.762653] pci 0000:1f:10.1: [8086:0eb1] type 00 class 0x088000 [ 9.765670] pci 0000:1f:10.2: [8086:0eb2] type 00 class 0x088000 [ 9.768651] pci 0000:1f:10.3: [8086:0eb3] type 00 class 0x088000 [ 9.771749] pci 0000:1f:10.4: [8086:0eb4] type 00 class 0x088000 [ 9.774675] pci 0000:1f:10.5: [8086:0eb5] type 00 class 0x088000 [ 9.777657] pci 0000:1f:10.6e 00 class 0x088000 [ 10.268720] pci 0000:1f:10.7: [8086:0eb7] type 00 class 0x088000 [ 10.273383] pci 0000:1f:13.0: [8086:0e1d] type 00 class 0x088000 [ 10.277289] pci 0000:1f:13.1: [8086:0e34] type 00 class 0x110100 [ 10.280116] pci 0000:1f:13.4: [8086:0e81] type 00 class 0x088000 [ 10.284110] pci 0000:1f:13.5: [8086:0e36] type 00 class 0x110100 [ 10.287145] pci 0000:1f:16.0: [8086:0ec8] type 00 class 0x088000 [ 10.291108] pci 0000:1f:16.1: [8086:0ec9] type 00 class 0x088000 [ 10.294114] pci 0000:1f:16.2: [8086:0eca] type 00 class 0x088000 [ 10.298126] pci_bus 0000:1f: busn_res: [bus 1f-ff] end is updated to 1f [ 10.300745] pci_bus 0000:1f: busn_res: can not insert [bus 1f] under domain [bus 00-ff] (conflicts with (null) [bus 00-1f]) [ 10.306665] PCI: Discovered peer bus 3f [ 10.308845] PCI host bridge to bus 0000:3f [ 10.310740] 
pci_bus 0000:3f: Unknown NUMA node; performance will be reduced [ 10.312751] pci_bus 0000:3f: root bus resource [io 0x0000-0xffff] [ 10.314750] pci_bus 0000:3f: root bus resource [mem 0x00000000-0x3fffffffffff] [ 10.317743] pci_bus 0000:3f: No busn resource found for root bus, will use [bus 3f-ff] [ 10.319738] pci_bus 0000:3f: busn_res: can not insert nder domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 10.715409] pci 0000:3f:08.0: [8086:0e80] type 00 class 0x088000 [ 10.717546] pci 0000:3f:09.0: [8086:0e90] type 00 class 0x088000 [ 10.720547] pci 0000:3f:0a.0: [8086:0ec0] type 00 class 0x088000 [ 10.723518] pci 0000:3f:0a.1: [8086:0ec1] type 00 class 0x088000 [ 10.726508] pci 0000:3f:0a.2: [8086:0ec2] type 00 class 0x088000 [ 10.729504] pci 0000:3f:0a.3: [8086:0ec3] type 00 class 0x088000 [ 10.732508] pci 0000:3f:0b.0: [8086:0e1e] type 00 class 0x088000 [ 10.735500] pci 0000:3f:0b.3: [8086:0e1f] type 00 class 0x088000 [ 10.738521] pci 0000:3f:0c.0: [8086:0ee0] type 00 class 0x088000 [ 10.741600] pci 0000:3f:0c.1: [8086:0ee2] type 00 class 0x088000 [ 10.744514] pci 0000:3f:0c.2: [8086:0ee4] type 00 class 0x088000 [ 10.747533] pci 0000:3f:0d.0: [8086:0ee1] type 00 class 0x088000 [ 10.750515] pci 0000:3f:0d.1: [8086:0ee3] type 00 class 0x088000 [ 10.753510] pci 0000:3f:0d.2: [8086:0ee5] type 00 class 0x088000 [ 10.756508] pci 0000:3f:0e.0: [8086:0ea0] type 00 class 0x088000 [ 10.759515] pci 0000:3f:0e.1: [8086:0e30] type 00 class 0x110100 [ 10.762542] pci 0000:3f:0f.0: [8086:0ea8] type 00 class 0x088000 [ 10.764651] pci 0000:3f:0f.1: [8086:0e71] type 00 class 0x088000 [ 10.767720] pci 0000:3f: type 00 class 0x088000 [ 11.258720] pci 0000:3f:0f.3: [8086:0eab] type 00 class 0x088000 [ 11.262534] pci 0000:3f:0f.4: [8086:0eac] type 00 class 0x088000 [ 11.266336] pci 0000:3f:0f.5: [8086:0ead] type 00 class 0x088000 [ 11.270353] pci 0000:3f:10.0: [8086:0eb0] type 00 class 0x088000 [ 11.274322] pci 0000:3f:10.1: [8086:0eb1] type 00 class 0x088000 [ 11.277337] pci 0000:3f:10.2: [8086:0eb2] type 00 class 0x088000 [ 11.281331] pci 0000:3f:10.3: [8086:0eb3] type 00 class 0x088000 [ 11.285361] pci 0000:3f:10.4: [8086:0eb4] type 00 class 0x088000 [ 11.289311] pci 0000:3f:10.5: [8086:0eb5] type 00 class 0x088000 [ 11.292364] pci 0000:3f:10.6: [8086:0eb6] type 00 class 0x088000 [ 11.296464] pci 0000:3f:10.7: [8086:0eb7] type 00 class 0x088000 [ 11.300359] pci 0000:3f:13.0: [8086:0e1d] type 00 class 0x088000 [ 11.304080] pci 0000:3f:13.1: [8086:0e34] type 00 class 0x110100 [ 11.307134] pci 0000:3f:13.4: [8086:0e81] type 00 class 0x088000 [ 11.311105] pci 0000:3f:13.5: [8086:0e36] type 00 class 0x110100 [ 11.314130] pci 0000:3f:16.0: [8086:0ec8] type 00 class 0x088000 [ 11.318078] pci 0000:3f:16.1: [8086:0ec9] type 00 class 0x088000 [ 11.321111] pci 0000:3f:16.2: [8086:0eca] type 00 class 0x088000 [ 11.324896] pci_bus 0000:3f: busn_res: [bus 3f-ff] end is updated to 3f [ 11.327759]:3f: busn_res: can not insert [bus 3f] under domain [bus 00-ff] (conflicts with (null) [bus 20-3f]) [ 11.737341] pci 0000:01:00.1: vgaarb: setting as boot VGA device [ 11.737720] pci 0000:01:00.1: vgaarb: bridge control possible [ 11.737720] pci 0000:01:00.1: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none [ 11.743968] vgaarb: loaded [ 11.746299] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 [ 11.748757] hpet0: 8 comparators, 64-bit 14.318180 MHz counter [ 11.756250] clocksource: Switched to clocksource tsc-early [ 12.305959] VFS: Disk quotas dquot_6.6.0 [ 12.307648] VFS: Dquot-cache hash table 
entries: 512 (order 0, 4096 bytes) [ 12.311778] pnp: PnP ACPI init [ 12.318793] system 00:00: [mem 0xf4ffe000-0xf4ffffff] could not be reserved [ 12.326191] system 00:01: [io 0x0408-0x040f] has been reserved [ 12.328691] system 00:01: [io 0x04d0-0x04d1] has been reserved [ 12.330793] system 00:01: [io 0x0310-0x0315] has been reserved [ 12.332858] system 00:01: [io 0x0316-0x0317] has been reserved [ 12.335118] system 00:01: [io 0x0700-0x071f] has been reserved [ 12.337261] system 00:01: [io 0x0880-0x08ff] has been reserved [ 12.339392] system 00:01: [io 0x0900-0x097f] has been reserved [ 12.341523] system 00:01: [io 0x0cd4-0x0cd7] has been reserved [ 12.343655] system 00:01: [io 0x0cd0-0x0cd3] has been reserved [ 12.345952] system 00:01: [io 0x0f50-0x0f58] has been reserved [ 12.348114] system 00:01: [io 0x0ca0-0x0ca1] has been reserved [ 12.350231] system 00:01: [io 0x0ca4-0x0ca5] has been reserved [ 12.352358] system 00:01: [io 0x02f8-0x02ff] has been reserved [ 12.354482] system 00:01: [mem 0xc0000000-0xcfffffff] has been reserved [ 12.356864] system 00:01: [mem 0xfe000000-0xfebfffff] has been reserved [ 12.359220] system 00:01: [mem 0xfc000000-0xfc000fff] has been reserved [ 12.361577] system 00:01: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 12.363959] system 00:01: [mem 0xfed30000-0xfed3ffff] has been reserved [ 12.366383] system 00:01: [mem 0xfee00000-0xfee00fff] has been reserved [ 12.368816] system 00:01: [mem 0xff800000-0xffffffff] has been reserved [ 12.402374] system 00:06: [mem 0xfbefe000-0xfbefffff] could not be reserved [ 12.407902] pnp: PnP ACPI: found 7 devices [ 12.481856] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns [ 12.486404] NET: Registered PF_INET protocol family [ 12.491130] IP idents hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 12.803787] tcp_listen_portaddr_hash hash table entries: 16384 (order: 8, 1310720 bytes, vmalloc) [ 12.808167] Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, vmalloc) [ 12.812159] TCP established hash table entries: 262144 (order: 9, 2097152 bytes, vmalloc) [ 12.817643] TCP bind hash table entries: 65536 (order: 10, 5242880 bytes, vmalloc hugepage) [ 12.824546] TCP: Hash tables configured (established 262144 bind 65536) [ 12.836555] MPTCP token hash table entries: 32768 (order: 9, 3145728 bytes, vmalloc) [ 12.843094] UDP hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 12.849593] UDP-Lite hash table entries: 16384 (order: 9, 3145728 bytes, vmalloc) [ 12.858514] NET: Registered PF_UNIX/PF_LOCAL protocol family [ 12.860598] NET: Registered PF_XDP protocol family [ 12.862425] pci 0000:00:02.0: BAR 14: assigned [mem 0xf4000000-0xf40fffff] [ 12.864881] pci 0000:04:00.0: BAR 6: assigned [mem 0xf7e00000-0xf7e7ffff pref] [ 12.867369] pci 0000:00:01.0: PCI bridge to [bus 04] [ 12.869113] pci 0000:00:01.0: bridge window [io 0x6000-0x6fff] [ 12.871207] pci 0000:00:01.0: bridge window [mem 0xf7e00000-0xf7ffffff] [ 12.873573] pci 0000:00:01.1: PCI bridge to [bus 11] [ 13.275363] pci 0000:03:00.0: BAR 6: assigned [mem 0xf4000000-0xf403ffff pref] [ 13.277941] pci 0000:03:00.1: BAR 6: assigned [mem 0xf4040000-0xf407ffff pref] [ 13.280419] pci 0000:03:00.2: BAR 6: assigned [mem 0xf4080000-0xf40bffff pref] [ 13.282879] pci 0000:03:00.3: BAR 6: assigned [mem 0xf40c0000-0xf40fffff pref] [ 13.285433] pci 0000:00:02.0: PCI bridge to [bus 03] [ 13.287203] pci 0000:00:02.0: bridge window [mem 0xf4000000-0xf40fffff] [ 13.289528] pci 0000:00:02.0: bridge window [mem 0xf6b00000-0xf6bfffff 64bit
pref] [ 13.292187] pci 0000:00:02.1: PCI bridge to [bus 12] [ 13.293909] pci 0000:02:00.0: BAR 6: assigned [mem 0xf7c00000-0xf7c7ffff pref] [ 13.296465] pci 0000:00:02.2: PCI bridge to [bus 02] [ 13.298210] pci 0000:00:02.2: bridge window [io 0x5000-0x5fff] [ 13.300306] pci 0000:00:02.2: bridge window [mem 0xf7c00000-0xf7dfffff] [ 13.302629] pci 0000:00:02.3: PCI bridge to [bus 13] [ 13.304379] pci 0000:00:03.0: PCI bridge to [bus 07] [ 13.306158] pci 0000:00:03.1: PCI bridge to [bus 14] [ 13.307918] pci 0000:00:03.2: PCI bridge to [bus 15] [ 13.309646] pci 0000:00:03.3: PCI bridge to [bus 16] [ 13.311396] pci 0000:00:11.0: PCI bridge to [bus 18] [ 13.313150] pci 0000:00:1c.0: PCI bridge to [bus 0a] [ 13.814827] pci 0000:01:00.2: BAR 6: assigned [mem 0xf6d00000-0xf6d0ffff pref] [ 13.817392] pci 0000:00:1c.7: PCI bridge to [bus 01] [ 13.819147] pci 0000:00:1c.7: bridge window [io 0x3000-0x3fff] [ 13.821249] pci 0000:00:1c.7: bridge window [mem 0xf6d00000-0xf7bfffff] [ 13.823570] pci 0000:00:1c.7: bridge window [mem 0xf5000000-0xf5ffffff 64bit pref] [ 13.826301] pci 0000:00:1e.0: PCI bridge to [bus 17] [ 13.828416] pci_bus 0000:00: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 13.830759] pci_bus 0000:00: resource 5 [io 0x1000-0x7fff window] [ 13.832905] pci_bus 0000:00: resource 6 [io 0x0000-0x03af window] [ 13.835111] pci_bus 0000:00: resource 7 [io 0x03e0-0x0cf7 window] [ 13.837224] pci_bus 0000:00: resource 8 [io 0x0d00-0x0fff window] [ 13.839345] pci_bus 0000:00: resource 9 [io 0x03b0-0x03bb window] [ 13.841460] pci_bus 0000:00: resource 10 [io 0x03c0-0x03df window] [ 13.843607] pci_bus 0000:00: resource 11 [mem 0x000a0000-0x000bffff window] [ 13.846068] pci_bus 0000:04: resource 0 [io 0x6000-0x6fff] [ 13.847978] pci_bus 0000:04: resource 1 [mem 0xf7e00000-0xf7ffffff] [ 14.044847] pci_bus 0000:03: resource 1 [mem 0xf4000000-0xf40fffff] [ 14.252509] pci_bus 0000:03: resource 2 [mem 0xf6b00000-0xf6bfffff 64bit pref] [ 14.255114] pci_bus 0000:02: resource 0 [io 0x5000-0x5fff] [ 14.257033] pci_bus 0000:02: resource 1 [mem 0xf7c00000-0xf7dfffff] [ 14.259231] pci_bus 0000:01: resource 0 [io 0x3000-0x3fff] [ 14.261146] pci_bus 0000:01: resource 1 [mem 0xf6d00000-0xf7bfffff] [ 14.263284] pci_bus 0000:01: resource 2 [mem 0xf5000000-0xf5ffffff 64bit pref] [ 14.265806] pci_bus 0000:17: resource 4 [mem 0xf4000000-0xf7ffffff window] [ 14.268180] pci_bus 0000:17: resource 5 [io 0x1000-0x7fff window] [ 14.270332] pci_bus 0000:17: resource 6 [io 0x0000-0x03af window] [ 14.272455] pci_bus 0000:17: resource 7 [io 0x03e0-0x0cf7 window] [ 14.274576] pci_bus 0000:17: resource 8 [io 0x0d00-0x0fff window] [ 14.276744] pci_bus 0000:17: resource 9 [io 0x03b0-0x03bb window] [ 14.278864] pci_bus 0000:17: resource 10 [io 0x03c0-0x03df window] [ 14.281013] pci_bus 0000:17: resource 11 [mem 0x000a0000-0x000bffff window] [ 14.285883] pci 0000:20:00.0: PCI bridge to [bus 2b] [ 14.287782] pci 0000:20:01.0: PCI bridge to [bus 21] [ 14.28…] pci 0000:20:01.1: PCI bridge to [bus 22] [ 14.791371] pci 0000:20:02.0: PCI bridge to [bus 23] [ 14.793169] pci 0000:20:02.1: PCI bridge to [bus 24] [ 14.794996] pci 0000:20:02.2: PCI bridge to [bus 25] [ 14.796758] pci 0000:20:02.3: PCI bridge to [bus 26] [ 14.798507] pci 0000:20:03.0: PCI bridge to [bus 27] [ 14.800300] pci 0000:20:03.1: PCI bridge to [bus 28] [ 14.802047] pci 0000:20:03.2: PCI bridge to [bus 29] [ 14.803804] pci 0000:20:03.3: PCI bridge to [bus 2a] [ 14.805615] pci_bus 0000:20: resource 4 [mem 0xfb000000-0xfbffffff window] [ 14.808018] pci_bus 0000:20: resource 5 [io 0x8000-0xffff window] [ 14.810456]
pci_bus 0000:1f: resource 4 [io 0x0000-0xffff] [ 14.812434] pci_bus 0000:1f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 14.814763] pci_bus 0000:3f: resource 4 [io 0x0000-0xffff] [ 14.816698] pci_bus 0000:3f: resource 5 [mem 0x00000000-0x3fffffffffff] [ 14.819166] pci 0000:00:05.0: disabled boot interrupts on device [8086:0e28] [ 14.849349] pci 0000:00:1a.0: quirk_usb_early_handoff+0x0/0x290 took 27064 usecs [ 14.877822] pci 0000:00:1d.0: quirk_usb_early_handoff+0x0/0x290 took 25228 usecs [ 14.892689] pci 0000:01:00.4: quirk_usb_early_handoff+0x0/0x290 took 11909 usecs [ …365] pci 0000:20:05.0: disabled boot interrupts on device [8086:0e28] [ 15.297640] pci 0000:20:05.0: quirk_disable_intel_boot_interrupt+0x0/0x1f0 took 311796 usecs [ 15.300872] PCI: CLS 64 bytes, default 64 [ 15.305550] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) [ 15.305904] Trying to unpack rootfs image as initramfs... [ 15.307849] software IO TLB: mapped [mem 0x0000000039000000-0x000000003d000000] (64MB) [ 15.312718] ACPI: bus type thunderbolt registered [ 15.427348] Initialise system trusted keyrings [ 15.429330] Key type blacklist registered [ 15.432260] workingset: timestamp_bits=36 max_order=23 bucket_order=0 [ 15.563560] zbud: loaded [ 15.582969] integrity: Platform Keyring initialized [ 15.598800] NET: Registered PF_ALG protocol family [ 15.600615] xor: automatically using best checksumming function avx [ 15.603063] Key type asymmetric registered [ 15.604538] Asymmetric key parser 'x509' registered [ 15.606279] Running certificate verification selftests [ 15.822987] Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db' [ 15.829940] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246) [ 15.833501] io scheduler mq-deadline registered [ 15.835236] io scheduler kyber registered [ 15.837775] io scheduler bfq registered [ 15.846714] atomic64_test: passed for x86-64 platform with CX8 and with SSE [ 16.196170] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4 [ 16.201810] ACPI: \_PR_.CP00: Found 2 idle states [ 16.255098] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 [ 16.262487] ACPI: button: Power Button [PWRF] [ 16.317185] tsc: Refined TSC clocksource calibration: 2094.951 MHz [ 16.318808] thermal LNXTHERM:00: registered as thermal_zone0 [ 16.319609] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x1e328ef1914, max_idle_ns: 440795263413 ns [ 16.321417] ACPI: thermal: Thermal Zone [THM0] (8 C) [ 16.326702] clocksource: Switched to clocksource tsc [ 16.328315] ERST: Error Record Serialization Table (ERST) support is initialized. [ 16.331699] pstore: Registered erst as persistent store backend [ 16.338477] GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
[ 16.345191] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled [ 16.349105] 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A [ 16.355568] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A [ 16.372949] Non-volatile memory driver v1.3 [ 16.441939] rdac: device handler registered [ 16.444566] hp_sw: device handler registered [ 16.446205] emc: device handler registered [ 16.449553] alua: device handler registered [ 16.454903] libphy: Fixed MDIO Bus: probed [ 16.457662] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 16.460071] ehci-pci: EHCI PCI platform driver [ 16.470566] ehci-pci 0000:00:1a.0: EHCI Host Controller [ 16.474362] ehci-pci 0000:00:1a.0: new USB bus registered, assigned bus number 1 [ 16.477198] ehci-pci 0000:00:1a.0: debug port 2 [ 16.483531] ehci-pci 0000:00:1a.0: irq 21, io mem 0xf6c60000 [ 16.491893] ehci-pci 0000:00:1a.0: USB 2.0 started, EHCI 1.00 [ 16.495601] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 16.498540] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 16.501036] usb usb1: Product: EHCI Host Controller [ 16.502707] usb usb1: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 16.505684] usb usb1: SerialNumber: 0000:00:1a.0 [ 16.511288] hub 1-0:1.0: USB hub found [ 16.513261] hub 1-0:1.0: 2 ports detected [ 16.827719] ehci-pci 0000:00:1d.0: EHCI Host Controller [ 16.830799] ehci-pci 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 16.833420] ehci-pci 0000:00:1d.0: debug port 2 [ 16.839396] ehci-pci 0000:00:1d.0: irq 20, io mem 0xf6c50000 [ 16.847800] ehci-pci 0000:00:1d.0: USB 2.0 started, EHCI 1.00 [ 16.850873] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.14 [ 16.853749] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 16.856322] usb usb2: Product: EHCI Host Controller [ 16.858024] usb usb2: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug ehci_hcd [ 16.860887] usb usb2: SerialNumber: 0000:00:1d.0 [ 16.865322] hub 2-0:1.0: USB hub found [ 16.866877] hub 2-0:1.0: 2 ports detected [ 16.872186] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 16.874518] ohci-pci: OHCI PCI platform driver [ 16.876567] uhci_hcd: USB Universal Host Controller Interface driver [ 16.882500] uhci_hcd 0000:01:00.4: UHCI Host Controller [ 16.885958] uhci_hcd 0000:01:00.4: new USB bus registered, assigned bus number 3 [ 16.888607] uhci_hcd 0000:01:00.4: detected 8 ports [ …9803] uhci_hcd 0000:01:00.4: port count misdetected?
forcing to 2 ports [ 17.293510] uhci_hcd 0000:01:00.4: irq 47, io port 0x00003c00 [ 17.297135] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14 [ 17.300286] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 17.302907] usb usb3: Product: UHCI Host Controller [ 17.304768] usb usb3: Manufacturer: Linux 5.14.0-256.2009_766119311.el9.x86_64+debug uhci_hcd [ 17.307939] usb usb3: SerialNumber: 0000:01:00.4 [ 17.313873] hub 3-0:1.0: USB hub found [ 17.315851] hub 3-0:1.0: 2 ports detected [ 17.322994] usbcore: registered new interface driver usbserial_generic [ 17.325702] usbserial: USB Serial support registered for generic [ 17.329132] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f0e:PS2M] at 0x60,0x64 irq 1,12 [ 17.335131] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 17.337074] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 17.341649] mousedev: PS/2 mouse device common for all mice [ 17.345588] rtc_cmos 00:03: RTC can wake from S4 [ 17.353834] rtc_cmos 00:03: registered as rtc0 [ 17.355887] rtc_cmos 00:03: setting system clock to 2023-02-03T04:42:03 UTC (1675399323) [ 17.425833] usb 1-1: new high-speed USB device number 2 using ehci-pci [ 17.859195] rtc_cmos 00:03: alarms up to one day, 114 bytes nvram, hpet irqs [ 17.908173] Freeing initrd memory: 35636K [ 17.912856] usb 2-1: new high-speed USB device number 2 using ehci-pci [ 17.917422] intel_pstate: Intel P-state driver initializing [ 17.954409] hid: raw HID events driver (C) Jiri Kosina [ 17.957792] usbcore: registered new interface driver usbhid [ 17.959796] usbhid: USB HID core driver [ 17.962619] drop_monitor: Initializing network drop monitor service [ 17.991765] usb 1-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 17.994778] usb 1-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 18.001752] hub 1-1:1.0: USB hub found [ 18.003875] hub 1-1:1.0: 6 ports detected [ 18.008126] Initializing XFRM netlink socket [ 18.015258] NET: Registered PF_INET6 protocol family [ 18.036891] Segment Routing with IPv6 [ 18.038538] NET: Registered PF_PACKET protocol family [ 18.041522] mpls_gso: MPLS GSO support [ 18.343925] usb 2-1: New USB device found, idVendor=8087, idProduct=0024, bcdDevice= 0.00 [ 18.346843] usb 2-1: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 18.351832] hub 2-1:1.0: USB hub found [ 18.354021] hub 2-1:1.0: 8 ports detected [ 18.401034] microcode: sig=0x306e4, pf=0x1, revision=0x42e [ 18.406163] microcode: Microcode Update Driver: v2.2. [ 18.406197] IPI shorthand broadcast: enabled [ 18.409775] AVX version of gcm_enc/dec engaged.
[ 18.411881] AES CTR mode by8 optimization enabled [ 18.419961] sched_clock: Marking stable (18454021052, -34279934)->(19548617431, -1128876313) [ 18.481147] registered taskstats version 1 [ 18.492168] Loading compiled-in X.509 certificates [ 18.500538] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 18.506794] Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80' [ 18.512919] Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8' [ 18.609682] zswap: loaded using pool lzo/zbud [ 18.618763] debug_vm_pgtable: [debug_vm_pgtable ]: Validating architecture page table helpers [ 18.646894] usb 2-1.3: new high-speed USB device number 3 using ehci-pci [ 18.905821] usb 2-1.3: New USB device found, idVendor=0424, idProduct=2660, bcdDevice= 8.01 [ 18.908766] usb 2-1.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0 [ 18.913807] hub 2-1.3:1.0: USB hub found [ 18.915815] hub 2-1.3:1.0: 2 ports detected [ 19.776000] page_owner is disabled [ 19.780891] pstore: Using crash dump compression: deflate [ 19.783969] Key type big_key registered [ 19.859377] Key type encrypted registered [ 19.861051] ima: No TPM chip found, activating TPM-bypass! [ 19.863204] Loading compiled-in module X.509 certificates [ 19.867666] Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 3bcdb855b1eeffc8155fd4f9576830612b2c709a' [ 19.872078] ima: Allocated hash algorithm: sha256 [ 19.874076] ima: No architecture policies found [ 19.876459] evm: Initialising EVM extended attributes: [ 19.878304] evm: security.selinux [ 19.879500] evm: security.SMACK64 (disabled) [ 19.881032] evm: security.SMACK64EXEC (disabled) [ 19.882699] evm: security.SMACK64TRANSMUTE (disabled) [ 19.884466] evm: security.SMACK64MMAP (disabled) [ 19.886128] evm: security.apparmor (disabled) [ 19.887647] evm: security.ima [ 19.888719] evm: security.capability [ 19.890074] evm: HMAC attrs: 0x1 [ 19.925361] modprobe (255) used greatest stack depth: 27544 bytes left [ 19.969383] cryptomgr_test (254) used greatest stack depth: 27296 bytes left [ 20.766314] cryptomgr_test (356) used greatest stack depth: 27032 bytes left [ 21.082593] PM: Magic number: 11:752:665 [ 21.084482] acpi LNXCPU:14: hash matches [ 21.086067] platform dmi-ipmi-si.0: hash matches [ 21.161218] Freeing unused decrypted memory: 2036K [ 21.173528] Freeing unused kernel image (initmem) memory: 5300K [ 21.175341] Write protecting the kernel read-only data: 57344k [ 21.186575] Freeing unused kernel image (text/rodata gap) memory: 2036K [ 21.192209] Freeing unused kernel image (rodata/data gap) memory: 1400K [ 21.343316] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 21.343741] x86/mm: Checking user space page tables [ 21.427559] x86/mm: Checked W+X mappings: passed, no W+X pages found. [ 21.428033] Run /init as init process [ 21.565871] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 21.584546] systemd[1]: Detected architecture x86-64. [ 21.584951] systemd[1]: Running in initrd. Welcome to CentOS Stream 9 dracut-057-20.git20221213.el9 (Initramfs) ! 
[ 21.591904] systemd[1]: Hostname set to . [ 22.493146] dracut-rootfs-g (368) used greatest stack depth: 27000 bytes left [ 22.759667] systemd[1]: Queued start job for default target Initrd Default Target. [ 22.780037] systemd[1]: Created slice Slice /system/systemd-hibernate-resume. [ OK ] Created slice Slice /system/systemd-hibernate-resume . [ 22.787476] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 22.792333] systemd[1]: Reached target Initrd /usr File System. [ OK ] Reached target Initrd /usr File System . [ 22.797106] systemd[1]: Reached target Path Units. [ OK ] Reached target Path Units . [ 22.798860] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 22.802994] systemd[1]: Reached target Swaps. [ OK ] Reached target Swaps . [ 22.807099] systemd[1]: Reached target Timer Units. [ OK ] Reached target Timer Units . [ 22.813458] systemd[1]: Listening on D-Bus System Message Bus Socket. [ OK ] Listening on D-Bus System Message Bus Socket . [ 22.819499] systemd[1]: Listening on Journal Socket (/dev/log). [ OK ] Listening on Journal Socket (/dev/log) . [ 22.826426] systemd[1]: Listening on Journal Socket. [ OK ] Listening on Journal Socket . [ 22.831412] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 22.836913] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 22.841131] systemd[1]: Reached target Socket Units. [ OK ] Reached target Socket Units . [ 22.865862] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 22.902333] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 22.909437] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 22.928946] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 22.951550] systemd[1]: Starting Create System Users... Starting Create System Users ... [ 22.976654] systemd[1]: Starting Setup Virtual Console... Starting Setup Virtual Console ... [ 23.005777] systemd[1]: Finished Create List of Static Device Nodes. [ OK ] Finished Create List of Static Device Nodes . [ 23.100255] systemd[1]: Finished Apply Kernel Variables. [ OK ] Finished Apply Kernel Variables . [ 23.206211] systemd[1]: Finished Create System Users. [ OK ] Finished Create System Users . [ 23.229984] systemd[1]: Starting Create Static Device Nodes in /dev... Starting Create Static Device Nodes in /dev ... [ 23.415918] systemd[1]: Finished Create Static Device Nodes in /dev. [ OK ] Finished Create Static Device Nodes in /dev . [ 23.710581] systemd[1]: Finished Setup Virtual Console. [ OK ] Finished Setup Virtual Console . [ 23.717001] systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met. [ 23.736995] systemd[1]: Starting dracut cmdline hook... Starting dracut cmdline hook ... [ 24.417049] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . Starting Create Volatile Files and Directories ... [ OK ] Finished Create Volatile Files and Directories . [ OK ] Finished dracut cmdline hook . Starting dracut pre-udev hook ... [ 26.130068] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
[ 26.132183] device-mapper: uevent: version 1.0.3 [ 26.137204] device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com [ OK ] Finished dracut pre-udev hook . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Started Rule-based Manager for Device Events and Files . Starting Coldplug All udev Devices ... [ * ] (1 of 3) A start job is running for…l360pgen8--08-root (6s / no limit) M [ * * ] (1 of 3) A start job is running for…l360pgen8--08-root (6s / no limit) M [ * * * ] (1 of 3) A start job is running for…l360pgen8--08-root (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…g All udev Devices (7s / no limit) M [ * * * ] (2 of 3) A start job is running for…g All udev Devices (8s / no limit) M [ * * * ] (2 of 3) A start job is running for…g All udev Devices (8s / no limit) [ 31.507360] udevadm (543) used greatest stack depth: 26904 bytes left M [ OK ] Finished Coldplug All udev Devices . [ OK ] Reached target Network . Starting dracut initqueue hook ... [ 31.674589] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: NMI decoding initialized [ 31.688377] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:02:00.0 [ 31.688882] HP HPSA Driver (v 3.4.20-200) [ 31.689254] hpsa 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control [ 31.694129] hpwdt 0000:01:00.0: HPE Watchdog Timer Driver: Version: 2.0.4 [ 31.694576] hpwdt 0000:01:00.0: timeout: 30 seconds (nowayout=0) [ 31.695380] hpwdt 0000:01:00.0: pretimeout: on. [ 31.696084] hpwdt 0000:01:00.0: kdumptimeout: -1. [ 31.810628] tg3 0000:03:00.0 eth0: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c4 [ 31.811362] tg3 0000:03:00.0 eth0: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 31.812434] tg3 0000:03:00.0 eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 31.812973] tg3 0000:03:00.0 eth0: dma_rwctrl[00000001] dma_mask[64-bit] [ 31.855663] tg3 0000:03:00.1 eth1: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c5 [ 31.856372] tg3 0000:03:00.1 eth1: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 31.857364] tg3 0000:03:00.1 eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 31.857832] tg3 0000:03:00.1 eth1: dma_rwctrl[00000001] dma_mask[64-bit] [ 31.868668] hpsa 0000:02:00.0: Logical aborts not supported [ 31.869131] hpsa 0000:02:00.0: HP SSD Smart Path aborts not supported [ 31.908111] ata_piix 0000:00:1f.2: MAP [ P0 P2 P1 P3 ] [ 31.925300] scsi host0: hpsa [ 31.928007] tg3 0000:03:00.2 eth2: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c6 [ 31.928780] tg3 0000:03:00.2 eth2: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 31.929857] tg3 0000:03:00.2 eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 31.930473] tg3 0000:03:00.2 eth2: dma_rwctrl[00000001] dma_mask[64-bit] [ 31.942991] hpsa can't handle SMP requests [ 31.946396] Warning: Unmaintained hardware is detected: hpsa:323B:103C @ 0000:04:00.0 [ 31.946964] hpsa 0000:04:00.0: can't disable ASPM; OS doesn't have ASPM control [ 31.992901] tg3 0000:03:00.3 eth3: Tigon3 [partno(629133-002) rev 5719001] (PCI Express) MAC address 2c:44:fd:84:51:c7 [ 31.993527] tg3 0000:03:00.3 eth3: attached PHY is 5719C (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[1]) [ 31.994517] tg3 0000:03:00.3 eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] [ 31.995070] tg3 0000:03:00.3 eth3: dma_rwctrl[00000001] dma_mask[64-bit] [ 
32.016309] hpsa 0000:02:00.0: scsi 0:0:0:0: added RAID HP P420i controller SSDSmartPathCap- En- Exp=1 [ 32.017160] hpsa 0000:02:00.0: scsi 0:0:1:0: masked Direct-Access ATA MM0500GBKAK PHYS DRV SSDSmartPathCap- En- Exp=0 [ 32.018300] hpsa 0000:02:00.0: scsi 0:1:0:0: added Direct-Access HP LOGICAL VOLUME RAID-0 SSDSmartPathCap- En- Exp=1 [ 32.041387] hpsa can't handle SMP requests [ 32.046367] scsi 0:0:0:0: RAID HP P420i 8.32 PQ: 0 ANSI: 5 [ 32.054133] scsi 0:1:0:0: Direct-Access HP LOGICAL VOLUME 8.32 PQ: 0 ANSI: 5 [ 32.101947] scsi host1: ata_piix [ 32.108657] hpsa 0000:04:00.0: Logical aborts not supported [ 32.109045] hpsa 0000:04:00.0: HP SSD Smart Path aborts not supported [ 32.172158] scsi host2: ata_piix [ 32.181299] ata1: SATA max UDMA/133 cmd 0x4000 ctl 0x4008 bmdma 0x4020 irq 17 [ 32.182202] ata2: SATA max UDMA/133 cmd 0x4010 ctl 0x4018 bmdma 0x4028 irq 17 [ 32.225795] scsi host3: hpsa [ 32.245797] hpsa can't handle SMP requests [ 32.274821] hpsa 0000:04:00.0: scsi 3:0:0:0: added RAID HP P421 controller SSDSmartPathCap- En- Exp=1 [ 32.275531] hpsa 0000:04:00.0: scsi 3:0:1:0: masked Enclosure PMCSIERA SRCv8x6G enclosure SSDSmartPathCap- En- Exp=0 [ 32.279877] hpsa can't handle SMP requests [ 32.281489] scsi 3:0:0:0: RAID HP P421 8.32 PQ: 0 ANSI: 5 [ 32.286288] tg3 0000:03:00.1 eno2: renamed from eth1 [ 32.315200] tg3 0000:03:00.3 eno4: renamed from eth3 [ 32.334317] tg3 0000:03:00.0 eno1: renamed from eth0 [ 32.357570] tg3 0000:03:00.2 eno3: renamed from eth2 [ 32.379583] scsi 0:0:0:0: Attached scsi generic sg0 type 12 [ 32.381201] scsi 0:1:0:0: Attached scsi generic sg1 type 0 [ 32.382914] scsi 3:0:0:0: Attached scsi generic sg2 type 12 [ 32.465914] sd 0:1:0:0: [sda] 976707632 512-byte logical blocks: (500 GB/466 GiB) [ 32.467637] sd 0:1:0:0: [sda] Write Protect is off [ 32.469428] sd 0:1:0:0: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA [ 32.470070] sd 0:1:0:0: [sda] Preferred minimum I/O size 262144 bytes [ 32.470537] sd 0:1:0:0: [sda] Optimal transfer size 262144 bytes [ 32.577774] sda: sda1 sda2 [ 32.583420] sd 0:1:0:0: [sda] Attached SCSI disk [ 33.220904] ata2.00: failed to resume link (SControl 0) [ 33.532893] ata1.01: failed to resume link (SControl 0) [ 33.544694] ata1.00: SATA link down (SStatus 0 SControl 300) [ 33.545549] ata1.01: SATA link down (SStatus 4 SControl 0) [ 34.260893] ata2.01: failed to resume link (SControl 0) [ 34.272415] ata2.00: SATA link down (SStatus 4 SControl 0) [ 34.272803] ata2.01: SATA link down (SStatus 4 SControl 0) [ 34.438305] scsi_id (661) used greatest stack depth: 26712 bytes left [ 36.132991] cp (712) used greatest stack depth: 26312 bytes left [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-root . [ OK ] Reached target Initrd Root Device . [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-swap . Starting Resume from hiber…cs_hpe--dl360pgen8--08-swap ... [ OK ] Finished Resume from hiber…r/cs_hpe--dl360pgen8--08-swap . [ OK ] Reached target Preparation for Local File Systems . [ OK ] Reached target Local File Systems . [ OK ] Reached target System Initialization . [ OK ] Reached target Basic System . [ OK ] Finished dracut initqueue hook . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Reached target Remote File Systems . Starting File System Check…cs_hpe--dl360pgen8--08-root ... [ 38.618825] fsck (748) used greatest stack depth: 25528 bytes left [ OK ] Finished File System Check…r/cs_hpe--dl360pgen8--08-root . Mounting /sysroot ... 
[ 40.030068] SGI XFS with ACLs, security attributes, scrub, verbose warnings, quota, no debug enabled [ 40.101422] XFS (dm-0): Mounting V5 Filesystem [ 40.757665] XFS (dm-0): Ending clean mount [ 40.817044] mount (751) used greatest stack depth: 24736 bytes left [ OK ] Mounted /sysroot . [ OK ] Reached target Initrd Root File System . Starting Mountpoints Configured in the Real Root ... [ 40.986875] systemd-fstab-g (763) used greatest stack depth: 23592 bytes left [ OK ] Finished Mountpoints Configured in the Real Root . [ OK ] Reached target Initrd File Systems . [ OK ] Reached target Initrd Default Target . Starting dracut pre-pivot and cleanup hook ... [ OK ] Finished dracut pre-pivot and cleanup hook . Starting Cleaning Up and Shutting Down Daemons ... [ OK ] Stopped target Network . [ OK ] Stopped target Timer Units . [ OK ] Closed D-Bus System Message Bus Socket . [ OK ] Stopped dracut pre-pivot and cleanup hook . [ OK ] Stopped target Initrd Default Target . [ OK ] Stopped target Basic System . [ OK ] Stopped target Initrd Root Device . [ OK ] Stopped target Initrd /usr File System . [ OK ] Stopped target Path Units . [ OK ] Stopped Dispatch Password …ts to Console Directory Watch . [ OK ] Stopped target Remote File Systems . [ OK ] Stopped target Preparation for Remote File Systems . [ OK ] Stopped target Slice Units . [ OK ] Stopped target Socket Units . [ OK ] Stopped target System Initialization . [ OK ] Stopped target Local File Systems . [ OK ] Stopped target Preparation for Local File Systems . [ OK ] Stopped target Swaps . [ OK ] Stopped dracut initqueue hook . [ OK ] Stopped Apply Kernel Variables . [ OK ] Stopped Create Volatile Files and Directories . [ OK ] Stopped Coldplug All udev Devices . Stopping Rule-based Manage…for Device Events and Files ... [ OK ] Stopped Setup Virtual Console . [ OK ] Finished Cleaning Up and Shutting Down Daemons . [ OK ] Stopped Rule-based Manager for Device Events and Files . [ OK ] Closed udev Control Socket . [ OK ] Closed udev Kernel Socket . [ OK ] Stopped dracut pre-udev hook . [ OK ] Stopped dracut cmdline hook . Starting Cleanup udev Database ... [ OK ] Stopped Create Static Device Nodes in /dev . [ OK ] Stopped Create List of Static Device Nodes . [ OK ] Stopped Create System Users . [ OK ] Finished Cleanup udev Database . [ OK ] Reached target Switch Root . Starting Switch Root ... [ 43.088568] systemd-journald[405]: Received SIGTERM from PID 1 (systemd). [ 46.578856] SELinux: policy capability network_peer_controls=1 [ 46.579711] SELinux: policy capability open_perms=1 [ 46.580039] SELinux: policy capability extended_socket_class=1 [ 46.580837] SELinux: policy capability always_check_network=0 [ 46.581894] SELinux: policy capability cgroup_seclabel=1 [ 46.582432] SELinux: policy capability nnp_nosuid_transition=1 [ 46.583267] SELinux: policy capability genfs_seclabel_symlinks=1 [ 47.120400] audit: type=1403 audit(1675399353.264:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 [ 47.135376] systemd[1]: Successfully loaded SELinux policy in 2.303100s. [ 47.286352] systemd[1]: RTC configured in localtime, applying delta of -300 minutes to system time. [ 47.713087] systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 342.906ms. 
[ 47.792823] systemd[1]: systemd 252-3.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 47.811859] systemd[1]: Detected architecture x86-64. Welcome to CentOS Stream 9 ! [ 48.398627] systemd-rc-local-generator[808]: /etc/rc.d/rc.local is not marked executable, skipping. [ 49.061177] kdump-dep-gener (796) used greatest stack depth: 23224 bytes left [ 50.395391] systemd[1]: /usr/lib/systemd/system/restraintd.service:8: Standard output type syslog+console is obsolete, automatically updating to journal+console. Please update your unit file, and consider removing the setting altogether. [ 51.114908] systemd[1]: initrd-switch-root.service: Deactivated successfully. [ 51.122831] systemd[1]: Stopped Switch Root. [ OK ] Stopped Switch Root . [ 51.139109] systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. [ 51.152960] systemd[1]: Created slice Slice /system/getty. [ OK ] Created slice Slice /system/getty . [ 51.168991] systemd[1]: Created slice Slice /system/modprobe. [ OK ] Created slice Slice /system/modprobe . [ 51.185914] systemd[1]: Created slice Slice /system/serial-getty. [ OK ] Created slice Slice /system/serial-getty . [ 51.200050] systemd[1]: Created slice Slice /system/sshd-keygen. [ OK ] Created slice Slice /system/sshd-keygen . [ 51.224966] systemd[1]: Created slice User and Session Slice. [ OK ] Created slice User and Session Slice . [ 51.233967] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. [ OK ] Started Dispatch Password …ts to Console Directory Watch . [ 51.239550] systemd[1]: Started Forward Password Requests to Wall Directory Watch. [ OK ] Started Forward Password R…uests to Wall Directory Watch . [ 51.252672] systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point. [ OK ] Set up automount Arbitrary…s File System Automount Point . [ 51.254968] systemd[1]: Reached target Local Encrypted Volumes. [ OK ] Reached target Local Encrypted Volumes . [ 51.260245] systemd[1]: Stopped target Switch Root. [ OK ] Stopped target Switch Root . [ 51.262413] systemd[1]: Stopped target Initrd File Systems. [ OK ] Stopped target Initrd File Systems . [ 51.267289] systemd[1]: Stopped target Initrd Root File System. [ OK ] Stopped target Initrd Root File System . [ 51.269681] systemd[1]: Reached target Local Integrity Protected Volumes. [ OK ] Reached target Local Integrity Protected Volumes . [ 51.274260] systemd[1]: Reached target Slice Units. [ OK ] Reached target Slice Units . [ 51.276415] systemd[1]: Reached target System Time Set. [ OK ] Reached target System Time Set . [ 51.281326] systemd[1]: Reached target Local Verity Protected Volumes. [ OK ] Reached target Local Verity Protected Volumes . [ 51.290901] systemd[1]: Listening on Device-mapper event daemon FIFOs. [ OK ] Listening on Device-mapper event daemon FIFOs . [ 51.311034] systemd[1]: Listening on LVM2 poll daemon socket. [ OK ] Listening on LVM2 poll daemon socket . [ 51.500586] systemd[1]: Listening on RPCbind Server Activation Socket. [ OK ] Listening on RPCbind Server Activation Socket . [ 51.506417] systemd[1]: Reached target RPC Port Mapper. [ OK ] Reached target RPC Port Mapper . [ 51.534418] systemd[1]: Listening on Process Core Dump Socket. 
[ OK ] Listening on Process Core Dump Socket . [ 51.539817] systemd[1]: Listening on initctl Compatibility Named Pipe. [ OK ] Listening on initctl Compatibility Named Pipe . [ 51.572874] systemd[1]: Listening on udev Control Socket. [ OK ] Listening on udev Control Socket . [ 51.583698] systemd[1]: Listening on udev Kernel Socket. [ OK ] Listening on udev Kernel Socket . [ 51.619445] systemd[1]: Activating swap /dev/mapper/cs_hpe--dl360pgen8--08-swap... Activating swap /dev/mappe…cs_hpe--dl360pgen8--08-swap ... [ 51.674666] systemd[1]: Mounting Huge Pages File System... Mounting Huge Pages File System ... [ 51.713341] systemd[1]: Mounting POSIX Message Queue File System... Mounting POSIX Message Queue File System ... [ 51.821459] Adding 16502780k swap on /dev/mapper/cs_hpe--dl360pgen8--08-swap. Priority:-2 extents:1 across:16502780k FS [ 51.850188] systemd[1]: Mounting Kernel Debug File System... Mounting Kernel Debug File System ... [ 51.899529] systemd[1]: Mounting Kernel Trace File System... Mounting Kernel Trace File System ... [ 51.905696] systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab). [ 51.962779] systemd[1]: Starting Create List of Static Device Nodes... Starting Create List of Static Device Nodes ... [ 52.001837] systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... Starting Monitoring of LVM…meventd or progress polling ... [ 52.043281] systemd[1]: Starting Load Kernel Module configfs... Starting Load Kernel Module configfs ... [ 52.081209] systemd[1]: Starting Load Kernel Module drm... Starting Load Kernel Module drm ... [ 52.102642] systemd[1]: Starting Load Kernel Module fuse... Starting Load Kernel Module fuse ... [ 52.387619] systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network... Starting Read and set NIS …from /etc/sysconfig/network ... [ 52.393508] systemd[1]: systemd-fsck-root.service: Deactivated successfully. [ 52.396393] systemd[1]: Stopped File System Check on Root Device. [ OK ] Stopped File System Check on Root Device . [ 52.400572] systemd[1]: Stopped Journal Service. [ OK ] Stopped Journal Service . [ 52.403392] systemd[1]: systemd-journald.service: Consumed 1.812s CPU time. [ 52.462908] systemd[1]: Starting Journal Service... Starting Journal Service ... [ 52.526061] systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met. [ 52.560010] systemd[1]: Starting Generate network units from Kernel command line... Starting Generate network …ts from Kernel command line ... [ 52.604222] systemd[1]: Starting Remount Root and Kernel File Systems... Starting Remount Root and Kernel File Systems ... [ 52.612138] systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met. [ 52.624513] fuse: init (API version 7.36) [ 52.652267] systemd[1]: Starting Apply Kernel Variables... Starting Apply Kernel Variables ... [ 52.689805] systemd[1]: Starting Coldplug All udev Devicem_connector registered Starting Coldplug All udev Devices ... [ 52.955296] systemd[1]: Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap. [ OK ] Activated swap /dev/mapper/cs_hpe--dl360pgen8--08-swap . [ 53.018916] systemd[1]: Started Journal Service. [ OK ] Started Journal Service . [ OK ] Mounted Huge Pages File System . [ OK ] Mounted POSIX Message Queue File System . [ OK ] Mounted Kernel Debug File System . [ OK ] Mounted Kernel Trace File System . 
[ OK ] Finished Create List of Static Device Nodes . [ OK ] Finished Monitoring of LVM… dmeventd or progress polling . [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Load Kernel Module drm . [ OK ] Finished Load Kernel Module fuse . [ OK ] Finished Read and set NIS …e from /etc/sysconfig/network . [ OK ] Finished Generate network units from Kernel command line . [ OK ] Finished Remount Root and Kernel File Systems . [ OK ] Finished Apply Kernel Variables . [ OK ] Reached target Preparation for Network . [ OK ] Reached target Swaps . Mounting FUSE Control File System ... Mounting Kernel Configuration File System ... Starting Flush Journal to Persistent Storage ... Starting Load/Save Random Seed ... Starting Create Static Device Nodes in /dev ... [ OK ] Mounted FUSE Control File System . [ OK ] Mounted Kernel Configuration File System . [ 53.679231] systemd-journald[833]: Received client request to flush runtime journal. [ OK ] Finished Load/Save Random Seed . [ OK ] Finished Create Static Device Nodes in /dev . [ OK ] Reached target Preparation for Local File Systems . Starting Rule-based Manage…for Device Events and Files ... [ OK ] Finished Flush Journal to Persistent Storage . [ OK ] Started Rule-based Manager for Device Events and Files . [ * ] (1 of 4) A start job is running for…g All udev Devices (6s / no limit) M Starting Load Kernel Module configfs ... [ OK ] Finished Load Kernel Module configfs . [ OK ] Finished Coldplug All udev Devices . [ 58.326275] power_meter ACPI000D:00: Found ACPI power meter. [ 58.328888] power_meter ACPI000D:00: Ignoring unsafe software power cap! [ 58.329389] power_meter ACPI000D:00: hwmon_device_register() is deprecated. Please convert the driver to use hwmon_device_register_with_info(). [ 58.594629] IPMI message handler: version 39.2 [ 58.650785] ipmi device interface [ 58.705808] dca service started, version 1.12.1 [ 58.715942] ipmi_si: IPMI System Interface driver [ 58.717191] ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS [ 58.717867] ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 [ 58.718957] ipmi_si: Adding SMBIOS-specified kcs state machine [ 58.724996] ipmi_si IPI0001:00: ipmi_platform: probing via ACPI [ 58.727819] ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2-0x0ca3] regsize 1 spacing 1 irq 0 [ 58.819427] ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI [ 58.820509] ipmi_si: Adding ACPI-specified kcs state machine [ 58.823613] ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 [ 58.842400] ioatdma: Intel(R) QuickData Technology Driver 5.00 [ 58.905887] ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. [ 58.972018] ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x00000b, prod_id: 0x2000, dev_id: 0x13) [ 59.103878] ipmi_si IPI0001:00: IPMI kcs interface initialized [ 59.129904] ipmi_ssif: IPMI SSIF Interface driver Mounting /boot ... [ 59.257679] XFS (sda1): Mounting V5 Filesystem [ OK ] Started /usr/sbin/lvm vgch…on event cs_hpe-dl360pgen8-08 . [ 59.360276] input: PC Speaker as /devices/platform/pcspkr/input/input4 [ 59.853588] XFS (sda1): Ending clean mount [ 59.865549] mgag200 0000:01:00.1: vgaarb: deactivate vga console [ 59.878859] Console: switching to colour dummy device 80x25 [ OK ] Mounted /boot . 
[ 60.314129] [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0 [ 60.326981] RAPL PMU: API unit is 2^-32 Joules, 2 fixed counters, 163840 ms ovfl timer [ 60.327419] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules [ 60.328184] RAPL PMU: hw unit of domain package 2^-16 Joules [ 60.788334] fbcon: mgag200drmfb (fb0) is primary device [ 60.957809] Console: switching to colour frame buffer device 128x48 [ 61.489897] mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device [ * * ] A start job is running for /dev/map…360pgen8--08-home (11s / no limit) [ 62.142342] iTCO_vendor_support: vendor-support=0 [ 62.176228] iTCO_wdt iTCO_wdt.1.auto: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS [ 62.499008] EDAC MC0: Giving out device to module sb_edac controller Ivy Bridge SrcID#0_Ha#0: DEV 0000:1f:0e.0 (INTERRUPT) [ 62.505194] EDAC MC1: Giving out device to module sb_edac controller Ivy Bridge SrcID#1_Ha#0: DEV 0000:3f:0e.0 (INTERRUPT) [ 62.505211] EDAC sbridge: Ver: 1.1.2 M [ OK ] Found device /dev/mapper/cs_hpe--dl360pgen8--08-home . Mounting /home ... [ 62.696911] XFS (dm-2): Mounting V5 Filesystem [ 63.267081] intel_rapl_common: Found RAPL domain package [ 63.267428] intel_rapl_common: Found RAPL domain core [ 63.271459] intel_rapl_common: Found RAPL domain package [ 63.271944] intel_rapl_common: Found RAPL domain core [ 63.377977] XFS (dm-2): Ending clean mount [ OK ] Mounted /home . [ OK ] Reached target Local File Systems . Starting Automatic Boot Loader Update ... Starting Create Volatile Files and Directories ... [ OK ] Finished Automatic Boot Loader Update . [ OK ] Finished Create Volatile Files and Directories . Mounting RPC Pipe File System ... Starting Security Auditing Service ... Starting RPC Bind ... [ OK ] Started RPC Bind . [ 65.649440] RPC: Registered named UNIX socket transport module. [ 65.650416] RPC: Registered udp transport module. [ 65.651139] RPC: Registered tcp transport module. [ 65.651815] RPC: Registered tcp NFSv4.1 backchannel transport module. [ OK ] Mounted RPC Pipe File System . [ OK ] Reached target rpc_pipefs.target . [ 65.884362] mktemp (1033) used greatest stack depth: 22408 bytes left [ OK ] Started Security Auditing Service . Starting Record System Boot/Shutdown in UTMP ... [ OK ] Finished Record System Boot/Shutdown in UTMP . [ OK ] Reached target System Initialization . [ OK ] Started CUPS Scheduler . [ OK ] Started dnf makecache --timer . [ OK ] Started Daily Cleanup of Temporary Directories . [ OK ] Reached target Path Units . [ OK ] Listening on Avahi mDNS/DNS-SD Stack Activation Socket . [ OK ] Listening on CUPS Scheduler . [ OK ] Listening on D-Bus System Message Bus Socket . [ OK ] Listening on SSSD Kerberos…ache Manager responder socket . [ OK ] Reached target Socket Units . [ OK ] Reached target Basic System . Starting Network Manager ... Starting Avahi mDNS/DNS-SD Stack ... Starting NTP client/server ... Starting Restore /run/initramfs on shutdown ... [ OK ] Started irqbalance daemon . Starting Load CPU microcode update ... [ OK ] Started Hardware RNG Entropy Gatherer Daemon . Starting System Logging Service ... [ OK ] Reached target sshd-keygen.target . [ OK ] Reached target User and Group Name Lookups . Starting User Login Management ... [ OK ] Finished Restore /run/initramfs on shutdown . Starting D-Bus System Message Bus ... [ OK ] Started NTP client/server . [ OK ] Started System Logging Service . Starting Wait for chrony to synchronize system clock ... 
[ 68.561097] reload_microcod (1065) used greatest stack depth: 22136 bytes left [ OK ] Finished Load CPU microcode update . [ OK ] Started D-Bus System Message Bus . [ OK ] Started User Login Management . [ OK ] Started Avahi mDNS/DNS-SD Stack . [ OK ] Started Network Manager . [ OK ] Created slice User Slice of UID 0 . [ OK ] Reached target Network . Starting Network Manager Wait Online ... Starting CUPS Scheduler ... Starting GSSAPI Proxy Daemon ... Starting OpenSSH server daemon ... Starting User Runtime Directory /run/user/0 ... Starting Hostname Service ... [ OK ] Started OpenSSH server daemon . [ OK ] Finished User Runtime Directory /run/user/0 . Starting User Manager for UID 0 ... [ OK ] Started CUPS Scheduler . [ OK ] Started GSSAPI Proxy Daemon . [ OK ] Reached target NFS client services . [ OK ] Reached target Preparation for Remote File Systems . [ OK ] Started Hostname Service . [ OK ] Listening on Load/Save RF …itch Status /dev/rfkill Watch . Starting Network Manager Script Dispatcher Service ... [ OK ] Started Network Manager Script Dispatcher Service . [ OK ] Started User Manager for UID 0 . [ * * * ] (1 of 2) A start job is running for…nize system clock (23s / no limit) [ 74.455272] tg3 0000:03:00.0 eno1: Link is up at 1000 Mbps, full duplex [ 74.455932] tg3 0000:03:00.0 eno1: Flow control is off for TX and off for RX [ 74.456790] tg3 0000:03:00.0 eno1: EEE is disabled [ 74.458135] IPv6: ADDRCONF(NETDEV_CHANGE): eno1: link becomes ready M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (23s / no limit) M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (24s / no limit) M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (24s / no limit) M [ * * ] (1 of 2) A start job is running for…nize system clock (25s / no limit) M [ * ] (1 of 2) A start job is running for…nize system clock (25s / no limit) M [ * * ] (1 of 2) A start job is running for…nize system clock (26s / no limit) M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (26s / no limit) M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (27s / no limit) M [ * * * ] (2 of 2) A start job is running for…nager Wait Online (27s / no limit) M [ * * * ] (1 of 2) A start job is running for…nize system clock (28s / no limit) M [ OK ] Finished Network Manager Wait Online . [ OK ] Reached target Network is Online . Mounting /var/crash ... [ OK ] Started Anaconda Monitorin…ost-boot notification program . Starting Notify NFS peers of a restart ... [ OK ] Started Notify NFS peers of a restart . [ 79.897504] FS-Cache: Loaded [ 80.459992] Key type dns_resolver registered [ 81.237580] NFS: Registering the id_resolver key type [ 81.238035] Key type id_resolver registered [ 81.238309] Key type id_legacy registered [ * * ] (1 of 2) A start job is running for…nize system clock (30s / no limit) M [ * ] (1 of 2) A start job is running for…nize system clock (31s / no limit) M [ OK ] Mounted /var/crash . [ OK ] Reached target Remote File Systems . Starting Crash recovery kernel arming ... Starting Permit User Sessions ... [ OK ] Finished Permit User Sessions . [ OK ] Started Deferred execution scheduler . [ OK ] Started Getty on tty1 . [ OK ] Started Serial Getty on ttyS1 . [ OK ] Reached target Login Prompts . [ OK ] Finished Wait for chrony to synchronize system clock . [ OK ] Reached target System Time Synchronized . [ OK ] Started Daily rotation of log files . [ OK ] Reached target Timer Units . [ OK ] Started Command Scheduler . Starting The restraint harness. ... 
[ OK ] Started The restraint harness. . [ OK ] Reached target Multi-User System . Starting Record Runlevel Change in UTMP ... [ OK ] Finished Record Runlevel Change in UTMP . [ 85.276895] restraintd[1302]: * Fetching recipe: http://lab-02.hosts.prod.psi.bos.redhat.com:8000//recipes/13330040/ [ 85.421838] restraintd[1302]: * Parsing recipe [ 85.463439] restraintd[1302]: * Running recipe [ 85.467912] restraintd[1302]: ** Continuing task: 155735207 [/mnt/tests/github.com/beaker-project/beaker-core-tasks/archive/master.tar.gz/reservesys] [ 85.545653] restraintd[1302]: ** Preparing metadata [ 86.233779] restraintd[1302]: ** Refreshing peer role hostnames: Retries 0 [ 86.714315] restraintd[1302]: ** Updating env vars [ 86.719110] restraintd[1302]: *** Current Time: Fri Feb 03 04:43:12 2023 Localwatchdog at: * Disabled! * [ 86.885927] restraintd[1302]: ** Running task: 155735207 [/distribution/reservesys] CentOS Stream 9 Kernel 5.14.0-256.2009_766119311.el9.x86_64+debug on an x86_64 hpe-dl360pgen8-08 login: [ 91.718639] PKCS7: Message signed outside of X.509 validity window [ 92.570252] Running test [R:13330040 T:155735207 - /distribution/reservesys - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 133.517043] Running test [R:13330040 T:9 - Reboot test - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [ 178.213658] Running test [R:13330040 T:10 - /distribution/command - Kernel: 5.14.0-256.2009_766119311.el9.x86_64+debug] [-- MARK -- Fri Feb 3 09:45:00 2023]