Notes about Milestone Husky IVO 350T Rev 3 reuse

0. Introduction

This contains notes about re-using a second-hand Milestone Husky IVO 350T Rev 3 PC.

This is a re-badged Dell PC, from a video technology software company.

The Husky IVO Dell models and driver links page links to the Dell OptiPlex XE4 (rev. 3) drivers at https://www.dell.com/support/home/da-dk/product-support/product/optiplex-xe4/drivers for the Husky IVO 350T.

The Milestone Husky IVO™ 350T Rev. 3 Getting started and maintenance guide links to the Dell Installation and service manual for the OptiPlex XE4 Tower.

1. Dell Command Configure for BIOS options

Investigating the ability to get/set BIOS settings without using the BIOS setup GUI.

References:

1.1. Use under Windows 11

Downloaded Dell-Command-Configure-Application_5RNW8_WIN64_5.2.1.16_A00.EXE and installed into Windows 11 Pro 23H2.
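
As well as dumping everything to an ini file, cctk can read and write individual options from the same Command Configure directory. A minimal sketch, assuming the option names used later in these notes (the --ValSetupPwd and -i import options are taken from the Dell Command | Configure documentation):

REM Read the current value of a single option
cctk.exe --InternalSpeaker

REM Set a single option (append --ValSetupPwd=<password> if a BIOS setup password is configured)
cctk.exe --InternalSpeaker=Disabled

REM Re-apply all settings from a previously saved ini file
cctk.exe -i c:\Users\mr_halfword\bios_settings\0_initial_settings.ini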

1.1.1. Save initial state

Used the command line to save the settings to an ini file:

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe -o c:\Users\mr_halfword\bios_settings\0_initial_settings.ini

This is for BIOS version 1.17.0.

1.1.2. After BIOS update

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe -o c:\Users\mr_halfword\bios_settings\1_after_bios_update.ini

Changes since the previous settings:

  • The following changed from:
    BiosVer=1.17.0
    
    To:
    BiosVer=1.36.0
    
  • The following is new:
    ;ABIProvState=Disabled
    ABIState=Disabled
    
  • The following is new:
    InternalDmaCompatibility=Disabled
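
The before/after comparisons in these notes were done by eye; on Windows the built-in fc command is a quick way to produce such a diff, e.g. for the two files saved above:

fc /N c:\Users\mr_halfword\bios_settings\0_initial_settings.ini c:\Users\mr_halfword\bios_settings\1_after_bios_update.ini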
    

1.1.3. Change audio settings

Disable the internal speaker and microphone:

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe --InternalSpeaker=disabled
InternalSpeaker=Disabled

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe --Microphone=disabled
Microphone=Disabled

Power cycled and then saved the settings:

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe -o c:\Users\mr_halfword\bios_settings\2_after_audio_setting_changes.ini

Comparing against the previous settings showed, as expected, that InternalSpeaker and Microphone had changed from Enabled to Disabled.

1.1.4. Settings got reset

While trying to boot an AlmaLinux 10 live image from an SD card in a USB reader, the PC got stuck and didn't boot. After holding down the power button and removing the power cable, it started booting again. Back in Windows, noticed the internal speaker was enabled again. Saved the settings:

C:\Program Files (x86)\Dell\Command Configure\X86_64>cctk.exe -o c:\Users\mr_halfword\bios_settings\3_settings_got_reset.ini

For some reason, a number of settings have been reset / changed:

  • AcPwrRcvry : Last -> Off
  • BlockSleep : Enabled -> Disabled
  • CStatesCtrl : Disabled -> Enabled
  • ChasIntrusion : Disabled -> SilentEnable
  • DeepSleepCtrl : S4AndS5 -> Disabled
  • EmbNic1 : EnabledPxe -> Enabled
  • InternalSpeaker : Enabled -> Disabled
  • Microphone : Enabled -> Disabled

Entered the BIOS setup. The BIOS Event Log has:

01/01/2008 00:01:01 Invalid configuration information - please run SETUP program.
01/01/2008 00:01:43 WARNING: Your system experienced an issue. The system recovered from that state.
Due to the recovery, the configuration and BIOS Setting of your platfrom may have changed.
Go to BIOS Setup to verify your configuration settings.

2. Install DELL updates

Entering the Dell Service Tag string from the saved BIOS settings output on the Dell support site gave a list of available downloads.
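
For reference, the Service Tag can also be read directly rather than searched for in the saved ini file; a sketch, assuming the --svctag option described in the Dell Command | Configure documentation:

cctk.exe --svctag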

2.1. BIOS

Downloaded OptiPlex_XE_1.36.0.exe.

Ran the installer, which displayed a table with the following versions:

Payload Name                                       Current Version  New Version
System BIOS with BiosGuard                         1.17.0           1.36.0
Embedded Controller                                1.30.0           1.39.0
Backup Embedded Controller                         1.1.18           1.1.18
Main System Cypress Port Controller 0              1.8.64.90        1.8.64.90
Gigabit Ethernet                                   2.3              2.3
Intel Management Engine Corporate Firmware Update  16.1.27.2176     16.1.38.2676
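
The updater was run interactively. Dell BIOS update packages also support command line switches for unattended use; the switches below are assumptions, so check the package's own help output first:

REM List the switches supported by this particular package
OptiPlex_XE_1.36.0.exe /?

REM Assumed: silent update writing a log file (verify against the /? output)
OptiPlex_XE_1.36.0.exe /s /l=c:\Users\mr_halfword\bios_update.log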

3. PCIe slot settings

Ethernet Controller I225-LM PCIe x1 2.5 GbE NIC Card in the x1 PCIe slot:

domain=0000 bus=02 dev=00 func=00 rev=03
  vendor_id=8086 (Intel Corporation) device_id=15f2 (Ethernet Controller I225-LM) subvendor_id=8086 subdevice_id=0001
  iommu_group=13
  driver=igc
  control: I/O- Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=70e00000 size=100000 is_IO=0 is_prefetchable=0 is_64=0
  bar[3] base_addr=70f00000 size=4000 is_IO=0 is_prefetchable=0 is_64=0
  Capabilities: [40] Power Management
  Capabilities: [50] Message Signaled Interrupts
  Capabilities: [70] MSI-X
  Capabilities: [a0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 5 GT/s Max width x1
    Negotiated link status: Current speed 5 GT/s Width x1
    Link capabilities2: Not implemented
    DevCap: MaxPayload 512 bytes PhantFunc 0 Latency L0s Maximum of 512 ns L1 Maximum of 64 μs
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr+ TransPend-
    LnkCap: Port # 0 ASPM L1
            L0s Exit Latency 1 μs to less than 2 μs
            L1 Exit Latency 2 μs to less than 4 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM L1 Entry Enabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  domain=0000 bus=00 dev=1c func=00 rev=11
    vendor_id=8086 (Intel Corporation) device_id=7ab8 (Alder Lake-S PCH PCI Express Root Port #1)
    iommu_group=9
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR- DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 8 GT/s Max width x1
      Negotiated link status: Current speed 5 GT/s Width x1
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 1 ASPM L1
              L0s Exit Latency 512 ns to less than 1 μs
              L1 Exit Latency 32 μs to 64 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM L1 Entry Enabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #1 PowerLimit 10.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

Intel X710 for 10GbE SFP+ in the x4 PCIe slot:

linux@haswell-alma:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 8086:1572
domain=0000 bus=03 dev=00 func=01 rev=02
  vendor_id=8086 (Intel Corporation) device_id=1572 (Ethernet Controller X710 for 10GbE SFP+) subvendor_id=15d9 subdevice_id=0000
  iommu_group=15
  driver=i40e
  control: I/O- Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001000000 size=800000 is_IO=0 is_prefetchable=1 is_64=1
  bar[3] base_addr=6002000000 size=8000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [50] Message Signaled Interrupts
  Capabilities: [70] MSI-X
  Capabilities: [a0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 8 GT/s Max width x4
    Negotiated link status: Current speed 8 GT/s Width x4
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
    DevCap: MaxPayload 2048 bytes PhantFunc 0 Latency L0s Maximum of 512 ns L1 Maximum of 64 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency 1 μs to less than 2 μs
            L1 Exit Latency 8 μs to less than 16 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  Capabilities: [e0] Vital Product Data
  domain=0000 bus=00 dev=1c func=04 rev=11
    vendor_id=8086 (Intel Corporation) device_id=7abc (Alder Lake-S PCH PCI Express Root Port #5)
    iommu_group=10
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR- DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 8 GT/s Max width x4
      Negotiated link status: Current speed 8 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 5 ASPM L1
              L0s Exit Latency 512 ns to less than 1 μs
              L1 Exit Latency 32 μs to 64 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #4 PowerLimit 25.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

domain=0000 bus=03 dev=00 func=00 rev=02
  vendor_id=8086 (Intel Corporation) device_id=1572 (Ethernet Controller X710 for 10GbE SFP+) subvendor_id=15d9 subdevice_id=093b
  iommu_group=14
  driver=i40e
  control: I/O- Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001800000 size=800000 is_IO=0 is_prefetchable=1 is_64=1
  bar[3] base_addr=6002008000 size=8000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [50] Message Signaled Interrupts
  Capabilities: [70] MSI-X
  Capabilities: [a0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 8 GT/s Max width x4
    Negotiated link status: Current speed 8 GT/s Width x4
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
    DevCap: MaxPayload 2048 bytes PhantFunc 0 Latency L0s Maximum of 512 ns L1 Maximum of 64 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency 1 μs to less than 2 μs
            L1 Exit Latency 8 μs to less than 16 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  Capabilities: [e0] Vital Product Data
  domain=0000 bus=00 dev=1c func=04 rev=11
    vendor_id=8086 (Intel Corporation) device_id=7abc (Alder Lake-S PCH PCI Express Root Port #5)
    iommu_group=10
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR- DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 8 GT/s Max width x4
      Negotiated link status: Current speed 8 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 5 ASPM L1
              L0s Exit Latency 512 ns to less than 1 μs
              L1 Exit Latency 32 μs to 64 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #4 PowerLimit 25.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

Intel X710 for 10GbE SFP+ in the x16 PCIe slot:

linux@haswell-alma:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 8086:1572
domain=0000 bus=01 dev=00 func=01 rev=02
  vendor_id=8086 (Intel Corporation) device_id=1572 (Ethernet Controller X710 for 10GbE SFP+) subvendor_id=15d9 subdevice_id=0000
  iommu_group=13
  driver=i40e
  control: I/O- Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001000000 size=800000 is_IO=0 is_prefetchable=1 is_64=1
  bar[3] base_addr=6002000000 size=8000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [50] Message Signaled Interrupts
  Capabilities: [70] MSI-X
  Capabilities: [a0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 8 GT/s Max width x8
    Negotiated link status: Current speed 8 GT/s Width x8
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
    DevCap: MaxPayload 2048 bytes PhantFunc 0 Latency L0s Maximum of 512 ns L1 Maximum of 64 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency 1 μs to less than 2 μs
            L1 Exit Latency 8 μs to less than 16 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  Capabilities: [e0] Vital Product Data
  domain=0000 bus=00 dev=01 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=460d (12th Gen Core Processor PCI Express x16 Controller #1)
    iommu_group=1
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x16
      Negotiated link status: Current speed 8 GT/s Width x8
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 2 ASPM L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #2 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

domain=0000 bus=01 dev=00 func=00 rev=02
  vendor_id=8086 (Intel Corporation) device_id=1572 (Ethernet Controller X710 for 10GbE SFP+) subvendor_id=15d9 subdevice_id=093b
  iommu_group=12
  driver=i40e
  control: I/O- Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001800000 size=800000 is_IO=0 is_prefetchable=1 is_64=1
  bar[3] base_addr=6002008000 size=8000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [50] Message Signaled Interrupts
  Capabilities: [70] MSI-X
  Capabilities: [a0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 8 GT/s Max width x8
    Negotiated link status: Current speed 8 GT/s Width x8
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s
    DevCap: MaxPayload 2048 bytes PhantFunc 0 Latency L0s Maximum of 512 ns L1 Maximum of 64 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop-
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency 1 μs to less than 2 μs
            L1 Exit Latency 8 μs to less than 16 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  Capabilities: [e0] Vital Product Data
  domain=0000 bus=00 dev=01 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=460d (12th Gen Core Processor PCI Express x16 Controller #1)
    iommu_group=1
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x16
      Negotiated link status: Current speed 8 GT/s Width x8
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 2 ASPM L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #2 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

M.2 2230 PCIe solid-state drive:

linux@haswell-alma:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 15b7:5015
domain=0000 bus=02 dev=00 func=00 rev=01
  vendor_id=15b7 (Sandisk Corp) device_id=5015 (PC SN740 NVMe SSD (DRAM-less)) subvendor_id=15b7 subdevice_id=5015
  iommu_group=14
  driver=nvme
  control: I/O+ Mem+ BusMaster+ ParErr- SERR- DisINTx+
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=70600000 size=4000 is_IO=0 is_prefetchable=0 is_64=1
  Capabilities: [80] Power Management
  Capabilities: [90] Message Signaled Interrupts
  Capabilities: [b0] MSI-X
  Capabilities: [c0] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 16 GT/s Max width x4
    Negotiated link status: Current speed 16 GT/s Width x4
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
    DevCap: MaxPayload 512 bytes PhantFunc 0 Latency L0s Maximum of 1 μs L1 No limit
            ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
    DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM L1
            L0s Exit Latency More than 4 μs
            L1 Exit Latency 4 μs to less than 8 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM L1 Entry Enabled RCB 64 bytes Disabled- CommClk+
            ExtSynch+ ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  domain=0000 bus=00 dev=06 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=464d (12th Gen Core Processor PCI Express x4 Controller #0)
    iommu_group=4
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x4
      Negotiated link status: Current speed 16 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 5 ASPM L0s and L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM L1 Entry Enabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #5 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [90] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

The summary is:

  • The x1 PCIe slot 1 "Alder Lake-S PCH PCI Express Root Port #1" is gen3.
  • The x4 PCIe slot 4 "Alder Lake-S PCH PCI Express Root Port #5" is gen3.
  • The x16 PCIe slot 2 "12th Gen Core Processor PCI Express x16 Controller #1" is gen4.
  • The M.2 NVME "12th Gen Core Processor PCI Express x4 Controller #0" is gen4.

Therefore, the PCIe connections to the processor are gen4, whereas those on the platform controller hub (PCH) are gen3.
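
The negotiated and maximum link speeds can also be cross-checked from sysfs without the dump_pci_info_pciutils program. A quick sketch using standard sysfs attributes (device addresses change when cards are moved between slots, so it loops over everything rather than naming specific devices):

# Report negotiated and maximum PCIe link speed/width for every device that exposes them
for dev in /sys/bus/pci/devices/*; do
    if [ -r "${dev}/current_link_speed" ]; then
        echo "$(basename "${dev}"): $(cat "${dev}/current_link_speed" 2>/dev/null) x$(cat "${dev}/current_link_width" 2>/dev/null) (max $(cat "${dev}/max_link_speed" 2>/dev/null))"
    fi
done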

4. PCIe port power management issues

A VD100 Dev Board & Kit (AMD Versal AI Edge XCVE2302) was fitted in the x16 slot 2, with the aim of testing the gen4 x4 PCIe interface of the PC.

4.1. Loading vfio-pci seems to remove power

Powered up the PC.

Loaded the VD100_dma_stream_crc64 program image over JTAG. At this point the Hardware Manager PCIe debugger shows for the GTYP:

  • All PCIe lanes are at 0.0 Gbps.
  • The PLLs are Not Locked.

Rebooted the PC to cause the BIOS to re-enumerate the PCIe bus.

Booted into openSUSE Leap 15.5 from a USB stick, manually adding intel_iommu=on to the kernel command line.
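
A quick check that the option took effect and the IOMMU is actually enabled (standard kernel interfaces, nothing specific to this PC):

cat /proc/cmdline                       # confirm intel_iommu=on is present
sudo dmesg | grep -i -e DMAR -e IOMMU   # look for "DMAR: IOMMU enabled"
ls /sys/kernel/iommu_groups             # non-empty once IOMMU groups have been created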

The PCIe endpoint power is reported as on and active:

linux@DESKTOP-OQMPARM:~> cat /sys/bus/pci/devices/0000:01:00.0/power/control
on
linux@DESKTOP-OQMPARM:~> cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
active
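
The root port above the slot has its own runtime power management state, which can be checked in the same way (00:01.0 is the x16 controller from the dumps in the previous section):

cat /sys/bus/pci/devices/0000:00:01.0/power/control
cat /sys/bus/pci/devices/0000:00:01.0/power/runtime_status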

At this point the Hardware Manager PCIe debugger shows:

  • All GTYP lanes are at 16 Gbps
  • The GTYP PLLs are locked
  • The LINK_INFO is Gen4x4
  • The current state is L0

The FPGA PCIe interface has enumerated with the expected gen4 x4 link:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 
domain=0000 bus=01 dev=00 func=00 rev=00
  vendor_id=10ee (Xilinx Corporation) device_id=b044 (Device b044) subvendor_id=0002 subdevice_id=0021
  iommu_group=12
  control: I/O- Mem- BusMaster- ParErr- SERR- DisINTx-
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001000000 size=10000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [48] Message Signaled Interrupts
  Capabilities: [70] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 16 GT/s Max width x4
    Negotiated link status: Current speed 16 GT/s Width x4
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
    DevCap: MaxPayload 1024 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 75.000W
    DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency More than 4 μs
            L1 Exit Latency More than 64 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  domain=0000 bus=00 dev=01 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=460d (12th Gen Core Processor PCI Express x16 Controller #1)
    iommu_group=1
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x16
      Negotiated link status: Current speed 16 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 2 ASPM L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #2 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

Bind the vfio-pci driver:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> ~/fpga_sio/software_tests/eclipse_project/bind_xilinx_devices_to_vfio.sh 
IOMMU devices present: dmar0  dmar1
Loading vfio-pci module
Bound vfio-pci driver to 0000:01:00.0 10ee:b044 [0002:0021]
Waiting for /dev/vfio/12 to be created
Giving user permission to IOMMU group 12 for 0000:01:00.0 10ee:b044 [0002:0021]

At this point, with the driver bound, the Vivado Hardware Manager over the JTAG connection, on different runs, either:

  • Reports the FPGA as not programmed.
  • Can't find the FPGA over JTAG.

dump_pci_info_pciutils can find the card, but config reads return all-ones:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 
domain=0000 bus=01 dev=00 func=00 rev=ff
  vendor_id=10ee (Xilinx Corporation) device_id=b044 (Device b044)
  iommu_group=12
  driver=vfio-pci
  control: I/O+ Mem+ BusMaster+ ParErr+ SERR+ DisINTx+
  status: INTx+ <ParErr+ >TAbort+ <TAbort+ <MAbort+ >SERR+ DetParErr+
  bar[0] base_addr=6001000000 size=10000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [ff] Unknown encoding 0xff
  domain=0000 bus=00 dev=01 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=460d (12th Gen Core Processor PCI Express x16 Controller #1)
    iommu_group=1
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR+ DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x16
      Negotiated link status: Current speed 2.5 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 2 ASPM L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #2 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

The endpoint is reported as suspended:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
suspended
linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/control
auto

dmesg reported the following PCIe errors after loading the VFIO driver:

[  135.028850] VFIO - User Level meta-driver version: 0.3
[  135.160380] pcieport 0000:00:01.0: AER: Uncorrected (Non-Fatal) error message received from 0000:00:01.0
[  135.160397] pcieport 0000:00:01.0: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Receiver ID)
[  135.160409] pcieport 0000:00:01.0:   device [8086:460d] error status/mask=00200000/00010000
[  135.160416] pcieport 0000:00:01.0:    [21] ACSViol                (First)
[  137.859392] pcieport 0000:00:01.0: Data Link Layer Link Active not set in 1000 msec
[  137.859427] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  137.921047] pcieport 0000:00:01.0: AER: device recovery successful
[  173.527376] pcieport 0000:00:01.0: Data Link Layer Link Active not set in 1000 msec
[  173.527411] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.531131] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.533377] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.534811] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.536329] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.537793] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.539276] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.541284] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)
[  173.543639] vfio-pci 0000:01:00.0: can't change power state from D3cold to D0 (config space inaccessible)

The above was repeated a number of times, including trying different FPGA designs with a PCIe interface.
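
One thing that could be tried before a full reboot is to disable runtime PM on the endpoint and force a re-scan. This is a sketch only and wasn't tried here; given that config space is reported as inaccessible the first step may simply fail, leaving the remove / rescan as the part that might help:

# Try to keep the endpoint out of runtime suspend (may fail while it is stuck in D3cold)
echo on | sudo tee /sys/bus/pci/devices/0000:01:00.0/power/control
# Remove the stuck endpoint and ask the kernel to re-enumerate the bus
echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/remove
echo 1 | sudo tee /sys/bus/pci/rescan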

4.2. Adding pcie_port_pm=off helped

Repeated the test but with pcie_port_pm=off as well as intel_iommu=on added to the command line.
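
The parameters were added manually at the boot menu. On an installed openSUSE system they could be made persistent via GRUB; a sketch, assuming the usual openSUSE paths:

# Append the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on pcie_port_pm=off"
# Then regenerate the GRUB configuration:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg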

After boot the FPGA had enumerated as gen4 x4:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> dump_info/dump_pci_info_pciutils 
domain=0000 bus=01 dev=00 func=00 rev=00
  vendor_id=10ee (Xilinx Corporation) device_id=b044 (Device b044) subvendor_id=0002 subdevice_id=0021
  iommu_group=12
  control: I/O- Mem- BusMaster- ParErr- SERR- DisINTx-
  status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
  bar[0] base_addr=6001000000 size=10000 is_IO=0 is_prefetchable=1 is_64=1
  Capabilities: [40] Power Management
  Capabilities: [48] Message Signaled Interrupts
  Capabilities: [70] PCI Express v2 Express Endpoint, MSI 0
    Link capabilities: Max speed 16 GT/s Max width x4
    Negotiated link status: Current speed 16 GT/s Width x4
    Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
    DevCap: MaxPayload 1024 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 75.000W
    DevCtl: CorrErr- NonFatalErr- FatalErr- UnsupReq-
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
    DevSta: CorrErr+ NonFatalErr- FatalErr- UnsupReq+ AuxPwr- TransPend-
    LnkCap: Port # 0 ASPM not supported
            L0s Exit Latency More than 4 μs
            L1 Exit Latency More than 64 μs
            ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
    LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
            ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
    LnkSta: TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
  domain=0000 bus=00 dev=01 func=00 rev=05
    vendor_id=8086 (Intel Corporation) device_id=460d (12th Gen Core Processor PCI Express x16 Controller #1)
    iommu_group=1
    driver=pcieport
    control: I/O+ Mem+ BusMaster+ ParErr- SERR+ DisINTx+
    status: INTx- <ParErr- >TAbort- <TAbort- <MAbort- >SERR- DetParErr-
    Capabilities: [40] PCI Express v2 Root Port, MSI 0
      Link capabilities: Max speed 16 GT/s Max width x16
      Negotiated link status: Current speed 16 GT/s Width x4
      Link capabilities2: Supported link speeds 2.5 GT/s 5.0 GT/s 8.0 GT/s 16.0 GT/s
      DevCap: MaxPayload 256 bytes PhantFunc 0 Latency L0s Maximum of 64 ns L1 Maximum of 1 μs
              ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- SlotPowerLimit 0.000W
      DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
              RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
      DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr+ TransPend-
      LnkCap: Port # 2 ASPM L1
              L0s Exit Latency 2 μs to 4 μs
              L1 Exit Latency 8 μs to less than 16 μs
              ClockPM- Surprise- LLActRep+ BwNot+ ASPMOptComp+
      LnkCtl: ASPM Disabled RCB 64 bytes Disabled- CommClk+
              ExtSynch- ClockPM- AutWidDis- BWInt- ABWMgmt-
      LnkSta: TrErr- Train- SlotClk+ DLActive+ BWMgmt+ ABWMgmt-
      SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
              Slot #2 PowerLimit 75.000W Interlock- NoCompl+
    Capabilities: [80] Message Signaled Interrupts
    Capabilities: [98] Bridge subsystem vendor/device ID
    Capabilities: [a0] Power Management

The device power was on and active:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/control
on
linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
active

The Hardware Manager PCIe debugger shows the FPGA in the L0 state.

Bind the vfio-pci driver:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> ~/fpga_sio/software_tests/eclipse_project/bind_xilinx_devices_to_vfio.sh 
IOMMU devices present: dmar0  dmar1
Loading vfio-pci module
Bound vfio-pci driver to 0000:01:00.0 10ee:b044 [0002:0021]
Waiting for /dev/vfio/12 to be created
Giving user permission to IOMMU group 12 for 0000:01:00.0 10ee:b044 [0002:0021]

The following appears in dmesg:

[  423.996381] VFIO - User Level meta-driver version: 0.3

The Hardware Manager PCIe debugger shows the FPGA in the L1 state. The device power was auto and suspended:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/control
auto
linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
suspended

display_identified_pcie_fpga_designs worked:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> identify_pcie_fpga_design/display_identified_pcie_fpga_designs 
Opening device 0000:01:00.0 (10ee:b044) with IOMMU group 12
Enabled bus master for 0000:01:00.0

Design VD100_dma_stream_crc64:
  PCI device 0000:01:00.0 rev 00 IOMMU group 12
  DMA bridge bar 0 AXI Stream
  Channel ID  addr_alignment  len_granularity  num_address_bits
       H2C 0               1                1                64
       H2C 1               1                1                64
       H2C 2               1                1                64
       H2C 3               1                1                64
       C2H 0               1                1                64
       C2H 1               1                1                64
       C2H 2               1                1                64
       C2H 3               1                1                64

test_dma_descriptor_credits worked:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> xilinx_dma_bridge_for_pcie/test_dma_descriptor_credits
Opening device 0000:01:00.0 (10ee:b044) with IOMMU group 12
Enabled bus master for 0000:01:00.0
Testing DMA bridge bar 0 AXI Stream
Successfully sent 3069 messages from Ch0->0 with a total of 20390436 64-bit words in 41349 descriptors
Successfully sent 3069 messages from Ch1->1 with a total of 20390436 64-bit words in 41349 descriptors
Successfully sent 3069 messages from Ch2->2 with a total of 20390436 64-bit words in 41349 descriptors
Successfully sent 3069 messages from Ch3->3 with a total of 20390436 64-bit words in 41349 descriptors
Test: PASS

crc64_stream_latency worked:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> xilinx_dma_bridge_for_pcie/crc64_stream_latency 
Opening device 0000:01:00.0 (10ee:b044) with IOMMU group 12
Enabled bus master for 0000:01:00.0
Testing design VD100_dma_stream_crc64 using C2H 0 -> H2C 0
     32 len bytes latencies (us):   2.342 (50')   2.425 (75')   3.181 (99')  20.998 (99.999')
     64 len bytes latencies (us):   2.330 (50')   2.397 (75')   2.669 (99')   5.366 (99.999')
    128 len bytes latencies (us):   2.387 (50')   2.416 (75')   2.689 (99')  14.674 (99.999')
    256 len bytes latencies (us):   2.478 (50')   2.618 (75')   2.830 (99') 659.892 (99.999')
    512 len bytes latencies (us):   2.517 (50')   2.538 (75')   2.849 (99')   4.057 (99.999')
   1024 len bytes latencies (us):   2.635 (50')   2.769 (75')   2.980 (99') 784.120 (99.999')
   2048 len bytes latencies (us):   2.790 (50')   2.865 (75')   2.998 (99')   5.121 (99.999')
   4096 len bytes latencies (us):   3.152 (50')   3.200 (75')   3.405 (99')   4.176 (99.999')
   8192 len bytes latencies (us):   3.797 (50')   3.841 (75')   4.098 (99')   4.684 (99.999')
  16384 len bytes latencies (us):   4.997 (50')   5.049 (75')   5.236 (99')  18.052 (99.999')
  32768 len bytes latencies (us):   7.496 (50')   7.547 (75')   7.699 (99')  20.578 (99.999')
  65536 len bytes latencies (us):  12.426 (50')  12.480 (75')  12.736 (99')  25.559 (99.999')
 131072 len bytes latencies (us):  22.249 (50')  22.306 (75')  22.505 (99')  30.430 (99.999')
 262144 len bytes latencies (us):  41.913 (50')  41.978 (75')  42.200 (99')  55.501 (99.999')
 524288 len bytes latencies (us):  81.237 (50')  81.317 (75')  81.534 (99')  93.126 (99.999')
1048576 len bytes latencies (us): 160.957 (50') 161.087 (75') 161.395 (99') 1156.609 (99.999')
Testing design VD100_dma_stream_crc64 using C2H 1 -> H2C 1
     32 len bytes latencies (us):   2.343 (50')   2.419 (75')   2.613 (99')   5.743 (99.999')
     64 len bytes latencies (us):   2.328 (50')   2.356 (75')   2.595 (99')  15.279 (99.999')
    128 len bytes latencies (us):   2.403 (50')   2.446 (75')   2.741 (99')  13.745 (99.999')
    256 len bytes latencies (us):   2.472 (50')   2.496 (75')   2.803 (99')  15.187 (99.999')
    512 len bytes latencies (us):   2.506 (50')   2.522 (75')   2.748 (99')  15.906 (99.999')
   1024 len bytes latencies (us):   2.637 (50')   2.790 (75')   2.842 (99')  13.857 (99.999')
   2048 len bytes latencies (us):   2.811 (50')   2.885 (75')   2.994 (99')   6.843 (99.999')
   4096 len bytes latencies (us):   3.187 (50')   3.226 (75')   3.441 (99')   4.224 (99.999')
   8192 len bytes latencies (us):   3.788 (50')   3.833 (75')   4.078 (99')   6.404 (99.999')
  16384 len bytes latencies (us):   5.082 (50')   5.160 (75')   5.413 (99')  14.902 (99.999')
  32768 len bytes latencies (us):   7.504 (50')   7.558 (75')   7.715 (99')  20.311 (99.999')
  65536 len bytes latencies (us):  12.416 (50')  12.466 (75')  12.688 (99')  23.563 (99.999')
 131072 len bytes latencies (us):  22.244 (50')  22.302 (75')  22.507 (99')  32.533 (99.999')
 262144 len bytes latencies (us):  41.924 (50')  41.990 (75')  42.215 (99') 1060.270 (99.999')
 524288 len bytes latencies (us):  81.213 (50')  81.295 (75')  81.517 (99') 1096.347 (99.999')
1048576 len bytes latencies (us): 160.679 (50') 160.808 (75') 161.135 (99') 876.037 (99.999')
Testing design VD100_dma_stream_crc64 using C2H 2 -> H2C 2
     32 len bytes latencies (us):   2.333 (50')   2.349 (75')   2.594 (99')   3.777 (99.999')
     64 len bytes latencies (us):   2.335 (50')   2.498 (75')   2.661 (99')   5.833 (99.999')
    128 len bytes latencies (us):   2.373 (50')   2.430 (75')   2.756 (99')   4.777 (99.999')
    256 len bytes latencies (us):   2.405 (50')   2.437 (75')   2.640 (99')  14.355 (99.999')
    512 len bytes latencies (us):   2.496 (50')   2.638 (75')   2.879 (99')  15.459 (99.999')
   1024 len bytes latencies (us):   2.590 (50')   2.691 (75')   2.900 (99')  15.488 (99.999')
   2048 len bytes latencies (us):   2.781 (50')   2.818 (75')   2.992 (99')   5.280 (99.999')
   4096 len bytes latencies (us):   3.149 (50')   3.254 (75')   3.432 (99')   4.196 (99.999')
   8192 len bytes latencies (us):   3.800 (50')   3.845 (75')   4.042 (99')  16.140 (99.999')
  16384 len bytes latencies (us):   5.143 (50')   5.181 (75')   5.369 (99')   6.020 (99.999')
  32768 len bytes latencies (us):   7.524 (50')   7.569 (75')   7.751 (99')   8.631 (99.999')
  65536 len bytes latencies (us):  12.443 (50')  12.482 (75')  12.606 (99')  15.946 (99.999')
 131072 len bytes latencies (us):  22.272 (50')  22.322 (75')  22.459 (99')  25.942 (99.999')
 262144 len bytes latencies (us):  41.938 (50')  41.991 (75')  42.143 (99') 1060.416 (99.999')
 524288 len bytes latencies (us):  81.248 (50')  81.325 (75')  81.526 (99') 1094.427 (99.999')
1048576 len bytes latencies (us): 160.418 (50') 160.549 (75') 160.867 (99') 1169.540 (99.999')
Testing design VD100_dma_stream_crc64 using C2H 3 -> H2C 3
     32 len bytes latencies (us):   2.308 (50')   2.384 (75')   2.646 (99')   3.594 (99.999')
     64 len bytes latencies (us):   2.321 (50')   2.380 (75')   2.651 (99')   6.231 (99.999')
    128 len bytes latencies (us):   2.355 (50')   2.408 (75')   2.664 (99')  15.310 (99.999')
    256 len bytes latencies (us):   2.406 (50')   2.477 (75')   2.728 (99')  14.161 (99.999')
    512 len bytes latencies (us):   2.478 (50')   2.652 (75')   2.808 (99')   6.496 (99.999')
   1024 len bytes latencies (us):   2.618 (50')   2.750 (75')   2.870 (99')  15.508 (99.999')
   2048 len bytes latencies (us):   2.788 (50')   2.843 (75')   3.076 (99')   6.769 (99.999')
   4096 len bytes latencies (us):   3.135 (50')   3.292 (75')   3.455 (99')   4.147 (99.999')
   8192 len bytes latencies (us):   3.819 (50')   3.865 (75')   4.062 (99')   6.845 (99.999')
  16384 len bytes latencies (us):   5.130 (50')   5.173 (75')   5.444 (99')  17.612 (99.999')
  32768 len bytes latencies (us):   7.569 (50')   7.603 (75')   7.767 (99')  18.039 (99.999')
  65536 len bytes latencies (us):  12.494 (50')  12.533 (75')  12.647 (99')  24.288 (99.999')
 131072 len bytes latencies (us):  22.310 (50')  22.356 (75')  22.481 (99')  35.428 (99.999')
 262144 len bytes latencies (us):  41.971 (50')  42.026 (75')  42.167 (99')  54.563 (99.999')
 524288 len bytes latencies (us):  81.292 (50')  81.368 (75')  81.557 (99') 839.407 (99.999')
1048576 len bytes latencies (us): 160.464 (50') 160.594 (75') 160.918 (99') 1154.087 (99.999')

test_dma_bridge_independent_streams worked:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> xilinx_dma_bridge_for_pcie/test_dma_bridge_independent_streams
Opening device 0000:01:00.0 (10ee:b044) with IOMMU group 12
Enabled bus master for 0000:01:00.0
Using num_descriptors=64 bytes_per_buffer=0x1000000 data_mapping_size_words=0x10000000
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 H2C channel 0
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 H2C channel 1
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 H2C channel 2
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 H2C channel 3
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 C2H channel 0
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 C2H channel 1
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 C2H channel 2
Selecting test of VD100_dma_stream_crc64 design PCI device 0000:01:00.0 IOMMU group 12 C2H channel 3
Press Ctrl-C to stop test
  0000:01:00.0 H2C channel 0 1640.069 Mbytes/sec (16391340032 bytes in 977 transfers over 9.994300 secs)
  0000:01:00.0 H2C channel 1 1640.069 Mbytes/sec (16391340032 bytes in 977 transfers over 9.994299 secs)
  0000:01:00.0 H2C channel 2 1640.068 Mbytes/sec (16391340032 bytes in 977 transfers over 9.994303 secs)
  0000:01:00.0 H2C channel 3 1640.068 Mbytes/sec (16391340032 bytes in 977 transfers over 9.994303 secs)
  0000:01:00.0 C2H channel 0 0.001 Mbytes/sec (7816 bytes in 977 transfers over 9.994298 secs)
  0000:01:00.0 C2H channel 1 0.001 Mbytes/sec (7816 bytes in 977 transfers over 9.994298 secs)
  0000:01:00.0 C2H channel 2 0.001 Mbytes/sec (7816 bytes in 977 transfers over 9.994302 secs)
  0000:01:00.0 C2H channel 3 0.001 Mbytes/sec (7816 bytes in 977 transfers over 9.994302 secs)

  0000:01:00.0 H2C channel 0 1639.967 Mbytes/sec (16408117248 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 H2C channel 1 1639.967 Mbytes/sec (16408117248 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 H2C channel 2 1639.967 Mbytes/sec (16408117248 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 H2C channel 3 1639.967 Mbytes/sec (16408117248 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 C2H channel 0 0.001 Mbytes/sec (7824 bytes in 978 transfers over 10.005150 secs)
  0000:01:00.0 C2H channel 1 0.001 Mbytes/sec (7824 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 C2H channel 2 0.001 Mbytes/sec (7824 bytes in 978 transfers over 10.005149 secs)
  0000:01:00.0 C2H channel 3 0.001 Mbytes/sec (7824 bytes in 978 transfers over 10.005149 secs)

^C  0000:01:00.0 H2C channel 0 1639.882 Mbytes/sec (2717908992 bytes in 162 transfers over 1.657380 secs)
  0000:01:00.0 H2C channel 1 1639.883 Mbytes/sec (2717908992 bytes in 162 transfers over 1.657380 secs)
  0000:01:00.0 H2C channel 2 1639.885 Mbytes/sec (2717908992 bytes in 162 transfers over 1.657378 secs)
  0000:01:00.0 H2C channel 3 1639.885 Mbytes/sec (2717908992 bytes in 162 transfers over 1.657377 secs)
  0000:01:00.0 C2H channel 0 0.001 Mbytes/sec (1296 bytes in 162 transfers over 1.657380 secs)
  0000:01:00.0 C2H channel 1 0.001 Mbytes/sec (1296 bytes in 162 transfers over 1.657380 secs)
  0000:01:00.0 C2H channel 2 0.001 Mbytes/sec (1296 bytes in 162 transfers over 1.657378 secs)
  0000:01:00.0 C2H channel 3 0.001 Mbytes/sec (1296 bytes in 162 transfers over 1.657378 secs)

Overall test statistics:
  0000:01:00.0 H2C channel 0 1640.008 Mbytes/sec (35517366272 bytes in 2117 transfers over 21.656829 secs)
  0000:01:00.0 H2C channel 1 1640.008 Mbytes/sec (35517366272 bytes in 2117 transfers over 21.656829 secs)
  0000:01:00.0 H2C channel 2 1640.008 Mbytes/sec (35517366272 bytes in 2117 transfers over 21.656830 secs)
  0000:01:00.0 H2C channel 3 1640.008 Mbytes/sec (35517366272 bytes in 2117 transfers over 21.656830 secs)
  0000:01:00.0 C2H channel 0 0.001 Mbytes/sec (16936 bytes in 2117 transfers over 21.656828 secs)
  0000:01:00.0 C2H channel 1 0.001 Mbytes/sec (16936 bytes in 2117 transfers over 21.656827 secs)
  0000:01:00.0 C2H channel 2 0.001 Mbytes/sec (16936 bytes in 2117 transfers over 21.656829 secs)
  0000:01:00.0 C2H channel 3 0.001 Mbytes/sec (16936 bytes in 2117 transfers over 21.656829 secs)


Overall PASS

After this the device power was auto and suspended:

linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/control
auto
linux@DESKTOP-OQMPARM:~/fpga_sio/software_tests/eclipse_project/bin/release> cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status
suspended

Watching the PCIe debugger shows:

  • The FPGA is in the L0 state when a test is actively using the card via VFIO.
  • The FPGA is in the L1 state when the test program has stopped.

By single stepping display_identified_pcie_fpga_designs in the debugger:

  • The VFIO_GROUP_GET_DEVICE_FD ioctl puts the FPGA into the L0 state and the power into the active state. It remains in this condition while paused in the debugger, i.e. with no active PCIe accesses.
  • The close (vfio_device->device_fd) call puts the FPGA into the L1 state and the power into the suspended state.
  • The pci_read_word calls in open_vfio_devices_matching_filter, before actually opening the VFIO device, cause one sequence of transitions of the FPGA through L1 -> L0 -> L1. When not single stepping, the state would perhaps remain in L0 across a number of consecutive accesses.

Need to investigate the interaction between VFIO and power state management.
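
As part of that, the runtime PM transitions can be watched from a second terminal while single stepping, without needing the Hardware Manager (standard sysfs attributes):

# Show the current runtime PM state every 0.5 seconds
watch -n 0.5 cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status

# Cumulative time (milliseconds) spent active / suspended since the device was probed
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_active_time
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_suspended_time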

5. SM Bus Controller

Under Windows 11, Device Manager shows that no driver is available for the SM Bus Controller PCI\VEN_8086&DEV_7AA3&SUBSYS_0ACB1028&REV_11.

Downloaded Intel-Chipset-Device-Software_K1T72_WIN64_10.1.19949.8616_A01.EXE from the Dell Intel Chipset Device Software page, which lists the OptiPlex XE4 as a compatible system. After installing that, Device Manager now recognises an "Intel(R) SMBus - 7AA3" under System Devices.

Under AlmaLinux 10.1:

$ sudo lspci -nn -vvv -d 8086:7aa3
[sudo] password for mr_halfword: 
00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake-S PCH SMBus Controller [8086:7aa3] (rev 11)
	Subsystem: Dell Device [1028:0acb]
	Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Interrupt: pin C routed to IRQ 18
	IOMMU group: 11
	Region 0: Memory at 6001238000 (64-bit, non-prefetchable) [size=256]
	Region 4: I/O ports at efa0 [size=32]
	Kernel driver in use: i801_smbus
	Kernel modules: i2c_i801
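
With the i801_smbus driver bound, the bus can be probed from user space with i2c-tools. A sketch only: the package name and the bus number are assumptions for this system, and probing an SMBus can upset some devices, so use with care:

sudo dnf install i2c-tools
sudo modprobe i2c-dev
i2cdetect -l        # list the I2C/SMBus adapters and note the number of the i801 entry
sudo i2cdetect -y <bus-number-of-the-i801-adapter>   # scan that bus for responding addresses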