Category: Homelab

  • 🌐 How to Update → Intel X710 10G SFP+ Firmware (v9.54) on the Minisforum MS-A2

    The Minisforum MS-A2 ships with the Intel X710 10G SFP+ network card (retail version), but getting the most out of it requires a proper firmware update. In this guide, I’ll walk you through the steps to update to firmware version 9.54 on ESXi 9.0. While there are a few ways to install the firmware, I wanted to do it purely from ESXi. Here’s what to expect:

    • Latest Intel Firmware for 700 Series NICs
    • Install the Intel nvmupdaten64e VIB
    • PuTTY / WinSCP
    • Do it all on ESXi 9.0
    • Have time for 2 reboots

    Running the following command gives you information about your NIC. We’re looking for the VID/DID/SVID/SSID values so we can verify we’re grabbing the correct firmware:

    esxcli hardware pci list
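
    If you want a more compact view, vmkchdev gives you the same IDs in VID:DID SVID:SSID form, one line per vmnic (an alternative I often reach for; the exact output format can vary slightly between ESXi builds):

    # One line per NIC: PCI address, VID:DID, SVID:SSID, and the vmnic it maps to
    vmkchdev -l | grep vmnic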

    Before you dive into a firmware update, make sure you check the Broadcom Compatibility Guide. It’s an easy step to skip, but downloading the wrong firmware can quickly turn into a very expensive mistake, sometimes even bricking your card. I always use it for cross-referencing in these scenarios.

    Intel tends to safeguard against that by assigning OEM cards their own unique SSIDs, but things can still go sideways if you get creative in ways you shouldn’t. For example, some folks try to tweak configuration files to force a device mapping that isn’t supported, swapping in retail firmware on a card that should absolutely stay on OEM firmware. That’s a recipe for trouble. (Boom. Bricked NIC.)

    The takeaway? Stick with the compatibility guide, follow the proper firmware path for your hardware, and save yourself from a potential headache (and a dead card).


    Here’s what we were able to gather:

    • Vendor ID (VID) = 8086
    • Device ID (DID) = 1572
    • SubVendor ID (SVID) = 8086
    • SubDevice ID (SSID) = 0000

    Head on over to the IO Devices section of the Broadcom Compatibility Guide:

    The results show Intel Corporation in the Brand Name field, which indicates that this is an Intel retail NIC. If it showed any other name, the card would belong to that respective OEM.

    You can now head over to the Intel website and grab the Non-Volatile Memory (NVM) Update Utility for Intel Ethernet Adapters 700 Series – VMware ESX firmware package (search by VMware or ID # 18638):

    You can change the version you want in the drop-down; I went with 9.54.

    The tarball contains a VIB that allows you to run nvmupdaten64e from inside an ESXi host. Upload it to a directory on your host (with WinSCP/SCP/Datastore Upload) and run the following command to extract the tar:

    tar -xzvf 700Series_NVMUpdatePackage_v9_54_ESX.tar.gz
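
    If you go the SCP route for the upload, it looks roughly like this (the hostname and datastore path are placeholders; adjust them to your environment):

    # From your workstation: copy the package to a folder on a datastore (hostname/path are examples)
    scp 700Series_NVMUpdatePackage_v9_54_ESX.tar.gz root@esxi01.example.com:/vmfs/volumes/datastore1/firmware/
    # Then, from an SSH session on the host, change into that folder and run the tar command above
    cd /vmfs/volumes/datastore1/firmware/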

    You’ll find the following file in the extracted archive – Intel-esx-nvmupdaten64e_1.43.8.0-800.20613240_24669197.zip. Run the following command to install the VIB.

    esxcli software vib install -d Intel-esx-nvmupdaten64e_1.43.8.0-800.20613240_24669197.zip

    This is a VMwareAccepted VIB, which indicates that it is tested and verified by us to work as expected. More on Acceptance Levels for VIBs here.
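
    If you want to confirm the VIB landed and check the acceptance level for yourself, a quick look with esxcli does the trick (the exact name string in the grep is an assumption; adjust as needed):

    # Confirm the nvmupdate VIB is installed and note its Acceptance Level column
    esxcli software vib list | grep -i nvmupdate
    # Show the host's overall acceptance level while you're at it
    esxcli software acceptance get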

    Put your host in maintenance mode and reboot it. Once you’re back up and running with the VIB installed, you’ll need to find where nvmupdaten64e is located.
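
    If you’d rather handle the maintenance mode and reboot from the shell, it only takes two commands (assuming anything important on the host has already been powered off or migrated):

    # Enter maintenance mode, then reboot the host
    esxcli system maintenanceMode set --enable true
    reboot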

    nvmupdaten64e will be located here:

    /opt/nvmupdaten64e/bin/

    You won’t be able to add any files to this directory, so you have to use the command-line option (-a) to provide a working directory that contains your firmware binaries and config files. You can issue the following command to get it going:

    ./nvmupdaten64e -a /the/directory/where/you/extracted/your/tar/ESXi_x64

    You’ll be able to indicate here whether you want to update (A) All firmware or select a specific adapter by number; in my case the Intel X710 was (01), so I went with (01) and the firmware installed successfully. Once it completes, you’ll have to reboot again. Roll the dice on whether you want to back up your NVM images or not.

    You’ll notice that my Intel Ethernet XXV710-DA2 25G NIC shows “Update not available”, which means it’s not supported by this Intel retail firmware. By using the BCG to cross-reference its DID/VID/SVID/SSID, I found out that it’s actually a Dell OEM version of the Intel Ethernet XXV710 25G. I’ll write up soon how I got through that firmware update without an iDRAC or Lifecycle Manager.

    Edit: This process can be used for any retail Intel 700 Series NIC inside an ESXi 8.0 or 9.0 host. Here’s a list of adapters that are compatible:

    • Intel® Ethernet Converged Network Adapter XL710-QDA1
    • Intel® Ethernet Network Adapter XXV710-DA2
    • Intel® Ethernet Converged Network Adapter X710-DA4
    • Intel® Ethernet Converged Network Adapter X710-T4
    • Intel® Ethernet Converged Network Adapter X710-DA2
    • Intel® Ethernet Network Adapter XXV710-DA1
    • Intel® Ethernet Network Adapter XXV710-DA1 for OCP
    • Intel® Ethernet Network Adapter XXV710-DA2 for OCP
    • Intel® Ethernet Controller X710-AT2
    • Intel® Ethernet Network Adapter X710-DA2 for OCP 3.0
    • Intel® Ethernet Network Adapter X710-T2L
    • Intel® Ethernet Network Adapter X710-T2L for OCP 3.0
    • Intel® Ethernet Network Adapter X710-T4L
    • Intel® Ethernet Controller X710-TM4
    • Intel® Ethernet Server Adapter XL710-QDA1 for Open Compute Project
    • Intel® Ethernet Server Adapter XL710-QDA2 for Open Compute Project
    • Intel® Ethernet Converged Network Adapter XL710-QDA2
    • Intel® Ethernet Controller XL710-BM1
    • Intel® Ethernet Controller XL710-BM2
    • Intel® Ethernet Controller X710-BM2
    • Intel® Ethernet Network Adapter X710-DA4 for OCP 3.0
    • Intel® Ethernet Network Adapter X710-T4L for OCP 3.0
    • Intel® Ethernet Server Adapter X710-DA2 for OCP
    • Intel® Ethernet Controller XXV710-AM1
    • Intel® Ethernet Controller XL710-AM2
    • Intel® Ethernet Controller X710-AM2
    • Intel® Ethernet Controller XXV710-AM2
    • Intel® Ethernet Controller XL710-AM1
  • 🌱 VCF Host Seeding Failed → VLCM Extracting Image Info Error

    If you can’t get past the vCenter deployment step of the VCF Installer and some of these conditions are true for you:

    • You’re running vSAN ESA with devices that aren’t on the vSAN HCL.
    • You’re using a custom VIB, like William Lam’s nested-vsan-esa-mock-hw-vib to bypass vSAN HCL:
      https://github.com/lamw/nested-vsan-esa-mock-hw-vib
    • You’re seeing Host Seeding Failed errors in the vcsa-cli-installer.log on your VCF Installer.
    • You’re deploying in a Home Lab or Test Environment.

    Ran into this little error while running through my VCF install:


    2025-08-17 22:34:49,819 - vCSACliInstallLogger - ERROR - Traceback (most recent call last):
      File "main.py", line 412, in <module>
      File "main.py", line 386, in main
      File "/build/mts/release/bora-24623374/src/bora/install/vcsa-installer/vcsaCliInstaller/tasking/workflow.py", line 777, in execute
      File "/build/mts/release/bora-24623374/src/bora/install/vcsa-installer/vcsaCliInstaller/tasking/workflow.py", line 765, in execute
      File "/build/mts/release/bora-24623374/src/bora/install/vcsa-installer/vcsaCliInstaller/tasking/taskflow.py", line 1007, in execute
      File "/build/mts/release/bora-24623374/src/bora/install/vcsa-installer/vcsaCliInstaller/tasking/taskflow.py", line 971, in _execute_single_threaded
    tasking.taskflow.TaskExecutionFailureException: Host seeding failed:(vmodl.MethodFault) {
       dynamicType = <unset>,
       dynamicProperty = (vmodl.DynamicProperty) [],
       msg = 'MethodFault.summary',
       faultCause = <unset>,
       faultMessage = (vmodl.LocalizableMessage) [
          (vmodl.LocalizableMessage) {
             dynamicType = <unset>,
             dynamicProperty = (vmodl.DynamicProperty) [],
             key = 'com.vmware.vcint.error_from_vlcm',
             arg = (vmodl.KeyAnyValue) [
                (vmodl.KeyAnyValue) {
                   dynamicType = <unset>,
                   dynamicProperty = (vmodl.DynamicProperty) [],
                   key = 'vlcm_error',
                   value = 'Error:\n   com.vmware.vapi.std.errors.error\nMessages:\n   com.vmware.vcIntegrity.lifecycle.EsxImage.SoftwareInfoExtractError<An error occurred while extracting image info on the host.>\n   com.vmware.vcIntegrity.lifecycle.ExtractDepotTask.HostExtractDepotFailed<Extraction of image from host hlab01.varchitected.com failed.>\n'
                }
             ],
             message = "An internal error occurred: 'Error:\n   com.vmware.vapi.std.errors.error\nMessages:\n   com.vmware.vcIntegrity.lifecycle.EsxImage.SoftwareInfoExtractError<An error occurred while extracting image info on the host.>\n   com.vmware.vcIntegrity.lifecycle.ExtractDepotTask.HostExtractDepotFailed<Extraction of image from host hlab01.varchitected.com failed.>\n'"
          }
       ]
    }
    

    Part of the fix was relatively easy: removing the VIB from William Lam’s GitHub that I had used to bypass the vSAN HCL:
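
    In case it helps, the removal looked roughly like this (the VIB name below is an assumption based on the fling’s naming; confirm the exact name with the list command first):

    # Find the exact VIB name, then remove it (name shown is an assumption)
    esxcli software vib list | grep -i mock
    esxcli software vib remove -n nested-vsan-esa-mock-hw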



    It appears you need this for the validation; I plan to import it into vLCM later to redistribute it to the rest of the cluster.


    The lifecycle.log also showed this was missing; apparently you also need to stage the VM Tools VIB on the host prior to adding it to VCF.

    2025-08-17T23:41:02Z In(14) lifecycle[2101592]: imagemanagerctl:1174 Calling with arguments: software --getsoftwareinfo
    2025-08-17T23:41:03Z In(14) lifecycle[2101592]: HostImage:269 Installers initiated are {'quickpatch': <esximage.Installer.QuickPatchInstaller.QuickPatchInstaller object at 0x812942b990>, 'live': <esximage.Installer.LiveImageInstaller.LiveImageInstaller object at 0x812fa281d0>, 'boot': <esximage.Installer.BootBankInstaller.BootBankInstaller object at 0x812f7f16d0>, 'locker': <esximage.Installer.LockerInstaller.LockerInstaller object at 0x812fc43290>}
    2025-08-17T23:41:03Z Db(15) lifecycle[2101592]: HostSeeding:864 BaseImage details : 9.0.0.0.24755229, ESXi, 9.0.0.0.24755229, 2025-06-17 00:00:00.000001
    2025-08-17T23:41:03Z Er(11) lifecycle[2101592]: HostSeeding:736 BaseImg Comps are removed: {'VMware-VM-Tools'}
    2025-08-17T23:41:03Z Er(11) lifecycle[2101592]: HostSeeding:919 Software info extract errors: The following Components have been removed on the host: VMware-VM-Tools
    2025-08-17T23:41:03Z Er(11) lifecycle[2101592]: imagemanagerctl:506 Get Software Info Failed: The following Components have been removed on the host: VMware-VM-Tools
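
    One way to stage the VMware Tools component back onto the host is straight from an ESXi offline depot zip (a sketch only; the depot path is a placeholder, and I’m assuming the depot you point at carries the tools-light version the image expects):

    # Re-install the tools-light VIB from an offline depot (path is a placeholder)
    esxcli software vib install -d /vmfs/volumes/datastore1/your-esxi-offline-depot.zip -n tools-light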

    After resolving that, I ended up with these errors in /var/log/lifecycle.log on the ESXi host, alluding to VIBs that aren’t in the Reserved VIB Cache Storage, which can be found in the /var/vmware/lifecycle/hostSeed/ folder:


    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: HostSeeding:1068 Creating directory /var/vmware/lifecycle/hostSeed
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: HostSeeding:1102 List of esxio VIB Ids:
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Misc:93 {'VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229', 'VMware_bootbank_loadesxio_9.0.0-0.24755229', 'VMware_bootbank_vmware-esx-esxcli-nvme-plugin-esxio_1.4.0.2-1vmw.900.0.24755229', 'VMW_bootbank_vmkusb-esxio_0.1-28vmw.900.0.24755229', 'VMW_bootbank_nmlxbf-gige-esxio_2.3-1vmw.900.0.24755229', 'VMW_bootbank_nmlx5-cc-esxio_4.24.0.7-16vmw.900.0.24755229', 'VMW_bootbank_nvme-pcie-esxio_1.4.0.2-1vmw.900.0.24755229', 'VMware_bootbank_nsx-proto2-libs-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_nvmxnet3-esxio_2.0.0.31-16vmw.900.0.24755229', 'VMW_bootbank_nvmetcp-esxio_2.0.0.1-1vmw.900.0.24755229', 'VMW_bootbank_rd1173-esxio_0.1-1vmw.900.0.24755229', 'VMware_bootbank_native-misc-drivers-esxio_9.0.0-0.24755229', 'VMW_bootbank_mnet-esxio_0.1-1vmw.900.0.24755229',
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Misc:93 'VMware_bootbank_gc-esxio_9.0.0-0.24755229', 'VMW_bootbank_bfedac-esxio_0.1-1vmw.900.0.24755229', 'VMW_bootbank_spidev-esxio_0.1-1vmw.900.0.24755229', 'VMware_bootbank_esxio-combiner-esxio_9.0.0-0.24755229', 'VMW_bootbank_ionic-en-esxio_24.9.0-11vmw.900.0.24755229', 'VMW_bootbank_nsxpensandoatlas_1.46.0.E.41.2.512-2vmw.900.0.24554284', 'VMW_bootbank_nmlx5-rdma-esxio_4.24.0.7-16vmw.900.0.24755229', 'VMW_bootbank_nmlx5-core-esxio_4.24.0.7-16vmw.900.0.24755229', 'VMware_bootbank_nsx-python-logging-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-esx-datapath-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-python-utils-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_esxio-dvfilter-generic-fastpath_9.0.0-0.24755229', 'VMware_bootbank_nsx-context-mux-esxio_9.0.0.0-9.0.24733064',
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Misc:93 'VMware_bootbank_nsx-exporter-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_nvmxnet3-ens-esxio_2.0.0.23-24vmw.900.0.24755229', 'VMware_bootbank_nsx-shared-libs-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_esxio_9.0.0-0.24755229', 'VMware_bootbank_nsx-cfgagent-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-opsagent-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_nmlxbf-pmc-esxio_0.1-6vmw.900.0.24755229', 'VMware_bootbank_nsx-python-protobuf-esxio_9.0.0.0-9.0.24499934', 'VMware_bootbank_nsx-proxy-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_pengpio-esxio_0.1-1vmw.900.0.24755229', 'VMware_bootbank_nsx-host-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-vdpi-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_mlnx-bfbootctl-esxio_0.1-7vmw.900.0.24755229',
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Misc:93 'VMware_bootbank_nsx-ids-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-mpa-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-adf-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_pensandoatlas_1.46.0.E.41.1.334-2vmw.900.0.24579338', 'VMware_bootbank_vsipfwlib-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-snproxy-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_pvscsi-esxio_0.1-7vmw.900.0.24755229', 'VMware_bootbank_nsxcli-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_dwi2c-esxio_0.1-7vmw.900.0.24755229', 'VMW_bootbank_penspi-esxio_0.1-1vmw.900.0.24755229', 'VMware_bootbank_nsx-nestdb-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_esxio-update_9.0.0-0.24755229', 'VMware_bootbank_nsx-cpp-libs-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_esxio-base_9.0.0-0.24755229',
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Misc:93 'VMware_bootbank_bmcal-esxio_9.0.0-0.24755229', 'VMware_bootbank_nsx-monitoring-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-platform-client-esxio_9.0.0.0-9.0.24733064', 'VMware_bootbank_nsx-netopa-esxio_9.0.0.0-9.0.24733064', 'VMW_bootbank_vmksdhci-esxio_1.0.3-7vmw.900.0.24755229'}
    2025-08-18T00:09:23Z In(14) lifecycle[2104496]: Depot:913 Generating vib: VMware_bootbank_vmware-hbrsrv_9.0.0-0.24755229
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: HostSeeding:119 Calculated sha256 checksum of payload hbrsrv '9b539e373a3295d3d00cb5ca0d8a1b6310f0ef00e21900d8699338e528f48a28', expected '9b539e373a3295d3d00cb5ca0d8a1b6310f0ef00e21900d8699338e528f48a28'
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Vib:3519 Skip truncating since the payload 'hbrsrv' is unsigned.
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Vib:3519 Skip truncating since the payload 'hbrsrv' is unsigned.
    2025-08-18T00:09:23Z Db(15) lifecycle[2104496]: Vib:3519 Skip truncating since the payload 'hbrsrv' is unsigned.
    2025-08-18T00:09:24Z In(14) lifecycle[2104496]: Depot:1186 VIB VMware_bootbank_vmware-hbrsrv_9.0.0-0.24755229 downloaded to /var/vmware/lifecycle/hostSeed/recreateVibs/vib20/vmware-hbrsrv/VMware_bootbank_vmware-hbrsrv_9.0.0-0.24755229.vib
    2025-08-18T00:09:24Z In(14) lifecycle[2104496]: Depot:913 Generating vib: VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: HostSeeding:1136 Extract depot failed: ('VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229', 'Failed to add reserved VIB VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229: not found in the reserved VIB cache storage')
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:399 Extract depot failed.
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:122 [ReservedVibExtractError]
    2025-08-18T00:09:24Z Er(11)[+] lifecycle[2104496]: ('VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229', 'Failed to add reserved VIB VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229: not found in the reserved VIB cache storage')
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 Traceback (most recent call last):
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/Depot.py", line 931, in GenerateVib
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     resVibPath = resVibCache.getVibLocation(vibobj.id)
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/ImageManager/HostSeeding.py", line 1271, in getVibLocation
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     raise VibNotInCacheError('VIB %s is not available in cached locations'
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 esximage.ImageManager.HostSeeding.VibNotInCacheError: VIB VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229 is not available in cached locations
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 During handling of the above exception, another exception occurred:
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 Traceback (most recent call last):
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/usr/lib/vmware/lifecycle/bin/imagemanagerctl.py", line 397, in depots
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     HostSeeding.InstalledImageInfo().extractDepot(task)
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/ImageManager/HostSeeding.py", line 1120, in extractDepot
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     Depot.DepotFromImageProfile(newProfile, depotDir,
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/Depot.py", line 1341, in DepotFromImageProfile
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     return DepotFromImageProfiles(imgprofiles, depotdir, vibdownloadfn, vendor,
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/Depot.py", line 1184, in DepotFromImageProfiles
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     vibdownloadfn(localfn, allRelatedVibs[vibid],
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127   File "/lib64/python3.11/site-packages/vmware/esximage/Depot.py", line 934, in GenerateVib
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127     raise Errors.ReservedVibExtractError(vibobj.id,
    2025-08-18T00:09:24Z Er(11) lifecycle[2104496]: imagemanagerctl:127 esximage.Errors.ReservedVibExtractError: ('VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229', 'Failed to add reserved VIB VMW_bootbank_penedac-esxio_0.1-1vmw.900.0.24755229: not found in the reserved VIB cache storage')

    Additional evidence from the vCenter Recent Tasks pane:


    This took some trial and error: I then figured out that I needed to split the esxio VIBs into the correct directories in the Reserved VIB Cache Storage. The errors started to resolve until…


    And now I made sure that tools-light was the correct version in the Reserved VIB Cache Storage; the Offline Depot (VMware-ESXi-9.0.0.0100.24813472-depot.zip) had an outdated version.


    Okay, it never ends… I’m trying to get this going before I run out of daylight… On the bright side, I get enough time inside of vCenter before the deployment workflow kills the VM to catch a glimpse of the error from the Recent Tasks pane.

    There’s some official guidance for this in KB 402817 on the Broadcom site:
    https://knowledge.broadcom.com/external/article/402817/failed-to-extract-image-from-the-host-no.html


    After a little bit of patience, we got through the errors in the workflows…


    Now we have an SDDC Manager, vCenter, NSX, Fleet Management, and VCF Operations kicking off.

    And this is what a 3-node MS-A2 cluster with vSAN and (2) 4TB Samsung 990 EVO Plus NVMe drives looks like:

  • 🏬 QNAP NAS → VCF HTTP Offline Depot Setup

    If you want to host the VCF Offline Depot on your QNAP NAS, this walkthrough gets you up and running fast. Hosting the depot locally saves space, bandwidth, and even cuts down the number of helper VMs you’d otherwise keep around.


    Tested on:

    • QNAP TVS-h1688X
    • QuTS hero h5.2.6.3195
    • VCF Installer VCF-SDDC-Manager-Appliance-9.0.0.0.24703748.ova

    Step 1) Enable the Web Server

    Control Panel → Applications → Web Server → Enable Web Server.
    (HTTP is fine here; I’ll show the VCF Installer tweak for HTTP a bit later.)


    Step 2) Move the files to your Web Root

    By default, QNAP serves from the Web share. In my case that’s:

    /share/ZFS24_DATA/Web/

    You can use the default Web share or create a Virtual Host if you want a dedicated hostname/port. The important part is that your document root actually contains the VCF depot layout.

    This is the exact folder structure that worked for me:

    I had to move the vsan folder and metadata folder into the PROD folder to sit alongside the COMP folder; both of those originally downloaded into COMP automatically.
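
    For clarity, the layout ended up looking roughly like this (a sketch based on the folders described above and the catalog path used later in the curl test; your ZFS pool name will differ):

    /share/ZFS24_DATA/Web/                  # document root (the Web share)
    └── PROD/
        ├── COMP/                           # bundles land here by default
        ├── vsan/                           # moved up out of COMP
        └── metadata/                       # moved up out of COMP
            └── productVersionCatalog/v1/productVersionCatalog.json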


    Step 3) Add basic authentication

    Create your .htaccess and .htpasswd files. Here’s the content of my .htaccess:

    # at /share/ZFS24_DATA/Web/.htaccess (change the path so it matches your ZFS path)
    Options +Indexes
    IndexOptions FancyIndexing NameWidth=*
    
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /share/ZFS24_DATA/Web/.htpasswd
    
    # Let Java/okhttp clients (VCF) through without a password
    SetEnvIfNoCase User-Agent "Java|okhttp" vcf_ok=1
    
    # Apache 2.4
    <IfModule mod_authz_core.c>
      <RequireAny>
        Require env vcf_ok
        Require valid-user
      </RequireAny>
    </IfModule>
    # Apache 2.2 fallback
    <IfModule !mod_authz_core.c>
      Order allow,deny
      Allow from env=vcf_ok
      Satisfy any
      Require valid-user
    </IfModule>
    
    # Don’t leak .ht* files
    <FilesMatch "^\.ht">
      Require all denied
    </FilesMatch>
    
    # Make sure JSON is sent with correct type
    AddType application/json .json

    I then ran these commands to create my .htpasswd file on the QNAP NAS via PuTTY:

    HASH=$(openssl passwd -apr1 'YourStrongPassword!')
    echo "admin:$HASH" > /share/ZFS24_DATA/Web/.htpasswd

    htpasswd is not a command you’ll find in the shell on the QNAP NAS, so you can leverage openssl to hash your password instead.

    Restart the QNAP web server:

    /etc/init.d/Qthttpd.sh restart

    Step 4) Allow HTTP for the Offline Depot on the VCF Installer Appliance

    By default, the VCF Installer expects to use HTTPS when connecting to the Offline Depot. For the purposes of a lab, this is overkill. The commands below will allow you to connect to an Offline Depot over HTTP.

    While the vcf user is allowed to connect via SSH, it doesn’t have privileges to edit the file we need to change. By default, root isn’t allowed to log in via SSH on the VCF Installer Appliance; you can change that if you want, but I found it quicker to do the following from the console, where root is allowed to log in:

    echo "lcm.depot.adapter.httpsEnabled=false" >> /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    systemctl restart lcm
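
    A quick sanity check that the flag actually landed before heading back to the UI (nothing fancy, just grep and a service status):

    # Confirm the property was appended and the LCM service came back up
    grep httpsEnabled /opt/vmware/vcf/lcm/lcm-app/conf/application-prod.properties
    systemctl status lcm --no-pager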

    Step 5) Add the Offline Depot in the VCF Installer UI

    This is relatively simple; you just need to put in your details and hit Configure:

    If you run into issues, you can leverage curl to validate whether or not you can authenticate. Success looks like HTTP/1.1 200 OK.

    While testing this out, I got a few HTTP/1.1 401 errors:

    Once I fixed my .htaccess file, those errors were resolved:

    curl -I -u 'admin:y0ur$tr0ngPa$$w0rd!!' http://offlinedepot2.varchitected.com/PROD/metadata/productVersionCatalog/v1/productVersionCatalog.json

    Step 6) Pre-stage the bits

    Click Download to pre-stage the content you need (select all of the files first).


    Step 7) Wait for the files to load…


    Outro (the witty bit)

    Congratulations, you just turned a humble QNAP into a mini-CDN for SDDC Manager. Fewer VMs, fewer downloads from the internet, and CPU cycles easing into a smooth landing, leaving more runway for your lab workloads instead. If only every homelab project was this satisfying: copy some files, charm Apache with a .htaccess, flip one tiny flag in the VCF Installer, and boom. VCF now eats from your own buffet. Bon appétit, SDDC. Now you can deploy VCF 9.0 with your own Offline Depot.🍴🚀

  • 🧪 Building My “Getting Started” Home Lab: ASUS NUC 15 Pro+ Edition

    After a month of research, weighing hardware options, and diving deep into reviews, I finally made the decision to build my new home lab setup. As a VMware enthusiast, I wanted something that was compact but powerful enough to handle everything in VCF, all while fitting neatly into my workspace. That’s when I landed on the ASUS NUC 15 Pro+ with the Intel Core Ultra 9 285H processor: the perfect little powerhouse!


    🧠 The Heart of the Build: ASUS NUC 15 Pro+ with Intel Core Ultra 9 285H

    This isn’t your typical NUC. While the specs officially list support for 96GB of DDR5 RAM (SODIMM), I was pleasantly surprised to find that it actually supports 128GB, just what I needed. I picked up the Crucial 128GB Kit (2x64GB) DDR5 RAM, 5600MHz, which is more than enough to power my ESXi setup and handle multiple virtual machines simultaneously. It’s fast, responsive, and easily handles the demands of a small private cloud setup at home.

    Everything installs completely tool-free, and the unit can be mounted to any surface. During initial testing, it generated very little heat and was very responsive.


    💾 Dual NVMe Setup: Unmatched Speed and Capacity

    When it comes to storage, I wasn’t willing to compromise on speed. I opted for the Corsair MP600 Micro 2TB NVMe PCIe x4 drive in the 2242 slot as my primary storage. The performance is stunning, and with PCIe 4.0 support, it’s more than enough to handle everything I throw at it.

    But I didn’t stop there. Who can resist the urge for more speed and storage, right? In the 2280 slot, I installed a Samsung SSD 9100 PRO 1TB. This is where the fun begins: I’m using it for NVMe Memory Tiering in ESXi and to carve out some capacity for VMFS. Together, these two NVMe drives offer the perfect balance of performance and storage capacity, handling everything from VM storage to memory-intensive tasks, keeping everything in my virtual environment running smoothly.
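
    For anyone curious what enabling NVMe Memory Tiering looks like from the shell, this is the rough shape of it (a sketch based on the commands documented for the ESXi 8.0 U3 tech preview; double-check they still apply to your build, and note the device path below is a placeholder):

    # Turn on the memory tiering kernel setting, then reboot
    esxcli system settings kernel set -s MemoryTiering -v TRUE
    reboot
    # After the reboot, dedicate an NVMe device as the tier device (device path is a placeholder)
    esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_9100_PRO_1TB__placeholder
    # Size the NVMe tier as a percentage of installed DRAM (100 = 1:1)
    esxcli system settings advanced set -o /Mem/TierNvmePct -i 100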


    🌐 Networking: Thunderbolt 4 Speed for Seamless Performance

    I didn’t neglect networking either. To ensure I get maximum throughput, I added the OWC Thunderbolt 4 10G Ethernet Adapter. Fast networking is a must in any lab, and this adapter lets me transfer large files, run multi-node clusters, and test configurations without hitting any bottlenecks. With Thunderbolt 4 connectivity, I can rest assured that network speed will never be a limiting factor in my home lab.

    I decided to give the OWC TB4 version a try myself. Unfortunately, it wasn’t compatible with the ESXi USB NIC Driver Fling, so I ended up swapping it out for the TB3 version.

    🔧 Why This Build?

    Why the ASUS NUC 15 Pro+ and these specific components? Simple: I wanted a setup that was compact yet powerful, with the potential to expand to a 2-node or 3-node cluster. The Intel Core Ultra 9 285H CPU gives me all the processing power I need, while the dual NVMe storage ensures I’ll never run out of fast, responsive space. The Thunderbolt 4 adapter takes care of any networking requirements, ensuring smooth operations even with heavy workloads.

    It’s the perfect mix of size and performance for getting started.

    Looking back, I realized that the ASUS NUC 15 Pro+ with the 285H doesn’t have enough cores to support VCF Auto, which means I can’t fully unlock the potential I was hoping for from the ASUS unless I run a 2nd box with different hardware.

    🧪 Final Thoughts: Lab Ready, Future-Proof, and Compact

    This lab is built to last. Whether I’m testing VMware Cloud Foundation (VCF), experimenting with ESXi features, or just refining my skills, this setup is ready for it all. Well, except for VCF Auto, but that’s a topic for another day. The ability to leverage NVMe Memory Tiering to increase the amount of memory available for additional workloads is a game-changer, and the enhanced speed and connectivity make this NUC the center of my lab.

    I’m eager to see how this setup performs over time, and I’ve already started rolling out VCF. Stay tuned for more updates as I continue optimizing and tweaking this lab.

    For anyone thinking about building their own home lab, I highly recommend the ASUS NUC 15 Pro+ setup. It’s compact, powerful, and the perfect platform to elevate your VCF knowledge to the next level.