How To Install and Boot Microsoft Hyper-V 2019 from Persistent Memory (or not)

In a previous post, I described how to install and boot Fedora Linux using only Persistent Memory, with no SSDs required. For this follow-on post, I attempted to install Microsoft Hyper-V Server 2019 onto the persistent memory.

TL;DR - I was able to select the PMem devices as the install disk, but when the installer begins writing data, it fails with “Error code: 0xC0000005”. I haven’t found a solution to this problem (yet).

Create a Bootable USB

Follow the instructions in my previous blog post, where I document How to Create a Bootable Windows USB in Fedora Linux.

Install Hyper-V 2019

The first setup screen you’ll see when booting from the ISO or USB image allows you to select the installation language, time and currency format, and keyboard layout. Click “Next” once you have confirmed your choices.

Click “Install Now” to begin the installation process.

Read and accept the license terms (EULA).

Select ‘Custom: Install the newer version of Hyper-V Server only (advanced)’.

Select a disk or partition on which to install Windows Server. You can optionally create a new partition from the available capacity, or use all the available capacity by clicking “Next”. I found the 60GB PMem devices listed as ‘Drive 6’ and ‘Drive 7’. Unfortunately, there’s no way to obtain more information about each device, so you have to identify them by their capacity.
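
If you still have a Linux environment handy (for example, the Fedora install from the previous post), you can note the module and namespace capacities before booting the Windows installer. Treat these commands as a suggestion rather than a required step:

    # List the physical PMem modules and their capacities
    $ sudo ipmctl show -dimm

    # List namespaces (e.g., /dev/pmem0) with human-readable sizes
    $ sudo ndctl list -N -u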

Shortly into the installation, I encountered error code 0xC0000005 - “Windows installation encountered an unexpected error. Verify that the installation sources are accessible, and restart the installation.”

Most search results for this error during installation indicate “The error halts the Windows OS installation and is mostly related to temporary hardware issues with the RAM or corrupt hard drive due to bad sectors.” I know the RAM and PMem are good, and I verified that the same USB image installs to an SSD without any issues.

I encountered the same problem while installing Windows Server 2019 and 2022. If I get some free cycles, I’ll continue to debug the problem. If you have any suggestions, please leave me a comment.

How To Enable Debug Logging in ipmctl

The ipmctl utility is used for configuring and managing Intel Optane Persistent Memory modules (DCPMM/PMem). It provides the functionality to (example commands follow the list):

  • Discover Persistent Memory on the server
  • Provision the persistent memory configuration
  • View and update the firmware on the persistent memory modules
  • Configure data-at-rest security
  • Track health and performance of the persistent memory modules
  • Debug and troubleshoot persistent memory modules
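
As a rough illustration of those capabilities, here are a few commonly used commands; option availability and output vary by ipmctl version and platform:

    # Discover the PMem modules installed in the server
    $ sudo ipmctl show -dimm

    # Show how the modules map to CPU sockets and memory channels
    $ sudo ipmctl show -topology

    # Provision all PMem capacity as App Direct regions
    $ sudo ipmctl create -goal PersistentMemoryType=AppDirect

    # View firmware versions on the modules
    $ sudo ipmctl show -firmware

    # Check health and sensor readings (temperature, spare capacity, etc.)
    $ sudo ipmctl show -sensor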

I wrote the IPMCTL User Guide showing how to use the tool, but what if ipmctl returns an error or something you’re not expecting? How do you debug the debugger? On Linux, ipmctl relies on libndctl to communicate with the BIOS and the persistent memory modules themselves. This is a complicated stack involving multiple kernel drivers and the physical hardware itself. Anything along this path could be causing a problem.
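
A reasonable first step is to raise ipmctl’s own debug log level through its preferences and re-run the failing command. Treat this as a sketch, since preference names and level values can vary between ipmctl releases:

    # Show the current ipmctl preferences, including the debug log level
    $ sudo ipmctl show -preferences

    # Raise the debug log level (e.g., 4 = verbose), then re-run the failing command
    $ sudo ipmctl set -preferences DBG_LOG_LEVEL=4
    $ sudo ipmctl show -dimm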

Read More
How to Confirm Virtual to Physical Memory Mappings for PMem and FSDAX Files

Are you curious whether your application’s memory-mapped files are really using Intel Optane Persistent Memory (PMem), Compute Express Link (CXL) Non-Volatile Memory Modules (NV-CMM), or another DAX-enabled persistent memory device? Want to understand how virtual memory maps onto physical, non-volatile regions? Let’s use easily adaptable scripts in both Python and C to confirm this definitively on your Linux system.

Why Does This Matter?

With the advent of persistent memory and DAX (Direct Access) filesystems, applications can memory-map files directly onto PMem, bypassing the traditional DRAM page cache. This promises significant performance and durability improvements for data-intensive workloads and databases, such as SQLite, Redis, and others.
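
As a minimal sketch of the environment this assumes (the device and mount paths are examples), the file under test would live on an fsdax namespace mounted with the dax option, and the physical PMem ranges are visible in /proc/iomem:

    # Example: create a filesystem on an fsdax block device and mount it with DAX
    $ sudo mkfs.ext4 /dev/pmem0
    $ sudo mount -o dax /dev/pmem0 /mnt/pmem
    $ mount | grep /mnt/pmem          # confirm the dax option is active

    # The physical address ranges backing PMem (root needed to see the addresses)
    $ sudo grep -i "persistent memory" /proc/iomem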

Read More
Understanding STREAM: Benchmarking Memory Bandwidth for DRAM and CXL

In today’s Artificial Intelligence (AI), Machine Learning (ML), and high-performance computing (HPC) landscape, memory bandwidth is a critical factor in determining overall system performance. As workloads grow increasingly data-intensive, traditional DRAM-only setups are often insufficient, prompting the rise of new memory expansion technologies like Compute Express Link (CXL). To evaluate memory bandwidth across DRAM and CXL devices, we use a modified industry-standard tool called STREAM.

In this blog, we’ll explore what STREAM is, how it works, why it’s commonly used for benchmarking memory bandwidth, and how a modified version of STREAM can be used to measure performance in heterogeneous memory environments, including DRAM and CXL.
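
For reference, a typical way to build the stock STREAM benchmark and pin it to a specific memory node looks like the following; the NUMA node numbers and array size are examples, and the modified STREAM discussed here may build differently:

    # Download and build STREAM with OpenMP; size the arrays to exceed the caches
    $ wget https://www.cs.virginia.edu/stream/FTP/Code/stream.c
    $ gcc -O3 -fopenmp -DSTREAM_ARRAY_SIZE=200000000 -DNTIMES=20 stream.c -o stream

    # Show the NUMA topology; CXL memory typically appears as a CPU-less node
    $ numactl -H

    # Run with threads on node 0, sourcing memory from DRAM (node 0) vs. CXL (e.g., node 2)
    $ numactl --cpunodebind=0 --membind=0 ./stream
    $ numactl --cpunodebind=0 --membind=2 ./stream

Comparing the Copy, Scale, Add, and Triad results between the two runs gives a first-order view of the bandwidth difference between DRAM and CXL-attached memory.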

Read More