How To Install and Boot Microsoft Hyper-V 2019 from Persistent Memory (or not)

In a previous post, I described how to install and boot Fedora Linux using only persistent memory; no SSDs are required. For this follow-on post, I attempted to install Microsoft Hyper-V Server 2019 onto the persistent memory.

TL;DR - I was able to select the PMem devices as the install disk, but when the installer begins to write data, it fails with “Error code: 0xC0000005”. I haven’t found a solution to this problem (yet).

Create a Bootable USB

Follow the instructions in my previous blog post, where I document How to Create a Bootable Windows USB in Fedora Linux.
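That post walks through the full procedure. As a rough sketch of the idea only, the snippet below drives the WoeUSB-ng `woeusb` CLI from Python; the tool choice, ISO path, and target device are assumptions and placeholders, not values from that post, and writing to the wrong device will erase it.

```python
#!/usr/bin/env python3
"""Rough sketch: write a Windows/Hyper-V ISO to a USB stick on Fedora.

Assumes the WoeUSB-ng 'woeusb' CLI is installed (e.g. `pip install WoeUSB-ng`).
ISO and DEVICE are placeholders; double-check the device with `lsblk`,
because the target disk is repartitioned and erased.
"""
import subprocess
import sys

ISO = "/path/to/hyperv-server-2019.iso"   # placeholder path
DEVICE = "/dev/sdX"                       # placeholder device

def write_installer_usb(iso: str, device: str) -> None:
    # --device mode repartitions the stick and copies the installer files onto it
    subprocess.run(["sudo", "woeusb", "--device", iso, device], check=True)

if __name__ == "__main__":
    if "X" in DEVICE:
        sys.exit("Edit DEVICE to point at the real USB device first.")
    write_installer_usb(ISO, DEVICE)
```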

Install Hyper-V 2019

The first setup screen you’ll see when booting from the ISO or USB image allows you to select the installation language, time and currency format, and keyboard layout. Click “Next” once you have confirmed your choices.

Click “Install Now” to begin the installation process.

Read and accept the EULA license terms.

Select ‘Custom: Install the newer version of Hyper-V Server only (advanced)’.

Select a disk or partition on which to install Windows Server. You can optionally create a new partition from the available capacity, or use all of the available capacity by clicking “Next”. I found the 60GB PMem devices listed as ‘Drive 6’ and ‘Drive 7’. Unfortunately, there’s no way to obtain more information about each device, so you have to identify the PMem devices by their capacity.
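If you want to know which capacities to look for before booting the installer, one option is to boot a Linux live image first and list the PMem namespaces with ndctl. The sketch below is illustrative only and assumes ndctl is available; it is not part of the original walkthrough.

```python
#!/usr/bin/env python3
"""Illustrative sketch: list persistent-memory namespaces and their sizes.

Run from a Linux live environment with ndctl installed, before booting the
Hyper-V installer, so you know which capacities to match against 'Drive N'.
"""
import json
import subprocess

def list_pmem_namespaces() -> None:
    # 'ndctl list -N' prints the active namespaces as JSON
    result = subprocess.run(["ndctl", "list", "-N"],
                            capture_output=True, text=True, check=True)
    if not result.stdout.strip():
        print("No PMem namespaces found.")
        return
    data = json.loads(result.stdout)
    namespaces = data if isinstance(data, list) else [data]
    for ns in namespaces:
        size_gb = ns.get("size", 0) / 1e9
        print(f"{ns.get('dev')}: mode={ns.get('mode')}, size={size_gb:.1f} GB")

if __name__ == "__main__":
    list_pmem_namespaces()
```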

Shortly into the installation, I encountered error code 0xC0000005 - “Windows installation encountered an unexpected error. Verify that the installation sources are accessible, and restart the installation.”

Most search results for this error during installation indicate that “The error halts the Windows OS installation and is mostly related to temporary hardware issues with the RAM or corrupt hard drive due to bad sectors.” I know the RAM and PMem are good, and I verified that the same USB image installs to an SSD without any issues.
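To rule out a corrupt download, the quickest check is to compare the ISO’s SHA-256 against the value published on the Microsoft download page. A small sketch (the path is a placeholder; the expected hash is not reproduced here):

```python
#!/usr/bin/env python3
"""Small sketch: checksum the downloaded ISO to rule out a corrupt image.

Compare the output against the SHA-256 value published on the Microsoft
download page (not reproduced here). The path is a placeholder.
"""
import hashlib

ISO = "/path/to/hyperv-server-2019.iso"   # placeholder path

def sha256sum(path: str, chunk_size: int = 1024 * 1024) -> str:
    # Hash the file in chunks so large ISOs don't need to fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256sum(ISO))
```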

I encountered the same problem while installing Windows Server 2019 and 2022. If I get some free cycles, I’ll continue to debug the problem. If you have any suggestions, please leave me a comment.

An Introduction to Generative Prompt Engineering

Introduction

Over the past few years, there has been a significant explosion in the use and development of large language models (LLMs). An LLM is a language model consisting of a neural network with many parameters (commonly multi-billions of weights), trained on large quantities of text. Some of the most popular large language models are: GPT-3 (Generative Pre-trained Transformer 3) – developed by OpenAI; BERT (Bidirectional Encoder Representations from Transformers) – developed by Google; RoBERTa (Robustly Optimized BERT Approach) – developed by Facebook AI; T5 (Text-to-Text Transfer Transformer) – developed by Google. Many others exist and continue to emerge. These language models are designed to understand and generate natural language text, allowing for a wide range of applications such as chatbots, content creation, language translation, and more.
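To make the “generate natural language text” part concrete, here is a minimal sketch using the Hugging Face transformers library, with GPT-2 as a small, freely downloadable stand-in for the larger models named above; the prompt and generation settings are illustrative only.

```python
"""Minimal sketch: generate text from a prompt with an open LLM.

Uses the Hugging Face 'transformers' library and GPT-2 as a stand-in for
the larger models named above. Prompt and settings are illustrative only.
"""
from transformers import pipeline

# Downloads the GPT-2 weights on first run
generator = pipeline("text-generation", model="gpt2")

prompt = "Prompt engineering is the practice of"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])
```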

Your Personal Codespace: Self-Host VS Code on Any Server

GitHub Codespaces and other cloud IDEs have revolutionized development, offering a complete VS Code environment that runs on a remote server and is accessible from any browser. It’s a game-changer for productivity and flexibility.

But what if you could have that same powerful, seamless experience on your own terms?

This guide will show you how to build your very own private Codespace, replicating the convenience of the GitHub experience on any server you control—be it a machine in your home lab, a dedicated server, or a budget-friendly cloud VM. We’ll explore two distinct paths to get you up and running with a persistent, browser-based VS Code instance on Ubuntu 24.04, complete with AI assistants like Gemini and GitHub Copilot to boost your workflow.

The Library Landscape: Why Build Another One?

Series: Building lib3mf-rs

This post is part of a 5-part series on building a comprehensive 3MF library in Rust:

  1. Part 1: My Journey Building a 3MF Native Rust Library from Scratch
  2. Part 2: The Library Landscape - Why Build Another One?
  3. Part 3: Into the 3MF Specification Wilderness - Reading 1000+ Pages of Specifications
  4. Part 4: Design for Developers - Features, Flags, and the CLI
  5. Part 5: Reflections and What’s Next - Lessons from Building lib3mf-rs

“Why not just use the existing library?”

It’s a fair question. One I asked myself many times during the early days of this project. The 3MF Consortium maintains lib3MF, a comprehensive C++ implementation used by major companies in additive manufacturing. Why build another one?
