
How To Verify Linux Kernel Support for Persistent Memory

Posted on December 26, 2019  •  2 minutes

Linux Kernel support for persistent memory was first delivered in version 4.0 of the mainline kernel; however, it was not enabled by default until version 4.2.

If your Linux distribution uses kernel 4.2 or later, or backports features into an older kernel, you will almost certainly have persistent memory support enabled by default. It is still worth verifying which features are enabled and disabled, since support for the very latest persistent memory features can vary by distro and release version.

If you build your own kernel and require persistent memory support, you’ll need to ensure you configure the kernel correctly.
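If you are configuring a custom build, the scripts/config helper in the kernel source tree can toggle options non-interactively. The sketch below enables a few of the core persistent memory options covered later in this article; treat the option list as illustrative rather than exhaustive, since the options you need depend on your kernel version and requirements.

# From the top of the kernel source tree, enable a few core
# persistent memory options (illustrative, not exhaustive)
scripts/config --enable ZONE_DEVICE \
               --module LIBNVDIMM \
               --module BLK_DEV_PMEM \
               --enable FS_DAX

# Resolve any newly exposed dependencies using their defaults
make olddefconfig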

We’ll use Fedora for this article, but the process is the same or very similar for other Linux distros. Fedora stores the kernel configuration file in /boot/config-<kernel_version>.<Fedora_release>.<architecture>. For example, on an x86_64 Intel server running Fedora 30 with kernel 5.3.18-200.fc30.x86_64, the config file is /boot/config-5.3.18-200.fc30.x86_64. This file is automatically generated, and editing it directly is not recommended as it will be overwritten or replaced when you update the kernel. Read Changing Fedora Kernel Configuration Options for more information on how the config file is generated and how to change kernel options.
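As a quick check, confirm that a config file exists for the kernel you are currently running (output shown for the example system above):

# Show the running kernel release
uname -r
5.3.18-200.fc30.x86_64

# The matching config file under /boot
ls /boot/config-$(uname -r)
/boot/config-5.3.18-200.fc30.x86_64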

The config file is a plain text document that we can view to see which features are enabled or disabled. For this particular system, there are 7,407 configuration entries in total, of which 5,691 are enabled and 1,716 are commented out (disabled).

# Count the total number of configuration options
grep -c CONFIG_ /boot/config-$(uname -r)
7407

# Count the number of enabled options
grep -c "^CONFIG_" /boot/config-$(uname -r)
5691

# Count the number of commented-out (disabled) options
grep CONFIG_ /boot/config-$(uname -r) | grep -c "^#"
1716

To look for the persistent memory specific configuration options, use:

# grep -E -i "CONFIG_ZONE_DEVICE|NFIT|PMEM|_ND_|BTT|NVDIMM|DAX" /boot/config-$(uname -r)
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ZONE_DEVICE=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
# CONFIG_DEV_DAX_PMEM_COMPAT is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_ARCH_HAS_PMEM_API=y

Note: Over time, the names of these configuration options have changed and may change again in the future. As such, the grep filter above may not produce an accurate or exhaustive list.
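To check specific options by exact name rather than by pattern, a small shell loop works well. This is a minimal sketch; the option list is illustrative and should be adjusted to match your kernel version and needs.

# Report the exact state of a handful of core pmem options
for opt in CONFIG_ZONE_DEVICE CONFIG_LIBNVDIMM \
           CONFIG_BLK_DEV_PMEM CONFIG_FS_DAX; do
    grep -E "^(# )?${opt}[ =]" /boot/config-$(uname -r) \
        || echo "${opt}: not present in this kernel config"
done

If your distro does not install a config file under /boot, the same checks can be run against /proc/config.gz using zgrep, provided the kernel was built with CONFIG_IKCONFIG_PROC.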

Summary

This article covered which persistent memory related configuration options to look for within the kernel config file. This allows custom kernel builders to enable the necessary features, and anyone running production systems to verify which features are enabled in their kernel.
