How To Configure LVM on RAID 1 Device

In this article we will discuss how you can take advantage of both LVM (Logical Volume Manager) and RAID (Redundant Array of Independent Disks) when managing your OS disk devices.

What is LVM?

LVM can be defined as a software layer that works on top of your physical storage devices. It gives you a way to manage all your storage devices as a single pool of logical storage. This simplifies your storage management tasks as a System Administrator.

Key Components of LVM:

  • Physical Volume (PV): A PV is an underlying physical storage device on the computer system (physical server or Virtual Machine). This can be an HDD, SSD, NVMe, or a RAID device.
  • Volume Group (VG): This is a group of single or multiple PVs to form a single logical storage pool. You can manage and allocate the pool dynamically.
  • Logical Volume (LV): This is a virtual disk created from the VG storage pool. It can be resized on the fly provided extra capacity is available in the pool, as the sketch below illustrates.
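
As a quick illustration of that on-the-fly resizing, here is a minimal sketch; the volume group and logical volume names (vg0, data) are hypothetical placeholders, not part of this article's setup.

# Grow a hypothetical LV (vg0/data) by 10GB; -r also resizes the filesystem on it
sudo lvextend -r -L +10G /dev/vg0/data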

What is RAID?

RAID can be either hardware or software based depending on the system configuration. If you are working on a dedicated server with a RAID controller, then hardware RAID can be used. RAID combines multiple physical disks into a single logical unit. The primary focus of RAID is fault tolerance and data redundancy. Below is a list of commonly used RAID levels.

  • RAID 0 (Striping): Gives improved performance since data is split across multiple disks. If you are focused on faster read/write speeds but care less about data redundancy then this is suitable for you. Please note that if one disk fails, then all the data is lost.
  • RAID 1 (Mirroring): In RAID 1, the data is replicated across multiple disks (data mirrored). With this you get redundancy in that if one disk fails, you can still access the data on the other mirrored disk(s).
  • RAID 5/6: These RAID levels give you both data redundancy and some level of performance. They distribute the data and parity information across multiple disks.
  • RAID 10: This RAID configuration is also referred to as RAID 1+0. It combines the benefits of RAID 1 (mirroring) and RAID 0 (striping), giving you a balance between performance and redundancy.
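
For reference, below is a sketch of how these levels map to mdadm create commands; the device names (sdW–sdZ) are placeholders, and the device counts shown are the usual minimums for each level.

# Illustrative mdadm invocations for the common RAID levels (placeholder disks)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY                     # striping
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdX /dev/sdY                     # mirroring
sudo mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdX /dev/sdY /dev/sdZ            # striping + parity
sudo mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/sdW /dev/sdX /dev/sdY /dev/sdZ  # mirrored stripes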

LVM vs RAID – Key Differences

Feature         | LVM                                     | RAID
----------------|-----------------------------------------|----------------------------------------------
Primary Focus   | Flexibility, Volume Management          | Data Redundancy, Fault Tolerance
Data Protection | Limited (mirroring can be achieved)     | High (depending on RAID level)
Performance     | Can improve performance (striping)      | RAID 0 offers improved read/write speeds
Scalability     | Highly scalable, add disks dynamically  | Scalability depends on RAID level
Cost            | Lower cost (software-based)             | Can be more expensive (hardware RAID)
Complexity      | Less complex                            | More complex configuration (some RAID levels)

RAID vs LVM – Comparison Table

How To Create RAID 1 Device

This article is only suitable for software-based RAID. For hardware RAID, refer to your hardware vendor's documentation.

In this article, we will configure software RAID – RAID level 1. If you have configured hardware RAID, you can confidently skip this step.

List the physical devices available on your system.

# lsblk
NAME                     MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                        8:0    0   3.5T  0 disk
sdd                        8:48   0   3.5T  0 disk

Here we can see sda and sdd. I will create a RAID 1 device /dev/md10 which uses /dev/sda and /dev/sdd.

# mdadm --create /dev/md10 --level=raid1 --raid-devices=2 /dev/sdX /dev/sdY
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.

Replace sdX and sdY with the correct labels of your disks, then validate your setup.

Refer to the RAID levels documentation available online for the exact number of devices required to configure your desired RAID level.

You can also use the Seagate RAID Capacity Calculator to determine the number of disk devices required and the resulting capacity for a specific RAID level.

Confirm the details of created device.

# lsblk /dev/md10
NAME MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
md10   9:10   0  3.5T  0 raid1
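
For fuller details on the array – its state, member disks, and sync progress – you can also query mdadm directly:

# Show detailed information about the RAID device
sudo mdadm --detail /dev/md10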

Details of the created RAID device are available in the /proc/mdstat file.

# cat /proc/mdstat
Personalities : [raid1]
md10 : active raid1 sdd[1] sda[0]
      3750606144 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.6% (25142016/3750606144) finish=300.6min speed=206511K/sec
      bitmap: 28/28 pages [112KB], 65536KB chunk

md1 : active raid1 sdc2[1] sdb2[0]
      467898176 blocks super 1.2 [2/2] [UU]
      bitmap: 1/4 pages [4KB], 65536KB chunk

md0 : active raid1 sdc1[1] sdb1[0]
      818176 blocks super 1.2 [2/2] [UU]

unused devices: <none>
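
Optionally, record the array in mdadm's configuration file so it is reliably assembled at boot. Note that the file location varies by distribution – /etc/mdadm/mdadm.conf on Debian/Ubuntu, /etc/mdadm.conf on RHEL-based systems – and the initramfs update command shown is the Debian/Ubuntu one, so adjust both for your system.

# Append the array definition to mdadm's config (Debian/Ubuntu path shown)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is assembled early at boot (Debian/Ubuntu)
sudo update-initramfs -u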

Create Partition on RAID Device

If you intend to use the entire RAID device in LVM, then you can skip this step. If you want to partition the device and use a single partition in LVM, then follow along.

We will create two partitions – one spanning 0-70% of the RAID device capacity, and the other 70-100%. You can customize this to suit your intended use case.

# Create a GPT partition table
sudo parted -s -a optimal -- /dev/md10 mklabel gpt

# First partition
sudo parted -s -a optimal -- /dev/md10 mkpart primary 0% 70%

# Second partition
sudo parted -s -a optimal -- /dev/md10 mkpart primary 70% 100%

# Check optimal alignment of the first partition
sudo parted -s -- /dev/md10 align-check optimal 1
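
If the new partitions do not appear immediately, you can ask the kernel to re-read the partition table. partprobe ships with parted, so it should already be available.

# Re-read the partition table so the kernel sees md10p1 and md10p2
sudo partprobe /dev/md10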

We can print the partition table of the RAID device.

# parted /dev/md10 p
Model: Linux Software RAID Array (md)
Disk /dev/md10: 3841GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2688GB  2688GB               primary
 2      2688GB  3841GB  1152GB               primary

The same results can be confirmed using the ls and lsblk commands on Linux.

# ls /dev/md10*
/dev/md10  /dev/md10p1  /dev/md10p2

# lsblk /dev/md10
NAME     MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
md10       9:10   0  3.5T  0 raid1
├─md10p1 259:0    0  2.4T  0 part
└─md10p2 259:1    0    1T  0 part

Configure LVM on RAID 1 Device

These examples are based on the RAID 1 device that we created, but the process applies to other RAID levels since we are not doing mirroring at the LVM level.

Let’s create a physical volume from the partition. The partition in our setup is /dev/md10p1.

$ sudo pvcreate /dev/md10p1
  Physical volume "/dev/md10p1" successfully created.

Next we create a volume group called nova. Change the volume group name accordingly.

$ sudo vgcreate nova /dev/md10p1
  Volume group "nova" successfully created
  • Note that a Volume Group can also be created from the raw block device:
# Example: creating a volume group from the RAID device itself
sudo vgcreate nova /dev/mdX

Next, create a Logical Volume called ephemeral that uses all of the volume group's capacity.

$ sudo lvcreate -n ephemeral -l 100%FREE nova
  Logical volume "ephemeral" created.
  • To specify an exact capacity, use -L:
# Example: create a 100GB logical volume
sudo lvcreate -n <lvname> -L 100G nova

# Example: create a 1TB logical volume
sudo lvcreate -n <lvname> -L 1T nova
  • Or use a percentage of the free space – e.g. only use 50% of it:
sudo lvcreate -n <lvname> -l 50%FREE nova

To list all physical volumes, volume groups and logical volumes, run:

sudo pvdisplay
sudo vgdisplay
sudo lvdisplay
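
For a more compact, one-line-per-object summary, you can use the short-form equivalents:

sudo pvs
sudo vgs
sudo lvs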

We will create an XFS filesystem on /dev/nova/ephemeral – the path to a logical volume uses the format /dev/<vgname>/<lvname>.

$ sudo mkfs.xfs /dev/nova/ephemeral
meta-data=/dev/nova/ephemeral    isize=512    agcount=4, agsize=164088832 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
data     =                       bsize=4096   blocks=656355328, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=320486, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.

The last step is to mount the filesystem. In the example below we are mounting it under the /mnt path.

$ sudo mount -t xfs /dev/nova/ephemeral /mnt
$ df -hT /mnt/
Filesystem                 Type  Size  Used Avail Use% Mounted on
/dev/mapper/nova-ephemeral xfs   2.5T   18G  2.5T   1% /mnt

To persist this mount across system reboots, we have to edit the /etc/fstab file. Note that in the entry below, the persistent mount point is /var/lib/nova rather than the temporary /mnt we used above.

$ sudo vim /etc/fstab
/dev/nova/ephemeral /var/lib/nova xfs defaults 0 0
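
Optionally, you can mount by filesystem UUID instead of the device path, which is robust against device renaming. blkid reports the UUID; the fstab line below uses a placeholder value.

# Find the filesystem UUID (the value will differ on your system)
sudo blkid /dev/nova/ephemeral

# Then reference it in /etc/fstab, e.g.:
# UUID=<your-uuid> /var/lib/nova xfs defaults 0 0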

Make sure the mount point path exists before you use it, then test your configuration.

sudo umount /mnt
sudo mount -a

If it’s working fine, you should get no output or errors when you run the mount -a command.
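
You can additionally confirm that the fstab entry is active with findmnt, which is part of util-linux:

# Print the mount if the filesystem is mounted at the fstab mount point
findmnt /var/lib/nova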

Conclusion

To conclude, by configuring LVM on a RAID device you get a great combination of redundancy and flexibility for your system storage requirements. You can manage storage dynamically using the powerful features of LVM while leveraging the mirroring provided by RAID 1 to better protect your data.

CloudSpinx Engineers are available to help with any challenges and questions that you have about LVM and RAID. We have a team of knowledgeable and friendly support engineers available via Live Chat or email to assist you with any kind of troubleshooting and ensure the smooth running of your Linux systems.

Reach out today and allow us to help you navigate the world of Linux!
