To create a RAID 1 array with 2x drives, pass them to the mdadm --create command. You will have to specify the device name you wish to create (**/dev/md0** in our case), the RAID level, and the number of devices:
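```bash
# Example: create a mirrored (RAID 1) array from two whole drives.
# /dev/sda and /dev/sdb are the drives used in this guide --
# adjust the device names to match your system (check with lsblk).
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```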
If the drives you are using are not partitioned with the boot flag enabled, you will likely be given the following warning. It is safe to type **y** to continue:
```
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device. If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1953383488K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
```
The mdadm tool will start to mirror the drives. This can take some time to complete, but the array can be used during this time. You can monitor the progress of the mirroring by checking the /proc/mdstat file:
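```bash
# Shows all md arrays and the progress of any ongoing sync
cat /proc/mdstat
```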
As you can see in the second line, the /dev/md0 device has been created with the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The fourth line shows the progress of the syncing. You can continue the guide while this process completes.
!!! note
If your system is configured to [display RAID fault on the LED2](/mdadm/#configure-fault-led), then you should also see the red LED2 blinking while your array is (re-)syncing.
To create a RAID 6 array with 4x drives, pass them to the mdadm --create command. You will have to specify the device name you wish to create (**/dev/md0** in our case), the RAID level, and the number of devices:
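```bash
# Example: create a RAID 6 array from four whole drives.
# /dev/sda to /dev/sdd are the drives used in this guide --
# adjust the device names to match your system (check with lsblk).
sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```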
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
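```bash
# Shows all md arrays and the progress of any ongoing sync
cat /proc/mdstat
```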
As you can see in the second line, the /dev/md0 device has been created with the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress of the syncing. You can continue the guide while this process completes.
!!! note
If your system is configured to [display RAID fault on the LED2](/mdadm/#configure-fault-led), then you should also see the red LED2 blinking while your array is (re-)syncing.
To create a RAID 10 array with 4x drives, pass them to the mdadm --create command. You will have to specify the device name you wish to create (**/dev/md0** in our case), the RAID level, and the number of devices:
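```bash
# Example: create a RAID 10 array from four whole drives.
# /dev/sda to /dev/sdd are the drives used in this guide --
# adjust the device names to match your system (check with lsblk).
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```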
The mdadm tool will start to configure the array (it actually uses the recovery process to build the array for performance reasons). This can take some time to complete, but the array can be used during this time. You can monitor the progress of the build by checking the /proc/mdstat file:
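```bash
# Shows all md arrays and the progress of any ongoing sync
cat /proc/mdstat
```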
As you can see in the second line, the /dev/md0 device has been created with the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress of the syncing. You can continue the guide while this process completes.
!!! note
If your system is configured to [display RAID fault on the LED2](/mdadm/#configure-fault-led), then you should also see the red LED2 blinking while your array is (re-)syncing.
The Helios4 System-On-Chip is a 32-bit architecture, therefore the maximum partition size supported is 16TB. If your RAID array provides more than 16TB of usable space, you will need to create more than one partition. You can use the **fdisk /dev/md0** command to create several partitions on your array; then, instead of using */dev/md0* in the commands below, you will use */dev/md0p1*, */dev/md0p2*, etc.
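If you prefer a non-interactive tool, the sketch below does the same thing with **parted** instead of fdisk. The 50/50 split is only an illustration; adjust the boundaries and the number of partitions to your needs.

```bash
# Illustration only: label the array GPT and split it into two partitions.
sudo parted -s /dev/md0 mklabel gpt
sudo parted -s /dev/md0 mkpart primary 0% 50%
sudo parted -s /dev/md0 mkpart primary 50% 100%

# The partitions should now show up as md0p1 and md0p2
lsblk /dev/md0
```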
To make sure that the array is reassembled automatically at boot, we will have to modify the /etc/mdadm/mdadm.conf file. You can automatically scan the active array and append it to the file by typing:
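```bash
# Append the current array definition to mdadm.conf so it is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

On Debian-based systems such as Armbian you may also want to run `sudo update-initramfs -u` afterwards, so that the updated configuration is picked up by the initial ramdisk.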
To get a detailed picture of your array setup and its state you can use the following command:
```bash
sudo mdadm -D /dev/md0
```
This is the output when the array is in good health; note the line showing **State : clean**.
```
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jul 29 14:59:58 2018
     Raid Level : raid10
     Array Size : 3906766848 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 1953383424 (1862.89 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Nov 13 06:21:03 2018
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : helios4:0  (local to host helios4)
           UUID : ea143418:153e050e:ac86d56c:07547815
         Events : 95892

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync set-A   /dev/sda
       1       8       16        1      active sync set-B   /dev/sdb
       2       8       32        2      active sync set-A   /dev/sdc
       3       8       48        3      active sync set-B   /dev/sdd
```
In the example below the array has a failed drive: you can see the line **State : clean, degraded** and that RaidDevice 2 (/dev/sdc) has been marked as removed. The array is still active, but it is running in a degraded state and /dev/sdc should be replaced as soon as possible.
```
/dev/md0:
        Version : 1.2
  Creation Time : Sun Jul 29 14:59:58 2018
     Raid Level : raid10
     Array Size : 3906766848 (3725.78 GiB 4000.53 GB)
  Used Dev Size : 1953383424 (1862.89 GiB 2000.26 GB)
   Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Nov 13 08:34:06 2018
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : helios4:0  (local to host helios4)
           UUID : ea143418:153e050e:ac86d56c:07547815
         Events : 95892

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync set-A   /dev/sda
       1       8       16        1      active sync set-B   /dev/sdb
       -       0        0        2      removed             /dev/sdc
       3       8       48        3      active sync set-B   /dev/sdd

       2       8       32        -      faulty              /dev/sdc
```
## Replace a Failed Drive
Once you have identified the failed drive with the **mdadm -D** command, as shown in the previous section, follow these steps to replace it:
1. Mark the faulty drive as failed.
`mdadm /dev/md0 --fail /dev/sdc`
2. Remove the drive from the array.
`mdadm /dev/md0 --remove /dev/sdc`
3. Identify which physical drive is to be replaced.
4. Swap the failed drive with the new drive.
5. Add the new drive to the array, as shown in the example below. You will need to use the **lsblk** command to figure out the device name of the new drive; most probably it will be the same as before.
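For example, if the replacement drive comes up with the same device name as the failed one (/dev/sdc here), the last step would look like this:

```bash
# Double-check the device name of the new drive
lsblk

# Add the new drive to the array; mdadm starts rebuilding onto it immediately
sudo mdadm /dev/md0 --add /dev/sdc

# Follow the rebuild progress
cat /proc/mdstat
```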
Receive a notification whenever mdadm detects something wrong with your array. This is very important: you don't want to miss an issue on your array, and being warned early gives you time to take the right actions.
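A minimal way to set this up, assuming a working mail setup (e.g. postfix, exim or msmtp) is already present on the system, is to add a **MAILADDR** line to /etc/mdadm/mdadm.conf and send yourself a test alert. The address below is only a placeholder.

```bash
# Placeholder address -- replace with your own
echo "MAILADDR admin@example.com" | sudo tee -a /etc/mdadm/mdadm.conf

# Ask the mdadm monitor to send one test alert per array, then exit
sudo mdadm --monitor --scan --test --oneshot
```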
Make the Red Fault LED (LED2) indicate when an error has been detected on your array. The script below lights up LED2 if an error occurs on an array, and makes LED2 blink while a degraded array is being rebuilt.
Latest Armbian builds, starting from version 5.68, are already configured to display RAID faults on the Red Fault LED (LED2), therefore you can skip this section.
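For reference, a minimal sketch of such a script is shown below. It assumes the fault LED is exposed by the kernel under /sys/class/leds/helios4:red:fault (check with `ls /sys/class/leds` on your board) and that mdadm is told to call the script through a `PROGRAM` line in /etc/mdadm/mdadm.conf; the script path is only an example.

```bash
#!/bin/bash
# Example mdadm hook, e.g. saved as /usr/local/bin/fault-led.sh and enabled with:
#   PROGRAM /usr/local/bin/fault-led.sh
# in /etc/mdadm/mdadm.conf. mdadm calls it as: $1=event $2=md device [$3=component]

# Adjust this path if your fault LED has a different name (see: ls /sys/class/leds)
LED=/sys/class/leds/helios4:red:fault

case "$1" in
  Fail|FailSpare|DegradedArray)
    # A device failed or the array is degraded: solid red
    echo none > "$LED/trigger"
    cat "$LED/max_brightness" > "$LED/brightness"
    ;;
  RebuildStarted|Rebuild??)
    # The array is rebuilding/resyncing: blink
    echo timer > "$LED/trigger"
    echo 500 > "$LED/delay_on"
    echo 500 > "$LED/delay_off"
    ;;
  RebuildFinished|SpareActive)
    # Back to normal: LED off
    echo none > "$LED/trigger"
    echo 0 > "$LED/brightness"
    ;;
esac
```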
If for some reason you want to add an existing array to your system (e.g. you just did a fresh install of your operating system), you can use the following command to detect and assemble your existing array.
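```bash
# Scan all drives for existing md superblocks and assemble the arrays found
sudo mdadm --assemble --scan
```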
This process will completely destroy the array and any data written to it. Make sure that you are operating on the correct array and that you have copied off any data you need to retain prior to destroying the array.
Find the active arrays in the /proc/mdstat file by typing:
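```bash
# Lists the currently active arrays
cat /proc/mdstat
```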