diff --git a/docs/mdadm.md b/docs/mdadm.md
index a673f83..5e48577 100644
--- a/docs/mdadm.md
+++ b/docs/mdadm.md
@@ -87,7 +87,10 @@ Output unused devices:
 
-As you can see in the second line, the /dev/md0 device has been created with the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The fourth line shows the progress on the mirroring. You can continue the guide while this process completes.
+As you can see in the second line, the /dev/md0 device has been created with the RAID 1 configuration using the /dev/sda and /dev/sdb devices. The fourth line shows the progress of the sync. You can continue the guide while this process completes.
+
+!!! note
+    If your system is configured to [display RAID faults on LED2](/mdadm/#configure-fault-led), you should also see the red LED2 blinking while your array is (re-)syncing.
 
 ### Create RAID 6 Array
 
@@ -109,7 +112,10 @@ Output unused devices:
 
-As you can see in the second line, the /dev/md0 device has been created with the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress on the mirroring. You can continue the guide while this process completes.
+As you can see in the second line, the /dev/md0 device has been created with the RAID 6 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress of the sync. You can continue the guide while this process completes.
+
+!!! note
+    If your system is configured to [display RAID faults on LED2](/mdadm/#configure-fault-led), you should also see the red LED2 blinking while your array is (re-)syncing.
 
 ### Create RAID 10 Array
 
@@ -131,7 +137,10 @@ Output unused devices:
 
-As you can see in the second line, the /dev/md0 device has been created with the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress on the mirroring. You can continue the guide while this process completes.
+As you can see in the second line, the /dev/md0 device has been created with the RAID 10 configuration using the /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd devices. The fourth line shows the progress of the sync. You can continue the guide while this process completes.
+
+!!! note
+    If your system is configured to [display RAID faults on LED2](/mdadm/#configure-fault-led), you should also see the red LED2 blinking while your array is (re-)syncing.
 
 ## Create and Mount the Filesystem
 
@@ -330,6 +339,9 @@ Edit the following section and replace root by your email address.
 
 Make the Red Fault LED (LED2) indicate whether an error has been detected on your array. The script below will light up LED2 if an error occurs on an array, and make LED2 blink during reconstruction of a degraded array.
 
+!!! note
+    Recent Armbian builds (version 5.68 and later) are already configured to display RAID faults on the Red Fault LED (LED2), so you can skip this section.
+
 First, create the script *mdadm-fault-led.sh*:
 
     sudo nano /usr/sbin/mdadm-fault-led.sh
diff --git a/docs/omv.md b/docs/omv.md
index 068ab57..e67831b 100644
--- a/docs/omv.md
+++ b/docs/omv.md
@@ -105,14 +105,16 @@ You can see the ongoing build / re-syncing process and get an estimated finish t
 
 !!! important
     While you could carry on with some parts of the OMV configuration during the RAID re-syncing process, we strongly advise letting this process complete first. You should see the following RAID state once re-syncing is complete: **active**.
 
+!!! note
+    If your system is configured to [display RAID faults on LED2](/mdadm/#configure-fault-led), you should also see the red LED2 blinking while your array is (re-)syncing.
+
 ![!OMV RAID Clean](/img/omv/raid10_active.png)
 
 !!! info
     Whenever you change some settings in OMV, the following banner will appear. You can immediately apply the configuration by clicking **Apply**, or you can carry on with your configuration and apply the changes at a later stage.
 
+    ![!OMV Save Settings](/img/omv/save_settings.png)
-
-
 
 ## Install LVM Plugin
 
 To have better control of storage partitioning, we will use the Linux Logical Volume Manager (LVM). To create a Logical Volume in OMV, you first need to install the LVM plugin.
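
For reference, the (re-)syncing progress that the mdadm.md hunks above refer to can be watched from the command line. A minimal sketch, assuming the array was created as /dev/md0 as in the guide:

    # Show the current state of all md arrays, including resync progress
    cat /proc/mdstat

    # Refresh the view every few seconds until the resync completes
    watch -n 5 cat /proc/mdstat

    # Detailed status of a single array
    sudo mdadm --detail /dev/md0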
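
The hunk starting at line 330 of mdadm.md sits just after the email notification step ("Edit the following section and replace root by your email address"). In mdadm this is typically the MAILADDR directive in /etc/mdadm/mdadm.conf; as an illustration only, with a placeholder address:

    # /etc/mdadm/mdadm.conf (excerpt) - placeholder address, use your own
    MAILADDR admin@example.com

    # Ask mdadm to send a test alert for every array, then exit
    sudo mdadm --monitor --scan --test --oneshot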
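
The "Configure Fault LED" section that the new notes link to installs /usr/sbin/mdadm-fault-led.sh; its full listing lives in mdadm.md and is not part of this diff. As a rough sketch only, not the script shipped in the guide, such a hook can be attached to mdadm's monitor mode through the PROGRAM directive in mdadm.conf, which calls it with the event name and the affected array. The LED sysfs path below is an assumption about the Helios4 device tree and should be checked under /sys/class/leds/.

    #!/bin/bash
    # Sketch of an mdadm event hook: PROGRAM is invoked as <event> <md-device> [<component>]
    # Assumed LED name - verify the actual entry under /sys/class/leds/ on your board
    LED=/sys/class/leds/helios4:red:fault

    case "$1" in
        Fail|FailSpare|DegradedArray)
            echo none  > "$LED/trigger"      # solid red on a detected fault
            echo 255   > "$LED/brightness"
            ;;
        RebuildStarted)
            echo timer > "$LED/trigger"      # blink while a degraded array rebuilds
            ;;
        RebuildFinished|SpareActive)
            echo none  > "$LED/trigger"      # clear the LED once the array is healthy again
            echo 0     > "$LED/brightness"
            ;;
    esac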