“A few weeks ago I had the distinct displeasure of waking up to a series of emails indicating that several RAID arrays on a remote system had degraded. The remote system was still running, but one of the hard drives was pretty much dead.

“Upon logging in, I found that four out of six RAID devices for a particular drive pair were running in degraded mode: four partitions of the /dev/sdf device had failed; the two partitions still operational were the /boot and swap partitions (the system runs three RAID1 mirrored pairs, for a total of six physical drives).

“Checking the SMART status of /dev/sdf showed that SMART information on the drive could not be read. It was absolutely on its last legs. Luckily, I had a spare 300GB drive with which to replace it, so the removal and restructure of the RAID devices would be easy.”
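The workflow described above can be sketched with `mdadm` and `smartctl`. This is a minimal sketch, not the author's exact commands: the device names `/dev/md2`, `/dev/sdf3`, the surviving mirror `/dev/sde`, and the replacement drive `/dev/sdg` are all hypothetical stand-ins, and every command here requires root and real block devices.

```shell
# Check which md arrays are degraded; failed members are flagged (F)
# and a degraded RAID1 mirror shows [U_] instead of [UU].
cat /proc/mdstat
mdadm --detail /dev/md2

# Try to read SMART data from the suspect drive; on a drive this far
# gone the query often fails outright, as it did here.
smartctl -a /dev/sdf

# For each degraded array, mark the dead member as failed (if the
# kernel has not already done so) and remove it from the array.
mdadm /dev/md2 --fail /dev/sdf3
mdadm /dev/md2 --remove /dev/sdf3

# After physically swapping in the spare drive (appearing here as
# /dev/sdg, a hypothetical name), copy the partition layout from the
# surviving mirror, then add the new partitions so the arrays resync.
sfdisk -d /dev/sde | sfdisk /dev/sdg
mdadm /dev/md2 --add /dev/sdg3

# Watch the rebuild progress.
cat /proc/mdstat
```

Repeating the `--fail`/`--remove`/`--add` cycle for each of the four degraded arrays restores full redundancy once the resync completes.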
Replace a failed drive in Linux RAID