The first part of this series focused on what I consider to be the basic building block of all enterprise class storage solutions – the hard drive. In this segment I am going to cover RAID, which stands for Redundant Array of Independent Disks. I am focusing on RAID next because it is the logical combining of individual hard drives into a single virtual drive.
Note that I used the term “virtual drive”. This is an essential point to keep in mind because there is a layer of abstraction between the physical devices and the resource that your customers see. This adds complexity to the solution, but it also is the secret that allows RAID to be so resilient when hardware failure occurs.
Why use RAID at all?
Basically there are only three reasons to set up RAID:
- Increase the available storage space.
- Increase the performance capabilities of your storage solution.
- Protect data against hard drive failure.
These reasons easily justify the complexity that comes with provisioning RAID storage solutions. When you consider that storage solution providers have made RAID very easy to set up and maintain (in many cases doing all of the work behind the scenes), there are no good reasons not to use a RAID-based solution.
The many flavors of RAID.
There are different levels of RAID, and each provides one or more of the previously mentioned benefits of RAID. Here are three of the most commonly used RAID levels:
The first is RAID 0, which is disk striping across multiple drives. Think of a stripe as being a sentence:
The quick brown fox jumped over the lazy dog.
Now each word in the sentence represents a block of data to be stored on the logical drive. The logical drive is made up of multiple physical drives (two in this example), and the stripe of blocks is written across both. This results in faster performance during both read and write operations because the bandwidth of both drives is available to the system. The table below shows how the data would look once written across both drives:
| DRIVE #1 | DRIVE #2 |
|----------|----------|
| The      | quick    |
| brown    | fox      |
| jumped   | over     |
| the      | lazy     |
| dog      |          |
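The striping just described can be sketched in a few lines of Python. This is a toy model of what a RAID 0 controller does, not how real firmware works; the `stripe` helper and the two-drive layout are illustrative assumptions:

```python
def stripe(blocks, num_drives):
    """Distribute data blocks round-robin across drives (a toy RAID 0 model)."""
    drives = [[] for _ in range(num_drives)]
    for i, block in enumerate(blocks):
        # Block i lands on drive (i mod num_drives), just like the table above.
        drives[i % num_drives].append(block)
    return drives

words = "The quick brown fox jumped over the lazy dog".split()
drive1, drive2 = stripe(words, 2)
# drive1 ends up with: The, brown, jumped, the, dog
# drive2 ends up with: quick, fox, over, lazy
```

Notice that neither drive holds the whole sentence – which is exactly why a single drive failure in RAID 0 destroys the entire volume.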
RAID 1 is disk mirroring, which means that you create a duplicate of one drive by writing the same data again to a second drive. You might be able to configure the system to read from the two drives independently, but you will very likely suffer a slight performance hit on write operations, and you will lose half of your physical drive space to the duplication. Using the same example as before, your drives would look like this once the data was written:
| DRIVE #1 | DRIVE #2 |
|----------|----------|
| The      | The      |
| quick    | quick    |
| brown    | brown    |
| fox      | fox      |
| jumped   | jumped   |
| over     | over     |
| the      | the      |
| lazy     | lazy     |
| dog      | dog      |
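Mirroring is simple enough to model in a few lines. This sketch (the `Mirror` class is a hypothetical illustration, not a real driver) shows why writes cost double while reads can come from either copy:

```python
class Mirror:
    """Toy RAID 1 volume: every block is duplicated on two drives."""

    def __init__(self):
        self.drives = [[], []]

    def write(self, block):
        # One logical write becomes two physical writes - the RAID 1 penalty.
        for drive in self.drives:
            drive.append(block)

    def read(self, index, drive=0):
        # Either copy can satisfy the read.
        return self.drives[drive][index]

m = Mirror()
for word in "The quick brown fox".split():
    m.write(word)
# m.drives[0] and m.drives[1] are now identical
```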
Worse write performance, and you basically get the storage space of a single drive for the price of two – so what is the benefit of RAID 1? With RAID 0, losing either drive would destroy the entire volume. Now imagine losing drive #2 in our RAID 1 volume, leaving only drive #1. Here is what you would see:

| DRIVE #1 |
|----------|
| The      |
| quick    |
| brown    |
| fox      |
| jumped   |
| over     |
| the      |
| lazy     |
| dog      |

Every block of data is still intact and available.
Think about that for a moment – the drive was running fine for three years and then suddenly failed. One of two identical drives, installed at the same time and running the same load under the exact same conditions, has died. What are the odds that your second drive is going to fail soon? This is one reason why it is a good idea to have scheduled hardware refreshes (which will probably be a topic for a future article).
Now this is where the magic really starts to happen, because with RAID 5 we introduce distributed parity. When you set up a RAID 5 volume you need at least three physical drives, and you should probably have a dedicated disk controller to reach acceptable performance levels (to be covered in a future article as well). The gain is that you can lose any one of the drives in the volume and then rebuild it using the data and parity spread across the remaining drives. This speeds up the rebuild process significantly, and the multiple drives improve read performance in many situations (you take a hit on write performance, but this is usually acceptable in most cases).
Distributed parity is the big gain from RAID 5. It lessens the risk from a drive failure while at the same time increasing the amount of drive space that is available to the volume at a lower cost.
The drive size for a RAID 5 volume is determined by a simple formula:
s * (n – 1) = v
In the above formula s represents the size of the smallest drive in the volume, but the best practice is to use all drives of the same size in the volume. The n represents the total number of drives, and because the distributed parity will be written with every stripe of data across the drives you lose the equivalent of one drive’s worth of space from the volume (hence the – 1). The result is v which is the amount of drive space that is actually usable on the volume.
If we were to build a RAID 5 volume using four 2TB drives the formula would be written as follows:
2 * (4 – 1) = 6
This results in our volume having 6TB of usable space. Unlike RAID 1 our cost is much better with RAID 5 in that instead of dedicating an entire drive to resiliency we use 25% of each drive for the same purpose.
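The capacity formula translates directly into code. A minimal sketch (the `raid5_usable` helper name is my own; sizes are in TB):

```python
def raid5_usable(smallest_drive, num_drives):
    """Usable RAID 5 capacity: s * (n - 1), losing one drive's worth to parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 requires at least three drives")
    return smallest_drive * (num_drives - 1)

print(raid5_usable(2, 4))  # four 2TB drives -> 6 (TB usable)
```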
And how exactly is that 25% used for resiliency? The table below demonstrates how binary data is written to each of the drives in a 4 drive RAID 5 volume:
| DRIVE #1 | DRIVE #2 | DRIVE #3 | DRIVE #4 |
|----------|----------|----------|----------|
| 0        | 1        | 0        | 1p       |
| 1        | 0        | 0p       | 1        |
| 1        | 1p       | 1        | 1        |
| 0p       | 0        | 0        | 0        |
This might seem incredibly confusing, but remember that computers use binary as the very foundation for their operations. Everything is either a bit in the form of a 1 or a 0, which means that every stripe of data adds up to either an odd or even number. The parity bit does not contain any data, but it records whether the stripe total should be odd or even. The parity bit also has a special “mark” so that it can be identified as the parity bit. There are different forms of parity, but for now I want to keep things as simple as possible. If the parity bit is 0p the total for the stripe is even, and if the parity bit is 1p the total for the stripe is odd.
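Under this even/odd scheme the parity bit can be computed by simply summing the data bits in a stripe. A minimal sketch (real controllers use XOR at the hardware level, which gives the same result for single bits):

```python
def parity_bit(data_bits):
    """Even/odd parity: returns 1 ("1p") if the stripe total is odd, 0 ("0p") if even."""
    return sum(data_bits) % 2  # equivalent to XORing all the bits together

print(parity_bit([0, 1, 0]))  # -> 1, written as "1p" in the tables here
print(parity_bit([1, 0, 1]))  # -> 0, written as "0p"
```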
If we were to lose drive #3, our data would look like this after replacing the bad drive:
| DRIVE #1 | DRIVE #2 | DRIVE #3 | DRIVE #4 |
|----------|----------|----------|----------|
| 0        | 1        | X        | 1p       |
| 1        | 0        | X        | 1        |
| 1        | 1p       | X        | 1        |
| 0p       | 0        | X        | 0        |
In this case “X” means that no data has been written to the drive yet. Now the system starts to rebuild the data by calculating the total for a stripe.
Stripe 1: 0 + 1 = 1
The parity bit for stripe 1 is 1p, so the total result should be an odd number. 1 is an odd number, therefore the missing bit is a 0. If it were another 1 the parity bit would be a 0p.
Stripe 2: 1 + 0 + 1 = 2
The parity bit for this stripe was stored on the failed drive, so the system must write a new parity bit based upon the total for the stripe. The new parity bit is 0p since the total is an even number.
Stripe 3: 1 + 1 = 2
The parity bit is 1p, so the total for the stripe should be an odd number. The total of 2 is not an odd number, so the missing bit must be a 1.
Stripe 4: 0 + 0 = 0
The parity bit is 0p, so the missing bit must be a 0 because the parity bit says that the total is not an odd number (for our purposes a total of zero will be treated as an even number).
Of course a computer does this much faster than any human being could ever hope to, but this gives you a very rough idea of what happens when distributed parity is used to recreate lost data.
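The rebuild steps above can be sketched the same way: for each stripe, the missing bit is whichever value makes the surviving data bits add up to match the parity. A toy model of that logic (the `rebuild_bit` helper is my own naming, not a real controller API):

```python
def rebuild_bit(surviving_bits, parity):
    """Recover a lost data bit in a stripe using even/odd parity.

    If the surviving total already matches the parity, the missing bit is 0;
    otherwise it must be 1. That reduces to a simple modulo-2 sum.
    """
    return (sum(surviving_bits) + parity) % 2

# Stripes 1, 3, and 4 from the worked example (stripe 2 lost its parity
# bit rather than data, so its parity is simply recomputed instead):
print(rebuild_bit([0, 1], parity=1))  # stripe 1 -> missing bit is 0
print(rebuild_bit([1, 1], parity=1))  # stripe 3 -> missing bit is 1
print(rebuild_bit([0, 0], parity=0))  # stripe 4 -> missing bit is 0
```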
This has been a highly simplified overview of RAID, and I only covered three RAID levels. You should always read the technical documentation that comes with any solution you implement when designing or administering a production environment.
Yet I do hope that this gives you some insight into what RAID does on a fundamental level and why it is so important to an enterprise class storage solution. You need something more robust than a single drive to guarantee data integrity, and there are no drives large enough to meet the space requirements of a true enterprise class environment (or even smaller non-enterprise class environments).
If you have any questions please leave them in a comment below. In the next part of this series I will focus on explaining the fundamental differences between a SAN and a NAS. Until then, keep learning and challenging yourself with new technologies even if they are just new to you.