

There are two types of performance to look at with all storage: reading and writing. In terms of RAID, reading is extremely easy and writing is rather complex. Read performance is effectively stable across all types. To make discussing performance easier we need to define a few terms, as we will be working with some equations. In our discussions we will use “N” to represent the total number of drives, often referred to as spindles, in our array, and “X” to refer to the performance of each drive individually. This allows us to talk in terms of relative performance as a factor of the drive performance. We can abstract away the RAID array and not have to think in terms of raw IOPS (Input/Output Operations Per Second). This is important, as IOPS are often very hard to define; what we can do is compare performance in a meaningful way by speaking of it in relation to the individual drives within the array. It is also important to remember that we are talking only about the performance of the array itself, not an entire storage subsystem.
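As a quick illustration of this notation, the sketch below (hypothetical drive count and per-drive IOPS figures, not values from this article) relates a relative performance factor expressed in terms of N and X back to raw IOPS, which is the relationship the rest of the discussion relies on.

```python
# Illustrative only: relating the relative notation (multiples of X) to raw IOPS.
# The drive count and per-drive IOPS below are assumed example values.

N = 8      # total number of drives (spindles) in the array
X = 150    # assumed IOPS delivered by a single drive of this type

# Reads can be serviced by every spindle, so read performance is commonly
# modeled as N times X; in the relative notation that is simply "8X" here.
relative_read_factor = N

raw_read_iops = relative_read_factor * X
print(f"Relative read performance: {relative_read_factor}X")
print(f"Raw read performance: {raw_read_iops} IOPS")
```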

Artifacts such as memory caches and solid state caches will do amazing things to alter the overall performance of a storage subsystem. But they will not fundamentally change the performance of the array itself under the hood. There is no simple formula for determining how different cache options will impact the overall performance. Suffice it to say that the effect can be very dramatic, depending heavily not only on the cache choices themselves but also on the workload. Even the biggest, fastest, most robust cache options cannot change the long-term, sustained performance of an array.

RAID is complex and many factors influence the final performance. One is the implementation of the system itself. A poor implementation might introduce latency, or it may fail to use the available spindles (such as having a RAID 1 array read only from a single disk instead of from both simultaneously!). There is no easy way to account for deficiencies in specific implementations, so we must assume that all are working to the limits of the specification; it is primarily hobby and consumer RAID systems that fall short in this respect. Some types of RAID also have dramatic amounts of computational overhead associated with them while others do not. Primarily, parity RAID levels require heavy processing in order to handle write operations, with different levels requiring different amounts of computation for each operation. This introduces latency, but does not curtail throughput. This latency will vary, however, based on the implementation of the RAID level as well as on the processing capability of the system.
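To make the write-side overhead concrete, here is a minimal sketch of the single-parity (XOR) calculation that parity RAID levels such as RAID 5 perform when writing; the stripe layout, block sizes, and data values are purely illustrative and not taken from this article.

```python
# Illustrative only: the XOR parity computation behind single-parity RAID writes.
# Block contents below are made-up example values.

def xor_parity(blocks):
    """Return the parity block: the byte-wise XOR of all supplied blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# A hypothetical stripe of three data blocks (one per data spindle).
stripe = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_parity(stripe)

# Because parity must be recomputed on every write, each write costs extra
# work; as a side benefit, any one lost block can be rebuilt from the rest.
rebuilt = xor_parity([stripe[1], stripe[2], parity])
assert rebuilt == stripe[0]
print("parity:", parity.hex(), "| rebuilt first block:", rebuilt.hex())
```

Double-parity levels perform a second, more complex calculation on top of this, which is one way different levels end up with different amounts of computation per write.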
