ZFS for Dummies
SGI NAS ZFS Datasheet
For more than a decade, storage system performance has remained largely stagnant while drive capacities and application performance demands have steadily increased. The result is an expensive problem: storage users are forced to buy costly, high-performance hard disk drives (HDDs) for a moderate performance boost (by reducing I/O latency), or to over-buy capacity in order to meet performance requirements. With the advent and falling price of flash, storage vendors are integrating it into their products to solve this problem. SGI NAS ZFS technology leads the industry in its ability to automatically and intelligently use flash in a storage system that offers the appropriate capacity and performance at a total cost dramatically lower than most legacy storage systems, and lower still than legacy vendors who have begun to use flash as well.
The Parts of a ZFS Hybrid Storage Pool
In a ZFS Hybrid Storage Pool (HSP), there are typically three varieties of hardware: DRAM, flash, and spinning HDDs, with flash used in two distinct ways (as a read cache and as a write log). The following sections explore each of these.
A New Storage Parameter: Working Set Size
For legacy storage systems, sizing means determining the necessary capacity, IOPS, and throughput, then doing some simple math to determine the number of spindles that could provide those numbers (with some thought given to parity overhead, controller limitations, etc.). As the industry moves toward more sophisticated caching methodologies in storage systems such as SGI NAS, a new parameter for expressing storage needs has emerged: the "Working Set Size" (WSS), the subset of total data that is actively accessed (e.g., 500GB of this quarter's sales data out of a total database of 20TB). Knowing the WSS makes it possible to size ARC, L2ARC, and even HDDs more accurately, but few applications today have an awareness of WSS. SGI NAS is working with its partners in the industry to develop tools that make this measurement easier, and to provide guidance based on its experience with thousands of deployments.
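To make the idea concrete, here is a back-of-envelope sketch of how a WSS estimate might drive tier sizing. The function name and the DRAM fraction are illustrative assumptions for this example, not SGI sizing guidance.

```python
# Back-of-envelope tier sizing from a Working Set Size (WSS) estimate.
# The 25% DRAM fraction below is an assumed rule of thumb for illustration.

def size_tiers(wss_gb, total_gb, dram_fraction=0.25):
    """Suggest DRAM (ARC) and flash (L2ARC) capacities for a given WSS.

    dram_fraction: assumed share of the WSS to keep resident in ARC.
    """
    arc_gb = wss_gb * dram_fraction    # hottest slice of the WSS in DRAM
    l2arc_gb = wss_gb - arc_gb         # remainder of the WSS on flash
    hdd_gb = total_gb                  # the full data set lives on spinning HDDs
    return {"ARC_GB": arc_gb, "L2ARC_GB": l2arc_gb, "HDD_GB": hdd_gb}

# The example from the text: a 500GB working set out of a 20TB database.
print(size_tiers(500, 20_000))
```

In practice the right fractions depend on access patterns and latency targets, which is why measuring the WSS, rather than guessing it, matters.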
A Few Frequently Asked Questions
Doesn't flash wear out quickly when used for writes?
Yes, flash does wear out, but today's enterprise flash vendors have done much to improve wear-leveling algorithms, reserve spare flash cells, and otherwise extend the longevity of flash in enterprise devices. So don't buy a USB thumb drive for your ZIL: know that not all SSDs are created equal, consult the SGI NAS Hardware Supported List (HSL), or contact a sales representative or sales engineer for specific guidance.
What happens if a flash drive fails?
First of all, no data is lost; losing a ZIL/SLOG or L2ARC device only impacts performance. Any data in the L2ARC is also on the spinning HDDs, so losing an L2ARC device simply means that requests for the data that was on the failed flash drive must now be serviced by reads from the spinning HDDs. If multiple flash drives are used for L2ARC, they are used in a striped fashion, and the remaining devices continue to be used if one fails. Any data in the ZIL/SLOG is also in the ARC until it is flushed to the spinning HDDs, which SGI NAS does every five seconds by default. Thus, data would be lost only if the SLOG device failed and controller power were lost within the same five seconds. This is probabilistically extremely rare, but many customers nonetheless prefer the comfort of mirrored SLOG devices, so options exist to either stripe or mirror data across SLOG devices.
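To see how rare that coincidence is, here is a back-of-envelope estimate. The failure and outage rates below are made-up assumptions for illustration; only the five-second flush window comes from the text.

```python
# Rough estimate of the coincidence described above: an SLOG failure
# followed by controller power loss within the 5-second flush window.
# The two rates below are assumed values, not measured figures.

SECONDS_PER_YEAR = 365 * 24 * 3600
FLUSH_WINDOW_S = 5                 # SGI NAS default flush interval (from the text)

ssd_failures_per_year = 0.02       # assumed 2% annual SLOG device failure rate
power_losses_per_year = 2          # assumed two controller power losses per year

# Chance that a power loss lands inside the 5s window following a failure.
p_power_in_window = power_losses_per_year * FLUSH_WINDOW_S / SECONDS_PER_YEAR
expected_loss_events_per_year = ssd_failures_per_year * p_power_in_window

print(f"{expected_loss_events_per_year:.2e} expected data-loss events per year")
```

Even with these pessimistic inputs the expected rate is on the order of one event per hundred million years, which is why mirroring the SLOG is a comfort measure rather than a necessity for most deployments.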
Can I put L2ARC in the head nodes in HA Clusters?
Yes, but there is a tradeoff to consider. Most SGI NAS HA Cluster deployments consist of two server "head nodes" (containing CPUs, DRAM for ARC, and PCIe cards for client-side Ethernet/Fibre connectivity and back-end storage connectivity) and JBODs with shared storage that can be physically accessed by either head node if one fails. Placing the L2ARC devices in the JBODs means they can be used by either head node, regardless of the state of the cluster. If drive slots are scarce, however, it might make sense to use slots in the head nodes for L2ARC and accept the temporary performance impact (of some SSDs being unusable while a head node is down) until the cluster returns to its optimal condition.
Can I use PCIe-based flash devices?
Yes, but the tradeoff here is much like that described in the preceding answer. PCIe-based flash devices obviously must reside in the head nodes, so if a head fails or is powered off, the other node cannot utilize that flash. (There is no cross-node cache mirroring in ZFS today.) While that is the downside, the upside is that PCIe-based flash devices offer lower I/O latencies than SSD flash devices so, where performance is most critical, some customers prefer to have PCIe-based flash devices in the head nodes and then plan to restore the HA Clusters to the optimal state as quickly as possible if something should happen.
What about an all-SSD pool?
Many SGI NAS customers use all-SSD pools. Where the budget allows, this offers a very low-latency solution and removes the benefit of a separate L2ARC and SLOG (unless SSDs with differing performance characteristics are used).