In my last article, back in 2001, I envisioned network storage accessible from anywhere at any time. Today we are starting to see network storage pools become accessible through techniques such as cloud storage. EMC’s Atmos onLine provides customers instant access to online storage, while “NetApp, VMware and Cisco have been collectively working on a joint Cloud solution …”, wrote NetApp’s Val Bercovici. Companies such as Mozy already offer network storage services to laptop users for a fee. Individuals can use network storage as an extension of, or replacement for, their directly attached drive, so that if their DA HDD fails, they can still rely on their data being available over a network-attached storage drive.
Now we are indeed approaching true network storage centralization through virtualization, taking a technological leap to centralized virtual storage built on converging technologies and moving toward more efficient, unified, green technology where data is available all the time. But do we have mature, evolved storage building-block components and software layers?
The historical trend of storage components has made the current momentum behind virtual network storage technologies such as cloud storage possible. The basic underlying building blocks are storage interfaces such as SCSI, Fibre Channel (FC) and ATA. All three were accepted by the market and have been maturing over the past two decades or so, with FC arriving last. Other technologies targeted at specific markets, such as ESDI, failed. FC customers benefit from storage scalability, large SANs and a point-to-point interface; SCSI is popular for medium-sized pools of data but uses a parallel interface; and ATA is often used for lower-end applications. As the ATA, SCSI and FC markets grew, the interfaces gained maturity, but pressure to expand their capabilities and features has gradually caused them to overlap one another.
In the early 2000s, ATA was pushed to mimic FC’s point-to-point feature, and SATA was born. SCSI was mature but could not do point-to-point, so SAS was approved and filled that gap. SCSI was also pressured to piggy-back on top of networking protocols such as TCP/IP through tunneling, so that it could reach storage volumes farther away, leading to the birth of block-I/O network storage SANs (i.e. iSCSI). Other network storage encapsulations included FCIP and iFCP.
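The tunneling idea behind iSCSI is simple enough to sketch: a SCSI command descriptor block (CDB) is wrapped in an iSCSI-style header so it can travel over TCP/IP to a remote volume. The Python sketch below is purely illustrative; the field layout is simplified and is not the wire-accurate iSCSI Basic Header Segment:

```python
import struct

def scsi_read10_cdb(lba, blocks):
    """Build a 10-byte SCSI READ(10) CDB (opcode 0x28)."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def encapsulate_iscsi(cdb, payload_len=0):
    """Wrap a SCSI CDB in a simplified 48-byte iSCSI-style header
    so it can be carried over a TCP/IP connection. Field layout is
    illustrative only, not the real iSCSI BHS."""
    OPCODE_SCSI_CMD = 0x01                  # iSCSI SCSI Command opcode
    header = struct.pack(">B3xI", OPCODE_SCSI_CMD, payload_len)
    header += cdb.ljust(16, b"\x00")        # CDB field padded to 16 bytes
    header += b"\x00" * (48 - len(header))  # pad to 48-byte header size
    return header

pdu = encapsulate_iscsi(scsi_read10_cdb(lba=2048, blocks=8))
print(len(pdu))  # 48
```

The point of the exercise is that the block-level command is untouched; only a network envelope is added, which is exactly why existing SCSI infrastructures could adopt iSCSI with so little disruption.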
Meanwhile, smarter storage software layers, such as new distributed filesystems and cloud storage, were being developed to take advantage of these new interfaces. A variety of cluster filesystems, together with a multitude of techniques to map many drives into simple centralized volumes, have been gaining popularity. For example, the iSCSI stack found a niche when customers discovered they could still use their SCSI-based storage server pools over the network. As a result, IDC predicts a 75.8% compound annual growth rate in iSCSI revenue between 2005 and 2010. IDC also predicts that iSCSI revenue will top $5 billion, or 20% of the external disk storage market, in 2010, up from $305 million, or 3% of the external disk market, in 2005. So the market has nodded to encapsulation as the future direction.
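The "map many drives into one centralized volume" idea can be sketched with a toy RAID-0-style address translation. The stripe size and drive count below are assumptions for illustration, not anyone's product parameters:

```python
STRIPE_SIZE = 128   # blocks per stripe unit (assumed for illustration)
NUM_DRIVES = 4      # physical drives pooled into one virtual volume

def map_virtual_block(vblock, stripe=STRIPE_SIZE, drives=NUM_DRIVES):
    """Map a virtual block address in the pooled volume to a
    (drive index, physical block) pair using simple striping,
    a toy model of how a volume manager presents many drives
    as one centralized volume."""
    stripe_unit = vblock // stripe        # which stripe unit overall
    offset = vblock % stripe              # offset inside that unit
    drive = stripe_unit % drives          # round-robin across drives
    pblock = (stripe_unit // drives) * stripe + offset
    return drive, pblock

print(map_virtual_block(0))    # (0, 0)
print(map_virtual_block(128))  # (1, 0)  -- second stripe unit, next drive
```

Real volume managers and cluster filesystems add redundancy, rebalancing and metadata on top, but the core trick is the same: the consumer sees one flat address space.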
As iSCSI-based solutions, albeit IP-based storage, started to show financial traction and acceptance, the trend toward converging specifications (i.e. FCoE, AoE, …) for Ethernet-based storage, together with smarter handling of storage layers such as dedup, cloud storage, SAN, NAS, DR, and distributed filesystems such as ZFS, began to pick up steam. These new encapsulation/tunneling techniques pushed multiple interfaces to converge. Companies have been gobbling each other up to gain know-how so that their storage portfolios can satisfy their new technology roadmaps, which call for convergence and more intelligent storage-layering techniques. Dell bought EqualLogic last year (2008) to add IP storage to its fleet. EMC bought Data Domain to add its smart dedup to its storage fleet. It looks like EMC’s thrust toward cloud storage is right on the money. Broadcom tried, unsuccessfully, to acquire Emulex and with it FCoE, a brand-new Ethernet storage technology, for its fleet of storage solutions.
It is clear that all the vendors are focused on, and aware of, where we will be in the next three to seven years and are wisely pushing to acquire the missing pieces of the network storage puzzle. The key is how to achieve centralized storage through virtualization and which popular building-block technologies can get you there. With the successful adoption of iSCSI, we know what the market wants next: to adopt network storage technologies with gradual and minimal interruption to existing storage infrastructures. iSCSI, FCoE and AoE are good candidates; each gradually brings network connectivity to existing FC, SCSI and ATA installations.
FCoE has emerged as the most recent building-block component of network storage technology, standardized by the T11 folks after two years of hard work. You can see all 180 pages here. FCoE, which can be thought of as the big brother of AoE, uses IEEE 802.3 Ethernet to encapsulate FC frames. The accompanying Ethernet enhancements are called CEE (Converged Enhanced Ethernet). With CEE features such as priority-based flow control and congestion notification, OEMs can finally take basic building blocks such as FCoE-capable adapters, or CNAs (Converged Network Adapters), and FCoE-capable switches and create full solutions that quench the thirst of IT managers who want fewer HBAs, fewer cables, lower power usage and lower cost without having to learn new management tools. INCITS is now in the process of making FCoE an ANSI standard.
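At the frame level, FCoE is again just an envelope: an unmodified FC frame rides inside an Ethernet frame carrying the FCoE EtherType (0x8906). The Python sketch below shows the idea; the SOF/EOF delimiter codes and MAC addresses are illustrative assumptions, reserved fields are omitted, and a real CNA would also compute the Ethernet FCS:

```python
import struct

FCOE_ETHERTYPE = 0x8906   # IEEE-assigned EtherType for FCoE

def encapsulate_fcoe(fc_frame, dst_mac, src_mac):
    """Wrap a raw Fibre Channel frame in an Ethernet frame with the
    FCoE EtherType. Simplified: reserved/version fields are dropped
    and no FCS is appended."""
    eth_hdr = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    sof, eof = b"\x2e", b"\x41"   # example start/end-of-frame codes
    return eth_hdr + sof + fc_frame + eof

frame = encapsulate_fcoe(b"\x00" * 36,  # placeholder FC frame
                         dst_mac=b"\x0e\xfc\x00\x00\x00\x01",
                         src_mac=b"\x02\x00\x00\x00\x00\x02")
print(hex(struct.unpack(">H", frame[12:14])[0]))  # 0x8906
```

Because the FC frame itself is untouched, an FCoE-capable switch can strip the Ethernet envelope and forward the inner frame to a traditional FC SAN, which is what makes the gradual-migration story credible.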
You can install and run a simple software version of an FCoE network by downloading and installing Linux 2.6.29 as an FCoE initiator and OpenSolaris build 112 as an FCoE target (see how). An FCoE initiator was also recently added to OpenSolaris build 122 (see how-to here). This software version of FCoE essentially turns your 1Gb or 10Gb NIC into a CNA. You can then share your target’s storage with the FCoE initiators over the CNA, or jump through a CEE-based switch to an FC SAN. Many companies, such as QLogic and Emulex, have invested in HBAs that support FCoE, i.e. CNAs. On the switch side, Brocade and Cisco, which seem to always be at odds with each other, have both adopted FCoE. The Brocade 8000 FCoE switch and Cisco Nexus 5000 are already available to full-solution OEMs. IBM and NetApp have both recently announced new solutions built entirely from FCoE components from Cisco, Brocade, QLogic and Emulex.
FCoE is another piece of the storage virtualization puzzle, one that allows separate technologies to converge and gives end users a gradual, simple, clean path to data that will soon be accessible from anywhere at any time. Technologies such as FCoE also greatly reduce TCO and are ultra green. Dell’Oro Group forecasts Fibre Channel revenue from switches and host bus adapters (HBAs) of $2.36 billion this year, $2.68 billion in 2010 and $2.86 billion in 2011, by which time FCoE sales are expected to outgrow FC.
Many companies are currently working to take advantage of ever-higher bandwidth offerings (10Gb today, and in the future the 40Gb or 100Gb envisioned by Brocade’s chief technology officer David Stevens) in encapsulation technologies, together with smarter storage-related software layers, so that central storage can be achieved through more efficient and smarter virtualization. The future user will have private data available all the time. Sophisticated, efficient hardware and software will make possible this seemingly easy network-based access to private data, which today resides on the directly attached drive in your system, laptop or data center. As network data backup and restore interfaces mature, you will start seeing much smaller systems, a side benefit of network storage. This by itself could revolutionize laptop form factors, because you would no longer need a power-consuming, heat-generating hard drive, along with the power supply and fan to keep it working. Drives become virtually mapped from data warehouses of your choice through a universal cloud storage network interface.
The final piece of the puzzle is to balance the flow of secure data on the storage network framework that connects all storage (ATA-based, SCSI-based, FC-based) together. When we achieve the ultimate super storage network infrastructure (it reminds me of the Borg, http://en.wikipedia.org/wiki/Borg_(Star_Trek)), which allows full virtualization, we can then start to tune and synchronize all hardware and software components, adopting intelligent software and logic to monitor the entire traffic and achieve a fine balance of secure data flow.
Guest Contributor: Farid Bavandpouri, Storage Engineer at Super Micro Computer, Inc. Originally titled “Convergence to Green Computing”.