It’s something that I have been discussing a lot with one of my customers.
At the moment, they are hesitant to invest in a potentially very expensive SAN solution involving Fibre Channel connections. Currently, there are eight vSphere servers configured with no centralised storage, so all virtual machines are running on the local disk arrays. Therefore, there is no DRS, no HA and no vMotion! We are protecting the virtual machines only with the local server’s ability to recover from physical disk failure, in this case with onboard RAID 5 controllers.
My temporary solution was an open source iSCSI storage system until a more permanent one can be found.
However, the big question that I keep coming up against is, “What’s the performance hit I will take?”
It’s undeniable that Fibre Channel should be faster on paper, but as the article below describes, this may not be much of an issue in practice.
If we look at the way a virtual machine (and, indirectly, a physical machine) runs, the most important resources are RAM and CPU. Disk storage is, in a sense, secondary. The exceptions are bandwidth-heavy workloads such as streaming media or backup servers. Typically, the performance difference is minimal.
That is, as long as you have a properly implemented iSCSI solution. I have looked at the security considerations previously (http://invurted.com/tutorial-iscsi-security/), and performance will be at its best when iSCSI is isolated on its own infrastructure, either by dedicated physical hardware or by VLANs that separate the storage traffic.
In short, there should be very little to stop small and medium enterprises from adopting iSCSI for shared storage. Its performance is comparable to Fibre Channel in most circumstances, and it costs considerably less than most Fibre Channel solutions for only a minimal performance hit.
What weighs more: one pound of bricks or one pound of feathers? Which is faster: 2 Gb FC or 1 Gb Ethernet? Hint: Both questions have the same answer.
The area of iSCSI performance and how it compares to Fibre Channel is often misunderstood. Both of these SAN interconnects are typically measured by bandwidth with “2 Gb” FC SANs dominating the market today and “1 Gb” Ethernet used for the majority of iSCSI SANs.
Which would you say is faster: a 2 Gb FC connection or a 1 Gb Ethernet connection? It’s a trick question — they are equally fast. They both transfer data at the speed of light. Bandwidth is not a matter of speed but of size. Think about a four-lane highway versus a two-lane highway. If there are just a few automobiles traveling on either highway, drivers will be able to go the maximum speed. As more drivers travel on each road, the two-lane highway will experience a bottleneck before the four-lane highway does.
This is the same with FC and Ethernet. A 2 Gb FC interconnect has twice the bandwidth (double the number of lanes) of 1 Gb Ethernet. Bandwidth has an impact on performance when large requests are being processed. In this case, most of the work is spent transferring the data over the network making bandwidth the critical path. However, for smaller read and write requests, the storage system spends more time accessing data making the CPU, cache memory, bus speeds and hard drives more important to overall application performance.
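The large-request versus small-request point can be shown with a quick back-of-envelope calculation. The figures below (a 5 ms fixed per-I/O latency, the request sizes) are illustrative assumptions, not benchmark data:

```python
# Back-of-envelope comparison: wire-transfer time for one request on a
# 2 Gb FC link vs a 1 Gb Ethernet link, plus an assumed fixed per-I/O
# latency (disk access + protocol overhead). All figures are illustrative.

def transfer_ms(request_bytes, link_gbps):
    """Time on the wire, in milliseconds, for one request."""
    return request_bytes * 8 / (link_gbps * 1e9) * 1000

FIXED_LATENCY_MS = 5.0  # assumed disk access + protocol overhead per I/O

for size in (4 * 1024, 64 * 1024 * 1024):  # a 4 KB I/O vs a 64 MB stream
    fc = FIXED_LATENCY_MS + transfer_ms(size, 2.0)
    eth = FIXED_LATENCY_MS + transfer_ms(size, 1.0)
    print(f"{size:>10} bytes: 2Gb FC {fc:8.2f} ms, "
          f"1Gb Eth {eth:8.2f} ms, ratio {eth / fc:.2f}x")
```

For the 4 KB I/O the fixed latency dominates, so halving the bandwidth barely changes the total; for the 64 MB stream the wire time dominates and the narrower link takes nearly twice as long. That is the highway analogy in numbers.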
Unless you have a bandwidth-intensive application (e.g., streaming media or backup data), the difference in performance will be minimal. Enterprise Strategy Group (ESG) Lab has tested storage systems that support iSCSI and FC and the performance difference is minimal — ranging between 5% and 15%.
In fact, an iSCSI storage system can actually outperform a FC-based product depending on more important factors than bandwidth, including the number of processors, host ports, cache memory and disk drives and how wide they can be striped.
The slowest component of the storage performance chain is the hard disk drive. It takes a hard disk drive much longer — sometimes several thousands-percent longer — to access data in a storage system than the electronic components like processors, bus and memory. The timeline for an I/O starts with a read/write command being sent to the hard drive from the application. This is followed by long, mechanical access times waiting for the drive to move the actuator, referred to as the seek process.
The seek process is by far the slowest part of storage performance. The drive then has to wait for the platter to rotate the requested data under the head, another long mechanical process that creates latency. Next, the data is transferred from the drive to the CPU and a status handshake is performed to terminate the request. Access time, which for all disk drives comprises seek time plus rotational latency, is responsible for the majority of the “wait time.”
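The mechanical-versus-electronic gap is easy to quantify. This sketch uses assumed, typical figures for a 15K RPM enterprise drive (not any specific vendor's specification):

```python
# Illustrative arithmetic for one random disk I/O, using assumed typical
# figures for a 15K RPM enterprise drive.

AVG_SEEK_MS = 3.5        # assumed average seek time
RPM = 15000              # spindle speed
TRANSFER_MB_S = 150.0    # assumed sustained media transfer rate

# Rotational latency averages half a revolution.
rot_latency_ms = (60.0 / RPM) * 1000 / 2           # 2.0 ms

# Time to move a 4 KB block off the platter once the head is positioned.
transfer_ms = (4 / 1024) / TRANSFER_MB_S * 1000    # ~0.026 ms

access_ms = AVG_SEEK_MS + rot_latency_ms
print(f"mechanical access (seek + rotation): {access_ms:.1f} ms, "
      f"electronic transfer: {transfer_ms:.3f} ms")
```

With these assumptions the mechanical access time is roughly two hundred times longer than the data transfer itself, which is why the drives, not the interconnect, dominate small-I/O performance.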
Traditional storage systems are typically limited in the number of drives across which they can stripe data. Many traditional storage systems can only stripe up to 16 drives, while more advanced products can stripe across hundreds of drives. Striping data across a large number of drives allows a system to leverage all the actuators, which work in parallel to make read/write functions a much more efficient process. Striping data across many drives increases performance and essentially eliminates the need for tuning performance and determining hot spots. Naturally, there is a cost associated with acquiring more hard drives, so a balance and consideration of price/performance is important.
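The parallelism argument can be modelled very simply: each spindle can service a limited number of random IOPS, and striping lets spindles work in parallel. The per-drive figure below is an assumption for illustration, and the model deliberately ignores controller limits, cache and RAID write penalties:

```python
# A rough, idealised model of why wide striping helps random I/O.

IOPS_PER_DRIVE = 180  # assumed random IOPS for a single 15K RPM drive

def aggregate_iops(num_drives, iops_per_drive=IOPS_PER_DRIVE):
    """Idealised aggregate random IOPS for a stripe group (ignores
    controller, cache and RAID overheads)."""
    return num_drives * iops_per_drive

print(aggregate_iops(16))  # 2880 - a traditional 16-drive stripe limit
print(aggregate_iops(48))  # 8640 - a 48-drive wide stripe group
```

Even this crude model shows why widening the stripe group pays off; real systems fall short of perfect scaling, but the direction of the effect matches the ESG Lab results below.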
In ESG Lab head-to-head testing, we configured a storage system using traditional striping methods and another one using wide striping. ESG Lab used the same workloads to compare the performance of the traditionally configured system and that of a system using a wide stripe group of 48 drives. The stripe group of 48 drives significantly outperformed the traditional method.
A comparison of Iometer results revealed a 44% improvement in the number of disk I/Os per second when switching from traditional volumes to a 48-drive wide stripe group. That is an amazing performance difference, much more than the 5% to 15% difference that we found between iSCSI and FC.
Some iSCSI storage systems may not have well-tuned, performance-optimized iSCSI target drivers. This is the fault of the storage vendor, who needs to go back to their R&D group and do a better job. Additionally, ESG Lab has found that using a TCP/IP offload engine (TOE) on the iSCSI target port within the storage system can have a measurable positive impact on performance. Some iSCSI storage systems do not have integrated TOE support.
The architecture of the storage system, the speed and number of processors, the amount of memory and the intelligence of its caching algorithms, the speed of the disk drives and the number of drives in a stripe group, the number of host ports and the backend interconnect all play a major role in performance. I recommend that you evaluate the storage system based on all of the above criteria. It is the storage system itself that will make the bigger difference. The speed of iSCSI is not the issue.