I recently purchased a Lenovo EMC PX2-300d 2-bay NAS and wanted to establish a performance baseline for future troubleshooting. For details on the configuration and the performance tests I conducted, continue reading.
Storage Configuration
A Lenovo EMC PX2-300d (firmware 3.3.4.29754) populated with a single Seagate ST3000DM001 3TB 7200 RPM Barracuda 3.5″ internal desktop hard drive. The drive appears on the list of HDDs and SSDs approved for the px2-300d.
Host Configuration
Apple Mac mini quad-core i7/2.3GHz/16GB, with VMware vSphere ESXi 5.1 U1 installed on a local Samsung 840 Pro Series 2.5-Inch 256 GB SATA Solid State Drive (datastore1).
Network Configuration
The Management Network and VM Network port groups on vSwitch0 are connected to the 192.168.1.x network using the onboard Broadcom BCM57766 network adapter (vmnic0). vmnic0 is connected to a Cisco 3560 Compact Switch (WS-C3560CG-8PC) and is not used for storage traffic.
A VMkernel port configured with the IP address 10.10.10.202 was created on a second vSwitch (vSwitch1). The Apple Thunderbolt to Gigabit Ethernet Adapter (BCM57762, vmnic1) is directly connected to Ethernet interface 2 (10.10.10.102) of the Lenovo EMC PX2-300d with a Belkin 1′ Cat6 patch cable.
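This layout can be sanity-checked programmatically. A minimal pyVmomi sketch, assuming a direct connection to the ESXi host; the host name and credentials below are placeholders, not values from this setup:

```python
# List the VMkernel interfaces and their port groups to confirm the
# dedicated 10.10.10.202 storage interface lives on vSwitch1.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()          # lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    for vnic in host.config.network.vnic:
        print(vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress)
finally:
    Disconnect(si)
```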
There are several methods for testing storage performance; I decided to use the following:
- Virtual Appliance – VMware Labs – VMware I/O Analyzer
- Virtual Machine – HD Tune Pro 5.50 & Intel NAS Performance Toolkit (NASPT)
Virtual Appliance – VMware I/O Analyzer Results
In the VMware I/O Analyzer results, the iSCSI read throughput (IOPS) and read bandwidth (MB/s) numbers are almost identical to those collected for NFS, while iSCSI write throughput (IOPS) and write bandwidth (MB/s) are 9.4% better than NFS. The NFS read bandwidth matches the RAM-to-RAM network performance numbers recorded in the Tom's Hardware article Gigabit Ethernet: Dude, Where's My Bandwidth?
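A quick arithmetic check ties these claims to the numbers in the two tables below; the gigabit line-rate figure is the nominal 125 MB/s for 1 Gbit/s, before Ethernet/IP/TCP overhead:

```python
# Write-bandwidth delta, iSCSI vs. NFS (Max_Write_Throughput rows in the tables below)
nfs_write_mbs   = 101.12
iscsi_write_mbs = 110.61
delta_pct = (iscsi_write_mbs - nfs_write_mbs) / nfs_write_mbs * 100
print(f"iSCSI write advantage over NFS: {delta_pct:.1f}%")   # -> 9.4%

# NFS read bandwidth relative to the raw gigabit wire (1 Gbit/s = 125 MB/s):
# the link is effectively saturated, which is why the result lines up with
# RAM-to-RAM network benchmarks.
print(f"Share of raw line rate: {111.16 / 125:.0%}")          # -> 89%
```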
I placed the VMware-io-analyzer-1.5.1 virtual machine on the NFS datastore (NFS01), and achieved the following results.
| Workload | IOPS | MB/s |
| --- | --- | --- |
| Max_Throughput.icf (Read) | 222.32 | 111.16 |
| Max_Write_Throughput | 202.24 | 101.12 |
In the second test, I enabled the Software iSCSI Initiator on the ESXi host and added the 1024 GB iSCSI LUN as a VMFS-5 datastore (R0); a scripted version of this step is sketched after the results table below. I placed the VMware-io-analyzer-1.5.1 virtual machine on the iSCSI datastore and achieved the following results.
| Workload | IOPS | MB/s |
| --- | --- | --- |
| Max_Throughput.icf (Read) | 223.8 | 111.9 |
| Max_Write_Throughput | 221.22 | 110.61 |
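The software iSCSI step described above can also be scripted. A hedged pyVmomi sketch, reusing the `host` object from the earlier snippet; the HBA device name (vmhba33) is an assumption and varies per host:

```python
from pyVmomi import vim

storage = host.configManager.storageSystem

# Enable the host's software iSCSI initiator.
storage.UpdateSoftwareInternetScsiEnabled(True)

# Point the initiator at the px2-300d's storage interface, then rescan
# so the exported 1024 GB LUN shows up for VMFS datastore creation.
target = vim.host.InternetScsiHba.SendTarget(address="10.10.10.102", port=3260)
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=[target])
storage.RescanAllHba()
```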
Virtual Machine – HD Tune Pro and Intel NASPT Results
A Windows Server 2003 R2 Enterprise Edition SP2 virtual machine was used for the HD Tune Pro 5.50 and Intel NAS Performance Toolkit (NASPT) tests. The virtual machine is configured with a single vCPU, 1024 MB of memory, and a single vNIC connected to the VM Network Virtual Machine Port Group. The operating system is installed on the Samsung 840 Pro Series 2.5-Inch 256 GB SATA Solid State Drive (datastore1) using an 8 GB Thick Provision Eager Zeroed .vmdk file (Hard disk 1). An additional 80 GB Thick Provision Eager Zeroed .vmdk file (Hard disk 2) was created on the NFS or iSCSI datastore for testing.
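The 80 GB eager-zeroed test disk can also be created ahead of time from a script. A sketch using pyVmomi's VirtualDiskManager and the connection from the first snippet; the datastore path and adapter type here are illustrative assumptions:

```python
from pyVmomi import vim

content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]              # datacenter containing the host

spec = vim.VirtualDiskManager.FileBackedVirtualDiskSpec(
    diskType="eagerZeroedThick",                    # zero every block up front
    adapterType="lsiLogic",
    capacityKb=80 * 1024 * 1024,                    # 80 GB test disk
)
content.virtualDiskManager.CreateVirtualDisk_Task(
    name="[NFS01] perf-test/perf-test.vmdk",        # or the iSCSI datastore (R0)
    datacenter=dc,
    spec=spec,
)
```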
The HD Tune results show a marginal advantage for iSCSI over NFS: average iSCSI read bandwidth (MB/s) is 7.8% better than NFS, and average write bandwidth (MB/s) is 3.9% better. The NASPT results show similar File Copy to NAS numbers, while NFS is 6.7% faster than iSCSI in the File Copy from NAS application test.
HD Tune Results – 80 GB (Hard disk 2) .vmdk file – NFS datastore
HD Tune Results – 80 GB (Hard disk 2) .vmdk file – iSCSI datastore
Intel NASPT NFS Results (left) and iSCSI Results (right)
