I’m thinking of moving the storage for my NAS (an N40L) from DAS to iSCSI. My current set-up is trivial (and not performant): the N40L has 4 local SATA drives plus an LSI2008 in IT mode connected to 6 more SATA drives.
I think the current bottleneck is the controller:
# dd if=/dev/zero of=a bs=8192 count=1310720
1310720+0 records in
1310720+0 records out
10737418240 bytes transferred in 68.962335 secs (155699749 bytes/sec)
Those results target the 6 drives in a ZFS RAIDZ pool: roughly 156 MB/s of sequential writes across the whole pool.
Anyway, moving the pool to iSCSI will introduce IP and Ethernet into the transport, so 10G Ethernet is a must. As a quick test I installed a cheap Intel 10G NIC into a spare N40L and connected it directly, via a twinax cable, to a Cisco UCS server that also has a 10G NIC. I booted both machines from live USB OSes: FreeBSD 13 on the N40L and Ubuntu 20 Server on the UCS. FreeBSD 10 would hang at random times with the 10G NIC installed in the N40L, and it would not recognise the Cisco 10G NIC in the UCS.
I installed iperf3 on each machine, bumped the MTU on FreeBSD to 9000 (crazy that it defaults to 1500 for a 10G NIC), and ran a quick test.
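For reference, the FreeBSD side of the jumbo-frame setup looks roughly like this; the interface name ix0 and the addresses are illustrative rather than copied from the test machines:

# ifconfig ix0 inet 10.30.1.2/24 mtu 9000 up
# ping -D -s 8972 10.30.1.1

The don’t-fragment ping with an 8972-byte payload (9000 minus 20 bytes of IP header and 8 of ICMP header) is a quick way to confirm jumbo frames actually make it end to end. With that in place, here are the results: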
N40L:
root@:~ # iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.30.1.1, port 56382
[ 5] local 10.30.1.2 port 5201 connected to 10.30.1.1 port 56384
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.15 GBytes 9.89 Gbits/sec
[ 5] 1.00-2.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 2.00-3.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 3.00-4.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 4.00-5.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 5.00-6.00 sec 1.15 GBytes 9.89 Gbits/sec
[ 5] 6.00-7.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 7.00-8.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec
[ 5] 10.00-10.00 sec 1.32 MBytes 9.49 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec receiver
UCS:
root@ubuntu-server:~# iperf3 -c 10.30.1.2
Connecting to host 10.30.1.2, port 5201
[ 5] local 10.30.1.1 port 56384 connected to 10.30.1.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 1.15 GBytes 9.91 Gbits/sec 0 1.39 MBytes
[ 5] 1.00-2.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 2.00-3.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 3.00-4.00 sec 1.15 GBytes 9.89 Gbits/sec 0 1.39 MBytes
[ 5] 4.00-5.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 5.00-6.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 6.00-7.00 sec 1.15 GBytes 9.89 Gbits/sec 0 1.39 MBytes
[ 5] 7.00-8.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 8.00-9.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
[ 5] 9.00-10.00 sec 1.15 GBytes 9.90 Gbits/sec 0 1.39 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.5 GBytes 9.90 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec receiver
iperf Done.
When reversing the direction the results were the same.
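For what it’s worth, reversing direction with iperf3 doesn’t require swapping the server and client: the -R flag tells the server to send and the client to receive, e.g.

root@ubuntu-server:~# iperf3 -c 10.30.1.2 -R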
A bit later I ran some more tests, this time over iSCSI with the UCS server as the target and the N40L as the initiator. The UCS is running ESXi 6.7 and passes the HBA through to a FreeBSD 13.1 VM:
mpt0 Adapter:
Board Name: SAS3444
Board Assembly:
Chip Name: C1068E
Chip Revision: UNUSED
RAID Levels: none
The 10Gb adapter in the UCS is owned by ESXi and the VM is connected to it via a standard vSwitch. The N40L (also running FreeBSD 13.1) has an Intel 10Gb NIC. Both NICs are configured for a 9000-byte MTU (verified during the tests with tcpdump).
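The iSCSI plumbing itself isn’t shown above; on FreeBSD the native pieces are ctld on the target side and iscsictl on the initiator. A minimal sketch of that pairing is below; the IQN, portal address and LUN path are illustrative, not necessarily what was used here.

/etc/ctl.conf on the target VM:

portal-group pg0 {
    discovery-auth-group no-authentication
    listen 10.30.1.1
}

target iqn.2022-01.lab.example:disk0 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/da1
    }
}

On the target VM:

# sysrc ctld_enable=YES && service ctld start

On the N40L:

# sysrc iscsid_enable=YES && service iscsid start
# iscsictl -A -p 10.30.1.1 -t iqn.2022-01.lab.example:disk0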
Here are some raw network tests similar to those above. The UDP results need further examination (more on that after the results).
VM -> N40L (TCP)
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.5 GBytes 9.89 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 11.5 GBytes 9.88 Gbits/sec receiver
VM -> N40L (UDP)
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 9.31 GBytes 8.00 Gbits/sec 0.000 ms 0/1115968 (0%) sender
[ 5] 0.00-10.00 sec 7.85 GBytes 6.74 Gbits/sec 0.007 ms 174995/1115968 (16%) receiver
N40L -> VM (TCP)
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 11.4 GBytes 9.81 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 11.4 GBytes 9.82 Gbits/sec receiver
N40L -> VM (UDP)
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 7.37 GBytes 6.33 Gbits/sec 0.000 ms 0/883094 (0%) sender
[ 5] 0.00-10.00 sec 2.68 GBytes 2.30 Gbits/sec 0.014 ms 562078/883064 (64%) receiver
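A starting point for that UDP examination (a suggestion, not something from these runs) is to check whether the receiver is simply running out of socket buffer space at these rates:

# netstat -s -p udp | grep -i 'full socket'
# sysctl kern.ipc.maxsockbuf net.inet.udp.recvspace

If the “dropped due to full socket buffers” counter climbs during a run, bumping those sysctls or iperf3’s -w socket buffer option is the usual next step.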
Here is the VM writing to a 7K SAS drive on the HBA locally (no network involved):
dd if=/dev/zero of=/mnt/a bs=8192 count=1310720
1310720+0 records in
1310720+0 records out
10737418240 bytes transferred in 59.083653 secs (181732472 bytes/sec)
That’s a write speed of about 182 MB/s (roughly 1.45 Gbit/s), so we know the VM can write to its local drive at that rate. I was hoping the N40L hitting the same drive over iSCSI would get similar results:
dd if=/dev/zero of=/mnt/a bs=8192 count=1310720
1310720+0 records in
1310720+0 records out
10737418240 bytes transferred in 108.958877 secs (98545603 bytes/sec)
Unfortunately not. Write speed over iSCSI came in at about 98.5 MB/s (roughly 790 Mbit/s), just over half of the local figure.
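One variable worth isolating here (not something tested above) is dd’s 8 KiB block size: small blocks tend to amplify per-request overhead once an iSCSI hop is in the path, so repeating the run with a much larger block size gives a feel for how much of the gap is protocol overhead versus raw throughput:

# dd if=/dev/zero of=/mnt/a bs=1m count=10240

That writes the same 10 GiB total, just in 1 MiB chunks.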
When I get the time I will move the HBA to the N40L and test the local write speeds on that machine.