
6100XV vs 6210X: Difference in write speed/latency


Hi,

In November 2011 we purchased a 6100XV, and recently a 6210X has been racked to be placed in the same group as the old unit. For testing purposes both units are running in their own group for now, until I'm happy with what I see.

The setup:
As storage we have a PS6100XV (4x 1 Gbit + MGT) and a 6210X (2x 10 Gbit (copper or fiber) + MGT), both running FW 7.07 and connected to our iSCSI switches.

Our storage network consists of two M6348s placed in our M1000E chassis. The switches are stacked and running version 5.1.3.7.

The processing power for our vSphere cluster comes from our M1000E chassis with 4 blades (M710/M820). Each blade runs vSphere 5.5 (build 1892794) with MEM 1.2 installed and is connected with 4x 1 Gbit to the internal M6348 ports, using 4 paths.


The 'problem':
When dd-ing 2 GB of data from /dev/zero to disk within the VM I see different results. Please note that I'm using the same VMware host, network, etc.; only the EqualLogic differs:
    6100XV    : 101 MB/s
    6210X    : 239 MB/s
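For reference, the test is roughly the following (the exact bs/count values and target path are assumptions on my part, not shown in the post; conv=fdatasync makes dd flush to the array before reporting, so the MB/s figure reflects storage speed rather than the guest page cache):

```shell
# Sequential write test inside the VM: stream 2 GB of zeros to a file on the
# EqualLogic-backed disk. TARGET is a hypothetical path on that disk.
TARGET=/mnt/eqltest/ddtest.bin
dd if=/dev/zero of="$TARGET" bs=1M count=2048 conv=fdatasync
rm -f "$TARGET"
```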


Looking in esxtop I see the following on my iSCSI ports during the 2GB write:
6100XV:
    Latency rises to 7 ms (seen by SanHQ as well)
    We see an equally spread load across all 4 links of ~256 Mbit each

Esxtop - network
   PORT-ID              USED-BY  TEAM-PNIC DNAME              PKTTX/s  MbTX/s    PKTRX/s  MbRX/s %DRPTX %DRPRX
  50331658                 vmk2     vmnic4 vSwitch1           2182.96  256.54    3934.29    4.20   0.00   0.00
  50331659                 vmk3     vmnic5 vSwitch1           2249.15  265.84    4087.83    4.37   0.00   0.00
  50331660                 vmk4     vmnic8 vSwitch1           2182.39  269.51    4053.12    4.44   0.00   0.00
  50331661                 vmk5     vmnic9 vSwitch1           2155.30  267.10    4032.52    4.43   0.00   0.00

Esxtop - virtual disk
     GID VMNAME           VDEVNAME NVDISK   CMDS/s  READS/s WRITES/s MBREAD/s MBWRTN/s LAT/rd LAT/wr
    7115 mail38                  -      2   576.02    69.81   506.21     1.06   118.68  22.27   7.01

6210X:
    Latency rises to 74 ms (seen by SanHQ as well)
    We see a nearly maxed-out load across both links of 900+ Mbit each

Esxtop - network
   PORT-ID              USED-BY  TEAM-PNIC DNAME              PKTTX/s  MbTX/s    PKTRX/s  MbRX/s %DRPTX %DRPRX
  50331658                 vmk2     vmnic4 vSwitch1          10995.86  918.20   16286.85   14.44   0.00   0.00
  50331659                 vmk3     vmnic5 vSwitch1          11036.87  945.63   16856.19   11.83   0.00   0.00
  50331660                 vmk4     vmnic8 vSwitch1            247.96    8.55     251.77    3.59   0.00   0.00
  50331661                 vmk5     vmnic9 vSwitch1            258.45    8.35     251.77    4.51   0.00   0.00

Esxtop - virtual disk
     GID VMNAME           VDEVNAME NVDISK   CMDS/s  READS/s WRITES/s MBREAD/s MBWRTN/s LAT/rd LAT/wr
 3477742 eql02bench01            -      1   436.24     2.15   434.09     0.01   216.48  11.68  74.16


As the 6210X only has two active ports it only utilises two paths, but as you can see it can combine both links into a 2 Gbit connection, whereas the 6100XV cannot combine all four links into a 4 Gbit connection. Besides the bandwidth 'issue', I'm seeing a huge difference in write latency between the two units.
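Multiplying the dd figures out (my own back-of-the-envelope arithmetic on the numbers above; payload only, ignoring iSCSI/TCP framing, which is why esxtop shows somewhat higher per-link Mbit values):

```shell
# Convert the dd throughput (MB/s) to aggregate and per-link Mbit/s.
# 1 MB/s of payload ~= 8 Mbit/s on the wire, before protocol overhead.
xv_total=$((101 * 8))    # PS6100XV: 101 MB/s across 4 active 1 Gbit paths
x10_total=$((239 * 8))   # PS6210X:  239 MB/s across 2 active paths

echo "6100XV: ${xv_total} Mbit/s total, ~$((xv_total / 4)) Mbit/s per link (4 links)"
echo "6210X : ${x10_total} Mbit/s total, ~$((x10_total / 2)) Mbit/s per link (2 links)"
```

So the 6100XV tops out around a single gigabit in aggregate despite having four active paths, while the 6210X pushes almost the full capacity of its two paths.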


Questions:
1) What is the difference between the two units that causes one to be limited to 1 Gbit in aggregate while the other is not?
2) Why is the write latency so high? Both controllers are in the green, so write-back caching should be active.

Regards,

- Henk

