
Slower write performance in a VM


Hello,

 

I have a single ESX host in a cluster. On this host I have only one VM.

The ESX host is connected to an EqualLogic group. In this group I have multiple pools and members. I have the same problem on all pools, but for testing purposes I'll stick to one member.


The member is a PS6100XV (24x 600GB 15K) in RAID 50.

This member is connected to a stack of four PowerConnect 62xx-series switches.

The host is a Dell PowerEdge R610 (an R620 gives the same problem).

The host is configured according to EqualLogic best practices: 2 VMkernel NICs + 2 physical NICs + a heartbeat IP.

Besides that, it has 2 separate dedicated NICs for iSCSI traffic from within the VM.
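For clarity, the host-side port binding looks roughly like this (ESXi 5.x syntax; vmhba33, vmk1 and vmk2 are placeholders for my actual software iSCSI adapter and VMkernel port names):

    # bind both iSCSI VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    # verify the binding
    esxcli iscsi networkportal list --adapter=vmhba33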

 

My test VM has 4 vCPUs, 8GB memory and 4 NICs (2 for management purposes and 2 for iSCSI traffic).

In the VM I have installed the latest drivers from EqualLogic for iSCSI MPIO.

It has multiple drives:

 

C: (OS) is a volume on a VMFS datastore which resides on the PS6100XV.

E: (testing) is a separate volume on a separate VMDK on a separate VMFS datastore which resides on the same PS6100XV.

F: (testing) is a volume connected directly to the SAN through the MS iSCSI initiator with MPIO.

 

As a test I have run SQLIO (http://tools.davidklee.net/sqlio.aspx).

I have run the same test over and over again on both E: and F:, and it keeps giving me the same results.

Read performance is overall the same, but write performance is a lot slower on a VMDK file than on a direct iSCSI target.
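For reference, the write runs look roughly like this (the parameters here are only an example; the exact parameters and the resulting numbers are in the attached screenshots):

    rem random 8KB writes, 4 threads, 8 outstanding I/Os, 60 seconds, unbuffered, with latency stats
    rem the test file should be pre-created large enough (e.g. via a -F param file) so caching doesn't skew results
    sqlio -kW -frandom -b8 -t4 -o8 -s60 -BN -LS E:\testfile.dat
    sqlio -kW -frandom -b8 -t4 -o8 -s60 -BN -LS F:\testfile.dat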

I would expect it to be the same, so am I wrong?

 

[Attached screenshots with the SQLIO results: 13-7-2012 20-59-15.jpg, 13-7-2012 22-19-17.jpg]

