
VSM support for vSphere 6.5?


Now that vSphere 6.5 is out, is there an ETA on an updated VSM that is supported/compatible with it?

It seems that we have to wait to upgrade till then since we're using VVOLs. :)


Re-enable management interface via console CLI


Hi all, hoping someone can help!

Very stupidly, I managed to disable the management interface on my EqualLogic using the GUI today, and of course the management GUI turned red all over and could no longer connect to my SAN.

I can't access the management area using the GUI now to re-enable the interface (as the interface is obviously down now!). 

Is there a command to run using the console CLI connection to re-enable the interface (make it come back up)?
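From skimming the CLI Reference I think it is something along these lines, but I'd appreciate confirmation before running anything (member name, port number, and addresses below are just placeholders; on my array eth2 is the dedicated management port):

member select member1 eth select 2 up

and, if the address was cleared when the interface was disabled, re-applying it first with:

member select member1 eth select 2 ipaddress 10.0.0.10 netmask 255.255.255.0

I'm also not sure whether the management-only setting needs to be re-applied afterwards or whether it survives the interface being disabled.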

Any urgent help is greatly appreciated!

EQL Group Manager software


Where can I download the latest Group Manager software for use on a PS6510E1 array?

thanks,

Robert Matthews

PS6110 Sector Size Vs Stripe Size for RDM's?


We have our primary SQL server running on this SAN, and I recently noticed that when it was set up, the drives on the server that connects to it were all formatted at 4K. The guy who originally did it is saying that we can't move the data to a new RDM drive formatted for 64K on this SAN, because the SAN only supports 512 and 4K and the drive won't be using VMDK files. We're having some performance issues with this SQL server, and everything I'm reading says that we need to set up the database, log, and tempdb drives as 64K.

From reading up on it, it appears that while the sector size on the SAN only supports 512 or 4K, the stripe size by default (and this is configured for RAID 10) can do 64K. So if a new drive were created, mapped to the server, and the data migrated from old to new, it should work. Can anyone confirm this for me, or let me know what part I'm not getting?
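For reference, the plan for the new RDM is simply to format it on the Windows side with a 64K allocation unit and copy the data across, along these lines (drive letter and label are placeholders):

format S: /FS:NTFS /A:64K /V:SQLData /Q

and then confirm it afterwards with:

fsutil fsinfo ntfsinfo S:

where "Bytes Per Cluster" should report 65536. My understanding is that the 512/4K sector size reported by the SAN is independent of this and of the 64K RAID stripe, but that's exactly the part I'd like confirmed.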

EqualLogic SMP Storage Provider support for SCVMM 2016


Hi there

I'd like to get a little more information about the SMP provider that ships with HIT Kit 5.0, specifically whether it will work with SCVMM 2016.

We can see in the documentation that HIT Kit 5.0 supports Windows Server 2016, but we cannot see any mention of whether the provider supports VMM 2016 (only mentions of VMM 2012 SP1 / 2012 R2).

Could we get Dell's stance on this, and whether support for SCVMM 2016 is on the roadmap?

Kind Regards

Matthew

Rethinning, couldn't find volume


Hi everybody,

I'm trying to rethin a volume on a Windows Server 2008R2 server on EqualLogic.

On the EqualLogic it states 100% full, but Windows shows only 70GB of the 600GB in use...

When trying a rethin I get the message:

eqlrethin l:\
Rethinning 95% of the free space on volume l:\
- Rethinning
failed to rethin volume l:\
couldn't find volume

I tried it with and without the trailing \.

With the volume name:

eqlrethin srv01-logfiles
Rethinning 95% of the free space on volume srv01-logfiles
failed to rethin volume srv01-logfiles
failed to get volume extents for srv01-logfiles\ - The system cannot find the file specified.

Does anyone know how to solve this? (Donald :-)?)

KR, Marcel

Hit Kit vs RHEL Native Multipath


Hello everyone.

I'm new to the forum. We recently set up a RHEL 7.1 server and would like to configure multipath connections to our SAN. What are the pros/cons of using RHEL native multipath vs. using the Dell HIT Kit for multipathing?
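For context, the native setup I'd be comparing against is plain device-mapper multipath with an EqualLogic device entry in /etc/multipath.conf, roughly like the snippet below (values pulled together from examples I've seen online rather than from an official Dell document, so treat them as a starting point only):

devices {
    device {
        vendor                "EQLOGIC"
        product               "100E-00"
        path_grouping_policy  multibus
        path_selector         "round-robin 0"
        path_checker          tur
        failback              immediate
        rr_min_io_rq          10
    }
}

whereas the HIT Kit installs its own connection manager daemon (ehcmd, if I have the name right) that creates and balances the iSCSI sessions itself. I'm mainly interested in whether that extra session management is worth the additional software on the host.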

Replication Status Partner Down


Hi All,

We are using a Dell EqualLogic PS6100 SAN and we are seeing an error on the SAN dashboard: Replication Status Partner Down. We are using a P2P link for replication between two sites, so please guide me as to why we are seeing this.

Queue Depth Spikes


We have an EQ group with PS6210s, and we've been seeing some queue depth spikes up to 80,000 at about 4:30am or so. The QD will climb for about 30-40 minutes and then drop like a rock to a normal average (less than 5). We're pretty sure it's a web system that is doing some data transfers at that time. We aren't seeing any adverse effects from this so far, so I'm wondering:

1 - Should I worry?

2 - Is the sudden dropoff normal? I would have expected a more gradual decline as the queue clears out.

Thanks.

MEM 1.4 on ESXi 6.5


I configured and then installed MEM 1.4 on ESXi 6.5 fine (using hardware offload on the NICs). After running the setup.pl script I noticed that iSCSI was not enabled on ESXi, and that MTU, delayed ACK, and login timeout were still at their default values. I've adjusted all of this manually, but I'm wondering whether there is anything else I should be aware of post-install on ESXi 6.5. Are there instructions available for ESXi 6.5? I understand it's officially supported, but the MEM 1.4 documentation makes no reference to it.
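For reference, the sanity checks I ran afterwards were along these lines (adapter name taken from my setup output below; adjust to suit):

esxcli software vib list | grep -i eql          # MEM bundle installed?
esxcli storage nmp psp list                     # DELL_PSP_EQL_ROUTED listed?
esxcli iscsi adapter param get -A vmhba35       # DelayedAck false, LoginTimeout 60?
esxcli network ip interface list                # MTU 9000 on the iSCSI vmkernel ports?

If there are any additional 6.5-specific steps beyond these, that's what I'm after.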

Thanks!

---

Reference (commands I issued and their output):

setup.pl --configure --bestpractices --server SERVERIP --username=root --nics=vmnic9,vmnic10 --ips=192.168.10.120,192.168.10.121  --mtu=9000 --groupip=192.168.10.30 --chapuser=CHAPUSER --chapsecret=CHAPSECRET

Assigning IP address 192.168.10.120 to iSCSI0.

Creating portgroup iSCSI1 on vSwitch vSwitchISCSI.

Assigning IP address 192.168.10.121 to iSCSI1.

Creating new bridge.

Adding uplink vmnic9 to vSwitchISCSI.

Adding uplink vmnic10 to vSwitchISCSI.

Setting new uplinks for vSwitchISCSI.

Setting uplink for iSCSI0 to vmnic9.

Setting uplink for iSCSI1 to vmnic10.

Bound vmk1 to vmhba35.

vmk2 is not usable for port binding with vmhba35, skipping.

vmk2 is not usable for port binding with vmhba34, skipping.

vmk2 is not usable for port binding with vmhba33, skipping.

Bound vmk2 to vmhba32.

Set SATP Host System Best Practices.

Checking global HBA settings for adapter vmhba32.

Updating DelayedAck from true to false

Updating LoginTimeout from 5 to 60

Checking global HBA settings for adapter vmhba35.

Updating DelayedAck from true to false

Updating LoginTimeout from 5 to 60

Refreshing host storage system.

Adding discovery address 192.168.10.30 with CHAP user CHAPUSER to storage adapter vmhba32.

Adding discovery address 192.168.10.30 with CHAP user CHAPUSER to storage adapter vmhba35.

Rescanning all HBAs.

Network configuration finished successfully.

No Dell EqualLogic Multipathing Extension Module found.

Continue your setup by installing the module with the --install option or through vCenter Update  

Then I installed MEM:

setup.pl --install --server SERVERIP --username=root --bundle=dell-eql-mem-esx6-1.4.0.426823.zip

Clean install of Dell EqualLogic Multipathing Extension Module.

Bundle being installed dell-eql-mem-esx6-1.4.0.426823.zip

Copying dell-eql-mem-esx6-1.4.0.426823.zip to [dc01-vhost01-localdatastore]/dell-eql-mem-esx6-1.4.0.426823.zip

The install operation may take several minutes.  Please do not interrupt it.

Check to see if the install succeeded

Found Dell EqualLogic Multipathing Extension bundle installed: 1.4.0-426823

Install succeeded

Clean install was successful.

EqualLogic: mixed 10Gb SAN and 1Gb SAN in one group


Hello All,
Current environment: 
- 1Gb EQL4000 connected to a 1Gb switch
- 10Gb EQL 6000 connected to a 10Gb switch
- 1Gb switch connected to the 10Gb switch, in the same VLAN

Can I mix the 10Gb EQL and the 1Gb EQL in one group? If 10Gb and 1Gb arrays are in one group, does it affect performance?
thanks.

Dell Equallogic MEM 1.4


Hello,

Is VMware ESXi 6.5 supported by Dell EqualLogic MEM 1.4?

Regards

B Van Velzen

Can the number of open CLI sessions be increased from 7?


We are using our EqualLogic array with OpenStack drivers for Cinder volume storage.

OpenStack is reporting errors like the following when performing intensive volume provisioning/removal operations.

Error: Number of open cli sessions reached maximum allowed value of 7

Is it possible to increase the number of supported CLI sessions? 

The OpenStack driver appears to close sessions once it has finished with them, but it can open multiple sessions in parallel to allow concurrent provisioning of resources.
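For reference, the only related knobs I've found so far are on the OpenStack side, in the backend section of cinder.conf (option names taken from the generic SAN driver settings that I believe the eqlx driver inherits, so please correct me if they don't apply here):

ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
eqlx_cli_max_retries = 5

but raising ssh_max_pool_conn obviously doesn't help if the array itself caps CLI sessions at 7, hence the question.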

Running "pool select default show" takes about 20s with around 70 volumes - is this "normal"?


We are using the OpenStack Mitaka Cinder driver for EqualLogic to provide our test system with a variety of storage volumes.

The driver (according to logs) repeatedly issues calls to "pool select default show" that take (at best) around 20s each to complete - considerably longer on a busy system.  Because of these delays, there is contention on the restricted number of SSH CLI sessions to the EQL.

My question here is whether that is a reasonable length of time for this command or whether there is an underlying performance issue with our system.  

I have attached the full output of the command from a console session below, which contains system config details, status, and the volume list. The dynamic volumes being deleted/created are in the range of 20GB - 200GB. The large 1+TB volumes are long-lived.

Any thoughts on this would be really appreciated.  Thanks.

metaswitch-eql> pool select default show
______________________________ Pool Information _______________________________
Name: default
Description:
Default: true
Data-Reduction: no-capable-hardware
TotalVolumes: 78
VolumesOnline: 78
VolumesInUse: 64
StorageContainers: 0
StorageContainerSpaceReserved: 0MB
NonStorageContainerVolumes: 78
StorageContainerVolumes: 0
NonStorageContainerOnlineVolumes: 78
StorageContainerOnlineVolumes: 0
NonStorageContainerSnapshots: 61
StorageContainerSnapshots: 0
TotalSnapshots: 61
SnapshotsOnline: 0
SnapshotsInUse: 0
TotalMembers: 1
MembersOnline: 1
MembersInUse: 1
TotalCapacity: 9.1TB
VolumeReserve: 7.61TB
VolumeReportedSpace: 11.38TB
SnapReservedSpace: 595.77GB
SnapReservedSpaceInUse: 36.06GB
ReplicationReservedSpace: 0MB
DelegatedSpace: 0MB
DelegatedSpaceInUse: 0MB
FreeSpace: 922.86GB
FailbackSpace: 0MB
ThinProvFreeSpace: 9.01TB
AvailableForBorrowing: 188.36GB
TotalSpaceBorrowing: 1.07TB
Connections: 75
ExpandedSnapDataSize: N/A
CompressedSnapDataSize: N/A
CompressionSavings: N/A
_______________________________________________________________________________


___________________________________ Members ___________________________________


Name       Status  Model    Version           Disks  Capacity  FreeSpace  Connections
---------- ------- -------- ----------------- ------ --------- ---------- -----------
camel11    online  70-0400  V9.0.5 (R430781)  24     9.1TB     640.2GB    95


___________________________________ Volumes ___________________________________


Name Size Snapshots Status Permission Connections
--------------- ---------- --------- -------- ---------- -----------
EmergencyPool 20GB 1 online read-write 0
EQL-VMWare-3 1.5TB 0 online read-write 4
EQL-VMWare-1 1.5TB 0 online read-write 3
EQL-VMWare-2 1.5TB 0 online read-write 3
EQL-Openstack-1 2.25TB 0 online read-write 2
EQL-Openstack-2 2.25TB 0 online read-write 4
volume-6b65d5b2 1GB 1 online read-write 0
-e03a-4d41-bd
ce-1e13c162ac
e7
volume-d45c3614 1GB 1 online read-write 0
-febc-44e0-8e
48-3097bd5462
14
volume-a767003d 20GB 1 online read-write 1
-f08b-48cd-ae
2e-1aa54bb805
8a
volume-480cb784 20GB 1 online read-write 1
-2c03-4a0f-9f
ac-312d3bcf53
86
volume-8811dfc7 20GB 1 online read-write 1
-7e3f-4a86-97
8c-c738c8c9ed
cc
volume-4b4b6189 20GB 1 online read-write 1
-cd0a-4c22-bc
b2-a992e207a8
86
volume-07941209 20GB 1 online read-write 1
-3048-46c4-bd
3e-e88c61cf69
ce
volume-f776b1f2 20GB 1 online read-write 1
-28bf-498e-88
3c-1031133c0f
b6
volume-3af5f334 20GB 1 online read-write 1
-ddd4-4eef-ac
5d-bb649c5a58
d3
volume-bfdff0df 20GB 1 online read-write 1
-2f59-455e-b4
6e-3b103eaf8e
58
volume-ff8d2337 20GB 1 online read-write 1
-50f3-4ef8-8c
94-baf32dafd2
66
volume-206baa11 20GB 1 online read-write 1
-45ca-4e10-b7
5e-c4822599a8
e8
volume-1b223713 20GB 1 online read-write 1
-4de1-471f-a8
1b-33296a5b7f
e5
volume-9e4d83b9 20GB 1 online read-write 1
-4cef-4253-aa
5d-84f5eab7c7
ce
volume-c0ef5198 20GB 1 online read-write 1
-adaa-42d2-ab
b5-61565fbd49
4a
volume-db471f42 20GB 1 online read-write 1
-d2ed-4b12-bc
fe-1e93cf257f
36
volume-84e3a40c 20GB 1 online read-write 1
-665d-4faf-93
6e-b051920b93
e9
volume-188e257b 20GB 1 online read-write 1
-299d-4eba-8c
22-8046a0ec3e
dd
volume-fd22a5c5 20GB 1 online read-write 1
-b82f-4fcf-81
67-4a5ba5027a
1a
volume-1aec961c 20GB 1 online read-write 1
-6c17-4d5f-8b
3a-090e332bb0
4f
volume-5d065a7b 30GB 1 online read-write 1
-cb66-4347-99
0e-9d07a4c2fe
9c
volume-6343a094 20GB 1 online read-write 1
-0b0f-4569-9a
89-a81905d538
1e
volume-e90cb8a6 40GB 1 online read-write 1
-d880-4f39-bc
82-b2e8026904
a5
volume-73e2d82f 40GB 1 online read-write 1
-b67f-4231-b2
b9-0af5bcc196
9c
volume-e8065e3e 20GB 1 online read-write 1
-9cda-4831-82
09-e323e60382
b3
volume-539cd2f5 20GB 1 online read-write 1
-352e-40b2-80
ff-e0a40ca16b
79
volume-b85b2e14 20GB 1 online read-write 1
-3df8-4710-ad
5a-ef3ebc2d37
6d
volume-d0435ca5 20GB 1 online read-write 1
-c530-4ef0-b9
4e-673fc833a0
99
volume-f24b2a49 20GB 1 online read-write 1
-4e57-4b56-ac
6b-8c693475ce
b8
volume-521fe447 20GB 1 online read-write 1
-6461-4a56-bc
ac-4fc1e9fa74
7a
volume-5ec24671 20GB 1 online read-write 1
-3f56-4744-9e
9f-4fa94eb8c8
13
volume-c5c03dd4 20GB 1 online read-write 1
-9b3b-425c-83
94-d355640803
1a
volume-b1fe23ba 20GB 1 online read-write 1
-5da1-48e6-bd
05-ba807a47c8
66
volume-9745c313 20GB 1 online read-write 1
-8453-4bfe-99
a7-d7250390e9
82
volume-8d2fb5c4 20GB 1 online read-write 1
-8bf2-4cb6-b7
ac-e63acc0fa3
b4
volume-d992fd57 20GB 1 online read-write 1
-3f06-4478-8a
db-ccd9faf314
41
volume-fce73a1c 20GB 1 online read-write 1
-d5cf-42db-ae
ab-5e738b7086
45
volume-303c80fc 20GB 1 online read-write 1
-f020-45ba-80
36-c596b02ec3
c3
volume-61298a44 20GB 1 online read-write 1
-9227-42f8-8f
36-8aa1921338
46
volume-9256de7b 20GB 1 online read-write 1
-5008-4963-a2
77-c42460149f
f3
volume-ad3043b0 20GB 1 online read-write 1
-7607-41d1-b5
79-3b8a0b873c
b6
volume-c2ec2ea4 20GB 1 online read-write 1
-d9b3-4365-9b
b4-a5c3bbc2e9
ec
volume-a32406b2 50GB 1 online read-write 1
-a264-4410-89
c8-d5182ec995
6e
volume-3432cf96 30GB 1 online read-write 1
-1921-4b09-98
9a-1df4efe416
aa
volume-0c92c7d7 20GB 1 online read-write 1
-c193-48bf-bd
42-f526a9f8c8
5f
volume-3d9c2c58 265GB 1 online read-write 0
-551f-4fbc-bb
d8-15a05fd87c
bf
volume-9e340b78 60GB 1 online read-write 0
-646f-47b8-b9
50-688f93dba5
f0
volume-a61feeeb 100GB 1 online read-write 0
-7d51-4157-be
06-447084cfc9
6f
volume-c8b3b4fc 100GB 1 online read-write 0
-6984-4a7e-9e
a2-c375fb0a15
ed
volume-75756f98 100GB 1 online read-write 0
-6d0f-4743-9a
5d-5cd64af4b4
40
volume-466892fe 20GB 1 online read-write 0
-ecf4-4da4-99
06-06bf2e3107
2a
volume-4ba95cea 20GB 1 online read-write 0
-bf50-4c70-80
3f-80e22ca755
d6
volume-312624fc 20GB 1 online read-write 0
-8b8b-4d26-a7
47-45d2a32517
52
volume-839d3346 20GB 1 online read-write 1
-dd06-4339-93
05-5776b2b227
6b
volume-62e3dcff 20GB 1 online read-write 1
-1271-40c1-84
f8-81b23b4516
99
volume-f1b6bf91 50GB 1 online read-write 1
-dca3-4f58-8b
fe-30002b628f
f5
volume-9af1d9bf 30GB 1 online read-write 1
-28d6-44fa-a3
93-044b111330
a3
volume-8b9c0eaa 20GB 1 online read-write 1
-3c50-409e-b2
a1-42c57bd1b5
2f
volume-03dc956c 100GB 1 online read-write 0
-fdf6-48f7-8e
3a-50d92d6fb9
b9
volume-cb0631a5 20GB 1 online read-write 0
-5d25-4ed5-b9
58-6c753a3688
62
volume-cc38ef81 30GB 0 online read-write 0
-cdf1-49f1-94
4d-216b29dc46
9e
volume-ebe29394 100GB 0 online read-write 1
-9792-4216-aa
ec-385397bd7a
6d
volume-c546f051 20GB 0 online read-write 1
-7538-4976-bf
a5-11652f5519
18
volume-d449a639 20GB 0 online read-write 1
-5be8-4385-b8
ec-1ebb8755d3
03
volume-37c751c5 20GB 0 online read-write 1
-d408-4e26-ac
29-af2ec7c298
12
volume-9618f4d1 20GB 0 online read-write 1
-f1fb-45c3-80
eb-10d141d156
6b
volume-c2371bb1 30GB 0 online read-write 1
-bff5-4ba7-bf
2b-78b5ef0ac6
3e
volume-dc4ac771 20GB 0 online read-write 1
-e0d8-4757-ad
ab-77e2af87c1
da
volume-7aa7ccc0 20GB 0 online read-write 1
-cd91-4b77-be
7d-b41e7ce5f0
48
volume-02e0e4f0 20GB 0 online read-write 1
-0f76-4e8b-a4
75-e99473707e
80
volume-bf9e7ee1 200GB 0 online read-write 1
-4659-4275-a2
03-a6cdc9cd5e
fd
volume-adaa132a 20GB 0 online read-write 1
-9f86-4612-b8
a5-19c039b60e
a4

PS6100X In-use space exceeds the warning limit.


I know this isn't a critical fault, but in the past I've used vmkfstools -y to resolve it. We are running ESXi 5.5 and the EqualLogic firmware is V7.1.5. I know it's old, and we have an upgrade scheduled soon. My question is: why doesn't vmkfstools -y work any more for reclaiming disk space? If there is another procedure, where can I find it?
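From what I've read so far, on ESXi 5.5 the vmkfstools -y mechanism was superseded by the esxcli unmap command, something like (datastore label is a placeholder):

esxcli storage vmfs unmap --volume-label=DATASTORE_NAME --reclaim-unit=200

but I'd like confirmation that this is the recommended procedure against V7.1.5 firmware, and that this firmware actually honours SCSI UNMAP so the space comes back on the EqualLogic side.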

Thanks,

Dave


Dell Equallogic MEM 1.4 VUM Error "invalid vendor code"


Here is a fix for the following error message in VUM: "Invalid vendor code del in patch metadata, another vendor code with different capitalization already exists in database. Check the Update Manager log files for more details."

  1. Unzip dell-eql-mem-esx6-1.4.0.426823.zip (or similarly named file)
  2. Edit index.xml
  3. Change line 4 (i.e. the code tag) from "del" to "DELL" (no quotes)
  4. Zip all the previously unzipped files back into the originally named file.

I used the MEM 1.3 patch as a template and tested the modified MEM 1.4 on VMware 6.5.
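If it helps anyone, the same edit can be scripted, roughly as below (this assumes index.xml sits at the top level of the archive and that the vendor code is the only "del" on line 4; I gave the rezipped bundle a new name just to avoid overwriting the original download):

mkdir mem14 && cd mem14
unzip ../dell-eql-mem-esx6-1.4.0.426823.zip
sed -i '4s/del/DELL/' index.xml
zip -r ../dell-eql-mem-esx6-1.4.0.426823-DELL.zip .

Then import the rezipped bundle into VUM as usual.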



Is VSM 4.6.0 supported on vSphere 6.5?


I understand MEM 1.4 is fully supported on vSphere 6.5 (confirmed by Donald in an earlier thread).  Does the same apply to VSM 4.6.0?

Thanks.
