Origin3k's Posts

I can confirm that. There is a challenge-response mechanism built into the EQL and support can help you with it. It may take some time to get a support engineer who knows about it 🙂 Regards, Joerg
Oh, and... if you stay on dual redundancy, the usable capacity with RAID10-DM is ~5.83TB, and with single redundancy ~8.75TB. Since everything on the SC is thin provisioned to begin with, you essentially see the raw capacity, hence the 17.5TB.
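For reference, a quick back-of-the-envelope calculation (using the ~17.5TB raw from your case):

  17.5TB raw / 3 copies (RAID10-DM, dual redundancy) ≈ 5.83TB usable
  17.5TB raw / 2 copies (RAID10, single redundancy)  ≈ 8.75TB usable

Regards, Joerg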
The SC no longer has fixed RAID disk groups, or rather it never had them. The redundancy setting primarily determines whether you want to survive one or two simultaneous disk failures. The redundancy can be configured per tier when you click on Storage Type in DSM. Indirectly this determines whether it presents itself as RAID10-DM or as RAID5 or 6. To your actual question... on the SC this has always been handled via storage profiles. Besides a few predefined ones you can also define your own, to pin volumes to a tier or to control whether you want RAID tiering or only a single type. You can assign a different profile to each volume, and you can change the assignment at any time while the system is running! The question is what you dislike about the Recommended/Balanced storage profile, because it writes all incoming data to tier 1 as RAID10, or in your case RAID10(DM), and then uses RAID tiering to dynamically move the data later, or promptly via snapshot, to RAID5, or in your case RAID6. If the Data Progression license was purchased and a tier 3 is available, the data would later land there. But according to your description you only have one disk type, and therefore everything sits in tier 1. Regards, Joerg
2 options:
- The SC400/420 enclosures are supported on the 4020 now (needs a recent SCOS version)
- Data-in-place upgrade (buy an empty chassis and move the existing disks and licenses to the new system). See https://downloads.dell.com/manuals/all-products/esuprt_software/esuprt_it_ops_datcentr_mgmt/general-solution-resources_white-papers4_en-us.pdf for details.
Regards, Joerg
Without a valid support contract you're not allowed to download and install the software. I never understood why a customer is not allowed to download the software that was current at the time its support contract ran out. Think about a reinstallation of SANHQ. Other companies take care of that... Regards, Joerg
DSITV doesn't support the latest VMware vSphere version, which means right now there is no support for vSphere 6.7. I have tried it and got an error about not being able to obtain the SSL cert when trying to connect to vCenter. Regards, Joerg
You can also use the Storage Update Manager tool, which will guide you through the process. It'll take care of the right FW version. Don't forget to apply the Drive FW kit as well at the end of your journey. Regards, Joerg
No, it's not normal. If you power on an SCv20x0 there is around 2 minutes of silence... and then about 1 minute of noise like a hurricane. After that the sound level drops to "normal". If you check the temperature/fan status under the hardware tab with DSM or the DSM Data Collector, what do you see? I have deployed around 20 systems and never noticed them running at 100%. Only if you remove one controller while it's running... then it goes back up to 100%. Regards, Joerg
Yes... one member after another. We have set up the EQL following best practices and our vSphere cluster survives the temporarily unavailable volumes. The CM failover takes the volumes offline for around 5 seconds, which means 3 pings are lost. I have 24 members in house and a dozen at customer sites. Both your PS6010 and PS6100 can run the latest 9.1.x FW. Please don't forget to apply the Drive FW. It's an extra package and can only be applied through the command line or the EQL Update Manager, not via the EQL Group Manager!
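If you go the command-line route, the drive FW flow is roughly this (a minimal sketch; the kit filename is a placeholder, use the real one from the download page):

  ftp <group_or_member_ip>      (log in as grpadmin and FTP the kit onto the member)
  binary
  put <drive-fw-kit>.tgz
  bye

  ssh grpadmin@<group_ip>       (then on the array CLI)
  update                        (starts the update from the uploaded kit)

Regards, Joerg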
Let me add some notes to my previous post. You can't jump directly from 7.x to 9.0.9; you have to step through an intermediate release (e.g. something like 7.1.x -> 8.1.x -> 9.0.9, but verify against the matrix)... or use the external EQL Update Manager. It's a standalone Java app that helps there, because it checks internally whether an update is allowed or not. It can also apply the Drive FW updates. Again... all members within a group have to be on the same FW version. Regards, Joerg
Hi,

Version 9.1.4
Version 9.0.9
Version 8.1.13
Version 7.1.14

are the current versions, depending on the branch. All members within a group have to be on the same version... mixed versions are only allowed for the short period of time during the update itself. On the FW download page you will also find a support matrix showing from which version you can upgrade. It also depends on the PS model whether 7.x is your latest available version or not. IIRC the PS5000 ends at 7.x. Having a valid support contract makes life easier. Regards, Joerg
"...Can we assign an IP from a pool 10.11.0..x which is our san IP...." From your Screenshot your ISCSI WKA (Well Known Address) is coming from 10.11.8.0 so iam not sure if you mistyped this. 1. ... See more...
"...Can we assign an IP from a pool 10.11.0..x which is our san IP...." From your Screenshot your ISCSI WKA (Well Known Address) is coming from 10.11.8.0 so iam not sure if you mistyped this. 1. Yes, dedicated MGMT can be changed 2. Dedicated MGMT have to be always a  different subnet compared to the ISCSI WKA 3. If MGMT should be in the same subnet... you have to disable the dedicated MGMT and than the normal eth0 Ports are used and your reach the EQL GroupMGR over the ISCSI WKA. This is how the early EQLs are working and also all units like PS6000 which didnt comes with dedicated ports but you can convert the last eth3 into a dedicated port. All 10G units and the later ones comes with dedic. ports. Regards, Joerg
Take a look at http://en.community.dell.com/support-forums/storage/f/3775/t/19991233 and check the requirements, like:
- The EQL volume may need to be a thin-provisioned one
- FW version
- VHDX vDisks
Regards, Joerg
Well... it depends on what kind of snapshot you do and how (smart copy/clone).

1. If you have VSM (the old ASM/VE) integrated, it gives you the possibility to create a snapshot of a single VM through the vSphere Client/Web Client GUI. On the backend the EQL creates a snapshot of the entire volume the VM sits on; it is always the entire volume that gets snapshotted. This way a more quiesced snapshot is possible, because VSM also triggers a vSphere snapshot if you like. A restore is possible through the vSphere Client/Web Client GUI! For us it takes a long time for large VMs, because the EQL copies the complete volume around.

2. A simple snapshot based on a collection, configured and managed by the EQL Group Manager. All selected volumes, no quiescing. For a "restore" you promote a snap temporarily to an ESXi host. You have to tell ESXi to mount this volume via the command line, because ESXi detects the volume as an existing datastore (same signature). If you allow it, it automatically creates a "<GUUID>-snapshot" datastore you can browse around in. You then have to add VMs back to the vCenter inventory, or just add additional vDisks to an existing[1] VM.

[1] Don't add the "same" vDisk twice to a VM. Always pick a similar one and copy files over the network if needed.

We use option 2... but you need some practice and a runbook. If you do it for the first time you will be lost until you find "esxcfg-volume -l" 🙂
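Speaking of which, the command-line part of option 2 looks roughly like this on the ESXi host (a minimal sketch; the UUID/label values are placeholders):

  esxcfg-volume -l                     (list the VMFS copies/snapshots the host can see)
  esxcfg-volume -m <VMFS-UUID|label>   (mount one non-persistently, keeping its signature)
  esxcfg-volume -r <VMFS-UUID|label>   (or resignature it so it shows up as a new datastore)

Regards, Joerg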
After 9.5 years with 24 EQLs in house, I would bring the 6210 back into the group with the existing M4110. With all members in one group you can move the volumes around with no downtime by just clicking a button within Group Manager.

- Place the 6210 in its own pool to verify it's working well and wait until the RAID build is finished. I also highly suggest performing a CM restart to see if the standby CM gets switch connectivity and if the hosts can ping the interfaces
- Bring them all to the same FW

Now you have 3 options, and IIRC I tried all of them but can't remember the details of the last one:

1. Move volumes between pools
- It takes ages
- Only one volume at a time
2. Move the 6210 into the M4110 pool
- The EQL volume distribution kicks in and places data on the new guy
- At the end, select delete on the M4110 member to evacuate the rest of the volumes
- Works fine if you have a lot of volumes
3. Merge pools
- Can't remember the details

All 3 variants work very well, without downtime and without affecting the hosts. Most of the time we use method 2.

Note: VMware 6.5 comes with VMFS 6.0. You can't upgrade existing datastores; you have to create a new datastore and select the VMFS 6.0 option. After that you have to svMotion your VMs.

Regards, Joerg
No, because every EQL frontend port needs connectivity to every server storage port. Regards, Joerg.
The customer runs V7.1.2. It was not my case/customer, but we followed the advice from EQL support. The exact message was "Warning health condition currently exist. -> Lost blocks detected in RAID set."

From the action plan:
"... Lost blocks generally occur on drives during a reconstruction or copy-to-spare when a block on a drive has been damaged. Normally we would reconstruct that bad block from the parity or the mirror, but if the block that belongs to the parity has also been damaged, then the block is marked as lost. To clear the lost blocks you must first delete the following snapshots that contain lost blocks, which are the following:..."

Regards, Joerg
Same error here... support investigated the problem and told us that the volume and the snapshots are affected and that we have to delete both to get rid of the problem. So we had to restore a few TBs and then delete the affected volumes. Regards, Joerg
Short answer: No.

This isn't the first time this question has come up here, and the Dell guys always answer it with a no. There was never a CM upgrade kit SKU.

Also, a PS4100 sounds like 12x 3.5" disks, and with working MPIO and a sequential read you can get up to 200MB/s. I doubt that 12 slow 3.5" disks can deliver the performance for 10GbE. The 4100 is also end of sale.

Regards, Joerg
Why do you think that 6.5 isn't supported? The documentation refers to 6.5 and I installed it on a vCenter 6.5u1 last Saturday.

I can give some additional information about the problems... even when we got the plugin installed and the credentials configured for one user, the plugin crashed our Web Client whenever we tried to do something with it. I got crash files within the vsphere-client log folder and also "out of memory" messages in the log files whenever we tried to use a Dell Storage feature. So I made 2 modifications to the Web Client: allowing it more memory and increasing the heap size for the Java stuff. Even the VMware-based plugins for SRM and others can crash the Web Client. The last 4 installations went very well.
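For reference, on a VCSA those modifications look roughly like this (a sketch, assuming a 6.x appliance; the 1024 is just an example value, pick one that fits your box):

  cloudvm-ram-size -l vsphere-client        (show the memory currently assigned to the Web Client service)
  cloudvm-ram-size -C 1024 vsphere-client   (raise it, value in MB)
  service-control --stop vsphere-client
  service-control --start vsphere-client    (restart so the change takes effect)

On a Windows vCenter you adjust the JVM heap of the Web Client service instead.

Regards, Joerg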