Origin3k's Posts

DSITV <= 5.0 doesn't support vSphere 6.7 (vCenter). You have to wait until SCOS 7.4, together with DSITV 5.1, goes GA.
It looks like we have to wait for SCOS 7.4, which brings full support for vSphere 6.7.
Of course I followed the instructions on page 34; otherwise it is impossible to perform the upgrade, because you can't select the required action to continue after uploading the *.zip. All other settings, like our Live Volumes, are in place. Regards Joerg
I updated my DSM to DSM-VA-18.1.20.114 today because we want to upgrade SCOS to 7.3.x. After updating all clients and connecting to the Data Collector, I found that the threshold definitions had all been cleared and removed. I had an open storage alarm before the update and it was gone afterwards... that is how I noticed. Regards Joerg
The used and also the "free" space is shown on the first dashboard-like page, named "Summary", when connecting with DSM to your SC or the DSM Data Collector. In reality the question "how much free space?" can't be answered precisely. It's hard to understand, but once you are familiar with the SC you know why. Regards Joerg
A long time ago a "recycle bin" was introduced. You have to purge the bin to get the space back. Within the Group Manager, click on the lower left ("Tools"? I don't have a Group Manager in front of me, but I'm sure you will find it). Otherwise, are you sure you didn't delete a thin-provisioned volume? Maybe it wasn't full. Regards Joerg
Grrrrrrrrrrrrrrr.... why is my text deleted when switching between text and HTML view? Again.... the system can take up to 30 minutes to initialize the first time before it responds to the auto-discover broadcast. If you powered down the system within this time period, you have bricked it; you will need Dell Support to bring it back, and most likely it has to be re-imaged. But first... please hook up both serial cables to the two CMs and, on a Windows system, install the serial/USB driver, which is well hidden on dell.com. For each cable the driver adds up to 4 COM ports, so most likely try COM1 with 115200/8/1/N for CM1 and COM5 for CM2. You have to press Enter in your terminal (PuTTY) client to get a response; a scripted alternative follows below. Please post what you see. IIRC the default credentials are User: Admin, Pass: mmm. If you only have a single SCv without additional enclosures, please DON'T CONNECT the 2 SAS cables for cross-connecting the two CMs. This is different compared to the SCv2020 and SC4020. Regards Joerg
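If you would rather script that console check than sit in PuTTY, here is a minimal sketch using the third-party pyserial package. The 115200/8/1/N settings come from the post above; which COM port maps to which CM depends on how the driver enumerated them on your machine.

```python
# Minimal serial-console probe for a CM, as an alternative to PuTTY.
# Requires: pip install pyserial
import serial

with serial.Serial(
    port="COM1",        # try COM1 for CM1 and COM5 for CM2 (driver-dependent)
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=2,          # seconds to wait for the CM to answer
) as console:
    console.write(b"\r\n")       # "press Enter" to wake the console
    banner = console.read(1024)  # read whatever the CM prints back
    print(banner.decode(errors="replace"))
```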
You have presented 1TB to your hosts, which is the 1TB "configured" value. As a high-water mark, the hosts have written 688.65GB into that 1TB volume. Since the SC is thin provisioning, it doesn't use any space for the free 335GB. You have 3 snapshots, which add 116GB (it's a low change ratio) to the 688GB. You have RAID protection (Single or Dual Redundancy), and based on the assigned Storage Profile it's a mix of RAID10 and RAID5, or RAID10 DM and RAID6, or RAID5/6 for everything, which adds another 201GB, so the total consumption is 1006.61GB right now. The system expects that more data is coming in, based on past measurements and your growth rate, so it pre-"allocates" some space to be prepared. The system will also move blocks between the RAID levels. With those dynamic RAID levels and the unknown change ratio, the system can only predict what space is needed but cannot give a precise answer. The sketch below shows how these numbers add up. Regards Joerg
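To make the accounting concrete, here is the arithmetic from the post as a small sketch; the per-item figures are the rounded values given above, so the sum lands a little below the 1006.61GB the system reports.

```python
# Worked example of the SC space accounting described above.
# All figures are the (rounded) values from the post; the RAID overhead is
# whatever parity/mirror space the dynamically chosen RAID mix adds.
written_gb   = 688.65   # high-water mark the hosts wrote into the 1TB volume
snapshots_gb = 116.0    # 3 snapshots at a low change ratio
raid_gb      = 201.0    # overhead of the mixed RAID levels

total_gb = written_gb + snapshots_gb + raid_gb
print(f"total consumption: {total_gb:.2f} GB")
# -> total consumption: 1005.65 GB, close to the reported 1006.61GB;
#    the gap comes from the rounding of the per-item figures.
```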
Losing the connection is expected behaviour, especially if you only have one member. You haven't told us the FW version you're coming from, so please take note that support for HTTPS was removed for security reasons. So to access the Group Manager again, use http://groupIP instead of httpS://groupIP (a quick check for this follows below). Regards Joerg
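If you want to check quickly which scheme the Group Manager still answers on, a minimal sketch (with a hypothetical group IP, and certificate verification switched off purely for this reachability test) could look like this:

```python
# Try HTTPS first, then fall back to HTTP, and report which one answers.
import ssl
import urllib.request

GROUP_IP = "192.168.1.10"  # hypothetical; replace with your group IP

# Skip certificate verification for this reachability test only.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

for scheme in ("https", "http"):
    url = f"{scheme}://{GROUP_IP}/"
    try:
        with urllib.request.urlopen(url, timeout=5, context=ctx) as resp:
            print(f"{url} answered with HTTP status {resp.status}")
            break
    except OSError as exc:
        print(f"{url} failed: {exc}")
```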
You should update your EQL to FW 9.1.x or 10.x, together with Drive FW 13.x. Regards Joerg
To me it sounds as if he is asking for a system which supports automatic transparent failover. That's what 2 SCv together with licenses for RIRA + Live Volume can achieve. What is certain is that one SC holds the primary volume and the other the secondary one. If you choose synchronous replication together with LV, technically the SC creates a proxy volume and presents this LUN to the ESXi cluster; the ESXi hosts see one volume/datastore. No, this volume isn't "spanned" over the two systems, which is what you asked about; instead, 2 copies exist. We use a couple of "stretched" ESXi clusters with 2x SC in a Live Volume setup, and we have survived an unplanned downtime of one of the SCs. Notice: we don't use a single VMFS datastore; instead we have multiple, which means different SC volumes, which we spread over the 2 SCs and enable LV on them. Regards Joerg
You should always:
- Place the new member into an empty pool, or into the one you use for maintenance
- Ping all iSCSI network interfaces (a sketch for this follows below)
- Perform a CM failover to also check the standby network
- Update all members to the same FW

Then you can move the new guy into your production pool. But in YOUR CASE I highly suggest placing your hybrid EQL into a dedicated pool: because of the huge speed difference, combined with the larger capacity of the PS6100E, the EQL load balancer will place more data on the old and slow member. Regards Joerg
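For the ping step, a minimal sketch with a hypothetical list of interface IPs; the -c flag is for Linux/macOS, use -n on Windows.

```python
# Ping every iSCSI interface once and report which ones answer.
import subprocess

ISCSI_IPS = ["10.10.10.11", "10.10.10.12", "10.10.10.13"]  # replace with your ports

for ip in ISCSI_IPS:
    result = subprocess.run(["ping", "-c", "2", ip], capture_output=True)
    state = "ok" if result.returncode == 0 else "NO RESPONSE"
    print(f"{ip}: {state}")
```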
For sure they are displayed in the summary during installation/setup. If you click on Hardware in DSM, expand a CM -> IO Ports -> iSCSI, and select the greyed-out local port, the MGMT port is marked and you can see a MAC. But I'm unsure whether this one is really the MGMT port and not the optional one for the FlexPort. If you ping the CM from the same subnet and take a look into your ARP cache, you will find the MAC for sure (see the sketch below). Also... some developer shell access from the command line will bring up the address, I think. Regards Joerg
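A minimal sketch of the ping-then-ARP-cache trick, assuming a hypothetical CM IP and a Linux/macOS box on the same subnet (Windows would use ping -n and arp -a):

```python
# Ping the CM once so the OS learns its MAC, then read the ARP cache entry.
import subprocess

CM_IP = "192.168.0.50"  # hypothetical; replace with the CM's IP on your subnet

subprocess.run(["ping", "-c", "1", CM_IP], capture_output=True)  # populate ARP cache
arp = subprocess.run(["arp", "-n", CM_IP], capture_output=True, text=True)
print(arp.stdout)  # the MAC address shows up in the entry for CM_IP
```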
The username "Admin" is hardcoded in the SC. If you install DSM Datacollector, formaly known as EnterpriseManager, the installer ask for a username during installation. If youre smart guy you had use... See more...
The username "Admin" is hardcoded in the SC. If you install DSM Datacollector, formaly known as EnterpriseManager, the installer ask for a username during installation. If youre smart guy you had use "Admin" as well and the same password for both systems. If not.... just document it 🙂 DSM also can integrated into Window AD so the wrong "admin" isnt a big deal.   Replication is always managed by the DSM(EM) and not directly on a single storage.   In the early days you can install a secondary DSM(EM) but i never have uses this. The installer ask if "this" instance is a secondary one. I have no clue what its good for. We use the VirtualAppliance since 2.5 Years now also for our LiveVolume setups.   The SC4020 comes with a BMC on each controller but Dell advised every customer to modify the IP to a non-routable network like 169.254.0.0. Newer SCs like 3020 and 5020 comes with a iDRAC. For real DR and Troubleshooting you should cabling the 2 serial cables.   Regards Joerg
As always... you have to create a "Server" on the SC, and during this process the server's SAS HBA will be registered on the SC, so be sure that your server is up and running and all cables are in place and connected. Because it is sometimes very hard to identify an HBA's WWN, we always add one server after another and never two at the same time. We use the hostname of the physical server as the name for the "Server" object. Later you have to use this Server object to create an ACL and specify whether a server gets access to a volume or not. If you place multiple servers into one Server Cluster object, you can use that for an ACL as well. This makes sense within a vSphere ESXi or similar environment. Regards Joerg
EQL always needs a switch pair with an ISL. How do you think you will get ASM and EQL FW without a Dell support contract for your array? Regards Joerg
Drive FW != Array FW. For updating the Drive FW no restart is needed. For sure you should update the Array FW, because 8.1 is very old. Take a look at SANHQ as well as at ASM for your Hyper-V hosts.
Some notes:

1. There is a much more recent custom ISO for 6.5u2: VMware-VMvisor-Installer-6.5.0.update02-10719125.x86_64-DellEMC_Customized-A07.iso. You can download it from https://downloads.dell.com/FOLDER05330276M/1/VMware-VMvisor-Installer-6.5.0.update02-10719125.x86_64-DellEMC_Customized-A07.iso
2. There is a tech document that says configuring an SC that uses SAS directly with the DSM Client is not supported (for us it works). You should configure the array via the DSM Data Collector (formerly known as Enterprise Manager).
3. I don't think the SC makes a huge difference between 6.0 and 6.5 host types.
4. Most important, please read and understand: https://www.dell.com/support/article/us/en/19/how11081/preparing-vmware-esxi-hosts-to-attach-to-scv20x0-scv30x0-sc4020-sc5020-sas-arrays?lang=en
5. I always take a look at the best practices guide and apply the advanced settings manually to the host (a sketch of scripting one such setting follows below).

Regards Joerg
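As an illustration of point 5, a minimal sketch that pushes one advanced setting over SSH instead of clicking through the host client. Disk.AutoremoveOnPDL is one of the settings discussed for SAS-attached SC arrays, but the value 0 here is only for illustration; verify it against the KB in point 4 and the best practices guide. The hostname is hypothetical, and SSH with key-based root login is assumed.

```python
# Sketch: apply one ESXi advanced setting over SSH instead of via the UI.
# Assumes SSH is enabled on the host and key-based login for root works.
import subprocess

HOST = "esxi01.example.local"  # hypothetical ESXi host

cmd = [
    "ssh", f"root@{HOST}",
    # AutoremoveOnPDL is covered by the SAS KB linked above; confirm the
    # right value for your environment before applying it.
    "esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```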
We used an entry-level DR4100 with 9TB for years but were unable to get a support renewal because the product isn't available any more. We swapped the DR for a DD3300 16TB and found that the backup time doubled. Upgrading DDOS to 6.1 and also to the latest vRanger version doesn't change anything. We didn't expect this performance drop with 5-year-younger HW and the same disk count. Right now we just compare the runtime of the largest vSphere VM: the DR only needs 90min and the DD up to 3h. Both units use 10G SFP+ networking. Any hints how to speed this up, or is this just the way a DD3300 works? It was a surprise to us to find out that the DD3300 boots a hypervisor first and all the magic comes from a VM. Regards, Joerg
14 months since GA of vSphere 6.7. Not bad....