bhalilov1's Posts

Yes, Step 6 changes your protection domain on the PROD cluster from "Write Disabled" to "Writable". If you run the forward policy at this point it will fail, since your DR side is still Writable as well. In Step 8, running resync-prep from PROD changes the state of the domain on DR to "Write Disabled".
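My reading of the steps above can be sketched as a dry run that only prints the OneFS commands in the order described - `prod-to-dr` is a hypothetical policy name, and nothing here touches a cluster:

```shell
#!/bin/sh
# Dry-run sketch of the failback ordering described above (OneFS 7.x SyncIQ).
# POLICY is a hypothetical SyncIQ policy name; substitute your own.
POLICY="prod-to-dr"

# Build the plan instead of executing anything.
PLAN="step 6 (PROD goes Writable):  isi sync recovery allow-write ${POLICY}_mirror
do NOT run the forward policy here - both domains are Writable
step 8 (DR -> Write Disabled): isi sync recovery resync-prep ${POLICY}_mirror
verify domain states:          isi_classic domain list"
echo "$PLAN"
```

The `allow-write` pairing for step 6 is my assumption; the post only names the domain-state effects and the resync-prep command.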
When the *_mirror policy is created on the DR cluster, a protection domain is created on your PROD cluster; you can see it if you run isi_classic domain list. When you're finished and have failed back to PROD, the domain is in the Writable state. If you delete the *_mirror policy, the domain also gets deleted. The next time you fail over and run "isi sync recovery resync-prep" to fail back, you will have to wait for that domain mark job to run again, and it may take a long time. On the other side of the coin, if you decide to keep the mirror policy and the protection domain on your PROD cluster, you will not be able to mv files into the directory that's under the domain, even when it's in the "Writable" state.
How about:
symsg -sid 1234 show host1_sg
symdev -sid 1234 list -R1 -sg host1_sg
or
symsg -sid 1234 list -v
I tried this on a busy directory and ended up with thousands of SIQ snapshots that the Isilon couldn't keep up with deleting.
Dynamox, if you ever tried failover/failback, you'd have a protection domain created on the source cluster by the mirror policy. You can check with the command above, and also with isi sync target list. From what I learned from support, even if the domain is "SyncIQ,Writable", mv into it is not allowed. The workaround is to go to the target cluster and delete the *_mirror policy - this will delete the "SyncIQ,Writable" protection domain on your source cluster and allow you to mv into that directory.
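That workaround can be sketched as a dry run - `prod-to-dr` is a hypothetical policy name, and this only prints the command rather than deleting anything:

```shell
#!/bin/sh
# On the TARGET cluster, deleting the *_mirror policy removes the
# "SyncIQ,Writable" protection domain left behind on the source cluster.
POLICY="prod-to-dr"        # hypothetical policy name; substitute your own
CMD="isi sync policies delete ${POLICY}_mirror"
echo "$CMD"
# After the domain is gone, mv into that directory works again; the cost
# is waiting for the domain mark job to re-run on the next failback.
```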
Yes, corrected
Very good, thank you. With VMCP the cross-connect becomes complete overkill. And since with the "non-uniform" (non-cross-connect) setup we can use the RR policy, it simplifies the config a great deal.
Gary, this is excellent indeed. How do you think this will affect the cross-connect requirement?
Here is a simple Excel file to generate the commands for online meta expansion. It will create a BCV meta as big as the original, use it to expand online with data protection, then clean up. Fill in the following cells:
- Meta to expand
- Number of members (including the meta head) of the existing meta
- First device to add and number of members to add (these members should be existing devices with the same size as the originals)
- Protection meta start - the start of the range of members that will be used as the protection BCV meta during the expansion. Starting with this device there should be enough consecutive devices to create the BCV meta with the same member count as the original. Same size requirements as above.
- POOL - name of the thin pool for the original and protection metas. Make sure you have enough space; the BCV meta needs to be fully provisioned before it's used for expansion.
This works for TDEVs. For traditional devices, remove the bind/unbind steps.
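The generated commands boil down to a symconfigure sequence roughly like the sketch below. The SID, device ranges, meta head, and pool name are all placeholders, the exact provisioning/allocate handling depends on your Enginuity level, and the script only prints the commands rather than running them:

```shell
#!/bin/sh
# Placeholders - substitute your own values.
SID=1234
META=0E0            # meta head of the meta to expand
NEW=0F0:0F3         # members to add (same size as existing members)
BCV_HEAD=1A0        # first device of the protection BCV meta range
BCV=1A0:1A7         # consecutive devices, same member count/size as original
POOL=FC_Pool        # thin pool; fully provision the BCV meta before expanding

CMDS="symconfigure -sid $SID -cmd \"form meta from dev $BCV, config=striped;\" commit
symconfigure -sid $SID -cmd \"bind tdev $BCV_HEAD to pool $POOL;\" commit
symconfigure -sid $SID -cmd \"add dev $NEW to meta $META, protect_data=TRUE, bcv_meta_head=$BCV_HEAD;\" commit
symconfigure -sid $SID -cmd \"unbind tdev $BCV_HEAD from pool $POOL;\" commit
symconfigure -sid $SID -cmd \"dissolve meta dev $BCV_HEAD;\" commit"
echo "$CMDS"
```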
Sean, besides symdisk list -failed, is there any way to see if there are sparing sessions going on? (SYMCLI)
If you need the size of a directory and all of its content (subdirectories included), you can query the sqlite3 database that the FSAnalyze job creates. I'm not sure if you need InsightIQ to be able to schedule the FSAnalyze jobs:

ptc-3# sqlite3 /ifs/.ifsvar/modules/fsa/pub/latest/results.db 'select system_squash_map.path, disk_usage.phys_size_sum/1024 from system_squash_map join disk_usage on system_squash_map.lin = disk_usage.lin where system_squash_map.path like "%big%";'
dm/bigdir|486389
dm/samebigdir|486341
dm/big1|486341
dm/verybig|486365
Please correct me if I'm wrong, but wouldn't round robin cause really inefficient caching?
Hi Drew, why are there no metrics to monitor VAAI operations with Unisphere?
From the VMware KB "Frequently Asked Questions for vStorage APIs for Array Integration (1021976)", this will list all visible devices with their capabilities:

esxcli storage core device vaai status get

esxtop will show statistics for VAAI operations too. It's a bit difficult to find at first - press u for disk devices, then f to select fields. The VAAI stats are not visible by default; you need to press o and p:
O: VAAISTATS = VAAI Stats
P: VAAILATSTATS/cmd = VAAI Latency Stats (ms)
RLY? Give us the tool back, EMC! Who is the product management genius that decided this, and how much of another product's sales is Performance Viewer eating? Or how much PS revenue will EMC gain? I also keep at least 3 months of UPV files - when we have performance-affecting events, we need to go back, review, and compare. I have one SG for my entire VMware cluster - 230TB - and I need volume-level performance data; historical is useless in this case. Here is another reason: a year ago I went to the Symm Performance training - the tool they used for the labs was Performance Manager, 10 years old, completely obsolete, and part of the discontinued ECC. 3 months later my organization paid another 3k for the newer Symm Performance training - 100% based on UPV. Now that tool is not available?
And Happy Monday, 7.2 is out https://support.emc.com/docu56010_Isilon-OneFS-7.2-Release-Notes.pdf?language=en_US
We have an Isilon cluster at a remote unmanned site. Last week during a node firmware upgrade, and then this week during a disk firmware upgrade, a node would reboot and never come back. We have 32000x-ssd and X400 nodes - can we attach a serial console or KVM to them, and which is better? Is there a way to do remote power management?
Worked perfectly for the IIQ 3.01 vApp upgrade to 3.1. 3.1 has higher requirements for CPU and memory; I changed to 4 cores/16GB.
There is a very nice graph in Unisphere for VMAX that will show you how storage was allocated/moved between the tiers.
Select your VMAX, Performance -> Analyze, select Historical.
Double-click the Array SID, then select the "Storage Groups" tab.
Double-click the Storage Group you need to see, then go to "Virtual Pool Tier"; you will see the tiers that this SG is allocating from.
Ctrl-select all tiers, then select the "Allocated Pool Capacity" metric and draw a graph. You will get something like this:
In my case the volumes were bound to the FC tier FCR1 (yellow) and migrated from another array (using Open Replicator). After the migration finished, FAST VP moved the majority of the allocations to the SATA tier SAR6 (blue line) and some went to the Flash tier EFR5 (red line).
Thinking about enabling access time tracking for the purposes of tiering files with SmartPools. We have the default "Metadata read acceleration" as the SSD strategy. We have some directories with web pages (heavy reads) that I'm concerned will be impacted, since if access time needs to be updated, that's a metadata write. How does choosing the time precision work? If I choose a precision of 1 day, does that mean the metadata of a file will be updated only once per day, even if the file is opened and read 1000 times that day?
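For reference, on OneFS access time tracking is controlled by cluster-wide sysctls; the sketch below only prints the commands rather than applying anything. The grace period is the precision expressed in milliseconds, and my understanding (worth confirming with support) is that the stored atime is refreshed at most once per grace period, not on every read:

```shell
#!/bin/sh
# Dry-run: print the cluster-wide sysctls that control atime tracking.
# 86400000 ms = 1 day of precision (the "grace period").
GRACE_MS=$((24 * 60 * 60 * 1000))
PLAN="isi_sysctl_cluster efs.bam.atime_enabled=1
isi_sysctl_cluster efs.bam.atime_grace_period=$GRACE_MS"
echo "$PLAN"
```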