I was told some time ago that 7.1 would include a number of enhancements to network configuration.
Things like per-subnet DNS and default gateway settings, but also better integration with standard Windows tools.
What can you tell us about these points?
> Is the limitation of maximum 5 simultaneously running SyncIQ policies still valid? If so, is this a hard or soft limit?
This limit is still in effect. What workflow are you trying to set up that needs to go past it? The limit is technically configurable, but raising it can allow SyncIQ to overrun a cluster, so changing it is not normally supported.
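To make the behavior of a concurrency cap like this concrete, here is a minimal Python sketch. It is purely illustrative, not OneFS code; the function name and queueing behavior are invented for the example. The point is that policies past the limit queue rather than fail:

```python
from collections import deque

MAX_RUNNING = 5  # illustrative stand-in for the SyncIQ concurrency limit


def schedule(policies, max_running=MAX_RUNNING):
    """Return (running, queued): the first max_running policies start
    immediately; the rest wait for a free slot."""
    pending = deque(policies)
    running = [pending.popleft() for _ in range(min(max_running, len(pending)))]
    return running, list(pending)


# With 8 policies and a cap of 5, three simply queue rather than error out.
running, queued = schedule([f"policy-{i}" for i in range(8)])
```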
> Does the new Job Engine have a way to control the priority of the SyncIQ target-aware sync?
The Job Engine and SyncIQ subsystems are entirely separate -- there is no way to control the CPU usage of a target-aware initial sync at this time.
> Also, are there any enhancements to MMC integration?
MMC integration was not a part of the OneFS 7.1 release. The current plan is to look at tighter MMC integration sometime in 2014, but that plan is subject to change.
Sorry the answers weren't what you were looking for!
1. If a group of folders is deduplicated and mirrored to another cluster using SyncIQ, will the data be transferred deduplicated, or will it be transferred and stored on the target cluster in its "fat" state?
2. How is the Dedupe functionality licensed? Is it per cluster, per TB, or something else?
> As for the SSD/GNA topic, I was referring to the setting of sysctl efs.bam.layout.ssd.sys_btree.mirror.
This is indeed wandering a bit off-topic, as that setting was introduced in OneFS 7.0, but the general point is that the system b-tree is a different structure from what GNA covers. GNA puts copies of file/directory metadata on SSDs; as I understand it, the system b-tree essentially holds metadata about inodes, which are a level of abstraction above files/directories and are not covered by the GNA code.
This is why there is a little bit of overhead when upgrading to 7.0 with SSDs: usage does go up, but so long as the 6.5 system was reasonably close to the supported guidelines for SSD count and density in the cluster, the extra usage from mirroring system b-trees will not amount to more than a few percentage points of overall SSD usage.
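As a back-of-the-envelope illustration of why that overhead stays small, here is a hedged Python sketch; the b-tree size, mirror count, and SSD capacity figures are all invented for the example, not real OneFS numbers:

```python
def extra_ssd_usage_pct(btree_gb, extra_mirrors, ssd_capacity_gb):
    """Extra SSD consumption (as % of total SSD capacity) from storing
    extra_mirrors additional copies of the system b-tree."""
    return 100.0 * btree_gb * extra_mirrors / ssd_capacity_gb


# Hypothetical cluster: 40 GB of system b-tree data, 2 extra mirrors,
# 4 TB of total SSD capacity -> 2% additional SSD usage.
overhead = extra_ssd_usage_pct(40, 2, 4000)
```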
> I was told some time ago that 7.1 would include a number of enhancements to network configuration.
I'm not sure who told you those would be included in OneFS 7.1, but I'm afraid they didn't make the cut. These are still planned as improvements to the Access Zone feature (so that we can support per-Access Zone-level network configuration), but I do not have information I can share regarding when they will be available at this time.
Additional MMC integration (which I am guessing is what you are hoping for with "additional integration with standard Windows tools") also relies on similar improvements to Access Zones. Basic MMC integration works, generally speaking; what specifically are you looking for with Windows integration?
For MMC, I would like to see the ability to create a share from MMC in just one zone. And event logging!
The network part was introduced to us at EMC World and during a one-on-one by David (if you want his full name, mail me).
> 1. If a group of folders is deduplicated and mirrored to another cluster using SyncIQ, will the data be transferred deduplicated, or will it be transferred and stored on the target cluster in its "fat" state?
In this release, SyncIQ does not have the ability to transfer deduplicated data in a deduplicated state. The data would be "rehydrated" and sent in full to the target cluster. That said, it is perfectly okay to run dedupe on the target cluster as well -- SyncIQ is smart enough to do the right thing on incremental runs even if the target has been deduplicated, and it is possible to deduplicate data that is marked read-only due to being part of a SyncIQ target.
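A quick worked example of what rehydration means for transfer size (the function and the figures are illustrative, not SyncIQ internals):

```python
def rehydrated_transfer_gb(logical_gb, dedupe_ratio):
    """SyncIQ sends the full logical data set; on-disk dedupe savings on
    the source do not reduce the bytes sent over the wire."""
    physical_gb = logical_gb / dedupe_ratio
    return {"on_disk_source": physical_gb, "on_wire": logical_gb}


# 10 TB logical at 2:1 dedupe occupies ~5 TB on the source's disks, but a
# full sync still transfers the whole 10 TB. Running dedupe on the target
# afterwards can recover the savings there.
sizes = rehydrated_transfer_gb(10_000, 2.0)
```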
> 2. How is the Dedupe functionality licensed? Is it per cluster, per TB, or something else?
Per cluster, like all previous OneFS software features.
> On 6.5.5 it is set to 1, on 7.0 it has been found to be set to 3,
> but if that is not actually the default on 7.0 or 7.1 then I don't see
> an issue for us.
On our cluster the sysctl was found to be set to 3. We decreased it to 1 to increase the speed of SnapshotDeletes, as instructed by support.
Hello to both Peter and shhh!
I did a bit of digging and found that some layout inconsistencies were cropping up with the setting you mentioned (efs.bam.layout.ssd.sys_btree.mirror set to 3).
As a result, the default has been reverted to 1 (a single mirror) for that sysctl and a few others. This solves a number of problems, not least the slow performance of SnapshotDelete on clusters with fewer than 8 SSDs (likely the issue shhh! was running into).
The new default is applied in two recent maintenance releases (both available now) and in 7.1.0. I apologize for getting this wrong; in my defense, this *is* a pretty late-breaking change...
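As a rough illustration of why the extra mirrors hurt SnapshotDelete, assuming each logical b-tree block update has to be applied to every mirror (the function and the workload size below are invented for the example):

```python
def btree_write_ops(block_updates, mirror_count):
    """Every logical b-tree block update must be applied to each mirror,
    so the physical write count scales with the mirror count."""
    return block_updates * mirror_count


# The same hypothetical SnapshotDelete workload touching 1M b-tree blocks
# issues three times the writes at mirror_count=3 versus mirror_count=1,
# which is why dropping the default back to 1 speeds the job up.
ops_at_3 = btree_write_ops(1_000_000, 3)
ops_at_1 = btree_write_ops(1_000_000, 1)
```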