Welcome to the EMC VMAX3 community Ask the Expert conversation. On this occasion we will be covering new VMAX3 features involving the built-in hypervisor, the Embedded NAS offering, and local and remote replication strategies. Among the many areas we will be discussing, our experts will answer your questions regarding best practices, supported configurations, challenges with multi-site replication, and consolidation opportunities to combine block and file workloads on VMAX3.
Meet Your Experts:
Principal Corporate Systems Engineer - EMC
Paul started his career at EMC 10 years ago in tech support, working on the OSAPI Unix team. After a few years he continued his career path in the Proven Solutions arena, working with the Oracle and SAP proven solutions teams to produce white papers and proven solutions guides focusing on integration with EMC products. This involved the design, build, and test of full EMC SAN environments with the core EMC technologies: VMAX, VNX, RecoverPoint, and Data Domain. He is currently working as a Principal Corporate Systems Engineer in the Core Technologies Division, focused on VMAX.
Corp. Systems Engineer, Symmetrix Local Replication - EMC
Michael joined EMC from Wentworth Institute of Technology. He worked in the PSE Lab for about 7 years in both Hopkinton and Sydney, Australia, then moved to the Corp SE team focusing on Symmetrix hardware and Enginuity. He moved to the Enginuity Local Replication team in January 2010 and is now back as a Corp SE focusing on Local Replication. During his tenure at EMC, Michael has worked with the following technologies: Symmetrix hardware and Enginuity, drive sparing, TimeFinder, Snap, SnapVX, VP Snap, and Clone.
VMAX Corporate System Engineer - EMC
Kevin joined EMC's Global Solutions team as a Solutions Engineer in 2005 working on all EMC storage products. After a few years he started working on the original design and testing of the Vblock offering, later continuing his career with VCE. At VCE Kevin started in customer proof of concepts, then the engineering team that designed the Vblock 300 and 700 series platforms.
In 2013 he rejoined EMC, and he is currently working as a Principal Corporate Systems Engineer in the Core Technologies Division, focused on VMAX with Embedded NAS.
Consulting Corporate Systems Engineer - EMC
Mike has been with EMC for over 15 years and part of the VMAX engineering team for the past 10 years. Mike's areas of expertise include SRDF, ORS, FLM, Access Controls, User Authorization, Host IO Limits, Performance, Databases, Code Development, and FAST.
This discussion takes place from February 23rd - March 6th. Get ready by bookmarking this page or signing up for e-mail notifications.
Share this event on Twitter or LinkedIn:
Thank you for your interest in VMAX3. Note that this forum is focused on the extensive feature set offered with VMAX3 today, specifically TimeFinder, eNAS, and SRDF. EMC is continuously evaluating additional features for VMAX3 arrays.
Roadmap discussions have always required specific approvals and local account team involvement. Please contact your account team for information on future functionality. They can certainly contact us and the product managers outside of the forum if they need assistance.
Welcome to the ECN Ask the Experts event!
This discussion is now officially open for questions. Let's make this an informative and respectful channel for the exchange of information about VMAX3.
I would like to start out with two questions:
1) How is this different from a gateway model? (I used to use a CFS-14 with a DMX2000.)
2) Why embedded NAS on a VMAX when you have an outstanding platform in Isilon ?
The internal eNAS Data Mover runs on the VMAX3 hypervisor as a VM container; we run virtual versions of the control stations and the Data Movers. We keep the active Data Movers and control stations on different directors from the standbys to ensure the highest availability. We use an internal cut-through driver (essentially the HBA for the Data Movers) to access the storage through internal virtual front-end ports, and we have a secure internal network for all the internal communication and management.
Provisioning to eNAS is pretty much the same as provisioning to any host: a masking view is already configured for you when the array is installed. You can then provision storage to the Data Movers with or without a Service Level (essentially a target response time), so you get all the functionality of a VNX plus the added benefits of VMAX3 SLOs and the robustness of the VMAX hardware platform.
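As a rough illustration of SLO-based provisioning from the Solutions Enabler CLI, it looks something like the sketch below. The array ID, storage group name, SRP name, and device numbers here are all hypothetical, and exact options can vary by Solutions Enabler version, so treat this as a sketch rather than a definitive procedure:

```shell
# Create a storage group tied to a Service Level (hypothetical SID, names).
symsg -sid 0123 create enas_pool1_sg -slo Gold -srp SRP_1

# Add the thin devices the Data Movers should see (hypothetical device IDs).
symsg -sid 0123 -sg enas_pool1_sg add dev 00A1
symsg -sid 0123 -sg enas_pool1_sg add dev 00A2
```

From there the devices surface to the Data Movers through the pre-built eNAS masking view and can be carved into NAS storage pools and file systems from Unisphere or the control station.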
We have a lot of information in the technote; the doc does a much better job on the positioning of the product than I ever could.
Great question on why we offer embedded NAS on VMAX3 when Isilon is a great platform for enterprise file storage. Here is the logic.
VMAX represents the leading storage solution used by customers to store, protect, and replicate mission-critical block storage. While this is the primary reason customers buy VMAX today, many have expressed interest in storing moderate amounts of mission-critical or Tier 2 file capacity on VMAX to consolidate islands of storage in their data center. VMAX3 unified allows customers to maintain “one infrastructure” to manage both large amounts of mission-critical block storage vital to their business with moderate amounts of file capacity.
This approach helps our customers reduce total cost of ownership since embedded NAS runs directly on VMAX3 hardware resources and is managed through the familiar Unisphere interface. An added benefit is that embedded NAS extends the value of VMAX to file by offering rich data services across both block and file storage. Examples of VMAX3 data services spanning across block and file capacity include FAST, service level provisioning, Dynamic Host IO Limits, and Data at Rest Encryption.
Isilon remains the premier scale-out NAS solution from EMC for >150TB file installs with its ability to seamlessly scale linearly from simple 3-node designs to 144 nodes providing tremendous throughput that is required from large data sets. VMAX3 unified consolidates huge amounts of mission-critical block storage combined with storing moderate amounts of mission-critical or Tier 2 file data as mentioned above.
Hope this helps.
VMAX Business Unit
Thank you for your reply, a few follow up questions:
When we refer to moderate amounts of file storage, we expect most customers to start off with < 200TB of usable file capacity and grow to larger amounts over time. VMAX3 supports up to 768TB of usable file storage capacity with a maximum file system size of 16TB. Note that some customers will elect to deploy up to 768TB of file data on VMAX3 at initial deployment; we just do not expect that to be the norm.
Your other questions will be better answered by Paul or Kevin on this panel.
Thanks again for your interest in embedded NAS on VMAX3.
Dynamic Host IO Limits treats eNAS as just another host. You can control how much IO is allotted to the child SGs that make up the separate NAS storage pools. You cannot individually control how much IO is given to individual file systems.
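As a sketch of what a per-SG limit looks like from the CLI (hypothetical array ID, SG name, and limit values; the exact flag names may differ on your Solutions Enabler code level, so verify against the CLI documentation):

```shell
# Cap the child SG backing one NAS storage pool at a fixed IO rate and
# bandwidth (hypothetical values: 10,000 IO/s and 500 MB/s).
symsg -sid 0123 set enas_pool1_sg -iops_max 10000 -bw_max 500
```

The limit applies to the child SG as a whole, which is consistent with the answer above: the cap lands on the NAS pool, not on any individual file system within it.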
eNAS code upgrades are performed as part of the HYPERMAX OS upgrade. During the upgrade you are asked whether the Data Movers should be rebooted automatically or manually for the eNAS code upgrade to take effect.
eNAS can replicate right now using asynchronous IP-based File Replication.
With all previous generations of Symmetrix, the only way to achieve SRDF consistency across multiple arrays required an external host managing that consistency and the cycle switching. With the hypervisor now built in at the array level, is there an opportunity (now or on the roadmap) for that functionality to move back into the array (as long as a VMAX3 is involved) and drop the requirement for an external management host?
I've seen several references to the removal of support for metas on VMAX3. How does that work if I have to use SRDF to get data to a VMAX 20K that requires metas?
Managing SRDF consistency across multiple arrays is still done via a host running Solutions Enabler and the SRDF daemon. I can't go into the roadmap here, but right now that management piece hasn't changed.
On your second question about metas: all you need to do on the VMAX3 is create a pairing of the same size or bigger on the V3. VMAX systems connecting via SRDF must be running Enginuity 5876.272.177.
We have significantly improved SRDF operation under the covers, with more concurrency on writes for SRDF/S and multi-session cycles for SRDF/A.
For configuring SRDF you should definitely check out Unisphere 8.x; the management interface makes it pretty simple to configure everything for you. On V2 and V3 you can literally just right-click and select Protect and SRDF; the wizard will work out the number of hypers and create a device group for management. Worth looking at.
It is also important to point out that the VMAX3 HYPERMAX OS with Solutions Enabler 8.0.1 and beyond relaxes some of the existing SRDF device pairing requirements with respect to meta devices and device sizes. In these cases, pairing devices of mismatched sizes may result in a smaller to larger device size pairing with restrictions resulting for SRDF restore, failover, SRDF/Star, and swap operations. As such, it is critically important to understand the implications of pairing devices of differing sizes and configurations with the new platform.
HYPERMAX OS does not support meta devices; however, device pairs between non-meta devices on an array running HYPERMAX OS and meta devices on an array running Enginuity 5876 are supported. Device pairs between a non-meta device (HYPERMAX OS) and a meta device (Enginuity 5876) may be either concatenated or striped.
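For reference, such a mixed-generation pairing is typically created from a device-pair file with `symrdf createpair`. A minimal sketch (the array ID, RDF group number, and device numbers below are hypothetical):

```shell
# pairs.txt lists one "<local dev> <remote dev>" pair per line, e.g.:
#   001AB 00456
# The HYPERMAX OS side uses ordinary thin devices; the 5876 side may be a meta.
symrdf createpair -sid 0123 -rdfg 10 -file pairs.txt -type R1 -establish
```

Given the relaxed pairing rules described above, it is worth double-checking device sizes in the pair file first, since a smaller-to-larger pairing carries restrictions for restore, failover, SRDF/Star, and swap operations.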
We are a mainframe shop. Is there anything I need to worry about, understand, or take into consideration when dealing with CKD devices, or with SRDF/S of CKD between VMAX2 and VMAX3?
I guess you are going to be a bit behind me on the VMAX3, dynamox. There is no CKD support yet on the VMAX3 platform, and our account team can't even give us an estimate on when it will be there. We were told that the 40K is the "go to" platform for mainframe for the foreseeable future. I'm sure they do have it on the roadmap, but nobody seems to be talking timelines yet.
Thanks for your interest in mainframe support on VMAX3. EMC's development team is working to deliver CKD support on VMAX3, but the engineers in this session do not have access to mainframe roadmap information. Unfortunately, you will need to contact your EMC account team for the latest status.
VMAX Business Unit
This Ask the Expert event has concluded. We hope that you received answers to all of your questions during the two-week period of this event. We would like to thank you for your participation, but special thanks go out to our SMEs for hosting this event. We appreciate you selflessly taking the time to help our users on ECN!
Stay tuned for more Ask the Expert events on VMAX3. See you soon!
Do we have any best practices for getting the FC cabling done on the VMAX3?