Your cut and paste of 阿超 is correct & cool; it's a simplified Chinese name :>)
In many ways the VMAX platform can be treated very much the same as a DMX (or at least as the DMX3s I'm used to). That isn't entirely true if you are going to use Virtual Provisioning, FAST VP, etc., but for basic operation the biggest difference I found when we implemented our first VMAX was the concept of mapping and masking using masking views. Masking views are a lot more like the concept of Storage Groups on the CLARiiON/VNX platforms, but with some added granularity and flexibility.
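To make the masking view idea a bit more concrete, here's a rough SYMCLI sketch of building one. All the names, the SID, the device range, and the director ports below are invented for illustration, so treat this as the general shape rather than a recipe for your array:

```shell
# Hypothetical example only -- SID, group names, devices, and ports are made up.
# A masking view ties together three groups: initiators, ports, and devices.

# 1) Initiator group: the host's HBA WWNs
symaccess -sid 1234 create -name web01_ig -type initiator -wwn 10000000c9aabb01

# 2) Port group: the FA director ports the host is zoned to
symaccess -sid 1234 create -name web01_pg -type port -dirport 7E:0,8E:0

# 3) Storage group: the devices to present to the host
symaccess -sid 1234 create -name web01_sg -type storage devs 0A00:0A03

# 4) The masking view maps and masks everything in one shot
symaccess -sid 1234 create view -name web01_mv -sg web01_sg -pg web01_pg -ig web01_ig
```

The nice part compared to the old map/mask dance is that adding a device to the storage group (or a WWN to the initiator group) automatically flows through every view that references the group.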
One of the major conceptual differences between the DMX and VMAX architectures is the removal of the full backplane that was deployed up front on the DMX. With DMX you would deploy an array and then populate the backplane with a mixture of front end & back end directors as well as memory directors. With VMAX you deploy engines pre-populated with FE & BE directors and cache memory. You do have flexibility in the port counts and types on the FE side of things, but the directors are already there. When you need more directors (of either type) you deploy more engines and they connect through the "virtual backplane". Right now you can expand up to a maximum of 8 engines, but with the backplane being virtual I'm curious whether we will soon be hearing about greater growth potential, since there is no longer a physical chassis limitation (I've heard rumours, but I'm wondering if there will be more announced at EMC World).
So up to now (ignoring VP & FAST VP), even with the hardware platform changes there aren't a lot of differences in the way you plan deployment of hosts and applications on the array. Maybe if you have a super demanding application you might take advantage of some of the optimizations in the new architecture, but I suspect that type of deployment and optimization would involve EMC Professional Services for planning and validation. I'm talking more about the average user base.
All that being said, adding in Virtual Provisioning (which I believe was an option on the DMX4) and FAST VP (which I THINK is limited to VMAX) is a game changer from a planning and configuration perspective. When I qualify my thoughts on the DMX4 you should know that I've never worked on that platform, so I'm just going by what I heard and think I remember. VP & FAST VP can fundamentally alter the performance capabilities, disk layout concepts, and provisioning best practices for hosts and applications. What I don't think it really changes is your front end connectivity for hosts (although I guess it could potentially require more FA ports to a host if you are really going to drive higher performance than you COULD before).

I'm not going to go into deep dive detail on FAST VP as there are lots of white papers on it, but the whole idea of being able to spread your backend disk load much wider than we could easily do before, as well as adding in the automated capability to ensure that the auto-tiering has data sitting on the appropriate tiers to meet performance even as it changes over time... Let's just say I've been looking forward to FAST VP on Symm for a while now, and will finally get to put it into full play in the next few months (we've been using FAST VP on VNX for some time now).
Sadly I have no idea what it actually says or how to even begin thinking about pronouncing it, but fortunately that didn't keep me from cutting and pasting 🙂
You are right, RRR... it shouldn't come as a shock (although I talked to one admin once who found out when the vendor arrived onsite to do the install). In some ways I think some of the concepts might be easier to grasp for a less experienced CLARiiON admin moving up to Symm than for a moderately to very experienced admin, just because there are things you may have to unlearn to avoid incorrect assumptions based on your previous experience. I did a lot of reading and asked a lot of questions, as well as taking SRDF and TimeFinder courses, but there was no basic Symmetrix course like the CLARiiON & VNX ones. All the Symm courses were on specific parts of Symm technology (like TimeFinder & SRDF, or performance).
Yes, booting from SAN is probably one of the greatest sounding ideas that turns out to be almost more painful than it is helpful once you get into it. We try to avoid it as much as possible, although it tends to go a bit smoother on the Symms than it does on the CLARiiON/VNX line. Something to do with the active/active vs. active/passive pathing (or active/"sort of active" with ALUA).
Yep, I believe the backend architecture is the biggest change, just as your answer says. Thanks a lot!!
Thanks Allen for the answers
If I would like to think about VMAX & DMX configuration based on the host-side application from the user's perspective, would you please recommend any documentation for reference? Examples, how-tos, etc.
Hey experts, I have another question:
Suppose I have a Linux host with several disks and I'm migrating from CX to VNX, or DMX to VMAX, or VMAX to CX... in fact it doesn't matter at all, since my question is about the point where Linux finally sees the new LUNs I've presented and I've made sure the data has been copied over from the source to the target LUN. Linux simply doesn't recognise the new LUNs as the old ones. For Linux it's like the old LUNs are gone and it received a bunch of new LUNs. Do you have any ideas about how to migrate data from an old storage system to a new system without losing data? Ok, host-based mirroring is an option that seems foolproof, but I'd like options that don't involve the host itself.
Depending on what exactly you are looking for here there are two series of whitepapers I would recommend.
For host connectivity in general there is a series of host connectivity guides targeted at various operating systems that go deep into the gory details of connectivity to various EMC storage platforms and all the details you need to consider.
If you are looking for more specific application configuration (and associated storage configuration) there is a series of whitepapers on specific applications (e.g. Microsoft Exchange, Oracle, etc.) that dive deep into those topics.
You can find all those whitepapers on Powerlink.
This seems to be more of a host based issue, since the way a host recognizes a LUN is driven by the OS itself. I no longer do a lot of in depth work on the OS side of things... I work WITH the server specialists, but I rely mostly on their specialization. Maybe dynamox will have more insight on this one, since I believe he still does a fair bit of host based work too (and we all know he just loves Linux *lol*)
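To add a bit of Linux-side context on why the new LUNs look like strangers (device names and IDs below are invented for illustration): the host keys a LUN's identity off the array-assigned SCSI WWID, which necessarily changes when you move to a new array, while a filesystem UUID lives inside the data itself and so survives a block-level copy. Something like this sketch:

```shell
# Hypothetical devices -- /dev/sdc is the old LUN, /dev/sdd is the new one.

# The SCSI WWID (from VPD page 0x83) is assigned by the array, so it
# changes with the migration -- this is why Linux sees "new" LUNs:
/lib/udev/scsi_id -g -u /dev/sdc    # WWID from the old array
/lib/udev/scsi_id -g -u /dev/sdd    # a different WWID on the new array

# The filesystem UUID is stored on the disk itself, so a block-level
# copy carries it over to the target LUN unchanged:
blkid /dev/sdd1

# Mounting by UUID in /etc/fstab means no fstab edits after cutover:
# UUID=0f3acd12-1a2b-4c3d-9e8f-001122334455  /data  ext4  defaults  0 2
```

So if the filesystems were already mounted by UUID (or label) rather than by /dev/sdX name or WWID, the host side of the cutover can be as simple as a rescan and remount; multipath aliases pinned to the old WWIDs are the main thing that would still need updating.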