
March 8th, 2016 15:00

Ask the Expert: VMAX All Flash – Extreme Performance at Petabyte Scale, and Best Practices


Welcome to the EMC VMAX community Ask the Expert conversation. On this occasion we will be covering these exciting VMAX All Flash topics: new architecture engineered for all-flash, achieving extreme performance levels, in-line compression data reduction technology, new V-Brick and Flash Capacity Packs, appliance-based hardware and software packaging, local and remote replication, and rich data services.

Among the many areas we will be discussing, our experts will answer your questions regarding the new VMAX All Flash architecture, how you can achieve extreme levels of performance, deployment best practices, supported configurations and rich data services, challenges with multi-site replication, and consolidation opportunities to combine mainframe, open systems, block, and file workloads on VMAX All Flash.

Watch this fun "hands-on" demo on the differences between Scale Up and Scale Out All Flash


Also, check out these informative VMAX All Flash assets.


Meet Your Experts:


Paul Martin

Principal Corporate Systems Engineer

Paul started his career at EMC 10 years ago in technical support, working on the OSAPI Unix team. After a few years he moved into the Proven Solutions arena, working with the Oracle and SAP proven solutions team to produce white papers and proven solutions guides focused on integration with EMC products. This involved the design, build, and test of full EMC SAN environments using the core EMC technologies: VMAX, VNX, RecoverPoint, and DataDomain. He is currently a Principal Corporate Systems Engineer in the Core Technologies Division, focused on VMAX.


Andrew Lubeck

Consultant Corporate Systems Engineer

Andrew has been with EMC for 16 years. In that time he has worked for Customer Service, Professional Services, and, for the past 10 years, Symmetrix Engineering. The products that he's currently supporting are FTS, FAST.X and ProtectPoint. Andrew also does a lot of work involving migration strategies and best practices.


Mike Adams

Consulting Corporate Systems Engineer

Mike has been with EMC for over 15 years and part of the VMAX engineering team for the past 10 years. Mike's areas of expertise include SRDF, ORS, FLM, Access Controls, User Authorization, Host IO Limits, Performance, Databases, Code Development, and FAST.


James Salvadore

Manager Corporate Systems Engineer

James joined EMC in 2004. His current role involves pre-sales customer support, and he regularly performs demonstrations of EMC technologies for customers. He is also heavily involved in new product introductions, specifically the Symmetrix beta programs.


John Adams

Manager VMAX Global Performance Engineering

John is currently managing the Symmetrix Global Performance Support group. This group handles all of the outward-facing performance topics, from documentation for the Symmetrix performance gurus in the field to serving as the engineering escalation path from customer support level 2. Twitter: @Quincy56.

INTERESTED IN A PARTICULAR ATE TOPIC? SUBMIT IT TO US


This discussion will take place Mar. 14th - 25th. Get ready by bookmarking this page or signing up for e-mail notifications.

Share this event on Twitter or LinkedIn:

>> Ask the Expert: VMAX All Flash – Configuration, Extreme Performance at Petabyte Scale, and Best Practices http://bit.ly/220P4RE #EMCATE <<

March 14th, 2016 06:00

This Ask the Expert session is now open for questions. For the next couple of weeks our Subject Matter Experts will be around to reply to your questions, comments, or inquiries about our topic. Let's make this conversation useful, respectful, and entertaining for all. Enjoy!

2.1K Posts

March 14th, 2016 08:00

Congratulations on this launch folks. I'm looking forward to seeing more about this new product in the Symmetrix line and hopefully getting my hands on one in the near future. I've been slogging through the documentation on it trying to pick out the differences that are most relevant and there is one thing that I'm curious about right from the start. I suspect I will have more questions the further I dig, but for now...

If the initial capacities are set at 53TBu with each engine and the growth capacities are 13TBu per unit... why are there three different drive sizes available at launch? What exactly would drive the need for multiple drive sizes? Is 13TBu different from 13TBu when you order? How do you know which 13TBu you should be ordering?

1 Rookie • 20.4K Posts

March 14th, 2016 09:00

I have an all-flash VMAX3 200K; where does it fit in this discussion? Can I take advantage of some of the licensing? We bought the 200K in December, so can I get the "ALL FLASH" front panel?

5 Practitioner • 274.2K Posts

March 14th, 2016 11:00

Greetings Allen,

Thank you for the kind congratulations. Many people worked hard and long hours to make this launch happen.

The size of the drives in the VMAX All Flash depends on two factors: the total usable capacity ordered and the RAID protection scheme selected for the system (either RAID5 7+1 or RAID6 14+2). The initial V-Brick 53 TBu capacity comprises two RAID groups (one for each director) of 26 TBu each. For systems using RAID5 7+1 protection, the initial V-Brick will use 3.8 TB drives to achieve the 53 TBu. For systems using RAID6 14+2 protection, the initial V-Brick will use 1.9 TB drives to achieve the 53 TBu.

Any additional 13 TBu capacity blocks required by the system also have to be a full RAID group. If your system requires an odd number of 13 TBu capacity blocks to achieve the desired total usable capacity, those capacity blocks could end up using a smaller drive size than was used by the initial 53 TBu V-Brick. A configuration with mixed drive sizes is perfectly fine on the VMAX All Flash. The details of the drive sizes used will change over time as new drives are qualified for the VMAX All Flash, so I wouldn't get overly concerned about understanding the rules behind this. What you need to remember is that your VMAX All Flash will come pre-configured with the best possible drive combinations to achieve your desired total usable capacity and chosen RAID protection scheme.

I hope this helps.

Jim
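
To make the arithmetic in the reply above concrete, here is a rough Python sketch. It is purely illustrative and not an EMC sizing tool: the drive sizes and RAID widths come from the discussion above, while the mapping of a 13 TBu capacity pack to a specific RAID group is an assumption shown only as an example.

# Rough illustration only -- not an EMC sizing tool. Drive sizes and RAID
# widths come from the discussion above; everything else is an assumption.

RAID_SCHEMES = {
    "RAID5 7+1":  {"data": 7,  "parity": 1},   # 8 drives per RAID group
    "RAID6 14+2": {"data": 14, "parity": 2},   # 16 drives per RAID group
}

def group_usable_tb(scheme, drive_tb):
    """Usable TB contributed by one full RAID group of the given drive size."""
    return RAID_SCHEMES[scheme]["data"] * drive_tb

def group_drive_count(scheme):
    """Total drives (data + parity) in one RAID group."""
    return sum(RAID_SCHEMES[scheme].values())

# Initial 53 TBu V-Brick = two RAID groups of ~26 TBu (one per director).
for scheme, drive_tb in [("RAID5 7+1", 3.8), ("RAID6 14+2", 1.9)]:
    per_group = group_usable_tb(scheme, drive_tb)
    print(f"{scheme}, {drive_tb} TB drives: 2 x {per_group:.1f} TBu "
          f"~= 53 TBu using {2 * group_drive_count(scheme)} drives")

# A 13 TBu capacity pack is also one full RAID group, e.g. RAID5 7+1
# built from 1.9 TB drives: 7 x 1.9 ~= 13 TBu.
print(f"RAID5 7+1, 1.9 TB drives: {group_usable_tb('RAID5 7+1', 1.9):.1f} TBu per group")

Running the sketch shows how both protection schemes land at roughly 53 TBu for the initial V-Brick, and why a 13 TBu pack can be built from a smaller drive size than the initial configuration.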

5 Practitioner • 274.2K Posts

March 14th, 2016 12:00

Greetings Dynamox!

Systems that are sold as VMAX3 hybrid arrays are different from the VMAX All Flash, even if they have a single all-flash tier. These VMAX3 hybrid systems are not designed for the inline compression feature (expected to GA later this year). The VMAX3 hybrid software is licensed by usable TB and not bundled into the system as it is with the VMAX All Flash. Customers interested in VMAX All Flash should have a discussion with their sales team about the details.

Jim

2.1K Posts

March 14th, 2016 12:00

Thanks Jim. That helps a bit... and raises more questions in my mind. From a more technical perspective it sounds like the growth limits of these arrays are more "fluid" than they may at first seem. This is concerning to me in trying to plan a configuration and future growth, as we would always want to make sure that we aren't making choices today that would "shoot us in the foot" tomorrow.

How can we avoid making config choices now that may result in smaller drive sizes (thus higher drive counts) which could negatively impact the overall growth potential (scalability) for the array as a whole? Or are there mechanisms in place that make this a moot point?

5 Practitioner • 274.2K Posts

March 15th, 2016 07:00

Hi Allen,

Each V-Brick can support up to 500 TBu when using a 2 TB cache engine. Each of the two DAEs shipped with the V-Brick has 120 drive slots (240 drive slots per V-Brick in total). Having this many drive slots for each V-Brick means that a customer can easily expand by simply adding the additional capacity pack drives in the empty drive slots of the V-Brick DAEs. This was done by design, as it makes capacity upgrades much simpler and easier to plan for.

Jim
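
As a rough back-of-the-envelope check on those numbers, the sketch below (again purely illustrative, and assuming every slot is filled with full RAID5 7+1 groups of a single drive size) compares the usable capacity that 240 slots can physically hold against the 500 TBu per-V-Brick ceiling.

# Back-of-the-envelope illustration only. Assumes all 240 slots are filled
# with full RAID5 7+1 groups (8 drives each) of one drive size; real
# configurations can mix drive sizes and RAID schemes.

SLOTS_PER_VBRICK = 240      # two 120-slot DAEs per V-Brick
VBRICK_CEILING_TBU = 500    # per-V-Brick usable limit with a 2 TB cache engine

def slot_limited_tbu(drive_tb, drives_per_group=8, data_drives=7):
    """Usable TB if every slot holds the given drive size."""
    groups = SLOTS_PER_VBRICK // drives_per_group
    return groups * data_drives * drive_tb

for drive_tb in (1.9, 3.8):
    by_slots = slot_limited_tbu(drive_tb)
    effective = min(by_slots, VBRICK_CEILING_TBU)
    limit = "drive slots" if by_slots < VBRICK_CEILING_TBU else "500 TBu ceiling"
    print(f"{drive_tb} TB drives: slots allow ~{by_slots:.0f} TBu "
          f"-> effective {effective:.0f} TBu (limited by the {limit})")

Under these assumptions, 1.9 TB drives run out of slots at roughly 399 TBu, while 3.8 TB drives hit the 500 TBu ceiling first; that trade-off is the point picked up in the next reply.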

2.1K Posts

March 15th, 2016 08:00

So if I understand that correctly, the V-Brick actually consists of the engine and two 120-drive DAEs right up front, so you don't have to worry about DAEs in the future? And I think I was reading it right that you could do two V-Bricks per cabinet. So the only time you have to worry about anything more than just drives is when you are adding a new V-Brick to an existing cabinet or a new V-Brick in a new cabinet? Or in our case, adding a new cabinet, as we never put in a single engine at a time.

Based on that it looks like you could potentially limit yourself to less than the 500TBu per V-Brick unless there were at least some of the 3.8TB drives included in the config. That being said, there is also a good possibility that you wouldn't run into that situation as newer larger drives are introduced and shift the balance back within the 240 drive "limitation". I think I can live with that.

5 Practitioner • 274.2K Posts

March 17th, 2016 08:00

Hi Allen,

Your understanding is correct. We tried to make adding capacity as simple as possible, so that all that is needed is to add drives in the empty DAE slots until the maximum V-Brick usable capacity is reached. The probability of running out of drive slots in a V-Brick is pretty slim, especially as higher-capacity flash drives become available. With dual V-Bricks in a single cabinet, you have two engines and four DAEs (480 drive slots) on a single floor tile. This gives you a very dense all-flash solution.

Jim

2.1K Posts

March 17th, 2016 10:00

Thanks again Jim. It takes some getting used to... dense being good :-)

I'm more used to the traditional Symmetrix platforms of old and I'm glad to see this configuration getting easier with each iteration. This sounds like a great design that will really make life easier for those who deploy and upgrade the VMAX AF. Here's hoping that will soon be me!

I'm sure I'll have more questions soon as I dig through more documentation.

18 Posts

March 21st, 2016 07:00

Hi All,

I understand the All Flash array utilizes compression of at least 1:2. Most of our landscape consists of compressed Oracle or SAP databases. Usually, compressing already-compressed data results in little extra compression, or in some cases even extra data. Can the VMAX All Flash still achieve this compression ratio if the database is compressed as well?

Mart

2 Posts

March 21st, 2016 09:00

Hi All, can you comment on whether Storage Analytics is available in the software packages ("F" or "FX"), or is it going to be available as a selectable option?

Thank you

Tom

5 Practitioner • 274.2K Posts

March 21st, 2016 12:00

Hi Mart,

If the data coming to the VMAX All Flash is already compressed, then we expect that compression on the array would provide minimal (if any) additional compression of the data. One thing you might want to consider going forward: if you are finding that host-based or application-based compression is consuming a large number of host CPU cycles, then you might want to have the compression performed on the storage array as a way of offloading the CPU on the host. Something to think about anyway.

Jim
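
A quick, self-contained way to see why already-compressed data gains little from a second pass is to run a generic compressor twice. The Python sketch below uses zlib purely as an illustration; it is not the algorithm the array's inline compression uses, and the sample data is made up.

import zlib

# Illustration only: zlib stands in for "any compressor". The sample data is
# artificial and highly repetitive, so the first pass compresses very well.
original = b"SELECT order_id, customer, amount FROM orders;\n" * 2000

once = zlib.compress(original)    # first pass: large reduction
twice = zlib.compress(once)       # second pass: little or no further reduction

print(f"original        : {len(original):>7} bytes")
print(f"compressed once : {len(once):>7} bytes (ratio {len(original) / len(once):.1f}:1)")
print(f"compressed twice: {len(twice):>7} bytes (ratio {len(once) / len(twice):.2f}:1)")

The second pass typically hovers around 1:1, which mirrors the behaviour Mart describes for databases that are already compressed at the application level.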

15 Posts

March 21st, 2016 12:00

Hi Mart,

Like you said, it's not very effective to try to compress already compressed data. You need to weigh the pros and cons. For example, compression at the database level often has better granularity than storage compression, i.e. you can decide which database objects are affected. However, if you do it at the storage level, you don't need to worry about missing any new database, table, or index; the granularity is wider, but it's easier to catch all of your applications' storage. Also, compression at the database level often helps performance (for example, each IO has a higher payload since each database block contains more data), but it often comes at the price of host CPU cycles handling the compression. If host CPU is a concern, why not let the storage do it instead? In short, you can make the right choice for your specific applications and business needs; I don't think there is one answer that fits everyone. (P.S. I believe the encryption discussion is not too different.)

-Yaron

5 Practitioner • 274.2K Posts

March 26th, 2016 07:00

Hi Tom,

The Storage Analytics package comes as part of embedded Unisphere for VMAX, so it is included by default in the F option. Also included in the F option are Database Storage Analyzer and Solutions Enabler.

Jim
