
October 23rd, 2013 06:00

Ask the Expert: All About XtremIO

Welcome to this Ask the Expert event! Following the XPECT MORE webcast at 11am EST on November 14, join us here to discuss the features and benefits of the XtremIO 100% flash scale-out enterprise storage array.

Your hosts:

Arindam Paul is addicted to the business of technology. He's been with XtremIO since 2012. Before XtremIO, he delved into SAN networking (Andiamo Systems - MDS), core routing (ASR 9000), and WAN optimization (WAAS) at Cisco, and CDN (DSA) at Akamai.
Chris Carrieri joined EMC in February 2011 as a customer engineer in Raleigh. He still lives in Raleigh, but now works in the Federal Division and has been highly focused on XtremIO implementations since May 2013. Chris also focuses on VNX and Avamar.
Itzik Reich works in EMC's presales organization. He has deep expertise in virtualization and high-performance environments. Itzik is a vExpert, a VMware Certified Instructor, an MCSE, and a Citrix Certified Administrator.
Miroslav Klivansky is a consultant technologist with over 20 years of experience, ranging from teaching software engineering to engineering management. He has comprehensive knowledge in multiple areas of storage, network, and system engineering, with performance and technical education at the core.

This discussion will run November 14-22. Get ready by logging in and "following" the thread to receive notifications.

12 Posts

November 14th, 2013 10:00

It was the VMware reference architecture on their site. Page 8 shows 1.9TB usable capacity, all SSD array, 16x200 MLC SSDs — all different from the GA specs.

http://www.vmware.com/files/pdf/techpaper/vmware-view-solution-guide-emc-xtremio.pdf

15 Posts

November 14th, 2013 10:00

We are exploring but there is no commitment yet.

15 Posts

November 14th, 2013 10:00

Link is in the answer above. Just click.

12 Posts

November 14th, 2013 10:00

Thanks — do you have the link for the latest one?

15 Posts

November 14th, 2013 10:00

Here is the updated RA on www.xtremio.com: XtremIO | VMware Horizon View VDI Solution Guide (http://www.xtremio.com/vmware-horizon-view-vdi-solution-guide)

The RA you mentioned is very old. We will remove it.

12 Posts

November 14th, 2013 11:00

Can you clarify for me whether the GA version has snapshotting? Several of the resources talk about it, but I swear I saw a response in the keynote that it would come later.

Thanks

Gary

15 Posts

November 14th, 2013 11:00

Sure. We had quite a few questions on this in the live Q&A session, but here it is again: our snapshots are unique and elegant, integrated completely with the rest of the architecture (inline data reduction, in-memory metadata, etc.). We'll be releasing snapshots in a post-GA release.

15 Posts

November 14th, 2013 11:00

Everything in the RA, all the collateral, and the product literature is up to date. Enjoy!

12 Posts

November 14th, 2013 11:00

Thank you — do you expect any changes between now and when it GA's, or can I go with the details in the resources?


1 Rookie • 20.4K Posts

November 16th, 2013 21:00

As you add new X-Bricks, you also add new FC ports, correct? So what is the strategy in terms of front-end connectivity? Do you zone the new ports to hosts?

5 Practitioner • 274.2K Posts

November 18th, 2013 07:00

dynamox,

Let's say you have one X-Brick zoned to hosts. If you add another X-Brick, you gain 4 more FC ports and 4 more iSCSI ports (2 per storage controller). You would then add zones per best practices for the existing hosts and/or any new hosts using the box. If a host that was originally zoned only to the first X-Brick stays that way after the second is added, it can still take advantage of the added X-Brick, but its paths would not be set up per HA best practices. More X-Bricks mean better IOPS performance and obviously greater physical and logical space, while maintaining <1 ms latency as you scale. When the system expands, resources remain balanced, and data in the array is distributed across all building blocks to maintain consistent performance and equivalent flash wear levels.

Arindam or Miroslav can expand on my input if needed. :)

1 Rookie • 20.4K Posts

November 18th, 2013 08:00

Chris Carrieri wrote:

However, if the same host(s) that was originally zoned to only the first brick, and you added a brick, you could take advantage of using the added brick but the paths for HA etc would not be setup per best practices.

Chris,

Can you please expand on the HA part a little bit? Let's say today I have one X-Brick and one host is zoned to it. Tomorrow I add another X-Brick, but my host remains zoned to the first X-Brick. Obviously I can't take advantage of the additional paths to the new X-Brick, but in terms of HA, what am I missing?

5 Practitioner • 274.2K Posts

November 18th, 2013 08:00

Once a cluster goes from one X-Brick to multiple X-Bricks, InfiniBand switches are added to the configuration, so all the controllers can communicate internally. I/Os that come into X-Brick #1 can still spread data across the newly added second X-Brick. What I meant by HA is that if the FC ports on the first X-Brick go bad or go down, your paths are vulnerable until you actually zone the new FC ports to your host HBAs. So instead of 2 paths per HBA via each fabric, you want to make it 4 paths per HBA. Does that make more sense? Sorry for not being clear.
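The path arithmetic above can be sketched quickly. The per-X-Brick port count (4 FC ports, i.e. 2 per storage controller) comes from this thread; the function names and the even split of target ports across two fabrics are illustrative assumptions, not product specifications:

```python
# Sketch of the path math described above; port counts per X-Brick come
# from the thread, everything else is an assumption for illustration.

def fc_target_ports(bricks: int) -> int:
    """Total FC target ports in the cluster: 2 per controller x 2 controllers."""
    return 4 * bricks

def paths_per_hba(bricks: int, fabrics: int = 2) -> int:
    """Paths one host HBA sees, assuming targets are split evenly across
    fabrics and every target port on the HBA's fabric is zoned to it."""
    return fc_target_ports(bricks) // fabrics

# One X-Brick: 2 paths per HBA via each fabric; two X-Bricks: 4.
print(paths_per_hba(1), paths_per_hba(2))
```

With one X-Brick each HBA sees 2 paths; after adding a second X-Brick (and updating zoning), that grows to 4.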

5 Practitioner • 274.2K Posts

November 18th, 2013 10:00

Hi Chris, dynamox,

To expand a little, I'd say that the general philosophy is to treat the entire cluster as a single storage array. So in general, we want as much connectivity between the client and cluster as possible, and as many paths between the targets and initiators as possible. In the example, we went from an optimal configuration with one X-Brick, to a sub-optimal configuration when another X-Brick was added because the number of paths could now be improved. The cluster will gladly serve the already registered initiators through the new target ports, but in the example given the switches would restrict the paths based on zoning. That would imply that as part of expanding the cluster you should revisit the zoning on the switches and add the new target ports to the appropriate zones.

In general, we want as many paths as possible between initiators and targets. As Chris mentioned, we will service storage requests from any target port regardless of which X-Brick holds the actual data. If the data resides on a different X-Brick from the target port, we'll just traverse the backend IB network for the SSD-side requests. That said, there may be reasons to limit the number of paths available (traffic isolation, security, host OS multipathing limitations, etc.).
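The "revisit the zoning after expansion" step can be illustrated with a small check that flags target ports not yet zoned to each host HBA. The WWPN placeholders and zone contents below are invented for the example:

```python
# Illustrative only: after adding an X-Brick, find which target ports are
# not yet zoned to each host HBA. WWPNs and zone contents are made up.

existing_zones = {
    "host1_hba0": {"50:00:xx:01", "50:00:xx:02"},  # zoned before the expansion
}

# All FC target ports now in the cluster (two added with the new X-Brick).
all_targets = {"50:00:xx:01", "50:00:xx:02", "50:00:xx:03", "50:00:xx:04"}

for hba, zoned in existing_zones.items():
    missing = all_targets - zoned
    if missing:
        print(f"{hba}: add {sorted(missing)} to its zone")
```

In practice the equivalent check is done against the switch's zoning configuration, but the idea is the same: every new target port should end up in the appropriate zones.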

Lastly, as already mentioned in several places, we're delivering cluster expansion in a future release targeted for 2014. So while it's great to understand the theory of how things will work, this discussion is theoretical for the near future. We'll work with customers to help make sure they size their initial cluster purchases so that there is little risk of outgrowing their capabilities.

Take care and hope that helps,

Miroslav

5 Practitioner • 274.2K Posts

November 18th, 2013 11:00

Thanks Miroslav!

Chris Carrieri

Delivery Specialist

EMC² | Federal Division

Professional Services

Raleigh, NC | ☎ (919) 370-6107
