
February 26th, 2016 14:00

Ask the Expert: Introducing DSSD & Rack-Scale Flash



YOU MAY ALSO BE INTERESTED IN THESE ATE EVENTS...

Ask the Expert: Introducing the New VCE VxRail™ Appliance

Ask the Expert: VCE Vision Intelligent Operations Software

Ask the Expert: Virtualization of Mission Critical Database workloads with Microsoft SQL, Oracle and DB2


Welcome to the ATE Community Ask the Expert conversation. On this occasion we will be covering the highly anticipated DSSD announcement coming on February 29th, 2016. Among the many areas we'll be discussing, our experts are available to answer your questions regarding the recently announced D5 and Rack-Scale Flash technology.

Meet Your Experts:


Greg Baltazar

Senior Education Services Consultant

Greg started with EMC in April of 2000. His entire career at EMC has been within Education Services. Greg currently serves as Product Manager for DSSD, VxRail, and DCA education assets. Twitter: @greg_sb1


Don Wake

Technical Marketing Engineer

Don has had a diverse career in enterprise storage and networking. Over the past 16 years he has worked for HP (now HPE), Sierra Logic/Emulex, Brocade, NetApp, QLogic, and now EMC DSSD. Don has held multiple roles including Firmware Test Engineer, Applications Engineer, Customer Program Manager, Systems Engineer, and Technical Marketing Manager. Twitter: @DonWakeTech


Jason Tolu

Product Marketing Manager

Jason is the Product Marketing Manager for the EMC DSSD D5 Rack-Scale Flash solution. Prior to joining EMC, he was a product marketing manager at Dell for KACE systems management appliances. Before Dell, he worked in product marketing for a number of Silicon Valley start-ups, and he joined the IT industry as an analyst for Gartner.


Vibhuti Bhushan

Principal Product Manager

Vibhuti is a Solutions PM responsible for all of the solutions built on DSSD D5. He has broad experience in enterprise storage, including flash, scale-out NAS, Hadoop, and other analytics-related offerings. Before DSSD, Vibhuti was a Product Manager at NetApp and Isilon.


Maryam Sanglaji

Principal Product Manager

Maryam has 9 years of experience in various flash storage technologies. She is a technology enthusiast, snowboarder, and runner who loves getting to know new people from different backgrounds. Twitter: @MSanglaji

INTERESTED IN A PARTICULAR ATE TOPIC? SUBMIT IT TO US


This discussion will take place Feb. 29th - Mar. 11th. Get ready by bookmarking this page or signing up for e-mail notifications.

Share this event on Twitter or LinkedIn:

>> Ask the Expert: Introducing DSSD & Rack-Scale Flash http://bit.ly/1Q5RYiU #EMCATE <<

February 29th, 2016 10:00

This Ask the Expert session is now open for questions. For the next couple of weeks our Subject Matter Experts will be around to reply to your questions, comments, or inquiries about our topic.

Let’s make this conversation useful, respectful and entertaining for all. Enjoy!

1 Rookie • 20.4K Posts

February 29th, 2016 20:00

How does a DSSD configuration scale? Do you have to have two PCIe cards per "brick"? (What's the terminology here? It looks like everything at EMC is measured in bricks now.)

5 Practitioner • 274.2K Posts

March 1st, 2016 08:00

A single D5 has 96 PCIe Gen 3 x4-lane ports, so 48 DSSD client cards can be installed in hosts to give them all access to the D5's storage. Those cards can be installed singly in 48 clients, or two cards per host to give 24 clients faster and more redundant access, or any combination of one or two client cards. A given host can currently connect to only a single D5, so there is no scalability per host. We expect that to change soon, and then each host could connect to more than one D5 for more scale. Also, clustered file systems can be used to knit infrastructure together: for example, you could have N racks of hosts, each rack's hosts connected to a D5, and with a clustered file system all hosts could see all storage (albeit with some accesses needing to traverse a network or InfiniBand interconnect). The overall performance and capacity of such a configuration can be extraordinary.
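To make the connectivity arithmetic above concrete, here is a minimal sketch in C that simply works through the numbers quoted in this answer. It assumes each client card consumes two of the D5's 96 ports, which is consistent with the 96-port / 48-card figures given; it is an illustration, not an official sizing tool.

/*
 * Connectivity arithmetic from the answer above.
 * Assumption (not stated outright): each DSSD client card consumes two
 * of the D5's 96 PCIe ports, consistent with the 48-card figure quoted.
 */
#include <stdio.h>

int main(void) {
    const int d5_ports       = 96;  /* PCIe Gen 3 x4 ports on a single D5 */
    const int ports_per_card = 2;   /* assumed ports consumed per client card */

    int max_cards      = d5_ports / ports_per_card; /* 48 client cards */
    int hosts_one_card = max_cards;                 /* one card per host -> 48 hosts */
    int hosts_two_card = max_cards / 2;             /* two cards per host -> 24 hosts */

    printf("Client cards per D5:     %d\n", max_cards);
    printf("Hosts at one card each:  %d\n", hosts_one_card);
    printf("Hosts at two cards each: %d\n", hosts_two_card);
    return 0;
}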

1 Rookie • 20.4K Posts

March 1st, 2016 12:00

Thank you Don.

Does the system integrate with ESRS? Are you sending telemetry via ESRS to provide performance dashboards? How do you plan on servicing "dark sites"?

5 Practitioner • 274.2K Posts

March 1st, 2016 15:00

Thank you for the follow-up question about our support for ESRS. I've broken my answer into the three parts you asked about:

Q1) Does the system integrate with ESRS?

A: Yes. The DSSD D5 includes integration with the EMC Secure Remote Support (ESRS) network.

Q2) Are you sending telemetry via ESRS to provide performance dashboards?

A: We do send statistics related to our data path; I'm not sure that qualifies as a performance dashboard, but here is exactly what we currently send as "telemetry". Based on the appliance settings, the following information is sent to DSSD on a regular basis:

• Machine hardware and software inventory

• Data path statistics

• Error events and diagnosed faults

• Flash wear on FMs

Q3) How do you plan on servicing "dark sites"?

A: For customers that cannot use DSSD telemetry or EMC ESRS VE, the appliance offers the option of recording the telemetry reports to a locally attached disk. The disk, in the form of a USB stick, is expected to remain attached to the appliance at all times, so the rack must provide sufficient clearance at the front of the chassis for the stick to stay inserted.

I hope this helps to answer your question dynamox.

Sincerely,

Don

1 Rookie • 20.4K Posts

March 1st, 2016 16:00

Don Wake wrote:

Q3) How do you plan on servicing "dark sites"?

A: For customers that cannot use DSSD telemetry or EMC ESRS VE, the appliance offers the option of recording the telemetry reports to a locally attached disk. The disk, in the form of a USB stick, is expected to remain attached to the appliance at all times, so the rack must provide sufficient clearance at the front of the chassis for the stick to stay inserted.

I hope this helps to answer your question dynamox.

Sincerely,

Don

Concerns of theft? A rogue CE from another company walking away with the customer's metadata?

If the system has multiple cards, how do you manage multipathing? Is the DSSD support matrix available in E-Lab?

5 Practitioner • 274.2K Posts

March 2nd, 2016 09:00

Hi Dynamox,

1) Concerns of theft? A rogue CE from another company walking away with the customer's metadata?

A: The data on the USB stick is just telemetry about the appliance, not the data stored on it. It provides information about how the appliance is performing and whether any components are close to EOL or producing faults.

2) If system has multiple cards, how do you manage multipathing?

A: DSSD provides multipathing that is always on and spreads data across multiple ports on one or two client cards for performance and fault recovery. With the two-client-card option, the application is protected from client port faults, client card faults, and server PCIe bus faults.

3) Is DSSD support matrix available in E-Lab?

A: Not yet.  DSSD provides a support matrix through our Systems Engineers when working directly with clients.

9 Posts

March 7th, 2016 06:00

How is the storage presented to the host: Object/LUN/NAS/SMB/NFS?

5 Practitioner • 274.2K Posts

March 7th, 2016 08:00

Q: How is the storage presented to the host: Object/LUN/NAS/SMB/NFS?

A: Hi "@AI..." thank you for your excellent question.  The D5 offers several ways for storage to be presented to the host.  Keep in mind that up to 48 hosts are redundantly connected to the D5 via PCIe I/O cables for maximum performance and for sharing the storage.   With that "PCIe Mesh Fabric" architecture in mind, for host connectivity, we offer three ways to configure and present DSSD D5 storage to your applications on your hosts:

DSSD Block Driver: DSSD created a high-performance DSSD Block Driver interface that allows customers to use legacy block device applications without modifying their existing application I/O source code in any way. There is also a "DSSD Block Service" running in user space that manages the data path. To control which hosts can see which block devices, DSSD uses the concept of volumes. On the D5 appliance an administrator first creates volumes, and those volumes are assigned to hosts. Once a host has a volume assigned, the host administrator can carve up the D5 storage into objects of whatever size is needed, with varying block lengths (512-byte and 4K, so they are usable by the Linux OS).

Once the host administrator has created these block objects, they next configure and start the DSSD Block Device Service. Once the service is running on the host(s), a block device entry is made in /dev. At this point your block device looks like any other block device: it can be used as a raw device, or you can create and mount a file system on it. Here is an example from my D5 where I have created three separate block devices, each 1 TB in size. They are named dssd0000, dssd0001, and dssd0002 and are included in the list of other block devices installed in my host, such as the root device and an SSD installed in the same server.


[lsblk output showing the three 1 TB DSSD block devices dssd0000, dssd0001, and dssd0002 alongside the host's other block devices]
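Because the device node behaves like any other Linux block device, plain POSIX I/O is all that is needed to use it as a raw device. The short C sketch below assumes the /dev/dssd0000 node from the example above exists and is readable by the caller; it simply reads the first 4 KiB block and is an illustration, not DSSD-provided sample code.

/*
 * Minimal sketch: treat the DSSD block device like any other Linux block
 * device and read its first 4 KiB with standard POSIX calls. Assumes the
 * /dev/dssd0000 node from the lsblk example above exists on this host.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    const char *dev = "/dev/dssd0000";  /* device node from the example above */
    char buf[4096];                     /* one 4 KiB block */

    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    ssize_t n = pread(fd, buf, sizeof(buf), 0);
    if (n < 0) {
        perror("pread");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("Read %zd bytes from %s\n", n, dev);
    close(fd);

    /* From here the device can be used raw, or a file system can be
       created and mounted on it, exactly as with any other block device. */
    return 0;
}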

There are two other access methods available to applications for using DSSD D5 storage: the Flood Direct Memory API and DSSD Plug-Ins.

Flood Direct Memory API: Any application can be modified, or new applications can be developed, to use the Flood Direct Memory API "verbs". All data is stored on the D5 as some type of object, and the "libflood" C library includes commands to create, modify, destroy, read, and write objects. A DSSD block object can be accessed directly from an application using the Flood Direct Memory API to achieve maximum D5 performance, as opposed to going through the DSSD Block Driver.

DSSD PLUG-INS: Essentially, DSSD may create an API interface for certain categories of applications (like HDFS) that allows the upper-level application code to remain unchanged while still providing direct object access to the D5. The key difference here is that DSSD can create API Plug-Ins or other common application interfaces for applications that support modification of their file system or I/O subsystem, as opposed to a customer "rolling their own" API code.

The first example of this is the DSSD Hadoop Plug-In, which will be available at GA. That Plug-In allows HDFS distributions to install the DSSD Hadoop Plug-In with no changes to the upper-level application interface; on the back end the Plug-In performs native D5 I/O and bypasses the kernel. The first HDFS distribution certified to use the DSSD Hadoop Plug-In is Cloudera. Other certifications will be made available as they are completed, for example Pivotal, Hortonworks, or any other HDFS distribution supporting standard Hadoop Plug-Ins.

For more information I suggest going to our technology briefs linked on our DSSD web page.

https://www.emc.com/en-us/storage/flash/dssd/dssd-d5/technology.htm

Specifically, check out our "Modular Storage Architecture" tech brief: https://www.emc.com/en-us/collateral/data-sheet/h14867-ds-dssd-d5-object-oriented-next-gen-analytics.htm

On our support.emc.com portal you can access all of our DSSD D5 Documentation.  Here is a link to our DSSD Client Guide: 

https://support.emc.com/docu59379_DSSD-Client-User-Guide.pdf?language=en_US

Review chapter 3 - Managing Clients.  There is a section in there that discusses the creation of block devices.

I hope this answers your question. I look forward to more discussion!

5 Practitioner • 274.2K Posts

March 7th, 2016 10:00

How much redundancy is built into a DSSD D5 / are there any single points of failure?

5 Practitioner • 274.2K Posts

March 7th, 2016 11:00

Hi Nelseh,

Thank you for your question:

Q: How much redundancy is built into a DSSD D5 / are there any single points of failure?

All hardware components are redundant except for the passive midplane. Dual redundancy is part of the product architecture.

• 2 x CM and 2 x SM, each of which survives a single component failure

• 2 x IOM, redundant as long as hosts have a connection to each

• 2+2 power supplies, which survive dual failures

• Cubic RAID, which survives many failures

• 4+1 dual-rotor fans, which survive a single fan module (two-rotor) failure

The midplane is a single non-redundant component, but it is passive and should not fail under normal circumstances.

March 14th, 2016 06:00

This Ask the Expert event has officially ended, but don't let that deter you from asking more questions. At this point our SMEs are still welcome to answer and continue the discussion, though they are not required to. This is where we ask our community members to chime in and assist other users if they can come up with accurate information.


Many thanks to our SMEs, who selflessly made themselves available to answer questions. We also appreciate our community members for taking part in the discussion and asking so many interesting questions.


ATE events are made for your benefit as members of ECN. If you're interested in pitching a topic or suggesting Subject Matter Experts, we would be interested in hearing about it. To learn more about what it takes to start an event, please visit our Ask the Expert Program space on ECN.


Cheers!
