donwake's Posts

Hi Nelseh, thank you for your question.

Q: How much redundancy is built into a DSSD D5 / are there any single points of failure?

A: All hardware components are redundant except for the passive midplane. Dual redundancy is part of the product architecture:
• 2x CMs and 2x SMs, each of which survives a single component failure
• 2x IOMs, which are redundant as long as hosts have a connection to each
• 2+2 power supplies, which survive dual failures
• 4+1 dual-rotor fans, which survive a single fan module (two-rotor) failure
• Cubic RAID, which survives many failures
The midplane is the single non-redundant component, but it is passive and should not fail under normal circumstances.
Q: How is the storage presented to the host: Object/LUN/NAS/SMB/NFS?

A: Hi "@AI..." thank you for your excellent question. The D5 offers several ways for storage to be presented to the host. Keep in mind that up to 48 hosts are redundantly connected to the D5 via PCIe I/O cables for maximum performance and for sharing the storage. With that "PCIe Mesh Fabric" architecture in mind, we offer three ways to configure and present DSSD D5 storage to the applications on your hosts:

DSSD Block Driver: DSSD created a high-performance DSSD Block Driver interface that lets customers use legacy block-device applications without modifying their existing application I/O source code in any way. There is also the "DSSD Block Service" running in user space that handles management of the data path. To control which hosts can see which block devices, the D5 uses the concept of volumes: on the appliance an administrator first creates volumes, and those volumes are assigned to hosts. Once a host has a volume assigned, the host administrator can carve up D5 storage into objects of whatever size and of varying block lengths (512-byte and 4K, to be usable by the Linux OS). Once the host administrator has created these block objects, they next configure and start the DSSD Block Device Service. Once that service is running on the host(s), a block device entry is made in /dev. At this point your block device looks like any other block device, and it can be used as a raw device or you can create and mount a file system on it (see the short sketch at the end of this post). For example, on my D5 I have created three separate 1TB block devices, named dssd0000, dssd0001 and dssd0002; they are included in the list of block devices alongside the others installed in my host, such as the root device and an SSD installed in the server.

There are two other access methods available to applications for using DSSD D5 storage: the Flood Direct Memory API and DSSD Plug-Ins.

Flood Direct Memory API: Any application can be modified, or new applications can be developed, to use the Flood Direct Memory API "verbs". All data is stored on the D5 as some type of object, and the "libflood" API C library includes calls to create, modify, destroy, read and write objects. A DSSD block object can be accessed directly from an application using the Flood Direct Memory API to achieve maximum D5 performance, as opposed to using the DSSD Block Driver.

DSSD Plug-Ins: Essentially, DSSD may create an API interface for certain categories of applications, such as HDFS, to allow the upper-level application code to remain unchanged yet still provide direct object access to the D5. The key difference here is that DSSD can create API Plug-Ins or other common application interfaces for applications that support modification of their file system or I/O subsystem, as opposed to a customer "rolling their own" API code. The first example of this is the DSSD Hadoop Plug-In, which will be available at GA. That Plug-In will let HDFS distributions install the DSSD Hadoop Plug-In and make no changes to the upper-level application interface; on the back end the Plug-In will perform native D5 I/O and bypass the kernel. The first HDFS distribution that will be certified to use the DSSD Hadoop Plug-In is Cloudera. Other certifications will be made available as they are completed, for example Pivotal, Hortonworks or any other HDFS distribution supporting standard Hadoop Plug-Ins.

For more information I suggest going to the technology briefs linked on our DSSD web page: https://www.emc.com/en-us/storage/flash/dssd/dssd-d5/technology.htm. Specifically, check out our "Modular Storage Architecture" tech brief: https://www.emc.com/en-us/collateral/data-sheet/h14867-ds-dssd-d5-object-oriented-next-gen-analytics.htm. On our support.emc.com portal you can access all of the DSSD D5 documentation; here is a link to the DSSD Client Guide: https://support.emc.com/docu59379_DSSD-Client-User-Guide.pdf?language=en_US. Review chapter 3, Managing Clients; there is a section there that discusses the creation of block devices.

I hope this answers your question. Looking forward to more discussion!
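P.S. To make the Block Driver path a bit more concrete, here is a minimal C sketch that treats one of those block devices as a raw device. The device path /dev/dssd0000 is an assumption based on the device names above, and the 4K read size matches the block length mentioned earlier; adjust both for your host.

```c
/*
 * Minimal sketch: once the DSSD Block Device Service has created a block
 * device entry in /dev, it can be opened and read like any other raw
 * block device.  The path below assumes the dssd0000 device from the
 * example above appears as /dev/dssd0000; adjust to match your host.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/dssd0000";        /* assumed device path */
    int fd = open(dev, O_RDONLY | O_DIRECT);  /* raw, unbuffered access */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* O_DIRECT I/O must be aligned; use the 4K block length from the post. */
    void *buf = NULL;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        close(fd);
        return 1;
    }

    ssize_t n = pread(fd, buf, 4096, 0);      /* read the first 4K block */
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes from %s\n", n, dev);

    free(buf);
    close(fd);
    return 0;
}
```

The same device can of course be given a file system and mounted, exactly as you would with any other Linux block device.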
Hi Dynamox,

1) Concerns of theft? Rogue CE of another company walking away with a customer's metadata?
A: The data on the USB is just telemetry about the appliance, not the customer's data. It provides information about how the appliance is performing and whether any components are close to EOL or producing faults.

2) If the system has multiple cards, how do you manage multipathing?
A: DSSD provides multipathing that is always on and spreads data across multiple ports on one or two client cards for performance and fault recovery. With the two-client-card option the application is protected from client port faults, client card faults and server PCIe bus faults.

3) Is the DSSD support matrix available in E-Lab?
A: Not yet. DSSD provides a support matrix through our Systems Engineers when working directly with clients.
Thank you for the follow-up question about our support for ESRS. I've broken my answer into the three parts you asked about:

Q1) Does the system integrate with ESRS?
A: Yes. The DSSD D5 includes integration with the EMC Secure Remote Support (ESRS) network.

Q2) Are you sending telemetry via ESRS to provide performance dashboards?
A: We do send statistics related to our data path; I'm not sure that qualifies as a performance dashboard, but here is exactly what we currently send as "telemetry". Based on the appliance settings, the following information is sent to DSSD on a regular basis:
• Machine hardware and software inventory
• Data path statistics
• Error events and diagnosed faults
• Flash wear on FMs

Q3) How do you plan on servicing "dark sites"?
A: For customers that cannot use DSSD Telemetry or EMC ESRS VE, the appliance offers the option of recording the telemetry reports to a locally attached disk. The disk, in the form of a USB stick, is expected to be left attached to the appliance continuously. As a result, the racks must have sufficient clearance at the front of the chassis to allow the stick to remain inserted.

I hope this helps to answer your question, dynamox.

Sincerely,
Don
A single D5 has 96 PCIe Gen 3 x4-lane ports, so 48 DSSD client cards can be installed in hosts to give them all access to the D5's storage. Those cards can be installed singly in 48 clients, or two cards per host to give 24 clients faster and more redundant access, or any combination of one or two client cards per host. A given host can currently connect to just a single D5, so per host there is no scalability. We expect that to change soon, at which point each host could connect to more than one D5 for more scale. Also, clustered file systems can be used to knit infrastructure together. For example, you could have N racks of hosts, each rack's hosts connected to a D5, and with a clustered file system all hosts could see all storage (albeit with some access needing to use a network or InfiniBand interconnect). The overall performance and capacity of such a configuration can be extraordinary.
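As a quick sanity check on the numbers above, here is a small sketch that tests whether a given mix of single-card and dual-card hosts fits on one D5. The two-ports-per-client-card figure is not stated explicitly here; it is derived from the 96-port / 48-card numbers above.

```c
/*
 * Back-of-the-envelope check of the D5 connectivity math.
 * Derived assumption: 96 D5 ports / 48 client cards implies each client
 * card consumes two D5 ports; hosts use either one card or two.
 */
#include <stdio.h>

#define D5_PORTS        96
#define PORTS_PER_CARD   2
#define MAX_CARDS       (D5_PORTS / PORTS_PER_CARD)   /* 48 cards */

/* Returns 1 if a mix of single-card and dual-card hosts fits on one D5. */
static int fits_one_d5(int single_card_hosts, int dual_card_hosts)
{
    int cards = single_card_hosts + 2 * dual_card_hosts;
    return cards <= MAX_CARDS;
}

int main(void)
{
    /* The two extremes from the post, plus one mixed configuration. */
    printf("48 single-card hosts:       %s\n", fits_one_d5(48, 0) ? "fits" : "does not fit");
    printf("24 dual-card hosts:         %s\n", fits_one_d5(0, 24) ? "fits" : "does not fit");
    printf("20 single + 14 dual hosts:  %s\n", fits_one_d5(20, 14) ? "fits" : "does not fit");
    return 0;
}
```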