Dell | Terascala HPC Storage Solution Part 2

Scott Collier

This is a follow-up to Part 1 of the Dell | Terascala HPC Storage Solution series. In the last post, I gave a high-level overview of the appliance and discussed benchmarking data. In this post I'll go into a bit more detail about the solution, particularly how it is administered via the GUI, and I'll also cover some of the metadata performance results. You can see a picture of the equipment we used in the lab in Figure 1.
Dell | Terascala HPC Storage Solution

Figure 1: Dell HPC Lab I/O Cluster

For more details on the software and hardware involved, please refer to the first post in this series, Dell | Terascala HPC Storage Solution Part I.

Now, as we all know, Lustre itself can be a bit complex to administer. With the Dell | Terascala HPC Storage Solution, we have created an appliance that demystifies Lustre by providing a comprehensive GUI that is used for all Lustre filesystem administration. Here's a sneak peek at the GUI and some of its functionality.

Let's start by looking at the main console of the GUI in Figure 2. The left-hand pane (Hardware) is a tree-based representation of the hardware components of the appliance. Here we have two racks. One rack contains the Terascala MDS servers configured in an Active-Passive manner as well as the Terascala OSS servers configured in an Active-Active manner. The other rack shown in the Hardware pane contains the Dell MD3000 storage arrays; in this configuration we have three, one for the MDT and two for the OSTs. All of the panes include status information, so it's easy to tell if there are any problems with the filesystem.

More detail is shown in the "Lustre Systems" pane in the top middle. Here you can see the filesystem "lstrdell" and the OST and MDT status, along with read and write performance, the size of the filesystem, and the used / free space.

Dell | Terascala Management Console

Figure 2: Dell | Terascala Management Console

To drill down a bit more, if I click on the RAID Enclosure as shown in Figure 3, I can get a detailed status on the MD3000 array.

Dell | Terascala MD3000 Window

Figure 3: Dell | Terascala MD3000 Window

The enclosure status window shows us that all of our drives are good, along with the fan status and other component health.
The OSSs and the MDSs are configured as redundant HA servers. Figure 4 shows how I can fail over the LUNs from one server to another.

Dell | Terascala HPC Storage Solution Failover

Figure 4: Dell | Terascala HPC Storage Solution Failover

So Lustre is a rich filesystem with lots of features, and Terascala has created an easy way to administer it. I'll provide some links below where you can find more information about the appliance as a whole.

Moving on to metadata performance. Here I'll share some of the results we obtained while benchmarking the Dell HSS. Metadata is, basically, data about data. We are trying to stress the array and find out how quickly we can create, delete, update, and get the status of both files and directories. The test we used is "mdtest", an MPI-coordinated benchmark that performs create / stat / delete operations on files and directories and then reports performance. The version of mdtest we are using was patched by Li Ou, another HPC engineer on our team; his version adds utime results. Figure 5 shows the results of N-to-N metadata testing, directory vs. file.
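mdtest itself has to be launched under MPI against the Lustre mount, so its actual invocation is cluster-specific. As a rough illustration only, here is a minimal single-node shell sketch of the create / stat / unlink pattern mdtest measures; the file count and directory here are made up for the example, and a real run would fan these operations out across many clients:

```shell
#!/bin/sh
# Single-node sketch of the operation pattern mdtest measures:
# create N files, stat each one, then unlink them all.
# A real mdtest run launches this pattern under MPI across many
# clients; the file count and directory are illustrative only.
DIR=$(mktemp -d)
N=100

for i in $(seq 1 "$N"); do : > "$DIR/file.$i"; done              # create
created=$(ls "$DIR" | wc -l)
for i in $(seq 1 "$N"); do stat "$DIR/file.$i" > /dev/null; done # stat
for i in $(seq 1 "$N"); do rm "$DIR/file.$i"; done               # unlink
remaining=$(ls "$DIR" | wc -l)

echo "created=$created remaining=$remaining"
rmdir "$DIR"
```

mdtest times each of these phases separately and reports a rate (operations per second) for creates, stats, and removes, which is what the charts below summarize.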

Metadata N-to-N dir vs. file

Figure 5: Metadata N-to-N dir vs. file

This test shows that directory creates are much faster than file creates. This is because a file create involves operations on both the MDT and the OST, while directory creates only require operations on the MDT.

Next, let's take a look at metadata testing with files, N-to-N vs. N-to-1, in Figure 6.

Metadata file N-to-1 vs. N-to-N

Figure 6: Metadata file N-to-1 vs. N-to-N

Here you can see that N-to-N file creates (each process working on its own files in its own directory) are much faster than N-to-1 (all processes creating files in one shared directory) because of the locking and serialization required on the shared directory in the N-to-1 case.
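The two layouts compared in Figure 6 are easy to picture without a Lustre mount. As a minimal sketch (directory names, writer count, and file counts are all made up for the example), here is N-to-1 vs. N-to-N file creation with a few background subshells standing in for MPI processes; on a local filesystem both finish quickly, but on Lustre the shared-directory case is the one that serializes on the MDS:

```shell
#!/bin/sh
# Sketch of the two layouts from Figure 6, emulating four "processes"
# with background subshells. N-to-1: everyone creates files in one
# shared directory, so on Lustre the shared directory's lock
# serializes the metadata updates. N-to-N: each writer gets its own
# directory, so the locks are independent. Counts are illustrative.
BASE=$(mktemp -d)
NPROC=4
NFILES=25

# N-to-1: all writers share a single directory.
mkdir "$BASE/shared"
for p in $(seq 1 "$NPROC"); do
  ( for i in $(seq 1 "$NFILES"); do : > "$BASE/shared/p$p.f$i"; done ) &
done
wait
nto1=$(ls "$BASE/shared" | wc -l)

# N-to-N: one directory per writer.
for p in $(seq 1 "$NPROC"); do
  mkdir "$BASE/dir$p"
  ( for i in $(seq 1 "$NFILES"); do : > "$BASE/dir$p/f$i"; done ) &
done
wait
nton=$(find "$BASE"/dir* -type f | wc -l)

echo "n-to-1=$nto1 n-to-n=$nton"
rm -rf "$BASE"
```

Both layouts end up with the same number of files; the difference on Lustre is purely in how much the metadata operations contend on the shared directory.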

Why do benchmarks like this matter? They provide an industry-standard way of characterizing the storage. Once we know how the storage performs under certain workloads, we can make better decisions about which applications will benefit from it based on the attributes of those applications.

Finally, I'd like to share more information about the Dell | Terascala HPC Storage Solution.

We (Li Ou, Rick Friedman of Terascala, and I) wrote a white paper that can be found here:

Overview on the Terascala website:

-- Scott Collier

Article ID: SLN312042

Last Date Modified: 08/14/2018 02:32 AM
