Welcome. You may be asking yourself, what is ScaleIO? ScaleIO is software-defined storage from Dell EMC that provides data-center-scale block storage using standard x86 servers and Ethernet networking. This enables data centers to move away from proprietary hardware, delivering more functionality and reliability from commodity servers and networks. ScaleIO is incredibly versatile, with clusters ranging in size from as few as three nodes into the thousands, in both hyper-converged and storage-only configurations. This allows you to start small and easily grow as demand grows, increasing your cluster size by small or large increments, automatically and transparently, no matter the size.
Each node in a software-defined storage cluster carries a bit of administrative overhead during deployment: server management, hypervisor, and storage software all need multiple networks configured, root passwords set, and software installed and updated. Depending on the size of the cluster, this could take several days, even for a team of admins. In this video, we're going to demonstrate how ScaleIO enables a single administrator to deploy a complete cluster in less than 30 minutes from node power-on. To do this, we'll start with a three-node cluster leveraging Dell EMC Ready Nodes for ScaleIO. Ready Nodes are optimally configured x86 servers with the hypervisor and ScaleIO virtual machine pre-installed. Ready Nodes are available in a wide variety of configurations.
There are 2U 24-disk systems based on the Dell PowerEdge R730xd and 1U 10-drive systems based on the PowerEdge R630. Each is available in all-flash, hybrid, or HDD-only configurations, as well as in hyper-converged (with more RAM and CPU) or storage-only personalities. Before we get too far into the demo, we should take a few moments to introduce some ScaleIO concepts and acronyms. Let's start with the host. The host is the OS or hypervisor onto which the ScaleIO software or virtual machine is installed. Today, we are using VMware ESXi as our host. Since our host is a hypervisor, ScaleIO will run in its own virtual machine, referred to as the SVM, or ScaleIO Virtual Machine. Let's expand the SVM to highlight two key roles.
The first is the ScaleIO Data Client, or SDC, which manages the host's connection to ScaleIO storage; the second is the ScaleIO Data Server, or SDS, which manages the local storage and presents it to the SDCs as one large pool. Some nodes also act as Metadata Managers, or MDMs. MDMs direct SDCs to the correct SDS node, and every cluster has at least two MDMs: a primary and a secondary. The tiebreaker role is required to help decide which MDM is primary in the event of a network failure.
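To make those roles concrete, here is a minimal sketch in Python, not anything ScaleIO ships, of the role layout in a three-node cluster. The node names and role labels are illustrative assumptions; only the invariants (every node runs an SDC and an SDS, two MDMs plus a tiebreaker) come from the description above.

```python
# Illustrative sketch (not ScaleIO code): the minimal role layout of a
# three-node cluster. Every node runs both an SDC (storage consumer) and
# an SDS (storage contributor); the MDM roles are spread across nodes so
# a primary, a secondary, and a tiebreaker are always present.

MIN_CLUSTER_SIZE = 3

nodes = {
    "node-1": ["SDC", "SDS", "MDM-primary"],
    "node-2": ["SDC", "SDS", "MDM-secondary"],
    "node-3": ["SDC", "SDS", "tiebreaker"],
}

def validate_cluster(nodes):
    """Check the invariants described above: at least three nodes,
    two MDMs (primary and secondary), and one tiebreaker."""
    all_roles = [role for roles in nodes.values() for role in roles]
    assert len(nodes) >= MIN_CLUSTER_SIZE, "a cluster needs at least three nodes"
    assert "MDM-primary" in all_roles and "MDM-secondary" in all_roles, \
        "every cluster has at least two MDMs"
    assert "tiebreaker" in all_roles, \
        "the tiebreaker decides MDM ownership after a network failure"

validate_cluster(nodes)
```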
Let's take a look at the steps required to build our three-node ScaleIO cluster. The first step is the physical installation of the nodes into the data center racks. Dell makes this easy with ReadyRails: these rails require no screws and, thankfully, no screwdrivers. After installation into the racks, the nodes need to be cabled to the networks. Three physical networks are required: a one-gigabit management network for the iDRAC, hypervisor, and ScaleIO management traffic, and a redundant pair of 10-gigabit Ethernet networks for data and inter-cluster communication. Data networks can get fairly complicated for very large clusters with hundreds of nodes, but put simply, the only requirement is that each node be able to communicate with every other node and with every SDC. This image shows how the networks in our three-node demo cluster are cabled.
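As a rough illustration of that reachability requirement, the Python sketch below pings every address on both data networks. The IP addresses are hypothetical placeholders, the flags assume a Linux-style ping, and in practice a check like this would be run from each node in turn to verify the full mesh.

```python
# A minimal sketch of the data-network requirement: each node must reach
# every other node (and every SDC) on both redundant data networks.
import subprocess

# Hypothetical addresses for our three nodes on the two data networks.
data_networks = {
    "data-A": ["192.168.10.1", "192.168.10.2", "192.168.10.3"],
    "data-B": ["192.168.20.1", "192.168.20.2", "192.168.20.3"],
}

def reachable(ip):
    """One ICMP echo with a one-second timeout (Linux-style ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            capture_output=True)
    return result.returncode == 0

for network, ips in data_networks.items():
    unreachable = [ip for ip in ips if not reachable(ip)]
    if unreachable:
        print(f"{network}: cannot reach {unreachable} -- check cabling and VLANs")
```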
While the nodes are being physically installed and cabled, our admin can get to work installing the ScaleIO management software and configuring the information the deployment wizard will pull from in order to configure node security and networks and connect the nodes to vCenter. Once the management software is configured and the nodes have been powered on and cabled to their networks, the automated deployment can begin. When we log in for the first time, we'll be prompted to do our initial setup, where we define the passwords and IP addresses for each of the networks and software layers. There are some specific guidelines detailed in the introduction section.
Feel free to pause if you'd like to read them in detail. At each step during the deployment, we will have an opportunity to review and change settings before advancing to the next step. This outlines all of the steps taken during the automated deployment. Under Security, we set the desired root or admin password for each software layer. Here, we set a range for a pool of IP addresses from which the automated deployment will pull IPs for each node. The data networks are the redundant 10-gig Ethernet private networks, labeled here as Data Network A and Data Network B. Finally, we establish our connection to the vCenter server and select the data center into which the nodes will be placed.
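Before moving on, here is a rough Python sketch of what those IP address pools amount to: given a range per network, the deployment hands out one address per node per network. The ranges and node names below are hypothetical, not the ones used in the demo.

```python
# Illustrative sketch of per-network IP pools. Hypothetical ranges.
from ipaddress import ip_address

ip_pools = {
    "management": ("10.0.0.50", "10.0.0.80"),
    "data-A":     ("192.168.10.50", "192.168.10.80"),
    "data-B":     ("192.168.20.50", "192.168.20.80"),
}

def allocate(start, end, count):
    """Pull `count` sequential addresses from a pool, roughly as the
    automated deployment does for each discovered node."""
    first, last = int(ip_address(start)), int(ip_address(end))
    if first + count - 1 > last:
        raise ValueError("IP pool exhausted -- widen the range")
    return [ip_address(first + i) for i in range(count)]

nodes = ["node-1", "node-2", "node-3"]
for network, (start, end) in ip_pools.items():
    for node, ip in zip(nodes, allocate(start, end, len(nodes))):
        print(f"{node} {network}: {ip}")
```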
To start the deployment, provided our nodes have been installed, cabled, and powered on, we just click the Add Nodes button. Because this is the ScaleIO lab, we found eight nodes, but we are selecting just the three nodes we're using today. We've sped things up a bit, but a running clock is always visible. The IP addresses are applied; to change any of them, we could have edited the IPs before proceeding. Here, we see our A and B networks and the IPs from the pools we defined earlier. During the Deploy step, cluster roles are assigned and the needed software is deployed and configured. Protection domains are much more relevant for larger clusters, and we'll have another video later on that covers them in much greater detail. Our nodes have a mix of SSDs and HDDs, so we need to choose how the SSDs will be used.
We will configure them as cache devices, applying the setting to all of the nodes and accelerating the performance of our HDDs. We confirm that each device is assigned to a storage pool. While we only have one pool here, larger clusters may have multiple pools varying in performance or availability characteristics; a later video will cover storage pools in much greater detail. Once the storage pool is created, the deployment is finished. Our total time was just over 27 minutes and, remarkably, would have been very similar with dozens or hundreds of nodes.
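Looking back at the cache and pool choices we just made, here is a simple Python model of them, not the ScaleIO API: each node's SSDs are marked as cache devices, and every HDD lands in a single storage pool. The device counts and the pool name "pool-1" are assumptions for the example.

```python
# Illustrative model (not ScaleIO code) of this step's choices:
# SSDs accelerate the HDDs as cache, HDDs join one storage pool.
nodes = {
    "node-1": {"ssd": ["ssd0", "ssd1"], "hdd": ["hdd0", "hdd1", "hdd2"]},
    "node-2": {"ssd": ["ssd0", "ssd1"], "hdd": ["hdd0", "hdd1", "hdd2"]},
    "node-3": {"ssd": ["ssd0", "ssd1"], "hdd": ["hdd0", "hdd1", "hdd2"]},
}

storage_pools = {"pool-1": []}   # larger clusters may define several pools
cache_devices = {}

for name, devices in nodes.items():
    # Apply the same setting to all nodes: SSDs become cache devices...
    cache_devices[name] = list(devices["ssd"])
    # ...and each HDD is assigned to the (single) storage pool. Multiple
    # pools would let you vary performance or availability characteristics.
    storage_pools["pool-1"].extend(f"{name}/{d}" for d in devices["hdd"])

print(cache_devices)
print(storage_pools)
```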
The Hardware tab allows us to review the configuration of each node, see its IP addresses and cluster role, and see any hardware components that may have issues. The Dashboard shows us our capacity utilization and performance overview, as well as other information relevant to the day-to-day management of the system. Having fully deployed our ScaleIO software-defined storage system, our next step, which we cover in the next video in this series, is to provision storage to our clients: the VMware hosts.