Hello and welcome to this next video demonstrating the use of EMC Data Protection Advisor version 6. In this instance, we'll be looking at the implementation of DPA in a clustered environment. As always, please refer to the full supporting documentation at the location shown, with particular reference to the Installation and Administration Guide and the white paper, the DPA 6 Deployment Architecture Guide. There are a number of key essentials to be understood before considering an implementation of DPA application clustering.
First of all, you must ask yourself: is DPA application clustering really required? DPA application clustering is provided for purposes of scalability, that is, where there is a requirement to share workload over multiple servers in larger or expanding environments. DPA clustering does not provide high availability services. It should also be noted that the DPA Datastore may not be clustered; a separate technology for Datastore replication is provided, and this is further described in the Deployment Architecture Guide. In order to manage resources speedily and effectively, EMC recommends that DPA be implemented in a virtual environment.
Where this is done, it is also recommended that each application server host in a cluster be provisioned on a discrete ESX host; this will allow for speedier recovery in the event of hardware failure. Regarding licenses, please be aware that no additional licenses are required for a DPA cluster implementation. On sizing, each application server, be it a master or a slave, should be sized identically according to the recommendations of the DPA sizing estimator. This will ensure that in the event of any hardware failure or failures, a single application server can continue to offer a service, albeit with perhaps reduced performance. There are a couple of networking prerequisites for the implementation of DPA clustering.
A dedicated UDP multicast-enabled VLAN should be provided for the DPA application service. A default UDP multicast address is provided, but if there is any conflict with existing applications, that is, if the VLAN is not dedicated, then the UDP multicast address may be changed. Secondly, a hardware load balancing switch must be provided for the DPA application server cluster. While software-based load balancing switches have been tried, their performance has been found to be compromised. EMC makes no recommendation regarding the vendor of the switch provided, nor does EMC provide any recommendation regarding the algorithms used for load balancing.
A shared folder is required for reporting purposes and must be available across all application nodes, both master and slaves. This folder can be located anywhere as long as it is visible to the application nodes; in this demo, we've placed the shared folder on the Datastore host purely for convenience. Depending upon utilization, the folder may grow quite large, and you may wish to start with something like five gigabytes. A username and password are required for authentication, and these will be requested during the installation of the application nodes. And finally, use of an NFS-mounted share is not recommended.
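Purely as an illustration, the values requested later during the application node installs might look something like this; the host, share, and account names here are hypothetical:

    Shared folder:   \\datastore01\dpa-shared     (UNC path visible to every application node)
    Credentials:     CORP\dpa_svc plus password   (domain account with access to the share)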
So let's now take a very quick spin around the schematic showing the implementation of the DPA application cluster. From the corporate network, let's first of all provision our dedicated VLAN, which must be UDP multicast enabled. Let's add to this a hardware load balancing switch, and then we can start installing our servers on our pre-provisioned virtual machines. Starting with the Datastore, we can then install our first application server, which will be the master, and then one or more application server slaves. Don't forget the installation sequence: the Datastore is installed first, followed by the application master, followed by the application slave or slaves. Additionally, and optionally, you may wish to implement Datastore replication by installing a Datastore replication slave to the Datastore master.
The implementation of Datastore replication is not covered in this video. OK, let's demo. Let's start with a partially installed DPA Datastore, at the panel requiring the Datastore listening address. We are offered three addresses in this particular instance, but please remember that if you use IPv4, then IPv4 should be used exclusively throughout the implementation, and similarly with IPv6. In this instance, we'll select the IPv4 address and click Next. In the next panel, we are requested to configure Datastore access, requiring the IP address of each DPA application server which will access this Datastore.
In a clustered instance, we shall start by adding the DPA master address, followed by the IP address of the slave or slaves. An agent is installed as part of the Datastore installation, and in the following panel we are requested to enter the Datastore agent address, that is, the address with which the agent should communicate when sending data. We see that both of the previously entered addresses for the master and slave are offered; however, the DPA agents will be communicating with the cluster address on the load balancing switch, and it is this address that we will enter here. Following the installation of the Datastore, let's just check the status with the dpa datastore status command, and we can see that the status of the Datastore service is running.
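Run from the DPA bin folder on the Datastore host, the check looks something like the sketch below; the exact output wording varies by release:

    rem Confirm the Datastore is up before installing any application nodes.
    dpa datastore status
    rem Expect the Datastore service to be reported as running.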
Before continuing with the installation of the application nodes, we must first make an adjustment to the tuning of the Datastore to reflect the number of application nodes to be installed. As a rule, there are 150 connections available to the Datastore, but in a clustered environment we must multiply this by the number of application nodes, including master and slaves. In this cluster, we are installing a master and a slave, giving us two hosts, so let's make the adjustment. The required command is tune, with the option --connections. The requirement is that we take the number of nodes within the cluster and multiply by 150, in this case giving us 300, and we must also specify the total amount of memory available on the Datastore host, in this case eight gigabytes.
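A minimal sketch of the command follows; the exact argument syntax, and the stop and start commands used for the restart mentioned in a moment, are assumptions, so check the Installation and Administration Guide for your release:

    rem 2 application nodes (1 master + 1 slave) x 150 = 300 connections;
    rem 8GB is the total memory available on the Datastore host.
    dpa datastore tune --connections 300 8GB
    rem The tuning only takes effect after a Datastore service restart:
    dpa datastore stop
    dpa datastore start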
So let's issue this command. We are required to confirm the amount of memory, and the tuning of the Datastore completes successfully. Note that following this tuning, we will need to restart the Datastore service, so let's do that now. Moving on now to the host which is to become the master server within the DPA cluster, let's initiate the installer and skip to the point where we are required to enter some cluster-specific information. At this point, you are asked where you wish to install DPA. You are offered a default of C:\Program Files\EMC\DPA, and this would normally be acceptable. However, if you choose to install DPA in an alternate location, then you must ensure that both the master and all slaves are installed in the same location on each of the hosts. Elect to install the application service.
In the next panel, check the Show advanced installation options check box. In the application advanced options panel, you will see there are three check boxes, and we shall check the Install the DPA services as cluster check box. The remaining two check boxes do not normally need to be checked; however, if you wish to change the multicast address, which is 239.1.2.10 by default, then you should check both of them to ensure that the services do not start after installation. In this instance, as we are using the default multicast address, we shall check only the Install the DPA services as cluster check box. Moving to the pre-installation summary panel, we'll click Install. The installer runs, and once it completes, we must enter the address of the Datastore with which we wish to connect.
Note that if we are implementing Datastore replication, and we have both a Datastore master and Datastore slaves available, then it is the Datastore master address which we are required to enter here. Next, we must select the address by which this application node will announce itself to the other DPA application nodes within the cluster; again, we'll select the IPv4 address. As this is the first DPA node within the cluster, it must be nominated as the master, and we'll do so here. Prior to installation, as previously mentioned, we will have provided a shared folder visible and available to each application node within the cluster. In this particular instance, the shared folder has been created on the Datastore host, and we have a domain username and password to authenticate access to the folder. This folder should be visible to all application nodes.
So we'll enter the location of the shared folder and the username and password for access to that folder. No further agent configuration should be required at this stage, so we'll move on. As we saw during the installation of the Datastore, we are required to enter the address with which the agent installed on this host will communicate with the server, which in a cluster is the cluster address; so here we'll enter the address of the load balancing switch. On completion, click Done. Before moving on to installing the slave, we must first check that the services required in support of the master server have indeed started correctly and fully. We may do this by navigating in Windows Explorer to the services application folder under the installation folder and checking the file names, which currently show a .deploying suffix; once the installation is complete, these file names will change to a .deployed suffix.
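The same check can be made from a command window; the folder path below is an assumption based on the default install location:

    rem List the deployment markers; files ending ".deploying" are still
    rem starting up, and each should change to a ".deployed" suffix.
    dir "C:\Program Files\EMC\DPA\services\applications"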
We may also check in a command window by running the dpa service status command and confirming from the output that the service is running and the installation has fully completed. Once the service has fully started, you may check the configuration of the application by running the dpa app con command, which shows the bind address, the Datastore service, the operation mode (that is, clustered), that this server holds the master role, the address of the server, and the multicast address configured.
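For reference, those two checks look something like this sketch; the output notes are descriptive rather than verbatim:

    dpa service status
    rem Expect the DPA application service to be reported as running.
    dpa app con
    rem Expect the bind address, the Datastore service, operation mode
    rem CLUSTERED, cluster role MASTER, the server address and the
    rem configured multicast address.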
Our master DPA application server is now fully installed and running, and we will now move on to install the first of the slaves associated with this cluster. Again, we'll zip through the opening panels until we reach the first of the cluster-specific panes. We are again installing the application service, and again we'll select the Show advanced installation options, choosing to install the DPA services as cluster, and move through the pre-installation summary. Again, we enter the identity of the Datastore and select the IP address of this clustered server; don't forget, if it's IPv4, we must keep IPv4 all the way through the operation. Having already configured our master, this server will be configured as our first slave, and being a slave, we must now enter the IP address or fully qualified domain name of the master application node. Again, we need to specify the location of the shared folder we previously created and enter the authentication for access to the share. Once again, the agent needs to be pointed to the load balancing switch address of the cluster. Our installation then completes, after which we can check the status of the services in the same way as we did previously on the master, and we can check the configuration of this slave by issuing the dpa app con command.
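On the slave, the same illustrative check applies:

    dpa app con
    rem Expect operation mode CLUSTERED, cluster role SLAVE, and the
    rem master application node's address.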
Again, you will notice that the cluster role is now shown as slave and that the master address is shown correctly. Finally, in the logs folder of both the master and slave application servers, check the server.log file; in the respective log files on each of the master and slaves, you will find the appropriate text indicating a successful start of the clustering services. And as a final physical check, once data is available within DPA, you may run a DPA report on both the master server and the slave or slaves, to ensure that publishing access to the shared folder is working correctly. This now completes the installation of the DPA application cluster, but you may of course wish to install further slaves, up to a maximum of three.
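A quick way to scan for those messages from a command window might be the following; the log path assumes the default install location, and the search term is an assumption, so match it to the wording you see in server.log:

    findstr /i "cluster" "C:\Program Files\EMC\DPA\services\logs\server.log"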