The following ViPR Q&A was compiled from "Ask the Expert: What is ViPR? Let's Start Using ViPR! (Japanese Version)". The discussion took place from Sep. 22nd, 2014 to Oct. 3rd, 2014.
There are two parts to this article:
Virtual Asset configuration
Before we start, let's check whether the console is in Admin mode. Virtual Asset configuration must be performed by an authorized user; today we are logged in as the default root user, so we can perform it. Check the mode shown at the upper right of the console, and if "User" appears there, change to Admin mode.
Then let’s start Virtual Asset configuration. We create virtual assets using the Virtual Assets menu. You can access this menu from the menu icon on the left side of the console. Click the icon to show the menu pane.
Then create a Virtual Array. Virtual Arrays let you offer storage optimized per tenant. For example, you could combine VMAX, VNX, and Isilon to create storage that satisfies high-end to midrange requirements and also provides scale-out NAS capability.
From the menu, select Virtual Assets → Virtual Arrays.
Click the green +Add.
Enter the Virtual Array "Name".
From the Block and File Storage area at the lower left of the console, click Network, and select the network you created when you configured the physical assets.
Return to the virtual array configuration screen.
Open the Associated Storage Systems area and check whether the physical arrays have been mapped.
If everything is ok, click Save.
Virtual Array setup screen and link to network selection screen
Adding/selecting a network
Back to Virtual Array setup screen
Checking whether physical Array was mapped in Virtual Array
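The UI steps above amount to submitting a virtual array definition; ViPR also exposes a REST API, so the same configuration can be pictured as data. The sketch below is illustrative only, and the field names are placeholders, not the exact ViPR REST schema.

```python
def build_varray_request(name, network_ids):
    """Assemble a request body for creating a Virtual Array.
    Field names here are illustrative placeholders, not the exact
    ViPR REST schema."""
    return {
        "name": name,                   # the Virtual Array "Name" entered above
        "networks": list(network_ids),  # networks created under Physical Assets
    }

request_body = build_varray_request("ATE_varray", ["IP_Network"])
```

Once the networks are associated, ViPR maps the physical arrays reachable through them, which is what the Associated Storage Systems check verifies.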
Next, we will create a Virtual Pool.
From the menu, select Virtual Assets → File Virtual Pools.
Click the green +Add.
Enter the virtual pool name (an easy-to-remember name is fine) in "Name" and a comment in "Description".
Under Virtual Arrays, tick the virtual array you just created. (Notice that the Storage Pools count at the bottom of the console changes when you do; this is because the storage pools created on the VNX File are now candidates for the virtual pool.)
Open the Storage Pools area at the bottom of the console and switch the Pool Assignment setting to Manual.
Select the VNX File storage pools to include in the virtual pool. (You can select more than one.)
Then click Save.
Repeat steps 2-7 if you want to create more virtual pools.
Virtual Pool setup screen
Manual storage pool selection for a virtual pool
Creating two types of virtual pools looks like this
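The virtual pool settings above can be sketched as data as well. This is a minimal illustration, assuming placeholder field names rather than the exact ViPR REST schema; the manual/automatic distinction follows the Pool Assignment setting described in the steps.

```python
def build_file_vpool(name, description, varray, pool_ids=None):
    """Illustrative body for a File Virtual Pool. When pool_ids is given,
    pool assignment is manual (as in the steps above); otherwise ViPR
    matches storage pools automatically. Field names are placeholders."""
    body = {
        "name": name,
        "description": description,
        "varrays": [varray],
        "pool_assignment": "manual" if pool_ids else "automatic",
    }
    if pool_ids:
        body["assigned_pools"] = list(pool_ids)
    return body

vpool = build_file_vpool("Silver", "VNX File pools", "ATE_varray",
                         ["pool_1", "pool_2"])
```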
Provisioning the NFS Datastore
Let’s look at the vCenter server before provisioning. This screen shows the datastores on the vCenter server. We will allocate an NFS Datastore called ATE_Datastore, which does not exist yet.
Anyone can perform provisioning from ViPR, so from now on we will switch the console to User mode. If the mode indicator says "Admin", change it to User mode.
You can access the services catalog on the left side of the console.
Now, we need to create at least one "Project" first. A project is a logical organizational group that can be created in ViPR. It is also possible to map AD/LDAP users here. Projects are used to manage the owners of the provisioned volumes and file systems. Even in the test environment we are using today, it is necessary to create at least one.
Click the menu icon on the left side of the console and click Tenant Settings → Projects.
Click +Add, enter the name of the project in "Name" and save. Okay, let's get back on track!
Change to User mode, and then go to the services catalog screen. The first screen that appears is the screen for the service catalog group (category). From this screen, click File Services for VMware vCenter, which contains the NFS Datastore provisioning menu.
Then, click Create Filesystem and NFS Datastore.
Select the service catalog parameters.
Datastore Name: Enter the NFS Datastore name.
vCenter: Select the vCenter server you registered as a physical asset.
Datacenter: Select a vCenter datacenter that includes an ESX cluster for provisioning.
Storage Type: Today we are provisioning the NFS Datastore to the ESX cluster, so select "Shared".
ESX/Host Cluster: When you select "Shared" as the storage type, specify the ESX cluster name as the provisioning host, rather than an individual ESXi host name.
Virtual Array: Select the virtual array.
Virtual Pool: Select a virtual pool from the options. The NFS file system will be created on NAS storage from the mapped storage pools; the user does not need to know which storage array backs the pool.
Project: Specify the project that will be the owner of this NFS Datastore.
Export Name: Specify the NFS export name.
Size (GB): Specify the size of the Datastore.
Now all you have to do is click Order!
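Assembled as data, the order parameters above might look like the following sketch. The values are the example names used in this walkthrough (the vCenter, datacenter, and cluster names are hypothetical), and the keys are illustrative, not the exact service catalog API schema.

```python
# Illustrative order for "Create Filesystem and NFS Datastore";
# keys are placeholders, values are this walkthrough's example names.
NFS_DATASTORE_ORDER = {
    "datastore_name": "ATE_Datastore",
    "vcenter": "vcenter01",            # hypothetical vCenter registration name
    "datacenter": "DC1",               # hypothetical datacenter name
    "storage_type": "Shared",          # provision to the whole ESX cluster
    "host_cluster": "ESX-Cluster01",   # hypothetical cluster name
    "virtual_array": "ATE_varray",
    "virtual_pool": "VNX_File_Pool",
    "project": "ATE_Project",
    "export_name": "ATE_export",
    "size_gb": 50,
}
```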
Execution of the Order looks like this.
A file system is created on the VNX File.
NFS Export to ESXi host.
A Datastore is created on the ESX cluster, and then mounted.
Checking from the vCenter server, we can confirm that the 50 GB ATE_Datastore was created.
How was that? With ViPR, you do not need to worry about storage administration duties, and you can create Datastores whenever you like. And even better, it doesn't matter whether the storage includes Isilon, NetApp, or other products. All you see in the service catalog is a virtual pool, and you perform provisioning using exactly the same operation. Isn't that cool?
If you have any NAS storage supported by ViPR, please try this and see the difference.
Using Isilon with ViPR
The method is pretty much the same as for VNX File (and, as before, it requires Admin mode). First of all, let's switch back to Admin mode…
From the menu, select Physical Assets → Storage Systems
Click the green +Add.
Select EMC Isilon.
Enter the Isilon name/IP Address/authentication information for the root user.
After entering all the information, click Save.
Once Discover is successful, the status icon at the right of the console turns green.
Let’s check Storage Pools and Ports.
Check that the Isilon console Pool is also n400_36tb_24gb (same as the ViPR console).
The next thing is to check the Storage Ports (network interfaces). These are detected by FQDN (the SmartConnect zone on Isilon).
Check this in the Isilon console. You can see that the configurations match.
The SmartConnect zone name must be resolvable in DNS. If the ESXi cluster hosts cannot resolve the name, provisioning will fail, so resolving the SmartConnect zone name is very important.
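Because a failed lookup only surfaces later as a provisioning error, it is worth a quick pre-flight check. This is a small sketch using Python's standard library; the zone name is the example from this walkthrough.

```python
import socket

def can_resolve(name):
    """Return True if `name` (e.g. an Isilon SmartConnect zone FQDN such as
    demo.tnak.com) resolves via this host's DNS. Provisioning requires the
    ESXi hosts to resolve the zone name in the same way."""
    try:
        socket.gethostbyname(name)
        return True
    except OSError:  # covers socket.gaierror
        return False

# Example pre-flight check before ordering the datastore:
# can_resolve("demo.tnak.com")
```

Note that this only verifies resolution from the machine running the check; the ESXi hosts must be verified separately, since they are the ones mounting the export.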
Next, we'll add the network configuration in Isilon.
From the menu, select Physical Assets → Networks.
Click the network you created before (IP_Network).
Click +Add on the network configuration screen, then select Add Array Ports.
Select Isilon SmartConnect Zone, and then click +Add.
Click +Add on the network configuration screen if needed, then select Add Host Ports.
Pick the ESXi host network interface in the same segment as SmartConnect Zone, then click +Add.
After entering all information, click Save.
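The network edits above (adding the SmartConnect zone as an array port and the ESXi interfaces as host ports) can be pictured as updating one network record. The field names in this sketch are illustrative placeholders, not the exact ViPR schema.

```python
def add_ports_to_network(network, array_ports, host_ports):
    """Return a copy of `network` with array ports (the Isilon SmartConnect
    zone) and host ports (ESXi interfaces in the same segment) appended.
    Field names are illustrative placeholders."""
    return {
        **network,
        "array_ports": list(network.get("array_ports", [])) + list(array_ports),
        "host_ports": list(network.get("host_ports", [])) + list(host_ports),
    }

updated = add_ports_to_network({"name": "IP_Network"},
                               ["demo.tnak.com"],   # SmartConnect zone
                               ["esx01-vmk1"])      # hypothetical ESXi vmkernel port
```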
Let's pause here to check name resolution for the SmartConnect zone (here, demo.tnak.com).
Now that we've added Isilon to the network, let's create a virtual pool!
From the menu, select Virtual Assets → File Virtual Pools.
Click the green +Add.
Enter the virtual pool name (an easy-to-remember name is fine) in "Name" and a comment in "Description".
Tick the virtual array already created from Virtual Arrays.
Open Hardware at the bottom of the console and set Storage Type to EMC Isilon.
Check the count shown in the Storage Pools area. Because we filtered by storage type, only the storage pools created on Isilon are candidates for the virtual pool.
Open Storage Pools at the bottom of the console and set Pool Assignment.
Make sure Isilon is included in the virtual pool.
If everything is okay, click Save.
Platinum Pool has been added
Now you can do provisioning. Change to User mode, and then go to the Service Catalog screen. Open File Services for VMware vCenter, the NFS Datastore provisioning menu we used last time.
Enter the necessary parameters, select the virtual pool you just created, and execute the order.
Provisioning has been completed.
Check in the vCenter console that a Datastore was created (ATE_Isilon100GB should have been created this time). Oh, there it is! The Datastore was created correctly, and its capacity, 100 GB, is right.
To double check… check from the Isilon console. It looks fine here too.
How was that? You can see that provisioning in a heterogeneous environment can be executed easily from exactly the same service catalog, simply by changing the Virtual Pool selection. It will be the same with NetApp, too! We welcome any comments from our partners who have tried this with NetApp.
I have a conceptual question, rather than a technical one. It's a basic question as I don't fully understand ViPR (as well as VPLEX).
You can find a similar question from the above thread, but I don't fully understand the difference between ViPR and VPLEX, and the superiority of ViPR over VPLEX.
We believe that a primary feature of ViPR is that it virtualizes multiple storage products. Even a combination of several storage types can be centrally operated and managed.
VPLEX also implements storage virtualization. For customers who have already installed VPLEX, the operation is already centralized and there is no advantage in also installing ViPR.
Of course, I can understand differences such as VPLEX serving data through volumes shared with VPLEX, while ViPR simply passes data through (ViPR does not hold the data). But in terms of operation, I don't understand the benefit of installing ViPR when VPLEX is already installed, since VPLEX implements the storage virtualization.
(1) Does the reduction in operating costs due to ViPR include host assignment, as ViPR provides an API that includes the hosts? (Can it reduce the operating costs of the storage part only, as VPLEX centralizes only the storage management?)
(2) Can we implement the pool configuration between DCs, such as VPLEX Metro, using only ViPR?
When we look at ViPR geo-protection, it seems that it is possible.
(3) If (2) is not possible, can ViPR create the pools between DCs via ViPR by combining with VPLEX?
On the other hand, if (2) is possible, it seems that there is no difference in the Act/Act access functions between the DCs, which was considered a difference between ViPR and VPLEX.
ViPR and VPLEX each have their own advantages. Is any document available, such as a table that summarizes the differences between them?
In terms of the operational responsibilities and cost savings mentioned in (1), as stated in the question, ViPR can consistently improve operational efficiency from host to storage, regardless of whether it is block or file storage. VPLEX, on the other hand, improves operational efficiency only for the block storage portion. So you can say that, overall, ViPR provides greater savings.
As for the possibility of pool configuration between DCs using only ViPR, mentioned in (2), from the viewpoint of the block storage, it cannot be implemented using only ViPR. However, it is possible by combining ViPR and VPLEX. ViPR Geo-Protection is a technology that is applied to the object storage.
The question asked in (3) is related to (2), and here too the answer is that it is possible.
See below; I will have to explain the differences between VPLEX and ViPR first.
I’ll explain VPLEX first. VPLEX provides mirroring between storage arrays across sites, and a data migration function. By virtualizing the actual logical volumes targeted for I/O, it provides this functionality seamlessly across arrays, without users of the service ever being aware of it. VPLEX sits in the data path, the SAN fabric itself, because the flow of data for distributed volumes and non-disruptive data migration needs to be controlled in various ways.
ViPR does not touch logical storage volumes and does not interfere with the data path; it abstracts (centralizes) management through a management LAN outside the data path. It also uses a meta volume to manage groups.
VPLEX was often used for storage pool virtualization particularly before the advent of ViPR. This is because VPLEX can mask the differences between various models by virtualizing the logical volume. Under the current circumstances, there is more focus on DA (disaster avoidance), which means that things do not stop operating if there is a disaster, and data mobility (non-disruptive data migration). These functions are more advanced than DR (disaster recovery).
Why should resource pool abstraction make us select ViPR over VPLEX?
With VPLEX, the heterogeneous logical volumes of storage, which belong inside the VPLEX device, are fetched to implement encapsulation in a virtual volume by VPLEX. This enables all storage resources to be uniformly handled as a virtual volume of VPLEX, without the user being aware of the model of the storage when viewed from a higher order than VPLEX. However, this creates a management problem. The characteristics of each storage product are lost through encapsulation. When it is seen as a virtual volume, for example, a virtual volume created from SSD and a virtual volume created from a SATA disk look exactly the same.
Some vendors offer something called software-defined storage, which is based on these storage virtualization devices. But this kind of storage eliminates the characteristics of superior storage, which means it can only be used as cheap general-purpose storage.
EMC's approach to software-defined storage is to implement simple management by combining the optimum storage with various workloads. We think that it is essential to have a system that automates allocation of the optimum storage individually in accordance with requirements, such as performance-oriented storage or capacity-oriented storage. So EMC have introduced an offering called ViPR. As we mentioned in the first post, with the virtual pool function of ViPR, it is possible to select and use the optimum storage in accordance with the requirements by classifying high-end storage from EMC and other companies together as Gold Class and midrange storages as Silver Class.
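The Gold/Silver classification idea above can be sketched as a tiny lookup. The array model names and tier labels here are only examples of how an administrator might classify pools, not a fixed ViPR taxonomy.

```python
# Hypothetical tier mapping; model names and tiers are only examples.
SERVICE_CLASSES = {
    "Gold": {"VMAX", "VSP"},    # high-end arrays
    "Silver": {"VNX", "FAS"},   # midrange arrays
}

def service_class(array_model):
    """Return the service class a given array model would be offered under."""
    for tier, models in SERVICE_CLASSES.items():
        if array_model in models:
            return tier
    return "Bronze"             # everything else: capacity tier
```

The point is that the consumer orders "Gold" or "Silver" from the catalog; which array actually backs the request stays an administrator concern.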
EMC do not consider ViPR and VPLEX as two alternatives. Rather, we believe that we can offer a differentiated solution from EMC by combining two products. For example, ViPR is automatically linked with VPLEX. A virtual pool for VPLEX is created in ViPR that allows the back-end storage information to be linked to the created virtual pool. This allows ViPR to automate the processes from creating a logical volume from VMAX and VNX, which are linked to the virtual pool, to encapsulation of the volume with VPLEX.
In a certain sense, contingency planning and the flexible data migration function of VPLEX can be fully utilized, while providing management based on the service level for the virtual volume of VPLEX, which had no distinguishing features.
The table below summarizes the differences between ViPR and VPLEX. Please use it as a reference.
This question is about VMAX ports used by ViPR Controller.
When the ViPR Controller assigns a VMAX LUN, what logic does it use to select the ports to assign? I understand that the ViPR Controller leaves port settings to the upper-layer (host) OS. When creating a Virtual Pool, I think the only measure available is port filtering. If there is anything else I can do, please tell me.
The ViPR Controller's logic for allotting VMAX ports is determined by the ports included in the Virtual Array and the multipath count configured in the Virtual Pool. Ports are allotted so that the number of LUNs under the mapped ports is equalized. You cannot exclude specific ports that are included in a Virtual Array; to do that, you need to create multiple Virtual Arrays that each consist only of the desired ports.
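The balancing behaviour described above can be illustrated with a least-loaded port selection. This sketch mirrors the described behaviour, not ViPR's actual internal algorithm.

```python
def pick_ports(varray_ports, luns_per_port, path_count):
    """Choose `path_count` front-end ports for a new LUN, preferring the
    least-loaded ports so LUN counts stay equalized across the Virtual
    Array's ports. Illustrative only, not ViPR's internal algorithm."""
    ranked = sorted(varray_ports, key=lambda p: luns_per_port.get(p, 0))
    return ranked[:path_count]

# Example: two paths requested; the two least-loaded ports are chosen.
chosen = pick_ports(["FA-1A:0", "FA-2A:0", "FA-3A:0"],
                    {"FA-1A:0": 5, "FA-2A:0": 2, "FA-3A:0": 2},
                    2)
```

Note how port exclusion works in this model: a port you never want used must simply not appear in `varray_ports`, which corresponds to building a separate Virtual Array containing only the desired ports.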
This question is about Windows Japanese support
Windows-related services (NTFS) currently don't operate on a Japanese OS. I believe this would make it difficult to sell in the Japanese market. I'd like to know if you're aware of when Japanese operating systems will be supported.
Japanese language support for Windows at the provisioning target host will be included in ViPR 2.1, planned for release at the end of September.
This question is about multi-pathing software support.
When connecting storage from different companies, one obstacle is how multi-pathing software is handled. I understand that multi-pathing software from each company cannot be combined. This means that the server can only connect to devices within the support scope of the installed multi-pathing software.
Is there anything else I can do about this, other than install the software based on the assumption above? (For example, a direction where the scope of PowerPath's support is broadened, or a flow involving the standard features of the OS)
Regarding multipathing software, in addition to our own PowerPath, we currently also support the native multipathing software of Windows and Linux.
This question is about design at the time of installation
I understand that design at the time of installation is very important for the ViPR Controller. Does EMC have any shared design specifications (such as documents used during meetings)? (The following questions are closer to sales topics.)
Regarding the design specifications (meeting memo) at the time of installation, I believe that the installation roadmap and checklist in the Documentation Set for EMC ViPR 2.0 Product Documentation Index will serve as useful references. Based on your question, I assume that you are one of our partners or SIers. Please inquire with the SE that handles your company for documents such as specification sheets that are not disclosed to the public.
How is SDS sold? I really feel that there is strong demand among Japanese customers for an SDDC (Software-Defined Data Center) that includes servers and networks.
Since the ViPR Controller specializes in storage only, I'd like to put forward proposals that include servers and networks, but I can't quite figure out which upper-layer components would be best practice. I'd appreciate it if you could share some information about this.
ViPR is not a product tied to a specific vendor's stack; rather, it can openly link to any upper-layer automation or orchestration tool. Recently, some customers dislike being restricted to a certain hypervisor, preferring instead to combine and select the optimal hypervisors based on cost efficiency. I believe that in these cases we can demonstrate the unique merits of ViPR, such as being able to put multiple hypervisor environments on a unified storage foundation configured with ViPR. Also, I believe that being able to highlight which tools can be linked will distinguish each of our partner companies.
This question is about ViPR Services.
I feel that ViPR Services is an attractive solution for converting NAS to object storage. However, I really feel that the fact that the ViPR Controller is necessary is a big obstacle to sales.
I understand that, when using ViPR Services, the ViPR Controller only fulfils the role of configuration (settings). Is there any other function that the ViPR Controller performs?
ViPR Services only provides data access (the data plane), such as for objects and HDFS. All other functions belong to the ViPR Controller. For example, the ViPR Controller handles discovery of physical hardware, asset management such as virtual pool creation, the user-authentication linking that ViPR Services requires, provisioning of object data stores and buckets, and chargeback. Also, the Geo-Distribution function in the latest ViPR is implemented through multi-site linking by the ViPR Controller. I hope you can see that the ViPR Controller is not just a simple provisioning-automation feature, but the control plane for the whole software-defined storage environment.
I'm trying VNX and NetApp (FAS) with the ViPR 2.0 Controller, and have the following questions:
1. Regarding ONTAP on FAS: when will NetApp cDOT be supported?
2. With VNX, we can use many functions, such as disk selection, FAST, and tiering. With FAS, only disk-capacity allotment seems possible. Are there any plans to support the functions that FAS has?
3. "ViPR_2.0_Data_Sheet_and_Compatibility_Matrix.pdf" says that the ONTAP Version of NetApp (7-mode) is 8.1, but when I try to do provisioning using ViPR, I get an error message, telling me to update ONTAP to version 8.1.1 or later...
1. At present, support for NetApp cDOT (clustered Data ONTAP) is planned for the first half of 2015. (I'm sorry I can't be more specific.)
2. In addition to the basic functions of file system and snapshot, vFiler is also supported starting with ViPR 2.0.
3. Regarding the ONTAP prerequisites in the Support Matrix: is the version you're trying the latest, ViPR 2.0 SP1 (build number 22.214.171.124.193)? The latest Support Matrix revision is version 06, and the requirement from that version onward is 8.1.x. I'd like to confirm with the US side whether 8.1 is included in 8.1.x, but could you paste here the version information for the build you're trying and a screen capture of the Discovery results?
ViPR Support Matrix - Rev 06: https://community.emc.com/docs/DOC-38014
ViPR2.0 has a much better display than ViPR1.x, doesn't it? I think some missing settings (incoherent configurations) have been nicely resolved as well.
Thank you very much for your feedback on the GUI design. In the future, the management consoles of EMC software products will be replaced with ECUE (EMC Common User Experience), a console based on standard specifications that unify menu placement and color tones.
"ViPR2.0 Click-through Demo" was one of the documents downloaded by members who participated in the "EMC ASD Bootcamp 2.0 Hands-On." It was very easy to understand.
Like I said above, it's wonderful. I hope you will continue to expand the types of "Use Cases" and "Deploy and configure" for this click‑through demo.
Thank you for your feedback on our demo tool (which, unfortunately, is only available to our partners). The US side believes the demo tool is effective alongside operating on a real machine, because it shows the GUI screen transitions for each use case, so they prepared this kind of lightweight demo tool. I will pass on the feedback from Japan about the use-case expansion requests we have received. Please feel free to use the tool in your future business.
We'd like to give ViPR a trial run. Luckily, we have VNX_Block and Unified products, but I remember seeing a document recently that said ViPR environments require SMI-S_Provider or XML_API.
Is the SMI-S Provider for Block provided under model number SE-SMI-STDS? I can't find a license for the XML API for File. Is it offered separately?
The SMI-S Provider for VNX Block is not a commercial software product, so you can download it from the EMC Online Support website for free by clicking the link below.
EMC Online Support SMI-S Provider Product Page
Also, the XML-API for VNX File is built into the VNX File, so there is no need to prepare it separately.
Reference materials for ViPR
The latest ViPR 2.1 product documents are found at the following URL:
EMC ViPR 2.1 Product Documentation Index
When you access the URL, the indexes of the documents are displayed first. Click the link of the item that you want to find.
What functions are added to ViPR 2.1? Let's begin with the new features in ViPR 2.1.
We've had a few comments along the lines of "English only?", and yes, at the moment the documents are available only in English. We are sorry about that. Even though I'm not particularly good at English, I managed to read them; try reading them with a dictionary at hand.
But there is one unexpected advantage in the fact that the documents are available on a website: That is, you can use an online machine translation service on them.
The image below is New Features in ViPR 2.1 translated into Japanese using machine translation. It may not be the perfect solution, but you might find it easier than jumping right into the original English version.
Other advantages of the HTML format are that you can use a search engine on the internet, and you can access the documents easily from a mobile device. So you don't have to store as many PDF files and other documents locally as you used to.