January 28th, 2010 16:00

Cloud Tiering Appliance/VE Trial Version FAQ

This FAQ should tell you everything you need to know to get started with the Cloud Tiering Appliance/VE trial version.

Question: Where can I get the CTA/VE trial version?

Answer: You can download the trial version here.

Question: What files do I need to download?

Answer: You will need the following three files:

    • CTA/VE trial version software .ZIP
    • CTA/VE Trial Version Installation Instructions .PDF  (attached below)
    • CTA Online Help .PDF  (attached below)

Question: What version of CTA/VE is the trial version?

Answer: The CTA/VE trial version is the current release, version 10.0.  This release of CTA/VE supports VNX, Celerra, and NetApp primary storage and VNX, VNXe, Celerra, Centera, Atmos, Isilon, Data Domain, Windows, and Amazon S3 target storage.

Question: How is the CTA/VE trial version different from the full product?

Answer: The CTA/VE trial version has all of the functionality of the full CTA/VE product. It is limited only in the number of files it can move.  While the full product has a file limit of 500 million, the trial version file limit is 5,000.

Question: Can I use the CTA/VE trial version in a production environment?

Answer: No, using the trial version in a production environment is not recommended, and there is no upgrade path from the trial version to the full product.

Question: What are the system requirements for the CTA/VE trial version?

Answer: The CTA/VE trial version installs on a VMware ESXi 4.1 or 5.0, or ESX 4.0 or 4.1, server with the following resources:

    • 1 virtual CPU (vCPU)
    • 1 GB memory
    • 20 GB disk

There is also a VMware Workstation version of the CTA/VE trial software.

Question: What other resources are available to help me learn more about how CTA/VE works?

Answer: You can learn more about CTA/VE features and benefits by visiting the product page on EMC.com.

Question: Is there a Getting Started document?

Answer: The two documents to help you get started, attached here, are CTA/VE Trial Version Installation Instructions and CTA Online Help.

Have more questions?  Just add them to this discussion thread and we'll make sure they are answered quickly.

3 Attachments

February 22nd, 2010 04:00

Hi Kristine,

Just giving the FMA VE a try, and I'm running vSphere on my system. Your instructions state that the FMA VE is compatible with vSphere, but you don't mention whether we should upgrade the virtual hardware (the VM version is v4; vSphere supports VMs at v7) or upgrade the VMware Tools.

Is the newer VM hardware version supported on the VE, and is upgrading VMware Tools a good idea?

Thanks

TG

February 22nd, 2010 11:00

Hi Tony,

vSphere is backwards compatible, and we don't rev the FMA/VE software in response to changes in the hardware version or tools, so you shouldn't have any issues; it's not necessary to upgrade your VMware Tools.

Thanks for downloading the FMA/VE trial version.  I'll be eager to learn how things go for you, so I hope you share your experience here.

Kris

February 23rd, 2010 01:00

Thanks Kristine,

Will keep the virtual hardware and VMware Tools at v4, then.

Will keep you posted on how I get on.

TG

Tony Gent


March 2nd, 2010 07:00

Hi Kristine,

Firstly, thanks again for the FMA VE trial. It sets up well (Celerra is a little fiddly), the documentation works well, and the product seems pretty straightforward.

I do have one query that is hindering my archiving test, and it relates to "Delay Stubbing".

This is a new thing (since the last time I looked at FMA) and seems a little odd at first. Surely you don't stub until all the files are copied, so why should you need to delay stubbing by 'x' days? It seems odd, but still, I understand the need to be cautious.

I noted in the documentation that schedules that are "Run Now" are exempt from delayed stubbing. If I created my initial schedule as "Run Now", it would not let me run a simulation, as it would just start archiving straight away. So I surmised, incorrectly it seems, that creating a schedule set to run once a month and then manually running it from the schedule view would be the same as a "Run Once", but this did not seem to be the case. As a result, I've managed to archive without stubbing, which is a little pointless.

As this is really a test environment, I thought: OK, no problem, let's just delete this schedule and start again with a "Run Once". This also now fails and seems to take no files. So it seems that for every operation I have to wait the delay I set before the stubbing starts?

Is there no way to override this?

I have tried changing the policy to "Delay Stubbing: 0", but this does not seem to have any effect. It's as if the initial operation set the system to wait for a day, and re-archiving does not reset the clock.

Any insight would be gratefully received.

On a more positive note, the installation of the appliance is child's play. The Avamar AVE guys should come and see how you do it!

Thanks again,

TG

March 2nd, 2010 12:00

Hi Tony,

First of all, thanks for the feedback.  I'm very pleased to hear that you found the installation to be straightforward.

I completely understand your concern with delayed stubbing.  The feature was designed as a safeguard for asynchronously replicated environments (for DR).  For example, if the DR site data is 4 hours behind the primary site data, the delay allows time for the secondary file to replicate to the DR site before replacing the original file with a stub.  It's simply an extra safety measure; the default delay is 7 days.  We've put in a request, however, to change the default to 0 days.
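To illustrate the safeguard, here is a minimal sketch of the delayed-stubbing decision (hypothetical names, not FMA code): a file is only replaced with a stub once the configured delay has elapsed since archiving, which gives asynchronous replication time to copy the secondary file to the DR site.

```python
from datetime import datetime, timedelta

def stubbing_allowed(archived_at: datetime, now: datetime, delay_days: int) -> bool:
    """Hypothetical illustration: stub only after the delay has elapsed,
    so an asynchronous DR copy of the secondary file has time to catch up."""
    return now - archived_at >= timedelta(days=delay_days)

# With the default 7-day delay, a file archived 4 days ago is not stubbed yet;
# with the delay set to 0, it is eligible for stubbing immediately.
archived = datetime(2010, 3, 1)
print(stubbing_allowed(archived, datetime(2010, 3, 5), 7))  # False
print(stubbing_allowed(archived, datetime(2010, 3, 5), 0))  # True
```

This is only a model of the behavior described above; the actual appliance applies the delay per policy, not per file as shown here.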

I hope this helps, and let me know if I can help you in any other way.

Kris

March 10th, 2010 08:00

Hi Kristine,

Thanks again for the assistance. I wonder if I could impose upon your brain a little more. Please feel free to point me at another member of staff if you prefer, but I have a few questions relating to DR and the use of the FMA.

Most clients these days will buy two Celerras and, ideally, use Celerra Replicator to replicate file systems between them. How will the FMA cope with having its source and target replicated?

Please see the questions below.

Tech Questions 1 :

Assuming I want to have a Celerra at Site-A that contains TWO file systems, one for the LIVE data and one for the ARCHIVE. This way I can use Site A’s backup infrastructure to pull both source and archive off to tape via NDMP. Great. But I also have a second Celerra at Site-B which I use for DR.

Ideally, I'd also like to replicate these file systems to the Celerra at Site-B. Can I just use Celerra Replicator to replicate the 'Source' and 'Archive' file systems to a different Celerra? Assuming I can (as they are just file systems), would the FMA pointers still work?

If I try to use the file system at Site-B (in the event of a Site-A failure), would I be able to get the data from the archive?

Tech Question 1b:

If it's NOT possible to use Celerra Replicator to replicate the archive, can we ask Rainfinity to maintain the same archive twice, once on Site-A's Celerra and once on Site-B's? If I then fail over, would I be able to get to both the source AND the archive data?

Tech Question 2 :

In the event of a problem, is it possible to "recover" files from the archive straight back into the original file system? Essentially, run an "un-archive" task that reverts the file system to its un-archived state (rather than accessing and changing every file)?

Tech Question 3 :

The archived files seem to be in a hidden/locked folder. Is it possible to open and access the archive to reassure a client that the files are still present and correct, and not held in some obscure database?

Any assistance would be much appreciated. 

TG

March 15th, 2010 12:00

Hi Tony,

Sorry for the delay; I had to get a little help on this as it's beyond my area of expertise.  Here's what I've learned:

1. Site-A has two filesystems - one for live data and one for archive. Site-B is used for DR. Can I use Celerra Replicator to replicate both filesystems to Site-B and will recall still work? The filesystems can be replicated using Celerra Replicator to another Celerra.  However, be aware of the recall architecture when archiving from Celerra to a NAS repository filesystem. The Celerra Data Mover recalls the archived data from the 'Archive' filesystem upon client I/O directly without using FMA.  The DM resolves the secondary path presented in the stub file, i.e. cifs://server.domain.prv/archive/.rffm_nas/0/0/0/1000 to recall the archived data. To be able to recall at Site-B, the DR Celerra must be able to resolve this path at the DR archive filesystem upon a site failure. This would likely require failover of the original CIFS server from SITE-A (if using CIFS) to the Celerra at Site-B to work properly. If Site-A is active, the recall will be done using the archive filesystem at Site-A. This applies whether the source and archive filesystems are archived using CIFS or NFS.
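As a rough illustration of why name resolution matters here (this is just a sketch of the path format quoted above, not FMA code): the stub stores a URL-style secondary path, and whichever Celerra services the recall must be able to resolve the same server name and share.

```python
from urllib.parse import urlparse

def parse_stub_secondary(secondary: str):
    """Split a stub's secondary path (format as quoted above) into the
    server the Data Mover must resolve, the share, and the repository path."""
    parts = urlparse(secondary)
    share, _, repo_path = parts.path.lstrip("/").partition("/")
    return parts.netloc, share, repo_path

server, share, repo = parse_stub_secondary(
    "cifs://server.domain.prv/archive/.rffm_nas/0/0/0/1000")
print(server)  # server.domain.prv -- the DR Celerra must resolve this name
print(share)   # archive
print(repo)    # .rffm_nas/0/0/0/1000
```

The point of the sketch: the server name embedded in the stub is fixed, so after a site failure the DR side must either host or resolve that same CIFS server for recalls to succeed.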

2. Is it possible to recover files from the archive back into the primary filesystem? This can be done by deleting the FMA DHSM connection from the Celerra filesystem and using the 'recall_policy yes' option.

Example: fs_dhsm -c <fs_name> -delete <cid> -recall_policy yes

where <fs_name> is the primary filesystem and <cid> is the connection ID for the FMA DHSM connection. This will first recall all archived files back to the primary filesystem, then delete the connection.

3.  Is it possible to open and access the archive? Archived files are stored in a directory called .rffm_nas in the top level of the repository. From a CIFS client you can manually enter this path to access the repository, or enable viewing of hidden folders to see it at the top level. From an NFS client, you can do the same or view the directory using 'ls -la'. The files are stored using file IDs specific to FMA, so they do not directly correlate to the original filenames; use caution when accessing the repository directory structure.

Hope this is helpful,

Kris

May 19th, 2010 14:00

Hi Kris,

Thanks for the helpful info.

What is the cost of the full version of FMA/VE?

May 21st, 2010 14:00

Hello Maheshtata,

Thanks for your interest in FMA/VE.  If you will message me privately and let me know the name of your organization and where you're located, I can put you in touch with an EMC representative who can provide you with pricing information.

(Do you know how to send a private message?  Click on my name and when it takes you to my profile choose Send private message from the Actions box on the right.)

Kris

November 22nd, 2010 13:00

Abner,

Yes, you can have NetApp (the same filer) as the source and target (using different volumes for src and tgt).

Let me see if I can get you some help.

Dave

November 22nd, 2010 13:00

Hi Kristine,

I just deployed FMA/VE into our ESX environment, and now I would like to run some simulations. In the FMA/VE configuration I set up two file servers as NetApp and tested the connection with a successful result; I also configured a NAS repository as NetApp, then created a policy and rule, and finally scheduled a task and tried to run a simulation, but it does not generate any report.

Can I use a NetApp V3040 as both source (primary storage) and target (NAS repository) for simulation purposes?

Regards,

Abner Cordova.

November 22nd, 2010 13:00

Dave,

David Faul from EMC will have a remote session tomorrow to review my case, but I sent this question to the blog just to be sure this configuration is supported.

Regards,

Abner Cordova.

January 21st, 2011 18:00

Hello,

I have one question about using Celerra Replicator to replicate FMA volumes (source and destination).

If I issue the command "fs_dhsm -c filesystem_fma_source -i" (the FileMover display command), it displays the FileMover services already enabled on filesystem_fma_source.

The result is:

filesystem_fma_source:
state                = enabled
offline attr         = on
popup timeout        = 0
backup               = passthrough
read policy override = none
log file             = on
max log size         = 10MB
cid                 = 0
   type                 = CIFS
   secondary            = \\fmasecondary\filesystemdestination\
   state                = enabled
   read policy override = none
   write policy         = full
   local_server         = server.domain.local
   admin                = domain.local\admin

   wins                 =
cid                 = 1
   type                 = CIFS
 

Questions:

If I issue the same command at the DR site, with the file system already replicated:

1. Am I going to see the same connection from the DR site side?

2. What is going to happen if I fail over all the replicated filesystems and VDM connections (from the production site, in order to activate the DR site)?

Thank you.

May 22nd, 2011 15:00

Hi,

I was trying to run an import task from Celerra CIFS to Celerra CIFS, but I always get the same error: "Failed to create the NAS CIFS DHSM connection with error XmlApi::recvXmlResponse Error". I did all the steps in "FMAVE Trial Installation Instructions 0110.pdf":

server_http -append dhsm -users <user> -hosts <FMA_host>

server_http -service dhsm -start

fs_dhsm -modify <fs_name> -state enabled

All steps succeeded.

The following lines are from the archiving log (the errors are the WARNING entries):

May 05 13:46:36 Trying to create a CIFS to CIFS dhsm connection between primary server "EMCCIFSONE" share="FMASOURCE" and secondary server "EMCCIFSONE" share="FmaStore"

May 05 13:46:36 Trying to connect the Celerra XML API server '10.20.2.13' with user 'rffm'.

May 05 13:46:37 Trying to query the file system's ID of \\IPAddress\FMASOURCE.

May 05 13:46:37 Trying to enable DHSM on the file system 23

May 05 13:46:39 Trying to create a CIFS DHSM connection to \\emccifsone.dotnet.criticalsites.local\FmaStore for file system 23. NetBIOS domain 'DOTNET', CIFS user 'administrator'.

May 05 13:46:55 (WARNING) Failed to create the NAS CIFS DHSM connection with unknown error -13. You may need to manually create the DHSM connection.

May 05 13:46:55 (WARNING) Failed to create the NAS CIFS DHSM connection with  error XmlApi::recvXmlResponse Error

...

Cannot establish connection between secondary server emccifsone.dotnet.criticalsites.local and local server EMCCIFSONE.DOTNET.CRITICALSITES.LOCAL with share FmaStore with user DOTNET\administrator  NT status=INTERNAL_ERROR c00000e5H.

 This is a failure of the actual attempt to map the share over the network. status c00000e5H is a Microsoft SMB error code that among other places can be looked up here ... http://source.winehq.org/source/include/ntstatus.h
 
 1. BAD_NETWORK_NAME may mean that the share is not valid for the server.
 2. BAD_NETWORK_PATH may mean that the host name is wrong.

1. Look at the NT status code which is a CIFS error and act on the message.

2. Verify that the server emccifsone.dotnet.criticalsites.local name is an FQDN and not just the machine name or IP address. IP addresses cannot be used in place of a server name.

3. Verify that the user account belongs to the domain and has the correct credentials.

4. Verify that the Celerra Data Mover EMCCIFSONE.DOTNET.CRITICALSITES.LOCAL is joined to its domain. The CIFS server status will display that information.

5. If the error message seems incorrect based on the displayed input parameters then look at the Celerra Data Mover server log for more detailed information. Pay special attention to the log messages associated with the facilities MGFS, DHSM, SMB, and KERBEROS.
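For convenience, the NT status codes mentioned above can be decoded with a small lookup, sketched below. The numeric values and constant names come from ntstatus.h (the header linked above), not from FMA.

```python
# NT status values and names from ntstatus.h.
NT_STATUS = {
    0xC00000E5: "STATUS_INTERNAL_ERROR",
    0xC00000CC: "STATUS_BAD_NETWORK_NAME",  # share not valid for the server
    0xC00000BE: "STATUS_BAD_NETWORK_PATH",  # host name is wrong
}

def decode_nt_status(code: int) -> str:
    """Map an NT status code to its symbolic name, if known."""
    return NT_STATUS.get(code, f"unknown NT status 0x{code:08X}")

# The error in the log above reads "NT status=INTERNAL_ERROR c00000e5H":
print(decode_nt_status(0xC00000E5))  # STATUS_INTERNAL_ERROR
```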

Actually, I get the same error when I try to create the connection manually from the CLI:

$ fs_dhsm -connection FS01 -create -type cifs -admin 'DOTNET\administrator' -secondary '\\EMCCIFSONE\FmaStore' -local_server EMCCIFSONE

Enter Password:********

Error 13158252563: Cannot establish connection between secondary server EMCCIFSONE.DOTNET.CRITICALSITES.LOCAL and local server EMCCIFSONE.DOTNET.CRITICALSITES.LOCAL with share FmaStore with user DOTNET\administrator  NT status=INTERNAL_ERROR c00000e5H.

I don't know how the FMA knows this password to enter it, but it seems that it executes the same command (without the password!).

Please advise.

Thanks

Hatem Mostafa

May 23rd, 2011 08:00

Has the CIFS server "EMCCIFSONE" been added to FMA? When the file server is added, you provide the CIFS credentials; that is where FMA gets the password when using the UI. If the CIFS server has not been added, it needs to be added.

Try mapping to the share \\EMCCIFSONE\FmaStore as the user DOTNET\Administrator.  This will verify permissions to access the share.
