
February 26th, 2014 02:00

Avamar Grid vs Avamar single node with DD?

What are the pros and cons of each option (Avamar Grid vs Avamar single node with DD) ?

355 Posts

February 26th, 2014 04:00

Hello,

1.     Avamar provides source-based deduplication, while DD provides target-based inline deduplication. As a result, DD requires more network bandwidth.

2.     Avamar is well suited to file system backups and provides a very high dedup rate, but it has limited capacity, so Avamar is a good fit for mid-range environments.

3.     DD has a large capacity and is a good fit for enterprise environments. Compared to Avamar, DD is the better option for database backups.

4.     A multi-node Avamar server can run 72 backups per node (Avamar 7) plus one overall restore during the backup window. If you have 3 nodes, you can run 3x72-1 = 215 backups. With a single node plus DD, however, you can run up to 72 scheduled backups only. This can make a major difference.
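The concurrency arithmetic in point 4 can be sketched as follows (a toy model using the figures quoted above; the actual per-node limit depends on your Avamar version and configuration):

```python
def max_concurrent_backups(nodes: int, per_node: int = 72, restores: int = 0) -> int:
    """Toy model of Avamar 7 backup-window concurrency.

    Assumes each active storage node handles `per_node` concurrent backups,
    and each overall restore running during the backup window consumes one
    of those slots. These figures are from the post above, not a spec sheet.
    """
    return nodes * per_node - restores

# Three-node grid with one restore in flight during the window:
print(max_concurrent_backups(3, restores=1))   # 3*72 - 1 = 215
# Single node with DD, no restore running:
print(max_concurrent_backups(1))               # 72
```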

I hope this will be helpful for you.

Regards,

Pawan

1 Message

February 26th, 2014 11:00

Hi,

If you don't need as many connections as a multi-node Avamar grid provides, then adding a Data Domain opens up additional use cases, such as using it as an archive target, or as a target for Oracle RMAN backups over DD Boost without the Avamar agent, which some companies do to give DBAs control of database backups.

Thanks.

Suresh.

91 Posts

February 27th, 2014 06:00

There's an (apparently still) undocumented issue with metadata containers that prevents code upgrades. All of the backup metadata goes to the Avamar system, and if you send it too much metadata, you won't be able to run code upgrades, even though you might still have available storage space on the node or the DD.

On the upside you can do VM instant access restores from image level backups sent over to a DD.

91 Posts

February 27th, 2014 07:00

I had a customer who was at 7.0 and wanted to go to 7.1, and was told by RCM that it wouldn't be possible due to the metadata constraints.

You make it sound as though, with an Avamar + DD configuration, you need to give some consideration to which system you store your backups on.

2K Posts

February 27th, 2014 07:00

"All of the backup metadata goes to the Avamar system and if you send it too much metadata, even though you still might have available storage space on the node or DD, you won't be able to run code upgrades."

The amount of metadata going to the Avamar is rarely going to be a problem. The problem arises if backup data is sent to the Avamar back-end, causing an increase in the number of stripe files stored on the Avamar server. Even if they are emptied by garbage collection, stripe files can never be removed, so if the "cur" capacity is being consumed by empty "atomic" stripe files, that space can't be used to create new "composite" stripe files to store backup metadata.
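As an aside, the stripe behavior just described can be sketched with a small illustrative model (not Avamar internals, just the shape of the problem): stripe files, once created, are never removed, so even stripes emptied by garbage collection keep consuming the capacity needed to create new metadata stripes.

```python
class StripePool:
    """Toy model of Avamar 'cur' capacity consumed by stripe files.

    Illustrative only: create_stripe fails once the pool is full, and
    garbage collection empties stripes but never deletes the stripe
    files themselves, so their capacity is never returned.
    """
    def __init__(self, max_stripes: int):
        self.max_stripes = max_stripes
        self.stripes = []                    # {"kind": str, "empty": bool}

    def create_stripe(self, kind: str) -> bool:
        if len(self.stripes) >= self.max_stripes:
            return False                     # full, even if some stripes are empty
        self.stripes.append({"kind": kind, "empty": False})
        return True

    def garbage_collect(self):
        # GC empties stripes but never removes the stripe files.
        for s in self.stripes:
            s["empty"] = True

pool = StripePool(max_stripes=4)
for _ in range(4):
    pool.create_stripe("atomic")     # backup data accidentally sent to the Avamar
pool.garbage_collect()
# Even after GC, no room is freed for new metadata ("composite") stripes:
print(pool.create_stripe("composite"))   # False
```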

This is mainly a concern for systems moving from storing backup data on the Avamar back-end to storing backup data on a Data Domain back-end instead. We have also seen this issue when a large backup is sent to the Avamar back-end accidentally because the checkbox to send the data to Data Domain was not checked.

For the former, there was a recent process change that will allow a larger percentage of the systems with high cur capacity to be upgraded to the 7.0 release, with various conditions (e.g. all new backup data will be sent to Data Domain). If you have systems that couldn't be upgraded at the time of the 7.0 release, you may want to get in touch with the RCM team to run a fresh assessment on these systems as they may qualify for an upgrade under the new process.

A new setting was created for the Avamar Administrator Server (MCS) that can be enabled in mcserver.xml to help prevent the latter. This setting can be enabled in two modes -- one to send backups to the Data Domain by default and one to disallow backups to the Avamar back-end completely. This setting is disabled by default but may be enabled under the new upgrade process I mentioned above or by customer request.
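For context, MCS preferences of this kind are key/value entries in mcserver.xml. The key name below is a placeholder (the post does not name the actual setting), shown only to illustrate the entry format:

```xml
<!-- Hypothetical key name for illustration; the real setting name is not given in this thread. -->
<entry key="example_send_backups_to_ddr_by_default" value="true" />
```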

2K Posts

February 27th, 2014 07:00

Also, this issue should only affect upgrades from older versions to 7.0 since 7.0 is the first version that considers metadata capacity. Once the system is running 7.0, future upgrades (including application of service packs or upgrades to new releases) should have no further problems with this.

133 Posts

February 27th, 2014 08:00

Hi,

I'm Jeff Cameron, the RCM BRS technical supervisor.

Could you please email me at jeffrey.cameron@emc.com with the SR number of the upgrade that was denied?

I would like to look at the details for this upgrade request.

Thank you!

2K Posts

February 27th, 2014 08:00

I've reached out to RCM and management about the 7.0.0 to 7.0.1 upgrade.

As far as which system to store backups, some consideration is required. If the Avamar back-end is very small, you probably don't want to send backups there because of the issue with cur capacity that I mentioned above. On the other hand, while almost all of the Avamar plug-ins support the Data Domain back-end in Avamar 7.0, there are still some that do not (the major one being the VSS System State plug-in).

All of that said, the back-end to use is mainly a concern on systems where it's not possible to expand the Avamar storage (i.e. single node ADS / AVE systems or multi-node systems already at the 16 storage node limit). Adding nodes is the only way to lower per-node cur capacity.

Things are definitely moving toward using the Data Domain back-end for everything in Avamar / Data Domain integration so if in doubt, writing the backups to the Data Domain back-end is probably the way to go.

91 Posts

February 27th, 2014 10:00

Jeff,

Sent you an email with the appropriate info. RCM did not deny the upgrade request; they attempted to run the upgrade until a part of the upgrade process relating to metadata capacity prevented them from going forward.

Thanks for looking into it.

91 Posts

February 27th, 2014 10:00

Part of the problem is that this very important limitation does not appear to be documented anywhere. You've said more about this issue than anything we found elsewhere on EMC's support site, in the Avamar documentation, and from our EMC BRS reps combined.

From what I've seen, the new Metadata Utilization information in 7.x is a much-needed improvement, but it still doesn't provide enough information to properly plan for required capacity, growth, and replication. Never mind the number of 6.x customers running alongside a DD. It's one thing to use existing methods of estimating the amount of capacity required; it's another to do so while scratching one's head and wondering about unspecified metadata limits.

It's not irrational to be presented with a 7.8 TB Avamar node and a 10 TB Data Domain system and expect to use that capacity up to the documented limitations.

2K Posts

February 27th, 2014 14:00

The amount of metadata stored on the Avamar for an Avamar / Data Domain integrated system depends mainly on the number of files being backed up. For databases and VMware backups, the file counts will be very low so the amount of metadata storage required will also be low. The metadata capacity, therefore, only became important once support for filesystem and NDMP backups to Data Domain was added in Avamar 7. This is why metadata capacity was never tracked prior to Avamar 7.
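The scaling argument above can be made concrete with a toy estimate. The per-file overhead below is a placeholder assumption, not a documented figure; the point is only that metadata grows with file count, so filesystem backups (millions of files) dwarf VM or database backups (a handful of large files):

```python
def metadata_estimate_gib(file_count: int, bytes_per_file: int = 1024) -> float:
    """Toy estimate of Avamar-side metadata for backups landing on Data Domain.

    `bytes_per_file` is an assumed placeholder, NOT a figure from Avamar
    documentation. The estimate exists only to show that metadata scales
    with the number of files backed up, not with the data volume.
    """
    return file_count * bytes_per_file / 2**30

# A 10-million-file filesystem backup vs. a VM image backup of ~20 files:
print(round(metadata_estimate_gib(10_000_000), 2))
print(metadata_estimate_gib(20))
```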

I'm not that involved with the pre-sales side of things but I have heard that the sizing tools have been updated to consider metadata capacity.

For existing installations, the proactive_check script will report on metadata capacity if run in preupgrade mode. Depending on the results, the RCM team will provide advice on whether or not the system should be upgraded and what additional measures should be implemented (such as redirecting new backups to the DD only). Once the upgrade is complete, the Avamar Administrator GUI will display information about the metadata capacity so it can be tracked. Unless the customer is planning to run filesystem or NDMP backups to the DD, they should never have to worry about metadata capacity.

Regarding documentation, there is a Metadata Capacity Reporting and Monitoring tech note available. It's a little light on technical detail but is a nice customer-friendly overview that covers the core concepts:

https://support.emc.com/docu49793_Avamar-Metadata-Server-Capacity-Technical-Note.pdf

There is a KB article describing metadata capacity currently in review; unfortunately it is not yet published. The article number is 176865 if you want to keep an eye out for it.

If you have questions about metadata capacity on specific systems, you can always open a ticket and ask to speak with somebody from the capacity and replication team. Since you're a partner, they may also be able to provide you with some information on how to run through metadata capacity assessments on your own in future.

I hope I've addressed your concerns but if not, please let me know.

173 Posts

March 3rd, 2014 06:00

Exactly…

42 Posts

March 3rd, 2014 06:00

Hello,

I'd just like an explanation of this statement: that Avamar provides source-based deduplication while DD provides target-based inline deduplication, so the bandwidth requirement is higher for DD.

Avamar uses DD Boost in this case, so deduplication is also source-based, isn't it?

Even if DD Boost is worse than the Avamar algorithm, you need less bandwidth than a normal backup, no?

Regards,

F@b

42 Posts

March 3rd, 2014 06:00

Thank you,

So it is like any deduplication algorithm without a local hash file.

F@b

173 Posts

March 3rd, 2014 06:00

There is very intensive communication between the Boost client and the DD: the client has to send each hash to the DD. It doesn't need high bandwidth, but it does need low latency…
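A minimal sketch of this kind of hash negotiation (an illustrative model of source-side dedup, not the actual DD Boost protocol): the client sends chunk fingerprints, the target replies with which chunks it already holds, and only new chunks cross the wire. Each fingerprint batch costs a round trip, which is why latency matters more than raw bandwidth.

```python
import hashlib

class ToyDedupTarget:
    """Stands in for the appliance: stores chunks keyed by fingerprint."""
    def __init__(self):
        self.store = {}

    def filter_known(self, fingerprints):
        # One round trip: tell the client which fingerprints are new to us.
        return [fp for fp in fingerprints if fp not in self.store]

    def write(self, fingerprint, chunk):
        self.store[fingerprint] = chunk

def backup(client_chunks, target):
    """Send only chunks the target doesn't already hold; return bytes sent."""
    fingerprints = [hashlib.sha256(c).hexdigest() for c in client_chunks]
    needed = set(target.filter_known(fingerprints))   # the latency-bound step
    sent = 0
    for fp, chunk in zip(fingerprints, client_chunks):
        if fp in needed:
            target.write(fp, chunk)
            sent += len(chunk)
            needed.discard(fp)   # don't resend duplicates within this backup
    return sent

target = ToyDedupTarget()
first = backup([b"A" * 1024, b"B" * 1024], target)
second = backup([b"A" * 1024, b"B" * 1024, b"C" * 1024], target)
print(first, second)   # 2048 1024: only the new chunk crosses the wire
```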

Regards

Lukas
