dverma
1 Copper

Avamar Grid vs Avamar single node with DD?

What are the pros and cons of each option (Avamar Grid vs. Avamar single node with DD)?


Re: Avamar Grid vs Avamar single node with DD?

Hello,

1. Avamar provides source-based deduplication, while Data Domain provides target-based inline deduplication. Because data is only deduplicated once it reaches the target, DD requires more network bandwidth.

2. Avamar is well suited to file system backups and achieves very high deduplication rates, but its capacity is limited, so it is a good fit for mid-range environments.

3. Data Domain offers much larger capacity and is a good fit for enterprise environments. Compared to Avamar, DD is the better option for database backups.

4. A multi-node Avamar server can run 72 concurrent backups per node (Avamar 7) plus one overall restore during the backup window. With 3 nodes you can run 3 x 72 - 1 = 215 backups, but a single node with DD is limited to 72 scheduled backups. This point can make a major difference; see the quick sketch below.
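
Here is a quick back-of-the-envelope sketch of that math. The 72-streams-per-node figure and the single reserved restore stream are the assumptions stated above, not official sizing guidance:

# Rough concurrency sketch based on the figures quoted above.
# Assumptions (from this post, not official sizing guidance):
#   - each Avamar 7 storage node handles up to 72 concurrent backup streams
#   - one stream is kept free so a restore can run during the backup window
STREAMS_PER_NODE = 72
RESERVED_FOR_RESTORE = 1

def max_concurrent_backups(storage_nodes):
    """Rough maximum number of concurrent backups for a grid of the given size."""
    return storage_nodes * STREAMS_PER_NODE - RESERVED_FOR_RESTORE

for nodes in (1, 3, 16):
    print("%d storage node(s): ~%d concurrent backups" % (nodes, max_concurrent_backups(nodes)))
# 3 storage node(s): ~215 concurrent backups, matching the 3 x 72 - 1 figure above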

I hope this will be helpful for you.

Regards,

Pawan

suresh_kerai1
1 Copper

Re: Avamar Grid vs Avamar single node with DD?

Hi,

If you don't need as many connections as a multi-node Avamar grid provides, then adding a Data Domain opens up additional use cases, such as using it as an archive target or as a target for Oracle RMAN backups via DD Boost without the Avamar agent. Some companies do this to leave DBAs in control of their database backups.

Thanks.

Suresh.


Re: Avamar Grid vs Avamar single node with DD?

There's an (apparently still) undocumented issue with metadata containers which can prevent code upgrades. All of the backup metadata goes to the Avamar system, and if you send it too much metadata, you won't be able to run code upgrades even though you might still have storage space available on the node or the DD.

On the upside, you can do VM instant access restores from image-level backups sent over to a DD.

ionthegeek
4 Beryllium

Re: Avamar Grid vs Avamar single node with DD?

All of the backup metadata goes to the Avamar system, and if you send it too much metadata, you won't be able to run code upgrades even though you might still have storage space available on the node or the DD.

The amount of metadata going to the Avamar is rarely going to be a problem. The problem arises if backup data is sent to the Avamar back-end, causing an increase in the number of stripe files stored on the Avamar server. Even if they are emptied by garbage collection, stripe files can never be removed, so if the "cur" capacity is being consumed by empty "atomic" stripe files, that space can't be used to create new "composite" stripe files to store backup metadata.

This is mainly a concern for systems moving from storing backup data on the Avamar back-end to storing backup data on a Data Domain back-end instead. We have also seen this issue when a large backup is sent to the Avamar back-end accidentally because the checkbox to send the data to Data Domain was not checked.
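
To make that concrete, here is a deliberately simplified toy model of the behaviour described above (my own illustration, not how GSAN actually lays out or accounts for stripe files):

# Toy model: stripe files are only ever created, never removed, so the space
# they occupy ("cur" capacity) never shrinks, even after garbage collection
# empties them. Illustration only -- not the real GSAN format or accounting.

class ToyAvamarNode:
    def __init__(self, cur_limit_gb):
        self.cur_limit_gb = cur_limit_gb   # per-node capacity ceiling
        self.stripes = []                  # one record per stripe file ever created

    def _add_stripe(self, kind, size_gb):
        if self.cur_gb() + size_gb > self.cur_limit_gb:
            raise RuntimeError("no cur capacity left to create a new %s stripe" % kind)
        self.stripes.append({"kind": kind, "allocated_gb": size_gb, "empty": False})

    def write_backup_data(self, size_gb):
        """Backup data sent to the Avamar back-end lands in 'atomic' stripes."""
        self._add_stripe("atomic", size_gb)

    def write_backup_metadata(self, size_gb):
        """Metadata (kept on Avamar even with a DD back-end) lands in 'composite' stripes."""
        self._add_stripe("composite", size_gb)

    def garbage_collect(self):
        """GC empties expired atomic stripes, but the stripe files themselves remain."""
        for stripe in self.stripes:
            if stripe["kind"] == "atomic":
                stripe["empty"] = True     # logically empty, still on disk

    def cur_gb(self):
        # The key point: emptied stripes still count against cur capacity.
        return sum(s["allocated_gb"] for s in self.stripes)

node = ToyAvamarNode(cur_limit_gb=1000)
node.write_backup_data(900)      # large backup accidentally sent to the Avamar back-end
node.garbage_collect()           # stripes are emptied, but cur stays at 900 GB
node.write_backup_metadata(200)  # fails: no room left for new metadata stripes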

For the former, there was a recent process change that will allow a larger percentage of the systems with high cur capacity to be upgraded to the 7.0 release, with various conditions (e.g. all new backup data will be sent to Data Domain). If you have systems that couldn't be upgraded at the time of the 7.0 release, you may want to get in touch with the RCM team to run a fresh assessment on these systems as they may qualify for an upgrade under the new process.

A new setting was added for the Avamar administrator server (MCS) that can be enabled in mcserver.xml to help prevent the latter. It supports two modes: one sends backups to the Data Domain by default, and one disallows backups to the Avamar back-end completely. The setting is disabled by default, but it may be enabled under the new upgrade process I mentioned above or by customer request.
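
For reference, mcserver.xml entries are simple key/value pairs, so enabling a setting like this comes down to editing the entry and restarting the MCS. I don't have the exact entry name in front of me, so the key in the sketch below is a placeholder, not the real setting name:

# Hypothetical sketch only: "store_backups_on_data_domain_by_default" is a
# placeholder key, not the actual mcserver.xml setting name. mcserver.xml
# entries are generally of the form <entry key="..." value="..." />.
import xml.etree.ElementTree as ET

MCSERVER_XML = "/usr/local/avamar/var/mc/server_data/prefs/mcserver.xml"  # usual location
PLACEHOLDER_KEY = "store_backups_on_data_domain_by_default"               # placeholder name

def set_entry(path, key, value):
    """Set the value attribute of every <entry key=...> matching the given key."""
    tree = ET.parse(path)
    changed = False
    for entry in tree.getroot().iter("entry"):
        if entry.get("key") == key:
            entry.set("value", value)
            changed = True
    if changed:
        tree.write(path)
    return changed

# set_entry(MCSERVER_XML, PLACEHOLDER_KEY, "true")
# Restart the MCS afterwards for mcserver.xml changes to take effect.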

ionthegeek
4 Beryllium

Re: Avamar Grid vs Avamar single node with DD?

Also, this issue should only affect upgrades from older versions to 7.0 since 7.0 is the first version that considers metadata capacity. Once the system is running 7.0, future upgrades (including application of service packs or upgrades to new releases) should have no further problems with this.


Re: Avamar Grid vs Avamar single node with DD?

I had a customer who was at 7.0 and wanted to go to 7.1, and RCM told them that wouldn't be possible due to the metadata constraints.

You make it sound as though, with an Avamar + DD config, you need to give some thought to which system you store your backups on.


Re: Avamar Grid vs Avamar single node with DD?

Hi,

I'm Jeff Cameron, the RCM BRS technical supervisor.

Could you please email me at jeffrey.cameron@emc.com with the SR number of the upgrade that was denied?

I would like to look at the details for this upgrade request.

Thank you!

ionthegeek
4 Beryllium

Re: Avamar Grid vs Avamar single node with DD?

I've reached out to RCM and management about the 7.0.0 to 7.0.1 upgrade.

As far as which system to store backups on, some consideration is required. If the Avamar back-end is very small, you probably don't want to send backups there because of the issue with cur capacity that I mentioned above. On the other hand, while almost all of the Avamar plug-ins support the Data Domain back-end in Avamar 7.0, there are still some that do not (the major one being the VSS System State plug-in).

All of that said, the back-end to use is mainly a concern on systems where it's not possible to expand the Avamar storage (i.e. single node ADS / AVE systems or multi-node systems already at the 16 storage node limit), since adding nodes is the only way to lower per-node cur capacity.

Things are definitely moving toward using the Data Domain back-end for everything in Avamar / Data Domain integration, so if in doubt, writing the backups to the Data Domain back-end is probably the way to go.
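
Purely to illustrate that reasoning, here is a rough sketch of the decision logic. The plug-in set is just the example mentioned above, not an official support matrix:

# Illustration of the back-end selection reasoning above. The unsupported
# plug-in set is only the example from this post, not an official matrix.

# Plug-ins noted above as not yet supporting the DD back-end in Avamar 7.0
PLUGINS_WITHOUT_DD_SUPPORT = {"VSS System State"}

def choose_backend(plugin):
    """Pick a back-end for a backup, following the reasoning in this post."""
    if plugin in PLUGINS_WITHOUT_DD_SUPPORT:
        return "Avamar back-end"        # no choice: plug-in can't write to Data Domain yet
    return "Data Domain back-end"       # default: preserve Avamar cur capacity for metadata

print(choose_backend("VSS System State"))     # Avamar back-end
print(choose_backend("Windows File System"))  # Data Domain back-end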


Re: Re: Avamar Grid vs Avamar single node with DD?

Jeff,

Sent you an email with the appropriate info. RCM did not deny the upgrade request; they attempted to run the upgrade until a step in the upgrade process relating to metadata capacity prevented them from going forward.

Thanks for looking into it.
