Avamar: MCS receives "OutOfMemory" messages
Summary: The Avamar Management Console Server (MCS) receives "OutOfMemory" messages due to a large NVRAM file from a virtual machine.
Symptoms
Scheduled VMware and physical backups appear to be unresponsive.
They may appear in the Avamar Administrator Activity window as running, but without any change in the progress bytes.
The Management Console Server (MCS) reports an "out of memory" condition, and the UI may be slow to respond.
VMware clients do not check in for long periods of time.
Cause
While backing up virtual machines, the MCS loads each client's Non-Volatile Random-Access Memory (NVRAM) file into its Java heap to read the BIOS configuration.
The NVRAM file resides on the datastore, in the same directory as the virtual machine's *.vmdk files.
Occasionally, an NVRAM file, which is typically only a few KB in size (less than 10 KB), grows to around 1 MB or more.
This is enough to exhaust the MCS Java heap.
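To gauge exposure before backups start failing, you can check NVRAM file sizes directly on the datastore. The following is a minimal sketch, assuming shell access to an ESXi host with datastores mounted under /vmfs/volumes (the mount point and find options may differ in your environment):

# List any .nvram file larger than roughly 100 KB; a healthy file is usually under 10 KB.
find /vmfs/volumes -name "*.nvram" -size +100k -exec ls -lh {} \;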
Resolution
1. Log in to the Avamar Utility Node and load the admin keys. For instructions on loading keys, see Avamar: How to Log in to an Avamar Server and Load Various Keys.
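On many Avamar releases, the key load resembles the following sketch; treat the referenced article as authoritative, since key names and paths can vary by version:

ssh-agent bash
ssh-add ~admin/.ssh/admin_key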
2. Check whether the Java heap is already set to 1.5 GB, denoted by "-Xmx1536m" in the output below:
ps -elf | grep java | grep mcserver
0 S admin 13623 1 2 76 0 - 556102 - 16:54 pts/0 00:10:16
/usr/java/jre1.6.0_22/bin/java -Xmx1536m -XX:MaxPermSize=256m -server -ea -cp
.:/usr/local/avamar/lib/mcserver.jar:/usr/local/avamar/lib/asn_server.jar:/usr/local/avamar/lib/mail.jar:/usr/local/avamar/lib/activation.jar:/usr/local/avamar/lib/xercesImpl.jar:/usr/local/avamar...<snip>...
6.1.23.jar:/usr/local/avamar/lib/jetty-util-6.1.23.jar:/usr/local/avamar/lib/servlet-api-2.5.jar:/usr/local/avamar/lib/jsp-api-2.1.jar:/usr/local/avamar/lib/jsp-
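As a convenience, the same information can be filtered down to just the heap flag; the bracketed [m] in the pattern stops grep from matching its own process:

ps -elf | grep '[m]cserver' | grep -o -- '-Xmx[0-9]*[mg]'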
3. Confirm whether "OutOfMemory" messages are reported in the log (potentially caused by a large NVRAM file):
grep -hi "OutOfMemoryError\|VMware" /usr/local/avamar/var/mc/server_log/mcserver.log*
Exception in thread "Thread-191" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Unknown Source)
at java.io.ByteArrayOutputStream.write(Unknown Source)
at com.avamar.mc.vmware.VmwareVirtualMachineFiles.readVmFile(VmwareVirtualMachineFiles.java:386)
at com.avamar.mc.vmware.VmwareVirtualMachineFiles.getVirtualMachineFile(VmwareVirtualMachineFiles.java:109)
at com.avamar.mc.vmware.VmwareVirtualCenter.getVirtualMachineFile(VmwareVirtualCenter.java:902)
at com.avamar.mc.vmware.VmwareService.getNvramFileContent(VmwareService.java:2596)
at com.avamar.mc.wo.JobScheduler._gotVmWork(JobScheduler.java:530)
at com.avamar.mc.wo.JobScheduler.gotVmWork(JobScheduler.java:327)
at com.avamar.mc.wo.DPNScheduler.gotVmWork(DPNScheduler.java:144)
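To judge how often and how recently the condition occurred, count the hits per log file (the trailing grep hides files with zero matches):

grep -c "OutOfMemoryError" /usr/local/avamar/var/mc/server_log/mcserver.log* | grep -v ':0$'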
4. Check which clients have a large NVRAM file:
grep 'nvramContent="null"' /usr/local/avamar/var/mc/server_log/mcserver.log*
This sample output shows client "Company_1_Email-PDB" reporting nvramContent="null":
FINE: MCS to Client(10.n.n.64:39270) Response: <workorder work="backup" type="work" ack="yes" cid="8580a6233796c72a4a73b89f0d2ae5fb644fcbd0" sync="bg"
wid="NFSSQLCluster-1350439200070" pid="vmimagew" pidnum="3016" msgver="5" sessionid="c61b498fdf57eaca6cadc760a3b7fbf6f7e89aea"
targetCid="bae291d2ba3d3e3ea77d5394bcaa60da3a072463" targetUUID="500faaf4-8f4b-b5f2-1744-1277d19a79cf" vcCid="3a25059a880e1d22b981c5815858d3dc95312621"
time="1350442682" customaction="" ><targetlist><path name="[nas_datastore_03] Company_1_Email-PDB/Company_1_Email-PDB.vmdk" backup="true" diskCapacity="85899345920">
</path><path name="[nas_datastore_03_sqldb] Company_1_Email-PDB/Company_1_Email-PDB.vmdk" backup="true" diskCapacity="171798691840"></path>
<path name="[nas_datastore_03_sqllogs] Company_1_Email-PDB/Company_1_Email-PDB.vmdk" backup="true" diskCapacity="32212254720"></path></targetlist><directives>
<flag type="string" name="encrypt" value="proprietary" /> <flag type="string" name="encrypt-strength" value="cleartext" /> <flag type="string" name="expire"
value="1353034800" /> <flag type="string" name="retention-type" value="daily,weekly,monthly,yearly"
....
"TRUE"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.present = "TRUE"
scsi0:2.redo = ""

migrate.hostlog = "./Company_1_Email-PDB-b5986b60.hlog"

scsi0:0.ctkEnabled = "TRUE"
ctkEnabled = "TRUE"

sched.scsi0:1.shares = "normal"

ethernet1.virtualDev = "vmxnet3"
ethernet1.pciSlotNumber = "192"
ethernet1.startConnected = "TRUE"

ethernet1.allowGuestConnectionControl = "TRUE"
ethernet1.features = "1"
ethernet1.wakeOnPcktRcv = "TRUE"

ethernet1.addressType = "vpx"
ethernet1.generatedAddress = "00:00:56:8f:3f:b1"
ethernet1.networkName = "VM Network - 117"
ethernet1.present = "TRUE"

" nvramContent="null" prevBackup="null" snapshotDesired="always" prevSnapName="null" >
</vmInfo><vmDiskInfoList numDisks="3" > <vmDiskInfo capacityInKB="83886080" vmdkFilename="[nas_datastore_03] Company_1_Email-PDB/Company_1_Email-PDB.vmdk"
vmdkBaseFile="[nas_datastore_03] Company_1_Email-PDB/Company_1_Email-PDB.vmdk" ordinal="1" srcOrdinal="-1" label="Hard disk 1" diskKey="2000"
datastoreUrl="ds:///vmfs/volumes/e4b3f733-24646679/" datastore="nas_datastore_03"
Note: More than one client may report a null nvramContent value, so all instances must be reviewed.
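Because the nvramContent="null" attribute and the client name can land on different lines of a wrapped workorder entry, printing some leading context around each match makes the client easier to identify. The context depth below (-B 40) is an estimate and may need adjusting for your logs:

grep -B 40 'nvramContent="null"' /usr/local/avamar/var/mc/server_log/mcserver.log*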
5. Once the client, or clients, with the large NVRAM file is identified from the logs:
a. Reboot the client virtual machine so that it creates a new NVRAM file (a quick size check is sketched after this step).
b. If the problem recurs for the same client, create a Service Request with the Dell Technologies Technical Support Team.
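After the reboot, it is worth confirming that the regenerated NVRAM file is back to a normal size. A sketch assuming ESXi-style datastore paths; <datastore> and <vm_name> are placeholders for your environment:

# A freshly created NVRAM file should again be only a few KB.
ls -lh /vmfs/volumes/<datastore>/<vm_name>/<vm_name>.nvram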
6. If the error message persists, see Avamar: Symptom Code 22402 - Desc: Could not save console server data to server (Resolution Path) for additional troubleshooting.