yourssubash's Posts

Hi, please find the image and folder that contain the DD main log files, which need in-depth log analysis for various issues/errors. Please share any further details to help others with this information.
This post lists the various Data Domain user roles and their activities/privileges. This is called RBAC (Role-Based Access Control), an authentication policy that controls which DD System Manager controls and CLI commands a user can access on a system.

List of roles: 1) sysadmin, admin, limited-admin; 2) user, security officer, backup-operator; 3) none, tenant-admin, tenant-user.

- Sysadmin is the default admin user.
- An admin can configure and monitor the entire Data Domain system. Most configuration features and commands are available only to admin-role users.
- The limited-admin role can configure and monitor the Data Domain system with some limitations. Users who are assigned this role cannot perform data deletion operations, edit the registry, or enter bash or SE mode.
- The user role can monitor the system, change their own password, and view system status. The user role cannot change the system configuration.
- The security role is for a security officer who can manage other security officers, authorize procedures that require security officer approval, and perform all tasks supported for user-role users. Only the sysadmin user can create the first security officer, and that first account cannot be deleted. After the first security officer is created, only security officers can create or modify other security officers.
- The backup-operator role can perform all tasks permitted for user-role users, create snapshots for MTrees, import, export, and move tapes between elements in a virtual tape library, and copy tapes across pools.
- The none role is used for DD Boost authentication and tenant users. A none-role user can log in to a Data Domain system and change their password, but cannot monitor or configure the primary system.
- The tenant-admin role can be appended to the other (non-tenant) roles when the Secure Multi-Tenancy (SMT) feature is enabled. A tenant-admin user can configure and monitor a specific tenant unit, as well as schedule and run backup operations for the tenant.
- The tenant-user role can be appended to the other (non-tenant) roles when the SMT feature is enabled. It enables a user to monitor a specific tenant unit and change the user password.

References: DDOS administration guides from the EMC support site.
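Adding a quick CLI illustration of how these roles are assigned (a sketch from memory of the DD OS CLI; exact syntax can vary by DDOS release, and the usernames below are made up). A user is given a role at creation time:

# user add backup_ops role backup-operator
# user add sec_off role security
# user show list

The first command creates a backup-operator user, the second a security officer, and the last lists all local users together with their roles.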
Yes, a simple click on "Follow" gives us notifications for any posts on the subject title/topic. It is an easy way to stay updated without missing anything.
Hi James, your explanations are always vivid and informative. They serve as a reference while troubleshooting customer issues. Kudos to your subject-matter expertise.
I found some documents for you. I believe it is not a Data Domain issue. In case these are of any help: https://community.emc.com/message/886364 https://support.emc.com/docu5606_NAS_Access-Based_Enumeration_Support_Technical_Note.pdf?language=en_US
Very nice and customer-friendly reference.
Hi Nick6266, and for others' reference: there is a customer-accessible KB that throws some light on MTree replication snapshots not getting deleted. Check this out: https://support.emc.com/kb/471648
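As a quick first check before going through the KB (a sketch only; the MTree path /data/col1/avamar-123 and the snapshot name are made-up examples, and the exact expire syntax may vary by DDOS release), you can list the snapshots on the affected MTree and expire any that are no longer needed:

# snapshot list mtree /data/col1/avamar-123
# snapshot expire daily-2016-01-01 mtree /data/col1/avamar-123

An expired snapshot is not removed immediately; its space is reclaimed during the next filesystem cleaning cycle.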
After checking a few KBs, I found these scenarios and common steps to check over a DD PuTTY session:

1) Scenario: huge snapshots found.
# df (run as sysadmin) — check the File Distribution section, and also check whether there are snapshots on the system.
# snapshot list mtree * — shows the list of snapshots in a particular MTree.
If snapshots are present, run and check the following for all MTrees (e.g., /data/col1/backup):
# mtree list
# snapshot summary for mtree <mtree name full path>
e.g., # snapshot summary for mtree /data/col1/xxxxxxxxxxxyyyy
# replication show config
# fi st — check "Cleaning started at" as well as the phase.
# fi clean watch
Inform the customer to expire snapshots that are no longer needed, from the GUI or from the command line, and to retain the snapshots that are in use. If needed, take help from the specialist team(s). Expired snapshots will be cleared in the next cleaning cycle. Once done, check the output of # df and # fi st.
Note: if the customer is on DDOS 5.4.0.8, there are a couple of known issues: 1) a cleaning bug and 2) replication snapshots that do not expire. Please plan an upgrade to DDOS 5.4.2.2.

2) Scenario: cleaning is in progress; alert with 90% threshold.
Investigate the common factors that might affect reclaiming space:
• Replication lag
• Stale snapshots
• An abnormally high small-files count
• Compression rate changes
Pre-Comp = data written before compression
Post-Comp = storage used after compression
Global-Comp Factor = Pre-Comp / (size after de-dupe)
Local-Comp Factor = (size after de-dupe) / Post-Comp
Total-Comp Factor = Pre-Comp / Post-Comp
Reduction % = ((Pre-Comp - Post-Comp) / Pre-Comp) * 100
(A worked example of these factors follows at the end of this post.)
Commands to check (run as sysadmin, e.g., against /data/col1/backup):
# df
# filesys clean show schedule
# filesys clean status
# filesys clean watch
# nfs show active
If filesys clean status shows that cleaning is already running and in its last phases, keep the case under observation until the cleaning completes, then check the available space (# filesys show space).
How To Determine Compression Rates: https://support.emc.com/kb/306103

3) Scenario: expired snapshots not getting removed even after filesystem cleaning; alert with 90% threshold.
A possible cause is snapshots with a soft lock. Check these steps to isolate it:
# df (run as sysadmin) — check the File Distribution section, and also check whether there are snapshots on the system.
# snapshot list mtree * — shows the list of snapshots in a particular MTree.
If snapshots are present, run and check the following for all MTrees (e.g., /data/col1/backup):
# mtree list
# replication show config
Run the command below to list all the snapshots of the MTree on both source and destination:
# replication status <mtree>
# snapshot list mtree /data/col1/<abc>
# snapshot summary for mtree <mtree name full path>
e.g., # snapshot list mtree /data/col1/avamar-48645863863
If "dm_rmsnapshot" is found in the ddfs.info* files, then we need to perform the steps below, which should release the soft locks so that the snapshot is removed during the next cleaning:
1) Break replication on both source and destination (# replication break).
2) Resync replication on the source (# replication sync).
Run the global cleaning: # filesys clean start
• Once the GC finishes, please make sure the snapshots are deleted: # snapshot list mtree </data/col1/mtree-name>

How to collect sfs_dump output for specialist-team log analysis:
Log in to the DDR as the "sysadmin" user and execute the following commands:
1. # system show serial
2. # priv set se
3. Collect the sfs_dump: # se sfs_dump -h
4. This will take time and dump a lot of data to the screen, but it will be captured to the log file.
5. Once it finishes, disable the PuTTY session log.
6. Run the following command to get back to admin mode: # priv set admin
7. Close the PuTTY session.
8. Compress the sfsdump1_<hostname>.out files.
9. Upload them via the support portal to this SR, or to a temp FTP if needed.

These are my initial observations. Kindly share other scenarios and steps to include in this document for everyone's reference. Thank you for reading; your comments are welcome.
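P.S. To make the compression formulas in scenario 2 concrete, here is a small worked example in Python (the sizes are hypothetical, chosen only to give round numbers):

pre_comp = 100.0       # GiB written by the backup application
after_dedupe = 20.0    # GiB remaining after global compression (de-dupe)
post_comp = 10.0       # GiB actually stored after local compression

global_comp = pre_comp / after_dedupe                # 5.0x
local_comp = after_dedupe / post_comp                # 2.0x
total_comp = pre_comp / post_comp                    # 10.0x
reduction = (pre_comp - post_comp) / pre_comp * 100  # 90.0%

print(global_comp, local_comp, total_comp, reduction)

Note that Total-Comp is the product of Global-Comp and Local-Comp (5.0 x 2.0 = 10.0), which makes a quick sanity check when reading the compression output on a DDR.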
I read on Google, as well as in one of the DD trainings, and am interested in sharing these details of these protocols. Kindly post your comments, suggestions, white papers, and more details:

Data Domain Implementation:
For successfully integrating the Data Domain system into a backup environment:
1. Perform the basic installation and configuration tasks. Make certain that all installations have occurred, including installation of all application software as necessary throughout the environment, and installation and initial configuration of the Data Domain system for proper network access by client systems and backup servers.
2. Configure the Data Domain system with the correct networking, and create a backup user (performed by Implementation Engineers).
3. Configure the backup server with the necessary credentials or other settings as necessary, and create a share on the Data Domain system (performed by Implementation Engineers).

Verifying Data Domain functionality with the backup system:
1. Perform administrative tasks in the backup system's administrative console in order to create a backup job.
2. Run and monitor the backup job in the backup system's administrative console.
3. Perform backup operations and validate recovery for a client system.
4. Validate and analyze the backups within the Data Domain System Manager, where you can view statistics and reports.

DD Boost Implementation:
1. Prepare Data Domain systems (A and B) for DD Boost: enable DD Boost and set the user; create the storage unit and CIFS share.
2. Backup application console: configure the Data Domain systems as devices for Boost; configure system A for backup and system B for the backup clone.
3. Backup application console: configure backup/clone operations.
4. Console: monitor activity for backup/clone.
5. Verify files on Data Domain systems A and B.
6. Backup application console: restore files from the backup clone.

Implement VTL:
1. Install and configure HBA cards.
2. Configure FC zoning.
3. Configure VTL on the Data Domain system.
4. From the backup application's administrative management, perform the device discovery and configuration.
5. From the backup application's administrative management, run and monitor backup jobs and validate the backup.

Implement CIFS:
1. Install the backup software management and media (storage node) server and client components.
2. On the Data Domain system, configure networking and CIFS parameters, and create a backup user and a CIFS share:
o Create a new user and choose Admin from the Privilege drop-down menu.
o Take note of the new user's credentials; they will be added to the service's logon credentials on the NetWorker server.
o Create a CIFS share on the Data Domain system, to be used in NetWorker backup administration. This directory will be the target for backups.
o Map a network drive to the CIFS share created.
3. Perform additional configuration on the backup application, a.k.a. NetWorker (configure a CIFS AFTD NetWorker device on the NetWorker server or storage node).
4. Verify CIFS access from the backup system.
5. From the backup application, create a backup job, run and monitor it, then validate that backup job.
6. From the backup application, recover the backup and validate it.
7. On the Data Domain system, analyze system statistics.

Implement NFS:
1. Install the backup software management and media (storage node) server and client components.
2. On the Data Domain system, configure networking and NFS parameters.
3. Perform additional configuration on the backup application:
o Create the mount points.
o Mount the Data Domain directory.
o Modify /etc/fstab (a sample entry is sketched below).
o Create the backup directory.
4. Validate functionality for NFS by creating and copying files from the server to the Data Domain backup directory.
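As an illustration of the /etc/fstab step in the NFS section above (a sketch only: the hostname dd01, the MTree path, the mount point, and the mount options are my assumptions, so please check the DDOS and backup application documentation for the options recommended for your release), an entry for a Data Domain NFS export might look like:

dd01:/data/col1/backup  /mnt/dd_backup  nfs  hard,intr,nfsvers=3,tcp,rsize=1048576,wsize=1048576  0 0

After adding the entry, mount it and test with a file copy:

# mount /mnt/dd_backup
# cp /tmp/testfile /mnt/dd_backup/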
Thank you for the reply. I am facing some navigation issues on this portal and am unable to find the correct community options.
How do we measure disk load at peak activity for a RAID 5 configuration, where an application generates 8400 small random I/Os at peak workload with a read/write ratio of 2:1? I need the details, explanation, or logic to be followed.
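While waiting for replies, here is my own attempt at the logic, expressed in Python (a sketch; it assumes the classic RAID 5 small-write penalty of 4, where each host write becomes read old data, read old parity, write new data, write new parity):

total_iops = 8400            # application IOPS at peak
reads = total_iops * 2 / 3   # read/write ratio 2:1 -> 5600 reads
writes = total_iops * 1 / 3  # -> 2800 writes

# RAID 5: reads map 1:1 to disk I/Os; each small random write costs 4.
disk_load = reads + 4 * writes
print(disk_load)             # 5600 + 11200 = 16800 IOPS at the disks

So under these assumptions the disks must service 16,800 IOPS at peak; dividing by the per-disk IOPS rating would then give the drive count. Is this the right approach?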
Hi all,

1) I am stuck with a query related to disk drives. The question below is from one of the EMC links/external certification materials, and I need the calculations or logic, please:
An organization plans to deploy a new application in their environment. The new application requires 3 TB of storage space. During peak workloads, the application is expected to generate 2450 IOPS with a typical I/O size of 2 KB. The capacity of each available disk drive is 500 GB. The maximum number of IOPS a drive can perform at 70 percent utilization is 90 IOPS. What is the minimum number of disk drives needed to meet the application's capacity and performance requirements, given a RAID 0 configuration?
The answer: 28

2) Similarly, another scenario:
An organization plans to deploy a new application in their environment. The new application requires 4 TB of storage space. During peak workloads, the application is expected to generate 4900 IOPS with a typical I/O size of 8 KB. The capacity of each available disk drive is 500 GB. The maximum number of IOPS a drive can perform at 70 percent utilization is 110 IOPS. What is the minimum number of disk drives needed to meet the application's capacity and performance requirements, given a RAID 0 configuration?
The answer: 45
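Partially working it out myself while waiting for replies, here is the logic I would apply, in Python (a sketch; it assumes RAID 0 has no write penalty so host IOPS map 1:1 onto the disks, that 1 TB is taken as 1000 GB, and that the answer is the larger of the capacity-driven and performance-driven drive counts):

import math

def min_disks(capacity_tb, disk_gb, peak_iops, iops_per_disk):
    # Drives needed for capacity alone (1 TB taken as 1000 GB here).
    for_capacity = math.ceil(capacity_tb * 1000 / disk_gb)
    # Drives needed for performance alone (RAID 0: no write penalty).
    for_performance = math.ceil(peak_iops / iops_per_disk)
    return max(for_capacity, for_performance)

print(min_disks(3, 500, 2450, 90))   # capacity 6 vs performance 28 -> 28
print(min_disks(4, 500, 4900, 110))  # capacity 8 vs performance 45 -> 45

Both results match the given answers (28 and 45), so in each case performance, not capacity, appears to be the binding constraint. Please correct me if these assumptions are not the intended ones.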