November 1st, 2016 22:00

Unable to log in to Unisphere

Dear All,

I was planning to do a health check with the customer. When I try to log in to Unisphere I get the error below. The system is a VNX 5300 Unified with two Data Movers and one Control Station (CS0), and vCenter is unable to scan all of the datastores. I did a proper shutdown and power-up as per the EMC procedure, and connecting directly to the SP gives the same error.

[Attachment: Error.jpg — screenshot of the Unisphere login error]

The output below shows that server_3 is faulted:

[nasadmin@AYASSRV20 ~]$ nas_server -list

id      type  acl  slot groupID  state  name

1        4    0     2              2    server_2.faulted.server_3

2        1    0     3              0    server_2

[nasadmin@AYASSRV20 ~]$ server_sysstat

Error 2100: usage: server_sysstat { <movername> | ALL } [ -blockmap ]
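(The usage error above indicates server_sysstat needs a Data Mover name or ALL; the intended invocation here would presumably be something like the following, with server_2 as the surviving mover:)

[nasadmin@AYASSRV20 ~]$ server_sysstat server_2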

[nasadmin@AYASSRV20 ~]$ nas_checkup

Check Version:  7.0.14.0

Check Command:  /nas/bin/nas_checkup

Check Log    :  /nas/log/checkup-run.161101-171142.log

-------------------------------------Checks-------------------------------------

Control Station: Checking statistics groups database....................... Pass

Control Station: Checking if file system usage is under limit.............. Pass

Control Station: Checking if NAS Storage API is installed correctly........ Pass

Control Station: Checking if NAS Storage APIs match........................  N/A

Control Station: Checking if NBS clients are started....................... Pass

Control Station: Checking if NBS configuration exists...................... Pass

Control Station: Checking if NBS devices are accessible.................... Pass

Control Station: Checking if NBS service is started........................ Pass

Control Station: Checking if PXE service is stopped........................ Pass

Control Station: Checking if standby is up.................................  N/A

Control Station: Checking integrity of NASDB............................... Warn

Control Station: Checking if primary is active............................. Pass

Control Station: Checking all callhome files delivered..................... Warn

Control Station: Checking resolv conf...................................... Warn

Control Station: Checking if NAS partitions are mounted.................... Pass

Control Station: Checking ipmi connection.................................. Pass

Control Station: Checking nas site eventlog configuration.................. Pass

Control Station: Checking nas sys mcd configuration........................ Pass

Control Station: Checking nas sys eventlog configuration................... Pass

Control Station: Checking logical volume status............................ Pass

Control Station: Checking valid nasdb backup files......................... Pass

Control Station: Checking root disk reserved region........................ Pass

Control Station: Checking if RDF configuration is valid....................  N/A

Control Station: Checking if fstab contains duplicate entries.............. Pass

Control Station: Checking if sufficient swap memory available.............. Pass

Control Station: Checking for IP and subnet configuration.................. Fail

Control Station: Checking auto transfer status............................. Warn

Control Station: Checking for invalid entries in etc hosts................. Pass

Control Station: Checking the hard drive in the control station............ Pass

Control Station: Checking if Symapi data is present........................ Fail

Control Station: Checking if Symapi is synced with Storage System.......... Pass

Blades         : Checking boot files....................................... Pass

Blades         : Checking if primary is active.............................    ?

Blades         : Checking if root filesystem is too large..................    ?

Blades         : Checking if root filesystem has enough free space.........    ?

Blades         : Checking if using standard DART image.....................    ?

Blades         : Checking network connectivity............................. Fail

Blades         : Checking status........................................... Warn

Blades         : Checking dart release compatibility.......................    ?

Blades         : Checking dart version compatibility.......................    ?

Blades         : Checking server name......................................    ?

Blades         : Checking unique id........................................    ?

Blades         : Checking CIFS file server configuration................... Pass

Blades         : Checking domain controller connectivity and configuration. Pass

Blades         : Checking DNS connectivity and configuration............... Pass

Blades         : Checking connectivity to WINS servers..................... Pass

Blades         : Checking I18N mode and unicode translation tables......... Pass

Blades         : Checking connectivity to NTP servers...................... Pass

Blades         : Checking connectivity to NIS servers...................... Pass

Blades         : Checking virus checker server configuration............... Pass

Blades         : Checking if workpart is OK................................ Pass

Blades         : Checking if free full dump is available................... Pass

Blades         : Checking if each primary Blade has standby................ Info

Blades         : Checking if Blade parameters use EMC default values....... Pass

Blades         : Checking VDM root filesystem space usage..................  N/A

Blades         : Checking if file system usage is under limit.............. Pass

Blades         : Checking for excessive memory utilization.................  N/A

Blades         : Checking for REPV2 component configuration................ Fail

Storage System : Checking disk emulation type.............................. Pass

Storage System : Checking disk high availability access.................... Pass

Storage System : Checking disks read cache enabled......................... Pass

Storage System : Checking disks and storage processors write cache enabled. Fail

Storage System : Checking if FLARE is committed............................    ?

Storage System : Checking if FLARE is supported............................    ?

Storage System : Checking array model......................................    ?

Storage System : Checking if microcode is supported........................  N/A

Storage System : Checking no disks or storage processors are failed over... Warn

Storage System : Checking that no disks or storage processors are faulted.. Fail

Storage System : Checking that no hot spares are in use....................

Regards

DineshJ

8.6K Posts

November 2nd, 2016 03:00

You seem to have more than one problem.

You can try troubleshooting the storage side by looking at the Data Mover log and the nas_storage output; that should tell you why the DM failed over.

If you are not comfortable with CLI troubleshooting, I would suggest opening a service request with EMC Customer Service.
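As a minimal starting point (assuming server_2 is the surviving Data Mover; the faulted blade will not respond), you could run:

server_log server_2

nas_storage -list

The Data Mover log should show the events around the time of the failover.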

11 Posts

November 2nd, 2016 04:00

Hi Dinesh,

Please run the commands below and verify the output:

nas_storage -c -a

df -h

/nas/sbin/navicli -h spa faults -list

/nas/sbin/navicli -h spb faults -list

Verify whether the server commands return proper output (if the server commands fail, I would recommend opening an SR with us):

server_date ALL

server_df ALL

server_export ALL

/nas/sbin/clariion_mgmt -info                        (Proxy-ARP may have stopped)

Also check if the domain files are present and up to date:

ll /nas/http/domain

/nas/sbin/navicli -h spa domain -list

-ramya

4 Posts

November 2nd, 2016 05:00

Dear Ramya,


Thank you for your response. I will execute the commands tomorrow and post an update.


Regards

Dinesh.j

4 Posts

November 2nd, 2016 23:00

Dear Team,

Please find the command output below.

[nasadmin@AYASSRV20 ~]$ sudo -i

Sorry, user nasadmin is not allowed to execute '/bin/bash' as root on AYASSRV20.

[nasadmin@AYASSRV20 ~]$ nas_storage -c -a

Discovering storage on AYASSRV20 (may take several minutes)

Error 5026: CKM00112500077 root_disk, Unknown, doesn't match any storage profile

Error 5017: storage health check failed

CKM00112500077  write cache disabled

CKM00112500077 SPA is faulted/removed

CKM00112500077 SPB is failed over

CKM00112500077 root_disk, no storage API data available

CKM00112500077 root_ldisk, no storage API data available

CKM00112500077 d3, no storage API data available

CKM00112500077 d4, no storage API data available

CKM00112500077 d5, no storage API data available

CKM00112500077 d6, no storage API data available

CKM00112500077 d9 is failed over

[nasadmin@AYASSRV20 ~]$ df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/hda3             2.0G  1.2G  669M  65% /

tmpfs                1012M     0 1012M   0% /dev/shm

/dev/hda1             259M   16M  230M   7% /boot

/dev/mapper/emc_vg_pri_ide-emc_lv_home

                      591M   17M  545M   3% /home

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backup

                      827M   62M  724M   8% /celerra/backup

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_backendmonitor

                      7.8M  1.2M  6.3M  16% /celerra/backendmonitor

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_audit

                      117M   87M   24M  79% /celerra/audit

/dev/nde1             1.7G  884M  729M  55% /nbsnas

/dev/hda5             2.0G  788M  1.1G  42% /nas

/dev/nda1             134M   32M  102M  24% /nbsnas/dos

/dev/mapper/emc_vg_lun_0-emc_lv_nbsnas_jserver

                      1.4G   61M  1.3G   5% /nbsnas/jserver

/dev/mapper/emc_vg_pri_ide-emc_lv_nas_jserver

                      1.4G   61M  1.3G   5% /nas/jserver

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var

                       97M  5.6M   87M   7% /nbsnas/var

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_dump

                      7.9G  147M  7.4G   2% /nbsnas/var/dump

/dev/mapper/emc_vg_lun_0-emc_lv_nas_var_auditing

                      117M   12M   99M  11% /nbsnas/var/auditing

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_backup

                      827M   63M  723M   8% /nbsnas/var/backup

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_emcsupport

                      552M  341M  184M  65% /nbsnas/var/emcsupport

/dev/mapper/emc_vg_lun_5-emc_lv_nas_var_log

                      206M  5.8M  189M   3% /nbsnas/var/log

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_commoncache

                      496M   34M  437M   8% /celerra/commoncache

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_ccc

                      552M   17M  507M   4% /celerra/ccc

/dev/mapper/emc_vg_pri_ide-emc_lv_celerra_wbem

                     1008M  397M  561M  42% /celerra/wbem

[nasadmin@AYASSRV20 ~]$ server_date ALL

server_2.faulted.server_3 :

Error 4000: server_2.faulted.server_3 : unable to connect to host

server_2 : Thu Nov  3 09:33:40 GST 2016

[nasadmin@AYASSRV20 ~]$ server_df ALL

server_2.faulted.server_3 :

Error 4000: server_2.faulted.server_3 : unable to connect to host

server_2 :

Filesystem          kbytes         used        avail capacity Mounted on

FS1            16909464560     28381104  16881083456    0%    /FS1

FS2            16909464560         2736  16909461824    0%    /FS2

root_fs_common       15368         5280        10088   34%    /.etc_common

root_fs_2           258128         9064       249064    4%    /

[nasadmin@AYASSRV20 ~]$ /nas/sbin/clarrion_mgmt -info

-bash: /nas/sbin/clarrion_mgmt: No such file or directory

[nasadmin@AYASSRV20 ~]$ /nas/sbin/clariion_mgmt -info

Public IP address for SPA: 10.1.213.21

Public IP address for SPB: 10.1.213.22

Start on boot            : yes

Current implementation   : Proxy-ARP

Status                   :

Error 8: Terminated by user

WARNING: This error may have caused the network settings to be in an inconsistent

WARNING: state. This utility must be run again with the '-retry' flag to

WARNING: re-attempt this operation.

[nasadmin@AYASSRV20 ~]$ /nas/sbin/clariion_mgmt -info -retry

Public IP address for SPA: 10.1.213.21

Public IP address for SPB: 10.1.213.22

Start on boot            : yes

Current implementation   : Proxy-ARP

Status                   :

[nasadmin@AYASSRV20 ~]$ server_export ALL

server_2.faulted.server_3 :

export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

server_2 :

export "/" anon=0 access=128.221.252.100:128.221.253.100:128.221.252.101:128.221.253.101

share "VC1" "/FS1" umask=022 maxusr=4294967295 netbios=AYASNAS01

share "VC2" "/FS2" umask=022 maxusr=4294967295 netbios=AYASNAS01

share "YAS-SHARE" "/FS1" umask=022 maxusr=4294967295 netbios=AYASNAS01

I was unable to execute the commands below; I haven't installed NaviCLI on the management system. I tried to get NaviCLI but was unable to find the correct path to download it.

/nas/sbin/navicli -h spa faults -list

/nas/sbin/navicli -h spb faults -list

/nas/sbin/navicli -h spa domain -list

11 Posts

November 3rd, 2016 05:00

Dinesh, please collect the SP collects and open an SR with EMC immediately (attach the logs) for better assistance.

/nas/tools/./.get_spcollect

Also, you can execute /nas/sbin/navicli from the Control Station without installing any NaviCLI tool on the management system.
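For example, straight from the Control Station prompt you used earlier:

[nasadmin@AYASSRV20 ~]$ /nas/sbin/navicli -h spa faults -list

[nasadmin@AYASSRV20 ~]$ /nas/sbin/navicli -h spb faults -list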

-ramya
