
May 6th, 2016 06:00

Too many mount points?

I have been trying to create a new file system on our VNX5800 and it keeps telling me that we have exceeded the number of mount points for server_2. I've done a bit of digging around and, from the output of server_mount server_2 | wc -l, there are 2082 mount points. However, 371 of them are showing as unmounted (these were mounts created by checkpoints which we have since unmounted).

[nasadmin@ c0 ~]$ server_mount server_2 | wc -l

2082

[nasadmin@ c0 ~]$ server_mount server_2 | grep unmounted | wc -l

371

So my question is: when you do a server_umount on a file system, does it not actually free up a mount point? By my simple logic, if there are 2082 total mounts and 371 of them are listed as unmounted, we are well below the maximum of 2048 mounts per Data Mover, so I am failing to see why I cannot create a new file system/mount point.
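For what it's worth (just a sketch, run from the Control Station like the commands above), counting only the entries that are not flagged as unmounted:

server_mount server_2 | grep -v -c unmounted

gives roughly 1711 (2082 minus 371, give or take a header line), which is comfortably under 2048 - hence my confusion.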

Am I missing something obvious here? Any help would be appreciated.

43 Posts

May 9th, 2016 02:00

It sounds as if these mount points have been created by some sort of replication or NDMP SnapSure backups. However:

1) By default, deleting (unmounting) the mount points from the Unisphere GUI will automatically use the -perm option, which permanently deletes the mount points and decreases the counter for the total mounts (in this case you should not have any problem when you use the GUI, since you would not have reached the maximum of 2048 mounts).

2) By default, deleting (unmounting) the mount points using the CLI will use the -temp option, which keeps the mount point listed with an unmounted status (it will NOT decrease the counter for the total mounts). You can verify this with the server_mount server_2 command; see the sketch after this list.

3) As long as you have the original FS or checkpoints available, there is no harm in deleting/unmounting the mount points using the -perm option (if you want to mount any FS or checkpoint in the future you can do so by creating a new mount point).

4) Since you have reached the maximum number of file systems per Data Mover blade (2048), you will not be able to create a new FS on the same blade. You might be able to create it on a different Data Mover blade until you reach the maximum number of file systems per VNX (4096).
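To illustrate the -temp vs -perm difference (just a sketch - /my_fs is a hypothetical path, not one from your box):

server_umount server_2 -temp /my_fs    # still listed in server_mount as unmounted, still counts against 2048
server_umount server_2 -perm /my_fs    # entry disappears from server_mount and the mount slot is freed
server_mount server_2 | grep my_fs     # verify: only the -temp case should still show a line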

You can go back and re-run the umount commands, but this time include the -p (-perm) flag:

a) Back up the original FS mount status (in case you want to mount any FS with the old mountpoint name later):

server_mount server_2 > /original_mounts.txt

b) server_mount server_2 | grep -i unmounted | awk '{print $1}' | xargs -n 1 server_umount server_2 -perm

This will filter the file systems based on the unmounted status and umount each of them using the -perm flag (server_umount server_2 -perm FS_NAME).
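If you want to see what that would do before committing, a quick dry run (the same pipeline, with echo in front so it only prints the commands it would run):

server_mount server_2 | grep -i unmounted | awk '{print $1}' | xargs -n 1 echo server_umount server_2 -perm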

Or you can use Excel to manually filter them... good luck.

1 Message

May 6th, 2016 22:00

Hello Andrew,

Sounds as if you may have checkpoint schedules creating some number of checkpoints per file system that exceeds the total number of mounts per DM, which last I knew was 2048. The fact that server_mount still shows those file systems leads me to believe some trigger has "temp" unmounted them, leaving them linked in some way to the DM and the mountpoint.

It could be that those checkpoint SavVols have consumed too much space due to the amount of changed blocks all the checkpoints are writing there. If the SavVol maxes out in size, the oldest checkpoint for that file system will end up in the temp-unmounted state.

Not performing permanent umounts on file systems or checkpoints will cause a similar effect, i.e.:

server_umount server_2 [-p] /fsname

Where -p is the optional switch to permanently umount the file system. A file system unmounted with -p no longer shows in server_mount, and an additional housekeeping step I like to follow is deleting the mountpoint via server_mountpoint server_2 -d /fsname. I would also have deleted the shares, exports, related checkpoints and their schedules, as well as any IP replication sessions related to these file systems, since an active replication session can have 2 active hidden replication checkpoints, plus additional ones for VDMs and the replication session itself.

Because of the number of steps involved I usually use a recursive deprovisioning script.
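For a single file system the core of such a script boils down to something like this (a minimal sketch only - old_fs is a made-up name, and it assumes the shares, exports, checkpoints and replication sessions for that file system are already gone; check each command against your DART release first):

FS=old_fs
server_umount server_2 -perm /$FS     # permanent umount - frees the mount slot
server_mountpoint server_2 -d /$FS    # remove the now-orphaned mountpoint
nas_fs -delete $FS                    # finally delete the file system itself if it is no longer needed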

To simplify, I would reduce the number of checkpoints, taking the -perm option into account, and modify the nas_ckpt_schedule accordingly.

Hope this helps

-mck

8.6K Posts

May 7th, 2016 05:00

you could also ask your EMC pre-sales to raise an RPQ for you to increase the number of mounts

8.6K Posts

May 7th, 2016 06:00

yes, temp-unmounted checkpoints count towards the 2048 limit

another option is to create the checkpoints manually as not mounted, with your own scheduler / script

you can still mount them when you need them
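rough sketch of mounting one on demand - ckpt_daily_01 is a made-up name, adjust to your own naming:

server_mountpoint server_2 -create /ckpt_daily_01    # create a mountpoint only when you actually need the ckpt
server_mount server_2 ckpt_daily_01 /ckpt_daily_01   # mount the existing checkpoint
server_umount server_2 -perm /ckpt_daily_01          # perm umount again afterwards so it does not use up a slot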

43 Posts

May 8th, 2016 02:00

Hello,

Please check the mounted file systems that you have, including the checkpoints... umount the unwanted file systems/checkpoints using the -perm option:

server_umount { <movername> | ALL } -perm { <fs_name> | <pathname> }

Then delete the unwanted mountpoints:

server_mountpoint { <movername> | ALL } -list

server_mountpoint { <movername> | ALL } -delete <pathname>
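For example (old_ckpt is just an illustrative file system name):

server_umount server_2 -perm /old_ckpt
server_mountpoint server_2 -list
server_mountpoint server_2 -delete /old_ckpt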

32 Posts

May 9th, 2016 00:00

Thanks for the bits of information.

From what you've said, it sounds like the problem I've seen is a combination of not unmounting them with the -p flag, so they are only temporarily unmounted, and also trying to use Unisphere to create the new filesystem/mountpoint. So I guess I can go re-run the unmount commands but include the -p flag, and then follow up with deleting the actual mountpoint.

One thing you have said, Rainer_EMC, has got me thinking. You mentioned creating the new filesystem manually and then still being able to mount it? Are you saying that if I do it via CLI, so I have to create the filesystem and then mount it as separate commands, it ignores the 2048 limit?

8.6K Posts

May 9th, 2016 09:00

no - the 2048 max mounts limit is just the same for the CLI
