NetWorker: why does mminfo -m not show the correct device capacity?
Hi!
I just had to recreate the file systems and volumes recently, 3.6 TB each.
But mminfo -m does not reflect that:
root@brn01> mminfo -m
state volume written (%) expires read mounts capacity
brdtcen01.003 566 MB 100% 11/12/15 0 KB 0 0 KB
brdtcen01.004 21 GB 100% 11/12/15 0 KB 2 0 KB
root@brn01> df -h | grep store
/dev/dsk/c3t600A0B80002FCCED0000081D54633558d0s0 3.6T 45G 3.6T 2% /nsrstore0
/dev/dsk/c3t600A0B80002FCCED0000081954633494d0s0 3.6T 21G 3.6T 1% /nsrstore1
root@brn01> nsradmin
NetWorker administration program.
Use the "help" command for help, "visual" for full-screen mode.
nsradmin> show name ; device default capacity
nsradmin> option hidden
Hidden display option turned on
Display options:
Dynamic: Off;
Hidden: On;
Raw I18N: Off;
Resource ID: Off;
Regexp: Off;
nsradmin> p type : nsr device
name: /nsrstore0;
device default capacity: 3672 Gb;
name: /nsrstore1;
device default capacity: 3672 Gb;
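As a sanity check, the 3672 GB that nsradmin reports and the 3.6T that df reports are the same size, just in different units (a quick sketch, assuming df -h rounds binary units to one decimal, as Solaris does):

```shell
# 3672 GiB expressed in TiB, rounded to one decimal place as df -h does
awk 'BEGIN { printf "%.1fT\n", 3672 / 1024 }'
# prints 3.6T - matching the df output above
```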
root@brn01> uname -a
SunOS brdtcen01 5.10 Generic_147440-12 sun4v sparc SUNW,Netra-T5220
root@brn01> pkginfo -l SMAWnwsrv
PKGINST: SMAWnwsrv
NAME: NetWorker Save/Recover - Server
CATEGORY: application
ARCH: sparc
VERSION: 7.4A00SP7
BASEDIR: /opt
VENDOR: Fujitsu Technology Solutions GmbH
DESC: NetWorker Save and Recover Server for Solaris
PSTAMP: elba20100713033214
INSTDATE: Jan 22 2013 15:35
EMAIL: http://ts.fujitsu.com/services
STATUS: completely installed
FILES: 403 installed pathnames
5 shared pathnames
5 linked files
16 directories
168 executables
369783 blocks used (approx)
What can be wrong?
Thanks
BR
Fernando Silva
FSSilva1
9 Posts
0
November 27th, 2014 08:00
Hi!
No, it is not a version issue.
The point is that "volume default capacity" has to be filled in before creating the volume.
After filling in the values and recreating the volume, I get the volume capacity usage in the output of mminfo -m.
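For anyone landing here later, the sequence would be roughly as follows. This is a sketch only: the attribute name is the one shown elsewhere in this thread, the device path, pool name, and volume name are examples from this setup, and nsrmm options can vary between releases.

```shell
# Set the default capacity on the device resource first (sketch;
# requires the NetWorker server tools - do not run blindly).
nsradmin <<'EOF'
. type: NSR device; name: /nsrstore0
update volume default capacity: 3672 GB
EOF

# Then (re)label the volume so the new capacity takes effect.
# WARNING: labeling destroys existing data on the volume.
nsrmm -l -b Default -f /nsrstore0 brdtcen01.003
```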
Thanks for your help.
BR
Fernando Silva
ble1
2 Intern
14.3K Posts
0
November 13th, 2014 10:00
You are running the FSC (Fujitsu) build of NetWorker, which has its own code changes, so you must ask them. But chances are it behaves the same as standard NetWorker, where this is known to be the case. It is a more or less cosmetic issue which you can safely ignore. I am not sure whether newer, supported versions like NetWorker 8+ behave the same, but this was fairly common with NetWorker 7.
bingo.1
2.4K Posts
1
November 13th, 2014 11:00
What can be wrong? - I hope nothing ;-)
The story becomes very complicated if multiple processes use the same volume. Not only NetWorker but also another application, or the OS itself, can potentially write to the device - even at the same time. And even if you dedicate a disk to NetWorker, an installer (especially on Windows) might use it at least temporarily because it has the largest amount of free disk space. This is just one example.
I assume that you have set the 'volume default capacity', because by default no value is assigned. And it does not need one: for the reason I just gave, this value can change, so there is no true reference against which the %used value could be calculated. By definition, an Advanced File Type Device (AFTD) volume ...
- will not honor the volume default capacity,
- will always report a '%used' value of 100%, and
- will never get the status 'full' assigned ... even if the whole disk has been filled.
This was different with the predecessor, the file type device (FTD), some years ago. Here
- the admin could set a volume default capacity,
- the '%used' value was calculated against this reference, and
- when the default capacity was reached, the volume was marked 'full' and NW requested another piece of media.
Just out of curiosity, verify it yourself.
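To make the FTD-style calculation concrete, '%used' is simply written divided by the volume default capacity. A sketch; the 427 GB written / 5362 GB capacity figures are taken from an mminfo output further down in this thread:

```shell
# FTD-style %used: written / volume default capacity, rounded to a whole percent
written_gb=427      # mminfo 'written' column
capacity_gb=5362    # volume default capacity
awk -v w="$written_gb" -v c="$capacity_gb" \
    'BEGIN { printf "%d%%\n", (w / c) * 100 + 0.5 }'
# prints 8% - the value mminfo showed before the devices were recreated
```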
FSSilva1
9 Posts
0
November 14th, 2014 02:00
Hi!
I see what you mean, and it makes sense.
Still, what puzzles me is that formerly - before we had to rearrange the devices and reduce their size for reliability reasons - we were able to see the capacity in mminfo -m:
root@brn01> mminfo -m
state volume written (%) expires read mounts capacity
brdtcen01.001 427 GB 8% 10/28/15 0 KB 2 5362 GB
brdtcen01.002 288 GB 5% 10/23/15 0 KB 0 5362 GB
root@bren01> df -h /nsrstore0
Filesystem size used avail capacity Mounted on
/dev/dsk/c3t600A0B800076D02B0000057D4F72C3DFd0s0
5.0T 563G 4.4T 12% /nsrstore0
root@bren01> df -h /nsrstore1
Filesystem size used avail capacity Mounted on
/dev/dsk/c3t600A0B800076D02B0000057F4F72C3F3d0s0
5.0T 273G 4.7T 6% /nsrstore1
I am setting the capacity values manually in nsradmin.
Granted, I only did that after the volumes had been labeled.
But not only does it not affect the output of mminfo -m;
'volume current capacity' also reverts to "0 KB" or "" after a while.
nsradmin> p
name: /nsrstore1;
volume default capacity: 3672 GB;
volume current capacity: 3636 GB;
device default capacity: 3672 GB;
name: /nsrstore0;
volume default capacity: 3672 GB;
volume current capacity: ;
device default capacity: 3672 GB;
nsradmin> q
root@bren01> mminfo -m
state volume written (%) expires read mounts capacity
brdtcen01.003 171 GB 100% 11/14/15 0 KB 0 0 KB
brdtcen01.004 60 GB 100% 11/14/15 0 KB 3 0 KB
root@brdtcen01>
Any ideas?
Thanks
BR
Fernando Silva
bingo.1
2.4K Posts
0
November 14th, 2014 03:00
This must be a version/release-specific issue. I just verified with 8.1.1.9 and 8.2.0.2.
FSSilva1
9 Posts
0
November 27th, 2014 09:00
You are right!
"Label the media" is the correct way of putting it.
thanks
bingo.1
2.4K Posts
1
November 27th, 2014 09:00
Sorry - I forgot that: if you have set or changed the default capacity, you must (re)label the media to activate the new settings.