Manage Storage
Configure custom pools
Pools are the groups of drives on which you create storage resources. Configure pools based on the type of storage resource and usage that will be associated with the pool, such as file system storage optimized for database usage. The storage characteristics differ according to the following:
- Type of drive used to provide the storage.
- (dual-SP virtual deployments only) RAID level implemented for the storage.
NOTE: Before you create storage resources, you must configure at least one pool.
The following table lists the attributes for pools:
| Attribute | Description |
|---|---|
| ID | ID of the pool. |
| Name | Name of the pool. |
| Type | Pool type. Valid values are: |
| Description | Brief description of the pool. |
| Total space | Total storage capacity of the pool. |
| Current allocation | Amount of storage in the pool allocated to storage resources. |
| Preallocated space | Amount of storage space reserved in the pool by storage resources for future needs, which makes writes more efficient. The pool may be able to reclaim some of this space if total pool space is running low. This value equals the sum of the sizePreallocated values of each storage resource in the pool. |
| Remaining space | Amount of storage in the pool not allocated to storage resources. |
| Subscription | For thin provisioning, the total storage space subscribed to the pool. All pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created. For thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, pools can be overprovisioned to support more storage capacity than they actually possess. |
| Subscription percent | For thin provisioning, the percentage of the total space in the pool that is subscribed storage space. |
| Alert threshold | Threshold at which the system sends an alert when hosts have consumed a specific percentage of the subscription space. Value range is 50 to 85. |
| Drives | List of the types of drives on the system, including the number of drives of each type, in the pool. If FAST VP is installed, you can mix different types of drives to make a tiered pool. However, SAS Flash 4 drives must be used in a homogeneous pool. |
| Number of drives | Total number of drives in the pool. |
| Number of unused drives | Number of drives in the pool that are not being used. |
| RAID level (physical deployments only) | RAID level of the drives in the pool. |
| Stripe length (physical deployments only) | Number of drives the data is striped across. |
| Rebalancing | Indicates whether a pool rebalancing is in progress. Valid values are: |
| Rebalancing progress | Indicates the progress of the pool rebalancing as a percentage. |
| System defined pool | Indication of whether the system configured the pool automatically. Valid values are: |
| Health state | Health state of the pool. The health state code appears in parentheses. Valid values are: |
| Health details | Additional health information. See Appendix A, Reference, for health information details. |
| FAST Cache enabled (physical deployments only) | Indicates whether FAST Cache is enabled on the pool. Valid values are: |
| Non-base size used | Quantity of storage used for thin clone and snapshot data. |
| Auto-delete state | Indicates the state of an auto-delete operation on the pool. Valid values are: |
| Auto-delete paused | Indicates whether an auto-delete operation is paused. Valid values are: |
| Auto-delete pool full threshold enabled | Indicates whether the system will check the pool full high water mark for auto-delete. Valid values are: |
| Auto-delete pool full high water mark | The pool full high water mark on the pool. |
| Auto-delete pool full low water mark | The pool full low water mark on the pool. |
| Auto-delete snapshot space used threshold enabled | Indicates whether the system will check the snapshot space used high water mark for auto-delete. Valid values are: |
| Auto-delete snapshot space used high water mark | High water mark for snapshot space used on the pool. |
| Auto-delete snapshot space used low water mark | Low water mark for snapshot space used on the pool. |
| Data Reduction space saved (physical deployments only) | Storage size saved on the pool by using data reduction. |
| Data Reduction percent (physical deployments only) | Storage percentage saved on the pool by using data reduction. |
| Data Reduction ratio (physical deployments only) | Ratio between data without data reduction and data after data reduction savings. |
| All flash pool | Indicates whether the pool contains only Flash drives. Valid values are: |
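As a worked illustration of how Subscription and Subscription percent relate (the figures are taken from the pool_2 sample output under View pools later in this section; the formula itself is inferred from the attribute definitions above):

Subscription percent = Subscription / Total space × 100
                     = 10995116277760 / 4947802324992 × 100
                     ≈ 222%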
Create pools
Create a dynamic or traditional pool:
- Both traditional pools and dynamic pools are supported in the CLI and REST API for Unity All-Flash models running OE version 4.2.x or later. The default pool type is dynamic.
- Traditional pools are supported in all Unity hybrid and virtual models. They are also supported in Unity All-Flash models running OE version 4.1.x or earlier.
Format
/stor/config/pool create [-async] -name <value> [-type {dynamic | traditional}] [-descr <value>] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value>} [-alertThreshold <value>] [-FASTCacheEnabled {yes|no}] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>]
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -name | Type a name for the pool. |
| -type | (Available only for systems that support dynamic pools) Specify the type of pool to create. Value is one of the following: dynamic, traditional. Default value is dynamic. |
| -descr | Type a brief description of the pool. |
| -storProfile (physical deployments only) | Type the IDs of the storage profiles, separated by commas, to apply to the pool, based on the type of storage resource that will use the pool and the intended usage of the pool. View storage profiles (physical deployments only) explains how to view the IDs of available storage profiles on the system. If this option is not specified, a default RAID configuration is selected for each particular drive type in the selected drive group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5). |
| -diskGroup (physical deployments only) | Type a comma-separated list of IDs of the drive groups to use in the pool. Specifying drive groups with different drive types causes the creation of a multi-tier pool. View drive groups explains how to view the IDs of the drive groups on the system. |
| -drivesNumber (physical deployments only) | Specify the drive numbers, separated by commas, from the selected drive groups to use in the pool. If this option is specified when -storProfile is not specified, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected drive group. |
| -disk (virtual deployments only) | Specify the list of drive IDs, separated by commas, to use in the pool. Specified drives must be reliable storage objects that do not require additional protection. |
| -alertThreshold | For thin provisioning, specify the threshold, as a percentage, at which the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 85%. |
| -FASTCacheEnabled (physical deployments only) | Specify whether to enable FAST Cache on the pool. Value is one of the following: yes, no. |
| -snapPoolFullThresholdEnabled | Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following: yes, no. |
| -snapPoolFullHWM | Specify the pool full high water mark for the pool. Valid values are 1-99. Default value is 95. |
| -snapPoolFullLWM | Specify the pool full low water mark for the pool. Valid values are 0-98. Default value is 85. |
| -snapSpaceUsedThresholdEnabled | Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following: yes, no. |
| -snapSpaceUsedHWM | Specify the snapshot space used high water mark to trigger auto-delete on the pool. Valid values are 1-99. Default value is 95. |
| -snapSpaceUsedLWM | Specify the snapshot space used low water mark to trigger auto-delete on the pool. Valid values are 0-98. Default value is 20. |

NOTE: Use the Change disk settings (virtual deployments only) command to change the assigned tiers for specific drives.
Example 1 (physical deployments only)
The following command creates a dynamic pool. This example uses storage profiles profile_1 and profile_2, six drives from drive group dg_2, and ten drives from drive group dg_28. The configured pool receives ID pool_2.
NOTE: Before using the /stor/config/pool create command, use the /stor/config/profile show command to display the dynamic pool profiles and the /stor/config/dg show command to display the drive groups.
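The command line for this example is missing from this copy of the document. A plausible invocation, consistent with the profiles, drive groups, and drive counts described above (the pool name and description are illustrative), would be:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "dynamic pool" -diskGroup dg_2,dg_28 -drivesNumber 6,10 -storProfile profile_1,profile_2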
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_2
Operation completed successfully.
Example 2 (physical deployments only)
The following command creates a traditional pool in models that support dynamic pools. This example uses storage profiles tprofile_1 and tprofile_2, five drives from drive group dg_3, and nine drives from drive group dg_28. The configured pool receives ID pool_6.
NOTE: Before using the /stor/config/pool create command, use the /stor/config/profile -traditional show command to display the traditional pool profiles (which start with "t") and the /stor/config/dg show command to display the drive groups.
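The command line for this example is also missing. A plausible invocation, consistent with the description above (the pool name and description are illustrative; -type traditional selects the traditional pool type), would be:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "traditional pool" -diskGroup dg_3,dg_28 -drivesNumber 5,9 -storProfile tprofile_1,tprofile_2 -type traditional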
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_6
Operation completed successfully.
Example 3 (physical deployments only)
The following command creates a traditional pool in models that do not support dynamic pools. This example uses storage profiles profile_19 and profile_20, five drives from drive group dg_15, and nine drives from drive group dg_16. The configured pool receives ID pool_5.
NOTE: Before using the /stor/config/pool create command, use the /stor/config/profile show command to display the traditional pool profiles and the /stor/config/dg show command to display the drive groups.
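The command line for this example is missing as well. A plausible invocation, consistent with the description above (the pool name and description are illustrative; no -type qualifier is needed on models that do not support dynamic pools), would be:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name MyPool -descr "performance pool" -diskGroup dg_15,dg_16 -drivesNumber 5,9 -storProfile profile_19,profile_20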
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_5
Operation completed successfully.
Example 4 (virtual deployments only)
The following command creates a traditional pool with two virtual disks, vdisk_0 and vdisk_2 in the Extreme Performance tier. The configured pool receives ID pool_4.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool create -name vPool -descr "my virtual pool" -disk vdisk_0,vdisk_2
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_4
Operation completed successfully.
Change pool settings
Change the subscription alert threshold, FAST Cache, and snapshot threshold settings for a pool.
Format
/stor/config/pool {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-alertThreshold <value>] [-FASTCacheEnabled {yes|no}] [-snapPoolFullThresholdEnabled {yes|no}] [-snapPoolFullHWM <value>] [-snapPoolFullLWM <value>] [-snapSpaceUsedThresholdEnabled {yes|no}] [-snapSpaceUsedHWM <value>] [-snapSpaceUsedLWM <value>] [-snapAutoDeletePaused no]
Object qualifiers
| Qualifier | Description |
|---|---|
| -id | Type the ID of the pool to change. |
| -name | Type the name of the pool to change. |
Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -name | Type a name for the pool. |
| -descr | Type a brief description of the pool. |
| -alertThreshold | For thin provisioning, specify the threshold, as a percentage, at which the system will alert on the amount of subscription space used. When hosts consume the specified percentage of subscription space, the system sends an alert. Value range is 50% to 84%. |
| -FASTCacheEnabled (physical deployments only) | Specify whether to enable FAST Cache on the pool. Value is one of the following: yes, no. |
| -snapPoolFullThresholdEnabled | Indicate whether the system should check the pool full high water mark for auto-delete. Value is one of the following: yes, no. |
| -snapPoolFullHWM | Specify the pool full high water mark for the pool. Valid values are 1-99. Default value is 95. |
| -snapPoolFullLWM | Specify the pool full low water mark for the pool. Valid values are 0-98. Default value is 85. |
| -snapSpaceUsedThresholdEnabled | Indicate whether the system should check the snapshot space used high water mark for auto-delete. Value is one of the following: yes, no. |
| -snapSpaceUsedHWM | Specify the snapshot space used high water mark to trigger auto-delete on the pool. Valid values are 1-99. Default value is 95. |
| -snapSpaceUsedLWM | Specify the snapshot space used low water mark to trigger auto-delete on the pool. Valid values are 0-98. Default value is 20. |
| -snapAutoDeletePaused | Specify whether to pause snapshot auto-delete. Typing no resumes the auto-delete operation. |
Example
The following command sets the subscription alert threshold for pool pool_1 to 70% and disables FAST Cache:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 set -alertThreshold 70 -FASTCacheEnabled no
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_1
Operation completed successfully.
Add drives to pools
Add new drives to a pool to increase its storage capacity.
Format
/stor/config/pool {-id <value> | -name <value>} extend [-async] {-diskGroup <value> -drivesNumber <value> [-storProfile <value>] | -disk <value>}
Object qualifiers
| Qualifier | Description |
|---|---|
| -id | Type the ID of the pool to extend. |
| -name | Type the name of the pool to extend. |
Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -diskGroup (physical deployments only) | Type the IDs of the drive groups, separated by commas, to add to the pool. |
| -drivesNumber (physical deployments only) | Type the number of drives from the specified drive groups, separated by commas, to add to the pool. If this option is specified when -storProfile is not specified, the operation may fail when the -drivesNumber value does not match the default RAID configuration for each drive type in the selected drive group. |
| -storProfile (physical deployments only) | Type the IDs of the storage profiles, separated by commas, to apply to the pool. If this option is not specified, a default RAID configuration is selected for each particular drive type in the selected drive group: NL-SAS (RAID 6 with a stripe length of 8), SAS (RAID 5 with a stripe length of 5), or Flash (RAID 5 with a stripe length of 5). |
| -disk (virtual deployments only) | Specify the list of drives, separated by commas, to add to the pool. Specified drives must be reliable storage objects that do not require additional protection. |
Example 1 (physical deployments only)
The following command extends pool pool_1 with seven drives from drive group dg_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 extend -diskGroup dg_1 -drivesNumber 7 -storProfile profile_12
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = pool_1
Operation completed successfully.
Example 2 (virtual deployments only)
The following command extends pool pool_1 by adding two virtual disks, vdisk_1 and vdisk_5.
uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 extend -disk vdisk_1,vdisk_5
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection
ID = pool_1
Operation completed successfully.
View pools
View a list of pools. You can filter on the pool ID.
Format
/stor/config/pool [{-id <value> | -name <value>}] show
Object qualifiers
| Qualifier | Description |
|---|---|
| -id | Type the ID of a pool. |
| -name | Type the name of a pool. |
Example 1 (physical deployments only)
The following command shows details about all pools on a hybrid system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = pool_1
Name = Performance
Description = Multi-tier pool
Total space = 8663754342400 (7.8T)
Current allocation = 0
Preallocated space = 38310387712 (35.6G)
Remaining space = 8663754342400 (7.8T)
Subscription = 0
Subscription percent = 0%
Alert threshold = 70%
Drives = 5 x 600.0G SAS; 5 x 1.6T SAS Flash 3
Number of drives = 10
RAID level = 5
Stripe length = 5
Rebalancing = no
Rebalancing progress =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
FAST Cache enabled = no
Protection size used = 0
Non-base size used = 0
Auto-delete state = Idle
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = no
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Compression space saved = 0
Compression percent = 0%
Compression ratio = 1:1
Data Reduction space saved = 0
Data Reduction percent = 0%
Data Reduction ratio = 1:1
All flash pool = no
2: ID = pool_2
Name = Capacity
Description =
Total space = 4947802324992 (4.5T)
Current allocation = 3298534883328 (3T)
Preallocated space = 22194823168 (20.6G)
Remaining space = 1649267441664 (1.5T)
Subscription = 10995116277760 (10T)
Subscription percent = 222%
Alert threshold = 70%
Drives = 12 x 2TB NL-SAS
Number of drives = 12
Unused drives = 7
RAID level = 6
Stripe length = 6
Rebalancing = yes
Rebalancing progress = 46%
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
FAST Cache enabled = yes
Protection size used = 10995116238 (10G)
Non-base size used = 10995116238 (10G)
Auto-delete state = Running
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = yes
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Compression space saved = 4947802324992 (1.5T)
Compression percent = 23%
Compression ratio = 1.3:1
Data Reduction space saved = 4947802324992 (1.5T)
Data Reduction percent = 23%
Data Reduction ratio = 1.3:1
All flash pool = no
3: ID = pool_3
Name = Extreme Performance
Description =
Total space = 14177955479552 (12.8T)
Current allocation = 0
Preallocated space = 14177955479552 (12.8T)
Remaining space = 14177955479552 (12.8T)
Subscription = 0
Subscription percent = 0%
Alert threshold = 70%
Drives = 9 x 1.6T SAS Flash 3; 5 x 400.0G SAS Flash 2
Number of drives = 14
RAID level = 5
Stripe length = Mixed
Rebalancing = no
Rebalancing progress =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
FAST Cache enabled = no
Protection size used = 0
Non-base size used = 0
Auto-delete state = Idle
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = no
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Compression space saved = 0
Compression percent = 0%
Compression ratio = 1:1
Data Reduction space saved = 0
Data Reduction percent = 0%
Data Reduction ratio = 1:1
All flash pool = yes
Example 2
The following example shows all pools for a model that supports dynamic pools.
uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
Storage system address: 10.64.75.201
Storage system port: 443
HTTPS connection
1: ID = pool_3
Type = Traditional
Name = MyPool
Description = traditional pool
Total space = 14177955479552 (12.8T)
Current allocation = 0
Preallocated space = 38310387712 (35.6G)
Remaining space = 14177955479552 (12.8T)
Subscription = 0
Subscription percent = 0%
Alert threshold = 70%
Drives = 9 x 1.6T SAS Flash 3; 5 x 400.0G SAS Flash 2
Number of drives = 14
RAID level = 5
Stripe length = Mixed
Rebalancing = no
Rebalancing progress =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
FAST Cache enabled = no
Protection size used = 0
Non-base size used = 0
Auto-delete state = Idle
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = no
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Compression space saved = 0
Compression percent = 0%
Compression ratio = 1:1
Data Reduction space saved = 0
Data Reduction percent = 0%
Data Reduction ratio = 1:1
All flash pool = yes
2: ID = pool_4
Type = Dynamic
Name = dynamicPool
Description =
Total space = 1544309178368 (1.4T)
Current allocation = 0
Preallocated space = 38310387712 (35.6G)
Remaining space = 1544309178368 (1.4T)
Subscription = 0
Subscription percent = 0%
Alert threshold = 70%
Drives = 6 x 400.0G SAS Flash 2
Number of drives = 6
RAID level = 5
Stripe length = 5
Rebalancing = no
Rebalancing progress =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Protection size used = 0
Non-base size used = 0
Auto-delete state = Idle
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = no
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Compression space saved = 0
Compression percent = 0%
Compression ratio = 1:1
Data Reduction space saved = 0
Data Reduction percent = 0%
Data Reduction ratio = 1:1
All flash pool = yes
Example 3 (virtual deployments only)
The following command shows details for all pools on a virtual system.
uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool show -detail
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection
1: ID = pool_1
Name = Capacity
Description =
Total space = 4947802324992 (4.5T)
Current allocation = 3298534883328 (3T)
Preallocated space = 38310387712 (35.6G)
Remaining space = 1649267441664 (1.5T)
Subscription = 10995116277760 (10T)
Subscription percent = 222%
Alert threshold = 70%
Drives = 1 x 120GB Virtual; 1 x 300GB Virtual
Number of drives = 2
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Non-base size used = 1099511625 (1G)
Auto-delete state = Running
Auto-delete paused = no
Auto-delete pool full threshold enabled = yes
Auto-delete pool full high water mark = 95%
Auto-delete pool full low water mark = 85%
Auto-delete snapshot space used threshold enabled = yes
Auto-delete snapshot space used high water mark = 25%
Auto-delete snapshot space used low water mark = 20%
Delete pools
Delete a pool.
Format
/stor/config/pool {-id <value> | -name <value>} delete [-async]
Object qualifiers
| Qualifier | Description |
|---|---|
| -id | Type the ID of the pool to delete. |
| -name | Type the name of the pool to delete. |

Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
Example
The following command deletes pool pool_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool -id pool_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage FAST VP pool settings
Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a pool based on data access patterns.
The following table lists the attributes for FAST VP pool settings.
| Attribute | Description |
|---|---|
| Pool | Identifies the pool. |
| Status | Identifies the status of data relocation on the pool. Value is one of the following: |
| Relocation type | Type of data relocation. Value is one of the following: |
| Schedule enabled | Identifies whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following: |
| Start time | Indicates the time the current data relocation started. |
| End time | Indicates the time the current data relocation is scheduled to end. |
| Data relocated | The amount of data relocated during an ongoing relocation, or during the previous relocation if a data relocation is not occurring. The format is <value> [suffix], where: |
| Rate | Identifies the transfer rate for the data relocation. Value is one of the following: |
| Data to move up | The amount of data in the pool scheduled to be moved to a higher storage tier. |
| Data to move down | The amount of data in the pool scheduled to be moved to a lower storage tier. |
| Data to move within | The amount of data in the pool scheduled to be moved within the same storage tiers for rebalancing. |
| Data to move up per tier | The amount of data per tier that is scheduled to be moved to a higher tier. The format is <tier_name>:[value], where: |
| Data to move down per tier | The amount of data per tier that is scheduled to be moved to a lower tier. The format is <tier_name>:[value], where: |
| Data to move within per tier | The amount of data per tier that is scheduled to be moved within the same tier for rebalancing. The format is <tier_name>:[value], where: |
| Estimated relocation time | Identifies the estimated time required to perform the next data relocation. |
Change FAST VP pool settings
Modify FAST VP settings on an existing pool.
Format
/stor/config/pool/fastvp {-pool <value> | -poolName <value>} set [-async] -schedEnabled {yes | no}
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of the pool. |
| -poolName | Type the name of the pool. |

Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -schedEnabled | Specify whether the pool is rebalanced according to the system FAST VP schedule. Value is one of the following: yes, no. |
Example
The following example enables the rebalancing schedule on pool pool_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 set -schedEnabled yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Pool ID = pool_1
Operation completed successfully.
View FAST VP pool settings
View FAST VP settings on a pool.
Format
/stor/config/pool/fastvp [{-pool <value> | -poolName <value>}] show
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of the pool. |
| -poolName | Type the name of the pool. |
Example
The following command lists the FAST VP settings on the storage system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Pool = pool_1
Relocation type = manual
Status = Active
Schedule enabled = no
Start time = 2013-09-20 12:55:32
End time = 2013-09-20 21:10:17
Data relocated = 100111454324 (100G)
Rate = high
Data to move up = 4947802324992 (4.9T)
Data to move down = 4947802324992 (4.9T)
Data to move within = 4947802324992 (4.9T)
Data to move up per tier = Performance: 500182324992 (500G), Capacity: 1000114543245 (1.0T)
Data to move down per tier = Extreme Performance: 1000114543245 (1.0T), Performance: 500182324992 (500G)
Data to move within per tier = Extreme Performance: 500182324992 (500G), Performance: 500182324992 (500G), Capacity: 500182324992 (500G)
Estimated relocation time = 7h 30m
Start data relocation
Start data relocation on a pool.
Format
/stor/config/pool/fastvp {-pool <value> | -poolName <value>} start [-async] [-rate {low | medium | high}] [-endTime <value>]
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of the pool on which to resume data relocation. |
| -poolName | Type the name of the pool on which to resume data relocation. |

Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -endTime | Specify the time to stop the data relocation. The format is [HH:MM], where: |
| -rate | Specify the transfer rate for the data relocation. Value is one of the following: low, medium, high. |
Example
The following command starts data relocation on pool pool_1, and directs it to end at 04:00:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 start -endTime 04:00
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Stop data relocation
Stop data relocation on a pool.
Format
/stor/config/pool/fastvp {-pool <value> | -poolName <value>} stop [-async]
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of the pool. |
| -poolName | Type the name of the pool. |

Action qualifier

| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
Example
The following command stops data relocation on pool pool_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/fastvp -pool pool_1 stop
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage pool tiers
Storage tiers allow users to move data between different types of drives in a pool to maximize storage efficiency. Storage tiers are defined by the following characteristics:
- Drive performance.
- Drive capacity.
The following table lists the attributes for storage tiers:

| Attribute | Description |
|---|---|
| Name | Storage tier name. |
| Drives | The list of drive types, and the number of drives of each type, in the storage tier. |
| RAID level (physical deployments only) | RAID level of the storage tier. |
| Stripe length (physical deployments only) | Comma-separated list of the stripe lengths of the drives in the storage tier. |
| Total space | Total capacity in the storage tier. |
| Current allocation | Currently allocated space. |
| Remaining space | Remaining space. |
View storage tiers
View a list of storage tiers. You can filter on the pool ID.
Format
/stor/config/pool/tier {-pool <value> | -poolName <value>} show
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of a pool. |
| -poolName | Type the name of a pool. |
Example 1 (physical deployments only)
The following command shows tier details about the specified pool:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool pool_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Name = Extreme Performance
Drives = 2 x 200.0G SAS Flash 2; 2 x 800.0G SAS Flash 2
Drive type = SAS Flash
RAID level = 10
Stripe length = 2
Total space = 868120264704 (808.5G)
Current allocation = 56371445760 (52.5G)
Remaining space = 811748818944 (756.0G)
2: Name = Performance
Drives = 15 x 600.0G SAS
Drive type = SAS
RAID level = 5
Stripe length = 5
Total space = 7087501344768 (6.4T)
Current allocation = 0
Remaining space = 7087501344768 (6.4T)
3: Name = Capacity
Drives = 8 x 6.0T NL-SAS
Drive type = NL-SAS
RAID level = 6
Stripe length = 8
Total space = 35447707271168 (32.2T)
Current allocation = 1610612736 (1.5G)
Remaining space = 35446096658432 (32.2T)
Example 2 (virtual deployments only)
The following command shows details about pool pool_1 on a virtual system.
uemcli -d 10.0.0.2 -u Local/joe -p MyPassword456! /stor/config/pool/tier -pool pool_1 show -detail
Storage system address: 10.0.0.2
Storage system port: 443
HTTPS connection
1: Name = Extreme Performance
Drives =
Total space = 0
Current allocation = 0
Remaining space = 0
2: Name = Performance
Drives = 1 x 500GB Virtual
Total space = 631242752000 (500.0G)
Current allocation = 12624855040 (10.0G)
Remaining space = 618617896960 (490.0G)
3: Name = Capacity
Drives =
Total space = 0
Current allocation = 0
Remaining space = 0
View pool resources
This command displays a list of storage resources allocated in a pool. These can be storage resources provisioned on the specified pool, as well as NAS servers that have file systems allocated in the pool.
The following table lists the attributes for pool resources.
| Attribute | Description |
|---|---|
| ID | Storage resource identifier. |
| Name | Name of the storage resource. |
| Resource type | Type of the resource. Valid values are: |
| Pool | Name of the pool. |
| Total pool space used | Total space in the pool used by a storage resource. This includes primary data used size, snapshot used size, and metadata size. Space in the pool can be freed if snapshots and thin clones for storage resources are deleted, or have expired. |
| Total pool space preallocated | Total space reserved from the pool by the storage resource for future needs to make writes more efficient. The pool may be able to reclaim some of this if space is running low. Additional pool space can be freed if snapshots or thin clones are deleted or expire, and also if Data Reduction is applied. |
| Total pool non-base space used | Total pool space used by snapshots and thin clones. |
| Health state | Health state of the storage resource. The health state code appears in parentheses. |
| Health details | Additional health information. See Appendix A, Reference, for health information details. |
Format
/stor/config/pool/sr [{-pool <value> | -poolName <value>}] show
Object qualifiers
| Qualifier | Description |
|---|---|
| -pool | Type the ID of the pool. |
| -poolName | Type the name of the pool. |
Example
The following command shows details for all storage resources associated with the pool pool_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/pool/sr -pool pool_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = res_1
Name = File_System_1
Resource type = File System
Pool = pool_1
Total pool space used = 53024473088 (49.3G)
Total pool preallocated = 15695003648 (14.6G)
Total pool snapshot space used = 7179124736 (6.6G)
Total pool non-base space used = 7179124736 (6.6G)
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
2: ID = sv_1
Name = AF LUN 1
Resource type = LUN
Pool = pool_1
Total pool space used = 14448566272 (13.4G)
Total pool preallocated = 4610351104 (4.2G)
Total pool snapshot space used = 4593991680 (4.2G)
Total pool non-base space used = 4593991680 (4.2G)
Health state = OK (5)
Health details = "The LUN is operating normally. No action is required."
3: ID = res_2
Name = File_System_2
Resource type = File System
Pool = pool_1
Total pool space used = 117361025024 (109.3G)
Total pool preallocated = 3166494720 (2.9G)
Total pool snapshot space used = 41022308352 (38.2G)
Total pool non-base space used = 41022308352 (38.2G)
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
4: ID = sv_2
Name = AF LUN 2
Resource type = LUN
Pool = pool_1
Total pool space used = 9500246016 (8.8G)
Total pool preallocated = 2579349504 (2.4G)
Total pool snapshot space used = 0
Total pool non-base space used = 0
Health state = OK (5)
Health details = "The LUN is operating normally. No action is required."
5: ID = res_3
Name = CG1
Resource type = LUN group
Pool = pool_1
Total pool space used = 892542287872 (831.2G)
Total pool preallocated = 8863973376 (8.2G)
Total pool snapshot space used = 231799308288 (215.8G)
Total pool non-base space used = 231799308288 (215.8G)
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Manage FAST VP general settings
Fully Automated Storage Tiering for Virtual Pools (FAST VP) is a storage efficiency technology that automatically moves data between storage tiers within a pool based on data access patterns.
The following table lists the attributes for FAST VP general settings.
| Attribute | Description |
|---|---|
| Paused | Identifies whether the data relocation is paused. Value is one of the following: |
| Schedule enabled | Identifies whether pools are rebalanced according to the system FAST VP schedule. Value is one of the following: |
| Frequency | Data relocation schedule. The format is: Every <days_of_the_week> at <start_time> until <end_time>, where: |
| Rate | Identifies the transfer rate for the data relocation. Value is one of the following: |
| Data to move up | The amount of data in the pool scheduled to be moved to a higher storage tier. |
| Data to move down | The amount of data in the pool scheduled to be moved to a lower storage tier. |
| Data to move within | The amount of data in the pool scheduled to be moved within the same storage tiers for rebalancing. |
| Estimated scheduled relocation time | Identifies the estimated time required to perform the next data relocation. |
Change FAST VP general settings
Change FAST VP general settings.
Format
/stor/config/fastvp set [-async] [-schedEnabled {yes | no}] [-days <value>] [-at <value>] [-until <value>] [-rate {low | medium | high}] [-paused {yes | no}]
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -paused | Specify whether to pause data relocation on the storage system. Valid values are: yes, no. |
| -schedEnabled | Specify whether pools are rebalanced according to the system FAST VP schedule. Valid values are: yes, no. |
| -days | Specify a comma-separated list of the days of the week on which to schedule data relocation. Valid values are: |
| -at | Specify the time to start the data relocation. The format is [HH:MM], where: |
| -until | Specify the time to stop the data relocation. The format is [HH:MM], where: |
| -rate | Specify the transfer rate for the data relocation. Value is one of the following: low, medium, high. |
Example
The following command changes the data relocation schedule to run on Mondays and Fridays from 23:00 to 07:00:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp set -schedEnabled yes -days "Mon,Fri" -at 23:00 -until 07:00
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
View FAST VP general settings
View the FAST VP general settings.
Format
/stor/config/fastvp show -detail
Example
The following command displays the FAST VP general settings:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastvp show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Paused = no
Schedule enabled = yes
Frequency = Every Mon, Fri at 22:30 until 8:00
Rate = high
Data to move up = 4947802324992 (1.5T)
Data to move down = 4947802324992 (1.5T)
Data to move within = 4947802324992 (1.5T)
Estimated scheduled relocation time = 7h 30m
Manage FAST Cache (supported physical deployments only)
FAST Cache is a storage efficiency technology that uses drives to expand the cache capability of the storage system to provide improved performance.
The following table lists the attributes for FAST Cache:
| Attribute | Description |
|---|---|
| Capacity | Capacity of the FAST Cache. |
| Drives | The list of drive types, and the number of drives of each type, in the FAST Cache. |
| Number of drives | Total number of drives in the FAST Cache. |
| RAID level | RAID level applied to the FAST Cache drives. This value is always RAID 1. |
| Health state | Health state of the FAST Cache. The health state code appears in parentheses. |
| Health details | Additional health information. See Appendix A, Reference, for health information details. |
Create FAST Cache
Configure FAST Cache. The storage system generates an error if FAST Cache is already configured.
Format
/stor/config/fastcache create [-async] -diskGroup <value> -drivesNumber <value> [-enableOnExistingPools {yes | no}]
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -diskGroup | Specify the drive group to include in the FAST Cache. |
| -drivesNumber | Specify the number of drives to include in the FAST Cache. |
| -enableOnExistingPools | Specify whether FAST Cache is enabled on all existing pools. Valid values are: yes, no. |
Example
The following command configures FAST Cache with six drives from drive group dg_2, and enables FAST Cache on existing pools:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache create -diskGroup dg_2 -drivesNumber 6 -enableOnExistingPools yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
View FAST Cache settings
View the FAST Cache parameters.
Format
/stor/config/fastcache show
Example
The following command displays the FAST Cache parameters for a medium endurance Flash drive:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Total space = 536870912000 (500G)
Drives = 6 x 200GB SAS Flash 2
Number of drives = 6
RAID level = 1
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Extend FAST Cache
Extend the FAST Cache by adding more drives.
Format
/stor/config/fastcache extend [-async] -diskGroup <value> -drivesNumber <value>
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -diskGroup | Specify the comma-separated list of SAS Flash drive groups to add to the FAST Cache. Any added drives must have the same drive type and drive size as the existing drives. |
| -drivesNumber | Specify the number of drives from each corresponding drive group to be added to the FAST Cache. |
Example
The following command adds six drives from drive group dg_2 to the FAST Cache:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache extend -diskGroup dg_2 -drivesNumber 6
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Shrink FAST Cache
Shrink the FAST Cache by removing storage objects.
Format
/stor/config/fastcache shrink [-async] -so <value>
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
| -so | Specify the comma-separated list of storage objects to remove from the FAST Cache. Run the /stor/config/fastcache/so show command to obtain a list of all storage objects currently in the FAST Cache. |
Example
The following command removes RAID group rg_1 from the FAST Cache:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache shrink -so rg_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Delete FAST Cache
Delete the FAST Cache configuration. The storage system generates an error if FAST Cache is not configured on the system.
Format
/stor/config/fastcache delete [-async]
Action qualifier
| Qualifier | Description |
|---|---|
| -async | Run the operation in asynchronous mode. |
Example
The following command deletes the FAST Cache configuration:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage FAST Cache storage objects (physical deployments only)
FAST Cache storage objects include the RAID groups and drives that are in the FAST Cache.
| Attribute | Description |
|---|---|
| ID | Identifier of the storage object. |
| Type | Type of storage object. |
| RAID level | RAID level applied to the storage object. |
| Drive type | Type of drive. |
| Number of drives | Number of drives in the storage object. |
| Drives | Comma-separated list of the drive IDs for each storage object. |
| Total space | Total space used by the storage object. |
| Device state | The status of the FAST Cache device. Values are: |
View FAST Cache storage objects
View a list of all storage objects, including RAID groups and drives, that are in the FAST Cache.
Format
/stor/config/fastcache/so [-id <value>] show
Object qualifier
| Qualifier | Description |
|---|---|
| -id | Type the ID of the storage object in the FAST Cache. |
Example 1
The following example shows FAST Cache storage objects on the system.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/fastcache/so show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = rg_6
Type = RAID group
Stripe length = 2
RAID level = 1
Number of drives = 2
Drive type = SAS Flash 2
Drives = dae_0_1_disk_1, dae_0_1_disk_2
Total space = 195400433664 (181.9G)
Device state = OK
View storage profiles (physical deployments only)
Storage profiles are preconfigured settings for configuring pools based on the following:
- Types of storage resources that will use the pools.
- Intended usage of the pool.
For example, create a pool for file system storage resources intended for general use. When configuring a pool, specify the ID of the storage profile to apply to the pool.
NOTE: Storage profiles are not restrictive with regard to storage provisioning. For example, you can provision file systems from an FC or iSCSI database pool. However, the characteristics of the storage will be best suited to the indicated storage resource type and use.
Each storage profile is identified by an ID.
The following table lists the attributes for storage profiles.
| Attribute | Description |
|---|---|
| ID | ID of the storage profile. |
| Type | (Available only for systems that support dynamic pools) Type of pool the profile can create. Value is one of the following: |
| Description | Brief description of the storage profile. |
| Drive type | Types of drives for the storage profile. |
| RAID level | RAID level number for the storage profile. Value is one of the following: |
| Maximum capacity | Maximum storage capacity for the storage profile. |
| Stripe length | Number of drives the data is striped across. |
| Disk group | List of drive groups recommended for the storage pool configurations of the specified storage profile. This is calculated only when the -configurable option is specified. |
| Maximum drives to configure | List of the maximum number of drives allowed for the specified storage profile in the recommended drive groups. This is calculated only when the -configurable option is specified. |
| Maximum capacity to configure | List of the maximum free capacity of the drives available to configure for the storage profile in the recommended drive groups. This is calculated only when the -configurable option is specified. |
Format
/stor/config/profile [-id <value> | -driveType <value> [-raidLevel <value>] | -traditional] [-configurable] show
Object qualifier
| Qualifier | Description |
|---|---|
| -id | Type the ID of a storage profile. |
| -driveType | Specify the type of drive. |
| -raidLevel | Specify the RAID type of the profile. |
| -traditional | (Available only for systems that support dynamic pools) Specify this option to view the profiles that you can use for creating traditional pools. To view the profiles you can use for creating dynamic pools, omit this option. |
| -configurable | Show only profiles that can be configured, that is, those with non-empty drive group information. If specified, calculates the following drive group information for each profile: Disk group, Maximum drives to configure, and Maximum capacity to configure. If the profile is for a dynamic pool, the calculated information indicates whether the drive group has enough drives for pool creation. The calculation assumes that the pool will be created with the drives in the specified drive group only. |
Example 1
The following command shows details for storage profiles that can be used to create dynamic pools:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/profile -configurable show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = profile_22
Type = Dynamic
Description = SAS Flash 2 RAID5 (4+1)
Drive type = SAS Flash 2
RAID level = 5
Maximum capacity = 4611148087296 (4.1T)
Stripe length = Maximum capacity
Disk group =
Maximum drives to configure =
Maximum capacity to configure =
2: ID = profile_30
Type = Dynamic
Description = SAS Flash 2 RAID10 (1+1)
Drive type = SAS Flash 2
RAID level = 10
Maximum capacity = 9749818597376 (8.8T)
Stripe length = 2
Disk group =
Maximum drives to configure =
Maximum capacity to configure =
3: ID = profile_31
Type = Dynamic
Description = SAS Flash 2 RAID10 (2+2)
Drive type = SAS Flash 2
RAID level = 10
Maximum capacity = 9749818597376 (8.8T)
Stripe length = 4
Disk group =
Maximum drives to configure =
Maximum capacity to configure =
Example 2
The following command shows details for storage profiles that can be used to create traditional pools in models that support dynamic pools:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/profile -traditional -configurable show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = tprofile_22
Type = Traditional
Description = SAS Flash 3 RAID5 (4+1)
Drive type = SAS Flash 3
RAID level = 5
Maximum capacity = 4611148087296 (4.1T)
Stripe length = Maximum capacity
Disk group = dg_16
Maximum drives to configure = 5
Maximum capacity to configure = 1884243623936 (1.7T)
2: ID = tprofile_30
Type = Traditional
Description = SAS Flash 3 RAID10 (1+1)
Drive type = SAS Flash 3
RAID level = 10
Maximum capacity = 9749818597376 (8.8T)
Stripe length = 2
Disk group = dg_13, dg_15
Maximum drives to configure = 10, 10
Maximum capacity to configure = 1247522127872 (1.1T), 2954304921600 (2.6T)
3: ID = tprofile_31
Type = Traditional
Description = SAS Flash 3 RAID10 (2+2)
Drive type = SAS Flash 3
RAID level = 10
Maximum capacity = 9749818597376 (8.8T)
Stripe length = 4
Disk group = dg_13, dg_15
Maximum drives to configure = 8, 8
Maximum capacity to configure = 2363443937280 (2.1T), 952103075840 (886.7G)
Manage drive groups (physical deployments only)
Drive groups are the groups of drives on the system with similar characteristics, including type, capacity, and spindle speed. When configuring pools, you select the drive group to use and the number of drives from the group to add to the pool.
Each drive group is identified by an ID.
The following table lists the attributes for drive groups.
| Attribute | Description |
|---|---|
| ID | ID of the drive group. |
| Drive type | Type of drives in the drive group. |
| FAST Cache | Indicates whether the drive group's drives can be added to FAST Cache. |
| Drive size | Capacity of one drive in the drive group. |
| Rotational speed | Rotational speed of the drives in the group. |
| Number of drives | Total number of drives in the drive group. |
| Unconfigured drives | Total number of drives in the drive group that are not in a pool. |
| Capacity | Total capacity of all drives in the drive group. |
| Recommended number of spares | Number of spares recommended for the drive group. |
| Drives past EOL | Number of drives past EOL (end of life) in the group. |
| Drives approaching EOL | Number of drives that will reach EOL in 0-30 days, 0-60 days, 0-90 days, and 0-180 days. |
View drive groups
View details about drive groups on the system. You can filter on the drive group ID.
Format
/stor/config/dg [-id <value>] [-traditional] show
Object qualifier
| Qualifier | Description |
|---|---|
| -id | Type the ID of a drive group. |
| -traditional | (Available only for systems that support dynamic pools) Specify this qualifier to have the system assume that the pools to be created are traditional pools. |
Example 1
The following command shows details about all drive groups that can be used to configure dynamic pools:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = dg_3
Drive type = SAS Flash 2
FAST Cache = yes
Drive size = 393846128640 (366.7G)
Vendor size = 400.0G
Rotational speed = 0 rpm
Number of drives = 3
Unconfigured drives = 3
Capacity = 1181538385920 (1.1T)
Recommended number of spares = 0
Drives past EOL = 0
Drives approaching EOL = 0 (0-30 days), 0 (0-60 days), 0 (0-90 days), 0 (0-180 days)
2: ID = dg_2
Drive type = SAS Flash 2
FAST Cache = yes
Drive size = 196971960832 (183.4G)
Vendor size = 200.0G
Rotational speed = 0 rpm
Number of drives = 7
Unconfigured drives = 7
Capacity = 1378803725824 (1.2T)
Recommended number of spares = 0
Drives past EOL = 0
Drives approaching EOL = 1 (0-30 days), 2 (0-60 days), 2 (0-90 days), 3 (0-180 days)
Example 2
The following command shows details about all drive groups that can be used to configure traditional pools in models that support dynamic pools:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg -traditional show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = dg_8
Drive type = NL-SAS
FAST Cache = no
Drive size = 1969623564288 (1.7T)
Vendor size = 2.0T
Rotational speed = 7200 rpm
Number of drives = 7
Unconfigured drives = 7
Capacity = 13787364950016 (12.5T)
Recommended number of spares = 1
2: ID = dg_15
Drive type = SAS
FAST Cache = no
Drive size = 590894538752 (550.3G)
Vendor size = 600.0G
Rotational speed = 15000 rpm
Number of drives = 16
Unconfigured drives = 4
Capacity = 9454312620032 (8.5T)
Recommended number of spares = 1
View recommended drive group configurations
View the recommended drive groups from which to add drives to a pool, based on a specified storage profile or pool type.
Format
/stor/config/dg recom {-profile <value> | -pool <value> | -poolName <value>}
Action qualifier
| Qualifier | Description |
|---|---|
| -profile | Type the ID of a storage profile. The output will include the list of drive groups recommended for the specified storage profile. |
| -pool | Type the ID of a pool. The output will include the list of drive groups recommended for the specified pool. |
| -poolName | Type the name of a pool. The output will include the list of drive groups recommended for the specified pool. |
Example
The following command shows the recommended drive groups for pool pool_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/config/dg recom -pool pool_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = DG_1
Drive type = SAS
Drive size = 536870912000 (500GB)
Number of drives = 8
Allowed numbers of drives = 4,8
Capacity = 4398046511104 (4TB)
2: ID = DG_2
Drive type = SAS
Drive size = 268435456000 (250GB)
Number of drives = 4
Allowed numbers of drives = 4
Capacity = 1099511627776 (1TB)
Manage storage system capacity settings
The following table lists the general storage system capacity attributes:
| Attribute | Description |
|---|---|
| Free space | Specifies the amount of space that is free (available to be used) in all storage pools on the storage system. |
| Used space | Specifies the amount of space that is used in all storage pools on the storage system. |
| Preallocated space | Space reserved across all of the pools on the storage system. This space is reserved for future needs of storage resources, which can make writes more efficient. Each pool may be able to reclaim preallocated space from storage resources if the storage resources are not using the space and the pool space is running low. |
| Total space | Specifies the total amount of space, both free and used, in all storage pools on the storage system. |
| Data Reduction space saved | Specifies the storage size saved on the entire system when using data reduction. |
| Data Reduction percent | Specifies the storage percentage saved on the entire system when using data reduction. |
| Data Reduction ratio | Specifies the ratio between data without data reduction and data after data reduction savings. |
View system capacity settings
View the current storage system capacity settings.
Format
/stor/general/system show
Example
The following command displays details about the storage capacity on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/general/system show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Free space = 4947802324992 (1.5T)
Used space = 4947802324992 (1.5T)
Total space = 9895604649984 (3.0T)
Preallocated space = 60505210880 (56.3G)
Compression space saved = 4947802324992 (1.5T)
Compression percent = 50%
Compression ratio = 1
Data Reduction space saved = 4947802324992 (1.5T)
Data Reduction percent = 50%
Data Reduction ratio = 1
Manage system tier capacity settings
The following table lists the general system tier capacity attributes:
Attributes
|
Description
|
---|---|
Name
|
Name of the tier. One of the following:
|
Free space
|
Specifies the amount of space that is free (available to be used) in the tier.
|
Used space
|
Specifies the amount of space that is used in the tier.
|
Total space
|
Specifies the total amount of space, both free and used, in the tier.
|
View system tier capacity
View the current system tier capacity settings.
Format
/stor/general/tier show
Example
The following command displays details about the storage tier capacity on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/general/tier show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Name = Extreme Performance Tier
Free space = 4947802324992 (1.5T)
Used space = 4947802324992 (1.5T)
Total space = 9895604649984 (3.0T)
2: Name = Capacity Tier
Free space = 4947802324992 (1.5T)
Used space = 4947802324992 (1.5T)
Total space = 9895604649984 (3.0T)
Manage file systems
File systems are logical containers on the system that provide file-based storage resources to hosts. You configure file systems on NAS servers, which maintain and manage the file systems. You create network shares on the file system, which connected hosts map or mount to access the file system storage. When creating a file system, you can enable support for the following network shares:
- SMB shares (previously named CIFS shares), which provide storage access to Windows hosts.
- Network file system (NFS) shares, which provide storage access to Linux/UNIX hosts.
An ID identifies each file system.
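For example, a minimal end-to-end sketch (the NAS server nas_2, the pool name capacity, and the file system and share names are illustrative) creates an NFS file system and then exports it through a share, using the create commands documented later in this chapter:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs create -name FileSystem01 -server nas_2 -poolName capacity -size 100G -type nfs
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs create -name share01 -fsName FileSystem01 -path /
Hosts can then mount the share through the export path reported for share01.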
The following table lists the attributes for file systems:
Attribute
|
Description
|
||
---|---|---|---|
ID
|
ID of the file system.
|
||
Name
|
Name of the file system.
|
||
Description
|
Description of the file system.
|
||
Health state
|
Health state of the file system. The health state code appears in parentheses. Value is one of the following:
|
||
Health details
|
Additional health information. See Appendix A, Reference, for health information details.
|
||
File system
|
Identifier for the file system. Output of some metrics commands displays only the file system ID. This enables you to easily identify the file system in the output.
|
||
Server
|
Name of the NAS server that the file system is mounted on.
|
||
Storage pool ID
|
ID of the storage pool the file system is using.
|
||
Storage pool
|
Name of the storage pool that the file system uses.
|
||
Format
|
Format of the file system. Value is
UFS64.
|
||
Protocol
|
Protocol used to enable network shares from the file system. Value is:
|
||
Access policy
|
File system access policy option. Value is one of the following:
|
||
Folder rename policy
|
File system folder rename policy option. This policy controls the circumstances under which NFS and SMB clients can rename a directory. Value is one of the following:
|
||
Locking policy
|
File system locking policy option. This policy controls whether NFSv4 range locks must be honored. Value is one of the following:
|
||
Size
|
Quantity of storage reserved for primary data.
|
||
Size used
|
Quantity of storage currently used for primary data.
|
||
Maximum size
|
Maximum size to which you can increase the primary storage capacity.
|
||
Thin provisioning enabled
|
Identifies whether thin provisioning is enabled. Value is
yes or
no. Default is
no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be over provisioned to support more storage capacity than they actually possess.
|
||
Data Reduction enabled
|
Identifies whether data reduction is enabled for this file system. Valid values are:
|
||
Data Reduction space saved
|
Total space saved (in gigabytes) for this file system by using data reduction.
|
||
Data Reduction percent
|
Total file system storage percentage saved for the file system by using data reduction.
|
||
Data Reduction ratio
|
Ratio between data without data reduction and data after data reduction savings.
|
||
Advanced deduplication enabled
|
Identifies whether advanced deduplication is enabled for this file system. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the file system. Valid values are:
|
||
Current allocation
|
If enabled, the quantity of primary storage currently allocated through thin provisioning.
|
||
Total pool space preallocated
|
Space reserved from the pool for the file system for future needs to make writes more efficient. The pool may be able to reclaim some of this space if pool space is low.
|
||
Total pool space used
|
Total pool space used in the pool for the file system. This includes the allocated space and allocations for snaps and overhead. This does not include preallocated space.
|
||
Minimum size allocated
|
(Displays for file systems created on a Unity system running OE version 4.1.) Minimum quantity of primary storage allocated through thin provisioning. File shrink operations cannot decrease the file system size lower than this value.
|
||
Protection size used
|
Quantity of storage currently used for protection data.
|
||
Protection schedule
|
ID of an applied protection schedule.
|
||
Protection schedule paused
|
Identifies whether an applied protection schedule is currently paused. Value is yes or no.
|
||
FAST VP policy
|
FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
FAST VP distribution
|
Percentage of the file system storage assigned to each tier. The format is: <tier_name>:<value>% where:
|
||
CIFS synchronous write
|
Identifies whether SMB synchronous writes option is enabled. Value is
yes or
no.
|
||
CIFS oplocks
|
Identifies whether opportunistic file locks (oplocks) for SMB network shares are enabled. Value is
yes or
no.
|
||
CIFS notify on write
|
Identifies whether write notifications for SMB network shares are enabled. Value is
yes or
no. When enabled, Windows applications receive notifications each time a user writes or changes a file on the SMB share.
|
||
CIFS notify on access
|
Identifies whether file access notifications for SMB shares are enabled. Value is
yes or
no. When enabled, Windows applications receive notifications each time a user accesses a file on the SMB share.
|
||
CIFS directory depth
|
For write and access notifications on SMB network shares, the subdirectory depth permitted for file notifications. Value range is 1-512. Default is 512.
|
||
Replication type
|
Identifies what type of asynchronous replication this file system is participating in. Valid values are:
|
||
Synchronous replication type
|
Identifies what type of synchronous replication this file system is participating in. Valid values are:
|
||
Replication destination
|
Identifies whether the storage resource is a destination for a replication session (local or remote). Valid values are:
|
||
Migration destination
|
Identifies whether the storage resource is a destination for a NAS import session. Valid values are:
|
||
Creation time
|
Date and time when the file system was created.
|
||
Last modified time
|
Date and time when the file system settings were last changed.
|
||
Snapshot count
|
Number of snapshots created on the file system.
|
||
Pool full policy
|
Policy to follow when the pool is full and a write to the file system is attempted. This attribute enables you to preserve snapshots on the file system when a pool is full. Valid values are:
|
||
Event publishing protocols
|
List of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
|
||
FLR mode
|
Specifies which version of File-level Retention (FLR) is enabled. Values are:
|
||
FLR has protected files
|
Indicates whether the file system contains protected files. Values are:
|
||
FLR clock time
|
Indicates the file system clock time used to track the retention date. For example,
2019-02-20 12:55:32.
|
||
FLR max retention date
|
Maximum date and time that has been set on any locked file in an FLR-enabled file system. For example,
2020-09-20 11:00:00.
|
||
FLR min retention period
|
Indicates the shortest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. The format is
(<integer> d|m|y) | infinite. Values are:
Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. |
||
FLR default retention period
|
Indicates the default retention period that is used in an FLR-enabled file system when a file is locked and a retention period is not specified at the file level.
The format is (<integer> d|m|y) | infinite. Values are:
Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. |
||
FLR max retention period
|
Indicates the longest retention period for which files on an FLR-enabled file system can be locked and protected from deletion. Values are:
The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. |
||
FLR auto lock enabled
|
Indicates whether automatic file locking for all files in an FLR-enabled file system is enabled. Values are:
|
||
FLR auto delete enabled
|
Indicates whether automatic deletion of locked files from an FLR-enabled file system once the retention period has expired is enabled. Values are:
|
||
FLR policy interval
|
When Auto-lock new files is enabled, this indicates a time interval for how long to wait after files are modified before the files are automatically locked in an FLR-enabled file system.
The format is <value> <qualifier>, where value is an integer and the qualifier is:
The value should be greater than 1 minute and less than 366 days. |
||
Error Threshold
|
Specifies the threshold of used space in the file system as a percentage. When exceeded, error alert messages will be generated. The default value is 95%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value greater than the Warning Threshold and Info Threshold.
|
||
Warning Threshold
|
Specifies the threshold of used space in the file system as a percentage. When exceeded, warning alert messages will be generated. The default value is 75%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value less than the Error Threshold value, and greater than or equal to the Info Threshold value.
|
||
Info Threshold
|
Specifies the threshold of used space in the file system as a percentage. When exceeded, informational alert messages will be generated. The default value is 0 (disabled). This option must be set to a value less than the Warning Threshold value.
|
Create file systems
Prerequisites
- Configure at least one storage pool for the file system to use and allocate at least one drive to the pool.
- Configure at least one NAS server to which to associate the file system.
Format
/stor/prov/fs create [-async] -name <value> [-descr <value>] {-server <value> | -serverName <value>} {-pool <value> | -poolName <value>} -size <value> [-thin {yes | no}] [-dataReduction {yes [-advancedDedup {yes | no}] | no}] [-minSizeAllocated <value>] -type {{multiprotocol [-accessPolicy {native | Windows | Unix}] [-folderRenamePolicy {allowedAll | forbiddenSmb | forbiddenAll}] [-lockingPolicy {advisory | mandatory}]} [-cifsSyncWrites {yes | no}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] | nfs} [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-sched <value> [-schedPaused {yes | no}]] [-replDest {yes | no}] [-eventProtocols <value>] [-flr {disabled | {enterprise | compliance} [-flrMinRet <value>] [-flrDefRet <value>] [-flrMaxRet <value>]}]
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type a name for the file system.
|
||
-descr
|
Type a brief description of the file system.
|
||
-server
|
Type the ID of the NAS server that will be the parent NAS server for the file system.
View NAS servers explains how to view the IDs of the NAS servers on the system.
|
||
-serverName
|
Type the name of the NAS server that will be the parent NAS server for the file system.
|
||
-pool
|
Type the ID of the pool to be used for the file system.
|
||
-poolName
|
Type the name of the pool to be used for the file system. This value is case insensitive.
|
||
-size
|
Type the quantity of storage to reserve for the file system.
|
||
-thin
|
Enable thin provisioning on the file system. Valid values are:
|
||
-dataReduction
|
Specify whether data reduction is enabled for the thin file system. Valid values are:
|
||
-advancedDedup
|
Specify whether advanced deduplication is enabled for the thin file system. This option is available only after data reduction has been enabled. Valid values are:
|
||
-minSizeAllocated
|
(Option available on a Unity system running OE version 4.1.) Specify the minimum size to allocate for the thin file system. Automatic and manual file shrink operations cannot decrease the file system size lower than this value. The default value is 3G, which is the minimum thin file system size.
|
||
-type
|
Specify the type of network shares to export from the file system. Valid values are:
|
||
-accessPolicy
|
(Applies to multiprotocol file systems only.) Specify the access policy for this file system. Valid values are:
|
||
-folderRenamePolicy
|
(Applies to multiprotocol file systems only.) Specify the rename policy for the file system. Valid values are:
|
||
-lockingPolicy
|
(Applies to multiprotocol file systems only.) Specify the locking policy for the file system. Valid values are:
|
||
-cifsSyncWrites
|
Enable synchronous write operations for CIFS network shares. Valid values are:
|
||
-cifsOpLocks
|
Enable opportunistic file locks (oplocks) for CIFS network shares. Valid values are:
|
||
-cifsNotifyOnWrite
|
Enable to receive notifications when users write to a CIFS share. Valid values are:
|
||
-cifsNotifyOnAccess
|
Enable to receive notifications when users access a CIFS share. Valid values are:
|
||
-cifsNotifyDirDepth
|
If the value for
-cifsNotifyOnWrite or
-cifsNotifyOnAccess is
yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
|
||
-fastvpPolicy
|
Specify the FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case insensitive):
|
||
-sched
|
Type the ID of a protection schedule to apply to the storage resource.
|
||
-schedPaused
|
Specify whether to pause the protection schedule specified for
-sched. Valid values are:
|
||
-replDest
|
Specifies whether the resource is a replication destination. Valid values are:
|
||
-eventProtocols
|
Specifies the comma-separated list of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
|
||
-flr
|
Specifies whether File-level Retention (FLR) is enabled and if so, which version of FLR is being used. Valid values are:
|
||
-flrMinRet
|
Specify the shortest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. Valid values are:
|
||
-flrDefRet
|
Specify the default retention period that is used in an FLR-enabled file system where a file is locked, but a retention period was not specified at the file level.
The format is (<integer> d|m|y) | infinite.
Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. The value of this parameter must be greater than the minimum retention period -flrMinRet. |
||
-flrMaxRet
|
Specify the longest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. Values are:
The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. The value of this parameter must be greater than the default retention period -flrDefRet. |
Example
The following command creates a file system with these settings:
- Name is FileSystem01.
- Description is "Multiprotocol file system".
- Uses the capacity storage pool.
- Uses NAS server nas_2 as the parent NAS server.
- Primary storage size is 3 GB.
- Supports multiprotocol network shares.
- Has a native access policy.
- Is a replication destination.
The file system receives the ID res_28:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs create -name FileSystem01 -descr "Multiprotocol file system" -server nas_2 -pool capacity -size 3G -type multiprotocol -accessPolicy native -replDest yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = res_28
Operation completed successfully.
View file systems
View details about a file system. You can filter on the file system ID.
|
Format
/stor/prov/fs [{-id <value> | -name <value> | -server <value> | -serverName <value>}] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a file system.
|
-name
|
Type the name of a file system.
|
-server
|
Type the ID of the NAS server for which the file systems will be displayed.
|
-serverName
|
Type the name of the NAS server for which the file systems will be displayed.
|
Example
The following command lists details about all file systems on the storage system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = res_1
Name = fs
Description =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
File system = fs_1
Server = nas_1
Storage pool ID = pool_1
Storage pool = pool
Format = UFS64
Protocol = nfs
Access policy = unix
Folder rename policy = forbiddenSmb
Locking policy = mandatory
Size = 53687091200 (50.0G)
Size used = 1620303872 (1.5G)
Maximum size = 281474976710656 (256.0T)
Thin provisioning enabled = yes
Compression enabled = no
Compression space saved = 0
Compression percent = 0%
Compression ratio = 1.0:1
Data Reduction enabled = no
Data Reduction space saved = 0
Data Reduction percent = 0%
Data Reduction ratio = 1.0:1
Advanced deduplication enabled = no
Current allocation = 283140096 (270.0M)
Preallocated = 2401214464 (2.2G)
Total Pool Space Used = 4041236480 (3.7G)
Minimum size allocated =
Protection size used = 0
Snapshot count = 0
Protection schedule =
Protection schedule paused = no
FLR mode = Disabled
FLR has protected files =
FLR clock time =
FLR max retention date =
FLR min retention period =
FLR default retention period =
FLR max retention period =
FLR auto lock enabled =
FLR auto delete enabled =
FLR policy interval =
Error threshold = 95%
Warning threshold = 75%
Info threshold = 10%
CIFS synchronous write = no
CIFS oplocks = no
CIFS notify on write = no
CIFS notify on access = no
CIFS directory depth = 512
Replication type = none
Synchronous replication type = none
Replication destination = no
Migration destination = no
FAST VP policy = Start high then auto-tier
FAST VP distribution =
Creation time = 2018-12-03 10:04:10
Last modified time = 2018-12-04 06:49:31
Pool full policy = Fail Writes
Event publishing protocols =
Change file system settings
Change the settings for a file system.
Format
/stor/prov/fs {-id <value> | -name <value>} set [-async] [-descr <value>] [-accessPolicy {native | Unix | Windows}] [-folderRenamePolicy {allowedAll | forbiddenSmb | forbiddenAll}] [-lockingPolicy {advisory | mandatory}] [-size <value>] [-minSizeAllocated <value>] [-dataReduction {yes [-advancedDedup {yes | no}] | no}] [-cifsSyncWrites {yes | no}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}] [-cifsOpLocks {yes | no}] [-cifsNotifyOnWrite {yes | no}] [-cifsNotifyOnAccess {yes | no}] [-cifsNotifyDirDepth <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-replDest {yes | no}] [-poolFullPolicy {deleteAllSnaps | failWrites}] [-eventProtocols <value>] [-flr [-flrMinRet <value>] [-flrDefRet <value>] [-flrMaxRet <value>] [-flrAutoLock {yes | no}] [-flrAutoDelete {yes | no}] [-flrPolicyInterval <value>]] [-errorThreshold <value>] [-warningThreshold <value>] [-infoThreshold <value>]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the file system to change.
|
-name
|
Type the name of the file system to change.
|
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-descr
|
Type a brief description of the file system.
|
||
-accessPolicy
|
Specify the access policy for the file system. Valid values are:
|
||
-folderRenamePolicy
|
Specify the rename policy for the file system. Valid values are:
|
||
-lockingPolicy
|
Specify the locking policy for the file system. Valid values are:
|
||
-size
|
Type the amount of storage in the pool to reserve for the file system.
|
||
-minSizeAllocated
|
(Option available on a Unity system running OE version 4.1.) Specify the minimum size to allocate for the thin file system. Automatic and manual file shrink operations cannot decrease the file system size lower than this value. The default value is 3G, which is the minimum thin file system size.
|
||
-dataReduction
|
Enable data reduction on the thin file system. Valid values are:
|
||
-advancedDedup
|
Enable advanced deduplication on the thin file system. This option is available only after data reduction has been enabled. Valid values are:
|
||
-cifsSyncWrites
|
Enable synchronous write operations for CIFS (SMB) network shares. Valid values are:
|
||
-cifsOpLocks
|
Enable opportunistic file locks (oplocks) for CIFS network shares. Valid values are:
|
||
-cifsNotifyOnWrite
|
Enable to receive notifications when users write to a CIFS share. Valid values are:
|
||
-cifsNotifyOnAccess
|
Enable to receive notifications when users access a CIFS share. Valid values are:
|
||
-cifsNotifyDirDepth
|
If the value for
-cifsNotifyOnWrite or
-cifsNotifyOnAccess is
yes (enabled), specify the subdirectory depth to which the notifications will apply. Value range is 1–512. Default is 512.
|
||
-sched
|
Type the ID of the schedule to apply to the file system.
|
||
-schedPaused
|
Pause the schedule specified for the
-sched qualifier. Valid values are:
|
||
-noSched
|
Unassigns the protection schedule.
|
||
-fastvpPolicy
|
Specify the FAST VP tiering policy for the file system. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
-replDest
|
Specifies whether the resource is a replication destination. Valid values are:
|
||
-poolFullPolicy
|
Specifies the policy to follow when the pool is full and a write to the file system is tried. This attribute enables you to preserve snapshots on the file system when a pool is full. Valid values are:
|
||
-eventProtocols
|
Specifies a list of file system access protocols enabled for Events Publishing. By default, the list is empty. Valid values are:
|
||
-flrMinRet
|
Specify the shortest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. Valid values are:
|
||
-flrDefRet
|
Specify the default retention period that is used in an FLR-enabled file system where a file is locked, but a retention period was not specified at the file level.
The format is (<integer> d|m|y) | infinite.
Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. The value of this parameter must be greater than the minimum retention period -flrMinRet. |
||
-flrMaxRet
|
Specify the longest retention period for which files on an FLR-enabled file system will be locked and protected from deletion. Values are:
The value should be greater than 1 day. Any non-infinite values plus the current date must be less than the maximum retention period of 2106-Feb-07. The value of this parameter must be greater than the default retention period -flrDefRet. |
||
-flrAutoLock
|
Specify whether automatic file locking is enabled for all new files in an FLR-enabled file system. Valid values are:
|
||
-flrAutoDelete
|
Specify whether locked files in an FLR-enabled file system will automatically be deleted once the retention period expires. Valid values are:
|
||
-flrPolicyInterval
|
If
-flrAutoLock is set to
yes, specify a time interval for how long after files are modified they will be automatically locked in an FLR-enabled file system.
The format is <value> <qualifier>, where value is an integer and the qualifier is:
The value should be greater than 1 minute and less than 366 days. |
||
-errorThreshold
|
Specify the threshold percentage that, when exceeded, error alert messages will be generated. The range is from 0 to 99, and the default value is 95%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value greater than the
-warningThreshold.
|
||
-warningThreshold
|
Specify the threshold percentage that, when exceeded, warning alert messages will be generated. The range is from 0 to 99, and the default value is 75%. If the threshold value is set to 0, this alert is disabled. This option must be set to a value less than the
-errorThreshold value, and greater than or equal to the
-infoThreshold value.
|
||
-infoThreshold
|
Specify the threshold percentage that, when exceeded, informational alert messages will be generated. The range is from 0 to 99, and the default value is 0 (disabled). This option must be set to a value less than the
-warningThreshold value.
|
Example
The following command specifies Events Publishing protocols:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 set -eventProtocols nfs,cifs
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = res_1
Operation completed successfully.
Delete file systems
Delete a file system.
|
NOTE:
Deleting a file system removes all network shares, and optionally snapshots associated with the file system from the system. After the file system is deleted, the files and folders inside it cannot be restored from snapshots. Back up the data from a file system before deleting it from the storage system.
|
|
NOTE:
You cannot delete an FLR-C enabled file system that has currently locked and protected files. An FLR-E file system can be deleted, even if it does contain protected files.
|
Format
/stor/prov/fs {-id <value> | -name <value>} delete [-deleteSnapshots {yes | no}] [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the file system to delete.
|
-name
|
Type the name of the file system to delete.
|
Action qualifiers
Qualifier
|
Description
|
---|---|
-deleteSnapshots
|
Specifies that snapshots of the file system can be deleted along with the file system itself. Valid values are:
|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes file system res_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs -id res_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage user quotas for file systems and quota trees
A user quota limits the amount of storage consumed by an individual user storing data on a file system or quota tree.
The following table lists the attributes for user quotas:
Attribute
|
Description
|
||
---|---|---|---|
File system
|
Identifier for the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
|
||
Path
|
Quota tree path relative to the root of the file system. If the user quota is on a file system, either do not use this qualifier, or set its value to
/.
|
||
User ID
|
User identifier on the file system.
|
||
Unix name
|
Comma-separated list of Unix user names associated with the user quota. Multiple Unix names may appear when the file system is a multiple protocol file system and multiple Unix names map to one Windows name in the user mapping configuration file (nxtmap).
|
||
Windows SIDs
|
Comma-separated list of Windows SIDs associated with the user quota.
|
||
Windows name
|
Comma-separated list of Windows user names associated with the user quota. Multiple Windows names may appear when the file system is a multiple protocol file system and multiple Windows names map to one Unix name in the user mapping configuration file (nxtmap).
|
||
Space used
|
Space used on the file system or quota tree by the specified user.
|
||
Soft limit
|
Preferred limit on storage usage. The system issues a warning when the soft limit is reached.
|
||
Hard limit
|
Absolute limit on storage usage. If the hard limit is reached for a user quota on a file system or quota tree, the user will not be able to write data to the file system or tree until more space becomes available.
|
||
Grace period left
|
Time period for which the system counts down days once the soft limit is met. If the user's grace period expires, users cannot write to the file system or quota tree until more space becomes available, even if the hard limit has not been reached.
|
||
State
|
State of the user quota. Valid values are:
|
Create a user quota on a file system or quota tree
You can create user quotas on a file system or quota tree:
- Create a user quota on a file system to limit or track the amount of storage space that an individual user consumes on that file system. When you create or modify a user quota on a file system, you have the option to use default hard and soft limits that are set at the file-system level.
- Create a user quota on a quota tree to limit or track the amount of storage space that an individual user consumes on that tree. When you create a user quota on a quota tree, you have the option to use the default hard and soft limits that are set at the quota-tree level.
Format
/quota/user create [-async] {-fs <value> | -fsName <value>} [-path <value>] {-userId <value> | -unixName <value> | -winName <value>} {-default | [-softLimit <value>] [-hardLimit <value>]}
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-fs
|
Specify the ID of the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
|
||
-fsName
|
Specify the name of the file system that the quota will act upon. The file system cannot be read-only or a replication destination.
|
||
-path
|
Specify either of the following:
|
||
-userId
|
Specify the user ID on the file system or quota tree.
|
||
-unixName
|
Specify the UNIX user name associated with the specified user ID.
|
||
-winName
|
Specify the Windows user name associated with the specified user ID. The format is: [<domain>\]<name> |
||
-default
|
Inherit the default quota limit settings for the user. To view the default limits, use the following command: /quota/config -fs <value> -path <value> show
If a soft limit or hard limit has not been specified for the user, the default limit is applied. |
||
-softLimit
|
Specify the preferred limit on storage usage by the user. A value of
0 means no limitation. If the hard limit is specified and the soft limit is not specified, there will be no soft limitation.
|
||
-hardLimit
|
Specify the absolute limit on storage usage by the user. A value of
0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
|
Example
The following command creates a user quota for user 201 on file system res_1, quota tree /qtree_1. The new user quota has the following limits:
- Soft limit is 20 GB.
- Hard limit is 50 GB.
uemcli -d 10.64.75.201 -u Local/joe -p MyPassword456! /quota/user create -fs res_1 -path /qtree_1 -userId 201 -softLimit 20G -hardLimit 50G
Storage system address: 10.64.75.201
Storage system port: 443
HTTPS connection
Operation completed successfully.
View user quotas
You can display space usage and limit information for user quotas on a file system or quota tree.
Because there can be a large number of user quotas on a file system or quota tree, to reduce the impact on system performance, the system only updates user quota data every 24 hours. You can use the refresh action to update the data more often. Use the /quota/config show command to see when the quota data was last refreshed.
|
NOTE:
The Unix name and Windows name values are returned only when displaying a single user quota.
|
Format
/quota/user {-fs <value> | -fsName <value>} [-path <value>] [-userId <value> | -unixName <value> | -winName <value>] [-exceeded] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify either of the following:
|
-userId
|
Specify the user ID on the file system or quota tree.
|
-unixName
|
Specify the Unix user name.
|
-winName
|
Specify the Windows user name. The format is: [<domain>\]<name> |
-exceeded
|
Only show user quotas whose state is not
OK.
|
Example
The following command displays space usage information for user nasadmin on file system res_1, quota tree /qtree_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /qtree_1 -unixName nasadmin show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: User ID = 201
Unix name = nasadmin
Windows names = dell\nasadmin, dell\nasad
Windows SIDs = S-1-5-32-544, S-1-5-32-545
Space used = 32768 (32K)
Soft limit = 16384 (16K)
Hard limit = 65536 (64K)
Grace period left = 7d 3h
State = Soft limit exceeded
Change quota limits for a specific user
You can change limits for user quotas on a file system or quota tree.
Format
/quota/user {-fs <value> | -fsName <value>} [-path <value>] {-userId <value> | -unixName <value> | -winName <value>} set [-async] {-default | [-softLimit <value>] [-hardLimit <value>]}
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify either of the following:
|
-userId
|
Specify the user ID on the file system or quota tree.
|
-unixName
|
Specify the UNIX user name associated with the specified user ID.
|
-winName
|
Specify the Windows user name associated with the specified user ID. The format is: [<domain>\]<name> |
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-default
|
Inherit the default quota limit settings for the user. To view the default limit, use the command:
/quota/config -fs <value> -path <value> show
If a soft or hard limit has not been specified for the user, the default limit is applied. |
||
-softLimit
|
Specify the preferred limit on storage usage by the user. A value of
0 means no limitation. If the hard limit is specified and the soft limit is not specified, there will be no soft limitation.
|
||
-hardLimit
|
Specify the absolute limit on storage usage by the user. A value of
0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
|
Example
The following command makes the following changes to the user quota for user 201 on file system res_1, quota tree path /qtree_1:
- Sets the soft limit to 10 GB.
- Sets the hard limit to 20 GB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /qtree_1 -userId 201 set -softLimit 10G -hardLimit 20G
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Refresh user quotas
Because there can be a large number of user quotas on a file system or quota tree, to reduce the impact on system performance, the system only updates user quota data every 24 hours. Use the refresh action to update the data more often. Use the /quota/config show command to see when the quota data was last refreshed.
Format
/quota/user {-fs <value> | -fsName <value>} [-path <value>] refresh [-updateNames] [-async]
Object qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-fs
|
Specify the ID of the file system.
|
||
-fsName
|
Specify the name of the file system.
|
||
-path
|
Specify either of the following:
|
||
-updateNames
|
Refresh the usage data of user quotas and the Windows user names, Windows SIDs, and Unix user names within a file system or quota tree.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command refreshes all user quotas on file system res_1, quota tree tree_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user -fs res_1 -path /tree_1 refresh
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage quota trees
A quota tree is a directory that has a quota applied to it, which limits or tracks the total storage space consumed in that directory. The hard limit, soft limit, and grace period settings you define for a quota tree are used as defaults for the quota tree's user quotas. You can override the hard and soft limit settings by explicitly specifying these settings when you create or modify a user quota.
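For example (a sketch built from the create commands documented below; the path and user ID are illustrative), the first command creates a quota tree whose limits serve as defaults for its user quotas, and the second overrides those defaults for one user:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree create -fs res_1 -path /project1 -softLimit 100G -hardLimit 200G
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/user create -fs res_1 -path /project1 -userId 201 -softLimit 10G -hardLimit 20G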
The following table lists the attributes for quota trees:
Attribute
|
Description
|
---|---|
File system
|
Identifier for the file system.
|
Path
|
Quota tree path relative to the root of the file system.
|
Description
|
Quota tree description.
|
Soft limit
|
Preferred limit on storage usage. The system issues a warning when the soft limit is reached.
|
Hard limit
|
Absolute limit on storage usage. If the hard limit is reached for a quota tree, users will not be able to write data to the tree until more space becomes available.
|
Grace period left
|
Period that counts down time once the soft limit is met. If the quota tree's grace period expires, users cannot write to the quota tree until more space becomes available, even if the hard limit has not been reached.
|
State
|
State of the quota tree. Valid values are:
|
Create a quota tree
Create a quota tree to track or limit the amount of storage consumed on a directory. You can use quota trees to:
- Set storage limits on a project basis. For example, you can establish quota trees for a project directory that has multiple users sharing and creating files in it.
- Track directory usage by setting the quota tree's hard and soft limits to 0 (zero), as in the sketch after this list.
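For example, the following sketch (the path /project_dir is illustrative) creates a tracking-only quota tree that records usage without enforcing limits:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree create -fs res_1 -path /project_dir -softLimit 0 -hardLimit 0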
Format
/quota/tree create [-async] {-fs <value> | -fsName <value>} -path <value> [-descr <value>] {-default | [-softLimit <value>] [-hardLimit <value>]}
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-fs
|
Specify the ID of the file system in which the quota tree will reside. The file system cannot be read-only or a replication destination.
|
||
-fsName
|
Specify the name of the file system in which the quota tree will reside. The file system cannot be read-only or a replication destination.
|
||
-path
|
Specify the quota tree path relative to the root of the file system.
|
||
-descr
|
Specify the quota tree description.
|
||
-default
|
Specify to inherit the default quota limit settings for the tree. Use the
View quota trees command to view these default limits.
|
||
-softLimit
|
Specify the preferred limit for storage space consumed on the quota tree. A value of
0 means no limitation. If the hard limit is specified and soft limit is not specified, there will be no soft limitation.
|
||
-hardLimit
|
Specify the absolute limit for storage space consumed on the quota tree. A value of
0 means no limitation. If the soft limit is specified and the hard limit is not specified, there will be no hard limitation.
|
Example
The following command creates quota tree /qtree_1 on file system res_1. The new quota tree has the following characteristics:
- Soft limit is 100 GB.
- Hard limit is 200 GB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree create -fs res_1 -path /qtree_1 -softLimit 100G -hardLimit 200G
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
View quota trees
You can display space usage and limit information for all quota trees on a file system or a single quota tree.
Because there can be a large number of quota trees on a file system, to reduce the impact on system performance, the system only updates quota data every 24 hours. You can use the refresh action to update the data more often. Use the /quota/config show command to see when the quota data was last refreshed.
|
Format
/quota/tree {-fs <value> | -fsName <value>} [-path <value>] [-exceeded] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify the quota tree path, which is relative to the root of the file system.
|
-exceeded
|
Only show quota trees whose state is not
OK.
|
Example
The following command displays space usage information for all quota trees on file system res_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Path = /qtree_1
Description = this is tree 1
Space used = 32768 (32K)
Soft limit = 53687091200 (50G)
Hard limit = 107374182400 (100G)
Grace period left = 7d
State = OK
2: Path = /qtree_2
Description =
Space used = 32768 (32K)
Soft limit = 16384 (16K)
Hard limit = 65536 (64K)
Grace period left = 7d
State = Soft limit exceeded
Set quota limits for a specific quota tree
You can specify that a specific quota tree inherit the associated file system's default quota limit settings, or you can manually set soft and hard limits on the quota tree.
Format
/quota/tree {-fs <value> | -fsName <value>} -path <value> set [-async] [-descr <value>] {-default | [-softLimit <value>] [-hardLimit <value>]}
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify the quota tree path, which is relative to the root of the file system.
|
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-descr
|
Quota tree description.
|
||
-default
|
Inherit the default quota limit settings from the associated file system. To view the default limits, use the following command:
/quota/config -fs <value> -path <value> show |
||
-softLimit
|
Specify the preferred limit for storage space consumed on the quota tree. A value of
0 means no limitation.
|
||
-hardLimit
|
Specify the absolute limit for storage space consumed on the quota tree. A value of
0 means no limitation.
|
Example
The following command makes the following changes to quota tree /qtree_1 in file system res_1:
- Sets the soft limit to 50 GB.
- Sets the hard limit to 100 GB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 -path /qtree_1 set -softLimit 50G -hardLimit 100G
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Refresh all quota trees on a file system
Because there can be a large number of quota trees on a file system, to reduce the impact on system performance, the system only updates quota data every 24 hours. You can use the refresh action to update the data more often. To view the time of the last data refresh, see the output field Tree quota update time of the /quota/config show command.
Format
/quota/tree {-fs <value> | -fsName <value>} refresh [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command refreshes quota information for all quota trees on file system res_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 refresh
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Delete quota trees
You can delete all quota trees on a file system or a specified quota tree.
Format
/quota/tree {-fs <value> | -fsName <value>} -path <value> delete [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify either of the following:
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes quota tree /qtree_1 on file system res_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/tree -fs res_1 -path /qtree_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage quota settings
Managing quota settings includes selecting a quota policy for a file system, setting default limits for a file system or quota tree, setting a default grace period, and disabling the enforcement of space usage limits for a quota tree and user quotas on the tree.
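For example (a sketch using the set command documented below; file system res_1 is illustrative), the first command selects the file-size quota policy, and the second disables enforcement while usage continues to be tracked. Note that the format below makes -policy mutually exclusive with the other action qualifiers, so the two changes are made in separate invocations:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 set -policy filesize
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 set -denyAccess no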
The following table lists the attributes for quota configuration:
Attribute
|
Description
|
||
---|---|---|---|
Path
|
Quota tree path relative to the root of the file system. For a file system, either do not use this attribute, or set its value to
/.
|
||
Quota policy
|
(Applies to file systems only.) Quota policy for the file system. Valid values are:
|
||
User quota
|
(Applies to file systems only.) Indicates whether to enforce user quotas on the file system. Valid values are:
|
||
Deny access
|
Indicates whether to enforce quota space usage limits for the file system. Value is one of the following:
|
||
Grace period
|
Time period for which the system counts down days once the soft limit is met. If the grace period expires for a file system or quota tree, users cannot write to the file system or quota tree until more space becomes available, even if the hard limit has not been crossed.
|
||
Default soft limit
|
Default preferred limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. The system issues a warning when the soft limit is reached.
|
||
Default hard limit
|
Default hard limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. If the hard limit is reached for a file system or quota tree, users will not be able to write data to the file system or tree until more space becomes available. If the hard limit is reached for a user quota on a file system or quota tree, that user will not be able to write data to the file system or tree.
|
||
Tree quota update time
|
Tree quota report updating time. The format is YYYY-MM-DD HH:MM:SS.
|
||
User quota update time
|
User quota report updating time. The format is YYYY-MM-DD HH:MM:SS.
|
Configure quota settings
You can configure quota configuration settings for a file system or quota tree.
Format
/quota/config {-fs <value> | -fsName <value>} [-path <value>] set [-async] {-policy {blocks | filesize} | [-userQuota {on | off | clear}] [-gracePeriod <value>] [-defaultSoft <value>] [-defaultHard <value>] [-denyAccess {yes | no}]}
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system for which you are configuring quota settings. The file system cannot be read-only or a replication destination.
|
-fsName
|
Specify the name of the file system for which you are configuring quota settings. The file system cannot be read-only or a replication destination.
|
-path
|
Specify the quota tree path relative to the root of the file system. For a file system, either do not use this attribute, or set its value to
/.
|
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-userQuota
|
Indicates whether to enforce user quotas on the file system or quota tree. Valid values are:
|
||
-policy
|
Specify the quota policy for the file system. Valid values are:
For more information, see Configure quota settings |
||
-gracePeriod
|
Specify the time period for which the system counts down days once the soft limit is met. If the grace period expires for a quota tree, users cannot write to the quota tree until more space becomes available, even if the hard limit has not been crossed. If the grace period expires for a user quota on a file system or quota tree, the individual user cannot write to the file system or quota tree until more space becomes available for that user. The default grace period is 7 days.
The format is: <value><qualifier>, where:
|
||
-defaultSoft
|
Specifies the default preferred limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the quota trees in the file system. The system issues a warning when the soft limit is reached.
|
||
-defaultHard
|
Specify the default hard limit on storage usage for user quotas on the file system, quota trees in the file system, and user quotas on the file system's quota trees. If the hard limit is reached for a quota tree, users will not be able to write data to the file system or tree until more space becomes available. If the hard limit is reached for a user quota on a file system or quota tree, that particular user will not be able to write data to the file system or tree.
|
||
-denyAccess
|
Indicates whether to enable quota limits for the file system. Valid values are:
|
Example
The following command configures quota tree /qtree_1 in file system res_1 as follows:
- Sets the default grace period to 5 days.
- Sets the default soft limit to 10 GB.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 -path /qtree_1 set -gracePeriod 5d -defaultSoft 10G
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
View quota configuration settings
You can display the quota configuration settings for a file system, a specific quota tree, or a file system and all of its quota trees.
Format
/quota/config {-fs <value> | -fsName <value>} [-path <value>] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-fs
|
Specify the ID of the file system.
|
-fsName
|
Specify the name of the file system.
|
-path
|
Specify the quota tree path relative to the root of the file system. For a file system, either do not use this attribute, or set its value to /. If this value is not specified, the command displays the quota configuration at the file system level and the quota configuration of all quota trees within the specified file system.
|
Example
The following command lists the quota configuration settings for file system res_1 and all of its quota trees:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /quota/config -fs res_1 show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: Path = /
Quota policy = blocks
User quota = on
Deny access = yes
Grace period = 7d
User soft limit = 53687091200 (50G)
User hard limit = 107374182400 (100G)
Tree quota update time = 2014-10-31 13:17:28
User quota update time = 2014-10-31 13:20:22
2: Path = /qtree_1
Quota policy = blocks
User quota = on
Deny access = yes
Grace period = 7d
User soft limit = 1073741824 (1G)
User hard limit = 10737418240 (10G)
Tree quota update time =
User quota update time =
Manage NFS network shares
Network file system (NFS) network shares use the NFS protocol to provide an access point for configured Linux/UNIX hosts, or IP subnets, to access file system storage. NFS network shares are associated with an NFS file system.
Each NFS share is identified by an ID.
The following table lists the attributes for NFS network shares:
Attribute
|
Description
|
||
---|---|---|---|
ID
|
ID of the share.
|
||
Name
|
Name of the share.
|
||
Description
|
Brief description of the share.
|
||
Local path
|
Name of the path, relative to the file system, of the directory that the share provides access to. Default is /, the root of the file system. A local path must point to an existing directory within the file system.
|
||
Export path
|
Export path, used by hosts to connect to the share.
|
||
File system
|
ID of the parent file system associated with the NFS share.
|
||
Default access
|
Default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
|
||
Advanced host management enabled
|
Indicates whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the
/remote/host command.) Values are (case insensitive):
For information about specifying host lists by using a string, see Specifying host lists by using a string. |
||
Read-only hosts
|
Comma-separated list of hosts that have read-only access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
||
Read/write hosts
|
Comma-separated list of hosts that have read-write access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
||
Read-only root hosts
|
Comma-separated list of hosts that have read-only root access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
||
Root hosts
|
Comma-separated list of hosts that have read-write root access to the share and its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
||
No access hosts
|
Comma-separated list of hosts that have no access to the share or its snapshots. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
||
Allow SUID
|
Specifies whether to allow users to set the
setuid and
setgid Unix permission bits. Values are (case insensitive):
|
||
Anonymous UID
|
(Applies when the host does not have
"allow root" access provided to it.) UID of the anonymous account. This account is mapped to client requests that arrive with a user ID of 0 (zero), which is typically associated with the user name
root. The default value is 4294967294 (-2), which is typically associated with the
nobody user (root squash).
|
||
Anonymous GID
|
(Applies when the host does not have
"allow root" access provided to it.) GID of the anonymous account. This account is mapped to client requests that arrive with a user ID of 0 (zero), which is typically associated with the user name
root. The default value is 4294967294 (-2), which is typically associated with the
nobody user (root squash).
|
||
Creation time
|
Creation time of the share.
|
||
Last modified time
|
Last modified time of the share.
|
||
Role
|
The specific usage of the file share. Value is one of the following:
|
||
Minimum security
|
Specifies the minimal security option that the client must provide for the NFS mount operation (in fstab). Value is one of the following, from lower to higher security level:
|
Specifying host lists by using a string
If advanced host management is disabled, a host list can contain a combination of network host names, IP addresses, subnets, netgroups, or DNS domains. The following formatting rules apply:
- An IP address can be an IPv4 or IPv6 address.
- A subnet can be an IP address/netmask or IP address/prefix length (for example: 168.159.50.0/255.255.255.0 or 168.159.50.0/24).
- The format of the DNS domain follows the UNIX/Linux format; for example, *.example.com. When specifying wildcards in fully qualified domain names, dots are not included in the wildcard. For example, *.example.com includes one.example.com, but does not include one.two.example.com.
- To specify that a name is a netgroup name, prepend the name with @. Otherwise, it is considered to be a host name.
If advanced host management is enabled, host lists contain the host IDs of existing hosts. You can obtain these IDs by using the /remote/host command.
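For example, with advanced host management disabled, a single read-only host list can mix these forms (the share name, file system ID, and host entries are illustrative):
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs create -name share1 -fs res_1 -path / -advHostMgmtEnabled no -roHosts host1.example.com,192.168.1.10,168.159.50.0/24,@netgroup1,*.example.com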
Create NFS network shares
Create an NFS share to export a file system through the NFS protocol.
|
NOTE:
Share access permissions set for specific hosts take effect only if the host-specific setting is less restrictive than the default access setting for the share. Additionally, setting access for a specific host to “No Access” always takes effect over the default access setting.
|
- Example 1: If the default access setting for a share is Read-Only, setting the access for a specific host configuration to Read/Write will result in an effective host access of Read/Write.
- Example 2: If the default access setting for the share is Read-Only, setting the access permission for a particular host configuration to No Access will take effect and prevent that host from accessing the share.
- Example 3: If the default access setting for a share is Read-Write, setting the access permission for a particular host configuration to Read-Only will result in an effective host access of Read/Write.
Prerequisite
Configure a file system to which to associate the NFS network shares. Create file systems explains how to create file systems on the system.
Format
/stor/prov/fs/nfs create [-async] -name <value> [-descr <value>] {-fs <value> | -fsName <value>} -path <value> [-defAccess {ro | rw | roroot | root | na}] [-advHostMgmtEnabled {yes | no}] [-roHosts <value>] [-rwHosts <value>] [-roRootHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-minSecurity {sys | krb5 | krb5i | krb5p}] [-allowSuid {yes | no}] [-anonUid <value>] [-anonGid <value>]
Action qualifiers
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
-name
|
Type a name for the share. By default, this value, along with the network name or the IP address of the NAS server, constitutes the export path by which hosts access the share.
You can use the forward slash character (/) to create a "virtual" namespace that is different from the real path name used by the share. For example, /fs1 and /fs2 can be represented as vol/fs1 and vol/fs2. The following considerations apply:
|
-descr
|
Type a brief description of the share.
|
-fs
|
Type the ID of the parent file system associated with the NFS share.
|
-fsName
|
Type the name of the parent file system associated with the NFS share.
|
-path
|
Type a name for the directory on the system where the share will reside. This path must correspond to an existing directory/folder name within the share that was created from the host side.
|
-defAccess
|
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
|
-advHostMgmtEnabled
|
Specify whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the
/remote/host command.) Values are (case insensitive):
For information about specifying host lists by using a string, see Specifying host lists by using a string. |
-roHosts
|
Type the IDs of hosts that have read-only access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-rwHosts
|
Type the IDs of hosts that have read-write access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-roRootHosts
|
Type the IDs of hosts that have read-only root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-rootHosts
|
Type the IDs of hosts that have read-write root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-naHosts
|
Type the ID of each host configuration for which you want to block access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-minSecurity
|
Specify the minimum security option that the client must provide for the NFS mount operation (for example, in fstab). Value is one of the following, from lowest to highest security level. All higher security levels are supported and can be enforced by the client when negotiating secure NFS access.
|
-allowSuid
|
Specifies whether to allow users to set the
setuid and
setgid Unix permission bits. Values are (case insensitive):
|
-anonUid
|
Specify the UID of the anonymous account.
|
-anonGid
|
Specify the GID of the anonymous account.
|
Example 1
The following command shows the output when the specified path is not found because it does not start with "/"; the share is not created.
uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs112 -fs res_26 -path "mypath"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation failed. Error code: 0x900a002
The system could not find the specified path. Please use an existing path. (Error Code:0x900a002)
Job ID = N-1339
Example 2
The following command shows the output when the path is correctly specified and the share is successfully created. The new NFS share has the following settings:
- NFS share name of "testnfs112"
- Parent file system of "res_26"
- On the directory "/mypath"
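A sketch of the corresponding command, assuming the same credentials and settings as Example 1 with the leading "/" added to the path:
uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs112 -fs res_26 -path "/mypath"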
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = NFSShare_20
Operation completed successfully.
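Example 3
The following sketch (the share name, path, and subnet are hypothetical) creates a share whose default access is read-only and grants read/write access to one subnet by specifying the host list as a string:
uemcli -u admin -p Password123! /stor/prov/fs/nfs create -name testnfs113 -fs res_26 -path "/mypath2" -defAccess ro -advHostMgmtEnabled no -rwHosts "192.168.5.0/24"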
View NFS share settings
View details of an NFS share. You can filter on the NFS share ID or view the NFS network shares associated with a file system ID.
Format
/stor/prov/fs/nfs [{-id <value> | -name <value> | -fs <value> | -fsName <value>}] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of an NFS share.
|
-name
|
Type the name of an NFS share.
|
-fs
|
Type the ID of an NFS file system to view the associated NFS network shares.
|
-fsName
|
Type the name of an NFS file system to view the associated NFS network shares.
|
Example
The following command lists details for all NFS network shares on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = NFSShare_1
Name = MyNFSshare1
Description = My nfs share
File system = res_26
Local path = /mypath
Export path = SATURN.domain.emc.com:/MyNFSshare1
Default access = na
Advanced host mgmt. = yes
Read-only hosts = 1014, 1015
Read/write hosts = 1016
Read-only root hosts =
Root hosts =
No access hosts =
Creation time = 2012-08-24 12:18:22
Last modified time = 2012-08-24 12:18:22
Role = production
Minimum security = krb5
Allow SUID = yes
Anonymous UID = 4294967294
Anonymous GID = 4294967294
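To list only the shares of a single file system, filter on the file system ID. A sketch, reusing file system res_26 from the earlier examples:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -fs res_26 show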
Change NFS share settings
Change the settings of an NFS share.
Format
/stor/prov/fs/nfs {-id <value> | -name <value>} set [-async] [-descr <value>] [-defAccess {ro | rw | roroot | root | na}] [-advHostMgmtEnabled {yes | no}] [-roHosts <value>] [-rwHosts <value>] [-roRootHosts <value>] [-rootHosts <value>] [-naHosts <value>] [-minSecurity {sys | krb5 | krb5i | krb5p}] [-allowSuid {yes | no}] [-anonUid <value>] [-anonGid <value>]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of an NFS share to change.
View NFS share settings explains how to view the IDs of the NFS network shares on the system.
|
-name
|
Type the name of an NFS share to change.
|
Action qualifiers
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
-descr
|
Type a brief description of the share.
|
-defAccess
|
Specify the default share access settings for host configurations and for unconfigured hosts that can reach the share. Value is one of the following:
|
-advHostMgmtEnabled
|
Specify whether host lists are configured by specifying the IDs of registered hosts or by using a string. (A registered host is defined by using the
/remote/host command.) Values are (case insensitive):
|
-roHosts
|
Type the IDs of hosts that have read-only access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-rwHosts
|
Type the IDs of hosts that have read-write access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-roRootHosts
|
Type the IDs of hosts that have read-only root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-rootHosts
|
Type the IDs of hosts that have read-write root access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-naHosts
|
Type the ID of each host configuration for which you want to block access to the share and its snapshots. Separate the IDs with commas. If advanced host management is enabled, this is a list of the IDs of registered hosts. Otherwise, this is a list of network host names, IPs, subnets, domains, or netgroups.
|
-minSecurity
|
Specifies the minimum security option that the client must provide for the NFS mount operation. Value is one of the following, from lowest to highest security level. All higher security levels are supported and can be enforced by the client when negotiating secure NFS access.
|
-allowSuid
|
Specifies whether to allow users to set the
setuid and
setgid Unix permission bits. Values are (case insensitive):
|
-anonUid
|
Specify the UID of the anonymous account.
|
-anonGid
|
Specify the GID of the anonymous account.
|
Example
The following command changes NFS share NFSShare_1 to block access to the share and its snapshots for host HOST_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFSShare_1 set -descr "My share" -naHosts "HOST_1"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = NFSShare_1
Operation completed successfully.
Delete NFS network shares
Delete an NFS share.
Format
/stor/prov/fs/nfs {-id <value> | -name <value>} delete [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of an NFS share to delete.
View NFS share settings explains how to view the IDs of the NFS network shares on the system.
|
-name
|
Type the name of an NFS share to delete.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes NFS share NFSShare_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/nfs -id NFSShare_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage SMB network shares
Server Message Block (SMB) network shares use the SMB (formerly known as CIFS) protocol to provide an access point for configured Windows hosts, or IP subnets, to access file system storage. SMB network shares are associated with an SMB file system.
Each SMB share is identified by an ID.
The following table lists the attributes for SMB network shares:
Attribute
|
Description
|
||
---|---|---|---|
ID
|
ID of the share.
|
||
Name
|
Name of the share.
|
||
Description
|
Brief description of the share.
|
||
Local path
|
Name of the directory within the file system that the share provides access to.
|
||
Export path
|
Export path, used by hosts to connect to the share.
|
||
File system
|
ID of the parent file system associated with the SMB share.
|
||
Creation time
|
Creation time of the share.
|
||
Last modified time
|
Last modified time of the share.
|
||
Availability enabled
|
Continuous availability state.
|
||
Encryption enabled
|
SMB encryption state.
|
||
Umask
|
Indicates the default Unix umask for new files created on the share. If not specified, the umask defaults to 022.
|
||
ABE enabled
|
Indicates whether an Access-Based Enumeration (ABE) filter is enabled. Valid values include:
|
||
DFS enabled
|
Indicates whether Distributed File System (DFS) is enabled. Valid values include:
|
||
BranchCache enabled
|
Indicates whether BranchCache is enabled. Valid values include:
|
||
Offline availability
|
Indicates whether Offline availability is enabled. When enabled, users can use this feature on their computers to work with shared folders stored on a server, even when they are not connected to the network. Valid values include:
|
Create CIFS network shares
Create a CIFS (SMB) share to export a file system through the CIFS protocol.
Prerequisite
Configure a file system to which to associate the CIFS network shares. Create file systems explains how to create file systems on the system.
Format
/stor/prov/fs/cifs create [-async] -name <value> [-descr <value>] {-fs <value> | -fsName <value>} -path <value> [-enableContinuousAvailability {yes | no}] [-enableCIFSEncryption {yes | no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type a name for the share.
|
||
-descr
|
Type a brief description of the share.
|
||
-fs
|
Type the ID of the parent file system associated with the CIFS share.
|
||
-fsName
|
Type the name of the parent file system associated with the CIFS share.
|
||
-path
|
Type the path to the directory within the file system that will be shared. This path must correspond to an existing directory/folder name within the share that was created from the host side. The default path is the root of the file system. Local paths must point to an existing directory within the file system.
|
||
-enableContinuousAvailability
|
Specify whether continuous availability is enabled.
|
||
-enableCIFSEncryption
|
Specify whether CIFS encryption is enabled.
|
||
-umask
|
Type the default Unix umask for new files created on the share.
|
||
-enableABE
|
Specify if Access-based Enumeration (ABE) is enabled. Valid values include:
|
||
-enableBranchCache
|
Specify if
BranchCache is enabled. Valid values include:
|
||
-offlineAvailability
|
Specify the type of offline availability. Valid values include:
|
Example
The following command creates a CIFS share with these settings:
- Name is CIFSshare.
- Description is “My share.”
- Associated with file system res_1.
- Local path on the file system is directory "/cifsshare".
- Continuous availability is enabled.
- CIFS encryption is enabled.
The share receives ID CIFSShare_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs create -name CIFSshare -descr "My share" -fs res_1 -path "/cifsshare" -enableContinuousAvailability yes -enableCIFSEncryption yes
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = CIFSShare_1
Operation completed successfully.
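The following sketch (the share name and path are hypothetical) creates a share with an explicit umask and Access-Based Enumeration enabled:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs create -name CIFSshare2 -fs res_1 -path "/cifsshare2" -umask 027 -enableABE yes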
View CIFS share settings
View details of a CIFS (SMB) share. You can filter on the CIFS share ID or view the CIFS network shares associated with a file system ID.
Format
/stor/prov/fs/cifs [{-id <value> | -name <value> | -fs <value> | -fsName <value>}] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a CIFS share.
|
-name
|
Type the name of a CIFS share.
|
-fs
|
Type the ID of a CIFS file system to view the associated CIFS network shares.
|
-fsName
|
Type the name of a CIFS file system to view the associated CIFS network shares.
|
Example
The following command lists details for all CIFS network shares on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs show
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = SMBShare_1
Name = fsmup
Description =
File system = res_1
Local path = /
Export path = \\sys-123.abc.xyz123.test.lab.emc.com\fsmup, \\10.0.0.0\fsmup
2: ID = SMBShare_2
Name = fsmup
Description =
File system = res_5
Local path = /
Export path = \\sys-123.abc.xyz123.test.lab.emc.com\fsmup, \\10.0.0.0\fsmup
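To list only the shares of a single file system, filter on the file system ID. A sketch, reusing file system res_5 from the output above:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -fs res_5 show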
Change CIFS share settings
Change the settings of a CIFS (SMB) share.
Format
/stor/prov/fs/cifs {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-enableContinuousAvailability {yes | no}] [-enableCIFSEncryption {yes | no}] [-umask <value>] [-enableABE {yes | no}] [-enableBranchCache {yes | no}] [-offlineAvailability {none | documents | programs | manual}]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a CIFS share to change.
|
-name
|
Type the name of a CIFS share to change.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
-descr
|
Specifies the description for the CIFS share.
|
-enableContinuousAvailability
|
Specifies whether continuous availability is enabled.
|
-enableCIFSEncryption
|
Specifies whether CIFS encryption is enabled.
|
-umask
|
Type the default Unix umask for new files created on the share.
|
-enableABE
|
Specify if Access-Based Enumeration (ABE) is enabled. Valid values include:
|
-enableBranchCache
|
Specify if BranchCache is enabled. Valid values include:
|
-offlineAvailability
|
Specify the type of offline availability. Valid values include:
|
Example
The following command sets the description of CIFS share SMBShare_1 to “My share”:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id SMBShare_1 set -descr "My share"
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = SMBShare_1
Operation completed successfully.
Delete CIFS network shares
Delete a CIFS (SMB) share.
Format
/stor/prov/fs/cifs {-id <value> | -name <value>} delete [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a CIFS share to delete.
|
-name
|
Type the name of a CIFS share to delete.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes CIFS share CIFSShare_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/fs/cifs -id CIFSShare_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Manage LUNs
A LUN is a single unit of storage, provisioned from a storage pool, that hosts access over Fibre Channel (FC) or iSCSI. Each LUN is associated with a name and a logical unit number identifier (LUN ID).
The following table lists the attributes for LUNs:
Attribute
|
Description
|
||
---|---|---|---|
ID
|
ID of the LUN.
|
||
Name
|
Name of the LUN.
|
||
Description
|
Brief description of the LUN.
|
||
Group
|
Name of the consistency group of which the LUN is a member.
|
||
Storage pool ID
|
ID of the storage pool the LUN is using.
|
||
Storage pool
|
Name of the storage pool the LUN is using.
|
||
Type
|
Type of LUN. Value is one of the following (case insensitive):
|
||
Base storage resource
|
(Applies to thin clones only) ID of the base LUN for the thin clone. |
||
Source
|
(Applies to thin clones only) ID of the source snapshot for the thin clone. |
||
Original parent
|
(Applies to thin clones only) ID of the parent LUN for the thin clone. |
||
Health state
|
Health state of the LUN. The health state code appears in parentheses. Value is one of the following:
|
||
Health details
|
Additional health information.
|
||
Size
|
Current size of the LUN.
|
||
Maximum size
|
Maximum size of the LUN.
|
||
Thin provisioning enabled
|
Identifies whether thin provisioning is enabled. Valid values are:
All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be oversubscribed to support more storage capacity than they actually possess.
|
||
Data Reduction enabled
|
Identifies whether data reduction is enabled. Valid values are:
|
||
Data Reduction space saved
|
Total space saved for the LUN (in gigabytes) by using data reduction.
|
||
Data Reduction percent
|
Total storage percentage saved for the LUN by using data reduction.
|
||
Data Reduction ratio
|
Ratio between data without data reduction and data after data reduction savings.
|
||
Advanced deduplication enabled
|
Identifies whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
|
||
Current allocation
|
If thin provisioning is enabled, the quantity of primary storage currently allocated through thin provisioning.
|
||
Total pool space preallocated
|
Space reserved from the pool by the LUN for future needs to make writes more efficient. The pool may be able to reclaim some of this space if it is unused and pool space is running low.
|
||
Total pool space used
|
Total pool space used by the LUN.
|
||
Non-base size used
|
(Applies to standard LUNs only) Quantity of the storage used for the snapshots and thin clones associated with this LUN.
|
||
Family size used
|
(Applies to standard LUNs only) Quantity of the storage used for the whole LUN family.
|
||
Snapshot count
|
Number of snapshots created on the LUN.
|
||
Family snapshot count
|
(Applies to standard LUNs only) Number of snapshots created in the LUN family, including all derivative snapshots.
|
||
Family thin clone count
|
(Applies to standard LUNs only) Number of thin clones created in the LUN family, including all derivative thin clones.
|
||
Protection schedule
|
ID of a protection schedule applied to the LUN.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
Protection schedule paused
|
Identifies whether an applied protection schedule is currently paused.
|
||
WWN
|
World Wide Name of the LUN.
|
||
Replication destination
|
Identifies whether the storage resource is a destination for a replication session (local or remote). Valid values are:
|
||
Creation time
|
Time the resource was created.
|
||
Last modified time
|
Time the resource was last modified.
|
||
SP owner
|
Identifies the default owner of the LUN. Value is
SP A or
SP B.
|
||
Trespassed
|
Identifies whether the LUN is trespassed to the peer SP. Valid values are:
|
||
FAST VP policy
|
FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
FAST VP distribution
|
Percentage of the LUN assigned to each tier. The format is:
<tier_name>:<value>% where:
|
||
LUN access hosts
|
List of hosts with access permissions to the LUN.
|
||
Host LUN IDs
|
Comma-separated list of HLUs (Host LUN identifiers), which the corresponding hosts use to access the LUN.
|
||
Snapshots access hosts
|
List of hosts with access to snapshots of the LUN.
|
||
IO limit
|
Name of the host I/O limit policy applied.
|
||
Effective maximum IOPS
|
The effective maximum IO per second for the LUN. For LUNs with a density-based IO limit policy, this value is equal to the product of the
Maximum IOPS and the
Size of the attached LUN.
|
||
Effective maximum KBPS
|
The effective maximum KBs per second for the LUN. For LUNs with a density-based IO limit policy, this value is equal to the product of the
Maximum KBPS and the
Size of the attached LUN.
|
Create LUNs
Create a LUN to which host initiators connect to access storage.
Prerequisites
Configure at least one storage pool for the LUN to use and allocate at least one drive to the pool. Configure custom pools explains how to create a custom storage pool on the system.
Format
/stor/prov/luns/lun create [-async] -name <value> [-descr <value>] [-type {primary | tc {-source <value> | -sourceName <value>}}] [{-group <value> | -groupName <value>}] [{-pool <value> | -poolName <value>}] [-size <value>] [-thin {yes | no}] [-sched <value> [-schedPaused {yes | no}]] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-lunHosts <value> [-hlus <value>]] [-snapHosts <value>] [-replDest {yes | no}] [-ioLimit <value>] [-dataReduction {yes [-advancedDedup {yes | no}] | no}]
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type the name of the LUN.
|
||
-descr
|
Type a brief description of the LUN.
|
||
-type
|
Specify the type of LUN. Valid values are (case insensitive):
|
||
-source
|
(Applies to thin clones only) Specify the ID of the source object to use for thin clone creation.
|
||
-sourceName
|
(Applies to thin clones only) Specify the name of the source object to use for thin clone creation.
|
||
-group
|
(Not applicable when creating a thin clone) Type the ID of a consistency group to which to associate the new LUN.
View consistency groups explains how to view information on consistency groups.
|
||
-groupName
|
(Not applicable when creating a thin clone) Type the name of a consistency group to which to associate the new LUN.
|
||
-pool
|
(Not applicable when creating a thin clone) Type the ID of the storage pool that the LUN will use.
|
||
-poolName
|
(Not applicable when creating a thin clone) Type the name of the storage pool that the LUN will use.
|
||
-size
|
(Not applicable when creating a thin clone) Type the quantity of storage to allocate for the LUN.
|
||
-thin
|
(Not applicable when creating a thin clone) Enable thin provisioning on the LUN. Valid values are:
|
||
-sched
|
Type the ID of a protection schedule to apply to the storage resource.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
-schedPaused
|
Pause the schedule specified for the
-sched qualifier. Valid values are:
|
||
-spOwner
|
(Not applicable when creating a thin clone) Specify the default SP to which the LUN will belong. The storage system determines the default value. Valid values are:
|
||
-fastvpPolicy
|
(Not applicable when creating a thin clone) Specify the FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
-lunHosts
|
Specify a comma-separated list of hosts with access to the LUN.
|
||
-hlus
|
Specifies the comma-separated list of Host LUN identifiers to be used by the corresponding hosts which were specified in the
-lunHosts option. The number of items in the two lists must match. However, an empty string is a valid value for any element of the Host LUN identifiers list, as long as commas separate the list elements. Such an empty element signifies that the system should automatically assign the Host LUN identifier value by which the corresponding host will access the LUN.
If not specified, the system will automatically assign the Host LUN identifier value for every host specified in the -lunHosts argument list. |
||
-snapHosts
|
Specify a comma-separated list of hosts with access to snapshots of the LUN.
|
||
-replDest
|
(Not applicable when creating a thin clone) Specifies whether the resource is a replication destination. Valid values are:
|
||
-ioLimit
|
Specify the name of the host I/O limit policy to be applied.
|
||
-dataReduction
|
(Not applicable when creating a thin clone) Specify whether data reduction is enabled for this LUN. Valid values are:
|
||
-advancedDedup
|
Specify whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
|
Example 1
The following command creates a LUN with these settings:
- Name is MyLUN.
- Description is “My LUN.”
- Associated with LUN consistency group group_1.
- Uses the pool_1 storage pool.
- Primary storage size is 100 MB.
The LUN receives the ID lun_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyLUN" -descr "My LUN" -type primary -group group_1 -pool pool_1 -size 100M
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_1
Operation completed successfully.
Example 2
The following command creates a thin clone called MyTC from SNAP_1. The thin clone receives the ID lun_3.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyTC" -descr "My FC" -type tc -source SNAP_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_3
Operation completed successfully.
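Example 3
The following sketch (the host IDs are hypothetical) creates a LUN and grants access to two hosts, assigning HLU 0 to Host_1 and leaving the second list element empty so that the system assigns the HLU for Host_2 automatically:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun create -name "MyLUN2" -pool pool_1 -size 100M -lunHosts Host_1,Host_2 -hlus "0,"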
View LUNs
Display the list of existing LUNs.
Format
/stor/prov/luns/lun [{-id <value> | -name <value> | -group <value> | -groupName <value> | -standalone}] [-type {primary | tc [{-baseRes <value> | -baseResName <value> | -originalParent <value> | -originalParentName <value> | -source <value> | -sourceName <value>}]}] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a LUN.
|
-name
|
Type the name of a LUN.
|
-group
|
Type the ID of a consistency group. The list of LUNs in the specified consistency group are displayed.
|
-groupName
|
Type the name of a consistency group. The list of LUNs in the specified consistency group are displayed.
|
-standalone
|
Displays only LUNs that are not part of a consistency group.
|
-type
|
Identifies the type of resources to display. Valid values are (case insensitive):
|
-baseRes
|
(Applies to thin clones only) Type the ID of a base LUN by which to filter thin clones.
|
-baseResName
|
(Applies to thin clones only) Type the name of a base LUN by which to filter thin clones.
|
-originalParent
|
(Applies to thin clones only) Type the ID of a parent LUN by which to filter thin clones.
|
-originalParentName
|
(Applies to thin clones only) Type the name of a parent LUN by which to filter thin clones.
|
-source
|
(Applies to thin clones only) Type the ID of a source snapshot by which to filter thin clones.
|
-sourceName
|
(Applies to thin clones only) Type the name of a source snapshot by which to filter thin clones.
|
Example 1
The following command displays information about all LUNs and thin clones on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = sv_1
Name = AF LUN 1
Description =
Group =
Storage pool ID = pool_1
Storage pool = Pool 1
Type = Primary
Base storage resource = sv_1
Source =
Original parent =
Health state = OK (5)
Health details = "The LUN is operating normally. No action is required."
Size = 21474836480 (20.0G)
Maximum size = 281474976710656 (256.0T)
Thin provisioning enabled = yes
Compression enabled = yes
Compression space saved = 5637144576 (5.2G)
Compression percent = 44%
Compression ratio = 1.8:1
Data Reduction enabled = yes
Data Reduction space saved = 5637144576 (5.2G)
Data Reduction percent = 44%
Data Reduction ratio = 1.8:1
Advanced deduplication enabled = no
Current allocation = 4606345216 (4.2G)
Protection size used = 0
Non-base size used = 0
Family size used = 12079595520 (11.2G)
Snapshot count = 2
Family snapshot count = 2
Family thin clone count = 0
Protection schedule = snapSch_1
Protection schedule paused = no
WWN = 60:06:01:60:10:00:43:00:B7:15:A5:5B:B1:7C:01:2B
Replication destination = no
Creation time = 2018-09-21 16:00:55
Last modified time = 2018-09-21 16:01:41
SP owner = SPB
Trespassed = no
LUN access hosts = Host_2
Host LUN IDs = 0
Snapshots access hosts =
IO limit =
Effective maximum IOPS = N/A
Effective maximum KBPS = N/A
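Example 2
The following sketch lists only the thin clones whose base LUN is sv_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -type tc -baseRes sv_1 show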
Change LUNs
Change the settings for a LUN.
Format
/stor/prov/luns/lun {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [-size <value>] [{-group <value> | -groupName <value> | -standalone}] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-spOwner {spa | spb}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest}] [-lunHosts <value> [-hlus <value>]] [-snapHosts <value>] [-replDest {yes | no}] [-ioLimit <value> | -noIoLimit] [-dataReduction {yes [-advancedDedup {yes | no}] | no}]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the LUN to change.
|
-name
|
Type the name of the LUN to change.
|
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type the name of the LUN.
|
||
-descr
|
Type a brief description of the LUN.
|
||
-group
|
(Not applicable to thin clones) Type the ID of a consistency group to which to associate the LUN.
View consistency groups explains how to view information on consistency groups.
|
||
-groupName
|
(Not applicable to thin clones) Type the name of a consistency group to which to associate the LUN.
|
||
-size
|
Type the quantity of storage to allocate for the LUN.
|
||
-standalone
|
(Not applicable to thin clones) Remove the LUN from the consistency group.
|
||
-sched
|
Type the ID of the schedule to apply to the LUN.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
-schedPaused
|
Pause the schedule specified for the
-sched qualifier. Valid values are:
|
||
-noSched
|
Unassigns the protection schedule.
|
||
-spOwner
|
(Not applicable to thin clones) Specify the default owner of the LUN. Valid values are:
|
||
-fastvpPolicy
|
(Not applicable to thin clones) Specify the FAST VP tiering policy for the LUN. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
-lunHosts
|
Specify a comma-separated list of hosts with access to the LUN.
|
||
-hlus
|
Specifies the comma-separated list of Host LUN identifiers to be used by the corresponding hosts which were specified in the
-lunHosts option. The number of items in the two lists must match. However, an empty string is a valid value for any element of the Host LUN identifiers list, as long as commas separate the list elements. Such an empty element signifies that the system should automatically assign the Host LUN identifier value by which the corresponding host will access the LUN.
If not specified, the system will automatically assign the Host LUN identifier value for every host specified in the -lunHosts argument list. |
||
-snapHosts
|
Specify a comma-separated list of hosts with access to snapshots of the LUN.
|
||
-replDest
|
Specifies whether the resource is a replication destination. Valid values are:
|
||
-ioLimit
|
Specify the name of the host I/O limit policy to be applied.
|
||
-noIoLimit
|
Specify the removal of an applied host I/O limit policy.
|
||
-dataReduction
|
(Not applicable to thin clones) Specify whether data reduction is enabled for the LUN. Valid values are:
|
||
-advancedDedup
|
Specify whether advanced deduplication is enabled for this LUN. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported on the LUN. Valid values are:
|
Example 1
The following command updates LUN lun_1 with these settings:
- Name is NewName.
- Description is “My new description.”
- Primary storage size is 150 MB.
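A sketch of the corresponding command, assembled from the qualifiers documented above:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 set -name NewName -descr "My new description" -size 150M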
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_1
Operation completed successfully.
Example 2
The following command adds access for two new hosts to LUN lun_2 in addition to its existing hosts:
- host13
- host14
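Because -lunHosts specifies the complete access list, the existing hosts must be repeated. A sketch, assuming the existing hosts are host11 and host12 (hypothetical names):
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_2 set -lunHosts "host11,host12,host13,host14"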
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = lun_2
Operation completed successfully.
Delete LUNs
Delete a LUN.
|
NOTE:
Deleting a LUN removes all associated data from the system. After a LUN is deleted, you cannot restore the data inside it from snapshots. Back up the data from a LUN to another host before deleting it from the system.
|
Format
/stor/prov/luns/lun {-id <value> | -name <value>} delete [-deleteSnapshots {yes | no}] [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the LUN to delete.
|
-name
|
Type the name of the LUN to delete.
|
Action qualifiers
Qualifier
|
Description
|
---|---|
-deleteSnapshots
|
Specify that snapshots of the LUN can be deleted along with the LUN itself. Valid values are:
|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes LUN lun_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
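To delete a LUN together with its snapshots, include the -deleteSnapshots qualifier. A sketch:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_1 delete -deleteSnapshots yes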
Refresh thin clones of a LUN
(Applies to thin clones only) Refresh a LUN's thin clone. This updates the thin clone's data with data from the specified source snapshot and re-parents the thin clone to that snapshot.
Format
/stor/prov/luns/lun {-id <value> | -name <value>} refresh [-async] {-source <value> | -sourceName <value>} [-copyName <value>] [-force]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the thin clone to refresh.
|
-name
|
Type the name of the thin clone to refresh.
|
Action qualifiers
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
-source
|
Specify the ID of the snapshot to be used for the thin clone refresh. The snapshot must be part of the base LUN family.
|
-sourceName
|
Specify the name of the snapshot to be used for the thin clone refresh. The snapshot must be part of the base LUN family.
|
-copyName
|
Specify the name of the copy to be created before the thin clone refresh.
|
-force
|
Specify to unconditionally refresh the LUN, even if it has host access configured.
|
Example
The following command refreshes the thin clone called lun_5_tc with data from snapshot SNAP_2.
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/lun -id lun_5_tc refresh -source SNAP_2 -copyName Backup1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = 38654705846
Operation completed successfully.
Manage consistency groups
Consistency groups provide a way to organize and group LUNs together to simplify storage tiering and snapshots when an application spans multiple LUNs.
The following table lists the attributes for consistency groups:
Attribute
|
Description
|
||
---|---|---|---|
ID
|
ID of the consistency group.
|
||
Name
|
Name of the consistency group.
|
||
Description
|
Brief description of the consistency group.
|
||
Type
|
Type of consistency group. Value is one of the following (case insensitive):
|
||
Base storage resource
|
(Applies to thin clones only) ID of the base consistency group for the thin clone.
|
||
Source
|
(Applies to thin clones only) ID of the source snapshot for the thin clone.
|
||
Original parent
|
(Applies to thin clones only) ID of the parent consistency group for the thin clone.
|
||
Health state
|
Health state of the consistency group. The health state code appears in parentheses. Value is one of the following:
|
||
Health details
|
Additional health information. See Appendix A, Reference, for health information details.
|
||
Total capacity
|
Total capacity of all associated LUNs.
|
||
Total current allocation
|
Total current allocation of all associated LUNs.
|
||
Total pool space preallocated
|
Space reserved from the pool by all associated LUNs for future needs to make writes more efficient. Equal to the sum of all the
sizePreallocated values of each LUN in the group. The pool may be able to reclaim some of this space if pool space is running low.
|
||
Total pool space used
|
Total pool space used in the pool for all the associated LUNs, their snapshots or thin clones, and overhead.
|
||
Thin provisioning enabled
|
Identifies whether thin provisioning is enabled. Value is yes or no. Default is no. All storage pools support both standard and thin provisioned storage resources. For standard storage resources, the entire requested size is allocated from the pool when the resource is created; for thin provisioned storage resources, only incremental portions of the size are allocated based on usage. Because thin provisioned storage resources can subscribe to more storage than is actually allocated to them, storage pools can be overprovisioned to support more storage capacity than they actually possess.
|
||
Data Reduction enabled
|
Identifies whether data reduction is enabled. Valid values are:
|
||
Advanced deduplication enabled
|
Identifies whether advanced deduplication is enabled. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported for the LUN in the consistency group. Valid values are:
|
||
Total non-base size used
|
(Applies to standard consistency groups only) Quantity of storage used for the snapshots and thin clones associated with this consistency group.
|
||
Total family size used
|
(Applies to standard consistency groups only) Quantity of storage used for the whole consistency group family.
|
||
Snapshot count
|
Number of snapshots created on the resource.
|
||
Family snapshot count
|
(Applies to standard consistency groups only) Number of snapshots created in the consistency group family, including all derivative snapshots.
|
||
Family thin clone count
|
(Applies to standard consistency groups only) Number of thin clones created in the consistency group family, including all derivative thin clones.
|
||
Protection schedule
|
ID of a protection schedule applied to the consistency group.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
Protection schedule paused
|
Identifies whether an applied protection schedule is currently paused.
|
||
LUN access hosts
|
List of hosts with access permissions to the associated LUNs.
|
||
Snapshots access hosts
|
List of hosts with access to snapshots of the associated LUNs.
|
||
Replication destination
|
Identifies whether the storage resource is a destination for a replication session (local or remote). Valid values are:
|
||
Creation time
|
Time the consistency group was created.
|
||
Last modified time
|
Time the consistency group was last modified.
|
||
FAST VP policy
|
FAST VP tiering policy for the consistency group. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations for each LUN in the consistency group. Valid values (case-insensitive):
|
||
FAST VP distribution
|
Percentage of the resource assigned to each tier. The format is:
<tier_name>:<value>% where:
|
Create a consistency group
Create a consistency group.
Format
/stor/prov/luns/group create [-async] -name <value> [-descr <value>] [-type {primary | tc {-source <value> | -sourceName <value>}}] [-sched <value> [-schedPaused {yes | no}]] [-replDest {yes | no}]
Action qualifiers
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type the name of the consistency group.
|
||
-descr
|
Type a brief description of the consistency group.
|
||
-type
|
Specify the type of consistency group. Valid values are (case insensitive):
|
||
-source
|
(Applies to thin clones only) Specify the ID of the source snapshot to use for thin clone creation.
|
||
-sourceName
|
(Applies to thin clones only) Specify the name of the source snapshot to use for thin clone creation.
|
||
-sched
|
Type the ID of a protection schedule to apply to the consistency group.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
-schedPaused
|
Specify whether to pause the protection schedule specified for
-sched. Valid values are:
|
||
-replDest
|
(Not applicable when creating a thin clone) Specifies whether the resource is a replication destination. Valid values are:
|
Example 1
The following command creates a consistency group with these settings:
- Name is GenericStorage01.
- Description is “MyStorage.”
- Uses protection schedule SCHD_1.
The storage resource receives the ID group_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group create -name GenericStorage01 -descr "MyStorage" -sched SCHD_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = group_1
Operation completed successfully.
Example 2
The following command creates a thin clone with these settings:
- Name is MyFC.
- Source is SNAP_1.
The storage resource receives the ID group_2:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group create -name "MyFC" -descr "My FC" -type tc -sourceName SNAP_1
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = group_2
Operation completed successfully.
View consistency groups
Display the list of existing consistency groups.
Format
/stor/prov/luns/group [{-id <value> | -name <value> | -type {primary | tc [{-originalParent <value> | -originalParentName <value> | -source <value> | -sourceName <value> | -baseRes <value> | -baseResName <value>}]}}] show
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of a consistency group.
|
-name
|
Type the name of a consistency group.
|
-type
|
Identifies the type of resources to display. Valid values are (case insensitive):
|
-originalParent
|
(Applies to thin clones only) Type the ID of a parent consistency group by which to filter thin clones.
|
-originalParentName
|
(Applies to thin clones only) Type the name of a parent consistency group by which to filter thin clones.
|
-source
|
(Applies to thin clones only) Type the ID of a source snapshot by which to filter thin clones.
|
-sourceName
|
(Applies to thin clones only) Type the name of a source snapshot by which to filter thin clones.
|
-baseRes
|
(Applies to thin clones only) Type the ID of a base consistency group by which to filter thin clones.
|
-baseResName
|
(Applies to thin clones only) Type the name of a base consistency group by which to filter thin clones.
|
Example
The following command displays details about the consistency groups and thin clones on the system:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group show -detail
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
1: ID = group_1
Name = MyLUNGroup
Description = My Consistency group
Type = Primary
Base storage resource =
Source =
Original parent =
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Total capacity = 107374182400 (100G)
Thin provisioning enabled = no
Total current allocation = 107374182400 (100G)
Total pool space preallocated = 4292853760 (3.9G)
Total Pool Space Used = 9128919040 (8.5G)
Total protection size used = 0
Snapshot count = 0
Compression enabled = yes
Data Reduction enabled = yes
Advanced deduplication enabled = yes
Total current allocation = 10737418240 (10G)
Protection schedule = SCHD_1
Protection schedule paused = no
LUNs access hosts = 1014, 1015
Snapshots access hosts = 1016(mixed)
Replication destination = no
Creation time = 2012-12-21 12:55:32
Last modified time = 2013-01-15 10:31:56
FAST VP policy = mixed
FAST VP distribution = Best Performance: 55%, High Performance: 10%, High Capacity: 35%
2: ID = group_2
Name = MyLUNGroupFC
Description = My Consistency group
Type = Thin clone
Base storage resource = group_1
Source = snap_1
Original parent = group_1
Health state = OK (5)
Health details = "The component is operating normally. No action is required."
Total capacity = 107374182400 (100G)
Thin provisioning enabled = yes
Total current allocation =
Total pool space preallocated =
Total Pool Space Used =
Total protection size used =
Total non-base size used = 0
Total family size used = 0
Snapshot count = 0
Compression enabled = no
Data Reduction enabled = no
Advanced deduplication enabled = no
Protection schedule = SCHD_1
Protection schedule paused = no
LUNs access hosts = 1014, 1015
Snapshots access hosts =
Replication destination = no
Creation time = 2012-12-21 12:55:32
Last modified time = 2013-01-15 10:31:56
FAST VP policy = mixed
FAST VP distribution =
Change consistency groups
Change the settings for a consistency group.
Format
/stor/prov/luns/group {-id <value> | -name <value>} set [-async] [-name <value>] [-descr <value>] [{-sched <value> | -noSched}] [-schedPaused {yes | no}] [-lunHosts <value>] [-snapHosts <value>] [-replDest {yes | no}] [-fastvpPolicy {startHighThenAuto | auto | highest | lowest | none}] [-dataReduction {yes [-advancedDedup {yes | no}] | no}]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the consistency group to change.
|
-name
|
Type the name of the consistency group to change.
|
Action qualifier
Qualifier
|
Description
|
||
---|---|---|---|
-async
|
Run the operation in asynchronous mode.
|
||
-name
|
Type the name of the consistency group.
|
||
-descr
|
Type a brief description of the consistency group.
|
||
-sched
|
Type the ID of the schedule to apply to the consistency group.
View protection schedules explains how to view the IDs of the schedules on the system.
|
||
-schedPaused
|
Pause the schedule specified for the
-sched qualifier. Valid values are:
|
||
-noSched
|
Unassign the protection schedule.
|
||
-lunHosts
|
Specify a comma-separated list of hosts with access to the LUN.
|
||
-snapHosts
|
Specify a comma-separated list of hosts with access to snapshots of the LUN.
|
||
-replDest
|
Specify whether the resource is a replication destination. Valid values are:
|
||
-fastvpPolicy
|
(Cannot be changed for thin clones) Specify the FAST VP tiering policy for the consistency group. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations. Valid values (case-insensitive):
|
||
-dataReduction
|
(Cannot be changed for thin clones) Specify whether data reduction is enabled for LUNs in this consistency group. Valid values are:
|
||
-advancedDedup
|
Specify whether advanced deduplication is enabled for LUNs in this consistency group. This option is available only after data reduction has been enabled. An empty value indicates that advanced deduplication is not supported for LUNs in this consistency group. Valid values are:
|
Example
The following command updates the consistency group group_1 with these settings:
- Name is NewName.
- Description is “New description.”
- Uses protection schedule SCHD_2.
- The selected schedule is currently paused.
- The FAST VP policy is start high then auto-tier.
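A sketch of the corresponding command, assembled from the qualifiers documented above:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group -id group_1 set -name NewName -descr "New description" -sched SCHD_2 -schedPaused yes -fastvpPolicy startHighThenAuto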
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
ID = group_1
Operation completed successfully.
Delete consistency groups
Delete a consistency group.
|
NOTE:
Deleting a consistency group removes all LUNs and data associated with the consistency group from the system. After a consistency group is deleted, you cannot restore the data from snapshots. Back up the data from the consistency group before deleting it.
|
Format
/stor/prov/luns/group {-id <value> | -name <value>} delete [-deleteSnapshots {yes | no}] [-async]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the consistency group to delete.
|
-name
|
Type the name of the consistency group to delete.
|
Action qualifier
Qualifier
|
Description
|
---|---|
-deleteSnapshots
|
Specify that snapshots of the consistency group can be deleted along with the consistency group itself. Valid values are:
|
-async
|
Run the operation in asynchronous mode.
|
Example
The following command deletes consistency group group_1:
uemcli -d 10.0.0.1 -u Local/joe -p MyPassword456! /stor/prov/luns/group -id group_1 delete
Storage system address: 10.0.0.1
Storage system port: 443
HTTPS connection
Operation completed successfully.
Refresh thin clones of a consistency group
(Applies to thin clones only) Refresh a consistency group's thin clone. This updates the thin clone's data with data from the specified source snapshot and re-parents the thin clone to that snapshot.
Format
/stor/prov/luns/group {-id <value> | -name <value>} refresh [-async] {-source <value> | -sourceName <value>} [-copyName <value>] [-force]
Object qualifiers
Qualifier
|
Description
|
---|---|
-id
|
Type the ID of the consistency group to refresh.
|
-name
|
Type the name of the consistency group to refresh.
|
Action qualifiers
Qualifier
|
Description
|
---|---|
-async
|
Run the operation in asynchronous mode.
|
-source
|
Specify the ID of the snapshot to be used for thin clone refresh. The snapshot must be part of the base consistency group family.
|
-sourceName
|