Dell EMC VPLEX: DU post changing the LUN type from thin to thick on BE array
Summary: This article describes how to mitigate Data Unavailability (DU) when a LUN that was originally provisioned as thin to VPLEX is changed to thick on the back-end (BE) array.
Symptoms
Issue:
DU or a severe performance impact is seen on an affected volume after it is converted from a thin to a thick LUN on the back-end array.
The following firmware events are observed during the issue:
1. Streaming SCSI/27 events with sense code 05/20/00 (Illegal Request, Invalid Command Operation Code) are reported in response to UNMAP commands (cmd 0x42) against the storage-volume whose LUN type was changed to thick on the BE array, as follows:
firmware.log_20200213085454.1:128.221.252.68/cpu0/log:5988:W/"0xxxxxxxxxxxxxxxx-2":99648:<6>2020/04/11 10:14:53.65: scsi/27 tgt VPD83T3:6XXXXXXXXXXXXXXX cmd 0x42 status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0x5 info 0x0 alen 10 csi 0x0 asc 0x20 ascq 0x0 fru 0x0 sks 0x0
firmware.log_20200213085454.1:128.221.252.68/cpu0/log:5988:W/"0xxxxxxxxxxxxxxxx-2":99649:<6>2020/04/11 10:14:53.79: scsi/27 tgt VPD83T3:6XXXXXXXXXXXXXXX cmd 0x42 status 0x2 valid 0 resp 0x70 seg 0x0 bits 0x0 key 0x5 info 0x0 alen 10 csi 0x0 asc 0x20 ascq 0x0 fru 0x0 sks 0x0
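The sense fields in these log lines can be decoded by hand. As an illustrative sketch only (this is not a VPLEX tool), the following parses a firmware log line and maps the fixed sense codes, which come from the SCSI standard: sense key 0x5 is ILLEGAL REQUEST and ASC/ASCQ 0x20/0x00 is INVALID COMMAND OPERATION CODE.

```python
import re

# Hypothetical helper (not a VPLEX utility): decode the SCSI sense fields
# from a firmware.log line like the ones above.  The lookup tables below
# hold standard SCSI sense-code meanings.
SENSE_KEYS = {0x5: "ILLEGAL REQUEST"}
ASC_ASCQ = {(0x20, 0x00): "INVALID COMMAND OPERATION CODE"}

LINE_RE = re.compile(
    r"cmd (?P<cmd>0x[0-9a-f]+).*?key (?P<key>0x[0-9a-f]+).*?"
    r"asc (?P<asc>0x[0-9a-f]+) ascq (?P<ascq>0x[0-9a-f]+)"
)

def decode_sense(line: str) -> str:
    m = LINE_RE.search(line)
    if not m:
        return "no sense data found"
    cmd = int(m.group("cmd"), 16)
    key = int(m.group("key"), 16)
    asc, ascq = int(m.group("asc"), 16), int(m.group("ascq"), 16)
    return "cmd 0x%02x: %s / %s" % (
        cmd,
        SENSE_KEYS.get(key, "key 0x%x" % key),
        ASC_ASCQ.get((asc, ascq), "asc/ascq 0x%02x/0x%02x" % (asc, ascq)),
    )

line = ("scsi/27 tgt VPD83T3:6XXX cmd 0x42 status 0x2 valid 0 resp 0x70 "
        "seg 0x0 bits 0x0 key 0x5 info 0x0 alen 10 csi 0x0 asc 0x20 "
        "ascq 0x0 fru 0x0 sks 0x0")
print(decode_sense(line))
# cmd 0x42: ILLEGAL REQUEST / INVALID COMMAND OPERATION CODE
```

Command 0x42 is UNMAP; a LUN that answers UNMAP with ILLEGAL REQUEST / INVALID COMMAND OPERATION CODE no longer supports space reclamation, which is consistent with the LUN having been converted to thick.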
2. Because the LUN type was changed to thick, every UNMAP command that VPLEX sends to the BE fails; after 20 consecutive UNMAP/write failures, the affected storage-volume is marked dead as follows:
NOTE: Meanwhile, VPLEX also attempts to auto-resurrect the storage-volume.
firmware.log_20200213085454.8:128.221.253.67/cpu0/log:5988:W/"0xxxxxxxxxxxxxxxx-1":22086:<4>2020/04/11 00:03:20.69: amf/45 disk VPD83T3:6XXXXXXXXXXXXXXX: write failure: marking this in-use disk dead
firmware.log_20200213085454.8:128.221.253.67/cpu0/log:5988:W/"0xxxxxxxxxxxxxxxx-1":22097:<6>2020/04/11 00:03:31.34: amf/125 disk VPD83T3:6XXXXXXXXXXXXXXX resurrected
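The dead/resurrected cycle above can be pictured as a consecutive-failure counter. This is an illustrative sketch of the behaviour the article describes, not VPLEX source code; the threshold of 20 is taken from the text above.

```python
# Assumed logic, for illustration only: a storage-volume is marked dead
# after a run of consecutive back-end write/UNMAP failures, and the
# counter resets on any successful I/O.
CONSECUTIVE_FAILURE_LIMIT = 20  # per the article text

class StorageVolume:
    def __init__(self, name: str):
        self.name = name
        self.failures = 0
        self.dead = False

    def record_io(self, ok: bool) -> None:
        if ok:
            self.failures = 0          # any success resets the run
            return
        self.failures += 1
        if self.failures >= CONSECUTIVE_FAILURE_LIMIT:
            self.dead = True           # "marking this in-use disk dead"

vol = StorageVolume("VPD83T3:6XXX")
for _ in range(20):                    # every UNMAP now fails on the thick LUN
    vol.record_io(ok=False)
print(vol.dead)
# True
```

Because auto-resurrect brings the volume back while the thin-capable flag is still wrong, the same failure run repeats, which is why the dead/resurrected pair recurs in the log.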
3. In scenarios where the volume was initially provisioned as thin to VPLEX and later changed to thick, the thin-capable property is not auto-updated in VPLEX; the affected virtual-volume therefore continues to report thin-capable as true, as follows:
VPlexcli:/clusters/cluster-1/virtual-volumes/device_****_vol> ll
Name Value
-------------------------- ----------------------------------------
block-count 429654016
block-size 4K
cache-mode synchronous
capacity 12G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state critical-failure
locality distributed
operational-status error
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-array-family clariion
storage-tier -
supporting-device device_****_1
system-id device_***_1_vol
thin-capable true
thin-enabled disabled
volume-type virtual-volume
vpd-id VPD83T3:60001440000****************
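An affected volume can be spotted in an `ll` listing by the combination shown above: thin-capable still true while the volume health indicates a failure. As a hypothetical check (not a VPLEX command), the listing can be parsed and tested like this:

```python
# Illustrative sketch: parse a VPlexcli `ll` attribute listing and flag
# a virtual-volume that still reports thin-capable true while its
# health-state indicates a failure -- the signature of this issue.
def parse_ll(text: str) -> dict:
    attrs = {}
    for line in text.splitlines():
        parts = line.split(None, 1)   # "name   value" columns
        if len(parts) == 2:
            attrs[parts[0]] = parts[1].strip()
    return attrs

def affected(attrs: dict) -> bool:
    return (attrs.get("thin-capable") == "true"
            and attrs.get("health-state") in ("critical-failure", "error"))

listing = """\
health-state               critical-failure
operational-status         error
thin-capable               true
thin-enabled               disabled
"""
print(affected(parse_ll(listing)))
# True
```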
Cause
In the current release, there is an issue in the VPLEX back-end code where it can erroneously continue to consider a LUN thin-capable after the underlying LUN on the back-end array is converted from thin to thick provisioning.
When the LUN type changes on the back-end array, the thin-capable attribute should be updated at both the virtual-volume and storage-volume levels. At the storage-volume level, 'thin-capable' is a read-only attribute and should be refreshed automatically; at the virtual-volume level, it must be changed manually.
If the thin-capable attribute is not manually changed at the virtual-volume level, VPLEX continues to send UNMAP requests to the logical unit whose LUN type was changed to thick, and all those requests are rejected by the back-end LUN.
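The failure mode reduces to a stale cached flag gating UNMAP pass-through. The sketch below is assumed logic for illustration, not VPLEX code: VPLEX forwards UNMAP only when its cached thin-capable flag is true, so a flag that is never refreshed after the back-end change means every forwarded UNMAP is rejected by the now-thick LUN.

```python
# Illustrative model of the cause described above (assumed logic):
# VPLEX's cached thin-capable flag decides whether UNMAP is forwarded
# to the back-end LUN at all.
def handle_unmap(cached_thin_capable: bool, backend_is_thin: bool) -> str:
    if not cached_thin_capable:
        return "unmap-not-sent"        # state after the manual workaround
    # Flag still true: the command is forwarded to the back-end LUN,
    # which rejects it if the LUN is now thick.
    return "success" if backend_is_thin else "rejected"

print(handle_unmap(cached_thin_capable=True, backend_is_thin=False))
# rejected
print(handle_unmap(cached_thin_capable=False, backend_is_thin=False))
# unmap-not-sent
```

Setting the virtual-volume's thin-capable attribute to false, as in the workaround below, corresponds to the first branch: no further UNMAP commands are sent, so the failure run never starts.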
Resolution
This issue is addressed in GeoSynchrony 6.2.0.00.00.32 and later releases.
Workaround Steps:
1. After changing the LUN type from thin to thick on the BE array, make sure that the "thin-capable" attribute on the corresponding virtual-volume is updated accordingly. Setting the attribute to false on the virtual-volume stops VPLEX from sending any further UNMAP commands to the BE LUN, as follows:
1.a) Log into the vplexcli context as follows:
NOTE: On VPLEX running GeoSynchrony releases before 6.x, logging in to the vplexcli requires the service account credentials.
service@ManagementServer:~> vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Enter User Name: service
Password:
Creating logfile:/var/log/VPlex/cli/session.log_service_localhost_Logfile_T24531_yyyymmddhhmmss
1.b) Navigate to the affected virtual-volume context and run the command below; the output shows that the "thin-capable" attribute is still set to "true" even after the LUN type was changed from thin to thick on the BE array:
Example:
VPlexcli:/clusters/cluster-1/virtual-volumes/device_****_vol> ll
Name Value
-------------------------- ----------------------------------------
block-count 429654016
block-size 4K
cache-mode synchronous
capacity 12G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state critical-failure
locality distributed
operational-status error
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-array-family clariion
storage-tier -
supporting-device device_****_1
system-id device_***_1_vol
thin-capable true
thin-enabled disabled
volume-type virtual-volume
vpd-id VPD83T3:60001440000****************
1.c) Manually set the "thin-capable" attribute to "false", which disables thin provisioning at the virtual-volume level:
Example:
VPlexcli:/clusters/cluster-1/virtual-volumes/device_****_vol> set thin-capable false
1.d) After changing the 'thin-capable' attribute to "false" on the virtual-volume, the health of the problematic virtual-volume should change to "ok". Run the 'cluster status' command to check the overall health of the VPLEX:
Example:
VPlexcli:/clusters/cluster-1/virtual-volumes/device_****_vol> ll
Name Value
-------------------------- ----------------------------------------
block-count 429654016
block-size 4K
cache-mode synchronous
capacity 12G
consistency-group -
expandable true
expandable-capacity 0B
expansion-method storage-volume
expansion-status -
health-indications []
health-state ok
locality distributed
operational-status ok
recoverpoint-protection-at []
recoverpoint-usage -
scsi-release-delay 0
service-status running
storage-array-family clariion
storage-tier -
supporting-device device_****_1
system-id device_**_1_vol
thin-capable false
thin-enabled disabled
volume-type virtual-volume
vpd-id VPD83T3:60001440000****************
VPlexcli:/> cluster status
Cluster cluster-1
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok
Cluster cluster-2
operational-status: ok
transitioning-indications:
transitioning-progress:
health-state: ok
health-indications:
local-com: ok
wan-com: ok
2. If the virtual-volume health still reports an "error" or "critical-failure" state after following the steps above, perform an array re-discover against the BE array that the problematic logical unit belongs to. The array re-discover should automatically refresh the attribute at the storage-volume level:
Example:
VPlexcli:/> array re-discover -a /clusters/cluster-1/storage-elements/storage-arrays/EMC-CLARiiON-CKM0018******* -c cluster-1
3. If, even after multiple array re-discover attempts, the problematic virtual-volume health still reports "error" or "critical-failure", remove the corresponding logical unit from the array's storage group/pool on the back-end array, add it back, and re-run the array re-discover command so that a fresh discovery is triggered on the VPLEX side.
4. If none of the above steps resolves the issue, we recommend upgrading to the fixed release mentioned above before proceeding with any further LUN type change activity.
Affected Products
VPLEX Series
Products
VPLEX for All Flash, VPLEX GeoSynchrony, VPLEX Series, VPLEX VS2, VPLEX VS6
Article Properties
Article Number: 000172418
Article Type: Solution
Last Modified: 17 Jun 2025
Version: 3