ECS: Partially allocated disk reported; drives are mounted but not showing in the UI
Summary: After a disk expansion or replacement, a partially allocated disk is reported by the Disk Health script, or drives are mounted in the object-main container and in fabric but do not appear as mounted in the UI. This occurs because SSM does not show the disks as assigned to an owner. ...
Symptoms
ECS disk is partially allocated
ECS disk has no block bins
ECS disk is 1% full
ECS disk is not 100% full
ECS disk is not preallocated
To confirm if you have this issue:
Command:
# svc_disk list -capacity -a
Example (truncated output; the disks reporting "Unknown" are the replaced disks or the disks added during expansion):
ecs03:~ # svc_disk list -capacity -a
(truncated output)
r1n3 /dev/sdh1 A06 e70c8956-edfe-461b-8863-5b6523a7a24a 380.00 55.00 325.00
r1n3 /dev/sdi1 A07 436d8761-1a24-48ca-84fe-51b023fd626d 380.00 56.62 323.38
r1n3 /dev/sdj1 A08 8b20a88c-5019-4a57-81b4-61de03ce358b 380.00 51.25 328.75
r1n3 /dev/sdk1 A09 fd24991a-051c-4167-88cc-e0c965588ab5 380.00 53.87 326.13
r1n3 /dev/sdm1 A10 e2b8f29e-3e41-4a7d-a491-4591b4e84063 Unknown Unknown Unknown
r1n3 /dev/sdn1 A11 c2b1eefb-b416-4aa6-8370-7301f4660fda Unknown Unknown Unknown
r1n3 /dev/sdo1 B00 13d9e678-2b2f-492b-a0a5-f7a29911cccc Unknown Unknown Unknown
r1n3 /dev/sdp1 B01 a8f69cd4-e446-484e-915b-48b9301af708 Unknown Unknown Unknown
r1n3 /dev/sdq1 B02 8abd06d2-932b-46b7-b889-25123268a25b Unknown Unknown Unknown
r1n3 /dev/sdr1 B03 ecc79b91-26c7-49ad-9ecd-13fe4ccf5741 Unknown Unknown Unknown
r1n3 /dev/sds1 B04 6d1b1864-9182-4d70-a0a5-51d191d251db Unknown Unknown Unknown
r1n3 /dev/sdt1 B05 77dbfa2f-8a4e-4c2a-af52-455340b31a82 Unknown Unknown Unknown
r1n3 /dev/sdu1 B06 c3600600-205c-4617-89b8-ddfae9028fbb Unknown Unknown Unknown
r1n3 /dev/sdv1 B07 2662abb3-4ecd-4f76-a2b3-8fa4253e5118 Unknown Unknown Unknown
r1n3 /dev/sdw1 B08 4dd503cb-6090-4fae-9b41-d6ab0e7c9d6e Unknown Unknown Unknown
r1n3 /dev/sdx1 B09 d3c3c0e4-fdfd-4534-ba99-35e354d2cef0 Unknown Unknown Unknown
r1n3 /dev/sdy1 B10 b74cf08a-ccc7-4e8b-b788-99b07163f68d Unknown Unknown Unknown
r1n3 /dev/sdz1 B11 b8d95702-525d-470d-b595-3c2e98d4e1fb Unknown Unknown Unknown
r1n4 /dev/sdaa1 C00 3184e585-db6c-4697-a6da-5c4abb78c2aa 380.00 2.50 377.50
r1n4 /dev/sdc1 A00 fee50f0d-cc91-44b4-a994-b1280482de71 380.00 46.37 333.63
r1n4 /dev/sdd1 A01 8c47ff29-f58b-4f8c-96b5-c6c79e28d49a 380.00 52.50 327.50
r1n4 /dev/sde1 A02 c2077176-e44f-4db9-999a-cfcd65eb7b81 380.00 49.37 330.63
r1n4 /dev/sdf1 A03 968b6a4e-bc73-42c9-9a92-f70f55209956 380.00 54.87 325.13
r1n4 /dev/sdg1 A05 43e18255-4a46-438f-bd73-d637df394dc8 380.00 47.75 332.25
(truncated output)
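To pull out just the affected devices from output like the above, a simple awk filter on the capacity column can help. This is an illustrative sketch, not an ECS tool: the heredoc holds sample lines mimicking the svc_disk output; on a live node you would pipe `svc_disk list -capacity -a` into the same filter instead.

```shell
# Sketch: print the device column ($2) for rows whose total capacity ($5)
# reads "Unknown". The heredoc below is sample data copied from the output
# above; replace it with a pipe from `svc_disk list -capacity -a`.
unknown_disks=$(awk '$5 == "Unknown" {print $2}' <<'EOF'
r1n3 /dev/sdh1 A06 e70c8956-edfe-461b-8863-5b6523a7a24a 380.00 55.00 325.00
r1n3 /dev/sdm1 A10 e2b8f29e-3e41-4a7d-a491-4591b4e84063 Unknown Unknown Unknown
r1n3 /dev/sdn1 A11 c2b1eefb-b416-4aa6-8370-7301f4660fda Unknown Unknown Unknown
EOF
)
echo "$unknown_disks"
```

The same filter works unchanged against the full output; only disks with no reported capacity are listed.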
Disk Health 0.92 and higher will also report this issue:
ECS: Disk Health script
Checking for partially allocated disks. If problem found, see KB 504837
There seems to be an issue with 192.168.219.8.
/dev/sdt1 7811938304 33152 7811905152 1% /dae/uuid-cada0901-dd4b-4045-9a23-a342b05b8a34
/dev/sdp1 7811938304 33152 7811905152 1% /dae/uuid-950b1859-319a-4ee7-93d8-d4e0604b8e6c
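The Disk Health check above flags /dae mounts sitting at 1% use. The same condition can be reproduced with a small awk filter over df-style output. This is an illustrative sketch: the heredoc holds sample rows (the 99% row and its UUID are made up for contrast; the 1% rows are copied from the output above), and on a live node you would pipe `df` output into the filter instead.

```shell
# Sketch: print mount points ($6) under /dae/uuid- whose use column ($5)
# is exactly "1%". Sample data only; pipe real `df` output on a live node.
suspect_mounts=$(awk '$5 == "1%" && $6 ~ /^\/dae\/uuid-/ {print $6}' <<'EOF'
/dev/sdb1 7811938304 7700000000 111938304 99% /dae/uuid-00000000-0000-0000-0000-000000000000
/dev/sdt1 7811938304 33152 7811905152 1% /dae/uuid-cada0901-dd4b-4045-9a23-a342b05b8a34
/dev/sdp1 7811938304 33152 7811905152 1% /dae/uuid-950b1859-319a-4ee7-93d8-d4e0604b8e6c
EOF
)
echo "$suspect_mounts"
```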
Further confirm that the new drives are counted by cs_hal, by the fabric agent, and inside the object-main container. (In this example, 15 disks are detected on each node after adding 5 per node.)
Command:
viprexec -i "docker exec object-main df | grep -c uuid-"
Example:
### Checking the object-main container for mounted disk count
admin@ecs:~> viprexec -i "docker exec object-main df | grep -c uuid-"
Output from host : 192.168.219.1
15
Output from host : 192.168.219.2
15
Output from host : 192.168.219.3
15
Output from host : 192.168.219.4
15
Output from host : 192.168.219.5
15
Output from host : 192.168.219.6
15
Command:
viprexec -i "/opt/emc/caspian/fabric/cli/bin/fcli agent disk.disks | grep -cw MOUNTED"
Example:
### Checking the fabric agent for mounted disk count
admin@ecs:~> viprexec -i "/opt/emc/caspian/fabric/cli/bin/fcli agent disk.disks | grep -cw MOUNTED"
Output from host : 192.168.219.1
15
Output from host : 192.168.219.2
15
Output from host : 192.168.219.3
15
Output from host : 192.168.219.4
15
Output from host : 192.168.219.5
15
Output from host : 192.168.219.6
15
Command:
viprexec -i "cs_hal list disks | grep total"
Example:
### Checking cs_hal for disk count
admin@ecs:~> viprexec -i "cs_hal list disks | grep total"
Output from host : 192.168.219.1
total: 15
Output from host : 192.168.219.2
total: 15
Output from host : 192.168.219.3
total: 15
Output from host : 192.168.219.4
total: 15
Output from host : 192.168.219.5
total: 15
Output from host : 192.168.219.6
total: 15
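The three checks above should all agree. As a hedged illustration of the comparison (not an ECS tool), the counts can be compared in plain shell; the values below are hard-coded to the example's 15, and on a live cluster you would capture them from the three commands shown above.

```shell
# Sketch: compare the three per-node disk counts gathered above.
# Values are hard-coded for illustration; capture them on a live node, e.g.:
#   object_main_count=$(docker exec object-main df | grep -c uuid-)
object_main_count=15   # object-main container mounts
fabric_count=15        # fabric agent MOUNTED disks
cs_hal_count=15        # cs_hal total

if [ "$object_main_count" -eq "$fabric_count" ] && [ "$fabric_count" -eq "$cs_hal_count" ]; then
  status="counts agree: $cs_hal_count disks per node"
else
  status="count mismatch: object-main=$object_main_count fabric=$fabric_count cs_hal=$cs_hal_count"
fi
echo "$status"
```

If these three counts agree but SSM (queried below) tracks fewer disks, the symptom matches this article.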
When querying SSM for one of the nodes, you can see that the new disks' UUIDs are missing. (The missing UUIDs should correspond to the disks reported as "Unknown" in the svc_disk output above.)
Command:
ssh 169.254.<rack number>.<node number> "/opt/emc/caspian/fabric/cli/bin/fcli agent node.id" | grep id
Example:
### Get device ID of impacted node
admin@ecs:~> ssh 192.168.219.6 "/opt/emc/caspian/fabric/cli/bin/fcli agent node.id" | grep id
"id" : "515190ac-35a0-4557-80af-8bc9790449b4",
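If you want the bare ID without the surrounding JSON punctuation (for pasting into the svc_dt command in the next step), a sed expression can strip it out. This is an illustrative sketch run on the sample line above; on a live node you would pipe the fcli output through the same sed instead of echoing a literal.

```shell
# Sketch: extract the quoted value after "id" from a JSON-style line.
# The echoed line is copied from the example output above.
node_id=$(echo '"id" : "515190ac-35a0-4557-80af-8bc9790449b4",' \
  | sed -E 's/.*"id" *: *"([^"]+)".*/\1/')
echo "$node_id"
```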
Command:
svc_dt search SS/1/SSTABLE_KEY -t PARTITION -k device=<id from above> | grep schema
Example:
### Query SSM of node6
admin@ecs:~> svc_dt search SS/1/SSTABLE_KEY -t PARTITION -k device=515190ac-35a0-4557-80af-8bc9790449b4 | grep schema
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 19ffd192-1f8b-4b92-8da9-ade2521ef32c
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 29f398dc-57a1-425c-aaa8-c4c47fe40384
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 2caf424e-cd4e-4796-9c62-0c11301ad78a
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 659dca69-7e96-49e2-9535-1b1fa1cc225d
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 7eeb09e7-2d9b-4a2f-8fff-eafbbb949cc5
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 9b9e62ec-598e-4b3e-b913-34d82070b526
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition aa0a9f6e-6ff2-47da-a131-c3832196230e
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition c1827e62-464f-47e9-8148-0fc5b42f1800
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition ca18a993-0f82-4978-b377-e40fa3bbe97b
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition db362f53-6740-45cb-9403-cc930497bc60
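Rather than eyeballing the records, you can count them and compare against the mounted-disk count from the earlier checks (in this article's example the node mounts 15 disks while SSM returns only 10 partition records). This is an illustrative sketch: the heredoc holds three sample records copied from the output above, and on a live node you would pipe the svc_dt search command into the same `grep -c` instead.

```shell
# Sketch: count how many partition records SSM returns for the device.
# Sample data only; on a live node use:
#   svc_dt search SS/1/SSTABLE_KEY -t PARTITION -k device=<id> | grep -c schemaType
ssm_partitions=$(grep -c 'schemaType' <<'EOF'
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 19ffd192-1f8b-4b92-8da9-ade2521ef32c
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 29f398dc-57a1-425c-aaa8-c4c47fe40384
schemaType SSTABLE_KEY type PARTITION device 515190ac-35a0-4557-80af-8bc9790449b4 partition 2caf424e-cd4e-4796-9c62-0c11301ad78a
EOF
)
mounted_disks=15   # from the object-main/fabric/cs_hal checks above

if [ "$ssm_partitions" -lt "$mounted_disks" ]; then
  echo "SSM tracks $ssm_partitions disks but $mounted_disks are mounted: symptom matches this KB"
fi
```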
Cause
This is a minor code issue. Once it is confirmed that SSM reports a different disk count than the object-main mounted disks, the fabric-mounted disks, and the cs_hal total, go to the Resolution section.
Resolution
Contact ECS Support and reference this KB article when opening the service request for further assistance.
Affected Products
ECS Appliance

Products
ECS Appliance Hardware Gen3 EX5000, ECS Appliance, ECS Appliance Gen 1, ECS Appliance Gen 2, ECS Appliance Gen 3, ECS Appliance Hardware Gen3 EX300, ECS Appliance Hardware Gen3 EX3000, ECS Appliance Hardware Gen1 U-Series, ECS Appliance Hardware Gen1 C-Series, ECS Appliance Hardware Gen2 C-Series, ECS Appliance Hardware Gen2 D-Series, ECS Appliance Hardware Gen2 U-Series, ECS Appliance Hardware Gen3 EX500, ECS Appliance Hardware Gen3 EXF900, ECS Appliance Hardware Series, ECS Appliance Software with Encryption, ECS Appliance Software without Encryption, Elastic Cloud Storage
...
Article Properties
Article Number: 000026542
Article Type: Solution
Last Modified: 08 Sept 2025
Version: 4