Govindagouda
85 Posts
Max # of DA Write Pending Slots : 0
Hi,
We are having issues with a VMAX where the EMC support team says the DAs are highly utilised. When we verify, we see the DFs are 70-80% used, yet the IO on each DA is only 100 to 200 IO/s and 40 to 60 MB/s.
But when I look at the output of symcfg list -v, I see the following:
Symmwin Version                     : 188
Enginuity Build Version             : 5875.249.188
Service Processor Time Offset       : + 02:51:07
Cache Size (Mirrored)               : 481280 (MB)
# of Available Cache Slots          : 4948528
Max # of System Write Pending Slots : 3711396
Max # of DA Write Pending Slots     : 0
Max # of Device Write Pending Slots : 185569
Replication Cache Usage (Percent)   : 16
Can someone tell me why my Max # of DA Write Pending Slots is 0, when in many Primus articles the value is more than 0?
Is this an issue with the data, or with the Symm?
Quincy561
1.3K Posts
October 26th, 2012 06:00
The DA write pending limit hasn't been in use for many years; I think the DMX-3 is where it stopped being used.
100-200 IO/s on a DA is practically idle. If they are showing 60-70% busy, they are doing a lot of background tasks.
Quincy561
1.3K Posts
October 26th, 2012 07:00
I have never seen a Symmetrix system where the DA ports were a limiting factor. I would ignore any DA port statistics, EXCEPT on the VMAX 40K and the new 10K systems, where there is one DA CPU per port.
It sounds like your DA CPUs are out of gas. Moving write workload off RAID 6 and onto RAID 1 would be a good start.
Govindagouda
85 Posts
October 26th, 2012 07:00
DA stats
{p1san:rgovind1:/home/rgovind1} symstat -DA all -i 5 -c 5
            DIRECTOR   IO/sec     Cache Requests/sec    % RW
10:05:28                 Disk     READ   WRITE     RW   Hits
10:05:45 DF-1A 1201 657 177 835 0
DF-2A 1508 870 223 1094 0
DF-3A 1411 888 190 1078 0
DF-4A 1698 1055 254 1309 0
DF-5A 1128 668 152 821 0
DF-6A 1212 644 173 818 0
DF-7A 1230 693 165 858 0
DF-8A 1189 646 174 821 0
DF-9A 1199 667 170 837 0
DF-10A 1198 647 171 818 0
DF-11A 1241 692 171 864 0
DF-12A 1207 684 174 858 0
DF-13A 1689 860 247 1107 0
DF-14A 1751 1068 258 1327 0
DF-15A 1216 648 182 831 0
DF-16A 1412 860 185 1045 0
DF-1B 1292 654 192 846 0
DF-2B 1228 633 188 822 0
DF-3B 1749 902 276 1178 0
DF-4B 2455 1268 402 1670 0
DF-5B 1341 680 196 877 0
DF-6B 1058 418 191 609 0
DF-7B 2017 1023 307 1330 0
DF-8B 1064 452 175 628 0
DF-9B 1350 666 207 873 0
DF-10B 1069 431 177 608 0
DF-11B 1249 672 176 848 0
DF-12B 1102 431 188 619 0
DF-13B 1662 835 263 1098 0
DF-14B 1702 858 260 1118 0
DF-15B 1220 635 172 808 0
DF-16B 1255 618 200 819 0
DF-1C 1429 837 201 1039 0
DF-2C 1189 641 172 814 0
DF-3C 1809 1046 253 1299 0
DF-4C 1676 869 245 1115 0
DF-5C 1229 648 191 839 0
DF-6C 1247 653 185 838 0
DF-7C 1360 652 213 866 0
DF-8C 1257 642 180 822 0
DF-9C 1348 666 207 873 0
DF-10C 1394 692 224 916 0
DF-11C 2047 997 325 1323 0
DF-12C 1370 681 210 892 0
DF-13C 1975 1058 301 1359 0
DF-14C 1742 876 293 1170 0
DF-15C 1445 862 199 1061 0
DF-16C 1229 623 189 812 0
DF-1D 1261 627 196 823 0
DF-2D 1233 615 183 799 0
DF-3D 1757 837 281 1118 0
DF-4D 1710 833 265 1098 0
DF-5D 1061 445 173 619 0
DF-6D 1328 633 206 840 0
DF-7D 1118 436 190 627 0
DF-8D 1209 664 174 839 0
DF-9D 1135 425 203 629 0
DF-10D 1265 652 184 836 0
DF-11D 1071 428 183 611 0
DF-12D 1206 663 167 830 0
DF-13D 1759 866 264 1130 0
DF-14D 1693 840 267 1107 0
DF-15D 1263 648 191 840 0
DF-16D 1210 635 179 814 0
------ ------ ------ ------ ---
Total 89328 46113 13530 59670 0
Quincy561
1.3K Posts
October 26th, 2012 07:00
A host write can create 6 backend IOs when protected with RAID6.
1,500 IOs/sec is not an insignificant amount for the DAs.
Also, your DA IOPS do not look balanced: some are showing 1,100 IO/s while others are doing 2,500.
And don't use 5-second samples with symstat. On a big box I would not go any lower than 30 seconds, and maybe 60 seconds.
I'm not sure what you mean by port 0 and 1; do you mean on the FA? You should NEVER assign the same host to both ports 0 and 1 of the same FA CPU.
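To put rough numbers on the RAID 6 point: each small host write costs about 3 reads plus 3 writes on the back end, versus 2 writes for RAID 1. A back-of-the-envelope sketch in Python (the penalty table and the 500/500 read/write split are illustrative assumptions, not measurements from this array):

```python
# Back-end IOs generated per host write, by protection type.
# Small-write penalties: RAID 1 = 2 writes; RAID 5 = 2 reads + 2 writes;
# RAID 6 = 3 reads + 3 writes. A host read miss costs 1 back-end IO.
WRITE_PENALTY = {"RAID1": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(host_reads, host_writes, raid):
    """Approximate back-end IO/sec for a host workload and RAID type."""
    return host_reads + host_writes * WRITE_PENALTY[raid]

# Hypothetical workload: 500 host reads/s + 500 host writes/s.
for raid in ("RAID1", "RAID6"):
    print(raid, backend_iops(500, 500, raid))
# RAID1 -> 1500 back-end IO/s, RAID6 -> 3500 back-end IO/s
```

With the same host workload, RAID 1 here generates less than half the back-end IOs of RAID 6, which is why moving write-heavy devices off RAID 6 takes load off the DAs.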
Govindagouda
85 Posts
October 26th, 2012 07:00
Quincy,
What percentage of overhead will RAID 6 add for the SATA disks?
My current symstat output is posted in this thread.
What is your thought on assigning a device to port 0 and port 1 on the VMAX? Does that cause any issues?
Govindagouda
85 Posts
October 26th, 2012 07:00
When I check the port utilization on a DF, I see the sum of ports 0 and 1 is 13%, while overall DF usage is 90%.
Is there any way we can balance the DFs?
We don't use ports 0 and 1 for the same host.
Thanks,
Govind
Quincy561
1.3K Posts
October 26th, 2012 08:00
The best practice is for every DA and every engine to have the same quantity of drives active for balance.
Adding EFDs with RAID 5 protection, moving your FC drives to RAID 1, and then enabling FAST VP (I'm assuming you have VP and not thick devices) could reduce the DA load significantly.
There isn't much that can be done about the fact that 1 host write = 6 backend IOs with RAID 6, except adding more disks and DAs.
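As a sizing illustration of why "more disks" is the main lever: a rough spindle-count sketch. The ~75 IO/s per 7.2K SATA drive figure is a generic rule of thumb, and the 2:1 read/write split is an assumed mix; only the ~75K total IO/s comes from this thread.

```python
import math

# Rule-of-thumb service rate for a 7.2K SATA spindle (assumption).
SATA_IOPS_PER_DRIVE = 75
# RAID 6 small-write penalty: 3 reads + 3 writes per host write.
RAID6_WRITE_PENALTY = 6

def drives_needed(host_reads, host_writes):
    """Spindles needed to absorb the back-end load of a RAID 6 workload."""
    backend = host_reads + host_writes * RAID6_WRITE_PENALTY
    return math.ceil(backend / SATA_IOPS_PER_DRIVE)

# Assumed mix: 50K reads/s + 25K writes/s -> 200K back-end IO/s.
print(drives_needed(50_000, 25_000))  # -> 2667 drives
```

On these assumptions the back end wants more SATA spindles than even a full 2,400-drive frame offers, which lines up with the "out of gas" diagnosis.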
Govindagouda
85 Posts
October 26th, 2012 08:00
I hear you. What is your thought on adding SSDs and enabling FAST? That is one way I can think of to move write workload to another tier.
This is a good lesson that we should not rely on FC and SATA on the SAME DF when we have RAID 6 on the SATA.
Govindagouda
85 Posts
October 26th, 2012 10:00
The best practice is for every DA and every engine to have the same quantity of drives active for balance.
The above scenario is not possible with a fully scaled VMAX with 2,400 drives. Some of the DFs will have more disks than others, and some engines will have to serve more IO.
I am not sure whether the VMAX is able to do 300K IOs, but in our scenario we are at 75K to 80K IOs.
Why are the DFs not able to handle more than 1,200 IOPS and more than 50 MB/s of throughput?
Thanks
Quincy561
1.3K Posts
October 26th, 2012 17:00
Yes, some engines can't go beyond 2 drive bays; however, beyond two drive bays you are getting into the range of a capacity configuration rather than a performance configuration.
As to the MB/sec vs. IOs/sec it depends on the IO size. With small IOs, the CPU will be the limit, with large IOs the plumbing will be the limit.
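That relationship is just throughput = IOPS x IO size. A quick sketch (the 1,500 IO/s figure and the sample IO sizes are illustrative; the ~1,200 IO/s at ~50 MB/s pairing comes from the numbers quoted earlier in the thread):

```python
def mb_per_sec(iops, io_size_kb):
    """Throughput implied by an IO rate and a fixed IO size."""
    return iops * io_size_kb / 1024.0

def avg_io_kb(iops, mb_s):
    """Average IO size implied by an observed IO rate and throughput."""
    return mb_s * 1024.0 / iops

print(f"{mb_per_sec(1500, 8):.1f} MB/s at 8 KB IOs")    # -> 11.7 MB/s
print(f"{mb_per_sec(1500, 64):.1f} MB/s at 64 KB IOs")  # -> 93.8 MB/s
# The DFs in this thread run ~1200 IO/s at ~50 MB/s:
print(f"{avg_io_kb(1200, 50):.1f} KB average IO size")  # -> 42.7 KB
```

So the same director can look "slow" in MB/s terms while its CPU is saturated by the IO rate, or vice versa with large IOs.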
I would have to look at your STP data, but it sure sounds like some of your DAs are out of gas, while others have more performance to offer but can't deliver it because they are being limited by the busier ones.
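For what it's worth, the imbalance is easy to quantify from the symstat listing posted above. A minimal sketch using a handful of the busiest and quietest per-DF disk IO/sec figures from that output:

```python
# Per-DF disk IO/sec figures copied from the symstat output in this thread.
df_iops = {
    "DF-4B": 2455, "DF-11C": 2047, "DF-7B": 2017,
    "DF-6B": 1058, "DF-5D": 1061, "DF-8B": 1064,
}

busiest = max(df_iops, key=df_iops.get)
quietest = min(df_iops, key=df_iops.get)

print(busiest, df_iops[busiest])     # DF-4B 2455
print(quietest, df_iops[quietest])   # DF-6B 1058
print(f"imbalance: {df_iops[busiest] / df_iops[quietest]:.2f}x")  # 2.32x
```

A ~2.3x spread between the busiest and quietest DF supports the point that the busy directors gate the rest.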