schotten's Posts

Hi Ian! Thanks for the reply. The SDSs were in the same Fault Set. We just waited for the rebuild to finish (it took 40 hours) and things went back to normal. Thanks again.
Hi guys! We had a switch stack issue and our 3 SDS nodes rebooted at the same time. The system came back with no degraded area but with a huge rebuild (55 TB, almost all the data we have on it). There were some Path and Original-Path scrambles; I ran update_device_original_path on only one device, but it made no difference to the rebuild process. Right now I'm limiting the rebuild bandwidth so I can still access some data decently. The very weird thing is that there's no degraded area. I tried disabling Rebuild/Rebalance on the Storage Pools in the Backend view, but no difference. Any help/hint would be very appreciated. Thanks.
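For reference, this is roughly how I'm throttling the rebuild with scli (flag names are from memory and pd1/sp1 are just placeholder names, so please double-check against scli --help on your version):
scli --login --username admin
# limit rebuild I/O per device so application traffic stays usable (placeholder pd1/sp1)
scli --set_rebuild_policy --protection_domain_name pd1 --storage_pool_name sp1 --policy limit_concurrent_io --concurrent_io_limit 1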
Hi guys! I'm doing a lab with NVMe and SR-IOV and I'm having latency problems with kernels < 4.11. The highest kernel available for SDC/XCACHE running RHEL (@ ftp.emc.com) is the ELRepo 4.10.5; is it possible to update that to, say, 4.12 (still ELRepo)? I tried to set up my lab environment with Ubuntu but it absolutely didn't work with NVMe and SR-IOV. Thanks in advance.
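For reference, this is how I'm checking which SDC build is loaded against the running kernel (assuming the module is named scini, as on our boxes):
uname -r                                      # running kernel, e.g. the 4.10.x from ELRepo
modinfo scini | grep -iE 'version|vermagic'   # driver build and the kernel it was compiled for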
Hi all! For the last 2 weeks we have been rebalancing our ScaleIO environment (280 TB raw) for SDS device migration between nodes. We remove the SDS devices from one server, wait for the rebalance, insert them in another node, configure them using perccli, present them to Linux (/dev/sd*), then add them again in SIO. Is there a cleverer way to do that? Like just remapping the drives in ScaleIO (maybe after a PERC foreign import at the destination node). I haven't tried removing a drive "hot" from the source and inserting it in the destination node, but since ScaleIO is smart about the "actual path" inside the same node, maybe it can handle that. Thanks in advance.
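For context, the per-device steps look roughly like this (scli/perccli flag names from memory, and sds01/sds02, sp1 and /dev/sdx are just placeholder names, so treat it as a sketch):
# drain the device from the source SDS and wait for the rebalance to finish
scli --remove_sds_device --sds_name sds01 --device_path /dev/sdx
# on the destination node: import the foreign config so the drive shows up to Linux
perccli /c0/fall import
# add the device back into the same storage pool from the destination SDS
scli --add_sds_device --sds_name sds02 --device_path /dev/sdx --storage_pool_name sp1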
Hi guys! I have a question regarding the /sys/devices path of scini devices. We created (and mapped) volumes to a Linux box running a legacy application that requires the path of the low-level device, like /sys/devices/pci0000:00/0000:00:01.0/... and so on (examples below):
lrwxrwxrwx  1 root root 0 Jun 28 16:31 sda -> ../devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:0:0/0:0:0:0/block/sda
lrwxrwxrwx  1 root root 0 Jun 28 16:31 sdb -> ../devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:0:1/0:0:1:0/block/sdb
lrwxrwxrwx  1 root root 0 Jun 28 16:31 sdc -> ../devices/pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:0:2/0:0:2:0/block/sdc
The scinia device has this path:
lrwxrwxrwx  1 root root 0 Jun 28 16:31 scinia -> ../devices/virtual/block/scinia
Is it possible to somehow change the sysfs path /sys/devices/virtual/block/scinia to something else via a drv_cfg.txt parameter or similar? The application requires the "pci0000:00/0000:00:01.0/0000:01:00.0/host0/target0:0:0/0:0:0:0" structure to proceed. Thanks in advance.
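For context, the listings above come straight from /sys/block; this is how I'm checking where each device resolves (plain shell, nothing ScaleIO-specific):
ls -l /sys/block/                # the sd* symlinks point under the PCI/SCSI host tree
readlink -f /sys/block/scinia    # resolves to /sys/devices/virtual/block/scinia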
Hi guys! Couldn't find any information about this in the forum history. I need to upgrade a CX4-120 from 1 Gbps iSCSI to 10 Gbps iSCSI and can't find a 303-081-103B (the "CX4 native" module) to buy; I can only find the 303-081-105B, which is listed in the VNX Parts Location Guide. Question: is there a chance the 105B works in a CX4-120?