First, EQL SANs, like any block storage device, have no awareness of filesystems or files. So if you connect two or more hosts to a volume, there is no protection at the SAN level against corruption. It's up to the hosts to sort that out.
For example, ESXi uses a clustered filesystem, VMFS, which registers each VM to one host only, using lock files and SCSI-level Atomic Test & Set (ATS) locking. Failing that, it falls back to SCSI-2 exclusive reservations for certain operations.
Windows Clustering uses SCSI-3 Persistent Reservations to keep nodes from overwriting each other.
So it will be up to your hosts to keep things straight. A cluster filesystem like GFS2 is always better than trying to leverage EXT4. When an EXT4 filesystem is mounted by two hosts, they don't stay in sync: a write from host1 isn't going to be seen by host2, because the OSes don't periodically re-read the disk to check for updated contents. So host2 is free to write anywhere it likes, as is host1, and they can end up using the same location for a write. The last one to write "wins", but you'll very likely get a corrupted filesystem. It's something I have seen often with Windows hosts, typically someone trying to mount a volume to a backup server.
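The race above can be sketched in a few lines. This is a toy model, not anything EQL-specific: a local file stands in for the shared volume, the 16-byte "metadata" tags and offsets are made up, and the point is simply that with no coordination the last writer silently overwrites the first.

```python
import os
import tempfile

BLOCK = 4096  # illustrative block size

# Create the "volume" with one zeroed block.
fd0, path = tempfile.mkstemp()
os.close(fd0)
with open(path, "wb") as f:
    f.write(b"\x00" * BLOCK)

def host_write(tag: bytes, offset: int) -> None:
    """One 'host' writes its data at a fixed offset, with no locking
    and no re-read of what the other host may have written there."""
    fd = os.open(path, os.O_WRONLY)
    try:
        os.pwrite(fd, tag.ljust(16, b"\x00"), offset)
    finally:
        os.close(fd)

host_write(b"host1-metadata", 0)   # host1 allocates block 0 for itself
host_write(b"host2-metadata", 0)   # host2, unaware, reuses the same block

with open(path, "rb") as f:
    surviving = f.read(16)

print(surviving.rstrip(b"\x00"))   # host1's write is gone
os.unlink(path)
```

A real on-disk filesystem would of course fail in messier ways (torn metadata, cross-linked blocks), but the mechanism is the same.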
I'm personally not familiar with SANlock or RHEV, so I can't offer you any advice on that.
Thank you for your quick reply!
Very interesting what you say about how VMware handles it. Some of it rings familiar from what I read about SANlock.
I actually did not expect the EQL to take care of concurrent access at the application level (being a block device), but possibly at the network level? In the sense that it would refuse to grant access to a LUN to more than one initiator at a time? Just for the irregular situation where fencing fails to shut down a failed host (I plan to use IPMI here) while the take-over host has already been brought up.
On the other hand, such behaviour would make running a cluster filesystem further up the stack impossible, so it would have to be a toggleable configuration feature, but I haven't found anything suitable in the Group Manager yet...
Therefore just wondering...
Re: concurrent access. Absolutely. There's a checkbox in the volume properties, "Allow multi-host initiators". Without that checkbox set, even if the ACL allows more than one host, only the first one to connect will be allowed; all following connection requests from other hosts will be denied. Before this setting existed, it was very common to get support calls about concurrent-access corruption, as some customers confused a block-device SAN with a file-server NAS.
If you are going to use a non-clustered filesystem, note that some old clusters use SCSI-2 exclusive reservations. The active host issues a SCSI RESERVE command. This doesn't allow any other host to see the filesystem: the other host stays connected to the volume, but it shows up as blank. Then, if you want to fail over to the other host, a SCSI RELEASE command clears the status and a new RESERVE command is sent.
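For the curious, the RESERVE/RELEASE commands above are tiny 6-byte CDBs. The opcodes here are the standard SCSI-2 ones (RESERVE(6) = 0x16, RELEASE(6) = 0x17); actually sending them to a LUN needs an SG_IO ioctl or a tool such as sg_raw from sg3_utils, and any device name you'd pass is site-specific. This sketch just builds the bytes to show how simple the failover dance is.

```python
# SCSI-2 opcodes for simple (non-persistent) reservations.
RESERVE6 = 0x16
RELEASE6 = 0x17

def cdb(opcode: int) -> bytes:
    """Build a minimal 6-byte CDB: opcode followed by five zero bytes
    (no extents or third-party options used)."""
    return bytes([opcode, 0, 0, 0, 0, 0])

def failover_cdbs() -> list[bytes]:
    """CDB sequence for failing over to the standby host: clear the old
    reservation with RELEASE, then claim the volume with RESERVE."""
    return [cdb(RELEASE6), cdb(RESERVE6)]

for c in failover_cdbs():
    print(c.hex())   # 170000000000, then 160000000000
```

One caveat the thread already hints at: a plain SCSI-2 reservation is dropped on a bus/target reset, which is part of why SCSI-3 persistent reservations replaced this scheme in modern clusters.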
It's pretty "old school" but has been around for a long time.
Hello Don, brilliant!
Those support callers, that's me, a couple of years on, probably 😉
I wasn't quite sure whether disabling this option only governs concurrent access, or whether it also acts as a general host ACL.
Right now it's a bit foggy to me whether disabling this feature or using SANlock is the better approach, but I will investigate.
As for your "old school" approach: that sounds brilliant. Especially the persistent connection promises to speed up the transition.
I'll check out that one too, whether and how one can make use of it in the current constellation.
You are very welcome. Basically EQL, like all modern SANs, supports the industry standards for clustering.
Like the "old school" approach I mentioned: that used to be done with SCSI adapters before iSCSI was available and while Fibre Channel was still way too expensive. You could hook up two computers to storage via a SCSI cable and, using SCSI-2 exclusive reservations, have one active and one passive server.
Hopefully there are clustering services already available for the OS you are planning to use, rather than having to put together your own.