Unsolved

43 Posts

November 8th, 2018 05:00

Generic KVM Equallogic vs. SANLock

Hello forum,

in the context of implementing KVM with a storage backend based on an EqualLogic iSCSI SAN, I am still concerned about how to thoroughly prevent data corruption in an HA deployment. In that context, I stumbled upon the "libvirt-lock-sanlock" plugin, as well as the "sanlock" RPM itself, which apparently aims at preventing a VM/guest from being launched on multiple hosts. If I got that right, this would make a cluster filesystem unnecessary?

0) To my (still limited beginner's) understanding, I expected Pacemaker to control which VM is launched under the current conditions; but SANLock still exists and apparently is also featured in RHEV...

1) Does anybody have experience with whether the SANLock approach is a better option than running a cluster filesystem for the VMs? I personally favour this approach, based on the perception that the locking happens at the infrastructure level, allowing the VM to stay completely unaware of the storage implementation in the backend and to use stock ext4, which would be preferred.

2) Is all of this actually an issue at all, given that an EqualLogic is doing the storage hosting, and perhaps already controls concurrent access to the iSCSI volume by cluster members sufficiently?

Any hint would be appreciated,

Best
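From what I can see in the libvirt documentation, enabling the sanlock plugin would be a small configuration change. A sketch of the relevant settings (the lease directory and host_id here are examples and would have to fit the actual environment):

```
# /etc/libvirt/qemu.conf
lock_manager = "sanlock"

# /etc/libvirt/qemu-sanlock.conf
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 1    # must be unique on every host sharing the lease directory
```

The lease directory would then live on shared storage, so that a second host trying to start the same guest fails to acquire the lease.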

1 Rookie • 1.5K Posts

November 8th, 2018 08:00

Hello, 

First, EQL SANs, like any block storage device, have no awareness of filesystems or files. So if you connect two or more hosts to a volume, there is no protection at the SAN level against corruption. It's up to the hosts to figure that out.

For example, ESXi uses a cluster filesystem, VMFS, and registers VMs to one host only, with lock files and SCSI-level Atomic Test & Set (ATS) locking. Failing that, it will use SCSI-2 Exclusive Reservations for certain operations.

Windows Clustering uses SCSI-3 Persistent Reservations to keep nodes from overwriting each other.

So it will be up to your hosts to keep things straight. A cluster filesystem like GFS2 is always better than trying to leverage ext4. When an ext4 filesystem is mounted by two hosts, they don't keep in sync: a write from host1 isn't going to be seen by host2, because the OSes don't periodically re-read the disk to check for updated contents. So host2 is free to write anywhere it likes, as is host1, and they can end up using the same location for a write. The last one to write 'wins', but the result is very likely a corrupted filesystem. It's something I have seen often with Windows hosts, e.g. someone trying to mount a volume to a backup server.
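As a toy illustration (hypothetical Python, not real filesystem code), here are two "hosts" that each cache the disk contents and flush independently, the way two non-clustered mounts of the same volume would:

```python
# Toy model: two hosts share one block device, but each keeps a private
# cache, like a non-clustered filesystem such as ext4 mounted twice.
class Host:
    def __init__(self, disk):
        self.disk = disk          # shared block device (dict: block -> data)
        self.cache = dict(disk)   # private cache, read once at mount time

    def write(self, block, data):
        self.cache[block] = data  # the other host never sees this...

    def flush(self):
        # ...until it is flushed, overwriting whatever the other host wrote
        self.disk.update(self.cache)

disk = {0: "superblock-v1"}
host1, host2 = Host(disk), Host(disk)

host1.write(0, "host1-data")
host2.write(0, "host2-data")

host1.flush()   # host1's version lands on disk
host2.flush()   # host2, unaware of host1's write, clobbers it

print(disk[0])  # -> host2-data: last writer "wins", host1's data is lost
```

A cluster filesystem avoids this by coordinating locks and cache invalidation between the nodes before any write hits the shared device.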

I'm personally not familiar with SANLock or RHEV, so I can't offer you any advice on those.

 Regards,

Don 


43 Posts

November 9th, 2018 01:00

Hello Don,

thank you for your quick reply!

Very interesting what you say about how VMware handles it. Some of it sounds familiar from what I read about SANLock.

I actually did not expect the EQL to take care of concurrent access at the application level (it being a block device), but possibly at the network level? In the sense that it would not grant access to a LUN to more than one initiator at a time? Just for the irregular situation where fencing did not succeed in shutting down a failed host (I plan to use IPMI here), while the take-over host was brought up successfully...

On the other hand, since such behaviour would make running a cluster filesystem some levels up impossible, it would have to be a toggleable configuration feature, but I have not found anything suitable in Group Manager yet...

So I was just wondering...

Best,

F

1 Rookie • 1.5K Posts

November 9th, 2018 10:00

Hello,

Re: concurrent access. Absolutely. There's a checkbox in the volume properties, "Allow multi-host initiators". Without that checkbox set, even if the ACL allows for more than one host, only the first one to connect will be allowed; all following connection requests from the other hosts will be denied. Before this setting existed, it was very common to get support calls about concurrent-access corruption, as some customers confused the block-device SAN with a file-server NAS device.

If you are going to use a non-clustered filesystem, note that some old clusters use SCSI-2 Exclusive Reservations: the active host would send a SCSI RESERVE command. This doesn't allow any other host to see the filesystem; the other host stays connected to the volume, but the volume shows up as blank to it. Then, if you wanted to fail over to the other host, a SCSI RELEASE command would clear the status and a new RESERVE command would be sent.

It's pretty "old school" but has been around for a long time.
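If you want to see it at the command level: on Linux, `sg_raw` from sg3_utils can issue the raw RESERVE(6)/RELEASE(6) CDBs. A sketch only; `/dev/sdX` is a placeholder for the shared volume, and normally your cluster software sends these commands for you rather than you doing it by hand:

```
# Claim the device for this host (RESERVE(6), opcode 0x16)
sg_raw /dev/sdX 16 00 00 00 00 00

# ...run as the active node...

# Give the device up again before failing over (RELEASE(6), opcode 0x17)
sg_raw /dev/sdX 17 00 00 00 00 00
```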

Regards,

Don

43 Posts

November 12th, 2018 05:00

Hello Don, brilliant!

Those support callers, that's me, a couple of years on, probably ;-)

I wasn't quite sure whether disabling this option would actually affect only concurrent access, or also imply a host ACL in general.

At the moment it's a bit foggy to me whether disabling this feature or using SANLock is the better approach, but I will investigate that.

As for your "old school" approach: that sounds brilliant. Especially the persistent connection promises to speed up the transition.

I'll check that one out too, to see whether and how one can make use of it in the current setup.

Thanks again,

best

F

1 Rookie • 1.5K Posts

November 12th, 2018 08:00

Hello, 

You are very welcome. Basically EQL, like all modern SANs, supports the industry standards for clustering.

As for the "old school" approach I mentioned: that used to be done with SCSI adapters, before iSCSI was available and while Fibre Channel was still way too expensive. You could hook up two computers to storage via a SCSI cable and, using SCSI-2 Exclusive Reservations, have one active and one passive server.

Hopefully there are clustering services already available for the OS you are planning to use, rather than trying to put together your own.

 Regards,

Don 
