[Gluster-users] corruption using gluster and iSCSI with LIO

Olivier Lambert lambert.olivier at gmail.com
Thu Nov 17 20:00:52 UTC 2016


Hi there,

First off, thanks for this great product :)

I have a corruption issue when using GlusterFS with a LIO iSCSI target:


Node 1 (GlusterFS + LIO) <-----+
                               |---------------> Client (multipath)
Node 2 (GlusterFS + LIO) <-----+


1. Nodes 1 and 2 use a "replica 2" configuration. On each node, I mount
the Gluster volume locally, then I create one big file (e.g. 60 GB with
fallocate).
2. This big file is exported via LIO, configured with the same WWN on
both nodes (as explained here:
https://blog.gluster.org/2016/04/using-lio-with-gluster/ ); rough
commands are sketched after this list.
3. My client connects to both targets using multipath.
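
For reference, the node-side setup is roughly this (a sketch, not my
exact commands; the volume name "iscsi-vol", brick paths, IQN and file
name are placeholders, and the shared WWN is set on the backstores as
described in the blog post above):

    # create and start the replica 2 volume (run once, from node 1)
    gluster volume create iscsi-vol replica 2 \
        node1:/bricks/brick1 node2:/bricks/brick1
    gluster volume start iscsi-vol

    # on each node: mount the volume locally
    mount -t glusterfs localhost:/iscsi-vol /mnt/iscsi-vol

    # once, on either node: preallocate the 60 GB backing file
    fallocate -l 60G /mnt/iscsi-vol/lun0.img

    # on each node: export that file with LIO (same IQN on both nodes)
    targetcli /backstores/fileio create lun0 /mnt/iscsi-vol/lun0.img
    targetcli /iscsi create iqn.2016-11.local.cluster:shared
    targetcli /iscsi/iqn.2016-11.local.cluster:shared/tpg1/luns create /backstores/fileio/lun0
    targetcli saveconfig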

It works! The client is a XenServer hypervisor, and the iSCSI volume is
used to create VMs.
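
On the client side, XenServer drives iscsiadm and multipath itself
through its SR machinery, but what happens is essentially the
equivalent of this (hostnames are placeholders):

    # discover and log in to the same IQN through both portals
    iscsiadm -m discovery -t sendtargets -p node1
    iscsiadm -m discovery -t sendtargets -p node2
    iscsiadm -m node -L all

    # both sessions coalesce into a single multipath device,
    # since both backstores report the same WWN
    multipath -ll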

The problem: if I shut down one node (let's say Node 2), the client
keeps working (using the other path). **But** if I restart the halted
host, everything comes back online from the client's perspective (all
paths up), yet the VM is no longer usable. All files are corrupted (the
VM is still live but can't start any program, and if I reboot it, it
won't come up because the disk has lost its boot partitions).

Nodes are running:

* CentOS 7 with Gluster 3.8
* LIO with targetcli 2.1
* Replica 2 (sharding with 16 MB shards, but same issue without
sharding; shard settings sketched below)
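
For the sharding test, the volume options were along these lines (a
sketch; "iscsi-vol" is again a placeholder volume name):

    gluster volume set iscsi-vol features.shard on
    gluster volume set iscsi-vol features.shard-block-size 16MB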

I can reproduce it 100% of the time: the data is corrupted as soon as
the halted node comes back online.
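
Spelled out, the reproduction sequence is (hostnames as above):

    # on node 2: halt it
    shutdown -h now

    # on the client: the VM keeps running; one path is shown as failed
    multipath -ll

    # power node 2 back on and wait for Gluster and LIO to start;
    # all paths come back up...
    multipath -ll

    # ...and from this moment the VM's disk is corrupted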

About the context (why I'm not mounting with the Gluster client directly):

* I only have iSCSI and NFSv3 (no v4.1) on the client side; I can't use
anything else
* I want to avoid any SPOF (one node out of the 2 can go down without
the client stopping work)
* I want to keep it as simple as possible
* I want to be able to scale if possible (more than 2 nodes available
in the future)

That's why I started with iSCSI and multipath, but if there is a simple
way to avoid a SPOF and get distributed reads/writes with NFS, I'm
ready to change my mind.


Regards,



Olivier.
