[Gluster-users] Split-brain after uploading file

Miloš Kozák milos.kozak at lejmr.com
Mon Nov 30 22:45:49 UTC 2015


I have been using Gluster for a few years without any significant issue (after I tweaked the configuration for v3.5). My configuration is as follows:

network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.io-thread-count: 6
network.ping-timeout: 2
performance.cache-max-file-size: 0
performance.flush-behind: on
features.barrier: disable
snap-max-soft-limit: 7
auto-delete: on
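For reference, here is a sketch of how options like the ones above are applied with the standard gluster CLI (the volume name ph-fs-0 is taken from the volume info further down; only a couple of the options are shown as examples):

```shell
# Sketch: applying volume options with the gluster CLI.
# Run on any node of the trusted pool; "ph-fs-0" is the volume name.
gluster volume set ph-fs-0 network.remote-dio enable
gluster volume set ph-fs-0 performance.quick-read off

# Note: snap-max-soft-limit and auto-delete are snapshot settings and,
# if I recall correctly, are configured via "gluster snapshot config"
# rather than "volume set".
```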

I use it for running virtual servers on top of such a volume. Currently I run this version of Gluster:

glusterfs-cli-3.6.5-1.el6.x86_64
glusterfs-3.6.5-1.el6.x86_64
glusterfs-api-3.6.5-1.el6.x86_64
glusterfs-server-3.6.5-1.el6.x86_64
glusterfs-libs-3.6.5-1.el6.x86_64
glusterfs-fuse-3.6.5-1.el6.x86_64

With a recent CentOS 6.

I have experienced an issue when I move files from an HDD onto the Gluster volume: one node gets overloaded in the middle of the file upload. Therefore, I decided to upload over SSH onto the other server, not the one where the original images are stored. I know this sounds weird, but it does not lead to overloading!

Along these lines, I decided to upload a 10G image onto the Gluster volume. The upload speed varied, but there was no overloading at all… Right after the upload was done, I realized that some virtual machines were not running properly. So I checked the heal status, where I discovered that 4 images were in a split-brain state. I had to act quickly, so I resolved the split-brain and let Gluster heal. When the heal was done, everything worked…
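For anyone wanting to reproduce the check: this is roughly how I inspect the heal state (these commands need a running cluster; ph-fs-0 is my volume name from the info output below):

```shell
# Sketch: inspecting heal and split-brain state on a replicated volume.
gluster volume heal ph-fs-0 info              # files still pending heal
gluster volume heal ph-fs-0 info split-brain  # files currently in split-brain
```

On 3.6 I believe resolving a split-brain still means picking the good copy by hand (removing the bad file and its .glusterfs gfid hard link from one brick, then letting self-heal copy it back); the CLI-based split-brain resolution commands arrived in a later release, if I am not mistaken.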

However, I have a few more VMs to upload, and I am not sure what might happen..

My volume configuration:

Volume Name: ph-fs-0
Type: Replicate
Volume ID: 71ac6456-03e4-4bb3-a624-937f4605b2cb
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.11.100.1:/gfs/s3-sata-10k/fs
Brick2: 10.11.100.2:/gfs/s3-sata-10k/fs
Options Reconfigured:
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.io-thread-count: 6
network.ping-timeout: 2
performance.cache-max-file-size: 0
performance.flush-behind: on
features.barrier: disable
snap-max-soft-limit: 7
auto-delete: on


and logs are attached.

Miloš
-------------- next part --------------
A non-text attachment was scrubbed...
Name: glustershd.log
Type: application/octet-stream
Size: 9071 bytes
Desc: not available
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20151130/69e54bf1/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: var-lib-one-datastores-103.log
Type: application/octet-stream
Size: 122560 bytes
Desc: not available
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20151130/69e54bf1/attachment-0001.obj>


More information about the Gluster-users mailing list