[Gluster-users] GlusterFS Changing Hash of Large Files?

Ravishankar N ravishankar at redhat.com
Mon Jul 29 04:35:27 UTC 2019


An even replica count is prone to split-brains with the default quorum 
settings, so replica 3 is recommended.
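
For example, client quorum is controlled by options along these lines
(volume name is a placeholder):

     gluster volume set <volname> cluster.quorum-type auto
     # 'auto' needs at least half the bricks up, and with an even
     # replica count the surviving half must include the first brick;
     # replica 3 avoids that tie-break situation entirely.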

But a replica 4 setup should not cause any checksum change if one node 
is unreachable, as long as the reads/writes are done via the gluster 
mount (and not directly on the bricks). I wasn't able to recreate this 
when I copied a huge file onto a 1x4 volume (glusterfs 6.3) with one of 
the bricks down. Is this something that you can reproduce? Do you see 
anything suspicious in the mount or brick logs?
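
For reference, my test was roughly this (volume name, mount point and
file name are illustrative):

     gluster volume status testvol        # note the PID of one brick
     kill <brick-pid>                     # take that single brick offline
     cp big.iso /mnt/testvol/             # copy through the fuse mount
     sha256sum big.iso /mnt/testvol/big.iso    # both hashes matched here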

Regards,
Ravi
On 28/07/19 10:47 AM, Strahil wrote:
>
> I never thought that replica 4 is an allowed option. I always thought 
> that 3 copies is the maximum.
>
> Best Regards,
> Strahil Nikolov
>
> On Jul 27, 2019 16:30, Matthew Evans <runmatt at live.com> wrote:
>
>     Hi Ravishankar - I figured out the issue. The 4th node was showing
>     "online" under 'gluster peer status' as well as 'gluster volume
>     status' - but 'gluster volume status' wasn't showing a TCP port
>     for that 4th node. When I opened 49152 in firewalld and then
>     re-copied the ISO, the hash didn't change.
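>
>     For anyone else who hits this, the fix was along these lines (run
>     on the node whose brick port was blocked):
>
>     sudo firewall-cmd --permanent --add-port=49152/tcp
>     sudo firewall-cmd --reload
>     sudo gluster volume status swarm-vols   # every brick now shows a TCP port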
>
>     So, now I guess the question would be, why would having one
>     malfunctioning node override 3 functioning nodes and cause a file
>     to be altered? I wasn't performing the initial copy onto the
>     malfunctioning node.
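>
>     Would checking for pending heals show whether the replicas had
>     disagreed? Something like:
>
>     matt@docker1:~$ sudo gluster volume heal swarm-vols info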
>
>     matt@docker1:~$ sudo glusterfs --version
>     glusterfs 6.3
>
>     matt@docker1:~$ sudo gluster volume info
>
>     Volume Name: swarm-vols
>     Type: Replicate
>     Volume ID: 0b51e6b3-786e-454e-8a16-89b47e94828a
>     Status: Started
>     Snapshot Count: 0
>     Number of Bricks: 1 x 4 = 4
>     Transport-type: tcp
>     Bricks:
>     Brick1: docker1:/gluster/data
>     Brick2: docker2:/gluster/data
>     Brick3: docker3:/gluster/data
>     Brick4: docker4:/gluster/data
>     Options Reconfigured:
>     performance.client-io-threads: off
>     nfs.disable: on
>     transport.address-family: inet
>     auth.allow: 10.5.22.*
>
>     ------------------------------------------------------------------------
>     *From:* Ravishankar N <ravishankar at redhat.com>
>     *Sent:* Saturday, July 27, 2019 2:04 AM
>     *To:* Matthew Evans <runmatt at live.com>; gluster-users at gluster.org
>     <gluster-users at gluster.org>
>     *Subject:* Re: [Gluster-users] GlusterFS Changing Hash of Large
>     Files?
>
>
>     On 26/07/19 6:50 PM, Matthew Evans wrote:
>
>         I've got a new 4-node GlusterFS replica cluster running under
>         CentOS 7. All hosts are backed by SSD drives and are
>         connected to a 1Gbps Ethernet network. 3 nodes are running
>         under ESXi on the same physical host, and 1 is running under
>         Hyper-V. I use this for my Docker Swarm persistent storage
>         and all seems to work well.
>
>         Yesterday however, I copied a 4GB .ISO file to my volume for a
>         friend to download. I noticed the SHA256 hash of the ISO was
>         altered. I downloaded a fresh copy to my desktop, verified the
>         hash, scp'd it to the local glusterfs host storage and again,
>         re-verified the hash. The moment I copied it to my glusterfs
>         volume, the file hash changed. When my friend downloaded the
>         ISO, his hash matched the changed hash.
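>
>         The check was essentially this (paths are illustrative):
>
>         sha256sum fresh.iso                  # desktop copy - correct
>         sha256sum /tmp/fresh.iso             # after scp to the host - correct
>         sha256sum /swarm/volumes/fresh.iso   # on the gluster mount - different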
>
>     Can you provide the below details?
>     - glusterfs version
>     - `gluster volume info`
>
>
>
>         I am new to glusterfs, having deployed this as my first
>         cluster ever about a week ago. Can someone help me work
>         through why this file hash is changing?
>