[Gluster-users] Client and server file "view", different results?! Client can't see the right file.

Martin Schenker martin.schenker at profitbricks.com
Tue May 17 05:43:32 UTC 2011


Yes, it is!

Here's the volfile:

cat  /mnt/gluster/brick0/config/vols/storage0/storage0-fuse.vol:

volume storage0-client-0
    type protocol/client
    option remote-host de-dc1-c1-pserver3
    option remote-subvolume /mnt/gluster/brick0/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-1
    type protocol/client
    option remote-host de-dc1-c1-pserver5
    option remote-subvolume /mnt/gluster/brick0/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-2
    type protocol/client
    option remote-host de-dc1-c1-pserver3
    option remote-subvolume /mnt/gluster/brick1/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-3
    type protocol/client
    option remote-host de-dc1-c1-pserver5
    option remote-subvolume /mnt/gluster/brick1/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-4
    type protocol/client
    option remote-host de-dc1-c1-pserver12
    option remote-subvolume /mnt/gluster/brick0/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-5
    type protocol/client
    option remote-host de-dc1-c1-pserver13
    option remote-subvolume /mnt/gluster/brick0/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-6
    type protocol/client
    option remote-host de-dc1-c1-pserver12
    option remote-subvolume /mnt/gluster/brick1/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-client-7
    type protocol/client
    option remote-host de-dc1-c1-pserver13
    option remote-subvolume /mnt/gluster/brick1/storage
    option transport-type rdma
    option ping-timeout 5
end-volume

volume storage0-replicate-0
    type cluster/replicate
    subvolumes storage0-client-0 storage0-client-1
end-volume

volume storage0-replicate-1
    type cluster/replicate
    subvolumes storage0-client-2 storage0-client-3
end-volume

volume storage0-replicate-2
    type cluster/replicate
    subvolumes storage0-client-4 storage0-client-5
end-volume

volume storage0-replicate-3
    type cluster/replicate
    subvolumes storage0-client-6 storage0-client-7
end-volume

volume storage0-dht
    type cluster/distribute
    subvolumes storage0-replicate-0 storage0-replicate-1 storage0-replicate-2 storage0-replicate-3
end-volume

volume storage0-write-behind
    type performance/write-behind
    subvolumes storage0-dht
end-volume

volume storage0-read-ahead
    type performance/read-ahead
    subvolumes storage0-write-behind
end-volume

volume storage0-io-cache
    type performance/io-cache
    option cache-size 4096MB
    subvolumes storage0-read-ahead
end-volume

volume storage0-quick-read
    type performance/quick-read
    option cache-size 4096MB
    subvolumes storage0-io-cache
end-volume

volume storage0-stat-prefetch
    type performance/stat-prefetch
    subvolumes storage0-quick-read
end-volume

volume storage0
    type debug/io-stats
    subvolumes storage0-stat-prefetch
end-volume
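As an aside for anyone following along: the mismatched trusted.afr values quoted further down in this thread can be decoded by hand. As far as I understand the AFR changelog format, the 12-byte value is three 32-bit big-endian counters (data, metadata and entry pending operations); a non-zero counter means the brick holding the xattr believes that many operations are still unsynced on the named subvolume. A quick bash sketch (the `afr_decode` helper name is mine, not a Gluster tool):

```shell
# Split a 12-byte trusted.afr changelog value into its three 32-bit
# big-endian counters. Hypothetical helper for illustration only.
afr_decode() {
    local v=${1#0x}                 # strip the leading 0x
    local data=$((16#${v:0:8}))     # bytes 0-3:  data changelog
    local meta=$((16#${v:8:8}))     # bytes 4-7:  metadata changelog
    local entry=$((16#${v:16:8}))   # bytes 8-11: entry changelog
    echo "data=$data metadata=$meta entry=$entry"
}

afr_decode 0x000000000000000000000000   # clean copy: all counters zero
afr_decode 0x0b0000090900000000000000   # pserver3's view of client-3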


> -----Original Message-----
> From: Pranith Kumar. Karampuri [mailto:pranithk at gluster.com] 
> Sent: Tuesday, May 17, 2011 7:16 AM
> To: Martin Schenker
> Cc: gluster-users at gluster.org
> Subject: Re: [Gluster-users] Client and server file "view", 
> different results?! Client can't see the right file.
> 
> 
> Martin,
>       Is this a distributed-replicate setup? Could you
> attach the client's vol-file?
> 
> Pranith
> ----- Original Message -----
> From: "Martin Schenker" <martin.schenker at profitbricks.com>
> To: gluster-users at gluster.org
> Sent: Monday, May 16, 2011 2:49:29 PM
> Subject: [Gluster-users] Client and server file "view",	
> different results?! Client can't see the right file.
> 
> 
> 
> Hi all! 
> 
> Here we have another mismatch between the client "view" and 
> the server mounts: 
> 
> From the server side everything seems fine: the 20G file is
> visible and the attributes seem to match:
> 
> 0 root at pserver5:~ # getfattr -R -d -e hex -m "trusted.afr." /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/
> 
> # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images//20964
> trusted.afr.storage0-client-2=0x000000000000000000000000 
> trusted.afr.storage0-client-3=0x000000000000000000000000 
> 
> 0 root at pserver5:~ # find /mnt/gluster/ -name 20964 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu vcb 21474836480 May 13 11:21 /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20964
> 
> But the client view shows TWO files, both with 0 byte size! And
> these aren't link files created by Gluster (the ones with the
> sticky "T" at the end of the mode bits).
> 
> 0 root at pserver5:~ # find /opt/profitbricks/storage/ -name 20964 | xargs -i ls -al {}
> -rwxrwx--- 1 libvirt-qemu kvm 0 May 13 11:24 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20964
> 
> -rwxrwx--- 1 libvirt-qemu kvm 0 May 13 11:24 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/20964
> 
> I'm a bit stumped that we seem to have so many weird errors
> cropping up. Any ideas? I've checked the ext4 filesystems on
> all boxes; no real problems. We run a distributed-replicate
> cluster with 4 servers offering 2 bricks each.
> 
> Best, Martin 
> 
> 
> 
> 
> > -----Original Message-----
> > From: Mohit Anchlia [ mailto:mohitanchlia at gmail.com ] 
> > Sent: Monday, May 16, 2011 2:24 AM 
> > To: Martin Schenker 
> > Cc: gluster-users at gluster.org 
> > Subject: Re: [Gluster-users] Brick pair file mismatch, 
> > self-heal problems? 
> > 
> > 
> > Try this to trigger self heal:
> > 
> > find <gluster-mount> -noleaf -name <file name> -print0 | xargs --null stat >/dev/null
> > 
> > 
> > 
> > On Sun, May 15, 2011 at 11:20 AM, Martin Schenker
> > <martin.schenker at profitbricks.com> wrote: 
> > > Can someone enlighten me as to what's going on here? We have two
> > > peers; the file 21313 is shown through the client mountpoint as
> > > "1 Jan 1970", the attribs on server pserver3 don't match, but NO
> > > self-heal or repair can be triggered through "ls -alR"?!?
> > > 
> > > Checking the files through the server mounts shows that two
> > > versions are on the system. But the wrong one (the "1 Jan 1970"
> > > one) seems to be preferred by the client?!?
> > > 
> > > Do I need to use setfattr or something in order to get the client
> > > to see the RIGHT version?!? This is not the ONLY file displaying
> > > this problematic behaviour!
> > > 
> > > Thanks for any feedback.
> > > 
> > > Martin
> > > 
> > > pserver5:
> > > 
> > > 0 root at pserver5:~ # ls -al /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images
> > > 
> > > -rwxrwx--- 1 libvirt-qemu vcb 483183820800 May 13 13:41 21313
> > > 
> > > 0 root at pserver5:~ # getfattr -R -d -e hex -m "trusted.afr." /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > getfattr: Removing leading '/' from absolute path names 
> > > # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > trusted.afr.storage0-client-2=0x000000000000000000000000 
> > > trusted.afr.storage0-client-3=0x000000000000000000000000 
> > > 
> > > 0 root at pserver5:~ # ls -alR /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > -rwxrwx--- 1 libvirt-qemu kvm 483183820800 Jan 1 1970 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > 
> > > pserver3:
> > > 
> > > 0 root at pserver3:~ # ls -al /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images
> > > 
> > > -rwxrwx--- 1 libvirt-qemu kvm 483183820800 Jan 1 1970 21313
> > > 
> > > 0 root at pserver3:~ # ls -alR /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > -rwxrwx--- 1 libvirt-qemu kvm 483183820800 Jan 1 1970 /opt/profitbricks/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > 
> > > 0 root at pserver3:~ # getfattr -R -d -e hex -m "trusted.afr." /mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > getfattr: Removing leading '/' from absolute path names 
> > > # file: mnt/gluster/brick1/storage/images/2078/ebb83b05-3a83-9d18-ad8f-8542864da6ef/hdd-images/21313
> > > trusted.afr.storage0-client-2=0x000000000000000000000000
> > > trusted.afr.storage0-client-3=0x0b0000090900000000000000  <- mismatch,
> > > should be targeted for self-heal/repair? Why is there a difference
> > > in the views?
> > > 
> > > 
> > > From the volfile:
> > > 
> > > volume storage0-client-2
> > >     type protocol/client
> > >     option remote-host de-dc1-c1-pserver3
> > >     option remote-subvolume /mnt/gluster/brick1/storage
> > >     option transport-type rdma
> > >     option ping-timeout 5
> > > end-volume
> > > 
> > > volume storage0-client-3
> > >     type protocol/client
> > >     option remote-host de-dc1-c1-pserver5
> > >     option remote-subvolume /mnt/gluster/brick1/storage
> > >     option transport-type rdma
> > >     option ping-timeout 5
> > > end-volume
> > > 
> > > 
> > > 
> > > _______________________________________________
> > > Gluster-users mailing list 
> > > Gluster-users at gluster.org 
> > > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 
> > > 
> > 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org 
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 



