[Gluster-devel] Files not available in all clients immediately
Amar S. Tumballi
amar at zresearch.com
Wed Mar 19 01:00:34 UTC 2008
Hi Claudio,
I made a fix for that bug, and patch-710 should work fine for you. You can
just upgrade the client machine for a quick test.
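A rough sketch of the client-side upgrade, in case it saves time (the
bin-patch710 install prefix below is only an example following your
bin-patch709 layout; adjust paths and build steps to your setup):

  # update the tla checkout to patch-710 and rebuild
  cd glusterfs--mainline--2.5
  tla update
  ./configure --prefix=/C3Systems/gluster/bin-patch710 && make && make install

  # remount the client with the new binary
  umount /C3Systems/data/domains/webmail.pop.com.br/attachments
  /C3Systems/gluster/bin-patch710/sbin/glusterfs \
      -f /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol \
      /C3Systems/data/domains/webmail.pop.com.br/attachments
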
Regards,
Amar
On Tue, Mar 18, 2008 at 5:37 PM, Amar S. Tumballi <amar at zresearch.com>
wrote:
> Nope, that's the latest. But this should be fixed soon (during office
> hours IST).
> Sorry for the inconvenience.
>
> -amar
>
>
> On Tue, Mar 18, 2008 at 5:07 PM, Claudio Cuqui <claudio at c3systems.com.br>
> wrote:
>
> > Hi Avati,
> >
> > I tried, but it doesn't even allow me to start it:
> >
> > TLA Repo Revision: glusterfs--mainline--2.5--patch-709
> > Time : 2008-03-18 20:52:32
> > Signal Number : 11
> >
> > /C3Systems/gluster/bin/sbin/glusterfs
> >   -f /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol
> >   -l /C3Systems/gluster/bin-patch709/var/log/glusterfs/glusterfs.log
> >   -L WARNING /C3Systems/data/domains/webmail.pop.com.br/attachments
> > volume fuse
> > type mount/fuse
> > option direct-io-mode 1
> > option entry-timeout 1
> > option attr-timeout 1
> > option mount-point /C3Systems/data/domains/webmail.pop.com.br/attachments
> > subvolumes iocache
> > end-volume
> >
> > volume iocache
> > type performance/io-cache
> > option page-count 2
> > option page-size 256KB
> > subvolumes readahead
> > end-volume
> >
> > volume readahead
> > type performance/read-ahead
> > option page-count 2
> > option page-size 1MB
> > subvolumes client
> > end-volume
> >
> > volume client
> > type protocol/client
> > option remote-subvolume attachments
> > option remote-host 200.175.8.85
> > option transport-type tcp/client
> > end-volume
> >
> > frame : type(1) op(34)
> >
> > /lib64/libc.so.6[0x3edca300c0]
> > /lib64/libc.so.6(strcmp+0x0)[0x3edca75bd0]
> >
> > /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/mount/fuse.so[0x2aaaab302937]
> >
> > /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/mount/fuse.so[0x2aaaab302b42]
> >
> > /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/performance/io-cache.so(ioc_lookup_cbk+0x67)[0x2aaaab0f6557]
> > /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0[0x2aaaaaab8344]
> >
> > /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/protocol/client.so(client_lookup_cbk+0x1b3)[0x2aaaaace93a3]
> >
> > /C3Systems/gluster/bin-patch709/lib/glusterfs/1.3.8/xlator/protocol/client.so(notify+0x8fc)[0x2aaaaace273c]
> >
> > /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0(sys_epoll_iteration+0xc0)[0x2aaaaaabdb90]
> >
> > /C3Systems/gluster/bin-patch709/lib/libglusterfs.so.0(poll_iteration+0x75)[0x2aaaaaabd095]
> > [glusterfs](main+0x658)[0x4026b8]
> > /lib64/libc.so.6(__libc_start_main+0xf4)[0x3edca1d8a4]
> > [glusterfs][0x401b89]
> > ---------
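> >
> > In case a fuller trace helps, a symbolic backtrace could be pulled from a
> > core file with something like this (a rough sketch; it assumes core dumps
> > are enabled and the binary was built with debug symbols):
> >
> >     ulimit -c unlimited        # allow a core file, then reproduce the crash
> >     gdb /C3Systems/gluster/bin/sbin/glusterfs /path/to/core
> >     (gdb) bt full              # inside gdb: full backtrace with locals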
> >
> > Is there any other release that I should try?
> >
> > Regards,
> >
> > Cuqui
> >
> > Anand Avati wrote:
> > > Claudio,
> > > Can you try with glusterfs--mainline--2.5--patch-709? A similar issue
> > > is addressed in that revision. We are interested to know whether that
> > > solves your issue as well.
> > >
> > > thanks,
> > >
> > > avati
> > >
> > > 2008/3/19, Claudio Cuqui <claudio at c3systems.com.br>:
> > >
> > > Hi there!
> > >
> > > We are using gluster in an environment with multiple webservers behind
> > > a load balancer, where we have only one server and multiple clients (6).
> > > All servers run Fedora Core 6 x86_64 with kernel 2.6.22.14-72.fc6 (with
> > > exactly the same packages installed on every server). The gluster
> > > version used is 1.3.8pre2 + 2.7.2glfs8 (both compiled locally). The
> > > underlying FS is reiserfs, mounted with the options
> > > rw,noatime,nodiratime,notail. This filesystem holds almost 4 thousand
> > > files ranging from 2 KB to 10 MB in size. We are using gluster to export
> > > this filesystem to all the other webservers. Below is the config file
> > > used by the gluster server:
> > >
> > > ### Export volume "brick" with the contents of "/home/export" directory.
> > > volume attachments-nl
> > > type storage/posix # POSIX FS translator
> > > option directory /C3Systems/data/domains/webmail.pop.com.br/attachments
> > > end-volume
> > >
> > > volume attachments
> > > type features/posix-locks
> > > subvolumes attachments-nl
> > > option mandatory on
> > > end-volume
> > >
> > >
> > > ### Add network serving capability to above brick.
> > > volume server
> > > type protocol/server
> > > option transport-type tcp/server # For TCP/IP transport
> > > option client-volume-filename /C3Systems/gluster/bin/etc/glusterfs/glusterfs-client.vol
> > > subvolumes attachments-nl attachments
> > > option auth.ip.attachments-nl.allow * # Allow access to "attachments-nl" volume
> > > option auth.ip.attachments.allow * # Allow access to "attachments" volume
> > > end-volume
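> > >
> > > (For reference, a server using this spec is started with roughly the
> > > following; the glusterfs-server.vol name mirrors our install layout and
> > > is only illustrative.)
> > >
> > >     # start the server daemon with the spec file above
> > >     /C3Systems/gluster/bin/sbin/glusterfsd -f \
> > >         /C3Systems/gluster/bin/etc/glusterfs/glusterfs-server.vol
> > >     # confirm it is listening on the default port (6996)
> > >     netstat -ltnp | grep glusterfsd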
> > >
> > > The problem happens when the LB sends the post (the uploaded file) to
> > > one webserver and then the next request goes to another webserver that
> > > tries to access the same file. When this happens, the other client gets
> > > these messages:
> > >
> > > PHP Warning:
> > > fopen(/C3Systems/data/domains/c3systems.com.br/attachments/27gBgFQSIiOLDEo7AvxlpsFkqZw9jdnZ):
> > > failed to open stream: File Not Found.
> > >
> > > PHP Warning:
> > > unlink(/C3Systems/data/domains/c3systems.com.br/attachments/5Dech7jNxjORZ2cZ9IAbR7kmgmgn2vTE):
> > > File Not Found.
> > >
> > > The LB is using round-robin to distribute the load among the servers.
> > >
> > > Below, you can find the gluster configuration file used by all
> > > clients:
> > >
> > > ### file: client-volume.spec.sample
> > >
> > > ##############################################
> > > ### GlusterFS Client Volume Specification ##
> > > ##############################################
> > >
> > > #### CONFIG FILE RULES:
> > > ### "#" is comment character.
> > > ### - Config file is case sensitive
> > > ### - Options within a volume block can be in any order.
> > > ### - Spaces or tabs are used as delimiters within a line.
> > > ### - Each option should end within a line.
> > > ### - Missing or commented fields will assume default values.
> > > ### - Blank/commented lines are allowed.
> > > ### - Sub-volumes should already be defined above before referring.
> > >
> > > ### Add client feature and attach to remote subvolume
> > > volume client
> > > type protocol/client
> > > option transport-type tcp/client # for TCP/IP transport
> > > # option ib-verbs-work-request-send-size 1048576
> > > # option ib-verbs-work-request-send-count 16
> > > # option ib-verbs-work-request-recv-size 1048576
> > > # option ib-verbs-work-request-recv-count 16
> > > # option transport-type ib-sdp/client # for Infiniband transport
> > > # option transport-type ib-verbs/client # for ib-verbs transport
> > > option remote-host 1.2.3.4 # IP address of the remote brick
> > > # option remote-port 6996 # default server port is 6996
> > >
> > > # option transport-timeout 30 # seconds to wait for a reply
> > > # from server for each request
> > > option remote-subvolume attachments # name of the remote volume
> > > end-volume
> > >
> > > ### Add readahead feature
> > > volume readahead
> > > type performance/read-ahead
> > > option page-size 1MB # unit in bytes
> > > option page-count 2 # cache per file = (page-count x page-size)
> > > subvolumes client
> > > end-volume
> > >
> > > ### Add IO-Cache feature
> > > volume iocache
> > > type performance/io-cache
> > > option page-size 256KB
> > > option page-count 2
> > > subvolumes readahead
> > > end-volume
> > >
> > > ### Add writeback feature
> > > #volume writeback
> > > # type performance/write-behind
> > > # option aggregate-size 1MB
> > > # option flush-behind off
> > > # subvolumes iocache
> > > #end-volume
> > >
> > > When I do the test manually, everything goes fine. What I think is
> > > happening is that gluster isn't having enough time to sync all clients
> > > before the clients try to access the files (those servers are very busy
> > > ones; they receive millions of requests per day).
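> > >
> > > The manual test is basically something like the following (web1 and
> > > web2 stand for any two of the clients, and the file name is just an
> > > example):
> > >
> > >     # on one client, create a file through the gluster mount
> > >     ssh web1 'date > /C3Systems/data/domains/webmail.pop.com.br/attachments/race-test'
> > >     # immediately on another client, try to read it back
> > >     ssh web2 'cat /C3Systems/data/domains/webmail.pop.com.br/attachments/race-test'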
> > >
> > > Is this configuration appropriate for this situation? Is it a bug? A
> > > feature ;-)? Is there any option, like the sync option used in NFS, that
> > > I can use in order to guarantee that once a file is written, all the
> > > clients already have it?
> > >
> > > TIA,
> > >
> > > Claudio Cuqui
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > If I traveled to the end of the rainbow
> > > As Dame Fortune did intend,
> > > Murphy would be there to tell me
> > > The pot's at the other end.
> >
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>
> --
> Amar Tumballi
> Gluster/GlusterFS Hacker
> [bulde on #gluster/irc.gnu.org]
> http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!
--
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!