[Gluster-users] (no subject)

Lakshmipathi lakshmipathi at gluster.com
Tue Dec 21 05:53:32 UTC 2010


Hi Jeroen,
Sorry for the delay. We recommend using glusterfs-volgen generated
configuration/volume files for gluster-3.0.x, and the gluster CLI for 3.1.x.
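
For example, for a simple two-node replicated volume the 3.0.x command
would be something like (hostnames and export paths are placeholders):

glusterfs-volgen --name repvol --raid 1 server1:/export/brick server2:/export/brick

and the rough 3.1.x equivalent with the gluster CLI:

gluster volume create repvol replica 2 server1:/export/brick server2:/export/brick
gluster volume start repvol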


-- 
----
Cheers,
Lakshmipathi.G
FOSS Programmer.
----- Original Message -----
From: "Jeroen Koekkoek" <j.koekkoek at perrit.nl>
To: "Lakshmipathi (lakshmipathi at gluster.com)" <lakshmipathi at gluster.com>
Cc: "gluster-users at gluster.org" <gluster-users at gluster.org>
Sent: Wednesday, December 15, 2010 10:10:18 PM
Subject: RE: [Gluster-users] (no subject)

Hi Lakshmipathi,

I decoupled the client and server and tested again, and the problem did not occur. That leads me to the following question: is my original configuration a supported one, i.e. is it supposed to work? I ask because that configuration was really, really fast compared to the traditional client/server model.

I tested this with GlusterFS 3.0.7. I'll repeat the steps with 3.1.1 tomorrow.
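
For reference, the decoupled test amounts to splitting the single volfile
quoted below into a separate server file and a client file, roughly along
these lines (a sketch, not the exact files; the client-side volume names
are made up and the performance translators are omitted):

----- cut -----
### file: glusterfsd.vol (server side on 172.16.104.21)

volume posix
  type storage/posix
  option directory /var/vmail_local
end-volume

volume local_brick_mta1
  type features/locks
  subvolumes posix
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 172.16.104.21
# allow both client hosts, since both now mount over protocol/client
  option auth.addr.local_brick_mta1.allow 172.16.104.21,172.16.104.22
  subvolumes local_brick_mta1
end-volume

### file: glusterfs.vol (client side, the same on both hosts)

volume brick_mta1
  type protocol/client
  option transport-type tcp
  option remote-host 172.16.104.21
  option remote-subvolume local_brick_mta1
end-volume

volume brick_mta2
  type protocol/client
  option transport-type tcp
  option remote-host 172.16.104.22
  option remote-subvolume local_brick_mta2
end-volume

volume afr
  type cluster/replicate
  subvolumes brick_mta1 brick_mta2
end-volume
----- /cut -----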

Regards,
Jeroen

> -----Original Message-----
> From: Jeroen Koekkoek
> Sent: Wednesday, December 15, 2010 2:00 PM
> To: 'Lakshmipathi'
> Subject: RE: [Gluster-users] (no subject)
> 
> Hi Lakshmipathi,
> 
> I forgot to mention that I use a single volfile for both the server and
> the client, so the client is actually the server and vice versa: the
> same process serves the brick over tcp and is connected to the mount
> point. Below is my configuration for a single host.
> 
> ----- cut -----
> ### file: glusterfs.vol
> 
> ################################################
> ###  GlusterFS Server and Client Volume File  ##
> ################################################
> 
> volume posix
>   type storage/posix
>   option directory /var/vmail_local
> end-volume
> 
> volume local_brick_mta1
>   type features/locks
>   subvolumes posix
> end-volume
> 
> volume server
>   type protocol/server
>   option transport-type tcp
>   option transport.socket.bind-address 172.16.104.21
>   option auth.addr.local_brick_mta1.allow 172.16.104.22
>   subvolumes local_brick_mta1
> end-volume
> 
> volume remote_brick
>   type protocol/client
>   option transport-type tcp
>   option remote-host 172.16.104.22
>   option remote-subvolume local_brick_mta2
> end-volume
> 
> volume afr
>   type cluster/replicate
> #  option read-subvolume local_brick
>   subvolumes remote_brick local_brick_mta1
> end-volume
> 
> volume writebehind
>   type performance/write-behind
>   option cache-size 1MB
>   subvolumes afr
> end-volume
> 
> volume quickread
>   type performance/quick-read
>   option cache-timeout 1
>   option max-file-size 1MB
>   subvolumes writebehind
> end-volume
> ----- /cut -----
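> 
> Each host starts a single glusterfs process with this file, which both
> serves the local brick and provides the mount, roughly like this (the
> volfile path and mount point here are just examples):
> 
> glusterfs --volfile=/etc/glusterfs/glusterfs.vol <mount-point>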
> 
> Regards,
> Jeroen
> 
> > -----Original Message-----
> > From: Lakshmipathi [mailto:lakshmipathi at gluster.com]
> > Sent: Wednesday, December 15, 2010 10:34 AM
> > To: Jeroen Koekkoek
> > Subject: Re: [Gluster-users] (no subject)
> >
> > Hi Jeroen Koekkoek,
> > We are unable to reproduce your issue with 3.1.1.
> >
> > Steps:
> > Set up 2 AFR servers and mount the volume on 2 clients.
> >
> > client1-mntpt#touch file.txt
> >
> > The file is available on both client mounts, verified with the ls command:
> > client1-mntpt#ls -l file.txt
> > client2-mntpt#ls -l file.txt
> >
> > Now unmount client2:
> > umount <mntpt-client>
> >
> > Now remove the file from client1:
> > client1-mntpt#rm file.txt
> >
> > Then mount client2 again and run ls:
> > client2-mntpt#ls -l
> >
> > file.txt is no longer available on either client, as expected.
> >
> >
> > If you are still facing this issue, send us the server and client logs
> > along with the exact steps to reproduce it.
> > Thanks.
> >
> >
> >
> > --
> > ----
> > Cheers,
> > Lakshmipathi.G
> > FOSS Programmer.
> >
> > ----- Original Message -----
> > From: "Jeroen Koekkoek" <j.koekkoek at perrit.nl>
> > To: gluster-users at gluster.org
> > Sent: Wednesday, December 15, 2010 1:05:50 PM
> > Subject: [Gluster-users] (no subject)
> >
> > Hi,
> >
> > I have a question regarding glusterfs and replicate. I have a two-node
> > setup. The following problem arises: I create a file on the mount
> > point, unmount gfs on the 2nd machine, remove the file from the 1st
> > (through the mount point), and then bring the mount point on the 2nd
> > machine back up. The file is removed (from the 2nd) if I `ls` the
> > mount point on the 1st machine, but the file is re-created (on the
> > 1st) if I `ls` the mount point on the 2nd.
> >
> > If I update the file instead of removing it, everything goes fine. The
> > file is up-to-date on both machines.
> >
> > I looked at the .landfill directory, but that is only used in a self-
> > heal situation. Is there a way I can work around this? Maybe using the
> > trash translator?
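> >
> > For example, I imagine the trash translator would sit on the server
> > side above the posix volume, something along these lines (a sketch,
> > the exact option names may differ):
> >
> > volume trash
> >   type features/trash
> >   option trash-dir /.trashcan
> >   subvolumes posix
> > end-volume
> >
> > (with the locks volume then using trash as its subvolume instead of
> > posix)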
> >
> > Best regards,
> > Jeroen Koekkoek
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




