[Gluster-users] server side afr, workstation unify?

Wolfgang Pauli Wolfgang.Pauli at Colorado.EDU
Wed Aug 6 05:31:04 UTC 2008


Too bad. I will just set this up as one big unify volume then. That should work 
just as well, with nightly backups via tob.

Regarding your personal note: you are lucky that you did not have to deal 
with the CU network policy then. In our offices, 10 Mbit/s is free, 100 Mbit/s 
is 20 per jack, and 1 Gbit/s is 90 per jack. That's why we need to use the 
unify translator with the nufa scheduler to keep things local. :( Even though 
client-side afr would be the way to go ...
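
For reference, here is roughly what I have in mind for the nufa setup (just a 
sketch; the volume names are from my config further down, and the nufa option 
names are as I understood them from the wiki, so they may need adjusting for 
your glusterfs version):

volume unify0
   type cluster/unify
   option namespace dream-ns
   option scheduler nufa
   option nufa.local-volume-name dream   # prefer the posix volume on this host
   option nufa.limits.min-free-disk 5%   # spill over to echo/neo when space runs low
   subvolumes dream echo neo
end-volume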

I tried to use all the different performance translators to speed things up, 
but to no avail. 
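
In case it is useful, this is the kind of client-side stack I tried (translator 
and option names as given in the docs; the sizes are just values I experimented 
with, not recommendations):

volume readahead
   type performance/read-ahead
   option page-size 128KB        # read-ahead block size
   option page-count 4           # number of blocks to prefetch
   subvolumes unify0
end-volume

volume writebehind
   type performance/write-behind
   option aggregate-size 128KB   # batch small writes into larger ones
   subvolumes readahead
end-volume

volume iocache
   type performance/io-cache
   option cache-size 64MB        # client-side read cache
   subvolumes writebehind
end-volume

None of it made a noticeable difference, presumably because the 100 Mbit 
network itself is the bottleneck.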

Wolfgang

On Tuesday 05 August 2008 22:58:40 Keith Freedman wrote:
> Aha, yes. I now see the problem, and this is probably a bug report
> that needs to be sent to the devs.
>
> If you mean that the xattr isn't being set on the UNIFY volume, then this
> makes sense.
> Most likely the extended attributes aren't being passed through the
> unify volume, so they don't get written to the disk.
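>
> A quick way to confirm that would be to look at the backend directories on
> the server with getfattr (from the attr package); the paths here are the
> ones from your vol file:
>
> getfattr -d -m trusted.glusterfs -e hex /glusterfs
> getfattr -d -m trusted.glusterfs -e hex /glusterfs-mirror
>
> If /glusterfs-mirror shows a trusted.glusterfs.version attribute but
> /glusterfs (behind unify) never does, that would back up the theory that
> the attributes aren't making it through unify.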
>
> I suppose there are two options for this. One is for gluster to
> always use extended attributes when the filesystem supports them
> (this could possibly be an option on the storage translator to turn
> it off [or on, depending on what they think the default should be]).
> The advantage of this is that AFR can be enabled at any time on a
> gluster filesystem without requiring any preparation.
> The other is to require the intermediary translators to pass through
> extended attributes when necessary.
>
> On a personal note... all my brothers went to CU; I went to CSU :)
>
> Keith
>
> At 09:41 PM 8/5/2008, Wolfgang Pauli wrote:
> >Hi,
> >
> >Hm ... I think I understood your email and we are on the same page.
> > However, it seems like an afr of a unify volume and a posix volume doesn't
> > work.
> >
> >Files created in the unify volume never show up in the mounted afr volume.
> > Creating a file in the afr volume works, but it is followed by
> > Input/Output errors until I do a setfattr -x trusted.glusterfs.version
> > on the directories. (I can give a more detailed description in case this
> > looks like a bug.)
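> >
> > (In case it helps anyone reading the archive, the workaround I run is
> > roughly this, on the backend directories of the afr subvolumes -- adjust
> > the path to your layout:
> >
> > find /glusterfs-mirror -type d -exec setfattr -x trusted.glusterfs.version {} \;
> >
> > and the Input/Output errors go away afterwards.)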
> >
> >Is it possible that an afr over a unify is just not supposed to work?
> >
> >Thanks!
> >
> >Wolfgang
> >
> >On Tuesday 05 August 2008 20:56:10 Keith Freedman wrote:
> > > OK, I see what you're trying to do.
> > > I believe the afr of the unify on the second "another node" should be fine.
> > >
> > > I'm guessing what you're experiencing is that dream-mirror is empty?
> > >
> > > For the AFR brick, I'd set your unify as the local read volume, unless
> > > you want anothernode2 to read from the mirror, which I think you don't.
> > >
> > > Is something mounting afr0? As I understand it, the afr happens
> > > when files are accessed through the AFR volume.
> > > So just defining the volume doesn't accomplish anything.
> > > When you access a file that's mounted on the afr volume (or on a
> > > volume below the afr volume), the AFR translator asks both bricks
> > > for the extended attributes and the file timestamp (and for the
> > > directory as well). If they're not the same, then it copies over
> > > the newer one to the mirror(s). However, if no file request ever
> > > passes through the AFR translator, then nothing gets replicated.
> > >
> > > So, some node has to mount the AFR brick.
> > >
> > > Your configuration is actually a good one for doing periodic mirroring.
> > > You mount the afr volume, run a find (there are examples in the wiki)
> > > across the new mount point, thus causing auto-healing of the entire
> > > volume, then unmount it.
> > > That gives you, effectively, a point-in-time snapshot.
> > >
> > > You then mount it again and run the find to auto-heal.
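> > >
> > > Something along these lines (the mount point here is just an example,
> > > and this is roughly the find from the wiki examples as I remember it):
> > >
> > > # read the first byte of every file through the AFR mount so the
> > > # translator compares versions and heals whatever is out of date
> > > find /mnt/afr0 -type f -exec head -c 1 {} \; > /dev/null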
> > >
> > > However, if you want "live" replication, then you need the AFR volume
> > > to be in use and active.
> > >
> > > Ideally, you should have ALL the nodes using the AFR config, and
> > > mounting the AFR volume --- set the unify brick as the read volume.
> > >
> > > This way, any time a node reads data, it reads from the unify, and any
> > > time one of them writes data, it gets written to the mirror via the
> > > AFR translator.
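> > >
> > > Concretely (untested, just adapted from your vol files), the server
> > > would export afr0 instead of unify0 and the clients would mount that;
> > > the read-subvolume line is only a guess -- use it only if your afr
> > > version actually supports such an option:
> > >
> > > volume afr0
> > >    type cluster/afr
> > >    subvolumes unify0 dream-mirror
> > >    # option read-subvolume unify0   # only if supported; prefer unify for reads
> > > end-volume
> > >
> > > volume server
> > >    type protocol/server
> > >    option transport-type tcp/server
> > >    subvolumes dream dream-ns afr0   # keep the existing auth lines, plus:
> > >    option auth.ip.afr0.allow *
> > > end-volume
> > >
> > > # and in glusterfs-client-dream.vol:
> > > volume afr0
> > >    type protocol/client
> > >    option transport-type tcp/client
> > >    option remote-host 127.0.0.1
> > >    option remote-subvolume afr0
> > > end-volume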
> > >
> > > I hope I understood your intentions clearly.
> > >
> > > Keith
> > >
> > > At 04:09 PM 8/5/2008, Wolfgang Pauli wrote:
> > > >Hi,
> > > >
> > > >Thanks for your reply.
> > > >
> > > >Here is the part of my configuration file that might explain better
> > > > what I am trying to do.
> > > >
> > > >-----------
> > > >#glusterfs-server-dream.vol
> > > >
> > > ># dream-mirror is the directory where I would like to have a complete
> > > > copy of unify0
> > > >volume dream-mirror
> > > >    type storage/posix
> > > >    option directory /glusterfs-mirror
> > > >end-volume
> > > >
> > > >volume dream
> > > >    type storage/posix
> > > >    option directory /glusterfs
> > > >end-volume
> > > >
> > > ># namespace for unify0
> > > >volume dream-ns
> > > >    type storage/posix
> > > >    option directory /glusterfs-ns
> > > >end-volume
> > > >
> > > ># another node
> > > >volume neo
> > > >    type protocol/client
> > > >    option transport-type tcp/client
> > > >    option remote-host neo # defined in /etc/hosts
> > > >    option remote-subvolume neo
> > > >end-volume
> > > >
> > > >#another node
> > > >volume echo
> > > >    type protocol/client
> > > >    option transport-type tcp/client
> > > >    option remote-host echo
> > > >    option remote-subvolume echo
> > > >end-volume
> > > >
> > > >volume unify0
> > > >   type cluster/unify
> > > >   option scheduler rr # round robin # going to switch to NUFA
> > > >   option namespace dream-ns
> > > >   subvolumes dream echo neo
> > > >end-volume
> > > >
> > > >volume afr0
> > > >   type cluster/afr
> > > >   subvolumes unify0 dream-mirror
> > > >end-volume
> > > >
> > > >volume server
> > > >    type protocol/server
> > > >    option transport-type tcp/server
> > > >    subvolumes dream dream-ns unify0
> > > >    option auth.ip.dream.allow *
> > > >    option auth.ip.dream-ns.allow *
> > > >    option auth.ip.unify0.allow *
> > > >#   option auth.ip.dream-mirror.allow *
> > > >#   option auth.ip.afr0.allow *
> > > >end-volume
> > > >
> > > >----------
> > > >
> > > ># glusterfs-client-dream.vol
> > > >
> > > >volume unify0
> > > >    type protocol/client
> > > >    option transport-type tcp/client
> > > >    option remote-host 127.0.0.1     # the local glusterfs server
> > > >    option remote-subvolume unify0   # the unify volume exported above
> > > >end-volume
> > > >
> > > >----------------
> > > >
> > > >The problem with our network is that it is slow (100 Mbit/s). So it
> > > > would be great if all files (talking about /home/*) would just stay
> > > > on the workstations unless needed somewhere else. So I would like to
> > > > do an afr over a unify volume, but so far the dream-mirror volume
> > > > remains empty.
> > > >
> > > >Thanks!
> > > >
> > > >Wolfgang
> > > >
> > > >
> >
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users




