[Gluster-users] GlusterFS running, but no syncing is done

Stas Oskin stas.oskin at gmail.com
Mon Mar 16 09:52:29 UTC 2009


Hi.

This was the missing step :).

Before mounting as explained here
(http://www.gluster.org/docs/index.php/Execution_guide), I was simply
launching glusterfs with the client.vol file, without giving it a mount
point. It took me a while to figure out that it couldn't be that simple.
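
In case it helps anyone else, the difference was roughly this (the paths
are from my setup):

# what I was doing - starts the client process, but mounts nothing:
glusterfs -f /etc/glusterfs/client.vol

# what actually works - the mount point goes on the command line:
glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs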

Questions:

1) How do I ensure that the server runs on boot - are there any init scripts?

2) How do I modprobe fuse on boot - or is it enough to mount via fstab as
described here
(http://www.gluster.org/docs/index.php/Mounting_a_GlusterFS_Volume)?

3) There were some messages in this list about tuning block-sizes - is this
the relevant link?
http://www.gluster.org/docs/index.php/Guide_to_Optimizing_GlusterFS#Block_Device_Tuning
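
In the meantime, here is what I'm planning to try, based on my reading of
those pages - the exact paths, device name and read-ahead value are my
guesses, so please correct me if they are wrong:

# /etc/fstab - mount the client volfile at boot via the glusterfs mount helper
/etc/glusterfs/client.vol  /mnt/glusterfs  glusterfs  defaults  0  0

# /etc/modules (on Debian) - make sure the fuse module is loaded at boot
fuse

# start the server at boot, e.g. from /etc/rc.local, until init scripts exist
/usr/sbin/glusterfsd -f /etc/glusterfs/server.vol

# block device read-ahead tuning, per the optimization guide
blockdev --setra 4096 /dev/sdb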

Thanks!


2009/3/16 Krishna Srinivas <krishna at zresearch.com>

> Volume file looks fine. Looking at the error message:
>
> >> > 2009-03-15 14:21:51 E [glusterfsd.c:551:glusterfs_graph_init] glusterfs: no
> >> > valid translator loaded at the top or no mount point given. exiting
> >> > 2009-03-15 14:21:51 E [glusterfsd.c:1127:main] glusterfs: translator
> >> > initialization failed.  exiting
>
> Did you give the mount point on the command line? What is the command
> that you used to mount the glusterfs?
>
> Krishna
>
> On Mon, Mar 16, 2009 at 1:53 AM, Stas Oskin <stas.oskin at gmail.com> wrote:
> > Hi.
> >
> > It's exactly the same as what you posted earlier:
> >
> > client.vol
> > ----
> > ## Reference volume "home1" from remote server
> > volume home1
> >  type protocol/client
> >  option transport-type tcp/client
> >  option remote-host 192.168.253.41      # IP address of remote host
> >  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >  option transport-timeout 10           # value in seconds; it should be set relatively low
> > end-volume
> >
> > ## Reference volume "home2" from remote server
> > volume home2
> >  type protocol/client
> >  option transport-type tcp/client
> >  option remote-host 192.168.253.42      # IP address of remote host
> >  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >  option transport-timeout 10           # value in seconds; it should be set relatively low
> > end-volume
> >
> > volume home
> >  type cluster/afr
> >  option metadata-self-heal on
> >  subvolumes home1 home2
> > end-volume
> >
> >
> > server.vol
> > ---
> > volume home1
> >  type storage/posix                   # POSIX FS translator
> >  option directory /media/storage        # Export this directory
> > end-volume
> >
> > volume posix-locks-home1
> >  type features/posix-locks
> >  option mandatory-locks on
> >  subvolumes home1
> > end-volume
> >
> > ### Add network serving capability to above home.
> > volume server
> >  type protocol/server
> >  option transport-type tcp
> >  subvolumes posix-locks-home1
> >  option auth.addr.posix-locks-home1.allow * # Allow access to "home1" volume
> > end-volume
> >
> > Regards.
> >
> > 2009/3/15 Krishna Srinivas <krishna at zresearch.com>
> >>
> >> Can you paste your client vol file, and the command you used to mount
> >> the glusterfs?
> >>
> >> Krishna
> >>
> >> > On Sun, Mar 15, 2009 at 5:57 PM, Stas Oskin <stas.oskin at gmail.com> wrote:
> >> > Hi.
> >> >
> >> > Just tried this, server works but the client fails.
> >> >
> >> > Here is the error that the client prints:
> >> >
> >> > 2009-03-15 14:21:51 E [glusterfsd.c:551:glusterfs_graph_init] glusterfs: no
> >> > valid translator loaded at the top or no mount point given. exiting
> >> > 2009-03-15 14:21:51 E [glusterfsd.c:1127:main] glusterfs: translator
> >> > initialization failed.  exiting
> >> >
> >> >
> >> > Three possible reasons I can think of:
> >> >
> >> > 1) The volume is always home1 on both the servers, while in the client
> >> > file both home1 and home2 are referenced. Shouldn't the .42 have home2
> >> > defined as its volume? Or does it not matter, since home2 is a
> >> > client-only volume label?
> >> >
> >> > 2) The "/media/storage" is a regular directory on disk. Should it be a
> >> > mount or something else?
> >> >
> >> > 3) I'm using a stock kernel without any modifications, nor did I make
> >> > any changes to the filesystem for extended attributes (using ext3).
> >> > Would fuse work without any problems?
> >> >
> >> > Thanks!
> >> >
> >> > 2009/3/12 Krishna Srinivas <krishna at zresearch.com>
> >> >>
> >> >> server.vol :
> >> >> -----
> >> >> volume home1
> >> >>  type storage/posix                   # POSIX FS translator
> >> >>  option directory /media/storage        # Export this directory
> >> >> end-volume
> >> >>
> >> >> volume posix-locks-home1
> >> >>  type features/posix-locks
> >> >>  option mandatory-locks on
> >> >>  subvolumes home1
> >> >> end-volume
> >> >>
> >> >> ### Add network serving capability to above home.
> >> >> volume server
> >> >>  type protocol/server
> >> >>  option transport-type tcp
> >> >>  subvolumes posix-locks-home1
> >> >>  option auth.addr.posix-locks-home1.allow * # Allow access to "home1" volume
> >> >> end-volume
> >> >>
> >> >> ----------
> >> >>
> >> >> client.vol:
> >> >> ---------
> >> >>
> >> >> ## Reference volume "home1" from remote server
> >> >> volume home1
> >> >>  type protocol/client
> >> >>  option transport-type tcp/client
> >> >>  option remote-host 192.168.253.41      # IP address of remote host
> >> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >>  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> end-volume
> >> >>
> >> >> ## Reference volume "home2" from remote server
> >> >> volume home2
> >> >>  type protocol/client
> >> >>  option transport-type tcp/client
> >> >>  option remote-host 192.168.253.42      # IP address of remote host
> >> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >>  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> end-volume
> >> >>
> >> >> volume home
> >> >>  type cluster/afr
> >> >>  option metadata-self-heal on
> >> >>  subvolumes home1 home2
> >> >> end-volume
> >> >>
> >> >> --------
> >> >>
> >> >> Make sure the IP addresses are correct.
> >> >> You can use the same server.vol and client.vol for both the machines
> >> >> (assuming the backend directory names are the same).
> >> >>
> >> >> Krishna
> >> >>
> >> >> On Thu, Mar 12, 2009 at 8:28 PM, Stas Oskin <stas.oskin at gmail.com> wrote:
> >> >> > Hi.
> >> >> >
> >> >> > Did you mean to change their order to become something like this?
> >> >> >
> >> >> > Otherwise, can you please just post the correct version? I'm not
> >> >> > quite familiar with the syntax, and would appreciate an example I
> >> >> > can work from and learn from.
> >> >> >
> >> >> > Thanks!
> >> >> >
> >> >> >
> >> >> > glusterfs.vol (client)
> >> >> > ### Create automatic file replication
> >> >> > volume home
> >> >> >  type cluster/afr
> >> >> >  option metadata-self-heal on
> >> >> >  option read-subvolume posix-locks-home1
> >> >> > #  option favorite-child home2
> >> >> >  subvolumes posix-locks-home1 home2
> >> >> > end-volume
> >> >> > ## Reference volume "home2" from remote server
> >> >> > volume home2
> >> >> >  type protocol/client
> >> >> >  option transport-type tcp/client
> >> >> >  option remote-host 192.168.253.41      # IP address of remote host
> >> >> >  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >> >  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> > end-volume
> >> >> >
> >> >> > glusterfsd.vol (server)
> >> >> >
> >> >> > ### Add network serving capability to above home.
> >> >> > volume server
> >> >> >  type protocol/server
> >> >> >  option transport-type tcp
> >> >> >  subvolumes posix-locks-home1
> >> >> >  option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1 # Allow access to "home1" volume
> >> >> > end-volume
> >> >> > volume posix-locks-home1
> >> >> >  type features/posix-locks
> >> >> >  option mandatory-locks on
> >> >> >  subvolumes home1
> >> >> > end-volume
> >> >> > volume home1
> >> >> >  type storage/posix                   # POSIX FS translator
> >> >> >  option directory /media/storage        # Export this directory
> >> >> > end-volume
> >> >> > Regards.
> >> >> >
> >> >> > 2009/3/12 Krishna Srinivas <krishna at zresearch.com>
> >> >> >>
> >> >> >> Hi Stas,
> >> >> >> Excuse me for missing out on this mail.
> >> >> >>
> >> >> >> Your vol files for having 2 servers and 2 clients are incorrect.
> >> >> >>
> >> >> >> On the server vol (on both machines) you need to have:
> >> >> >> protocol/server -> features/locks -> storage/posix
> >> >> >>
> >> >> >> On the client vol (on both machines) you need to have:
> >> >> >> cluster/afr -> (two protocol/clients)
> >> >> >>
> >> >> >> Each of the protocol/clients connects to one of the servers.
> >> >> >>
> >> >> >> You would use the client vol to mount the glusterfs.
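> >> >> >>
> >> >> >> For example (mount point and volfile path assumed):
> >> >> >>
> >> >> >>   glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs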
> >> >> >>
> >> >> >> Let us know if you still face problems.
> >> >> >>
> >> >> >> Krishna
> >> >> >>
> >> >> >> On Tue, Mar 10, 2009 at 1:32 AM, Stas Oskin <stas.oskin at gmail.com> wrote:
> >> >> >> > Hi.
> >> >> >> > The boxes participating in AFR are running OpenVZ host kernels - can
> >> >> >> > it be related in any way to the issue?
> >> >> >> > Regards.
> >> >> >> >
> >> >> >> > 2009/3/9 Stas Oskin <stas.oskin at gmail.com>
> >> >> >> >>
> >> >> >> >> Hi.
> >> >> >> >> These are my new 2 vol files, one for the client and one for the server.
> >> >> >> >> Can you advise if they are correct?
> >> >> >> >> Thanks in advance.
> >> >> >> >> glusterfs.vol (client)
> >> >> >> >> ## Reference volume "home2" from remote server
> >> >> >> >> volume home2
> >> >> >> >>  type protocol/client
> >> >> >> >>  option transport-type tcp/client
> >> >> >> >>  option remote-host 192.168.253.41      # IP address of remote host
> >> >> >> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >> >> >>  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> >> >> end-volume
> >> >> >> >> ### Create automatic file replication
> >> >> >> >> volume home
> >> >> >> >>  type cluster/afr
> >> >> >> >>  option metadata-self-heal on
> >> >> >> >>  option read-subvolume posix-locks-home1
> >> >> >> >> #  option favorite-child home2
> >> >> >> >>  subvolumes posix-locks-home1 home2
> >> >> >> >> end-volume
> >> >> >> >>
> >> >> >> >> glusterfsd.vol (server)
> >> >> >> >>
> >> >> >> >> volume home1
> >> >> >> >>  type storage/posix                   # POSIX FS translator
> >> >> >> >>  option directory /media/storage        # Export this directory
> >> >> >> >> end-volume
> >> >> >> >> volume posix-locks-home1
> >> >> >> >>  type features/posix-locks
> >> >> >> >>  option mandatory-locks on
> >> >> >> >>  subvolumes home1
> >> >> >> >> end-volume
> >> >> >> >> ### Add network serving capability to above home.
> >> >> >> >> volume server
> >> >> >> >>  type protocol/server
> >> >> >> >>  option transport-type tcp
> >> >> >> >>  subvolumes posix-locks-home1
> >> >> >> >>  option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1 # Allow access to "home1" volume
> >> >> >> >> end-volume
> >> >> >> >> 2009/3/9 Krishna Srinivas <krishna at zresearch.com>
> >> >> >> >>>
> >> >> >> >>> Stas,
> >> >> >> >>>
> >> >> >> >>> I think nothing changed between rc2 and rc4 that could
> >> >> >> >>> affect this functionality.
> >> >> >> >>>
> >> >> >> >>> Your vol files look fine; I will look into why it is not
> >> >> >> >>> working.
> >> >> >> >>>
> >> >> >> >>> Do not use a single process as both server and client, as we saw
> >> >> >> >>> issues related to locking. Can you see if using separate processes
> >> >> >> >>> for server and client works fine w.r.t. replication?
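> >> >> >> >>>
> >> >> >> >>> For example (a sketch; paths assumed), so that the server and the
> >> >> >> >>> client each run as a separate process with its own vol file:
> >> >> >> >>>
> >> >> >> >>> # on each machine
> >> >> >> >>> glusterfsd -f /etc/glusterfs/server.vol
> >> >> >> >>> glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs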
> >> >> >> >>>
> >> >> >> >>> Also, the subvolumes list of all AFRs should be in the same order
> >> >> >> >>> (in your case it is interchanged).
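> >> >> >> >>>
> >> >> >> >>> A sketch with your volume names: machine 1 currently has
> >> >> >> >>>
> >> >> >> >>>   subvolumes home2 posix-locks-home1   # machine 2's store, then machine 1's
> >> >> >> >>>
> >> >> >> >>> so machine 2 should also list machine 2's store first, which in its
> >> >> >> >>> own vol file is the local volume:
> >> >> >> >>>
> >> >> >> >>>   subvolumes posix-locks-home1 home2   # machine 2's store, then machine 1's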
> >> >> >> >>>
> >> >> >> >>> Regards
> >> >> >> >>> Krishna
> >> >> >> >>>
> >> >> >> >>> On Mon, Mar 9, 2009 at 5:44 PM, Stas Oskin <stas.oskin at gmail.com> wrote:
> >> >> >> >>> > Actually, I see a new version came out, rc4.
> >> >> >> >>> > Any idea if anything related was fixed?
> >> >> >> >>> > Regards.
> >> >> >> >>> > 2009/3/9 Stas Oskin <stas.oskin at gmail.com>
> >> >> >> >>> >>
> >> >> >> >>> >> Hi.
> >> >> >> >>> >>>
> >> >> >> >>> >>> Was it working for you previously? Any other error logs on the
> >> >> >> >>> >>> machine with afr? What version are you using? If it was working
> >> >> >> >>> >>> previously, what changed in your setup recently? Can you paste
> >> >> >> >>> >>> your vol files (just to be sure)?
> >> >> >> >>> >>
> >> >> >> >>> >>
> >> >> >> >>> >> Nope, it is actually my first setup in the lab. No errors - it
> >> >> >> >>> >> just seems not to synchronize anything. The version I'm using is
> >> >> >> >>> >> the latest one - 2.0 rc2.
> >> >> >> >>> >> Perhaps I need to modify something else in addition to the
> >> >> >> >>> >> GlusterFS installation - like filesystem attributes or something?
> >> >> >> >>> >> The approach I'm using is the one that was recommended by Keith
> >> >> >> >>> >> over direct emails (Keith, hope you don't mind me posting them :) ).
> >> >> >> >>> >> The idea is basically to have a single vol file for both the
> >> >> >> >>> >> client and the server, and to have one glusterfs process doing
> >> >> >> >>> >> the job both as client and as server.
> >> >> >> >>> >> Thanks for the help.
> >> >> >> >>> >> Server 1:
> >> >> >> >>> >> volume home1
> >> >> >> >>> >>  type storage/posix                   # POSIX FS translator
> >> >> >> >>> >>  option directory /media/storage        # Export this directory
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> volume posix-locks-home1
> >> >> >> >>> >>  type features/posix-locks
> >> >> >> >>> >>  option mandatory-locks on
> >> >> >> >>> >>  subvolumes home1
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ## Reference volume "home2" from remote server
> >> >> >> >>> >> volume home2
> >> >> >> >>> >>  type protocol/client
> >> >> >> >>> >>  option transport-type tcp/client
> >> >> >> >>> >>  option remote-host 192.168.253.42      # IP address of remote host
> >> >> >> >>> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >> >> >>> >>  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ### Add network serving capability to above home.
> >> >> >> >>> >> volume server
> >> >> >> >>> >>  type protocol/server
> >> >> >> >>> >>  option transport-type tcp
> >> >> >> >>> >>  subvolumes posix-locks-home1
> >> >> >> >>> >>  option auth.addr.posix-locks-home1.allow 192.168.253.42,127.0.0.1 # Allow access to "home1" volume
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ### Create automatic file replication
> >> >> >> >>> >> volume home
> >> >> >> >>> >>  type cluster/afr
> >> >> >> >>> >>  option metadata-self-heal on
> >> >> >> >>> >>  option read-subvolume posix-locks-home1
> >> >> >> >>> >> #  option favorite-child home2
> >> >> >> >>> >>  subvolumes home2 posix-locks-home1
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >>
> >> >> >> >>> >> Server 2:
> >> >> >> >>> >>
> >> >> >> >>> >> volume home1
> >> >> >> >>> >>  type storage/posix                   # POSIX FS translator
> >> >> >> >>> >>  option directory /media/storage        # Export this directory
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> volume posix-locks-home1
> >> >> >> >>> >>  type features/posix-locks
> >> >> >> >>> >>  option mandatory-locks on
> >> >> >> >>> >>  subvolumes home1
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ## Reference volume "home2" from remote server
> >> >> >> >>> >> volume home2
> >> >> >> >>> >>  type protocol/client
> >> >> >> >>> >>  option transport-type tcp/client
> >> >> >> >>> >>  option remote-host 192.168.253.41      # IP address of remote host
> >> >> >> >>> >>  option remote-subvolume posix-locks-home1     # use home1 on remote host
> >> >> >> >>> >>  option transport-timeout 10           # value in seconds; it should be set relatively low
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ### Add network serving capability to above home.
> >> >> >> >>> >> volume server
> >> >> >> >>> >>  type protocol/server
> >> >> >> >>> >>  option transport-type tcp
> >> >> >> >>> >>  subvolumes posix-locks-home1
> >> >> >> >>> >>  option auth.addr.posix-locks-home1.allow 192.168.253.41,127.0.0.1 # Allow access to "home1" volume
> >> >> >> >>> >> end-volume
> >> >> >> >>> >>
> >> >> >> >>> >> ### Create automatic file replication
> >> >> >> >>> >> volume home
> >> >> >> >>> >>  type cluster/afr
> >> >> >> >>> >>  option metadata-self-heal on
> >> >> >> >>> >>  option read-subvolume posix-locks-home1
> >> >> >> >>> >> #  option favorite-child home2
> >> >> >> >>> >>  subvolumes home2 posix-locks-home1
> >> >> >> >>> >> end-volume
> >> >> >> >
> >> >> >
> >> >
> >
>