[Gluster-devel] Trying to run a mailcluster
Guido Smit
guido at comlog.nl
Fri Feb 15 15:51:05 UTC 2008
Only one machine is both server and client; the others are each either a
server or a client. The client share is mounted under /mail.
After the log lines below there is nothing more; I had to kill the
processes because all services were hanging.
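As a side note on the afr_lk_cbk line in the quoted log: the op_errno=77 value can be decoded with a minimal Python sketch like the one below (assuming a Linux errno table, where 77 is EBADFD; the numeric mapping is platform-specific):

```python
import errno
import os

# Decode the op_errno value reported by the afr_lk_cbk log line.
# NOTE: errno numbers are platform-specific; on Linux, 77 is EBADFD
# ("File descriptor in bad state").
op_errno = 77
name = errno.errorcode.get(op_errno, "unknown")
print(f"op_errno={op_errno} ({name}): {os.strerror(op_errno)}")
```

On Linux this reports "File descriptor in bad state", which would be consistent with the lock call failing on a descriptor the server side no longer considers valid.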
Anand Avati wrote:
> Guido,
> do you share the same machines as server and client? is your mount
> point directly under / ?
>
> avati
>
> 2008/2/15, Guido Smit <guido at comlog.nl>:
>
> Hi all,
>
> I've been trying to set up a mail cluster for a while now, using
> glusterfs as the filesystem. I've installed fuse-2.7.2glfs8 and
> glusterfs-tla662 on CentOS 5.
>
> For now I'm running three machines: one configured as a dedicated
> server, one as client only, and one as both server and client.
>
> Normal file operations work fine; speed could be a little better, but
> overall I'm happy with it. When Dovecot starts, the following errors
> show up (the first line repeats for every client that logs in):
>
> 2008-02-15 14:27:43 E [afr.c:3730:afr_lk_cbk] afr:
> (path=/comlog.nl/xxxxxxxxx/Maildir/dovecot.index.log child=pop2-mail)
> op_ret=-1 op_errno=77
> 2008-02-15 14:30:52 C [client-protocol.c:222:call_bail] pop2-mail-ns:
> bailing transport
> 2008-02-15 14:30:52 C [client-protocol.c:222:call_bail] pop2-mail:
> bailing transport
> 2008-02-15 14:32:40 C [client-protocol.c:222:call_bail] pop2-mail-ns:
> bailing transport
>
> Everything stalls: no ls, no df, nothing. I have to kill all dovecot
> processes, then kill glusterfs and glusterfsd.
> I've tried with an empty namespace on both servers, but it didn't
> resolve the problem.
>
> I really need some advice here.
>
> My configs:
> glusterfs-server.vol
>
> volume pop1-mail-ns
> type storage/posix
> option directory /home/namespace
> end-volume
>
> volume pop1-mail-ds
> type storage/posix
> option directory /home/export
> end-volume
>
> volume pop1-mail
> type features/posix-locks
> option mandatory on # enables mandatory locking on all files
> subvolumes pop1-mail-ds
> end-volume
>
> volume pop2-mail
> type protocol/client
> option transport-type tcp/client
> option remote-host 62.59.252.42
> option remote-subvolume pop2-mail-ds
> end-volume
>
> volume pop2-mail-ns
> type protocol/client
> option transport-type tcp/client
> option remote-host 62.59.252.42
> option remote-subvolume pop2-mail-ns
> end-volume
>
> volume afr
> type cluster/afr
> subvolumes pop1-mail pop2-mail
> end-volume
>
> volume afr-ns
> type cluster/afr
> subvolumes pop1-mail-ns pop2-mail-ns
> end-volume
>
> volume unify
> type cluster/unify
> option namespace afr-ns
> option scheduler rr
> subvolumes afr
> end-volume
>
> volume mail-ds-readahead
> type performance/read-ahead
> option page-size 128kB # 256KB is the default option
> option page-count 4 # 2 is default option
> option force-atime-update off # default is off
> subvolumes unify
> end-volume
>
> volume mail-ds-writebehind
> type performance/write-behind
> option aggregate-size 1MB # default is 0bytes
> option flush-behind on # default is 'off'
> subvolumes mail-ds-readahead
> end-volume
>
> volume mail-ds
> type performance/io-threads
> option thread-count 4 # default is 1
> option cache-size 32MB #64MB
> subvolumes mail-ds-writebehind
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> subvolumes pop1-mail-ds pop1-mail-ns mail-ds
> option auth.ip.pop1-mail-ds.allow 62.59.252.*,127.0.0.1
> option auth.ip.pop1-mail-ns.allow 62.59.252.*,127.0.0.1
> option auth.ip.mail-ds.allow 62.59.252.*,127.0.0.1
> end-volume
>
>
> glusterfs-client.vol:
>
> volume mailspool
> type protocol/client
> option transport-type tcp/client
> option remote-host 62.59.252.41
> option remote-subvolume mail-ds
> end-volume
>
> volume readahead
> type performance/read-ahead
> option page-size 128kB
> option page-count 16
> option force-atime-update off # default is off
> subvolumes mailspool
> end-volume
>
> volume writeback
> type performance/write-behind
> option aggregate-size 1MB
> option flush-behind on # default is 'off'
> subvolumes readahead
> end-volume
>
> volume iothreads
> type performance/io-threads
> option thread-count 4 # default is 1
> option cache-size 32MB #64MB
> subvolumes writeback
> end-volume
>
> volume io-cache
> type performance/io-cache
> option cache-size 128MB # default is 32MB
> option page-size 1MB #128KB is default option
> option priority *:0 # default is '*:0'
> option force-revalidate-timeout 2 # default is 1
> subvolumes iothreads
> end-volume
>
>
>
> --
> No virus found in this outgoing message.
> Checked by AVG Free Edition.
> Version: 7.5.516 / Virus Database: 269.20.5/1279 - Release Date:
> 2/14/2008 6:35 PM
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>
>
>
> --
> If I traveled to the end of the rainbow
> As Dame Fortune did intend,
> Murphy would be there to tell me
> The pot's at the other end.
--
Met vriendelijke groet,
Guido Smit
ComLog B.V.
Televisieweg 133
1322 BE Almere
T. 036 5470500
F. 036 5470481