[Gluster-devel] glusterfs and email store problem.
jeff at bofus.org
Wed Nov 7 18:48:38 UTC 2007
I am obviously new to glusterfs, however, I thought I had enabled
posix-locks?
volume posix-locks-knworksmail
type features/posix-locks
option mandatory on
subvolumes knworksmail
end-volume
or am I missing something?
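As a sanity check, here is a minimal sketch (not from this thread; the helper name and paths are my own) that probes the two operations dovecot needs directly on the mount: an fcntl() byte-range lock and a shared writable mmap(). The "No such device" (ENODEV) in the log is exactly what a FUSE filesystem of this vintage returns when it does not support shared writable mmap, so the second check should reproduce the error on the glusterfs mountpoint:

```python
import fcntl
import mmap
import os


def check_mount(mount):
    """Probe fcntl locking and shared writable mmap on `mount`.

    Returns a dict like {'fcntl': 'ok', 'mmap': '...'} so you can see
    which of the two operations the filesystem actually supports.
    """
    path = os.path.join(mount, ".dovecot-locktest")
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    os.write(fd, b"x" * 16)
    result = {}
    try:
        # POSIX byte-range lock, the same primitive dovecot's fcntl
        # locking uses; fails if the translator/transport drops locks.
        fcntl.lockf(fd, fcntl.LOCK_EX)
        fcntl.lockf(fd, fcntl.LOCK_UN)
        result["fcntl"] = "ok"
    except OSError as e:
        result["fcntl"] = str(e)
    try:
        # Shared writable mmap, as dovecot's index code does; expect
        # ENODEV ("No such device") on a FUSE mount without mmap support.
        m = mmap.mmap(fd, 16, mmap.MAP_SHARED,
                      mmap.PROT_READ | mmap.PROT_WRITE)
        m.close()
        result["mmap"] = "ok"
    except OSError as e:
        result["mmap"] = str(e)
    os.close(fd)
    os.unlink(path)
    return result


if __name__ == "__main__" and os.path.isdir("/mnt/glusterfs"):
    # "/mnt/glusterfs" is the mount point from my report; adjust as needed.
    print(check_mount("/mnt/glusterfs"))
```

Running it against a local ext3 directory should report both as "ok", which would confirm the problem is specific to the FUSE mount rather than dovecot's configuration.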
-JPH
Daniel van Ham Colchete wrote:
> Jeff,
>
> reading the dovecot website, I saw this: Dovecot allows mailboxes and
> their indexes to be modified by multiple computers at the same time,
> while still performing well. This means that Dovecot works with NFS
> and clustered filesystems.
>
> The only way of doing this is using locks (flock or fcntl). Try
> activating posix-locks.
>
> I had a similar problem with maildrop recently. Because fcntl wasn't
> working, it couldn't modify one of its files and reported a
> filesystem error.
>
> Although this doesn't explain the error message in the log, it is
> one problem you'll have to solve anyway.
>
> Best,
> Daniel
>
> On Nov 7, 2007 4:12 PM, jeff at bofus.org wrote:
>
> I am hoping someone can shed some light on this issue for me.
>
> version info first:
>
> server OS: CentOS release 4.5 (Final)
> fuse: fuse-2.7.0-glfs5
> glusterfs: glusterfs-1.3.7
>
> client OS: CentOS release 4.5 (Final)
> fuse: fuse-2.7.0-glfs5
> glusterfs: glusterfs-1.3.7
>
> Mount:
> glusterfs on /mnt/glusterfs type fuse
> (rw,nosuid,nodev,allow_other,default_permissions,max_read=1048576)
>
> Configuration contents listed below issue.
>
> Issue:
> Looking at the logs on my mail (dovecot) server, I see the
> following errors:
>     mmap() failed with index file
>     /opt/GFS/postfix/vmail/jeff at bofus.org/.Trash/.imap.index: No such device
>     mmap() failed with custom flags file
>     /opt/GFS/postfix/vmail/jeff at bofus.org/.Trash/.customflags: No such device
>
> These of course are on the gluster mount, and the files really do
> exist:
>     -rw------- 1 vmail vmail 6816 Nov  5 21:07
>     /opt/GFS/postfix/vmail/jeff at bofus.org/.Trash/.imap.index
>     -rw------- 1 vmail vmail  100 Oct 15 14:11
>     /opt/GFS/postfix/vmail/jeff at bofus.org/.Trash/.customflags
>
> I was not using posix-locks at first, and this same type of issue came
> up, but with the .subscription file. I am not sure whether adding
> posix-locks, or the restart/remount required to enable it, is what
> fixed the issue for the .subscription file.
>
> This does not happen when I use a plain local ext3 mountpoint, only
> on the glusterfs mountpoint.
> Does anyone know why the files say "No such device" when they are
> clearly there on the filesystem?
>
> Thanks for any assistance!
>
> -Jeff Humes
>
>
>
>
>
> #################################
> # server config:
> volume knworksmail
> type storage/posix
> option directory /glusterfs/knworksmail
> end-volume
>
> volume posix-locks-knworksmail
> type features/posix-locks
> option mandatory on
> subvolumes knworksmail
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> subvolumes posix-locks-knworksmail
> option auth.ip.knworksmail.allow *
> option auth.ip.posix-locks-knworksmail.allow *
> end-volume
>
> volume writebehind
> type performance/write-behind
> option aggregate-size 1MB
> option flush-behind on
> subvolumes knworksmail
> end-volume
>
> #################################
> # client config:
> volume gluster01
> type protocol/client
> option transport-type tcp/client
> option remote-host 10.1.2.226
> #option remote-subvolume knworksmail
> option remote-subvolume posix-locks-knworksmail
> end-volume
>
> volume writebehind
> type performance/write-behind
> option aggregate-size 131072
> subvolumes gluster01
> end-volume
>
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org <mailto:Gluster-devel at nongnu.org>
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>