[Gluster-users] try to understand how it works...

David Touzeau david at touzeau.eu
Sun Dec 27 16:39:56 UTC 2009


Thanks for this answer. I have changed the cyrus-imap working directory
to the mount point.
Everything works except for permission problems.

Here is what I mean. These are the cyrus-imap events in mail.log:

Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR:
dbenv->open '/var/lib/cyrus/db' failed: Permission denied
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR: init() on
berkeley
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: checkpointing cyrus
databases
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR:
archive /var/lib/cyrus-clustered/db: cyrusdb error
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR db4:
txn_checkpoint interface requires an environment configured for the
transaction subsystem
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR: couldn't
checkpoint: Invalid argument
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR:
sync /var/lib/cyrus-clustered/db: cyrusdb error
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR db4:
DB_ENV->log_archive interface requires an environment configured for the
logging subsystem
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR: error
listing log files: Invalid argument
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: DBERROR:
archive /var/lib/cyrus-clustered/db: cyrusdb error
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: archiving database
file: /var/lib/cyrus-clustered/mailboxes.db
Dec 27 16:40:05 ub-cluster2 cyrus/ctl_cyrusdb[9145]: error
opening /var/lib/cyrus-clustered/mailboxes.db for reading

------------------------------------------------------------------
It seems there are ownership/permission problems.
------------------------------------------------------------------

cyrus-imap runs as user cyrus and group mail (cyrus:mail).

When doing an ls -la on the mounted path:

drwxr-xr-x 15 cyrus mail  4096 2009-12-27 17:19 .
drwxr-xr-x 15 cyrus mail  4096 2009-12-27 17:19 ..
-rwxr-x---  1 bind  mail   144 2009-12-27 12:01 annotations.db
-rwxr-x---  1 bind  mail   144 2009-11-25 17:36 annotations.db~
drwxr-x--- 13 bind  mail  4096 2009-11-25 17:36 cyrus
drwxr-x---  2 bind  mail  4096 2009-12-27 12:01 db
drwx------  2 cyrus mail  4096 2009-12-27 16:57 db.backup1
drwxr-x---  2 bind  mail  4096 2009-12-27 16:29 db.backup2
-rwxr-x---  1 bind  mail  8192 2009-12-27 04:05 deliver.db
-rwxr-x---  1 bind  mail  8192 2009-11-25 17:36 deliver.db~
drwxr-x---  2 bind  mail  4096 2009-12-23 19:30 log
-rwxr-x---  1 bind  mail  8744 2009-12-27 12:01 mailboxes.db
-rwxr-x---  1 bind  mail  1640 2009-11-25 17:36 mailboxes.db~
-rwxr-x---  1 bind  mail  3676 2009-11-25 17:36 mailboxes.db.old~
-rwxr-x---  1 bind  mail   373 2009-11-25 17:36 mailboxlist.txt~
drwxr-x---  2 bind  mail  4096 2009-12-23 19:30 msg
drwxr-x---  2 bind  mail 12288 2009-12-27 15:29 proc
drwxr-x--- 28 bind  mail  4096 2009-12-23 19:28 quota
drwxr-x---  2 bind  mail  4096 2009-12-23 19:30 rpm
drwxr-x---  2 bind  mail  4096 2009-11-26 08:22 socket
drwxr-x---  2 bind  mail  4096 2009-12-23 19:30 srvtab
drwxr-x---  2 bind  mail  4096 2009-11-25 17:36 sync
-rwxr-x---  1 bind  mail     0 2009-12-27 03:13 titi
-rwxr-x---  1 bind  mail  8192 2009-12-27 04:05 tls_sessions.db
-rwxr-x---  1 bind  mail  8192 2009-11-25 17:36 tls_sessions.db~
drwxr-x--- 28 bind  mail  4096 2009-12-23 19:28 user

You can see that the files and folders are owned by bind:mail instead
of cyrus:mail.

The mount points are:
/etc/artica-cluster/dispatcher-2.vol on /var/spool/cyrus/mail-clustered
type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)
/etc/artica-cluster/dispatcher-1.vol on /var/lib/cyrus-clustered type
fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)

How can I force the client to mount as a specific user, so that the
permission settings are not corrupted?
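For reference, I could not find a glusterfs mount option that remaps ownership on the client side, so a hedged workaround sketch (an assumption, not something confirmed in this thread) is to restore the expected owner once, through the mount point, using the paths from the log above:

```shell
# Workaround sketch: chown issued through the FUSE mount should
# propagate to every back-end server (assumption, run as root).
chown -R cyrus:mail /var/lib/cyrus-clustered

# Verify: every entry should now show cyrus:mail.
ls -la /var/lib/cyrus-clustered
```

This does not prevent future files from being created with the wrong owner if the writing process runs as another user; it only repairs the current state.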






-------- Original Message --------
From: Raghavendra G <raghavendra at gluster.com>
To: david <david at touzeau.eu>
Subject: Re: [Gluster-users] try to understand how it works...
Date: Sun, 27 Dec 2009 19:00:18 +0400

I meant that you can keep the cyrus-imap installation directory on the
glusterfs mount point, which lets you maintain both the mail directory
and the indexing directory on the gluster mount, thereby replicating
the contents of both directories.

2009/12/27 raghavendra <raghavendra at gluster.com>

        
        
        
        On Sun, Dec 27, 2009 at 6:27 PM, David Touzeau
        <david at touzeau.eu> wrote:
        
                Thanks for this answer.

                In real production mode, I would like to create a
                cluster for cyrus-imap.

                cyrus-imap uses two directories:
                /var/lib/cyrus
                /var/spool/cyrus/mail

                I have 3 servers with cyrus-imap installed.
                The goal is that when a mail is saved on one of these
                servers in /var/spool/cyrus/mail and in /var/lib/cyrus
                (the indexing directory), it is automatically
                replicated to the other 2 servers.

                According to your answer, how can I create this kind of
                structure?
                
        
        You can achieve this by maintaining /var as a glusterfs
        mount point (the volume configuration used to mount glusterfs
        should include the replicate translator).
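For reference, a minimal client volume file using the replicate translator might look like the sketch below. Hostnames and brick names are placeholders, not taken from this thread; in the setup above, the real configuration lives in files such as /etc/artica-cluster/dispatcher-1.vol.

```
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host server1       # placeholder hostname
  option remote-subvolume brick    # export defined in the server volfile
end-volume

volume client2
  type protocol/client
  option transport-type tcp
  option remote-host server2       # placeholder hostname
  option remote-subvolume brick
end-volume

volume replicated
  type cluster/replicate           # keeps the subvolumes in sync
  subvolumes client1 client2
end-volume
```

Every write through a mount of the "replicated" volume is then sent to both subvolumes.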
        
        
                
                
                
                
                -------- Original Message --------
                From: Raghavendra G <raghavendra at gluster.com>
                To: david <david at touzeau.eu>
                Cc: gluster-users <gluster-users at gluster.org>
                Subject: Re: [Gluster-users] try to understand how it
                works...
                Date: Sun, 27 Dec 2009 18:03:06 +0400
                
                
                
                
                Hi David,
                
                Please find the comments inlined below.
                
                On Sun, Dec 27, 2009 at 4:26 PM, David Touzeau
                <david at touzeau.eu> wrote:
                
                        Dear list,

                        My English is poor, but after much research I
                        still do not really understand how the
                        replication works.
                        I have created 3 cluster servers that share 2
                        directories in cluster/replicate mode:
                        
                        /home/replicate
                        /home/replicate2
                        
                        Each server mounts the cluster pool
                        in /mnt/replicate
                        and /mnt/replicate2.

                        If I add a file to /home/replicate on one node,
                        nothing happens on the other nodes.
                        The file is not added to the corresponding
                        folder /home/replicate on the other nodes.
                
                
                You should not do any filesystem operations on the
                back-end directories directly. The correct way is to
                create a file on /mnt/replicate
                or /mnt/replicate2 (two mount points are not necessary
                for replicate to work correctly, unless you really want
                two mount points) and then check the back-end
                directories (an ls on both back-end directories should
                show the file being created).
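As a sketch of that check, using the directory names from this thread (these commands assume a live replicated mount and appropriate permissions):

```shell
# Create a file through the glusterfs mount point,
# never directly on the back-end directory.
touch /mnt/replicate/testfile

# On each server, the back-end directory should now contain the file.
ls -l /home/replicate/testfile
```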
                 
                
                
                        But if I do an "ls /mnt/replicate" in the
                        mounted directory, the file is correctly added
                        on all nodes.

                        Must I cron the "ls" in order to trigger the
                        replication?
                
                
                No, during normal operation replicate replicates
                automatically. Only after a node failure, when the
                failed node comes back up, do you need to execute "ls
                -lR /mnt/replicate" to heal the recovered node from the
                node that was running fine.
                 
                
                
                        So the cluster mode is not "real time", and
                        this is the standard procedure?
                        
                        
                        best regards
                        
                        
                        _______________________________________________
                        Gluster-users mailing list
                        Gluster-users at gluster.org
                        http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
                        
                
                
                
                regards,
                -- 
                Raghavendra G
                
                
        
        
        
        
        -- 
        Raghavendra G
        



-- 
Raghavendra G


