[Gluster-users] attempt to run ldap/zimbra on glusterfs fails
Bryan Whitehead
driver at megahappy.net
Thu Dec 4 18:55:00 UTC 2008
My server config has posix-locks loaded. Did I do it wrong? Or do I need
to load posix-locks on the client side instead of the server? Or both?
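
For reference, here is the minimal server-side layering I understand
posix-locks needs -- just a sketch of the idea, not my exact config: the
locks translator sits directly above storage/posix, and protocol/server
exports the locks volume, so the client's remote-subvolume and the
auth.ip key would both name gluster1-locks:

```
volume gluster1
  type storage/posix
  option directory /export/gluster1
end-volume

volume gluster1-locks
  type features/posix-locks
  subvolumes gluster1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.gluster1-locks.allow 10.*
  subvolumes gluster1-locks
end-volume
```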
I think logging was disabled for glusterfs and glusterfsd; I will
try again tonight with logging turned on.
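
Independent of zimbra, a quick way to see whether the mount supports
POSIX record locks at all is a small fcntl test like this (the path is
hypothetical; point it at a file on the glusterfs mount):

```python
#!/usr/bin/env python
# Quick POSIX (fcntl) record-lock check, independent of zimbra/openldap.
# Usage: python locktest.py /import/gluster1/locktest
import fcntl
import os
import sys

def check_posix_locks(path):
    """Take and release an exclusive whole-file lock; True if the fs allows it."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # raises OSError if locks are refused
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return True
    finally:
        os.close(fd)

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/tmp/locktest"
    print("posix locks on %s: %s"
          % (target, "OK" if check_posix_locks(target) else "FAILED"))
```

If this raises an error on the gluster mount but succeeds on a local
filesystem, the lock path through the translators is the problem.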
Anand Avati wrote:
> Bryan,
> can you please post the glusterfs log files? Also please try with
> posix-locks loaded.
>
> thanks,
> avati
>
> 2008/12/4 Bryan Whitehead <driver at megahappy.net>:
>> glusterfs 1.3.12
>>
>> Client conf:
>> volume beavis
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host beavis
>>   option remote-subvolume gluster1
>> end-volume
>>
>> volume butthead
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host butthead
>>   option remote-subvolume gluster1
>> end-volume
>>
>> volume mirror0
>>   type cluster/afr
>>   subvolumes beavis butthead
>> end-volume
>>
>> Server conf:
>> volume gluster1
>>   type storage/posix
>>   option directory /export/gluster1
>> end-volume
>>
>> volume gluster1-locks
>>   type features/posix-locks
>>   option mandatory on
>>   subvolumes gluster1
>> end-volume
>>
>> volume gluster1-io-thr
>>   type performance/io-threads
>>   subvolumes gluster1-locks
>> end-volume
>>
>> volume gluster1-wb
>>   type performance/write-behind
>>   subvolumes gluster1-io-thr
>> end-volume
>>
>> volume gluster1-ra
>>   type performance/read-ahead
>>   subvolumes gluster1-wb
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server
>>   option auth.ip.gluster1.allow 10.*
>>   subvolumes gluster1-ra
>> end-volume
>>
>> If you'd like the complete openldap error, I'll have to make another
>> attempt tonight. Previously I had not used the posix-locks, io-threads,
>> write-behind, or read-ahead translators. My first attempt used a server
>> config that looked like this:
>>
>> volume gluster1
>>   type storage/posix
>>   option directory /export/gluster1
>> end-volume
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server
>>   option auth.ip.gluster1.allow 10.*
>>   subvolumes gluster1
>> end-volume
>>
>> But with the error from ldap, I thought that posix-locks might be needed.
>>
>> Anand Avati wrote:
>>> Can you give some more details - the version, spec files and log files?
>>>
>>> avati
>>>
>>> 2008/12/4 Bryan Whitehead <driver at megahappy.net>:
>>>> I have a very simple setup of gluster. I basically have a mirror using
>>>> afr on the client side. When I attempt to start zimbra (which begins by
>>>> trying to start openldap) I get a weird error:
>>>>
>>>> backend_startup_one: bi_db_open failed! (-1)
>>>>
>>>> I have a functional zimbra setup. What I do is stop zimbra, rsync over
>>>> the installation, mv the existing directory aside, then symlink (or
>>>> create a directory and mount directly from gluster). I cannot get
>>>> zimbra to start up. :(
>>>>
>>>> example:
>>>>
>>>> /opt/zimbra is where everything is. Program, data, everything.
>>>>
>>>> /etc/init.d/zimbra stop
>>>> time rsync -arv --delete /opt/* /import/gluster1/zimbra/opt/
>>>> cd /opt
>>>> mv zimbra zimbra.works
>>>> ln -s /import/gluster1/zimbra/opt/zimbra
>>>> /etc/init.d/zimbra start
>>>>
>>>> if I undo the symlink (or mountpoint/directory) and move the
>>>> /opt/zimbra.works back to /opt/zimbra, I can start just fine like
>>>> nothing happened.
>>>>
>>>> -Bryan
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org
>>>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>>>