[Gluster-users] problem with booster when multiple volumes are exported per node

Shehjar Tikoo shehjart at gluster.com
Tue Sep 15 06:15:35 UTC 2009


Wei Dong wrote:
> I've attached all configuration files.  The log file is empty.  The 
> attached configuration is a simplified version of what I tried to do.  
> It causes the same problem.  Basically a single server exports 2 volumes 
> and the client imports the 2 volumes and runs DHT over them.

Hi

The fix will be available in a day, at most. Please track the bug
here:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=260

Thanks
-Shehjar
> 
> Thanks,
> 
> - Wei
> 
> Shehjar Tikoo wrote:
>> Wei Dong wrote:
>>> Hi All,
>>>
>>> I'm experiencing a problem with booster when the server-side nodes have 
>>> more than one volume exported.  The symptom is that when I run "ls 
>>> MOUNT_POINT" with booster, I get something like the following:
>>>
>>> ls: closing directory MOUNT_POINT: File descriptor in bad state.
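>>>
>>> (Booster is preloaded in the usual way, roughly as below; the fstab
>>> path and the booster library location are only illustrative here and
>>> depend on the install:
>>>
>>> # fstab location and library path below are examples, not my exact setup
>>> export GLUSTERFS_BOOSTER_FSTAB=/etc/glusterfs/booster.fstab
>>> LD_PRELOAD=/usr/lib/glusterfs/glusterfs-booster.so ls MOUNT_POINT
>>> )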
>>>
>>> The server configuration file is the following:
>>>
>>> volume posix0
>>> type storage/posix
>>> option directory /state/partition1/gluster
>>> end-volume
>>>
>>> volume lock0
>>> type features/locks
>>> subvolumes posix0
>>> end-volume
>>>
>>> volume brick0
>>> type performance/io-threads
>>> option thread-count 2
>>> subvolumes lock0
>>> end-volume
>>>
>>> volume posix1
>>> type storage/posix
>>> option directory /state/partition2/gluster
>>> end-volume
>>>
>>> volume lock1
>>> type features/locks
>>> subvolumes posix1
>>> end-volume
>>>
>>> volume brick1
>>> type performance/io-threads
>>> option thread-count 2
>>> subvolumes lock1
>>> end-volume
>>>
>>> volume server
>>> type protocol/server
>>> option transport-type tcp
>>> option transport.socket.listen-port 7001
>>> option auth.addr.brick0.allow 192.168.99.*
>>> option auth.addr.brick1.allow 192.168.99.*
>>> subvolumes brick0 brick1
>>> end-volume
>>>
>>>
>>> On the client side, the bricks on the same server are imported 
>>> separately.
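>>>
>>> For reference, the client volfile is along these lines (simplified;
>>> the remote-host address is a placeholder for the real server):
>>>
>>> volume client0
>>> type protocol/client
>>> option transport-type tcp
>>> option remote-host 192.168.99.1
>>> option transport.socket.remote-port 7001
>>> option remote-subvolume brick0
>>> end-volume
>>>
>>> volume client1
>>> type protocol/client
>>> option transport-type tcp
>>> option remote-host 192.168.99.1
>>> option transport.socket.remote-port 7001
>>> option remote-subvolume brick1
>>> end-volume
>>>
>>> volume dht
>>> type cluster/distribute
>>> subvolumes client0 client1
>>> end-volume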
>>>
>>>
>>> The problem only appears when I use booster.  Nothing seems to go 
>>> wrong when I mount GlusterFS.  Also, everything is fine if I only 
>>> export one brick from each server.  There are no warnings or errors 
>>> in the log file in any of these cases.
>>>
>>> Does anyone have an idea what's happening?
>>
>> Please post the contents of the booster FSTAB file. It'll tell us
>> which subvolume from the client volfile gets used by booster.
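>>
>> An entry in that file usually looks something like the line below
>> (with your own volfile path, virtual mount point and subvolume name):
>>
>> /etc/glusterfs/client.vol /mnt/booster glusterfs subvolume=dht,logfile=/var/log/glusterfs/booster.log,loglevel=ERROR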
>>
>> If the log file is available, do post that also.
>>
>> Thanks
>> -Shehjar
>>
>>>
>>> - Wei
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
> 
