[Gluster-devel] Cascading different translators doesn't work as expected
yaomin @ gmail
yangyaomin at gmail.com
Mon Jan 5 14:52:36 UTC 2009
Krishna,
Thank you for your quick response.
There are two log entries in the client's log file from when the client was set up.
2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
There is no information at all in the storage nodes' log files.
Although I changed the scheduler from ALU to RR, only the No.3 (192.168.13.5) and No.4 (192.168.13.7) storage nodes are doing any work.
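For reference, the unify layer on the client is declared roughly like this after the scheduler change (a sketch only: the stripe subvolume names and the namespace client volume are placeholders, since the full client vol file is not reproduced here):

volume ns                            # client-side handle for the server's name_space volume
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.13.2    # placeholder address for the namespace server
  option remote-subvolume name_space
end-volume

volume unify0
  type cluster/unify
  option scheduler rr                # round-robin instead of alu
  option namespace ns
  subvolumes stripe1 stripe2         # the two stripe volumes of the cascade
end-volume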
Each machine has 2GB memory.
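On the memory question: the caching translators each hold their own buffers, so their options effectively bound the footprint. Per open file, read-ahead keeps up to page-count x page-size and write-behind buffers up to window-size of unflushed data, so ten concurrent iozone processes multiply those numbers. A sketch of tighter settings for one brick's stack (the values are illustrative guesses, not tested recommendations):

volume readahead
  type performance/read-ahead
  option page-size 256KB    # cache per file = page-count x page-size = 512KB
  option page-count 2
  subvolumes iot
end-volume

volume writeback
  type performance/write-behind
  option aggregate-size 256KB
  option window-size 1MB    # caps unflushed write data per file
  subvolumes iocache
end-volume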
Thanks,
Alfred
The following is the server vol file used on each storage node.
##############################################
### GlusterFS Server Volume Specification ##
##############################################
#### CONFIG FILE RULES:
### "#" is comment character.
### - Config file is case sensitive
### - Options within a volume block can be in any order.
### - Spaces or tabs are used as the delimiter within a line.
### - Multiple values to an option are ':' delimited.
### - Each option should end within a line.
### - Missing or commented fields will assume default values.
### - Blank/commented lines are allowed.
### - Sub-volumes must be defined above before they are referred to.
volume name_space
type storage/posix
option directory /locfsb/name_space
end-volume
volume brick1
type storage/posix # POSIX FS translator
option directory /locfs/brick # Export this directory
end-volume
volume brick2
type storage/posix # POSIX FS translator
option directory /locfsb/brick # Export this directory
end-volume
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option listen-port 6996 # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes brick1 brick2 name_space
option auth.ip.brick1.allow 192.168.13.* # Allow access to "brick1" volume
option auth.ip.brick2.allow 192.168.13.* # Allow access to "brick2" volume
option auth.ip.name_space.allow 192.168.13.* # Allow access to "name_space" volume
end-volume
### Add io-threads feature
volume iot
type performance/io-threads
option thread-count 1 # default is 1
option cache-size 16MB #64MB
subvolumes brick1 #bricks
end-volume
### Add readahead feature
volume readahead
type performance/read-ahead
option page-size 1MB # unit in bytes
option page-count 4 # cache per file = (page-count x page-size)
subvolumes iot
end-volume
### Add IO-Cache feature
volume iocache
type performance/io-cache
option page-size 256KB
option page-count 8
subvolumes readahead
end-volume
### Add writeback feature
volume writeback
type performance/write-behind
option aggregate-size 1MB
option window-size 3MB # default is 0bytes
# option flush-behind on # default is 'off'
subvolumes iocache
end-volume
### Add io-threads feature
volume iot2
type performance/io-threads
option thread-count 1 # default is 1
option cache-size 16MB #64MB
subvolumes brick2 #bricks
end-volume
### Add readahead feature
volume readahead2
type performance/read-ahead
option page-size 1MB # unit in bytes
option page-count 4 # cache per file = (page-count x page-size)
subvolumes iot2
end-volume
### Add IO-Cache feature
volume iocache2
type performance/io-cache
option page-size 256KB
option page-count 8
subvolumes readahead2
end-volume
### Add writeback feature
volume writeback2
type performance/write-behind
option aggregate-size 1MB
option window-size 3MB # default is 0bytes
# option flush-behind on # default is 'off'
subvolumes iocache2
end-volume
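Note that the protocol/server volume above exports brick1, brick2, and name_space directly, so the iot/readahead/iocache/writeback stacks are defined but never referenced by the server. If each brick is meant to be served through its performance stack, the server block would move to the end of the file (sub-volumes must be defined before they are referred to) and point at the top of each stack instead. A sketch of that variant:

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes writeback writeback2 name_space
  option auth.ip.writeback.allow 192.168.13.*    # clients then mount "writeback",
  option auth.ip.writeback2.allow 192.168.13.*   # "writeback2" and "name_space"
  option auth.ip.name_space.allow 192.168.13.*
end-volume

The client-side protocol/client volumes would then use option remote-subvolume writeback (and writeback2) rather than brick1 and brick2.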
--------------------------------------------------
From: "Krishna Srinivas" <krishna at zresearch.com>
Sent: Monday, January 05, 2009 2:07 PM
To: "yaomin @ gmail" <yangyaomin at gmail.com>
Cc: <gluster-devel at nongnu.org>
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as expected
> Alfred,
>
> Can you check client logs for any error messages?
> You are using ALU; it might be creating the files on the disks with the
> most free space (which would be your storage nodes 3 and 4).
> You can check with RR scheduler to see if all the nodes are participating.
>
> How much memory do the servers and client use?
>
> Krishna
>
> On Sun, Jan 4, 2009 at 6:48 PM, yaomin @ gmail <yangyaomin at gmail.com> wrote:
>> Hey,
>>
>> I am trying to use the following cascading setup to improve throughput,
>> but the results are poor.
>> There are four storage nodes and each exports 2 directories.
>>
>>                       unify (ALU)                  (translator on client)
>>                      /           \
>>               stripe               stripe          (translator on client)
>>              /      \             /      \
>>           AFR        AFR       AFR        AFR      (translator on client)
>>          /   \      /   \     /   \      /   \
>>      #1-1   #2-1 #3-1   #4-1 #1-2  #2-2 #3-2   #4-2
>> When I use iozone to test with 10 concurrent processes, I find only the
>> #3 and #4 storage nodes working; the other two nodes do nothing. I
>> expected all four storage nodes to work simultaneously at all times, but
>> that is not what happens. What is wrong?
>> Another issue is that memory is exhausted on the storage nodes when
>> writing and on the client when reading, which is not what I want. Is
>> there any way to limit memory usage?
>>
>>
>> Best Wishes.
>> Alfred
>>