[Gluster-users] novice question about mounting and glusterfs-volgen
dboyer
david.boyer at univ-provence.fr
Tue May 4 16:39:47 UTC 2010
Dear all,
I just subscribed and started reading the archives, and there are two points where
I'm unsure of my conf; I don't want to start off wrong...
I used glusterfs-volgen to generate my conf, which at this point is:
* glusterfsd.vol:
volume posix1
type storage/posix
option directory /srv/glus
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume
volume posix2
type storage/posix
option directory /srv/glus2
end-volume
volume locks2
type features/locks
subvolumes posix2
end-volume
volume brick2
type performance/io-threads
option thread-count 8
subvolumes locks2
end-volume
volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option auth.addr.brick2.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1 brick2
end-volume
*************************************************** (this node has 2 bricks;
the 2 other nodes have 1 brick each and their files are not shown)
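On the two single-brick nodes the file is essentially the brick1 half of the one above, roughly:
# sketch of glusterfsd.vol on a single-brick node (same stanzas, only brick1 exported)
volume posix1
type storage/posix
option directory /srv/glus
end-volume
volume locks1
type features/locks
subvolumes posix1
end-volume
volume brick1
type performance/io-threads
option thread-count 8
subvolumes locks1
end-volume
volume server-tcp
type protocol/server
option transport-type tcp
option auth.addr.brick1.allow *
option transport.socket.listen-port 6996
option transport.socket.nodelay on
subvolumes brick1
end-volume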
* glusterfs.vol:
volume 192.168.108.83-1
type protocol/client
option transport-type tcp
option remote-host 192.168.108.83
option transport.socket.nodelay on
option remote-port 6996
option remote-subvolume brick1
end-volume
volume 192.168.108.83-2
type protocol/client
option transport-type tcp
option remote-host 192.168.108.83
option transport.socket.nodelay on
option remote-port 6996
option remote-subvolume brick2
end-volume
volume 192.168.108.13-1
type protocol/client
option transport-type tcp
option remote-host 192.168.108.13
option transport.socket.nodelay on
option remote-port 6996
option remote-subvolume brick1
end-volume
volume 192.168.106.8-1
type protocol/client
option transport-type tcp
option remote-host 192.168.106.8
option transport.socket.nodelay on
option remote-port 6996
option remote-subvolume brick1
end-volume
volume mirror-0
type cluster/replicate
subvolumes 192.168.108.83-1 192.168.108.13-1 192.168.106.8-1
end-volume
volume readahead
type performance/read-ahead
option page-count 4
subvolumes mirror-0
end-volume
volume iocache
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead
end-volume
volume quickread
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache
end-volume
volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes quickread
end-volume
volume statprefetch
type performance/stat-prefetch
subvolumes writebehind
end-volume
volume readahead2
type performance/read-ahead
option page-count 4
subvolumes 192.168.108.83-2
end-volume
volume iocache2
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead2
end-volume
volume quickread2
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache2
end-volume
volume writebehind2
type performance/write-behind
option cache-size 4MB
subvolumes quickread2
end-volume
volume statprefetch2
type performance/stat-prefetch
subvolumes writebehind2
end-volume
*************************************************** (one brick
replicated 3 times, plus one brick on a single node)
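To make the layering explicit, the client-side stack for the mirrored volume, read from
bottom to top, is:
192.168.108.83-1 / 192.168.108.13-1 / 192.168.106.8-1 (protocol/client)
-> mirror-0 (cluster/replicate)
-> readahead (performance/read-ahead)
-> iocache (performance/io-cache)
-> quickread (performance/quick-read)
-> writebehind (performance/write-behind)
-> statprefetch (performance/stat-prefetch, top of the stack)
and the same chain (readahead2 ... statprefetch2) sits on top of 192.168.108.83-2.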
I've tried this on 4 Lenny boxes and it works (3 servers and 1 client).
So my questions:
- when I mount the node with 2 bricks, I use:
# mount -t glusterfs -o volume-name=statprefetch 192.168.108.83 /mnt/glus
# mount -t glusterfs -o volume-name=statprefetch2 192.168.108.83 /mnt/glus2
This works, but am I right to choose statprefetch? I guess so, because it seems to be at the
"top of the stack", but maybe that is a weird thing to do? If I try "volume-name=brick1", it doesn't work.
If I say "volume-name=192.168.108.83-1" it works, but will I still get the performance
enhancements if I choose a volume at the "bottom of the stack"?
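In case it is useful, the /etc/fstab equivalents I intend to use would be roughly as follows
(assuming mount.glusterfs accepts the same volume-name option from fstab):
192.168.108.83 /mnt/glus  glusterfs defaults,_netdev,volume-name=statprefetch  0 0
192.168.108.83 /mnt/glus2 glusterfs defaults,_netdev,volume-name=statprefetch2 0 0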
- when I use glusterfs-volgen to ask for a 3-node mirror, it complains about a "wrong
number of nodes for replication" - however, with a conf made by hand like the
one above, replication seems to work fine across the 3 nodes (and I have found a
3-node conf somewhere on the internet): am I again doing something weird?
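For reference, the kind of invocation that triggers that complaint is something like this
(the volume name and export paths here are only illustrative):
glusterfs-volgen --name mirror --raid 1 \
    192.168.108.83:/srv/glus 192.168.108.13:/srv/glus 192.168.106.8:/srv/glus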
And a little "wish-list":
I would be very pleased if some user-quota capability were included
in a future glusterfs version (I saw a post about this on the ML, so I
add my voice!)
Many thanks for any answer!
Cheers
DB