[Gluster-users] Question regarding Replicate Translator
Steve
steeeeeveee at gmx.net
Thu Jun 4 15:45:05 UTC 2009
I have two questions regarding the Replicate Translator in v2.0.x:
1) Assuming I have 3 servers running the Replicate Translator and I set [data|metadata|entry]-lock-server-count to 3, what happens if one of the servers goes down? Will GlusterFS keep working, or will it refuse to?
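To make it concrete, the replicate volume I have in mind would look roughly like this (the volume and brick names are just placeholders, not from a real config):

volume repl-3way
  type cluster/replicate
  # all three subvolumes act as lock servers
  option data-lock-server-count 3
  option metadata-lock-server-count 3
  option entry-lock-server-count 3
  subvolumes brick-a brick-b brick-c
end-volume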
2) I have a setup with 2 GlusterFS servers running the Replicate and NUFA Translators, and on each of the servers there is also a GlusterFS client. The config from one of the systems is below:
===================================================
SERVER:
===================================================
# local POSIX brick on this server's disk
volume gfs-srv-ds
  type storage/posix
  option directory /mnt/glusterfs/mailstore01
end-volume

# POSIX locking on top of the local brick
volume gfs-srv-ds-locks
  type features/locks
  option mandatory-locks off
  subvolumes gfs-srv-ds
end-volume

# connection to the locks volume exported by the other server
volume gfs-srv-ds-remote
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.77
  option remote-port 6997
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-locks
end-volume

# replication across the local and the remote brick
volume gfs-srv-ds-replicate
  type cluster/replicate
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
  option data-change-log on
  option metadata-change-log on
  option entry-change-log on
  option data-lock-server-count 1
  option metadata-lock-server-count 1
  option entry-lock-server-count 1
  subvolumes gfs-srv-ds-locks gfs-srv-ds-remote
end-volume

# NUFA over the same two subvolumes, preferring the local one
volume gfs-srv-ds-nufa
  type cluster/nufa
  option local-volume-name gfs-srv-ds-locks
  subvolumes gfs-srv-ds-locks gfs-srv-ds-remote
end-volume

# server-side performance translators
volume gfs-srv-ds-io-threads
  type performance/io-threads
  option thread-count 16
  subvolumes gfs-srv-ds-nufa
end-volume

volume gfs-srv-ds-io-cache
  type performance/io-cache
  option page-size 16KB
  option cache-size 64MB
  option priority *:0
  option cache-timeout 1
  subvolumes gfs-srv-ds-io-threads
end-volume

# export: the locks volume for the other server,
# the io-cache volume for the local client
volume gfs-srv-ds-server
  type protocol/server
  option transport-type tcp
  option transport.socket.listen-port 6997
  option auth.addr.gfs-srv-ds-locks.allow 192.168.0.*,127.0.0.1
  option auth.addr.gfs-srv-ds-io-cache.allow 192.168.0.*,127.0.0.1
  subvolumes gfs-srv-ds-io-cache
end-volume
===================================================
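For completeness: the server side is started with the stock glusterfsd binary; the volfile path here is just an example of where the above config could live:

glusterfsd -f /etc/glusterfs/gfs-srv-ds.vol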
===================================================
CLIENT:
===================================================
# connection to the local server's exported io-cache volume
volume gfs-cli-ds-client
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-port 6997
  option frame-timeout 600
  option ping-timeout 10
  option remote-subvolume gfs-srv-ds-io-cache
end-volume

# client-side performance translators
volume gfs-cli-ds-write-back
  type performance/write-behind
  option cache-size 4MB
  option flush-behind on
  subvolumes gfs-cli-ds-client
end-volume

volume gfs-cli-ds-io-cache
  type performance/io-cache
  option page-size 256KB
  option cache-size 64MB
  option priority *:0
  option cache-timeout 1
  subvolumes gfs-cli-ds-write-back
end-volume
===================================================
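The client volfile gets mounted in the usual way; the mount point here is just an example:

glusterfs -f /etc/glusterfs/gfs-cli-ds.vol /mnt/mailstore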
The only difference between the two servers is the address each uses to connect to the other; everything else is essentially identical.
Now my problem: the exported brick is 100 GB, but on the client the reported total size is 200 GB. I always had the impression that the Replicate Translator is a RAID-1-style setup. Why does "df" show me double the size instead of just 100 GB? To be honest, the used disk space seems to be doubled as well. In the end I assume the storage will not hold more than it actually can, but the doubled size confuses me. What is the reason for this?
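To illustrate with made-up but representative numbers (the mount points are examples):

On the server, the underlying brick:

$ df -h /mnt/glusterfs/mailstore01
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             100G   10G   90G  10% /mnt/glusterfs

On the client mount, both size and usage show up doubled:

$ df -h /mnt/mailstore
Filesystem            Size  Used Avail Use% Mounted on
glusterfs             200G   20G  180G  10% /mnt/mailstore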
// Steve