[Gluster-users] Replication and Distribution behavior
Yi Ling
lingyi.pro at gmail.com
Fri Aug 28 02:55:17 UTC 2009
hi, mike~~
that behavior is correct, given your configuration.
because pair1 and pair2 are joined by cluster/distribute, each file you
scp to the /gluster mount point is stored in either pair1 or pair2, never
both, which means there will not be 4 copies of a file on all 4 nodes at
the same time.
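if what you actually want is a copy of every file on all 4 nodes, drop the
distribute layer and replicate across all four bricks instead. a minimal
client.vol fragment reusing your cf01..cf04 volumes (just a sketch, untested,
and the volume name is arbitrary):
volume mirror-all
type cluster/replicate
# all four bricks as replicas: every file is written to every node
subvolumes cf01 cf02 cf03 cf04
end-volume
then point writebehind's subvolumes at mirror-all instead of bricks. you get
4 copies of everything, at the cost of every write going to all 4 nodes.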
you could try copying more files into /gluster. you would find that some
files are stored in pair1 (node1 and node2) and others in pair2 (node3 and
node4); the distribute translator hashes each file name to pick a pair. you
can confirm the layout by listing /mnt/raid/gluster/export on each node.
try more translators~~~
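for example, you could stack performance/io-cache above your existing
writebehind volume to cache reads on the client side. a sketch only -- 64MB
is an assumed example value, so check cache-size against your version's docs:
volume iocache
type performance/io-cache
# client-side read cache; size here is an assumed example value
option cache-size 64MB
subvolumes writebehind
end-volume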
Yi Ling
----------------------------------------------------------------------
Date: Wed, 26 Aug 2009 13:49:57 -0600
From: mike foster <mfosterm at gmail.com>
Subject: [Gluster-users] Replication and Distribution behavior
To: gluster-users at gluster.org
Message-ID: <ff7f05da0908261249h7052a3c1l6403804f1db8d76d at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"
I apologize if this has already been covered, but I couldn't find anything
close enough to this scenario in searching the archives.
I'm evaluating a 4-node cluster, with nodes 1 and 2 replicating, nodes 3 and
4 replicating, and pair 1 (nodes 1 and 2) and pair 2 (nodes 3 and 4) set to
"distribution".
However, when I copy data from a 5th machine via scp to the /gluster mount
point on any node, all of the data shows up in the exported share on nodes 1
and 2 only. The data does not get replicated to nodes 3 and 4, even when I am
connected directly to those servers.
Am I missing something or ...
Here are some configuration details:
cat /proc/mounts:
glusterfs#/etc/glusterfs/client.vol /gluster fuse
rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
Auto-mounting glusterfs from /etc/rc.local: glusterfs -f
/etc/glusterfs/client.vol /gluster
--- server.vol ---
# Gluster directory on raid volume /dev/md0
volume posix
type storage/posix
option directory /mnt/raid/gluster/export
end-volume
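# POSIX locking support on top of the posix store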
volume locks
type features/locks
subvolumes posix
end-volume
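# io-threads: serve the locks volume with 4 worker threads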
volume brick
type performance/io-threads
option thread-count 4
subvolumes locks
end-volume
### Add network serving capability to above brick
volume server
type protocol/server
option transport-type ib-verbs
option auth.addr.brick.allow *
subvolumes brick
end-volume
--- end of server.vol ---
--- client.vol ---
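# Client connections to the brick exported by each of the 4 servers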
volume cf01
type protocol/client
option transport-type ib-verbs
option remote-host 10.185.17.11
option remote-subvolume brick
end-volume
volume cf02
type protocol/client
option transport-type ib-verbs
option remote-host 10.185.17.12
option remote-subvolume brick
end-volume
volume cf03
type protocol/client
option transport-type ib-verbs
option remote-host 10.185.17.13
option remote-subvolume brick
end-volume
volume cf04
type protocol/client
option transport-type ib-verbs
option remote-host 10.185.17.14
option remote-subvolume brick
end-volume
# Replicate data across the servers in 2 pairs
volume pair01
type cluster/replicate
subvolumes cf01 cf02
end-volume
volume pair02
type cluster/replicate
subvolumes cf03 cf04
end-volume
# Distribute data across all pairs
volume bricks
type cluster/distribute
subvolumes pair01 pair02
end-volume
# For performance
volume writebehind
type performance/write-behind
option cache-size 4MB
subvolumes bricks
end-volume
--- end of client.vol ---