[Gluster-users] Replication logic

Zenon Panoussis oracle at provocation.net
Mon Dec 28 21:14:16 UTC 2020

>  And you always got the option to reduce the quorum statically to "1" 

This is a very interesting tidbit of information. I was
wondering if there was some way to preload data on a brick,
and I think you might have just given me one.

I have a volume of three peers, one brick each. Two peers
have a fast connection, the third one has a very slow
connection. In normal operation this doesn't matter,
because there will only be fairly small changes to the
filesystem over time. However, when loading the initial
data on the volume before it becomes operative, the one
slow connection becomes a bottleneck for two fast ones.
So I'm thinking now whether I could

1. join the three peers and build the empty volume,
2. take the slow peer off-line,
3. load the data on the crippled volume, so that it is
   written to the two fast peers that are still online,
4. take the two fast peers offline and bring the slow peer
   back online,
5. reduce quorum to 1,
6. load the exact same data locally to the slow peer, and
7. put the two fast peers back online and increase quorum
   to 2.
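In gluster terms, I imagine the sequence would look roughly like
the commands below. The volume name (gv0), peer names and brick
paths are placeholders, and I'm assuming the client-quorum options
cluster.quorum-type / cluster.quorum-count are the right knobs for
the static quorum mentioned above:

```shell
# 1. create the replica-3 volume across the three peers
#    (fast1, fast2, slow1 and the brick paths are made up)
gluster volume create gv0 replica 3 \
    fast1:/data/brick fast2:/data/brick slow1:/data/brick
gluster volume start gv0

# 2./3. take slow1 offline (e.g. stop its brick process) and load
#       the data through a client mount; writes land on the two
#       fast bricks only

# 5. pin client quorum to 1 so the lone slow brick accepts writes
gluster volume set gv0 cluster.quorum-type fixed
gluster volume set gv0 cluster.quorum-count 1

# 6. with only slow1 online, load the identical data again

# 7. bring the fast peers back and raise quorum again
gluster volume set gv0 cluster.quorum-count 2
```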

This would lead to all three bricks having the exact same
data without the delay of the slow transfer, but it will
only work if the exact same metadata are created for the
same files during the two separate loads. That is, if a
given file foo always produces the exact same metadata,
after loading foo to different bricks on different
occasions, the metadata of all bricks will be identical
and no healing would be needed.
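One way to test that assumption would be to compare the trusted.*
xattrs gluster stores on each brick's copy of the same file after
the two separate loads (brick path and file name are placeholders):

```shell
# run on each peer against its local brick copy of the file;
# dumps all extended attributes in hex for easy diffing
getfattr -d -m . -e hex /data/brick/path/to/foo

# trusted.gfid in particular would have to come out identical on
# all three bricks; if it differs between the loads, the bricks
# hold what gluster considers different files and healing follows
```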

Is that so, or am I imagining impossible acrobatics?

