[Gluster-users] A question about healing

Toby Corkindale toby.corkindale at strategicdata.com.au
Mon Jul 15 08:52:21 UTC 2013


On 12/07/13 06:44, Michael Peek wrote:
> Hi gurus,
>
> So I have a cluster that I've set up and I'm banging on.  It's comprised
> of four machines with two drives in each machine.  (By the way, the
> 3.2.5 version that comes with stock Ubuntu 12.04 seems to have a lot of
> bugs/instability.  I was screwing it up daily just by putting it through
> some heavy-use tests.  Then I downloaded 3.3.1 from the PPA, and so far
> things seem a LOT more stable.  I haven't managed to break anything yet,
> although the night is still young.)
>
> I'm dumping data to it like mad, and I decided to simulate a filesystem
> error by remounting half of the cluster's drives in read-only mode with
> "mount -o remount,ro".
>
> The cluster seemed to slow just slightly, but it kept on ticking.  Great.


While you're performing your testing, can I suggest you also test the
following behaviour, to ensure the performance meets your needs.

Fill the volumes up with data, to a point similar to what you expect to 
reach in production use. Not just in terms of disk space, but number of 
files and directories as well. You might need to write a small script 
that can build a simulated directory tree, populated with a range of 
file sizes.
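
For example, a rough sketch of such a script in Python might look like
the following. The directory counts, depth and file sizes are arbitrary
assumptions, as is the /mnt/gluster/testdata mount point, so adjust them
to match your expected production profile.

#!/usr/bin/env python
# make_tree.py -- populate a directory tree with files of varying sizes.
# All the numbers below are placeholder assumptions; tune them to match
# the file counts and sizes you expect in production.
import os
import random

ROOT = "/mnt/gluster/testdata"   # assumed client mount point
DIRS_PER_LEVEL = 10
DEPTH = 3
FILES_PER_DIR = 50
SIZES = [1024, 64 * 1024, 1024 * 1024, 4 * 1024 * 1024]  # 1K to 4M

def populate(path, depth):
    # Write a batch of files of random sizes into this directory.
    for f in range(FILES_PER_DIR):
        size = random.choice(SIZES)
        with open(os.path.join(path, "file%04d" % f), "wb") as fh:
            fh.write(os.urandom(size))
    if depth == 0:
        return
    # Recurse into a set of subdirectories.
    for d in range(DIRS_PER_LEVEL):
        sub = os.path.join(path, "dir%02d" % d)
        os.makedirs(sub)
        populate(sub, depth - 1)

if __name__ == "__main__":
    os.makedirs(ROOT)
    populate(ROOT, DEPTH)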

Take one of the nodes offline (or read-only), and then touch and modify
a large number of files randomly around the volume. Imagine that a node
was offline for 24 hours, and simulate the total quantity of writes that
would occur over that time.
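
A minimal sketch of that churn step, assuming the tree was built by the
script above and that appending a small block to each chosen file is
representative of your writes:

#!/usr/bin/env python
# churn.py -- touch and append to randomly chosen files while a node is down.
# The mount point and the number of modifications are assumptions; size
# the loop to roughly match 24 hours' worth of writes for your workload.
import os
import random

ROOT = "/mnt/gluster/testdata"    # assumed client mount point
MODIFICATIONS = 100000            # placeholder for ~24h of write traffic

# Build a flat list of every file currently in the tree.
all_files = []
for dirpath, dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        all_files.append(os.path.join(dirpath, name))

for i in range(MODIFICATIONS):
    target = random.choice(all_files)
    with open(target, "ab") as fh:   # append so size and content change
        fh.write(os.urandom(4096))
    os.utime(target, None)           # explicit "touch" to bump mtime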

Now bring the "failed" node back online and start the healing process.
Meanwhile, continue to simulate client access patterns on the files you 
were modifying earlier. Ensure that performance is still sufficient for 
your needs.
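
A rough way to watch the heal backlog from one of the servers while the
churn continues is sketched below. The volume name "testvol" is just an
assumption, and counting entry lines from "gluster volume heal ... info"
is only a crude gauge of progress:

#!/usr/bin/env python
# watch_heal.py -- poll self-heal progress while client load continues.
# "testvol" is a placeholder volume name; run this on one of the servers.
import subprocess
import time

VOLUME = "testvol"   # assumed volume name

# Kick off healing explicitly (the self-heal daemon will also start
# healing on its own once the node reconnects).
subprocess.call(["gluster", "volume", "heal", VOLUME])

# Poll the list of entries still needing heal, so you can watch the
# backlog drain while churn.py (or real clients) keep hitting the volume.
while True:
    out = subprocess.check_output(
        ["gluster", "volume", "heal", VOLUME, "info"]).decode()
    pending = sum(1 for line in out.splitlines() if line.startswith("/"))
    print("entries still pending heal (rough count): %d" % pending)
    time.sleep(30)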


It's a more complicated test to run, but it's important to measure how
gluster performs with your workload under the non-ideal circumstances
you will eventually hit.


