[Gluster-users] Question about the number of nodes

Lindsay Mathieson lindsay.mathieson at gmail.com
Tue Apr 19 13:07:50 UTC 2016


On 19/04/2016 10:34 PM, Kevin Lemonnier wrote:
>> don't forget to update the op-version for the cluster:
>>
>>      gluster volume set all cluster.op-version 30710
>>
>> (it's not 30711 as there are no new features from 3.7.10 to 3.7.11)
> Is that needed even if it's a new install? I'm setting it up from scratch, then moving the
> VM disks onto it. I know I could just update, but I have to replace two servers anyway, so
> it's easier I think.

Ah yes, I think you're right. "gluster volume get <DS> 
cluster.op-version" will confirm it anyway.

> Yes, but it's 90% full and we should be adding a bunch of new VMs soon, so I do need the extra space.

So you'll be setting up a distributed replicated volume?
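
If so, a distribute-replicate create looks roughly like this - six bricks 
with replica 3 gives you two replica sets, which is where the extra space 
comes from (server names and brick paths below are just placeholders):

     gluster volume create <DS> replica 3 \
         srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 \
         srv4:/bricks/b1 srv5:/bricks/b1 srv6:/bricks/b1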


>> What sort of network setup do you have? Mine is relatively low end:
>> 2x 1GbE on each node, LACP bonding.
>
> If only :). It's a single 1Gb link, I don't have any control over that part unfortunately.


That's a bugger - you'll only be able to get 1G/2 write speeds at most, 
since each write has to go out to the replicas over that single link. I 
can get 112 MB/s in one VM when the volume is idle. Of course that has to 
be shared amongst all the other VMs. Having said that, IOPS seems to 
matter more than raw write speed for most VM usage.
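
For a rough idea of what a guest can do IOPS-wise, something like a small 
random-write fio run inside the VM works (the parameters here are just a 
starting point, tune to taste):

     fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
         --bs=4k --iodepth=32 --size=1G --runtime=60 --time_based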


I've upgraded my test volume to 3.7.11, no issues so far :). With 12 VMs 
running on two nodes I rebooted the 3rd node. The number of 64MB shards 
needing heal got to 300 before it came back up (it's a very slow-booting 
node, server motherboard). It took 6.5 minutes to heal - quite pleased 
with that.
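
If you want to watch a heal like that in progress, these two are handy 
(again, <DS> is your volume name):

     gluster volume heal <DS> info
     gluster volume heal <DS> statistics heal-count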

I was running a reasonably heavy workload on the VMs - builds, a disk 
benchmark, Windows updates - and there was no appreciable slowdown; 
iowait stayed below 10%.

I believe these are the relevant heal settings which I bumped up:

     cluster.self-heal-window-size: 1024
     cluster.heal-wait-queue-length: 256
     cluster.background-self-heal-count: 16
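
They're set like any other volume option, e.g. (substitute your volume 
name for <DS>):

     gluster volume set <DS> cluster.self-heal-window-size 1024
     gluster volume set <DS> cluster.heal-wait-queue-length 256
     gluster volume set <DS> cluster.background-self-heal-count 16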


cheers,


-- 
Lindsay Mathieson


