[Gluster-users] different size of nodes

Thomas Wakefield twake at cola.iges.org
Mon Mar 18 12:43:59 UTC 2013


You can set a minimum free disk space limit.  This will force Gluster to write new files to other bricks once a brick drops below that limit.

gluster volume set <volume> cluster.min-free-disk <size>    (insert your volume name and the amount of free space you want to keep, probably something like 200-300GB)
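
For example, with the volume name from your output below (the values here are only illustrative; cluster.min-free-disk accepts either an absolute size or a percentage, so pick whichever fits your bricks):

gluster volume set w-vol cluster.min-free-disk 250GB    # keep 250GB free per brick
gluster volume set w-vol cluster.min-free-disk 10%      # or express it as a percentage
gluster volume info w-vol                               # verify the option shows up under "Options Reconfigured"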

Running a rebalance would also help, since it moves existing files around so that gl4 is no longer full.
gluster volume rebalance <volume> start
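
For this volume that would be (the status subcommand is the standard way to watch progress):

gluster volume rebalance w-vol start
gluster volume rebalance w-vol status    # repeat until the migration completes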

-Tom


On Mar 16, 2013, at 6:54 AM, Papp Tamas <tompos at martos.bme.hu> wrote:

> hi All,
> 
> There is a distributed cluster with 5 bricks:
> 
> gl0
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda4       5.5T  4.1T  1.5T  75% /mnt/brick1
> gl1
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda4       5.5T  4.3T  1.3T  78% /mnt/brick1
> gl2
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda4       5.5T  4.1T  1.4T  76% /mnt/brick1
> gl3
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda4       4.1T  4.1T  2.1G 100% /mnt/brick1
> gl4
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda4       4.1T  4.1T   24M 100% /mnt/brick1
> 
> 
> Volume Name: w-vol
> Type: Distribute
> Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
> Status: Started
> Number of Bricks: 5
> Transport-type: tcp
> Bricks:
> Brick1: gl0:/mnt/brick1/export
> Brick2: gl1:/mnt/brick1/export
> Brick3: gl2:/mnt/brick1/export
> Brick4: gl3:/mnt/brick1/export
> Brick5: gl4:/mnt/brick1/export
> Options Reconfigured:
> nfs.mount-udp: on
> nfs.addr-namelookup: off
> nfs.ports-insecure: on
> nfs.port: 2049
> cluster.stripe-coalesce: on
> nfs.disable: off
> performance.flush-behind: on
> performance.io-thread-count: 64
> performance.quick-read: on
> performance.stat-prefetch: on
> performance.io-cache: on
> performance.write-behind: on
> performance.read-ahead: on
> performance.write-behind-window-size: 4MB
> performance.cache-refresh-timeout: 1
> performance.cache-size: 4GB
> network.frame-timeout: 60
> performance.cache-max-file-size: 1GB
> 
> 
> 
> As you can see, two of the bricks are smaller than the others, and both are full.
> The Gluster volume itself is of course not full:
> 
> gl0:/w-vol       25T   21T  4.0T  84% /W/Projects
> 
> 
> I'm not able to write to the volume. Why does this happen? Is it a known issue?
> How can I stop Gluster from writing to the full nodes?
> 
> Thanks,
> tamas
> 



