[Gluster-users] different size of nodes

Papp Tamas tompos at martos.bme.hu
Sat Mar 16 10:54:04 UTC 2013


Hi all,

There is a distributed volume with 5 bricks:

gl0
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       5.5T  4.1T  1.5T  75% /mnt/brick1
gl1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       5.5T  4.3T  1.3T  78% /mnt/brick1
gl2
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       5.5T  4.1T  1.4T  76% /mnt/brick1
gl3
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       4.1T  4.1T  2.1G 100% /mnt/brick1
gl4
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda4       4.1T  4.1T   24M 100% /mnt/brick1


Volume Name: w-vol
Type: Distribute
Volume ID: 89e31546-cc2e-4a27-a448-17befda04726
Status: Started
Number of Bricks: 5
Transport-type: tcp
Bricks:
Brick1: gl0:/mnt/brick1/export
Brick2: gl1:/mnt/brick1/export
Brick3: gl2:/mnt/brick1/export
Brick4: gl3:/mnt/brick1/export
Brick5: gl4:/mnt/brick1/export
Options Reconfigured:
nfs.mount-udp: on
nfs.addr-namelookup: off
nfs.ports-insecure: on
nfs.port: 2049
cluster.stripe-coalesce: on
nfs.disable: off
performance.flush-behind: on
performance.io-thread-count: 64
performance.quick-read: on
performance.stat-prefetch: on
performance.io-cache: on
performance.write-behind: on
performance.read-ahead: on
performance.write-behind-window-size: 4MB
performance.cache-refresh-timeout: 1
performance.cache-size: 4GB
network.frame-timeout: 60
performance.cache-max-file-size: 1GB



As you can see, two of the bricks are smaller than the others, and both of them are completely full.
The gluster volume itself is of course not full:

gl0:/w-vol       25T   21T  4.0T  84% /W/Projects


I'm no longer able to write to the volume. Why does this happen? Is it a known issue?
How can I stop Gluster from placing new writes on the full bricks?
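The only candidate I have found in the docs so far is the cluster.min-free-disk volume option, which as far as I understand tells DHT to avoid bricks whose free space is below a threshold when placing new files. I have not tried it yet, so the behavior described in the comment is my assumption, not something I have verified:

```shell
# Ask DHT to avoid bricks with less than 10% free space when creating
# new files (the value can reportedly also be an absolute size, e.g. 100GB).
gluster volume set w-vol cluster.min-free-disk 10%
```

If I understand correctly this only affects newly created files and does not move existing data, so a rebalance (gluster volume rebalance w-vol start) would presumably still be needed to free up the full bricks. Can anyone confirm?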

Thanks,
tamas



