[Gluster-users] Distributed runs out of Space

Franco Broi Franco.Broi at iongeo.com
Fri Jan 31 23:15:49 UTC 2014


This explains what should happen, and you were correct: it should try another brick if the hash target brick has less than the min-free space available.

http://hekafs.org/index.php/2012/03/glusterfs-algorithms-distribution/
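The option that controls this is cluster.min-free-disk. A minimal sketch of checking and setting it (VOLNAME is a placeholder for your volume name):

    # show the volume's reconfigured options, including cluster.min-free-disk if set
    gluster volume info VOLNAME

    # reserve headroom on each brick; the value can be a percentage or an absolute size
    gluster volume set VOLNAME cluster.min-free-disk 10%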

On 28 Jan 2014 22:28, Dragon <Sunghost at gmx.de> wrote:

Hi,

The node is not new and there are a lot of files on it. Until now I could save files... Node 3 was completely set up with all its disks and then added to the existing ones. Strange...

------------

My understanding is: if you add a brick and don't fix the layout, any existing directories will only contain files on the old bricks, even if you create new files in those directories. Any new directories will have files spread over all the bricks, new and old.

Running fix-layout fixes the existing directories so that new files added to them can go to the new brick, but it doesn't move existing files; you need a full rebalance to make that happen.
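For reference, the two operations as commands (VOLNAME is a placeholder for your volume name):

    # fix-layout only: let existing directories place new files on the new brick,
    # without moving any existing data
    gluster volume rebalance VOLNAME fix-layout start

    # full rebalance: fix the layout and also migrate existing files onto the new brick
    gluster volume rebalance VOLNAME start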

So, are you adding the new file to a directory that existed before you added the extra brick?

On 28 Jan 2014 22:06, Dragon <Sunghost at gmx.de> wrote:

Hi,

OK, no problem. The question is whether this is the correct solution, or whether there is another problem. How would I do that? Like this: "# gluster volume rebalance VOLNAME start", or only "# gluster volume rebalance VOLNAME fix-layout start"? The free space on node 3 is 400GB, smaller than each of the 3TB disks. So I would say it's a rebalance problem rather than a layout one, or? Could anything be damaged if I do that? That's my biggest concern about the next steps.

----------------------

Just a warning: rebalancing is very slow. Not sure why...
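You can at least watch its progress while it runs (VOLNAME is a placeholder):

    # per-node progress: files scanned, files rebalanced, failures, current status
    gluster volume rebalance VOLNAME status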

On 28 Jan 2014 21:45, Dragon <Sunghost at gmx.de> wrote:

Hi,

OK, right. File sizes range from a few KB to >30GB. As far as I remember, first there was the two-node cluster, and later I added one more node. I also changed the min-free option from 100GB to 500MB. OK, and now, what is the solution: rebalance and fix-layout, and GlusterFS does the job ;)?

thx Franco

---------------------

When you add more bricks you have to tell Gluster to rebalance, i.e. move files from the existing disks to the newly added empty disks.

Do you have some very big files? Just wondering why one of your bricks has much more free space than the others.
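For the record, a minimal sketch of the add-brick/rebalance sequence, with hypothetical host and brick paths:

    # add the new, empty brick to the volume
    gluster volume add-brick VOLNAME node3:/data/brick1

    # then migrate existing files onto it
    gluster volume rebalance VOLNAME start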

On 28 Jan 2014 21:28, Dragon <Sunghost at gmx.de> wrote:

Hi Franco,

??? OK, no balancing, that was the wrong wording. I mean that GlusterFS controls where there is enough space left for files; I don't mean that GlusterFS balances the files between the nodes. So I have over 400GB free on node 3, why should I delete files on node 1 or node 2? This must happen automatically, or why would I set the option min-free-disk if the cluster ignores it? Please explain, and also what DHT means. If you're right, it would be horrible, because if I add one more node I have to copy lots of data between the other three nodes to have enough space? I can't believe that this is how it works.

------------------

The target brick for DHT is determined using a hash; it doesn't do any sort of capacity balancing. You need to make some space on all the bricks.
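To see how unevenly the bricks are filled, a quick sketch (VOLNAME and the brick path are placeholders):

    # free space per brick as reported by gluster itself
    gluster volume status VOLNAME detail

    # or straight from the brick filesystem on each node
    df -h /data/brick1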

------------------

On 28 Jan 2014 21:02, Dragon <Sunghost at gmx.de> wrote:
Hi,
After finding out that the FUSE client runs version 3.4.2, I updated all three nodes to 3.4.2 and restarted everything, but I get the same trouble. Then I mounted the client via FUSE directly to node 3, which has enough space left, but as far as I can see the files still go to node 1?!
Is this a bug? I need help ASAP.

thx
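For what it's worth, the server named in the mount command only hands the client the volume file; file placement is still decided by the DHT hash, not by which node you mount from. A sketch with hypothetical host and mount-point names:

    # mounting from node3 doesn't steer writes to node3
    mount -t glusterfs node3:/VOLNAME /mnt/gluster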



