[Gluster-users] Is the size of bricks limiting the size of files I can store?
rgowdapp at redhat.com
Tue Apr 3 02:19:30 UTC 2018
On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour <ante at update.uu.se> wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
> On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>>> I've found something that behaves so strangely I'm certain I have
>>> missed how gluster is supposed to be used, but I cannot figure out
>>> what. This is my setup:
>>> I have a volume, created from 16 nodes, each with a brick of the same
>>> size. The total of that volume thus is in the Terabyte scale. It's a
>>> distributed volume with a replica count of 2.
>>> The filesystem when mounted on the clients is not even close to getting
>>> full, as displayed by 'df'.
>>> But when one of my users tries to copy a file from another network
>>> storage system to the gluster volume, he gets a 'filesystem full'
>>> error. What happened? I looked at the bricks and found that one big
>>> file had ended up on a brick that was about half full, and the big
>>> file did not fit in the space that was left on that brick.
>> This is working as expected. Since files are not split up (unless you
>> are using shards), the size of each file is limited by the free space
>> on the individual brick it is placed on.
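To illustrate the placement behaviour described above, here is a minimal, hypothetical sketch of DHT-style distribution: the file name is hashed, the hash selects exactly one brick (or replica pair), and the whole file must fit there. The brick names and sizes are invented for illustration; real Gluster uses its own hash ranges per directory, not this exact scheme.

```python
# Hypothetical sketch of DHT-style placement: one file -> one brick.
import hashlib

def pick_brick(filename: str, bricks: list) -> dict:
    """Map a file name to a single brick by hashing the name."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

# Eight replica pairs of 1 TB each: 8 TB of raw capacity in total.
bricks = [{"id": i, "size_tb": 1.0, "used_tb": 0.5} for i in range(8)]

target = pick_brick("big-backup.tar", bricks)
free_tb = target["size_tb"] - target["used_tb"]

# A 0.7 TB file fails even though 'df' on the mounted volume reports
# about 4 TB free overall, because only the chosen brick's free space
# (0.5 TB here) matters for this one file.
fits = 0.7 <= free_tb
```

This is why `df` on the client can show terabytes free while a copy still fails: `df` aggregates all bricks, but any single file is constrained by one of them.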
> Thanks a lot for that definitive answer. Is there a way to manage this?
> Can you shard just those files, making them replicated in the process?
+Krutika, xlator/shard maintainer for the answer.
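While waiting for the maintainer's answer: as far as I know, sharding in Gluster is a volume-wide option rather than a per-file one, so you cannot shard only selected files on an existing volume. A minimal sketch of enabling it, assuming a volume named `myvol` (a placeholder):

```shell
# Assumption: features.shard is a volume-wide setting; it cannot be
# applied to individual files. "myvol" is a placeholder volume name.
gluster volume set myvol features.shard on

# Optional: shard block size (default 64MB). Larger shards mean fewer
# pieces per big file, smaller shards spread data more evenly.
gluster volume set myvol features.shard-block-size 512MB
```

Note that sharding only affects files written after it is enabled; files already on the volume remain whole, so existing large files would need to be copied back in to be split.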
> I just can't have users see 15TB free and fail copying a 15GB file. They
> will show me the bill they paid for those "disks" and flay me.
> "economics is a pseudoscience; the astrology of our time"
> Kim Stanley Robinson
> Gluster-users mailing list
> Gluster-users at gluster.org