[Gluster-users] Is the size of bricks limiting the size of files I can store?
Krutika Dhananjay
kdhananj at redhat.com
Fri Apr 13 09:51:26 UTC 2018
Sorry about the late reply, I missed seeing your mail.
To begin with, what is your use-case? Sharding is currently supported only
for the virtual machine image storage use-case.
It *could* work in other single-writer use-cases, but it has only been
tested thoroughly for the VM use-case.
If yours is not a VM store use-case, you might want to run some tests first
to see whether it works fine.
If you find any issues, you can raise a bug. I'll be more than happy to fix
them.
On Fri, Apr 13, 2018 at 1:19 AM, Andreas Davour <ante at update.uu.se> wrote:
> On Tue, 3 Apr 2018, Raghavendra Gowdappa wrote:
>
> On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour <ante at update.uu.se> wrote:
>>
>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
>>> On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>>>
>>>>>
>>>>> Hi
>>>>>
>>>>> I've found something that works so weirdly that I'm certain I have
>>>>> missed how gluster is supposed to be used, but I cannot figure out
>>>>> how. This is my scenario.
>>>>>
>>>>> I have a volume created from 16 nodes, each with a brick of the same
>>>>> size. The total size of that volume is thus in the terabyte range.
>>>>> It's a distributed volume with a replica count of 2.
>>>>>
>>>>> The filesystem when mounted on the clients is not even close to getting
>>>>> full, as displayed by 'df'.
>>>>>
>>>>> But, when one of my users tries to copy a file from another network
>>>>> storage to the gluster volume, he gets a 'filesystem full' error. What
>>>>> happened? I looked at the bricks and figured out that one big file had
>>>>> ended up on a brick that was half full or so, and the big file did not
>>>>> fit in the space that was left on that brick.
>>>>>
>>>> Hi,
>>>>
>>>> This is working as expected. As files are not split up (unless you are
>>>> using sharding), the maximum file size is restricted by the size of the
>>>> individual bricks.
>>>>
>>>>
>>> Thanks a lot for that definitive answer. Is there a way to manage this?
>>> Can you shard just those files, making them replicated in the process?
>>>
>>
Is your question about whether you can shard just that big file that caused
space to run out, and keep the rest of the files unsharded?
This is a bit tricky. Sharding is a volume-wide option: from the time you
enable it on your volume, all newly created files will get sharded once
their size exceeds the features.shard-block-size value (which is
configurable).
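For reference, a minimal sketch of how that's enabled (the volume name
"myvol" is hypothetical; 64MB is the default shard-block-size):

    # Enable sharding on the volume
    gluster volume set myvol features.shard on
    # Optionally tune the shard size (default is 64MB)
    gluster volume set myvol features.shard-block-size 64MB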
As for files which were already on the volume before shard was enabled,
to get them sharded you'll need to perform either of the two steps below
(a rough sketch follows the list):
1. Move the existing file out of your glusterfs volume to a local fs and
then move it back into the volume.
2. Copy the existing file into a temporary file on the same volume and
rename the temporary file back to the original name.
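A rough sketch of option 2, run from a client mount (the mount point and
file name here are hypothetical):

    # The copy is a newly created file, so it gets sharded as it's written
    cp /mnt/glustervol/bigfile.img /mnt/glustervol/bigfile.img.tmp
    # Rename the sharded copy back over the original name
    mv /mnt/glustervol/bigfile.img.tmp /mnt/glustervol/bigfile.img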
-Krutika
>>>
>> +Krutika, the shard xlator maintainer, for the answer.
>>
>>
>>> I just can't have users see 15TB free and fail copying a 15GB file. They
>>> will show me the bill they paid for those "disks" and flay me.
>>>
>>
> Any input on that, Krutika?
>
> /andreas
>
> --
> "economics is a pseudoscience; the astrology of our time"
> Kim Stanley Robinson
>