[Gluster-devel] [Gluster-users] Glusterfs metadata space consumption issue
ABHISHEK PALIWAL
abhishpaliwal at gmail.com
Mon Apr 17 03:09:52 UTC 2017
There is no need, but it could happen accidentally, and I think it should
either be protected against or simply not be permitted.
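
For reference, one rough way to spot the leftovers after such an accidental
delete is to look for gfid entries under .glusterfs that have lost their
companion file on the brick. For regular files the gfid entry is a hard link,
so once the brick copy is gone it is left with a single link. This is only a
minimal sketch, assuming the brick path from the reproduction below; it skips
gluster's own housekeeping files, and every hit should be checked by hand
before anything is removed:

# list gfid entries whose brick-side file no longer exists (link count 1),
# skipping gluster's internal bookkeeping files
find /tmp/brick/.glusterfs -type f -links 1 \
    ! -path '*/indices/*' ! -path '*/changelogs/*' \
    ! -path '*/landfill/*' ! -name health_check
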
On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL <abhishpaliwal at gmail.com>
> wrote:
>
>> Hi All,
>>
>> Here we have below steps to reproduce the issue
>>
>> Reproduction steps:
>>
>>
>>
>> root at 128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
>> ----- create the gluster volume
>>
>> volume create: brick: success: please start the volume to access data
>>
>> root at 128:~# gluster volume set brick nfs.disable true
>>
>> volume set: success
>>
>> root at 128:~# gluster volume start brick
>>
>> volume start: brick: success
>>
>> root at 128:~# gluster volume info
>>
>> Volume Name: brick
>>
>> Type: Distribute
>>
>> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>>
>> Status: Started
>>
>> Number of Bricks: 1
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: 128.224.95.140:/tmp/brick
>>
>> Options Reconfigured:
>>
>> nfs.disable: true
>>
>> performance.readdir-ahead: on
>>
>> root at 128:~# gluster volume status
>>
>> Status of volume: brick
>>
>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>>
>> ------------------------------------------------------------------------------
>>
>> Brick 128.224.95.140:/tmp/brick            49155     0          Y       768
>>
>>
>>
>> Task Status of Volume brick
>>
>> ------------------------------------------------------------------------------
>>
>> There are no active volume tasks
>>
>>
>>
>> root at 128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>>
>> root at 128:~# cd gluster/
>>
>> root at 128:~/gluster# du -sh
>>
>> 0 .
>>
>> root at 128:~/gluster# mkdir -p test/
>>
>> root at 128:~/gluster# cp ~/tmp.file gluster/
>>
>> root at 128:~/gluster# cp tmp.file test
>>
>> root at 128:~/gluster# cd /tmp/brick
>>
>> root at 128:/tmp/brick# du -sh *
>>
>> 768K test
>>
>> 768K tmp.file
>>
>> root at 128:/tmp/brick# rm -rf test --------- delete the test directory and its
>> data directly on the server (brick) side, which is not reasonable
>>
>> root at 128:/tmp/brick# ls
>>
>> tmp.file
>>
>> root at 128:/tmp/brick# du -sh *
>>
>> 768K tmp.file
>>
>> *root at 128:/tmp/brick# du -sh (brick dir)*
>>
>> *1.6M .*
>>
>> root at 128:/tmp/brick# cd .glusterfs/
>>
>> root at 128:/tmp/brick/.glusterfs# du -sh *
>>
>> 0 00
>>
>> 0 2a
>>
>> 0 bb
>>
>> 768K c8
>>
>> 0 c9
>>
>> 0 changelogs
>>
>> 768K d0
>>
>> 4.0K health_check
>>
>> 0 indices
>>
>> 0 landfill
>>
>> *root at 128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>>
>> *1.6M .*
>>
>> root at 128:/tmp/brick# cd ~/gluster
>>
>> root at 128:~/gluster# ls
>>
>> tmp.file
>>
>> *root at 128:~/gluster# du -sh * (Mount dir)*
>>
>> *768K tmp.file*
>>
>>
>>
>> In the reproduction steps, we delete the test directory on the server side,
>> not on the client side. I think this delete operation is not reasonable.
>> Please ask the customer to check whether they performed this unreasonable
>> operation.
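>>
>> For comparison, the supported path would have been to remove the directory
>> through the client mount, which also drops the matching gfid links under
>> .glusterfs. A minimal sketch, reusing the mount point and brick path from
>> the steps above:
>>
>> # remove through the mount, not the brick
>> rm -rf ~/gluster/test
>> # the gfid links for the removed files are released as well
>> du -sh /tmp/brick/.glusterfs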
>>
>
> What's the need to delete data from the backend (i.e. the bricks) directly?
>
>
>> *It seems that when data is deleted from the BRICK directly, the
>> corresponding metadata is not deleted from the .glusterfs directory.*
>>
>>
>> *I don't know whether this is a bug or a limitation; please let us know
>> about this.*
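>>
>> As a note on why the space stays behind: for a regular file the entry under
>> .glusterfs/<aa>/<bb>/<gfid> is a hard link to the file on the brick, so
>> removing the brick copy directly leaves that link (and its data blocks) in
>> place. A minimal sketch to confirm this for tmp.file, assuming the brick
>> path from the steps above and GNU getfattr/stat on the brick host:
>>
>> BRICK=/tmp/brick
>> F=$BRICK/tmp.file
>>
>> # read the file's gfid from its trusted.gfid xattr (32 hex chars)
>> gfid_hex=$(getfattr -n trusted.gfid -e hex "$F" | awk -F= '/trusted.gfid/{print substr($2,3)}')
>> # format it as a UUID: 8-4-4-4-12
>> gfid=$(echo "$gfid_hex" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')
>>
>> # both paths should report the same inode and a link count of 2
>> stat -c '%i %h %n' "$F" "$BRICK/.glusterfs/${gfid:0:2}/${gfid:2:2}/$gfid"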
>>
>>
>> Regards,
>>
>> Abhishek
>>
>>
>> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
>> pkarampu at redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>>> abhishpaliwal at gmail.com> wrote:
>>>
>>>> Yes, it is ext4, but what is the impact of this?
>>>>
>>>
>>> Did you have a lot of data before and then delete all of it? If I remember
>>> correctly, ext4 doesn't decrease the size of a directory once it has
>>> expanded. So on ext4, if you create lots and lots of files inside a
>>> directory and then delete them all, the directory's size grows at creation
>>> time but doesn't shrink after deletion. I don't have any system with ext4
>>> at the moment to test this. This is something we faced 5-6 years back, but
>>> I'm not sure whether it has been fixed in the latest ext4 releases.
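>>>
>>> To check this outside of gluster, here is a minimal sketch (names and
>>> counts are arbitrary): the directory's own size grows as entries are added
>>> and, on ext4, stays at that size after they are deleted:
>>>
>>> # run inside a directory that lives on an ext4 filesystem
>>> mkdir dirsize-test && cd dirsize-test
>>> stat -c '%s bytes' .   # a fresh directory: typically 4096
>>> touch f{1..20000}      # force the directory to allocate more entry blocks
>>> stat -c '%s bytes' .   # noticeably larger now
>>> rm -f f*
>>> stat -c '%s bytes' .   # the size stays at the larger value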
>>>
>>>
>>>>
>>>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>>>> pkarampu at redhat.com> wrote:
>>>>
>>>>> Yes
>>>>>
>>>>> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
>>>>> abhishpaliwal at gmail.com> wrote:
>>>>>
>>>>>> Do you mean the filesystem on which this brick has been created?
>>>>>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <
>>>>>> pkarampu at redhat.com> wrote:
>>>>>>
>>>>>>> Is your backend filesystem ext4?
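>>>>>>>
>>>>>>> (A quick way to check, assuming the brick path used earlier in the
>>>>>>> thread; the Type/FSTYPE column shows what the brick sits on:)
>>>>>>>
>>>>>>> df -T /tmp/brick
>>>>>>> findmnt -n -o FSTYPE --target /tmp/brick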
>>>>>>>
>>>>>>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>>>>>>> abhishpaliwal at gmail.com> wrote:
>>>>>>>
>>>>>>>> No, we are not using sharding.
>>>>>>>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <ab1 at metalit.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On 12/04/2017 14:16, ABHISHEK PALIWAL wrote:
>>>>>>>>>
>>>>>>>>> I have done more investigation and found that the brick directory size
>>>>>>>>> is equivalent to the gluster mount point, but .glusterfs shows too
>>>>>>>>> large a difference.
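>>>>>>>>>
>>>>>>>>> For anyone reproducing this, a rough sketch (assuming the paths from
>>>>>>>>> the thread) to compare the payload, the metadata tree, and the client
>>>>>>>>> view; note that the gfid entries are hard links, so healthy files are
>>>>>>>>> counted under both the payload and .glusterfs:
>>>>>>>>>
>>>>>>>>> du -sh --exclude=.glusterfs /tmp/brick   # payload on the brick
>>>>>>>>> du -sh /tmp/brick/.glusterfs             # gfid links, indices, ...
>>>>>>>>> du -sh ~/gluster                         # what the client sees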
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> You are probably using sharding?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Have a good day.
>>>>>>>>> *Alessandro Briosi*
>>>>>>>>>
>>>>>>>>> *METAL.it Nord S.r.l.*
>>>>>>>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>>>>>>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>>>>>>>> www.metalit.com
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Pranith
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Pranith
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>> Abhishek Paliwal
>>>>
>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>
> --
> - Atin (atinm)
>
--
Regards
Abhishek Paliwal