[Gluster-users] How to partition directory structure for 300K files?

Mathieu Chateau mathieu.chateau at lotp.fr
Mon Aug 24 16:02:09 UTC 2015


Hello,

these options are to be set on the brick mount, not on the client side.
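
For example, the client entry can keep only the GlusterFS-side options, while the noatime/logbufs/etc. options go into the brick's own fstab entry:

gs2:/volume1    /data/nfs       glusterfs       defaults,_netdev,backupvolfile-server=gs1      0       0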


Regards,
Mathieu CHATEAU
http://www.lotp.fr

2015-08-24 17:57 GMT+02:00 Merlin Morgenstern <merlin.morgenstern at gmail.com>:

> Thank you for the recommendation on the parameters.
>
> I tried:
>
> gs2:/volume1    /data/nfs       glusterfs
> defaults,_netdev,backupvolfile-server=gs1,noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier
>       0       0
>
> The system reports:
>
> # sudo mount -a
>
> Invalid option noatime
>
> The same happens with nodiratime and logbufs as soon as I remove noatime.
>
>
>
> 2015-08-24 16:48 GMT+02:00 Mathieu Chateau <mathieu.chateau at lotp.fr>:
>
>> Re,
>>
>> Putting the mailing list back in CC so everyone stays in the loop.
>>
>> With newer versions, FUSE performs much better and provides transparent
>> failover. Since both bricks are VMs on the same host, latency will not be
>> an issue. Most important is to make sure the guest drivers/tools are
>> installed inside the VMs to get the best performance.
>>
>> On both clients and servers, I use this in /etc/sysctl.conf:
>>
>> vm.swappiness=0
>> net.core.rmem_max=67108864
>> net.core.wmem_max=67108864
>> # increase Linux autotuning TCP buffer limit to 32MB
>> net.ipv4.tcp_rmem=4096 87380 33554432
>> net.ipv4.tcp_wmem=4096 65536 33554432
>> # increase the length of the processor input queue
>> net.core.netdev_max_backlog=30000
>> # recommended default congestion control is htcp
>> net.ipv4.tcp_congestion_control=htcp
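>>
>> To apply these without rebooting, reload with sysctl (assuming the lines
>> above are in /etc/sysctl.conf; the htcp line also needs the tcp_htcp
>> module available in the kernel):
>>
>> sudo sysctl -p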
>>
>>
>> Options I set on the Gluster volumes:
>>
>> server.allow-insecure: on
>> performance.client-io-threads: on
>> performance.read-ahead: on
>> performance.readdir-ahead: enable
>> performance.cache-size: 1GB
>> performance.io-thread-count: 16
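>>
>> These are per-volume settings applied with the gluster CLI, e.g. for a
>> volume named volume1 (the volume name from the fstab above):
>>
>> gluster volume set volume1 performance.cache-size 1GB
>> gluster volume set volume1 performance.io-thread-count 16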
>>
>> Options I set in fstab on the bricks, for the XFS filesystems used by Gluster:
>>
>>
>> defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier
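>>
>> For example, a brick's fstab entry could look like this (the device
>> /dev/sdb1 and mount point /data/brick1 are placeholders):
>>
>> /dev/sdb1  /data/brick1  xfs  defaults,noatime,nodiratime,logbufs=8,logbsize=256k,largeio,inode64,swalloc,allocsize=131072k,nobarrier  0  0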
>>
>> Regards,
>> Mathieu CHATEAU
>> http://www.lotp.fr
>>
>> 2015-08-24 16:11 GMT+02:00 Merlin Morgenstern <merlin.morgenstern at gmail.com>:
>>
>>> re your questions:
>>>
>>> > Did you do some basic tuning to help anyway?
>>>
>>> No, this is a basic setup. Can you please point me to the most
>>> important tuning parameters to look at?
>>>
>>> > Are you using the latest version?
>>> glusterfs 3.7.3 built on Jul 28 2015 15:14:43
>>>
>>> > Is it replicated or only distributed?
>>> Replica 2.
>>>
>>> > Why use NFS and not the native FUSE client to mount the volume?
>>> I read that the NFS client performs better with small files (typically
>>> 2-20 KB in my case).
>>>
>>>
>>> > Did you install the VM tools (if using VMware Fusion)?
>>> I am using VirtualBox 5.0.3 on Mac OS X 10.10.
>>>
>>>
>>>
>>> 2015-08-24 16:01 GMT+02:00 Mathieu Chateau <mathieu.chateau at lotp.fr>:
>>>
>>>> Hello,
>>>>
>>>> Did you do some basic tuning to help anyway?
>>>> Are you using the latest version?
>>>> Is it replicated or only distributed?
>>>> Why use NFS and not the native FUSE client to mount the volume?
>>>> Did you install the VM tools (if using VMware Fusion)?
>>>>
>>>> Regards,
>>>> Mathieu CHATEAU
>>>> http://www.lotp.fr
>>>>
>>>> 2015-08-24 15:20 GMT+02:00 Merlin Morgenstern <merlin.morgenstern at gmail.com>:
>>>>
>>>>> I am running into trouble while syncing (rsync, cp, ...) my files to
>>>>> GlusterFS. After about 50K files, one machine dies and has to be rebooted.
>>>>>
>>>>> As there are about 300K files in one directory, I am thinking about
>>>>> splitting them into a directory structure to overcome that problem,
>>>>>
>>>>> e.g. /0001/filename /0002/filename
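>>>>>
>>>>> A minimal sketch of such a bucketing step, hashing each file name into
>>>>> a fixed set of buckets (the paths and names are illustrative):
>>>>>
>>>>> f="filename"
>>>>> bucket=$(printf '%s' "$f" | md5sum | cut -c1-4)
>>>>> mkdir -p "/data/nfs/$bucket"
>>>>> mv "/data/nfs/$f" "/data/nfs/$bucket/$f"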
>>>>>
>>>>> That would cut down the number of files in one directory. However, this
>>>>> is something I would like to avoid if possible due to SEO: changing the
>>>>> URL of a file brings a lot of trouble.
>>>>>
>>>>> The system underneath consists of 2 separate VM instances, each running
>>>>> Ubuntu 14.04, with the Gluster NFS client on the same machine as the
>>>>> Gluster server. The host is a MacBook Pro 13 Retina with a capable SSD
>>>>> and a 1G internal network between the VMs.
>>>>>
>>>>> Thank you for any help on this.
>>>>>
>>>>> _______________________________________________
>>>>> Gluster-users mailing list
>>>>> Gluster-users at gluster.org
>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>
>>>>
>>>>
>>>
>>
>