[Gluster-users] client is terrible with large amount of small files

Atin Mukherjee amukherj at redhat.com
Thu Apr 30 11:37:19 UTC 2015


On 04/30/2015 03:09 PM, gjprabu wrote:
> Hi Atin,
> 
>           How do we resolve this issue? Do we need to wait for the 3.7 release, or is there a workaround?
You will have to wait, as this feature lands in 3.7.
> 
> Regards
> Prabu
> 
> 
> 
> 
> 
> ---- On Thu, 30 Apr 2015 14:49:46 +0530 Atin Mukherjee<amukherj at redhat.com> wrote ---- 
> 
>  
>  
> On 04/30/2015 02:32 PM, gjprabu wrote: 
> > Hi bturner, 
> > 
> > 
> > I am getting below error while adding server.event 
> > 
> > gluster v set integvol server.event-threads 3 
> > volume set: failed: option : server.event-threads does not exist 
> > Did you mean server.gid-timeout or ...manage-gids? 
> This option is not available in 3.6; it's coming in 3.7. 
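You can confirm which options the installed glusterd build actually supports from the CLI itself; a quick sketch (the grep filter is just an illustration):

```
# List every volume option this glusterd build knows about,
# then look for the event-thread settings:
gluster volume set help | grep -A2 event-threads
```

If nothing matches, the option does not exist in your version.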
>  
> > 
> > 
> > GlusterFS has been upgraded to 3.6.3, and the OS has been upgraded to the 6.6 kernel. 
> > Yes, two bricks are running in KVM, one is a physical machine, and we are not using thinp. 
> > 
> > Regards 
> > G.J 
> > 
> > 
> > 
> > 
> > 
> > ---- On Thu, 30 Apr 2015 00:37:44 +0530 Ben Turner<bturner at redhat.com> wrote ---- 
> > 
> > ----- Original Message ----- 
> > > From: "gjprabu" <gjprabu at zohocorp.com> 
> > > To: "A Ghoshal" <a.ghoshal at tcs.com> 
> > > Cc: gluster-users at gluster.org, gluster-users-bounces at gluster.org 
> > > Sent: Wednesday, April 29, 2015 9:07:07 AM 
> > > Subject: Re: [Gluster-users] client is terrible with large amount of small files 
> > > 
> > > Hi Ghoshal, 
> > > 
> > > Please find the details below. 
> > > 
> > > A) Glusterfs version 
> > > glusterfs 3.6.2 
> > 
> > Upgrade to 3.6.3 and set client.event-threads and server.event-threads to at least 4. Here is a guide on tuning MT epoll: 
> > 
> > https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Small_File_Performance_Enhancements.html 
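Once on a release with MT epoll (3.7+), the tuning the guide describes boils down to two `volume set` calls; `integvol` is the volume from this thread, and 4 threads is just the suggested starting point:

```
# Raise the epoll worker thread counts on clients and bricks (3.7+ only):
gluster volume set integvol client.event-threads 4
gluster volume set integvol server.event-threads 4
# The changed values then show up under "Options Reconfigured" in:
gluster volume info integvol
```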
> > 
> > > 
> > > B) volume configuration (gluster v <volname> info) 
> > > gluster volume info 
> > > 
> > > 
> > > Volume Name: integvol 
> > > Type: Replicate 
> > > Volume ID: b8f3a19e-59bc-41dc-a55a-6423ec834492 
> > > Status: Started 
> > > Number of Bricks: 1 x 3 = 3 
> > > Transport-type: tcp 
> > > Bricks: 
> > > Brick1: integ-gluster2:/srv/sdb1/brick 
> > > Brick2: integ-gluster1:/srv/sdb1/brick 
> > > Brick3: integ-gluster3:/srv/sdb1/brick 
> > > 
> > > 
> > > C) host linux version 
> > > CentOS release 6.5 (Final) 
> > 
> > Are your bricks on LVM? Are you using thinp? If so, update to the latest kernel, as thinp performance was really bad in the 6.5 and early 6.6 kernels. 
> > 
> > > 
> > > D) details about the kind of network you use to connect your servers making 
> > > up your storage pool. 
> > > We are connecting LAN to LAN; there is no special network configuration. 
> > > 
> > > From the client we mount like below: 
> > > mount -t glusterfs gluster1:/integvol /mnt/gluster/ 
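Since gluster1 is only contacted at mount time to fetch the volume layout, it can help to list the other peers as fallbacks; a sketch assuming a reasonably recent fuse client:

```
# Fall back to gluster2/gluster3 for the volfile if gluster1 is down:
mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 \
    gluster1:/integvol /mnt/gluster/
```

After mounting, I/O goes to all three replica bricks regardless of which server name was used in the mount command.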
> > > 
> > > 
> > > Regards 
> > > Prabu 
> > > 
> > > 
> > > 
> > > ---- On Wed, 29 Apr 2015 17:58:16 +0530 A Ghoshal<a.ghoshal at tcs.com> wrote 
> > > ---- 
> > > 
> > > 
> > > 
> > > Performance would largely depend on the setup. While I cannot think of any 
> > > setup that would cause writes to be this slow, it would help if you share the 
> > > following details: 
> > > 
> > > A) Glusterfs version 
> > > B) volume configuration (gluster v <volname> info) 
> > > C) host linux version 
> > > D) details about the kind of network you use to connect your servers making 
> > > up your storage pool. 
> > > 
> > > Thanks, 
> > > Anirban 
> > > 
> > > 
> > > 
> > > From: gjprabu < gjprabu at zohocorp.com > 
> > > To: < gluster-users at gluster.org > 
> > > Date: 04/29/2015 05:52 PM 
> > > Subject: Re: [Gluster-users] client is terrible with large amount of small 
> > > files 
> > > Sent by: gluster-users-bounces at gluster.org 
> > > 
> > > 
> > > 
> > > 
> > > Hi Team, 
> > > 
> > > If anybody knows the solution, please share it with us. 
> > > 
> > > Regards 
> > > Prabu 
> > > 
> > > 
> > > 
> > > ---- On Tue, 28 Apr 2015 19:32:40 +0530 gjprabu < gjprabu at zohocorp.com > 
> > > wrote ---- 
> > > Hi Team, 
> > > 
> > > We are new to GlusterFS and are testing data transfer on a client using the 
> > > fuse.glusterfs file system, but it is terrible with a large number of small 
> > > files (writing about 150 MB of small files takes around 18 minutes). 
> > > I can copy small files, and syncing between the server bricks works 
> > > fine, but it is terrible with a large number of small files. 
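To turn "terrible" into a comparable number, a crude benchmark like the one below (file count, size, and paths are just an illustration) can be run once on a local disk and once inside the FUSE mount; with a 3-way replica, each small write pays network round-trips to all three bricks, so per-file latency rather than bandwidth usually dominates:

```shell
# Create 1,000 files of 16 KB each (~16 MB) and time the whole run.
# Point TESTDIR at the filesystem under test, e.g. a directory
# under /mnt/gluster/ for the fuse mount.
TESTDIR=$(mktemp -d)
time (
    for i in $(seq 1 1000); do
        dd if=/dev/zero of="$TESTDIR/f$i" bs=16k count=1 status=none
    done
)
# Sanity check: all files landed.
ls "$TESTDIR" | wc -l
# rm -rf "$TESTDIR"   # clean up when done
```

Scale the count up to match the ~150 MB workload; the per-file time, not MB/s, is the figure worth comparing between local disk and the mount.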
> > > 
> > > If anybody has a solution to the above issue, please share it. 
> > > 
> > > Regards 
> > > Prabu 
> > > 
> > > _______________________________________________ 
> > > Gluster-users mailing list 
> > > Gluster-users at gluster.org 
> > > http://www.gluster.org/mailman/listinfo/gluster-users 
> > > 
> > > 
> > > 
> > > 
> > > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
>  
> 

-- 
~Atin

