[Gluster-users] Fwd:Re: client is terrible with large amount of small files

Kamal kamalakannan at zohocorp.com
Tue May 5 07:29:18 UTC 2015


Hi Amukher, 

         Even after upgrading to 3.7, the small-file transfer rate is still slow.

Below is the volume info. 

Volume Name: integvol1
Type: Replicate
Volume ID: 31793ba4-eeca-462a-a0cd-9adfb281225b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: integ-gluster1:/srv/sdb2/brick4
Brick2: integ-gluster2:/srv/sdb2/brick4
Options Reconfigured:
server.event-threads: 30
client.event-threads: 30
----

  I understand that replication adds some overhead, but here it is taking far longer than that would explain.


Time taken for git clone in a non-gluster directory = 25 sec

Time taken for git clone in the gluster-mounted directory = 14 minutes

That is a huge difference. Please let me know if any other tuning parameters need to be set.
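
For reference, this is roughly how the comparison is being run and the kind of volume-level tuning that could be tried next. The repository path and mount point below are placeholders, and the option names and values come from the standard "gluster volume set" option list rather than from any advice already given in this thread, so treat them as illustrative only:

# profile the volume while repeating the clone, to see which file operations dominate
gluster volume profile integvol1 start
time git clone /path/to/repo.git /mnt/gluster/repo   # clone onto the FUSE mount (paths are placeholders)
gluster volume profile integvol1 info
gluster volume profile integvol1 stop

# illustrative small-file tunables (example values, not recommendations from this thread)
gluster volume set integvol1 performance.io-thread-count 32
gluster volume set integvol1 performance.cache-size 256MB
gluster volume set integvol1 performance.write-behind-window-size 4MB
gluster volume set integvol1 performance.readdir-ahead on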


Regards,
Kamal

============ Forwarded Message ============
From : bturner at redhat.com
To : gjprabu at zohocorp.com
Cc : gluster-users at gluster.org, amukherj at redhat.com
Date : Thu, 30 Apr 2015 17:14:00 +0530
Subject : Re: [Gluster-users] client is terrible with large amount of small files
============ Forwarded Message ============



----- Original Message ----- 

> From: "Atin Mukherjee" <amukherj at redhat.com> 

> To: "gjprabu" <gjprabu at zohocorp.com> 

> Cc: "Ben Turner" <bturner at redhat.com>, gluster-users at gluster.org 

> Sent: Thursday, April 30, 2015 7:37:19 AM 

> Subject: Re: [Gluster-users] client is terrible with large amount of small files 

> 

> 

> On 04/30/2015 03:09 PM, gjprabu wrote: 

> > Hi Amukher, 

> > 

> > How do we resolve this issue? Do we need to wait for the 3.7 

> > release, or is there a workaround? 

> You will have to wait, as this feature is only landing in 3.7. 



My apologies, I didn't realize that MT epoll didn't land in 3.6. If you want to test it out, there is an alpha build available: 



http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64 



I wouldn't run this in production until 3.7 is released though. Again sorry for the confusion. 
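
For a throwaway test machine, a minimal sketch of pointing yum at that directory could look like this; the repo file name and package list are only illustrative, and it assumes an EL6 box to match the epel-6-x86_64 path above:

# /etc/yum.repos.d/glusterfs-3.7-nightly.repo (illustrative name)
[glusterfs-3.7-nightly]
name=GlusterFS 3.7 nightly builds
baseurl=http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/epel-6-x86_64
enabled=1
gpgcheck=0

# then, on the test node only:
yum install glusterfs-server glusterfs-fuse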



-b 



> > 

> > Regards, 
> > Prabu 

> > 

> > 

> > 

> > 

> > 

> > ---- On Thu, 30 Apr 2015 14:49:46 +0530 Atin 

> > Mukherjee<amukherj at redhat.com> wrote ---- 

> > 

> > 

> > 

> > On 04/30/2015 02:32 PM, gjprabu wrote: 

> > > Hi bturner, 

> > > 

> > > 

> > > I am getting the below error while setting server.event-threads: 

> > > 

> > > gluster v set integvol server.event-threads 3 

> > > volume set: failed: option : server.event-threads does not exist 

> > > Did you mean server.gid-timeout or ...manage-gids? 

> > This option is not available in 3.6; it's going to come in 3.7. 

> > 

> > > 

> > > 

> > > Glusterfs version has been upgraded to 3.6.3 

> > > The OS kernel has also been upgraded to the 6.6 kernel. 

> > > Yes, two bricks are running in KVM and one is a physical machine, and we 

> > are not using thinp. 

> > > 

> > > Regards 

> > > G.J 

> > > 

> > > 

> > > 

> > > 

> > > 

> > > ---- On Thu, 30 Apr 2015 00:37:44 +0530 Ben 

> > Turner<bturner at redhat.com> wrote ---- 

> > > 

> > > ----- Original Message ----- 

> > > > From: "gjprabu" <gjprabu at zohocorp.com> 

> > > > To: "A Ghoshal" <a.ghoshal at tcs.com> 

> > > > Cc: gluster-users at gluster.org, 

> > gluster-users-bounces at gluster.org 

> > > > Sent: Wednesday, April 29, 2015 9:07:07 AM 

> > > > Subject: Re: [Gluster-users] client is terrible with large 

> > amount of small files 

> > > > 

> > > > Hi Ghoshal, 

> > > > 

> > > > Please find the details below. 

> > > > 

> > > > A) Glusterfs version 

> > > > glusterfs 3.6.2 

> > > 

> > > Upgrade to 3.6.3 and set client.event-threads and server.event-threads 

> > to at least 4. Here is a guide on tuning MT epoll: 

> > > 

> > > 

> > https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Small_File_Performance_Enhancements.html 

> > > 

> > > > 

> > > > B) volume configuration (gluster v <volname> 

> > info) 

> > > > gluster volume info 

> > > > 

> > > > 

> > > > Volume Name: integvol 

> > > > Type: Replicate 

> > > > Volume ID: b8f3a19e-59bc-41dc-a55a-6423ec834492 

> > > > Status: Started 

> > > > Number of Bricks: 1 x 3 = 3 

> > > > Transport-type: tcp 

> > > > Bricks: 

> > > > Brick1: integ-gluster2:/srv/sdb1/brick 

> > > > Brick2: integ-gluster1:/srv/sdb1/brick 

> > > > Brick3: integ-gluster3:/srv/sdb1/brick 

> > > > 

> > > > 

> > > > C) host linux version 

> > > > CentOS release 6.5 (Final) 

> > > 

> > > Are your bricks on LVM? Are you using thinp? If so, update to the 

> > latest kernel, as thinp perf was really bad in the 6.5 and early 6.6 kernels. 

> > > 

> > > > 

> > > > D) details about the kind of network you use to connect your 

> > servers making 

> > > > up your storage pool. 

> > > > We are connecting LAN to LAN; there is no special network 

> > configuration done. 

> > > > 

> > > > From the client we mount it like below: 

> > > > mount -t glusterfs gluster1:/integvol /mnt/gluster/ 

> > > > 

> > > > 

> > > > Regards 

> > > > Prabu 

> > > > 

> > > > 

> > > > 

> > > > ---- On Wed, 29 Apr 2015 17:58:16 +0530 A 

> > Ghoshal<a.ghoshal at tcs.com> wrote 

> > > > ---- 

> > > > 

> > > > 

> > > > 

> > > > Performance would largely depend upon the setup. While I cannot 

> > think of any 

> > > > setup that would cause writes to be this slow, it would help 

> > if you share the 

> > > > following details: 

> > > > 

> > > > A) Glusterfs version 

> > > > B) volume configuration (gluster v <volname> 

> > info) 

> > > > C) host linux version 

> > > > D) details about the kind of network you use to connect your 

> > servers making 

> > > > up your storage pool. 

> > > > 

> > > > Thanks, 

> > > > Anirban 

> > > > 

> > > > 

> > > > 

> > > > From: gjprabu < gjprabu at zohocorp.com > 

> > > > To: < gluster-users at gluster.org > 

> > > > Date: 04/29/2015 05:52 PM 

> > > > Subject: Re: [Gluster-users] client is terrible with large 

> > amount of small 

> > > > files 

> > > > Sent by: gluster-users-bounces at gluster.org 

> > > > 

> > > > 

> > > > 

> > > > 

> > > > Hi Team, 

> > > > 

> > > > If anybody knows the solution, please share it with us. 

> > > > 

> > > > Regards 

> > > > Prabu 

> > > > 

> > > > 

> > > > 

> > > > ---- On Tue, 28 Apr 2015 19:32:40 +0530 gjprabu < 

> > gjprabu at zohocorp.com > 

> > > > wrote ---- 

> > > > Hi Team, 

> > > > 

> > > > We are newly using glusterfs and are testing data transfer from a client 

> > using the fuse.glusterfs file system, but it is terrible with a large 

> > > > number of small files (a large number of small files totalling about 150MB 

> > takes around 18 minutes to write). 

> > > > I can copy small files, and syncing between the server 

> > bricks is working 

> > > > fine, but it is terrible with a large number of small files. 

> > > > 

> > > > If anybody has a solution for the above issue, please share it. 

> > > > 

> > > > Regards 

> > > > Prabu 

> > > > 

> > > 

> > > 

> > 

> > 

> 

> -- 

> ~Atin 

> 