[Gluster-users] Fwd:Re: client is terrible with large amount of small files
Joe Julian
joe at julianfamily.org
Fri May 8 06:17:13 UTC 2015
Looks like 17 seconds. That's not 14 minutes.
On May 7, 2015 10:55:05 PM PDT, gjprabu <gjprabu at zohocorp.com> wrote:
>Hi Team,
>
> Are there any options to solve the issues below?
>
>Regards
>Prabu
>
>
>---- On Thu, 07 May 2015 12:23:02 +0530 <gjprabu at zohocorp.com>
>wrote ----
>
>Hi Vijay,
>
> Do we have any other options to increase the performance?
>
>Regards
>Prabu
>
>
>
>
>
>---- On Wed, 06 May 2015 15:51:20 +0530 gjprabu
><gjprabu at zohocorp.com> wrote ----
>
>Hi Vijay,
>
> We tried on physical machines but it doesn't improve the speed.
>
># gluster volume info
>
>Volume Name: integvoltest
>Type: Replicate
>Volume ID: 6c66afb9-d466-428e-b944-e15d7a1be5f2
>Status: Started
>Number of Bricks: 1 x 2 = 2
>Transport-type: tcp
>Bricks:
>Brick1: integ-gluster3:/srv/sdb1/brick7
>Brick2: integ-gluster4:/srv/sdb1/brick7
>Options Reconfigured:
>diagnostics.count-fop-hits: on
>diagnostics.latency-measurement: on
>cluster.ensure-durability: off
>cluster.readdir-optimize: on
>performance.readdir-ahead: on
>server.event-threads: 30
>client.event-threads: 30
>
>
>
>
>gluster volume profile integvoltest info
>Brick: integ-gluster3:/srv/sdb1/brick7
>--------------------------------------
>Cumulative Stats:
> Block Size:             4b+       8b+      16b+
> No. of Reads:             0         0         1
> No. of Writes:            2         4         8
>
> Block Size:            32b+      64b+     128b+
> No. of Reads:             2         2         2
> No. of Writes:            8         6         6
>
> Block Size:           256b+     512b+    1024b+
> No. of Reads:             0         0         0
> No. of Writes:            4         2         6
>
> Block Size:          2048b+    4096b+
> No. of Reads:             0         0
> No. of Writes:            2      6507
> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
> ---------   -----------   -----------   -----------   ------------         ----
>      0.00       0.00 us       0.00 us       0.00 us             37       FORGET
>      0.00       0.00 us       0.00 us       0.00 us             61      RELEASE
>      0.00       0.00 us       0.00 us       0.00 us         202442   RELEASEDIR
>      0.00     142.00 us     142.00 us     142.00 us              1  REMOVEXATTR
>      0.00     101.00 us      79.00 us     142.00 us              3         STAT
>      0.00     313.00 us     313.00 us     313.00 us              1      XATTROP
>      0.00     120.00 us      96.00 us     145.00 us              3         READ
>      0.00     101.75 us      69.00 us     158.00 us              4       STATFS
>      0.00     131.25 us     112.00 us     147.00 us              4     GETXATTR
>      0.00     256.00 us     216.00 us     309.00 us              3       UNLINK
>      0.00     820.00 us     820.00 us     820.00 us              1      SYMLINK
>      0.00     109.80 us      72.00 us     197.00 us             10      READDIR
>      0.00     125.58 us     100.00 us     161.00 us             12      SETATTR
>      0.00     138.36 us     102.00 us     196.00 us             11         OPEN
>      0.00      55.38 us      24.00 us     240.00 us             29        FLUSH
>      0.00     445.00 us     125.00 us     937.00 us              4     READDIRP
>      0.01     306.43 us     165.00 us     394.00 us              7       RENAME
>      0.01     199.55 us     153.00 us     294.00 us             11     SETXATTR
>      0.01      72.64 us      28.00 us     227.00 us             47     FINODELK
>      0.02      67.69 us      30.00 us     241.00 us             96      ENTRYLK
>      0.03    1038.18 us     943.00 us    1252.00 us             11        MKDIR
>      0.03     251.49 us     147.00 us     865.00 us             53     FXATTROP
>      0.06    1115.60 us     808.00 us    1860.00 us             20       CREATE
>      0.07     323.83 us      31.00 us   22132.00 us             88      INODELK
>      1.41     170.57 us      79.00 us    2022.00 us           3262        WRITE
>     26.35     103.15 us       4.00 us     260.00 us         100471      OPENDIR
>     71.98     139.07 us      47.00 us     471.00 us         203591       LOOKUP
>
> Duration: 1349 seconds
> Data Read: 624 bytes
>Data Written: 26675732 bytes
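A quick sanity check on those cumulative numbers (my arithmetic, not part of the original profile output): multiplying each fop's average latency by its call count converts the table into estimated wall-clock brick time, which shows the cost comes from the sheer number of LOOKUP and OPENDIR calls rather than from per-call latency:

```python
# Estimated total brick-side time per fop: avg-latency (us) * call count.
# Figures copied from the cumulative stats of integ-gluster3 above.
fops = {
    "LOOKUP":  (139.07, 203591),  # (avg latency in us, number of calls)
    "OPENDIR": (103.15, 100471),
    "WRITE":   (170.57, 3262),
}

for name, (avg_us, calls) in fops.items():
    total_s = avg_us * calls / 1e6  # microseconds -> seconds
    print(f"{name:8s} ~{total_s:5.1f} s across {calls} calls")
```

Roughly 28 s of LOOKUP and 10 s of OPENDIR against 0.6 s of WRITE: each call is cheap (~100-140 us), but the clone issues hundreds of thousands of them, and on a FUSE mount each one is a network round trip.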
>
>Interval 25 Stats:
> Block Size:          4096b+
> No. of Reads:             0
> No. of Writes:          108
>
> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
> ---------   -----------   -----------   -----------   ------------         ----
>      0.00       0.00 us       0.00 us       0.00 us           4887   RELEASEDIR
>      1.06     155.19 us     117.00 us     248.00 us            108        WRITE
>     25.69      83.24 us      35.00 us     260.00 us           4888      OPENDIR
>     73.25     117.83 us      51.00 us     406.00 us           9844       LOOKUP
>
> Duration: 17 seconds
> Data Read: 0 bytes
>Data Written: 442368 bytes
>
>Brick: integ-gluster4:/srv/sdb1/brick7
>--------------------------------------
>Cumulative Stats:
> Block Size:             4b+       8b+      16b+
> No. of Reads:             0         0         1
> No. of Writes:            2         4         8
>
> Block Size:            32b+      64b+     128b+
> No. of Reads:             2         2         2
> No. of Writes:            8         6         6
>
> Block Size:           256b+     512b+    1024b+
> No. of Reads:             0         0         0
> No. of Writes:            4         2         6
>
> Block Size:          2048b+    4096b+
> No. of Reads:             0         0
> No. of Writes:            2      6507
> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
> ---------   -----------   -----------   -----------   ------------         ----
>      0.00       0.00 us       0.00 us       0.00 us             37       FORGET
>      0.00       0.00 us       0.00 us       0.00 us             61      RELEASE
>      0.00       0.00 us       0.00 us       0.00 us         202444   RELEASEDIR
>      0.00     124.00 us     124.00 us     124.00 us              1  REMOVEXATTR
>      0.00     186.00 us     186.00 us     186.00 us              1      XATTROP
>      0.00      90.00 us      49.00 us     131.00 us              3     GETXATTR
>      0.00     276.00 us     276.00 us     276.00 us              1      SYMLINK
>      0.00      92.75 us      49.00 us     119.00 us              4       STATFS
>      0.00     125.00 us      58.00 us     161.00 us              3       UNLINK
>      0.00     135.75 us      90.00 us     173.00 us              4         READ
>      0.00      64.73 us      54.00 us     131.00 us             11     SETXATTR
>      0.00      93.80 us      40.00 us     129.00 us             10      READDIR
>      0.00      95.00 us      75.00 us     159.00 us             12      SETATTR
>      0.00     121.27 us      93.00 us     156.00 us             11         OPEN
>      0.00     257.14 us     106.00 us     404.00 us              7       RENAME
>      0.00      62.76 us      26.00 us     245.00 us             29        FLUSH
>      0.00     313.73 us     256.00 us     397.00 us             11        MKDIR
>      0.00      74.08 us      23.00 us     243.00 us             49     FINODELK
>      0.01      62.12 us      20.00 us     145.00 us             88      INODELK
>      0.01      58.81 us      18.00 us     120.00 us             96      ENTRYLK
>      0.01     370.35 us     285.00 us     479.00 us             20       CREATE
>      0.01     149.85 us      70.00 us     230.00 us             53     FXATTROP
>      0.03      69.19 us      32.00 us     167.00 us            350         STAT
>      0.46     130.36 us      59.00 us    1042.00 us           3262        WRITE
>     10.07      91.73 us       5.00 us     759.00 us         100472      OPENDIR
>     24.90     111.87 us      40.00 us     954.00 us         203595       LOOKUP
>     64.49     293.59 us      28.00 us    2779.00 us         200938     READDIRP
>
> Duration: 1349 seconds
> Data Read: 624 bytes
>Data Written: 26675732 bytes
>
>Interval 25 Stats:
> Block Size:          4096b+
> No. of Reads:             0
> No. of Writes:          108
>
> %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
> ---------   -----------   -----------   -----------   ------------         ----
>      0.00       0.00 us       0.00 us       0.00 us           4888   RELEASEDIR
>      0.03      92.18 us      55.00 us     161.00 us             17         STAT
>      0.31     144.44 us     108.00 us     382.00 us            108        WRITE
>     11.20     113.73 us      53.00 us     203.00 us           4887      OPENDIR
>     26.70     134.65 us      79.00 us     265.00 us           9844       LOOKUP
>     61.76     313.65 us      56.00 us     652.00 us           9774     READDIRP
>
> Duration: 17 seconds
> Data Read: 0 bytes
>Data Written: 442368 bytes
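Applying the same avg-latency × call-count arithmetic to this brick (a hypothetical helper of mine, not something from the thread) makes the pattern even clearer: READDIRP, LOOKUP, and OPENDIR, i.e. directory traversal, account for nearly all of the brick's time:

```python
# Hypothetical helper: rank fops by estimated total time
# (avg latency in us * call count), using the cumulative
# stats of integ-gluster4 above.
rows = [
    # (fop, avg_latency_us, calls)
    ("READDIRP", 293.59, 200938),
    ("LOOKUP",   111.87, 203595),
    ("OPENDIR",   91.73, 100472),
    ("WRITE",    130.36, 3262),
]

def rank_by_total_time(rows):
    """Return (fop, seconds) pairs sorted by total estimated time, descending."""
    totals = [(fop, avg_us * calls / 1e6) for fop, avg_us, calls in rows]
    return sorted(totals, key=lambda t: t[1], reverse=True)

for fop, secs in rank_by_total_time(rows):
    print(f"{fop:8s} ~{secs:5.1f} s")
```

READDIRP alone comes to roughly 59 s; metadata traversal dwarfs the ~0.4 s of actual WRITE time, which is consistent with Vijay's earlier observation about the readdir calls done by git clone.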
>
>strace -Tcf git clone
>% time seconds usecs/call calls errors syscall
>------ ----------- ----------- --------- --------- ----------------
>100.00 0.000026 2 13 mmap
> 0.00 0.000000 0 3 read
> 0.00 0.000000 0 77 write
> 0.00 0.000000 0 4 open
> 0.00 0.000000 0 4 close
> 0.00 0.000000 0 4 fstat
> 0.00 0.000000 0 7 mprotect
> 0.00 0.000000 0 1 munmap
> 0.00 0.000000 0 3 brk
> 0.00 0.000000 0 2 rt_sigaction
> 0.00 0.000000 0 1 rt_sigprocmask
> 0.00 0.000000 0 1 1 access
> 0.00 0.000000 0 1 execve
> 0.00 0.000000 0 1 getrlimit
> 0.00 0.000000 0 1 arch_prctl
> 0.00 0.000000 0 2 1 futex
> 0.00 0.000000 0 1 set_tid_address
> 0.00 0.000000 0 1 set_robust_list
>------ ----------- ----------- --------- --------- ----------------
>100.00 0.000026 127 2 total
>
>
>Regards
>Prabu
>
>
>
>
>
>---- On Tue, 05 May 2015 15:51:58 +0530 Vijay
>Bellur<vbellur at redhat.com> wrote ----
>
>On 05/05/2015 03:43 PM, Kamal wrote:
>> Hi Vijay,
>>
>> We tried the same, but it doesn't improve the speed.
>>
>> For testing glusterfs, we are running the storage in virtual
>> machines. Will that make any difference?
>
>Performance testing on physical machines is a better bet as it takes
>some variables away from the equation.
>
>Since you happen to use a replicated volume, can you try setting this
>option?
>
>gluster volume set <volname> cluster.ensure-durability off
>
>Additionally you might want to try "strace -Tcf git clone .." and
>"gluster volume profile ..." to figure out where the latency is
>stemming from.
>
>
>> But copying the same folder between the two storage machines is
>> really fast, FYI.
>>
>
>Does this copying involve gluster or not?
>
>Regards,
>Vijay
>
>> Regards,
>> Kamal
>>
>>
>>
>> ---- On Tue, 05 May 2015 15:08:27 +0530 *Vijay
>> Bellur<vbellur at redhat.com>* wrote ----
>>
>> On 05/05/2015 12:59 PM, Kamal wrote:
>> > Hi Amukher,
>> >
>> > Even after upgrading to 3.7, the small-file transfer rate is slow.
>> >
>> > Below is the volume info.
>> >
>> > Volume Name: integvol1
>> > Type: Replicate
>> > Volume ID: 31793ba4-eeca-462a-a0cd-9adfb281225b
>> > Status: Started
>> > Number of Bricks: 1 x 2 = 2
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: integ-gluster1:/srv/sdb2/brick4
>> > Brick2: integ-gluster2:/srv/sdb2/brick4
>> > Options Reconfigured:
>> > server.event-threads: 30
>> > client.event-threads: 30
>> > ----
>> >
>> > I understand that with replication it would take some more time,
>> > but here it's taking far more time.
>> >
>> >
>> > Time taken for git clone in a non-gluster directory = 25 sec
>> >
>> > Time taken for git clone in a gluster directory = 14 minutes
>> >
>> > It's a huge difference. Please let me know of any other tuning
>> > parameters that need to be set.
>> >
>> >
>>
>> I have seen this before and it primarily seems to be related to the
>> readdir calls done by git clone.
>>
>> Turning on these options might help to some extent:
>>
>> gluster volume set <volname> performance.readdir-ahead on
>>
>> gluster volume set <volname> cluster.readdir-optimize on
>>
>> Please do let us know what you observe with these options enabled.
>
>>
>> Regards,
>> Vijay
>>
>>
>>
>
>_______________________________________________
>Gluster-users mailing list
>Gluster-users at gluster.org
>http://www.gluster.org/mailman/listinfo/gluster-users
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.