[Gluster-users] New cluster - first experience
Pranith Kumar Karampuri
pkarampu at redhat.com
Tue Jul 12 15:17:37 UTC 2016
You got this for a single dd workload? Ideally a single-file dd workload should
be dominated by the 'WRITE' operation, but this one seems to be dominated by
too many FINODELKs, and I see quite a few MKNODs too. What is puzzling is the
number of ENTRYLKs, which is on the order of 10k. I saw some discussion about
enabling sharding; did you enable sharding on this volume by any chance?
Sharding is not yet ready for general-purpose workloads. As long as you
have a single-writer workload it is fine; it is very well tested for the VM
workload.
A dd workload generally looks like this (dd if=/dev/zero of=a.txt bs=1M
count=1000):
Brick: localhost.localdomain:/home/gfs/r2_0
-------------------------------------------
Cumulative Stats:
   Block Size:         131072b+             262144b+
 No. of Reads:                0                    0
No. of Writes:             7996                    2
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls          Fop
 ---------   -----------   -----------   -----------   ------------         ----
      0.00       0.00 us       0.00 us       0.00 us              1      RELEASE
      0.00       0.00 us       0.00 us       0.00 us              3   RELEASEDIR
      0.00      24.00 us      24.00 us      24.00 us              1       STATFS
      0.00      22.50 us      22.00 us      23.00 us              2      ENTRYLK
      0.00      28.00 us      27.00 us      29.00 us              2     FINODELK
      0.00      67.00 us      67.00 us      67.00 us              1     GETXATTR
      0.00      35.50 us      28.00 us      43.00 us              2        FLUSH
      0.01     342.00 us     342.00 us     342.00 us              1       CREATE
      0.10     134.61 us      54.00 us     379.00 us             18     FXATTROP
      0.23      67.71 us      41.00 us     156.00 us             83       LOOKUP
     99.65     307.53 us      61.00 us   50633.00 us           7998        WRITE
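
For reference, the whole capture can be scripted end to end. This is only a sketch: the volume name `r2` and the mount path `/mnt/glusterfs` are placeholders for your own setup, and the final `profile ... stop` (a standard gluster command, though not shown in this thread) turns accounting back off when you are done:

```shell
#!/bin/sh
# Sketch: capture a per-fop profile for a single dd run.
# VOLNAME and /mnt/glusterfs are placeholders for your own volume and mount.
VOLNAME=r2

# Enable io-stats accounting on the volume.
gluster volume profile "$VOLNAME" start

# Run the workload whose fop mix we want to inspect.
dd if=/dev/zero of=/mnt/glusterfs/a.txt bs=1M count=1000

# Dump the collected stats, then disable accounting again.
gluster volume profile "$VOLNAME" info > /tmp/profile.txt
gluster volume profile "$VOLNAME" stop
```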
On Tue, Jul 12, 2016 at 8:16 PM, Gandalf Corvotempesta <
gandalf.corvotempesta at gmail.com> wrote:
> 2016-07-12 15:55 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
> > Could you do the following?
> >
> > # gluster volume profile <volname> start
> > # run dd command
> > # gluster volume profile <volname> info >
> > /path/to/file/that/you/need/to/send/us.txt
>
> http://pastebin.com/raw/wcA0i335
>
--
Pranith