[Gluster-users] Giving up [ was: Re: read-subvolume]
landman at scalableinformatics.com
Wed Jul 10 19:24:17 UTC 2013
On 07/10/2013 03:18 PM, Joe Julian wrote:
> The "small file" complaint is all about latency though. There's very
> little disk overhead (all inode lookups) to doing a self-heal check. "ls
> -l" on a 50k file directory and nearly all the delay is from network RTT
> for self-heal checks (check that with wireshark).
Try it with localhost. Build a small test gluster brick, take
networking out of the loop, create 50k files, and launch the self-heal.
RTT is part of it, but not the majority; last I checked it wasn't a
significant fraction relative to the other metadata costs.
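The localhost experiment can be sketched roughly as follows (a hedged
sketch, not a tuned recipe: the brick paths, mount point, and volume
name "ltest" are placeholders, and it assumes glusterd is already
running on the host):

```shell
# Replica-2 volume with both bricks on the same host, so there is no
# real network hop ("force" is needed for bricks on the root fs).
mkdir -p /srv/ltest/brick1 /srv/ltest/brick2 /mnt/ltest
gluster volume create ltest replica 2 \
    localhost:/srv/ltest/brick1 localhost:/srv/ltest/brick2 force
gluster volume start ltest
mount -t glusterfs localhost:/ltest /mnt/ltest

# Create 50k small files, then trigger and time a full self-heal pass.
# With networking out of the loop, whatever latency remains is stack
# overhead rather than RTT.
for i in $(seq 1 50000); do echo x > /mnt/ltest/f$i; done
time gluster volume heal ltest full
```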
I did an experiment with 3.3.x a while ago using two ramdisks. I
created a set of files, looped them back with losetup, built xfs
filesystems atop them, mirrored them with glusterfs, and then set
about doing metadata-/small-file-heavy workloads. Performance was
still abysmal, and I'm pretty sure none of that was RTT. It's
definitely a stack traversal problem, but I didn't trace it far enough
back to be sure exactly where it was.
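A metadata-heavy small-file workload of the kind described above can be
approximated with a simple script (a sketch of my own; the function
name, file count, and sizes are arbitrary). Point the target at a
gluster mount and at a plain local filesystem and compare: the data
volume is trivial, so the difference is per-file lookup/stack overhead.

```python
import os
import tempfile
import time

def small_file_workload(target, n_files=5000, size=64):
    """Create n_files tiny files under target, then stat them all.

    Returns (create_seconds, stat_seconds). Both phases are dominated
    by per-file metadata operations, not data transfer.
    """
    d = os.path.join(target, "smallfiles")
    os.makedirs(d, exist_ok=True)
    payload = b"x" * size
    t0 = time.monotonic()
    for i in range(n_files):
        with open(os.path.join(d, "f%06d" % i), "wb") as f:
            f.write(payload)
    t_create = time.monotonic() - t0
    t0 = time.monotonic()
    for name in os.listdir(d):
        os.stat(os.path.join(d, name))
    t_stat = time.monotonic() - t0
    return t_create, t_stat

if __name__ == "__main__":
    # By default run against a throwaway local directory as a baseline.
    with tempfile.TemporaryDirectory() as tmp:
        t_create, t_stat = small_file_workload(tmp, n_files=1000)
        print("create: %.3fs  stat: %.3fs" % (t_create, t_stat))
```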
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics, Inc.
email: landman at scalableinformatics.com
web : http://scalableinformatics.com
phone: +1 734 786 8423 x121
fax : +1 866 888 3112
cell : +1 734 612 4615