[Gluster-users] libgfapi access
Ankireddypalle Reddy
areddy at commvault.com
Tue Dec 8 14:58:04 UTC 2015
Vijay,
We are trying to write data backed up by Commvault Simpana to a glusterfs volume. The data being written is around 30 GB. Two kinds of write requests are issued:
1) 1 MB requests
2) Small write requests of 128 bytes each. With libgfapi access these are cached and flushed as a single 128 KB write request, whereas with FUSE each 128-byte write is handed to the FUSE mount directly. (A minimal sketch of this write pattern follows.)
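For context, the libgfapi writes are issued along these lines. This is only a minimal sketch, not our actual backup code: the volume and host names are taken from the volume info below, while the file name and request counts are made up for illustration. Built with -lgfapi:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
        /* Connect to the dispersevol volume; "ssdtest" is one of the
           brick hosts listed below. */
        glfs_t *fs = glfs_new("dispersevol");
        if (!fs)
            return 1;
        glfs_set_volfile_server(fs, "tcp", "ssdtest", 24007);
        if (glfs_init(fs) != 0) {
            fprintf(stderr, "glfs_init failed\n");
            return 1;
        }

        /* "/chunk" is a hypothetical file name. */
        glfs_fd_t *fd = glfs_creat(fs, "/chunk", O_WRONLY, 0644);
        if (!fd) {
            glfs_fini(fs);
            return 1;
        }

        char buf[128];
        memset(buf, 'a', sizeof(buf));

        /* 128-byte writes: with write-behind enabled these appear to be
           coalesced into larger (e.g. 128 KB) requests on the wire. */
        for (int i = 0; i < 1024; i++)
            glfs_write(fd, buf, sizeof(buf), 0);

        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }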
glusterfs 3.6.5 built on Aug 24 2015 10:02:43
Volume Name: dispersevol
Type: Disperse
Volume ID: c5d6ccf8-6fec-4912-ab2e-6a7701e4c4c0
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: ssdtest:/mnt/ssdfs1/brick3
Brick2: sanserver2:/data/brick3
Brick3: santest2:/home/brick3
Options Reconfigured:
performance.cache-size: 512MB
performance.write-behind-window-size: 8MB
performance.io-thread-count: 32
performance.flush-behind: on
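(For anyone reproducing this: the options above were applied with the standard gluster CLI, e.g.

    gluster volume set dispersevol performance.write-behind-window-size 8MB
    gluster volume set dispersevol performance.flush-behind on

and show up under "Options Reconfigured" as listed.)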
Thanks and Regards,
Ram
-----Original Message-----
From: Vijay Bellur [mailto:vbellur at redhat.com]
Sent: Monday, December 07, 2015 6:13 PM
To: Ankireddypalle Reddy; gluster-users at gluster.org
Subject: Re: [Gluster-users] libgfapi access
On 12/07/2015 10:29 AM, Ankireddypalle Reddy wrote:
> Hi,
>
> I am trying to use the libgfapi interface to access a gluster
> volume. What I noticed is that reads/writes to the gluster volume
> through the libgfapi interface are slower than FUSE. I was expecting
> the contrary. Are there any recommended settings for using the
> libgfapi interface?
>
Can you please provide more details about your tests? Information such as I/O block size, file size, and throughput would be helpful.
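For example, something along these lines on the FUSE mount would give a comparable throughput number for the 1 MB / 30 GB case (the mount point here is hypothetical):

    dd if=/dev/zero of=/mnt/dispersevol/testfile bs=1M count=30720 oflag=direct conv=fsync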
Thanks,
Vijay