[Gluster-users] libgfapi access
Ankireddypalle Reddy
areddy at commvault.com
Wed Dec 9 22:01:22 UTC 2015
Hi,
I upgraded my setup to gluster 3.7.3 and tested writes through both FUSE and libgfapi. Attached are the profiles generated from the FUSE and libgfapi runs. The test programs each write 10,000 blocks of 128 KB (a sketch of the libgfapi variant follows the timings below).
[root at santest2 Base]# time ./GlusterFuseTest /ws/glus 131072 10000
Mount path: /ws/glus
Block size: 131072
Num of blocks: 10000
Will perform write test on mount path : /ws/glus
Successfully created file /ws/glus/1449697583.glfs
Successfully filled file /ws/glus/1449697583.glfs
Write test succeeded
Write test succeeded.
real 0m18.722s
user 0m3.913s
sys 0m1.126s
[root at santest2 Base]# time ./GlusterLibGFApiTest dispersevol santest2 24007 131072 10000
Host name: santest2
Volume: dispersevol
Port: 24007
Block size: 131072
Num of blocks: 10000
Will perform write test on volume: dispersevol
Successfully filled file 1449697651.glfs
Write test succeeded
Write test succeeded.
real 0m18.630s
user 0m8.804s
sys 0m1.870s
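For reference, here is a minimal sketch of what a libgfapi write test like the one above can look like. This is not the actual GlusterLibGFApiTest source (which was not posted); error handling is trimmed and the output file name is hypothetical. Build with something like: gcc gfapi_write_test.c -lgfapi

    /* gfapi_write_test.c - sketch of a libgfapi write test:
     * connect to a volume, create a file, and write
     * <num-blocks> blocks of <block-size> bytes each. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <glusterfs/api/glfs.h>

    int main(int argc, char *argv[])
    {
        if (argc != 6) {
            fprintf(stderr, "usage: %s <volume> <host> <port> "
                            "<block-size> <num-blocks>\n", argv[0]);
            return 1;
        }
        size_t bs = (size_t) atol(argv[4]);
        long   nb = atol(argv[5]);

        glfs_t *fs = glfs_new(argv[1]);              /* volume name */
        glfs_set_volfile_server(fs, "tcp", argv[2],  /* management  */
                                atoi(argv[3]));      /* host, port  */
        if (glfs_init(fs) != 0) {
            perror("glfs_init");
            return 1;
        }

        glfs_fd_t *fd  = glfs_creat(fs, "writetest.glfs", O_WRONLY, 0644);
        char      *buf = calloc(1, bs);              /* zeroed block */
        for (long i = 0; i < nb; i++)
            if (glfs_write(fd, buf, bs, 0) != (ssize_t) bs) {
                fprintf(stderr, "short write at block %ld\n", i);
                return 1;
            }

        free(buf);
        glfs_close(fd);
        glfs_fini(fs);
        return 0;
    }

The FUSE variant would do the same loop with plain open()/write() against the mount path instead of the glfs_* calls.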
Thanks and Regards,
Ram
-----Original Message-----
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Wednesday, December 09, 2015 1:39 AM
To: Ankireddypalle Reddy; Vijay Bellur; gluster-users at gluster.org
Subject: Re: [Gluster-users] libgfapi access
On 12/08/2015 08:28 PM, Ankireddypalle Reddy wrote:
> Vijay,
> We are trying to write data backed up by Commvault Simpana to a glusterfs volume. The data being written is around 30 GB. Two kinds of write requests occur:
> 1) 1 MB requests
> 2) Small write requests of 128 bytes each. With libgfapi these are cached and flushed as a single 128 KB write request, whereas with FUSE each 128-byte write is handed to FUSE directly (see the sketch after this list).
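A minimal sketch of the client-side aggregation described in 2), assuming a glfs_fd_t already open for writing; put_record(), chunk[] and the constants are illustrative helpers, not part of libgfapi:

    /* Coalesce 128-byte records into a single 128 KB glfs_write(),
     * as described above. */
    #include <string.h>
    #include <glusterfs/api/glfs.h>

    #define RECORD_SZ 128
    #define CHUNK_SZ  (128 * 1024)

    static char   chunk[CHUNK_SZ];
    static size_t filled;

    void put_record(glfs_fd_t *fd, const char *rec)
    {
        memcpy(chunk + filled, rec, RECORD_SZ);
        filled += RECORD_SZ;
        if (filled == CHUNK_SZ) {                    /* buffer full:       */
            glfs_write(fd, chunk, CHUNK_SZ, 0);      /* one 128 KB request */
            filled = 0;
        }
    }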
>
> glusterfs 3.6.5 built on Aug 24 2015 10:02:43
>
> Volume Name: dispersevol
> Type: Disperse
> Volume ID: c5d6ccf8-6fec-4912-ab2e-6a7701e4c4c0
> Status: Started
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ssdtest:/mnt/ssdfs1/brick3
> Brick2: sanserver2:/data/brick3
> Brick3: santest2:/home/brick3
> Options Reconfigured:
> performance.cache-size: 512MB
> performance.write-behind-window-size: 8MB
> performance.io-thread-count: 32
> performance.flush-behind: on
hi,
Things look okay. Maybe we can find something using the profile info.
Could you post the results of the following operations:
1) gluster volume profile <volname> start
2) Run the fuse workload
3) gluster volume profile <volname> info > /path/to/file-1/to/send/us
4) Run the libgfapi workload
5) gluster volume profile <volname> info > /path/to/file-2/to/send/us
Send us both files so we can check whether any extra fops are being sent over the network that may be causing the delay.
I see that you are using a disperse volume. If you are going to use disperse volumes for production use cases, I suggest you use 3.7.x, preferably 3.7.3: we fixed a bug that is present in releases 3.7.4 through 3.7.6, and the fix will be released in 3.7.7.
Pranith
>
> Thanks and Regards,
> Ram
>
>
> -----Original Message-----
> From: Vijay Bellur [mailto:vbellur at redhat.com]
> Sent: Monday, December 07, 2015 6:13 PM
> To: Ankireddypalle Reddy; gluster-users at gluster.org
> Subject: Re: [Gluster-users] libgfapi access
>
> On 12/07/2015 10:29 AM, Ankireddypalle Reddy wrote:
>> Hi,
>>
>> I am trying to use the libgfapi interface to access a gluster
>> volume. What I noticed is that reads/writes to the gluster volume
>> through the libgfapi interface are slower than through FUSE. I was
>> expecting the contrary. Are there any recommended settings for
>> using the libgfapi interface?
>>
> Can you please provide more details about your tests? Information like I/O block size, file size, and throughput would be helpful.
>
> Thanks,
> Vijay
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
-------------- next part --------------
A non-text attachment was scrubbed...
Name: fuse.profile
Type: application/octet-stream
Size: 8657 bytes
Desc: fuse.profile
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20151209/62bd6fe7/attachment.obj>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: libgfapi.profile
Type: application/octet-stream
Size: 8657 bytes
Desc: libgfapi.profile
URL: <http://www.gluster.org/pipermail/gluster-users/attachments/20151209/62bd6fe7/attachment-0001.obj>