[Gluster-users] libgfapi using unix domain sockets
Ankireddypalle Reddy
areddy at commvault.com
Wed May 25 10:03:41 UTC 2016
Poornima,
Thanks for checking this. We are using disperse volumes. My understanding is that unix domain sockets will be used for communication between libgfapi and the brick daemons on the local server, while communication with brick daemons on the other nodes of the volume would go over tcp/rdma. Is my assumption correct?
Thanks and Regards,
Ram
From: Poornima Gurusiddaiah [mailto:pgurusid at redhat.com]
Sent: Wednesday, May 25, 2016 2:09 AM
To: Ankireddypalle Reddy
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] libgfapi using unix domain sockets
Hi,
Whenever a new fd is created it is allocated from the mem-pool; if the mem-pool is exhausted, it will be calloc'd. The current limit for the fd mem-pool is 1024, so if more than 1024 fds are open, performance may be affected.
Also, the unix socket passed to glfs_set_volfile_server() is used only for fetching the volfile, i.e. only for the connection between the client and glusterd (the management daemon). Hence you may not see an I/O performance increase; the patch http://review.gluster.org/#/c/12709/ introduces unix domain sockets for the I/O path, which may be what you are interested in.
Regards,
Poornima
________________________________
From: "Ankireddypalle Reddy" <areddy at commvault.com>
To: gluster-users at gluster.org
Sent: Tuesday, May 24, 2016 9:16:31 PM
Subject: Re: [Gluster-users] libgfapi using unix domain sockets
Is there a suggested best practice for the number of glfs_fd_t handles that can be associated with a glfs_t? Does having a single glfs_t in an application with a large number of glfs_fd_t handles cause any resource contention issues?
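For context, the usage pattern in question, a single glfs_t shared by many glfs_fd_t handles, looks roughly like the sketch below. The volume name, server hostname, and file paths are illustrative only, and error handling is abbreviated; running it requires an actual Gluster volume.

```c
/* Sketch: one glfs_t instance shared by many glfs_fd_t handles.
 * Volume name "StoragePool", server "glustervm6sds", and the file
 * paths are illustrative.  Build with: gcc many_fds.c -lgfapi */
#include <stdio.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

#define NFILES 2048 /* deliberately above the 1024 fd mem-pool limit */

int main(void)
{
    glfs_t *fs = glfs_new("StoragePool");
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "glustervm6sds", 24007);
    if (glfs_init(fs) != 0)
        return 1;

    static glfs_fd_t *fds[NFILES];
    char path[64];

    /* Beyond 1024 open fds the per-instance fd mem-pool is exhausted
     * and further fd allocations fall back to calloc. */
    for (int i = 0; i < NFILES; i++) {
        snprintf(path, sizeof(path), "/file-%d", i);
        fds[i] = glfs_creat(fs, path, O_RDWR, 0644);
    }

    for (int i = 0; i < NFILES; i++)
        if (fds[i])
            glfs_close(fds[i]);

    glfs_fini(fs);
    return 0;
}
```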
Thanks and Regards,
Ram
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Ankireddypalle Reddy
Sent: Tuesday, May 24, 2016 11:34 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] libgfapi using unix domain sockets
I figured it out. The parameters to pass are:
Protocol: unix
Hostname: /var/run/glusterd.socket
Port: 0
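Put as code, a minimal sketch of the call with those parameters might look like the following. The volume name "StoragePool" is taken from the ps output below; error handling is abbreviated, and running this requires a live glusterd on the same host.

```c
/* Sketch: connecting to the local glusterd management daemon over a
 * unix domain socket with libgfapi.  With the "unix" transport the
 * host argument is the socket path and the port is 0.
 * Build with: gcc connect.c -lgfapi */
#include <stdio.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("StoragePool");
    if (!fs)
        return 1;

    if (glfs_set_volfile_server(fs, "unix",
                                "/var/run/glusterd.socket", 0) != 0) {
        glfs_fini(fs);
        return 1;
    }

    /* glfs_init() fetches the volfile over the unix socket above;
     * subsequent I/O to the bricks still uses the volume's own
     * transport (tcp/rdma). */
    if (glfs_init(fs) != 0) {
        glfs_fini(fs);
        return 1;
    }

    printf("connected\n");
    glfs_fini(fs);
    return 0;
}
```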
Thanks and Regards,
Ram
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Ankireddypalle Reddy
Sent: Tuesday, May 24, 2016 10:20 AM
To: gluster-users at gluster.org
Subject: [Gluster-users] libgfapi using unix domain sockets
Hi,
I am trying to use libgfapi to connect to a gluster volume over unix domain sockets. I am not able to find the socket path that should be passed in the "glfs_set_volfile_server" function call.
ps -eaf | grep gluster
root 15178 31450 0 09:52 pts/1 00:00:00 grep --color=auto gluster
root 26739 26291 0 May16 ? 00:01:52 /opt/commvault/Base/IndexingService -serviceName IndexingService_cleanup -cn glustervm6 cvshost:glustervm6*glustervm6 cvsport:58600:0 cvsmyplatform:2 cvsremoteplatform:4 -vm Instance001
root 28335 1 0 May12 ? 00:02:15 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root 30047 1 0 May12 ? 00:06:38 /usr/sbin/glusterfsd -s glustervm6sds --volfile-id StoragePool.glustervm6sds.ws-disk1-ws_brick -p /var/lib/glusterd/vols/StoragePool/run/glustervm6sds-ws-disk1-ws_brick.pid -S /var/run/gluster/9ed1d13b4265b95be4ed642578e7f28b.socket --brick-name /ws/disk1/ws_brick -l /var/log/glusterfs/bricks/ws-disk1-ws_brick.log --xlator-option *-posix.glusterd-uuid=3ab81d79-9a99-4822-abb2-62e76a029240 --brick-port 49152 --xlator-option StoragePool-server.listen-port=49152
root 30066 1 0 May12 ? 00:13:58 /usr/sbin/glusterfsd -s glustervm6sds --volfile-id StoragePool.glustervm6sds.ws-disk2-ws_brick -p /var/lib/glusterd/vols/StoragePool/run/glustervm6sds-ws-disk2-ws_brick.pid -S /var/run/gluster/be6fc96032a95d6bf00d41049ca0356a.socket --brick-name /ws/disk2/ws_brick -l /var/log/glusterfs/bricks/ws-disk2-ws_brick.log --xlator-option *-posix.glusterd-uuid=3ab81d79-9a99-4822-abb2-62e76a029240 --brick-port 49153 --xlator-option StoragePool-server.listen-port=49153
root 30088 1 0 May12 ? 00:00:21 /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S /var/run/gluster/93db4047a97542a6457b2178ce6512d7.socket
root 30093 1 0 May12 ? 00:10:24 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/3d435606821403370720761863000928.socket --xlator-option *replicate*.node-uuid=3ab81d79-9a99-4822-abb2-62e76a029240
root 30186 1 0 May12 ? 00:00:31 /usr/sbin/glusterfs --volfile-server=glustervm6sds --volfile-id=/StoragePool /ws/glus
Thanks and Regards,
Ram
***************************Legal Disclaimer***************************
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**********************************************************************
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users