[Gluster-users] hanging of mounted filesystem (3.3.1)

Michael Colonno mcolonno at stanford.edu
Fri Feb 1 00:00:42 UTC 2013

            Do I need to blow up and rebuild the brick to make that happen
or can this be set on the fly? Possibly relevant fact: I do not have my IB
fabric in place yet but I'm happy to use IPoIB for this deployment when I
do. I included it as a placeholder. 
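
For reference, a sketch of how a volume's transport can typically be changed without rebuilding the bricks. This assumes a volume named gv0 (a placeholder); the volume must be stopped first, and the config.transport option name should be verified against your Gluster release:

```shell
# Stop the volume before changing its transport (assumed volume name: gv0)
gluster volume stop gv0

# Switch the transport from tcp,rdma to plain tcp
# (option name may vary by release -- check `gluster volume set help`)
gluster volume set gv0 config.transport tcp

# Restart the volume; clients will need to re-mount
gluster volume start gv0
```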


            Related: any official word on full RDMA support in Gluster 3.x? 



            ~Mike C. 


From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Bryan Whitehead
Sent: Thursday, January 31, 2013 3:52 PM
To: Michael Colonno
Cc: gluster-users
Subject: Re: [Gluster-users] hanging of mounted filesystem (3.3.1)


Remove the rdma transport and try again. When using RDMA I've also had
extremely bad CPU-consumption issues.


I currently run gluster over IPoIB to get the speed of InfiniBand without the
heavy CPU usage of gluster's rdma transport.
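
As a sketch of what that looks like on the client side, assuming a placeholder server node1 and volume gv0, the FUSE mount can be pinned to the tcp transport (which rides over IPoIB just as it does over Ethernet):

```shell
# Force the FUSE client to use the tcp transport
# (hostname "node1" and volume "gv0" are placeholders)
mount -t glusterfs -o transport=tcp node1:/gv0 /mnt/gluster

# Equivalent /etc/fstab entry:
# node1:/gv0  /mnt/gluster  glusterfs  defaults,transport=tcp,_netdev  0 0
```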


On Thu, Jan 31, 2013 at 9:20 AM, Michael Colonno
<mike at hpccloudsolutions.com> wrote:

        Hi All ~

        I created an eight-brick gluster 3.3.1 volume (2x replication) on
eight CentOS 6.3 x64 systems. I was able to form and start the volume
without issue. I was also able to mount it through /etc/fstab as a native
glusterfs mount. I have a couple of questions and issues at this point:

        - On a client machine, "glusterfs" is not recognized as a valid type
unless gluster-server is installed. This seems to contradict the
documentation, so I wanted to make sure I'm not doing something wrong. This is
more a clarification than an issue.

        - The glusterfs process is taking between 50% and 80% CPU on both
the brick and client systems (these are fairly powerful, brand new servers).

        - No doubt linked to the above, the mounted filesystem hangs
indefinitely when accessed. For example, an "ls -a" on the mounted filesystem
hangs forever. I tested this by mounting on a brick system itself and on a
client that is not a brick; the same behavior was observed in both cases. Both
were glusterfs mounts.
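
When a mount hangs like this, the FUSE client log is usually the first place to look. A sketch, assuming the common default log location (the file name is derived from the mount point and may differ on your systems):

```shell
# Watch the FUSE client log for the /mnt/gluster mount point
# (path and naming convention may vary by distribution/version)
tail -f /var/log/glusterfs/mnt-gluster.log
```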

        There is nothing special about my deployment except for the use of
transport = tcp,rdma. I am running on Ethernet now but will be migrating to
Infiniband after this is debugged.
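
For context, a hypothetical reconstruction of the volume-create command described above (eight bricks, 2x replication, dual transport; all hostnames and brick paths are placeholders):

```shell
gluster volume create gv0 replica 2 transport tcp,rdma \
    node1:/export/brick node2:/export/brick \
    node3:/export/brick node4:/export/brick \
    node5:/export/brick node6:/export/brick \
    node7:/export/brick node8:/export/brick
gluster volume start gv0
```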

        Thanks for any advice,
        ~Mike C.

Gluster-users mailing list
Gluster-users at gluster.org

