[Gluster-users] adding a brick causes mountpoints to hang

Andrei Mikhailovsky andrei at arhont.com
Mon Feb 4 14:34:57 UTC 2013

Hello guys, 

I am having a bit of an issue with adding a new server to GlusterFS, and I was wondering if anyone could help me. I am running GlusterFS 3.3.0 over RDMA on Ubuntu 12.04 Server. 

Basically, I had a single GlusterFS server that was used by two host servers for running a bunch of VMs. The host servers used NFS to mount the GlusterFS mount point. I've added a new server with the hope of replicating the data. The server was added fine, and the clients could mount the volume, which now spanned both servers. 
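For context, this is roughly the procedure I followed. A sketch of converting a single-brick volume to a two-way replica, assuming a volume named `vmstore` and hostnames `server1`/`server2` (names are illustrative only, not my actual setup):

```shell
# Add the new server's brick while raising the replica count to 2.
# In 3.3 the CLI accepts a new replica count on add-brick.
gluster volume add-brick vmstore replica 2 server2:/export/brick1

# Confirm the volume now lists both bricks as a 1 x 2 replica set.
gluster volume info vmstore
```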

Following that I ran "ls -laR /mountpoint" on the client side to kick off the self-heal process, so that all the data would be copied across to the second server. After a few seconds' delay I started seeing the file listing from the ls command. This lasted about 10 seconds, at which point the mountpoint froze on both clients. However, I could see data being copied to the new storage server while the mountpoints were hanging. I waited about two hours, but the mountpoints were still frozen. 
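As an aside, 3.3 ships a server-side self-heal daemon, so the crawl can also be driven from the CLI rather than with ls -laR on a client. A sketch, again assuming a volume called `vmstore`:

```shell
# Ask the self-heal daemon to do a full crawl of the volume,
# instead of forcing lookups from a client with ls -laR.
gluster volume heal vmstore full

# List files/directories still queued for healing.
gluster volume heal vmstore info
```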

Does anyone know what the problem could be? 

After about three hours I had to stop the GlusterFS service on the newly introduced server, as I needed the infrastructure back up. After the service stopped, the mountpoints came back and started working again. 

I was wondering if it's possible to add new bricks without the client mountpoints hanging? How can I find out what the problem is? 
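In case it helps anyone answering: these are the places I'd know to look first. A sketch of the 3.3 diagnostics, with the volume name `vmstore` assumed and log paths that may vary by distribution:

```shell
# Show brick processes, the built-in NFS server and the
# self-heal daemon, and whether each is online.
gluster volume status vmstore

# On the servers, the NFS and self-heal daemon logs normally
# live under /var/log/glusterfs/.
tail -f /var/log/glusterfs/nfs.log
tail -f /var/log/glusterfs/glustershd.log
```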

Many thanks 
