[Gluster-users] gluster NFS proces takes 100% cpu

Gerald Brandt gbr at majentis.com
Mon Feb 24 16:00:50 UTC 2014


Hi,

I've set up a two-brick replicated volume, using bonded GigE.

eth0 - management
eth1 & eth2 - bonded 192.168.20.x
eth3 & eth4 - bonded 192.168.10.x

I created the replicated volume over the 192.168.10 interfaces.
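
For reference, each bond is configured along these lines (Debian-style ifupdown shown purely as an illustration; the bonding mode and addresses here are examples, not my exact config):

# /etc/network/interfaces (excerpt) -- storage bond over eth3/eth4
auto bond1
iface bond1 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    bond-slaves eth3 eth4
    bond-mode balance-alb
    bond-miimon 100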

# gluster volume info

Volume Name: raid5
Type: Replicate
Volume ID: 02b24ff0-e55c-4f92-afa5-731fd52d0e1a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: filer-1:/gluster-exported/raid5/data
Brick2: filer-2:/gluster-exported/raid5/data
Options Reconfigured:
performance.nfs.stat-prefetch: on
performance.nfs.io-cache: on
performance.nfs.read-ahead: on
performance.nfs.io-threads: on
nfs.trusted-sync: on
performance.cache-size: 13417728
performance.io-thread-count: 64
performance.write-behind-window-size: 4MB
performance.io-cache: on
performance.read-ahead: on
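
The reconfigured options above were applied with "gluster volume set", e.g.:

# gluster volume set raid5 performance.io-thread-count 64
# gluster volume set raid5 performance.write-behind-window-size 4MB
# gluster volume set raid5 nfs.trusted-sync on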

I attached an NFS client across the 192.168.20 interface, and NFS works fine. Under load, though, the Gluster NFS server process climbs to 100% CPU and the client loses connectivity.
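
The client mount is a plain NFSv3 mount over the 192.168.20 link (the address and paths here are examples):

# mount -t nfs -o vers=3,tcp 192.168.20.1:/raid5 /mnt/raid5

On the server, the NFS translator runs inside its own glusterfs process, and that is the one pegged at 100%:

# gluster volume status raid5 nfs
# top -p <pid reported above>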

My plan was to carry replication traffic, as well as native gluster mounts, over the 192.168.10 bond; the NFS mount on 192.168.20 was meant to keep NFS traffic off the gluster link.
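
Native mounts over the storage bond would look along these lines (assuming filer-1 resolves to its 192.168.10 address on the clients):

# mount -t glusterfs filer-1:/raid5 /mnt/raid5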

Is this a supported configuration?  Does anyone else do this?

Gerald

-- 
Gerald Brandt
Majentis Technologies
gbr at majentis.com
204-229-6595
www.majentis.com
