[Gluster-users] 100% cpu on brick replication

Pranith Kumar Karampuri pkarampu at redhat.com
Fri May 29 05:44:29 UTC 2015



On 05/27/2015 08:48 PM, Pedro Oriani wrote:
> Hi All,
> I'm writing because I'm experiencing an issue with gluster's
> replication feature.
> I have a brick on srv1 with about 2TB of mixed-size files, ranging
> from 10k to 300k.
> When I add a new replica brick on srv2, the glusterfs process takes
> all the CPU.
> This is unworkable because the volume stops responding to normal r/w
> queries.
>
> Glusterfs version is 3.7.0
Is it because of self-heals? Was the brick offline until then?
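If you want to confirm, something along these lines (from memory, so
double-check the exact syntax against the 3.7 CLI) should show whether
heals are pending and roughly how many there are, using your vol1:

    # entries still needing heal, per brick
    gluster volume heal vol1 info
    gluster volume heal vol1 statistics heal-count

    # if client-side background heals turn out to be the culprit,
    # lowering how many each client runs in parallel may ease the CPU
    # spike (cluster.background-self-heal-count is the option I mean)
    gluster volume set vol1 cluster.background-self-heal-count 1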

Pranith
>
> The underlying filesystem is XFS.
>
>
> Volume Name: vol1
> Type: Replicate
> Volume ID:
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 172.16.0.1:/data/glusterfs/vol1/brick1/brick
> Brick2: 172.16.0.2:/data/glusterfs/vol1/brick1/brick
> Options Reconfigured:
> performance.cache-size: 1gb
> cluster.self-heal-daemon: off
> cluster.data-self-heal-algorithm: full
> cluster.metadata-self-heal: off
> performance.cache-max-file-size: 2MB
> performance.cache-refresh-timeout: 1
> performance.stat-prefetch: off
> performance.read-ahead: on
> performance.quick-read: off
> performance.write-behind-window-size: 4MB
> performance.flush-behind: on
> performance.write-behind: on
> performance.io-thread-count: 32
> performance.io-cache: on
> network.ping-timeout: 2
> nfs.addr-namelookup: off
> performance.strict-write-ordering: on
>
>
> Is there any parameter or hint I can follow to limit CPU usage, so 
> that replication can proceed with little lag on normal operations?
>
> Thanks
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
