[Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)

Raghavendra Gowdappa rgowdapp at redhat.com
Fri Sep 15 06:48:14 UTC 2017



----- Original Message -----
> From: "Sam McLeod" <mailinglists at smcleod.net>
> To: gluster-users at gluster.org
> Sent: Friday, September 15, 2017 6:42:13 AM
> Subject: [Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
> 
> Howdy,
> 
> I'm setting up several gluster 3.12 clusters running on CentOS 7 and am
> having issues with glusterd.log and glustershd.log both being filled with
> errors relating to null clients and client-callback functions.
> 
> They seem to be related to high CPU usage across the nodes, although I don't
> have a way of confirming that (suggestions welcome!).
> 
> 
> in /var/log/glusterfs/glusterd.log:
> 
> csvc_request_init+0x7f) [0x7f382007b93f]
> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] )
> 0-client_t: null client [Invalid argument]
> [2017-09-15 00:54:14.454022] E [client_t.c:324:gf_client_ref]
> (-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7f382007e7e8]
> -->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7f382007b93f]
> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] )
> 0-client_t: null client [Invalid argument]

This spurious-logging issue is fixed in v3.12.1. Thanks to Nithya for bringing it to my notice.
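Since the fix landed in v3.12.1, it's worth checking which release a node is actually running before chasing the log noise. A rough sketch of the comparison (the "installed" value here is a placeholder; on a real node it would come from the gluster CLI):

```shell
# Hedged sketch: compare the installed gluster version against the release
# carrying the logging fix. "installed" is a placeholder value here; on a
# real node it would come from: gluster --version | awk 'NR==1{print $2}'
installed="3.12.0"
fixed="3.12.1"
# sort -V orders version strings numerically; if the installed version sorts
# first and differs from the fixed one, it predates the fix.
if [ "$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$fixed" ]; then
    verdict="needs upgrade"
else
    verdict="has fix"
fi
echo "$verdict"
```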

The high CPU usage seems to be a separate issue, provided the logging itself is not what is driving the CPU.
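One way to confirm where the CPU is actually going is a per-thread breakdown of the gluster daemons. A rough sketch, assuming the stock CentOS 7 daemon names and that the sysstat package (which provides pidstat) is installed:

```shell
# Hedged sketch: attribute CPU usage to the gluster daemons, thread by
# thread. The daemon names below are the standard ones on CentOS 7; the
# pidstat call is skipped for any daemon that isn't running.
for name in glusterd glusterfsd glusterfs; do
    pids=$(pgrep -x "$name" || true)
    if [ -n "$pids" ]; then
        # -u: CPU stats, -t: per-thread, "1 5": five one-second samples
        pidstat -u -t -p "$(printf '%s\n' "$pids" | paste -sd, -)" 1 5
    fi
done
marker="done"
echo "$marker"
```

The per-thread view helps distinguish a thread busy writing log messages from, say, self-heal or I/O threads, which would point at different causes.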

> 
> 
> This is repeated _thousands_ of times and is especially noisy when any node
> is running gluster volume set <volname> <option> <value>.
> 
> and I'm unsure if it's related but in /var/log/glusterfs/glustershd.log:
> 
> [2017-09-15 00:36:21.654242] W [MSGID: 114010]
> [client-callback.c:28:client_cbk_fetchspec] 0-my_volume_name-client-0: this
> function should not be called
> 
> 
> ---
> 
> 
> Cluster configuration:
> 
> Gluster 3.12
> CentOS 7.4
> Replica 3, Arbiter 1
> NFS disabled (using Kubernetes with the FUSE client)
> Each node is 8 Xeon E5-2660 with 16GB RAM virtualised on XenServer 7.2
> 
> 
> root at int-gluster-03:~ # gluster get-state
> glusterd state dumped to /var/run/gluster/glusterd_state_20170915_110532
> 
> [Global]
> MYUUID: 0b42ffb2-217a-4db6-96bf-cf304a0fa1ae
> op-version: 31200
> 
> [Global options]
> cluster.brick-multiplex: enable
> 
> [Peers]
> Peer1.primary_hostname: int-gluster-02.fqdn.here
> Peer1.uuid: e614686d-0654-43c9-90ca-42bcbeda3255
> Peer1.state: Peer in Cluster
> Peer1.connected: Connected
> Peer1.othernames:
> Peer2.primary_hostname: int-gluster-01.fqdn.here
> Peer2.uuid: 9b0c82ef-329d-4bd5-92fc-95e2e90204a6
> Peer2.state: Peer in Cluster
> Peer2.connected: Connected
> Peer2.othernames:
> 
> (Then volume options are listed)
> 
> 
> ---
> 
> 
> Volume configuration:
> 
> root at int-gluster-03:~ # gluster volume info my_volume_name
> 
> Volume Name: my_volume_name
> Type: Replicate
> Volume ID: 6574a963-3210-404b-97e2-bcff0fa9f4c9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: int-gluster-01.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick2: int-gluster-02.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick3: int-gluster-03.fqdn.here:/mnt/gluster-storage/my_volume_name
> Options Reconfigured:
> performance.stat-prefetch: true
> performance.parallel-readdir: true
> performance.client-io-threads: true
> network.ping-timeout: 5
> diagnostics.client-log-level: WARNING
> diagnostics.brick-log-level: WARNING
> cluster.readdir-optimize: true
> cluster.lookup-optimize: true
> transport.address-family: inet
> nfs.disable: on
> cluster.brick-multiplex: enable
> 
> 
> --
> Sam McLeod
> @s_mcleod
> https://smcleod.net
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
> 

