[Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)

Milind Changire mchangir at redhat.com
Mon Sep 18 11:57:05 UTC 2017


Sam,
You might want to give glusterfs-3.12.1 a try instead.



On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod <mailinglists at smcleod.net>
wrote:

> Howdy,
>
> I'm setting up several Gluster 3.12 clusters running on CentOS 7 and am
> having issues with glusterd.log and glustershd.log both being filled with
> errors about null clients and client-callback functions.
>
> They seem to be related to high CPU usage across the nodes although I
> don't have a way of confirming that (suggestions welcomed!).
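> One generic way to check that correlation (nothing Gluster-specific here,
> just a procps snapshot; repeat it while the load is high) is:

```shell
# Rank all processes by CPU usage and keep the header line plus
# anything whose command name mentions gluster (glusterd, glusterfsd,
# glusterfs/shd). Run repeatedly, or under `watch`, during the spikes.
ps -eo pcpu,pid,comm --sort=-pcpu | awk 'NR==1 || $3 ~ /gluster/'
```

> If the self-heal daemon or a brick process tops that list while the log
> noise is happening, that would support the suspicion above.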
>
>
> in /var/log/glusterfs/glusterd.log:
>
> csvc_request_init+0x7f) [0x7f382007b93f] -->/lib64/libglusterfs.so.0(gf_client_ref+0x179)
> [0x7f3820315e59] ) 0-client_t: null client [Invalid argument]
> [2017-09-15 00:54:14.454022] E [client_t.c:324:gf_client_ref]
> (-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7f382007e7e8]
> -->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7f382007b93f]
> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] )
> 0-client_t: null client [Invalid argument]
>
>
> This is repeated _thousands_ of times and is especially noisy when any
> node runs gluster volume set <volname> <option> <value>.
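> To put a number on "thousands", a small pipeline can bucket the
> null-client errors by minute (this assumes the timestamp format shown in
> the excerpt above; it reads the log on stdin):

```shell
# Count null-client errors per minute from a glusterd.log-format stream.
# $1 is "[YYYY-MM-DD", $2 is "HH:MM:SS.uuuuuu]"; strip the bracket and
# keep only HH:MM so identical minutes collapse into one bucket.
grep 'null client' \
  | awk '{ gsub(/\[/, "", $1); print $1, substr($2, 1, 5) }' \
  | sort | uniq -c | sort -rn | head
```

> Usage: run it as `sh count-errors.sh < /var/log/glusterfs/glusterd.log`
> (script name is illustrative) and compare the busiest minutes against
> when volume set commands were issued.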
>
> I'm unsure whether it's related, but in /var/log/glusterfs/glustershd.log:
>
> [2017-09-15 00:36:21.654242] W [MSGID: 114010]
> [client-callback.c:28:client_cbk_fetchspec] 0-my_volume_name-client-0:
> this function should not be called
>
>
> ---
>
>
> Cluster configuration:
>
> Gluster 3.12
> CentOS 7.4
> Replica 3, Arbiter 1
> NFS disabled (using Kubernetes with the FUSE client)
> Each node has 8 Xeon E5-2660 vCPUs and 16GB RAM, virtualised on XenServer 7.2
>
>
> root at int-gluster-03:~  # gluster get-state
> glusterd state dumped to /var/run/gluster/glusterd_state_20170915_110532
>
> [Global]
> MYUUID: 0b42ffb2-217a-4db6-96bf-cf304a0fa1ae
> op-version: 31200
>
> [Global options]
> cluster.brick-multiplex: enable
>
> [Peers]
> Peer1.primary_hostname: int-gluster-02.fqdn.here
> Peer1.uuid: e614686d-0654-43c9-90ca-42bcbeda3255
> Peer1.state: Peer in Cluster
> Peer1.connected: Connected
> Peer1.othernames:
> Peer2.primary_hostname: int-gluster-01.fqdn.here
> Peer2.uuid: 9b0c82ef-329d-4bd5-92fc-95e2e90204a6
> Peer2.state: Peer in Cluster
> Peer2.connected: Connected
> Peer2.othernames:
>
> (Then volume options are listed)
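> As an aside, for pulling just one section (e.g. [Peers]) out of a state
> dump like the one above, a small awk sketch works on the bracketed
> section headers (assumes the dump layout shown above; reads stdin):

```shell
# Print only the [Peers] section of a "gluster get-state" dump.
# Turn printing on at the [Peers] header, off at the next bracketed
# header, and skip blank lines in between.
awk '/^\[Peers\]/ { on = 1; next } /^\[/ { on = 0 } on && NF { print }'
```

> Usage: `awk -f peers.awk < /var/run/gluster/glusterd_state_20170915_110532`
> (the awk-file name is illustrative).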
>
>
> ---
>
>
> Volume configuration:
>
> root at int-gluster-03:~ # gluster volume info my_volume_name
>
> Volume Name: my_volume_name
> Type: Replicate
> Volume ID: 6574a963-3210-404b-97e2-bcff0fa9f4c9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: int-gluster-01.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick2: int-gluster-02.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick3: int-gluster-03.fqdn.here:/mnt/gluster-storage/my_volume_name
> Options Reconfigured:
> performance.stat-prefetch: true
> performance.parallel-readdir: true
> performance.client-io-threads: true
> network.ping-timeout: 5
> diagnostics.client-log-level: WARNING
> diagnostics.brick-log-level: WARNING
> cluster.readdir-optimize: true
> cluster.lookup-optimize: true
> transport.address-family: inet
> nfs.disable: on
> cluster.brick-multiplex: enable
>
>
> --
> Sam McLeod
> @s_mcleod
> https://smcleod.net
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Milind