[Gluster-users] [EXT] Re: [Glusterusers] log file spewing on one node but not the

W Kern wkmail at bneit.com
Tue Jul 25 21:06:14 UTC 2023


Well, as I indicated a day or so later in the RESOLVED subject addition:

unmounting the brick filesystem and running xfs_repair seemed to solve
the problem.
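
For the archive, the sequence was roughly this; a minimal sketch with
placeholder device and mount paths (substitute your actual brick
layout), run only on the affected node:

    # stop glusterd, and make sure the brick process (glusterfsd) for
    # this volume has exited before unmounting
    systemctl stop glusterd
    pkill glusterfsd

    # xfs_repair must run on an unmounted filesystem;
    # -n is a dry run that reports problems without changing anything
    umount /path/to/brick
    xfs_repair -n /dev/sdX1
    xfs_repair /dev/sdX1

    # remount and rejoin the cluster; self-heal catches up any writes
    # this node missed while it was out
    mount /path/to/brick
    systemctl start glusterd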

No issues noted in dmesg.   Uptime was almost 300 days.
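
For reference, the hardware checks amounted to roughly this (a sketch,
assuming ipmitool is installed and the local BMC is reachable):

    # kernel side: any ECC/EDAC/machine-check complaints
    dmesg | grep -iE 'ecc|edac|mce|hardware error'

    # BMC side: the hardware event log over IPMI
    ipmitool sel list | tail -20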

We shall see if the problem returns.

-wk

On 7/25/23 1:22 PM, Strahil Nikolov wrote:
> What is the uptime of the affected node ?
> There is a similar error reported in 
> https://access.redhat.com/solutions/5518661 which could indicate a 
> possible problem in a memory area named ‘lru’ .
> Have you noticed any ECC errors in dmesg/IPMI of the system ?
>
> At least I would reboot the node and run hardware diagnostics to check 
> that everything is fine.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, July 25, 2023, 4:31 AM, W Kern <wkmail at bneit.com> wrote:
>
>     we have an older 2+1 (replica 2 plus arbiter) Gluster cluster
>     running 6.10 on Ubuntu 18.04 LTS.
>
>     It has run beautifully for years, only occasionally needing
>     attention as drives have died, etc.
>
>     Each peer has two volumes, G1 and G2, with a shared 'gluster' network.
>
>     Since July 1st, one of the peers has been spewing the errors below
>     into the logfile /var-lib-G1.log for one volume.
>
>     The other volume (G2) is not showing this, nor are there issues
>     with the other peer or with the arbiter for the G1 volume.
>
>     So it's one machine, with one volume, that has the problem.  There
>     have been NO issues with the volumes themselves.
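>
>     For completeness, heal counts and peer/volume status all look
>     clean. Roughly what I checked (a sketch; GLB1image is the volume's
>     internal name as it appears in the logs below):
>
>     # any pending or failed heals would show up here
>     gluster volume heal GLB1image info summary
>     # brick processes, ports, and online status for the volume
>     gluster volume status GLB1image
>     # confirm all peers are connected
>     gluster peer status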
>
>     It's simply a matter of the logfiles generating GBs of entries
>     every hour (which is how we noticed it, when we started running
>     out of log space).
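>
>     As a stopgap until we find the cause, aggressive rotation keeps
>     the log from filling the disk. A sketch of a logrotate drop-in;
>     the log path assumes the stock /var/log/glusterfs naming for our
>     /var/lib/G1 mount, and the size/rotate limits are arbitrary:
>
>     # /etc/logrotate.d/glusterfs-g1-client (hypothetical drop-in)
>     /var/log/glusterfs/var-lib-G1.log {
>         size 500M
>         rotate 3
>         compress
>         missingok
>         copytruncate    # truncate in place; the fuse client keeps its fd
>     }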
>
>     According to Google there are mentions of this error, but it was
>     supposedly fixed in the 6.x series.  I can find no other mentions.
>
>     I have tried restarting glusterd with no change. There don't seem
>     to be any hardware issues.
>
>     I am wondering if perhaps this is an XFS corruption issue, and
>     whether unmounting the Gluster brick, running xfs_repair, and
>     bringing it back would solve the issue.
>
>     Any other suggestions?
>
>     [2023-07-21 18:51:38.260507] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/features/shard.so(+0x21b47) [0x7fb261c13b47]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
>     [2023-07-21 18:51:38.261231] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/mount/fuse.so(+0xba51) [0x7fb266cdca51]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-fuse: Empty inode lru list found but with (-2) lru_size
>     [2023-07-21 18:51:38.261377] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(loc_wipe+0x12) [0x7fb26946bd72]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
>     [2023-07-21 18:51:38.261806] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/cluster/replicate.so(+0x5ca57) [0x7fb26213ba57]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-GLB1image-replicate-0: Empty inode lru list found but with (-2) lru_size
>     [2023-07-21 18:51:38.261933] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0x1ef) [0x7fb269495eaf]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-GLB1image-client-1: Empty inode lru list found but with (-2) lru_size
>     [2023-07-21 18:51:38.262684] W [inode.c:1638:inode_table_prune]
>     (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/cluster/replicate.so(+0x5ca57) [0x7fb26213ba57]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416]
>     -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] )
>     0-GLB1image-replicate-0: Empty inode lru list found but with (-2) lru_size
>
>     -wk