[Gluster-users] GlusterD errors
RASTELLI Alessandro
alessandro.rastelli at skytv.it
Tue May 12 06:59:08 UTC 2015
Hi,
my bricks are under the filesystem /storage, which is properly mounted.
Here's the output of xfs_info:
[root@gluster01-mi ~]# xfs_info /storage/
meta-data=/dev/mapper/3600508b1001cca34012aecdd267f8aaep1 isize=256    agcount=32, agsize=183139904 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=5860476928, imaxpct=5
         =                       sunit=64     swidth=192 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
I can't see any signs of corruption or other warnings in /var/log/messages.
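For reference, the checks were nothing more complicated than the following (a rough sketch of what I ran against the /storage brick):

# confirm the brick filesystem is mounted (should show xfs on /storage)
mount | grep /storage
# scan the system log for recent XFS-related messages
grep -i xfs /var/log/messages | tail -n 20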
thank you
A.
-----Original Message-----
From: Ben Turner [mailto:bturner at redhat.com]
Sent: Monday, 11 May 2015 17:11
To: RASTELLI Alessandro
Cc: gluster-users at gluster.org; Gaurav Garg
Subject: Re: [Gluster-users] GlusterD errors
----- Original Message -----
> From: "Gaurav Garg" <ggarg at redhat.com>
> To: "RASTELLI Alessandro" <alessandro.rastelli at skytv.it>
> Cc: gluster-users at gluster.org
> Sent: Monday, May 11, 2015 4:42:59 AM
> Subject: Re: [Gluster-users] GlusterD errors
>
> Hi Rastelli,
>
> Could you tell us what steps you followed or what commands you executed
> to get these logs?
Also, could you manually run xfs_info on your brick filesystems and look for errors? From the logs it looks like gluster can't get the inode size because xfs_info is returning a non-zero exit code. Is the brick filesystem mounted? Do you see signs of FS corruption? Look in /var/log/messages for XFS-related errors.
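Roughly speaking, glusterd just shells out to xfs_info and reads the isize field, so you can reproduce what it is doing by hand, along these lines (using your brick mount point as an example):

# run xfs_info against the brick mount point and check the exit status glusterd complains about
xfs_info /storage; echo "exit status: $?"
# the inode size glusterd is after is the isize value on the meta-data line
xfs_info /storage | grep -o 'isize=[0-9]*'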
-b
>
> ~ Gaurav
>
> ----- Original Message -----
> From: "RASTELLI Alessandro" <alessandro.rastelli at skytv.it>
> To: gluster-users at gluster.org
> Sent: Monday, May 11, 2015 2:05:16 PM
> Subject: [Gluster-users] GlusterD errors
>
> Hi,
>
> we’ve got a lot of these errors in /etc-glusterfs-glusterd.vol.log in
> our Glusterfs environment.
>
> Just wanted to know if I can do anything about that, or if I can ignore them.
>
> Thank you
>
> [2015-05-11 08:22:43.848305] E
> [glusterd-utils.c:7364:glusterd_add_inode_size_to_dict] 0-management:
> xfs_info exited with non-zero exit status
>
> [2015-05-11 08:22:43.848347] E
> [glusterd-utils.c:7390:glusterd_add_inode_size_to_dict] 0-management:
> failed to get inode size
>
> [2015-05-11 08:22:52.911718] E
> [glusterd-op-sm.c:207:glusterd_get_txn_opinfo]
> 0-: Unable to get transaction opinfo for transaction ID :
> ace2f066-1acb-4e00-9cca-721f88691dce
>
> [2015-05-11 08:23:53.266666] E
> [glusterd-syncop.c:961:_gd_syncop_commit_op_cbk] 0-management: Failed
> to aggregate response from node/brick
>
> Alessandro
>
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Pierre Léonard
> Sent: Friday, 10 April 2015 16:18
> To: gluster-users at gluster.org
> Subject: Re: [Gluster-users] one node change uuid in the night
>
> Hi Atin and all,
>
> I have corrected it with the data in glusterd.info and removed the bad
> peers file.
> Could you clarify what steps you performed here? Also, could you try
> to start glusterd with -LDEBUG and share the glusterd log file with us.
> Do you also see any delta in the glusterd.info file between node 10 and
> the other nodes?
> ~Atin
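>
> Something along these lines should do it (a sketch; the paths are the usual defaults and the second hostname is just an example):
>
> # restart glusterd with debug logging, then share /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> service glusterd stop
> glusterd -LDEBUG
> # check for a delta in glusterd.info between node 10 and another node (example hostname)
> diff /var/lib/glusterd/glusterd.info <(ssh node09 cat /var/lib/glusterd/glusterd.info)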
>
>
> The problem is solved. It came from a mix-up between the uuid files and
> their contents on node 10.
> As we said here, "Ouf!" (phew), because I am on vacation next week.
>
> Maybe it would be worth backing up the peers directory, as many of the
> problems came from its contents.
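>
> For example, something as simple as this would do (a sketch; /var/lib/glusterd is the default location):
>
> # keep a dated copy of the peer definitions and the local UUID before touching anything
> tar czf /root/glusterd-backup-$(date +%F).tar.gz /var/lib/glusterd/peers /var/lib/glusterd/glusterd.info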
>
> As the log named the volfile in an error line, I searched the web and
> found this page:
> http://www.gluster.org/community/documentation/index.php/Understanding_vol-file
>
> I have added some sections of the example file. Is that pertinent for
> our 14-node cluster, or should I drop or change them, notably the
> number of threads?
>
> Many thanks to all,
>
>
> --
> Pierre Léonard
> Senior IT Manager
> MetaGenoPolis
> Pierre.Leonard at jouy.inra.fr
> Tél. : +33 (0)1 34 65 29 78
> Centre de recherche INRA
> Domaine de Vilvert – Bât. 325 R+1
> 78 352 Jouy-en-Josas CEDEX
> France
> www.mgps.eu
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users