[Gluster-users] after upgrade to 3.6.7 : Internal error xfs_attr3_leaf_write_verify
Brian Foster
bfoster at redhat.com
Thu Dec 3 21:02:31 UTC 2015
On Thu, Dec 03, 2015 at 03:16:54PM -0500, Vijay Bellur wrote:
> Looks like an issue with xfs. Adding Brian to check if it is a familiar problem.
>
> Regards,
> Vijay
>
> ----- Original Message -----
> > From: "Dietmar Putz" <putz at 3qmedien.net>
> > To: gluster-users at gluster.org
> > Sent: Thursday, December 3, 2015 6:06:11 AM
> > Subject: [Gluster-users] after upgrade to 3.6.7 : Internal error xfs_attr3_leaf_write_verify
> >
> > Hello all,
> >
> > On 1st December I upgraded two 6-node clusters from glusterfs 3.5.6 to
> > 3.6.7. All of them are identical in hardware, OS and patch level,
> > currently running Ubuntu 14.04 LTS after a do-release-upgrade from
> > 12.04 LTS (done before the gfs upgrade to 3.5.6, not directly before
> > the upgrade to 3.6.7). Because of a geo-replication issue, all of the
> > nodes have rsync 3.1.1.3 installed instead of the 3.1.0 that comes
> > with the repositories; this is the only deviation from the Ubuntu
> > repositories for 14.04 LTS.
> > Since the upgrade to gfs 3.6.7, glusterd on two nodes of the same
> > cluster goes offline after the underlying bricks hit the
> > xfs_attr3_leaf_write_verify error shown below. This recurs about every
> > 4-5 hours after the problem has been worked around by an umount /
> > remount of the brick; running xfs_check / xfs_repair before the
> > remount makes no difference, and neither tool reports any faults. The
> > underlying hardware is a RAID 5 volume on an LSI 9271-8i, and MegaCLI
> > does not show any errors. The syslog does not show more than the dmesg
> > output below. Every time, the same two nodes of the same cluster are
> > affected.
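[For anyone following along, the remount cycle described above boils down
to roughly the following. A sketch only; device and mount point are taken
from the logs further down, and note that after a forced shutdown the
filesystem has to be mounted once so log recovery can run, otherwise
xfs_repair refuses to touch the dirty log:

  umount /gluster-export             # the brick fs is already shut down at this point
  mount /dev/sdc1 /gluster-export    # replay the dirty log, or xfs_repair will refuse
  umount /gluster-export
  xfs_repair -n /dev/sdc1            # dry run: report problems without modifying anything
  xfs_repair /dev/sdc1               # reported no faults in this case
  mount /dev/sdc1 /gluster-export    # brick is usable again until the verifier trips next
]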
> > As shown in dmesg and syslog, the system logs the
> > xfs_attr3_leaf_write_verify error about 38 minutes before finally
> > giving up, but for both events I cannot find corresponding entries in
> > the gluster logs. This is strange... the gluster setup has grown
> > historically from 3.2.5 and 3.3 to 3.4.6/7, which ran well for months;
> > gfs 3.5.6 ran for about two weeks, and the upgrade to 3.6.7 was done
> > because of a geo-repl log flood. Even though I have no hint/evidence
> > that this is caused by gfs 3.6.7, somehow I believe that it is...
> > Has anybody experienced such an error, or does anyone have hints for
> > getting out of this big problem...?
> > Unfortunately the affected cluster is the master of a geo-replication
> > that has not been running well since the update from gfs 3.4.7;
> > fortunately, the two affected gluster nodes are not part of the same
> > sub-volume.
> >
> > Any help is appreciated...
> >
> > Best regards,
> > Dietmar
> >
...
> > - root at gluster-ger-ber-10 /var/log $dmesg -T
> > ...
> > [Wed Dec 2 12:43:47 2015] XFS (sdc1): xfs_log_force: error 5 returned.
> > [Wed Dec 2 12:43:48 2015] XFS (sdc1): xfs_log_force: error 5 returned.
> > [Wed Dec 2 12:45:58 2015] XFS (sdc1): Mounting Filesystem
> > [Wed Dec 2 12:45:58 2015] XFS (sdc1): Starting recovery (logdev: internal)
> > [Wed Dec 2 12:45:59 2015] XFS (sdc1): Ending recovery (logdev: internal)
> > [Wed Dec 2 13:11:53 2015] XFS (sdc1): Mounting Filesystem
> > [Wed Dec 2 13:11:54 2015] XFS (sdc1): Ending clean mount
> > [Wed Dec 2 13:12:29 2015] init: statd main process (25924) killed by KILL signal
> > [Wed Dec 2 13:12:29 2015] init: statd main process ended, respawning
> > [Wed Dec 2 13:13:24 2015] init: statd main process (13433) killed by KILL signal
> > [Wed Dec 2 13:13:24 2015] init: statd main process ended, respawning
> > [Wed Dec 2 17:22:28 2015] ffff8807076b1000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
> > [Wed Dec 2 17:22:28 2015] ffff8807076b1010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
> > [Wed Dec 2 17:22:28 2015] ffff8807076b1020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > [Wed Dec 2 17:22:28 2015] ffff8807076b1030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
That's a write verifier error on an extended attribute write. The
purpose of the write verifier is to check metadata structure immediately
prior to write submission. Failure means some kind of corruption has
occurred in memory and the filesystem shuts down to prevent any further
damage.
Is this an upstream stable 3.13 kernel or a distro kernel? You could try
something more recent and see if it resolves the problem. Otherwise, I
don't recall any known related issues, but it might be best to collect
the following information and report to the XFS mailing list:
http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
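In practice that boils down to collecting something like the following
from the affected node (a rough sketch of the usual data points, not the
FAQ's exhaustive list; device and mount point assume the brick from your
logs):

  uname -a                            # exact kernel version
  xfs_repair -V                       # xfsprogs version
  cat /proc/partitions                # disk/partition layout
  grep sdc1 /proc/mounts              # mount options in effect
  xfs_info /gluster-export            # filesystem geometry
  dmesg > /tmp/dmesg-$(hostname).txt  # full kernel log, including the verifier splat

Details of the RAID volume layout and write cache settings are worth
including as well. And if you want to test a newer kernel on 14.04, the
HWE stacks are the easy route (e.g. the linux-generic-lts-vivid package,
if I remember the trusty package names right).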
Brian
> > [Wed Dec 2 17:22:28 2015] CPU: 4 PID: 13162 Comm: xfsaild/sdc1 Not tainted 3.13.0-67-generic #110-Ubuntu
> > [Wed Dec 2 17:22:28 2015] Hardware name: Supermicro X10SLL-F/X10SLL-F, BIOS 1.1b 11/01/2013
> > [Wed Dec 2 17:22:28 2015] 0000000000000001 ffff8801c5691bd0 ffffffff817240e0 ffff8801b15c3800
> > [Wed Dec 2 17:22:28 2015] ffff8801c5691be8 ffffffffa01aa6fb ffffffffa01a66f0 ffff8801c5691c20
> > [Wed Dec 2 17:22:28 2015] ffffffffa01aa755 000000d800200200 ffff8804a59ac780 ffff8800d917e658
> > [Wed Dec 2 17:22:28 2015] Call Trace:
> > [Wed Dec 2 17:22:28 2015] [<ffffffff817240e0>] dump_stack+0x45/0x56
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01aa6fb>] xfs_error_report+0x3b/0x40 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a66f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01aa755>] xfs_corruption_error+0x55/0x80 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a66f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a83d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a66f0>] _xfs_buf_ioapply+0x70/0x3a0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffff8109ac90>] ? wake_up_state+0x20/0x20
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a83d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a8336>] xfs_buf_iorequest+0x46/0x90 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a83d5>] xfs_bdstrat_cb+0x55/0xb0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a856b>] __xfs_buf_delwri_submit+0x13b/0x210 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a9000>] ? xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa0207af0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01a9000>] xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa0207d27>] xfsaild+0x237/0x5c0 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa0207af0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
> > [Wed Dec 2 17:22:28 2015] [<ffffffff8108b7d2>] kthread+0xd2/0xf0
> > [Wed Dec 2 17:22:28 2015] [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
> > [Wed Dec 2 17:22:28 2015] [<ffffffff81734c28>] ret_from_fork+0x58/0x90
> > [Wed Dec 2 17:22:28 2015] [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): Corruption detected. Unmount and run xfs_repair
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): xfs_do_force_shutdown(0x8) called from line 1320 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_buf.c. Return address = 0xffffffffa01a671c
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): Corruption of in-memory data detected. Shutting down filesystem
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): Please umount the filesystem and rectify the problem(s)
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): xfs_log_force: error 5 returned.
> > [Wed Dec 2 17:22:49 2015] XFS (sdc1): xfs_log_force: error 5 returned.
> > ...
> >
> > [ 19:10:49 ] - root at gluster-ger-ber-10 /var/log $xfs_info /gluster-export
> > meta-data=/dev/sdc1              isize=256    agcount=32, agsize=152596472 blks
> >          =                       sectsz=512   attr=2
> > data     =                       bsize=4096   blocks=4883087099, imaxpct=5
> >          =                       sunit=0      swidth=0 blks
> > naming   =version 2              bsize=4096   ascii-ci=0
> > log      =internal               bsize=4096   blocks=521728, version=2
> >          =                       sectsz=512   sunit=0 blks, lazy-count=1
> > realtime =none                   extsz=4096   blocks=0, rtextents=0
> > [ 19:10:55 ] - root at gluster-ger-ber-10 /var/log $
> >
> > [ 09:36:37 ] - root at gluster-ger-ber-10 /var/log $stat /gluster-export
> > stat: cannot stat ‘/gluster-export’: Input/output error
> > [ 09:36:45 ] - root at gluster-ger-ber-10 /var/log $
> >
> >
> > [ 08:50:43 ] - root at gluster-ger-ber-10 ~/tmp/syslog $dmesg -T | grep xfs_attr3_leaf_write_verify
> > [Tue Dec 1 23:24:53 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > [Tue Dec 1 23:24:53 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [Wed Dec 2 12:19:16 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > [Wed Dec 2 12:19:16 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [Wed Dec 2 17:22:28 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > [Wed Dec 2 17:22:28 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [Wed Dec 2 23:06:32 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > [Wed Dec 2 23:06:32 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> >
> > [ 08:06:28 ] - root at gluster-ger-ber-10 /var/log/glusterfs/geo-replication $grep xfs_attr3_leaf_write_verify /root/tmp/syslog/syslog*
> > Dec 2 00:01:50 gluster-ger-ber-10 kernel: [2278489.906268] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > Dec 2 00:01:50 gluster-ger-ber-10 kernel: [2278489.906448] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > Dec 2 12:56:57 gluster-ger-ber-10 kernel: [2324952.509891] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > Dec 2 12:56:57 gluster-ger-ber-10 kernel: [2324952.510414] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > (xfs_check / xfs_repair run here -> no fault)
> > Dec 2 18:00:27 gluster-ger-ber-10 kernel: [2343144.298098] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > Dec 2 18:00:27 gluster-ger-ber-10 kernel: [2343144.298259] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > Dec 2 23:44:52 gluster-ger-ber-10 kernel: [2363788.969849] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
> > Dec 2 23:44:52 gluster-ger-ber-10 kernel: [2363788.970217] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [ 08:06:37 ] - root at gluster-ger-ber-10 /var/log/glusterfs/geo-replication $
> >
> > [ 08:04:51 ] - root at gluster-ger-ber-12 ~/tmp/syslog $grep xfs_attr3_leaf_write_verify syslog*
> > Dec 2 00:01:10 gluster-ger-ber-12 kernel: [2276785.772229] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
> > Dec 2 00:01:10 gluster-ger-ber-12 kernel: [2276785.772504] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > Dec 2 12:59:08 gluster-ger-ber-12 kernel: [2323418.198659] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
> > Dec 2 12:59:08 gluster-ger-ber-12 kernel: [2323418.199085] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > (xfs_check / xfs_repair run here -> no fault)
> > Dec 2 18:30:47 gluster-ger-ber-12 kernel: [2343298.342473] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
> > Dec 2 18:30:47 gluster-ger-ber-12 kernel: [2343298.342850] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > Dec 2 23:48:38 gluster-ger-ber-12 kernel: [15001.493190] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01936f0
> > Dec 2 23:48:38 gluster-ger-ber-12 kernel: [15001.493550] [<ffffffffa01b4b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
> > [ 08:05:02 ] - root at gluster-ger-ber-12 ~/tmp/syslog $
> >
> > gluster-ger-ber-10-int:
> > glustershd.log :
> > [2015-12-02 23:45:33.160852] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
> > [2015-12-02 23:45:33.170590] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
> > [2015-12-02 23:45:43.784388] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
> > [2015-12-02 23:45:43.784543] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
> > [2015-12-02 23:45:50.000203] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 0-ger-ber-01-client-3: remote operation failed: Transport endpoint is not connected. Path: / (00000000-0000-0000-0000-000000000001). Key: trusted.glusterfs.pathinfo
> > [2015-12-02 23:49:33.524740] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-1: readv on 10.0.1.107:49152 failed (No data available)
> > [2015-12-02 23:49:33.524934] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-1: disconnected from ger-ber-01-client-1. Client process will keep trying to connect to glusterd until brick's port is available
> > [2015-12-02 23:49:43.882976] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
> >
> > sdn.log :
> > [2015-12-02 23:45:33.160963] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
> > [2015-12-02 23:45:33.168504] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
> > [2015-12-02 23:45:43.395787] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
> >
> > nfs.log :
> > [2015-12-02 23:45:33.160856] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
> > [2015-12-02 23:45:33.180366] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
> > [2015-12-02 23:45:43.780186] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
> > [2015-12-02 23:45:43.780340] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
> >
> > geo-replication log :
> > [2015-12-02 23:44:34.624957] I [master(/gluster-export):514:crawlwrap] _GMaster: 0 crawls, 0 turns
> > [2015-12-02 23:44:54.798414] E [syncdutils(/gluster-export):270:log_raise_exception] <top>: FAIL:
> > Traceback (most recent call last):
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main
> >     main_i()
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i
> >     local.service_loop(*[r for r in [remote] if r])
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1325, in service_loop
> >     g3.crawlwrap(oneshot=True)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 527, in crawlwrap
> >     brick_stime = self.xtime('.', self.slave)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 362, in xtime
> >     return self.xtime_low(rsc, path, **opts)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 132, in xtime_low
> >     xt = rsc.server.stime(path, self.uuid)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1259, in <lambda>
> >     uuid + '.' + gconf.slave_id)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 322, in ff
> >     return f(*a)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 510, in stime
> >     8)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 55, in lgetxattr
> >     return cls._query_xattr(path, siz, 'lgetxattr', attr)
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 47, in _query_xattr
> >     cls.raise_oserr()
> >   File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr
> >     raise OSError(errn, os.strerror(errn))
> > OSError: [Errno 5] Input/output error
> > [2015-12-02 23:44:54.845763] I [syncdutils(/gluster-export):214:finalize] <top>: exiting.
> > [2015-12-02 23:44:54.847527] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
> > [2015-12-02 23:44:54.847784] I [syncdutils(agent):214:finalize] <top>: exiting.
> > [2015-12-02 23:44:54.849092] I [monitor(monitor):141:set_state] Monitor: new state: faulty
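[Note that the OSError above is just gsyncd's stime xattr lookup hitting
the shut-down brick filesystem: once XFS has shut down, most operations
against that mount return EIO, which is also why the plain stat shown
earlier fails. A quick sanity check from the brick root (a sketch; the
real stime key embeds the master/slave volume UUIDs, which are elided
here, so a wildcard dump is used instead):

  stat /gluster-export                                       # EIO while XFS is shut down
  getfattr -d -m 'trusted.glusterfs' -e hex /gluster-export  # any xattr read fails the same way

Once the brick is remounted, the geo-rep monitor should restart the
worker and leave the faulty state on its own.]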
> >
> >
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users