[Gluster-users] Some problems
Jeff Darcy
jdarcy at redhat.com
Fri Nov 30 17:51:47 UTC 2012
On 11/28/2012 10:27 AM, Atıf CEYLAN wrote:
> My first question: if GlusterFS starts before the imap/pop3 server, the imap/pop3
> server cannot bind ports 993 and 995, because GlusterFS is already using them.
> I don't understand why it uses these ports.
Like many other system programs, GlusterFS tries to use ports below 1024, which
are supposed to be privileged, hunting downward until it finds one that's
available. If this is a problem for you, I suggest looking into the
"portreserve" command.
> Second, one of the two Debian servers crashed and booted up again. When it
> started, the GlusterFS heal process began, but a few minutes later the records
> below were written to the log and the GlusterFS native client (FUSE) crashed.
>
> [2012-11-28 12:11:33.763486] E
> [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-m3-replicate-0:
> Unable to self-heal contents of
> '/domains/1/abc.com/info/Maildir/dovecot.index.log' (possible split-brain).
> Please delete the file from all but the preferred subvolume.
> [2012-11-28 12:11:33.763659] E
> [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-m3-replicate-0:
> background meta-data data self-heal failed on
> /domains/1/O/abc.com/info/Maildir/dovecot.index.log
> [2012-11-28 12:11:33.763927] W [afr-open.c:213:afr_open] 0-m3-replicate-0:
> failed to open as split brain seen, returning EIO
> [2012-11-28 12:11:33.763958] W [fuse-bridge.c:1948:fuse_readv_cbk]
> 0-glusterfs-fuse: 432877: READ =-1 (Input/output error)
> [2012-11-28 12:11:33.764039] W [afr-open.c:213:afr_open] 0-m3-replicate-0:
> failed to open as split brain seen, returning EIO
> [2012-11-28 12:11:33.764062] W [fuse-bridge.c:1948:fuse_readv_cbk]
> 0-glusterfs-fuse: 432878: READ =-1 (Input/output error)
> [2012-11-28 12:11:36.274580] E
> [afr-self-heal-data.c:763:afr_sh_data_fxattrop_fstat_done] 0-m3-replicate-0:
> Unable to self-heal contents of
> '/domains/xxx.com/info/Maildir/dovecot.index.log' (possible split-brain).
> Please delete the file from all but the preferred subvolume.
> [2012-11-28 12:11:36.274781] E
> [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-m3-replicate-0:
> background meta-data data self-heal failed on
> /domains/xxx.com/info/Maildir/dovecot.index.log
The phrase "split brain" means that we detected changes to both replicas, and
it would be unsafe to let one override the other (i.e. might lost data) so we
keep our hands off until the user has a chance to intervene. This can happen
in two distinct ways:
* Network partition: client A can only reach replica X, client B can only reach
replica Y, and both make changes which end up causing split brain.
* Multiple failures over time: X goes down, changes occur only on Y, then Y goes
down and X comes up (or X comes up and Y goes down before self-heal has
finished), so changes occur only at X.
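As the log message suggests, the usual manual fix is to pick the copy you trust,
remove the stale copy directly on its brick (not through the mount point), and
let self-heal copy the good one back. A rough sketch, with a made-up brick path
and assuming the volume is named "m3" as the logs imply:

    # on the server holding the *bad* copy, working on the brick itself
    rm /export/brick1/domains/1/abc.com/info/Maildir/dovecot.index.log
    # on 3.3+ you may also need to remove the matching gfid hard link
    # under <brick>/.glusterfs/ before the heal will proceed
    gluster volume heal m3 full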
The quorum feature should address both of these, at the expense of returning
errors when too few replicas are available (so it works best with a replica
count >= 3).
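If you want to try it, client-side quorum is controlled by volume options; a
minimal example (again assuming the volume name "m3") would be:

    gluster volume set m3 cluster.quorum-type auto

With "auto" the client allows writes only while more than half of the replicas
(or exactly half including the first one) are reachable, which is why replica 3
behaves much better than replica 2 here.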
It's also usually worth figuring out why such problems happened in the first
place. Do you have a lot of network problems or server failures? Are these
servers widely separated? Either is likely to cause problems not only with
GlusterFS but with any distributed filesystem, so it's a good idea to address
such issues or at least mention them when reporting problems.