[Bugs] [Bug 1466123] New: [RFE] Pass slave volume in geo-rep as read-only

bugzilla at redhat.com
Thu Jun 29 06:28:33 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1466123

            Bug ID: 1466123
           Summary: [RFE] Pass slave volume in geo-rep as read-only
           Product: Red Hat Gluster Storage
         Component: geo-replication
          Severity: medium
          Assignee: avishwan at redhat.com
          Reporter: gpezza2 at illinois.edu
        QA Contact: rhinduja at redhat.com
                CC: amukherj at redhat.com, avishwan at redhat.com,
                    bugs at gluster.org, chrisw at redhat.com, csaba at redhat.com,
                    cyril at peponnet.fr, khiremat at redhat.com,
                    nlevinki at redhat.com, pladd at redhat.com,
                    rhs-bugs at redhat.com, sankarshan at redhat.com,
                    storage-qa-internal at redhat.com, vshankar at redhat.com,
                    zsarosi at gmail.com



+++ This bug was initially created as a clone of Bug #1430608 +++

Description of problem:

Geo-replication cannot write to a read-only slave volume.

According to the bug report this is cloned from, this was supposedly fixed as
of 3.11.0. However, a fresh install of 3.11.1 shows that if the slave gluster
volume is set to read-only, geo-replication still fails, reporting that the
slave volume is read-only.

From the log file on the master:

"[2017-06-29 06:15:18.447021] I [master(/brick/brick1/gvol0):1363:crawl]
_GMaster: processing xsync changelog
/var/lib/misc/glusterfsd/gvol0/ssh%3A%2F%2Froot%40172.22.6.151%3Agluster%3A%2F%2F127.0.0.1%3Ageovol/b7cdfed7a45ded34d6b360dc29e54688/xsync/XSYNC-CHANGELOG.1498716917
[2017-06-29 06:15:18.455240] E [repce(/brick/brick1/gvol0):207:__call__]
RepceClient: call 15612:140708044412736:1498716918.45 (entry_ops) failed on
peer with OSError
[2017-06-29 06:15:18.455410] E
[syncdutils(/brick/brick1/gvol0):312:log_raise_exception] <top>: FAIL:

Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 782, in
main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1656, in
service_loop
    g1.crawlwrap(oneshot=True, register_time=register_time)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 600, in
crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1364, in
crawl
    self.process([item[1]], 0)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1039, in
process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 960, in
process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in
__call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in
__call__
    raise res
OSError: [Errno 30] Read-only file system"
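
The two repce frames at the bottom of the traceback are the giveaway:
entry_ops actually ran on the slave, hit EROFS there, and the exception object
was shipped back and re-raised on the master. Below is a minimal sketch of
that round trip; it is an illustrative model, not gluster's actual repce
implementation, and the names entry_ops_on_slave and call_remote are
hypothetical.

    # Model of the failure path in the traceback: the slave-side handler hits
    # EROFS, the exception travels back over the RPC channel, and the master
    # re-raises it -- the "raise res" frame in repce.py.
    import errno
    import pickle

    def entry_ops_on_slave(entries):
        # Stand-in for the slave-side entry_ops: with the volume read-only,
        # every namespace operation fails with EROFS (errno 30 on Linux).
        raise OSError(errno.EROFS, "Read-only file system")

    def call_remote(func, *args):
        # Stand-in for the repce request/response cycle: serialize either the
        # result or the exception, then re-raise exceptions on the caller side.
        try:
            payload = pickle.dumps((True, func(*args)))
        except OSError as exc:
            payload = pickle.dumps((False, exc))
        ok, res = pickle.loads(payload)
        if not ok:
            raise res   # the equivalent of repce.py's "raise res"
        return res

    try:
        call_remote(entry_ops_on_slave, [("ENTRY", "MKDIR")])
    except OSError as e:
        print("failed on peer with OSError:", e)  # [Errno 30] Read-only file system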

Version-Release number of selected component (if applicable):

Any (reproduced here on a fresh install of glusterfs 3.11.1).

How reproducible:

Always

Steps to Reproduce:
1. Create a geo-rep session.
2. Set the slave volume to read-only (features.read-only on).
3. Start the geo-rep session and look for a status of "Faulty" plus the error
above in the master log file for the session. (A scripted version of these
steps is sketched below.)
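
For the record, the steps above driven from Python. The volume names gvol0
(master) and geovol (slave) come from the log excerpt; the slave host name is
a placeholder, and I assume passwordless SSH and the usual session
prerequisites are already in place.

    # Reproduction sketch driving the gluster CLI via subprocess. "slavehost"
    # is hypothetical; run the 'volume set' step on a slave-side node.
    import subprocess

    MASTER_VOL = "gvol0"
    SLAVE = "slavehost::geovol"

    def gluster(*args):
        # Run one gluster CLI command, echoing it for the record.
        cmd = ["gluster", *args]
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Create the geo-rep session.
    gluster("volume", "geo-replication", MASTER_VOL, SLAVE, "create", "push-pem")

    # 2. On a slave-side node: gluster volume set geovol features.read-only on

    # 3. Start the session and watch the status column go to "Faulty".
    gluster("volume", "geo-replication", MASTER_VOL, SLAVE, "start")
    gluster("volume", "geo-replication", MASTER_VOL, SLAVE, "status")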

Actual results:

The geo-rep session goes Faulty and replication never runs; the master log
shows the OSError traceback above.

Expected results:

As the patch states, internal clients such as gsyncd should be able to write
to a read-only volume, so geo-replication should run even when the slave
volume is read-only.

--- Additional comment from Kotresh HR on 2017-03-09 01:08:06 EST ---

Upstream patches:
https://review.gluster.org/#/c/16854/
https://review.gluster.org/#/c/16855/

--- Additional comment from Worker Ant on 2017-03-09 01:40:52 EST ---

REVIEW: https://review.gluster.org/16854 (performance/write-behind: Honor the
client pid set) posted (#2) for review on master by Kotresh HR
(khiremat at redhat.com)

--- Additional comment from Worker Ant on 2017-03-09 01:41:00 EST ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal
clients to r/w) posted (#2) for review on master by Kotresh HR
(khiremat at redhat.com)
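
For context, the idea behind this patch: gluster's internal clients identify
themselves with negative client pids (gsyncd uses GF_CLIENT_PID_GSYNCD, which
is -1 if memory serves), so the read-only xlator can exempt them while still
rejecting ordinary clients. The real change is in the C xlator; the sketch
below is only a conceptual model of the check, with illustrative names.

    import errno

    GF_CLIENT_PID_GSYNCD = -1   # assumed value of the internal-client marker

    def read_only_check(client_pid, volume_read_only=True):
        # Before the fix: every write on a read-only volume raised EROFS.
        # After the fix: internal clients (negative pid) are allowed through.
        if volume_read_only and client_pid >= 0:
            raise OSError(errno.EROFS, "Read-only file system")
        return "write allowed"

    print(read_only_check(client_pid=GF_CLIENT_PID_GSYNCD))  # gsyncd: allowed
    try:
        read_only_check(client_pid=12345)                    # ordinary client
    except OSError as e:
        print("still blocked:", e)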

--- Additional comment from Worker Ant on 2017-03-10 00:17:03 EST ---

COMMIT: https://review.gluster.org/16854 committed in master by Raghavendra G
(rgowdapp at redhat.com) 
------
commit b9e1c911833ca1916055622e5265672d5935d925
Author: Kotresh HR <khiremat at redhat.com>
Date:   Mon Mar 6 10:34:05 2017 -0500

    performance/write-behind: Honor the client pid set

    write-behind xlator does not honor the client pid being
    set. It doesn't pass down the client pid saved in
    'frame->root->pid'. This patch fixes the same.

    Change-Id: I838dcf43f56d6d0aa1d2c88811a2b271d9e88d05
    BUG: 1430608
    Signed-off-by: Kotresh HR <khiremat at redhat.com>
    Reviewed-on: https://review.gluster.org/16854
    Smoke: Gluster Build System <jenkins at build.gluster.org>
    NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
    Reviewed-by: Vijay Bellur <vbellur at redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp at redhat.com>
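
This write-behind fix matters for the read-only exemption because
write-behind sits between gsyncd's client and the read-only check: per the
commit message, it did not pass down the client pid saved in
'frame->root->pid', so the layers below could not tell that the writer was an
internal client. The sketch below is a toy model of that frame plumbing, not
gluster's C code; all names are illustrative.

    from dataclasses import dataclass, field

    @dataclass
    class Root:
        pid: int = 0            # 0 stands in for "client pid not set"

    @dataclass
    class Frame:
        root: Root = field(default_factory=Root)

    def wind_write_buggy(incoming):
        # Pre-fix behavior: the write is wound down on a fresh frame,
        # so the saved client pid is lost.
        return Frame()

    def wind_write_fixed(incoming):
        # Post-fix behavior: the saved client pid is propagated.
        out = Frame()
        out.root.pid = incoming.root.pid
        return out

    gsyncd_frame = Frame(Root(pid=-1))              # internal-client marker
    print(wind_write_buggy(gsyncd_frame).root.pid)  # 0  -> looks external below
    print(wind_write_fixed(gsyncd_frame).root.pid)  # -1 -> recognized as internal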

--- Additional comment from Worker Ant on 2017-03-30 00:40:03 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal
clients to r/w) posted (#3) for review on master by Kotresh HR
(khiremat at redhat.com)

--- Additional comment from Worker Ant on 2017-04-27 05:55:25 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal
clients to r/w) posted (#4) for review on master by Kotresh HR
(khiremat at redhat.com)

--- Additional comment from Shyamsundar on 2017-05-30 14:47:11 EDT ---

This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and
packages for several distributions should become available in the near
future. Keep an eye on the Gluster Users mailing list [2] and the update
infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

--- Additional comment from Worker Ant on 2017-06-09 14:50:51 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal
clients to r/w) posted (#5) for review on master by Kotresh HR
(khiremat at redhat.com)
