[Bugs] [Bug 1793490] New: snapshot clone volume is not exported via NFS-Ganesha

bugzilla at redhat.com
Tue Jan 21 13:34:52 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1793490

            Bug ID: 1793490
           Summary: snapshot clone volume is not exported via NFS-Ganesha
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: snapshot
          Assignee: bugs at gluster.org
          Reporter: rkavunga at redhat.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



This bug was initially created as a copy of Bug #1724526

I am copying this bug because: 



Description of problem:

If a snapshot is taken of a volume exported via NFS-Ganesha and that snapshot
is then cloned, the resulting clone volume should also be exported via
NFS-Ganesha, but it is not.


Version-Release number of selected component (if applicable):
glusterfs-6.0-6.el7rhgs.x86_64
nfs-ganesha-2.7.3-5.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Export a volume via NFS-Ganesha
2. Create a snapshot of that volume (the transcript below uses a snapshot
   named 'snap1_notimestamp')
3. Clone that snapshot (here into a clone volume named 'snap1_clone')
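The steps above can be sketched as the following CLI session. The parent
volume name ('snap_vol') is an assumption based on the export list shown in
the transcript below; the snapshot and clone names match that transcript.

```shell
# 1. Export an existing volume via NFS-Ganesha
gluster volume set snap_vol ganesha.enable on

# 2. Create a snapshot of that volume (no-timestamp keeps the name literal)
gluster snapshot create snap1_notimestamp snap_vol no-timestamp

# 3. Clone the snapshot; the clone should inherit the export, but does not
gluster snapshot clone snap1_clone snap1_notimestamp

# Verify: the clone volume is missing from the NFS export list
showmount -e localhost
```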

Actual results:

The clone volume contains ganesha.enable set to 'on' but is not exported via
NFS-Ganesha


[root at dhcp41-180 ~]# gluster snapshot clone snap1_clone snap1_notimestamp
snapshot clone: success: Clone snap1_clone created successfully
[root at dhcp41-180 ~]# 
[root at dhcp41-180 ~]# showmount -e localhost
Export list for localhost:
/cpu_vol  (everyone)
/snap_vol (everyone)
[root at dhcp41-180 ~]# gluster v info snap1_clone

Volume Name: snap1_clone
Type: Distributed-Replicate
Volume ID: 2f5cb9b8-90c5-42b1-8e13-41200162e8d6
Status: Created
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1:
dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick1/s1
Brick2:
dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick2/s1
Brick3:
dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick3/s1
Brick4:
dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick4/s2
Brick5:
dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick5/s2
Brick6:
dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick6/s2
Brick7:
dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick7/s3
Brick8:
dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick8/s3
Brick9:
dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick9/s3
Brick10:
dhcp43-70.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick10/s4
Brick11:
dhcp41-180.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick11/s4
Brick12:
dhcp43-212.lab.eng.blr.redhat.com:/run/gluster/snaps/2f5cb9b890c542b18e1341200162e8d6/brick12/s4
Options Reconfigured:
ganesha.enable: on
features.cache-invalidation: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
cluster.enable-shared-storage: enable
nfs-ganesha: enable
[root at dhcp41-180 ~]# 

[root at dhcp41-180 ~]# ls /var/run/gluster/shared_storage/nfs-ganesha/exports/
export.cpu_vol.conf  export.snap_vol.conf
[root at dhcp41-180 ~]# 

[root at dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable on
volume set: failed: ganesha.enable is already 'on'.
[root at dhcp41-180 ~]#

[root at dhcp41-180 ~]# gluster v set snap1_clone ganesha.enable off
volume set: failed: Dynamic export addition/deletion failed. Please see log
file for details
[root at dhcp41-180 ~]#

Expected results:

snap1_clone (snapshot clone volume) should be exported via NFS-Ganesha

Additional info:

For this volume, ganesha.enable can neither be turned on nor off. To work
around the issue, one must either edit the glusterd volume options directly or
create an export config file and export the volume manually
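For the manual-export workaround, a minimal export config file could look like
the sketch below, placed alongside the existing export.*.conf files under
/var/run/gluster/shared_storage/nfs-ganesha/exports/. The Export_Id value and
access options here are illustrative assumptions; the Export_Id must be unique
across all exports on the cluster.

```
# export.snap1_clone.conf - hypothetical manual export for the clone volume
EXPORT {
    Export_Id = 3;               # assumed free id; must not collide
    Path = "/snap1_clone";
    Pseudo = "/snap1_clone";
    Access_Type = RW;
    Squash = No_root_squash;
    SecType = "sys";
    FSAL {
        Name = "GLUSTER";
        Hostname = localhost;
        Volume = "snap1_clone";
    }
}
```

After creating the file, the export would still need to be loaded into the
running ganesha.nfsd (e.g. via its DBus dynamic-export interface) or picked up
on a service restart.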
