[Bugs] [Bug 1620580] New: Deleted a volume and created a new volume with similar but not the same name. The kubernetes pod still keeps on running and doesn't crash. Still possible to write to gluster mount

bugzilla at redhat.com bugzilla at redhat.com
Thu Aug 23 08:46:47 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1620580

            Bug ID: 1620580
           Summary: Deleted a volume and created a new volume with similar
                    but not the same name. The kubernetes pod still keeps
                    on running and doesn't crash. Still possible to write
                    to gluster mount
           Product: GlusterFS
           Version: 4.1
         Component: unclassified
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: jimmybob-leon at hotmail.co.uk
                CC: bugs at gluster.org



Description of problem:
We deleted a dispersed volume named `volume` and then split it into three new
volumes called `volume1`, `volume2` and `volume3`, reusing the same bricks. A
Kubernetes pod was running with the gluster volume mounted into it. After the
new volumes were created, I wrote to the mount point inside the pod and the
data appeared on all three new volumes. `df` still shows the original volume
mounted as `IP:/volume`, yet writes are replicated to all three new volumes.


Version-Release number of selected component (if applicable):
- 4.1 Gluster Server
- Linux 18.04
- Azure Kubernetes Service 1.11.1


How reproducible:
Steps to Reproduce:
1. Create dispersed `volume`
2. Start and mount `volume`
3. Stop and delete `volume`
4. Reuse the bricks to create `volume1`, `volume2` and `volume3`
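The steps above can be sketched with the Gluster CLI roughly as follows. This is an illustrative reproduction script, not taken from the report: the server hostnames (`server1`..`server3`), brick paths (`/bricks/bN`), disperse/redundancy counts, and the mount path are all assumptions.

```shell
# 1. Create and start a dispersed volume (hostnames/brick paths are assumed)
gluster volume create volume disperse 3 redundancy 1 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start volume

# 2. Mount it on the client (in the bug, this mount lives inside a k8s pod)
mount -t glusterfs server1:/volume /mnt/gluster

# 3. Stop and delete the volume while the client mount is still active
gluster volume stop volume
gluster volume delete volume

# 4. Reuse the same bricks for new volumes ('force' is needed because the
#    bricks were previously part of a deleted volume)
gluster volume create volume1 server1:/bricks/b1 force
gluster volume create volume2 server2:/bricks/b2 force
gluster volume create volume3 server3:/bricks/b3 force
for v in volume1 volume2 volume3; do gluster volume start "$v"; done

# 5. Write through the stale client mount: one would expect an I/O error,
#    but per the report the write succeeds and lands on the new volumes
echo test > /mnt/gluster/file
```

This sketch requires a running Gluster cluster, so it is meant as a reference for the reproduction flow rather than a standalone script.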

Actual results:
The original mount point is still active, and data written to it is replicated
to the new volumes.

Expected results:
The mount should be interrupted, and an error message should indicate that
volume `volume` no longer exists.

Additional info:

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.
