[Bugs] [Bug 1205540] New: Data Classification:3.7.0:data loss:detach-tier not flushing data to cold-tier

bugzilla at redhat.com
Wed Mar 25 07:18:46 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1205540

            Bug ID: 1205540
           Summary: Data Classification:3.7.0:data loss:detach-tier not
                    flushing data to cold-tier
           Product: GlusterFS
           Version: mainline
         Component: core
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: nchilaka at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com



Description of problem:
=======================
In a tiered volume, detach-tier reports success but does not flush the data
on the hot tier to the cold tier.
This leads to data loss.


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.777.git2308c07.autobuild/


How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a GlusterFS volume (a plain distribute volume was used here) and
   start it.
2. Attach a tier to the volume using attach-tier.
3. Write some files to the volume. All files (space permitting) are written
   to the hot tier.
4. Detach the tier using the detach-tier command (see the consolidated
   command sketch below).
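
A consolidated command sketch of the above (hostnames and brick paths are
taken from the CLI logs below; the mount point /mnt/vol1 is an assumption):

gluster volume create vol1 rhs-client44:/pavanbrick1/vol1/b1 \
    rhs-client38:/pavanbrick1/vol1/b1 rhs-client37:/pavanbrick1/vol1/b1
gluster volume start vol1

# attach a hot tier (same bricks as in the logs below)
gluster volume attach-tier vol1 rhs-client44:/pavanbrick2/vol1_hot/hb1 \
    rhs-client37:/pavanbrick2/vol1_hot/hb1

# mount and write some files; they land on the hot tier
mount -t glusterfs rhs-client44:/vol1 /mnt/vol1
for i in $(seq 1 10); do
    dd if=/dev/urandom of=/mnt/vol1/file$i bs=1M count=1
done

# detach the tier
gluster volume detach-tier vol1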


Actual results:
===============
When we detach the tier, it is removed without the data on the hot tier being
flushed to the cold tier. Due to this, the files that resided on the hot tier
are lost.
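
One way to demonstrate the loss (a hedged sketch; /mnt/vol1 and the brick
paths are assumptions based on the logs below):

# record what the client sees before the detach
find /mnt/vol1 -type f | sort > /tmp/files.before

gluster volume detach-tier vol1

# files that lived on the hot tier vanish from the client view
find /mnt/vol1 -type f | sort > /tmp/files.after
diff /tmp/files.before /tmp/files.after

# the bits may still sit on the detached hot-tier brick, but the
# volume no longer references them
ls -lR /pavanbrick2/vol1_hot/hb1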

Expected results:
================
Detach-tier should succeed only after all data on the hot tier has been
flushed to the cold tier.
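
For comparison, plain remove-brick already implements a migrate-then-commit
flow, and detach-tier would be expected to behave analogously. A sketch of
that existing remove-brick flow (brick name reused from the logs below):

gluster volume remove-brick vol1 rhs-client44:/pavanbrick2/vol1_hot/hb1 start
gluster volume remove-brick vol1 rhs-client44:/pavanbrick2/vol1_hot/hb1 status
# commit only after status shows the data migration has completed
gluster volume remove-brick vol1 rhs-client44:/pavanbrick2/vol1_hot/hb1 commit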


Additional info (CLI logs):
===========================
[root@rhs-client44 everglades]# gluster v info vol1

Volume Name: vol1
Type: Distribute
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/vol1/b1
Brick2: rhs-client38:/pavanbrick1/vol1/b1
Brick3: rhs-client37:/pavanbrick1/vol1/b1
[root@rhs-client44 everglades]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client44:/pavanbrick1/vol1/b1     49152     0          Y       29969
Brick rhs-client38:/pavanbrick1/vol1/b1     49152     0          Y       30514
Brick rhs-client37:/pavanbrick1/vol1/b1     49152     0          Y       29475
NFS Server on localhost                     2049      0          Y       29993
NFS Server on rhs-client38                  2049      0          Y       30538
NFS Server on rhs-client37                  2049      0          Y       29499

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

[root@rhs-client44 everglades]# gluster v attach-tier vol1
rhs-client44:/pavanbrick2/vol1_hot/hb1 rhs-client37:/pavanbrick2/vol1_hot/hb1
volume add-brick: success
[root@rhs-client44 everglades]# gluster v info vol1

Volume Name: vol1
Type: Tier
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick2/vol1_hot/hb1
Brick2: rhs-client44:/pavanbrick2/vol1_hot/hb1
Brick3: rhs-client44:/pavanbrick1/vol1/b1
Brick4: rhs-client38:/pavanbrick1/vol1/b1
Brick5: rhs-client37:/pavanbrick1/vol1/b1



[root@rhs-client44 everglades]# gluster v detach-tier vol1
volume remove-brick unknown: success
[root@rhs-client44 everglades]# gluster v info vol1

Volume Name: vol1
Type: Distribute
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/vol1/b1
Brick2: rhs-client38:/pavanbrick1/vol1/b1
Brick3: rhs-client37:/pavanbrick1/vol1/b1
