[Bugs] [Bug 1229233] Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier

bugzilla at redhat.com bugzilla at redhat.com
Mon Jun 29 12:26:13 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1229233

nchilaka <nchilaka at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |ASSIGNED



--- Comment #9 from nchilaka <nchilaka at redhat.com> ---
Moving the bug to failed QA due to the problems below.
=====Problem 1===== (refer attachment 1044341)
1) Had a setup with 3 nodes: A (tettnang), B (zod) and C (yarrow).
2) Created a 2x2 dist-rep volume with bricks belonging only to nodes B and C:
[root@tettnang ~]# gluster v info v1

Volume Name: v1
Type: Distributed-Replicate
Volume ID: acd70756-8a8c-4cd9-a4c4-b5cc4bfad8ee
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: zod:/rhs/brick1/v1
Brick2: yarrow:/rhs/brick1/v1
Brick3: zod:/rhs/brick2/v1
Brick4: yarrow:/rhs/brick2/v1
Options Reconfigured:
performance.readdir-ahead: on
[root@tettnang ~]# gluster v status v1
Status of volume: v1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick zod:/rhs/brick1/v1                    49160     0          Y       26484
Brick yarrow:/rhs/brick1/v1                 49159     0          Y       11082
Brick zod:/rhs/brick2/v1                    49161     0          Y       26504
Brick yarrow:/rhs/brick2/v1                 49160     0          Y       11100
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       21765
NFS Server on yarrow                        N/A       N/A        N       N/A  
Self-heal Daemon on yarrow                  N/A       N/A        Y       11130
NFS Server on zod                           N/A       N/A        N       N/A  
Self-heal Daemon on zod                     N/A       N/A        Y       26548

Task Status of Volume v1
------------------------------------------------------------------------------
There are no active volume tasks

3) Attached a pure-distribute hot tier, again with bricks belonging only to
nodes B and C.
4) Created some files on the mount point (FUSE mount).
5) Ran detach-tier start. On detach-tier start, I checked the backend bricks
and found that link files were created on the cold tier while the hot tier
still held the cached file contents.
6) Ran detach-tier commit, and the commit passed (a sketch of this command
sequence is given below).
The files still exist on the mount afterwards (which means at least the same
filenames were created on the cold tier via the T files), but the file
contents are missing.
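
For reference, a minimal sketch of the command sequence described above,
assuming the GlusterFS 3.7 CLI; the hot-tier brick paths, mount point and
file names are illustrative, not taken from this setup:

# cold tier: 2x2 dist-rep on nodes B and C
gluster volume create v1 replica 2 \
    zod:/rhs/brick1/v1 yarrow:/rhs/brick1/v1 \
    zod:/rhs/brick2/v1 yarrow:/rhs/brick2/v1
gluster volume start v1

# hot tier: pure distribute, also on nodes B and C (paths illustrative)
gluster volume attach-tier v1 zod:/rhs/hot1/v1 yarrow:/rhs/hot1/v1

# write some data through a FUSE mount
mount -t glusterfs tettnang:/v1 /mnt/v1
for i in 1 2 3; do dd if=/dev/urandom of=/mnt/v1/file$i bs=1M count=1; done

# detach the hot tier: start should demote/flush all data to the cold tier
gluster volume detach-tier v1 start
gluster volume detach-tier v1 status
gluster volume detach-tier v1 commit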

I checked the backend bricks and found that reading the cold-brick files (T
files) returns no content, while on the hot bricks the file contents remain.
This means the data of the files is not getting flushed to the cold tier.
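
A sketch of how this can be checked on the backend bricks; file and brick
paths are illustrative. DHT link files (T files) are zero-length, carry the
T mode bit and a trusted.glusterfs.dht.linkto xattr:

# on a cold brick: expect mode ---------T and size 0 for the affected file
ls -l /rhs/brick1/v1/file1
getfattr -n trusted.glusterfs.dht.linkto -e text /rhs/brick1/v1/file1

# on a hot brick: the real contents are still here after the commit
ls -l /rhs/hot1/v1/file1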

Note: for another file that I read/accessed after detach-tier start but
before the commit, the contents did get moved to the cold brick.
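
The observation in that note can be reproduced with a plain read through the
mount (file name illustrative):

# read the file after detach-tier start, before commit
cat /mnt/v1/file2 > /dev/null
# its contents then show up on the cold brick rather than only a T file
ls -l /rhs/brick1/v1/file2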


=====Problem 2=====
On the same setup I created a tiered volume (a distribute hot tier over a
dist-rep cold tier), but this time used bricks from all three nodes.
On detach-tier start I got the following error:
[root@tettnang ~]# gluster v detach-tier v2 start
volume detach-tier start: failed: Bricks not from same subvol for distribute

The corresponding glusterd log excerpt:
[2015-06-29 11:21:40.950707] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:41.487010] I [MSGID: 106484]
[glusterd-brick-ops.c:819:__glusterd_handle_remove_brick] 0-management:
Received rem brick req
[2015-06-29 11:21:41.494410] E [MSGID: 106265]
[glusterd-brick-ops.c:1063:__glusterd_handle_remove_brick] 0-management: Bricks
not from same subvol for distribute
[2015-06-29 11:21:43.951084] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:46.951408] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:49.951739] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:52.952016] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:55.952298] W [socket.c:642:__socket_rwv] 0-nfs: readv on
/var/run/gluster/54cb1d4c2770a75d3e2bccd62ecdecc8.socket failed (Invalid
argument)
[2015-06-29 11:21:58.444632] E [MSGID: 106301]
[glusterd-op-sm.c:4043:glusterd_op_ac_send_stage_op] 0-management: Staging of
operation 'Volume Rebalance' failed on localhost : Detach-tier not started.
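
For completeness, a minimal sketch of a tiered volume spanning all three
nodes that would be expected to hit the error above on detach-tier start;
volume name and brick paths are illustrative:

# cold tier: dist-rep using bricks from all three nodes
gluster volume create v2 replica 2 \
    zod:/rhs/brick3/v2 yarrow:/rhs/brick3/v2 \
    tettnang:/rhs/brick3/v2 zod:/rhs/brick4/v2
gluster volume start v2

# hot tier: pure distribute with bricks from all nodes
gluster volume attach-tier v2 \
    tettnang:/rhs/hot2/v2 zod:/rhs/hot2/v2 yarrow:/rhs/hot2/v2

# fails: "volume detach-tier start: failed: Bricks not from same subvol
# for distribute"
gluster volume detach-tier v2 start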
