[Bugs] [Bug 1224077] Directories are missing on the mount point after attaching tier to distribute replicate volume.
bugzilla at redhat.com
Thu Jun 11 17:21:35 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1224077
Triveni Rao <trao at redhat.com> changed:
What                    |Removed                 |Added
----------------------------------------------------------------------------
Status                  |ON_QA                   |VERIFIED
--- Comment #4 from Triveni Rao <trao at redhat.com> ---
This bug is verified; no issues were found. The steps below create a
distributed-replicate volume, attach a tier, and confirm that directories and
their contents remain visible on the mount point.
[root at rhsqa14-vm1 ~]# gluster v create venus replica 2
10.70.47.165:/rhs/brick1/m0 10.70.47.163:/rhs/brick1/m0
10.70.47.165:/rhs/brick2/m0 10.70.47.163:/rhs/brick2/m0 force
volume create: venus: success: please start the volume to access data
[root at rhsqa14-vm1 ~]# gluster v start venus
volume start: venus: success
[root at rhsqa14-vm1 ~]# gluster v info
Volume Name: venus
Type: Distributed-Replicate
Volume ID: ad3a7752-93f3-4a61-8b3c-b40bc5d9af4a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/m0
Brick2: 10.70.47.163:/rhs/brick1/m0
Brick3: 10.70.47.165:/rhs/brick2/m0
Brick4: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm1 ~]#
[root at rhsqa14-vm1 ~]# gluster v status
Status of volume: venus
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.165:/rhs/brick1/m0           49152     0          Y       3547
Brick 10.70.47.163:/rhs/brick1/m0           49152     0          Y       3097
Brick 10.70.47.165:/rhs/brick2/m0           49153     0          Y       3565
Brick 10.70.47.163:/rhs/brick2/m0           49153     0          Y       3115
NFS Server on localhost                     2049      0          Y       3588
Self-heal Daemon on localhost               N/A       N/A        Y       3593
NFS Server on 10.70.47.163                  2049      0          Y       3138
Self-heal Daemon on 10.70.47.163            N/A       N/A        Y       3145
Task Status of Volume venus
------------------------------------------------------------------------------
There are no active volume tasks
[root at rhsqa14-vm1 ~]#
[root at rhsqa14-vm1 ~]# gluster v attach-tier venus replica 2
10.70.47.165:/rhs/brick3/m0 10.70.47.163:/rhs/brick3/m0
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: success
volume rebalance: venus: success: Rebalance on venus has been started
successfully. Use rebalance status command to check status of the rebalance
process.
ID: 1bf4b512-7246-403d-b50e-f395e4051555
[root at rhsqa14-vm1 ~]# gluster v rebalance venus status
        Node  Rebalanced-files    size  scanned  failures  skipped  status       run time in secs
------------  ----------------  ------  -------  --------  -------  -----------  ----------------
   localhost                 0  0Bytes        0         0        0  in progress             18.00
10.70.47.163                 0  0Bytes        0         0        0  in progress             19.00
volume rebalance: venus: success:
[root at rhsqa14-vm1 ~]#
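The switch from rhsqa14-vm1 to the client rhsqa14-vm5 below implies a mount
step that was not captured in the output. Assuming a standard FUSE mount
against one of the servers (the exact command used during verification is an
assumption; the server address and /mnt path are taken from the surrounding
output), it would be along the lines of:

# hypothetical client-side mount, not captured in the transcript above
mount -t glusterfs 10.70.47.165:/venus /mnt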
[root at rhsqa14-vm5 mnt]# cd triveni/
[root at rhsqa14-vm5 triveni]# touch 1
[root at rhsqa14-vm5 triveni]# touch 2
[root at rhsqa14-vm5 triveni]# touch 4
[root at rhsqa14-vm5 triveni]# ls -la
total 0
drwxr-xr-x. 2 root root 36 Jun 11 13:12 .
drwxr-xr-x. 5 root root 106 Jun 11 13:12 ..
-rw-r--r--. 1 root root 0 Jun 11 13:12 1
-rw-r--r--. 1 root root 0 Jun 11 13:12 2
-rw-r--r--. 1 root root 0 Jun 11 13:12 4
[root at rhsqa14-vm5 triveni]# cd ..
[root at rhsqa14-vm5 mnt]# ls
triveni
[root at rhsqa14-vm5 mnt]#
[root at rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x. 5 root root 159 Jun 11 13:13 .
dr-xr-xr-x. 30 root root 4096 Jun 11 11:15 ..
drwxr-xr-x. 3 root root 72 Jun 11 13:13 .trashcan
drwxr-xr-x. 2 root root 42 Jun 11 13:13 triveni
[root at rhsqa14-vm5 mnt]# ls -la triveni/
total 0
drwxr-xr-x. 2 root root 42 Jun 11 13:13 .
drwxr-xr-x. 5 root root 159 Jun 11 13:13 ..
-rw-r--r--. 1 root root 0 Jun 11 13:12 1
-rw-r--r--. 1 root root 0 Jun 11 13:12 2
-rw-r--r--. 1 root root 0 Jun 11 13:12 4
[root at rhsqa14-vm5 mnt]#
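For a stricter check of the reported bug (directories missing after
attach-tier), the directory can also be looked for directly on the hot-tier
bricks added earlier; a hypothetical spot check, using the brick paths from
the attach-tier command above:

# hypothetical brick-level check, run on 10.70.47.165 and 10.70.47.163
ls -la /rhs/brick3/m0/triveni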