[Bugs] [Bug 1210629] New: [GlusterFS 3.6.2 ] Gluster volume status shows junk characters even if volume exists

bugzilla at redhat.com bugzilla at redhat.com
Fri Apr 10 08:48:39 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1210629

            Bug ID: 1210629
           Summary: [GlusterFS 3.6.2 ] Gluster volume status shows junk
                    characters even if  volume exists
           Product: GlusterFS
           Version: 3.6.2
         Component: glusterd
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: ssamanta at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com



Description of problem:
The gluster volume status command prints junk characters even though the volume exists and is started.


Version-Release number of selected component (if applicable):
[root@gqas009 glusterd]# rpm -qa | grep gluster
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_hadoop-0.1-122.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gluster_selfheal-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gluster_quota_selfheal-0.2-11.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hbase-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_mapreduce-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_home_dir_listing-0.1-5.noarch
glusterfs-resource-agents-3.5.3-1.fc20.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_default_block_size-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multiuser_support-0.1-4.noarch
glusterfs-libs-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_setting_working_directory-0.1-2.noarch
glusterfs-extra-xlators-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_junit_shim-0.1-13.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gridmix3-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_common-0.2-117.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_gluster-0.2-78.noarch
glusterfs-cli-3.6.2-1.fc20.x86_64
glusterfs-geo-replication-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-glusterd_tests-0.2-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop-0.1-7.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_special_char_in_path-0.1-2.noarch
glusterfs-debuginfo-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_fs_counters-0.1-11.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multiple_volumes-0.1-18.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_generate_gridmix2_data-0.1-3.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hive-0.1-12.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_selinux_persistently_disabled-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_quota-0.1-6.noarch
glusterfs-api-3.6.2-1.fc20.x86_64
glusterfs-fuse-3.6.2-1.fc20.x86_64
glusterfs-hadoop-2.1.2-2.fc20.noarch
glusterfs-api-devel-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_testcli-0.2-7.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_dfsio-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multifilewc_null_pointer_exception-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_hadoop_security-0.0.1-9.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_dfsio_io_exception-0.1-8.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_ldap-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_fileappend-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_missing_dirs_create-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_sqoop-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_erroneous_multivolume_filepaths-0.1-4.noarch
glusterfs-hadoop-javadoc-2.1.2-2.fc20.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_groovy_sync-0.1-24.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_rhs_georep-0.1-3.noarch
glusterfs-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_append_to_file-0.1-6.noarch
glusterfs-devel-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_shim_access_error_messages-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_mahout-0.1-6.noarch
glusterfs-rdma-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_brick_sorted_order_of_filenames-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_bigtop-0.2.1-24.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_user_mapred_job-0.1-4.noarch
glusterfs-server-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_file_dir_permissions-0.1-9.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_pig-0.1-9.noarch
[root@gqas009 glusterd]#

How reproducible:
Tried once


Steps to Reproduce:
1. Create a gluster volume using bricks from a single node.
2. Start the volume.
3. Run the gluster volume status command (a command sketch follows below).
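
For reference, a minimal command sequence matching the steps above (the volume name and brick paths are taken from the gluster volume info output in Additional info; the trailing force on the create command only accepts the warning about placing replica bricks on the same server and is not part of the reproduction itself):

    # 2 x 2 distributed-replicate volume built from bricks on a single node
    gluster volume create testvol4 replica 2 \
        10.16.156.24:/rhs/brick1/new_testvol 10.16.156.24:/rhs/brick2/new_testvol \
        10.16.156.24:/rhs/brick3/new_testvol 10.16.156.24:/rhs/brick4/new_testvol force

    # start the volume, then query its status (this is where the junk output appears)
    gluster volume start testvol4
    gluster volume status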

Actual results:
gluster volume status prints junk characters and reports that the volume does not exist (see Additional info).

Expected results:
gluster volume status should report the status of the existing, started volume without any junk characters.


Additional info:

[root@gqas009 glusterd]# gluster volume status
��� does not exist

[root@gqas009 glusterd]# gluster volume info

Volume Name: testvol4
Type: Distributed-Replicate
Volume ID: 11e6dc91-a50b-45ec-a60e-90297a63245f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.16.156.24:/rhs/brick1/new_testvol
Brick2: 10.16.156.24:/rhs/brick2/new_testvol
Brick3: 10.16.156.24:/rhs/brick3/new_testvol
Brick4: 10.16.156.24:/rhs/brick4/new_testvol
[root@gqas009 glusterd]#

After starting the volume with the force option, gluster volume status displays the output correctly.

[root@gqas009 glusterd]# gluster volume start testvol4 force
volume start: testvol4: success
[root@gqas009 glusterd]# gluster volume status
Status of volume: testvol4
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.16.156.24:/rhs/brick1/new_testvol              49152   Y       24711
Brick 10.16.156.24:/rhs/brick2/new_testvol              49153   Y       24722
Brick 10.16.156.24:/rhs/brick3/new_testvol              49154   Y       24733
Brick 10.16.156.24:/rhs/brick4/new_testvol              49155   Y       24744
NFS Server on gqas006.sbu.lab.eng.bos.redhat.com        2049    Y       3436
Self-heal Daemon on gqas006.sbu.lab.eng.bos.redhat.com  N/A     Y       3445
NFS Server on gqas005.sbu.lab.eng.bos.redhat.com        2049    Y       24582
Self-heal Daemon on gqas005.sbu.lab.eng.bos.redhat.com  N/A     Y       24591
NFS Server on gqas006.sbu.lab.eng.bos.redhat.com        2049    Y       24756
Self-heal Daemon on gqas006.sbu.lab.eng.bos.redhat.com  N/A     Y       24765

Task Status of Volume testvol4
------------------------------------------------------------------------------
There are no active volume tasks

[root@gqas009 glusterd]#

glusterd logs
=============

[2015-04-10 08:23:14.309645] W [socket.c:2992:socket_connect] 0-management:
Ignore failed connection attempt on , (No such file or directory)
[2015-04-10 08:23:15.313962] I [rpc-clnt.c:969:rpc_clnt_connection_init]
0-management: setting frame-timeout to 600
[2015-04-10 08:23:15.314156] W [socket.c:2992:socket_connect] 0-management:
Ignore failed connection attempt on , (No such file or directory)
[2015-04-10 08:23:15.314347] I [mem-pool.c:545:mem_pool_destroy] 0-management:
size=588 max=0 total=0
[2015-04-10 08:23:15.314508] I [mem-pool.c:545:mem_pool_destroy] 0-management:
size=124 max=0 total=0
[2015-04-10 08:23:15.315593] W [socket.c:611:__socket_rwv] 0-management: readv
on /var/run/aa0e1a5524794a3182fd0fac91ec4bd1.socket failed (Invalid argument)
[2015-04-10 08:23:15.315639] I [MSGID: 106006]
[glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management: nfs has
disconnected from glusterd.
[2015-04-10 08:23:15.315687] I [mem-pool.c:545:mem_pool_destroy] 0-management:
size=588 max=0 total=0
[2015-04-10 08:23:15.315712] I [mem-pool.c:545:mem_pool_destroy] 0-management:
size=124 max=0 total=0
[2015-04-10 08:23:15.315891] W [socket.c:611:__socket_rwv] 0-management: readv
on /var/run/a5b06d12456fd4e65e330fcb32324b80.socket failed (Invalid argument)
[2015-04-10 08:23:15.315941] I [MSGID: 106006]
[glusterd-handler.c:4257:__glusterd_nodesvc_rpc_notify] 0-management:
glustershd has disconnected from glusterd.
[2015-04-10 08:23:15.316841] E [run.c:190:runner_log] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f2cf54b14c6] (-->
/lib64/libglusterfs.so.0(runner_log+0xfc)[0x7f2cf54fc24c] (-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x47a)[0x7f2cea74745a]
(-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(+0xd0a15)[0x7f2cea747a15]
(--> /lib64/libpthread.so.0(+0x7ee5)[0x7f2cf4c44ee5] ))))) 0-management: Failed
to execute script: /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
--volname=testvol4 --first=yes --version=1 --volume-op=start
--gd-workdir=/var/lib/glusterd
[2015-04-10 08:23:15.318677] E [run.c:190:runner_log] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f2cf54b14c6] (-->
/lib64/libglusterfs.so.0(runner_log+0xfc)[0x7f2cf54fc24c] (-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x47a)[0x7f2cea74745a]
(-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(+0xd0a15)[0x7f2cea747a15]
(--> /lib64/libpthread.so.0(+0x7ee5)[0x7f2cf4c44ee5] ))))) 0-management: Failed
to execute script: /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
--volname=testvol4 --first=yes --version=1 --volume-op=start
--gd-workdir=/var/lib/glusterd
[2015-04-10 08:23:15.345311] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap:
adding brick /rhs/brick1/new_testvol on port 49152
[2015-04-10 08:23:15.371485] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap:
adding brick /rhs/brick2/new_testvol on port 49153
[2015-04-10 08:23:15.397314] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap:
adding brick /rhs/brick3/new_testvol on port 49154
[2015-04-10 08:23:15.423099] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap:
adding brick /rhs/brick4/new_testvol on port 49155
[2015-04-10 08:23:22.509673] W
[glusterd-op-sm.c:4021:glusterd_op_modify_op_ctx] 0-management: op_ctx
modification failed
[2015-04-10 08:23:22.511160] I
[glusterd-handler.c:3803:__glusterd_handle_status_volume] 0-management:
Received status volume req for volume testvol4
[2015-04-10 08:25:21.773155] W [socket.c:611:__socket_rwv] 0-management: readv
on 10.16.156.15:24007 failed (No data available)
[2015-04-10 08:25:21.773232] I [MSGID: 106004]
[glusterd-handler.c:4365:__glusterd_peer_rpc_notify] 0-management: Peer
5d35d1e2-c497-49f7-91f5-6769d6d859e0, in Peer in Cluster state, has
disconnected from glusterd.
[2015-04-10 08:25:21.773524] W [glusterd-locks.c:647:glusterd_mgmt_v3_unlock]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f2cf54b14c6] (-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(glusterd_mgmt_v3_unlock+0x3f1)[0x7f2cea749531]
(-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(__glusterd_peer_rpc_notify+0x1a2)[0x7f2cea6c1442]
(-->
/usr/lib64/glusterfs/3.6.2/xlator/mgmt/glusterd.so(glusterd_big_locked_notify+0x4c)[0x7f2cea6ba01c]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x90)[0x7f2cf52862c0] )))))
0-management: Lock for vol testvol4 not held
[2015-04-10 08:25:25.956321] I
[glusterd-handshake.c:1119:__glusterd_mgmt_hndsk_versions_ack] 0-management:
using the op-version 30600
[2015-04-10 08:25:25.961384] I
[glusterd-handler.c:2216:__glusterd_handle_incoming_friend_req] 0-glusterd:
Received probe from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d
[2015-04-10 08:25:32.385520] I
[glusterd-handler.c:3334:glusterd_xfer_friend_add_resp] 0-glusterd: Responded
to gqas006.sbu.lab.eng.bos.redhat.com (0), ret: 0
[2015-04-10 08:25:32.392088] I
[glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received
friend update from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d
[2015-04-10 08:25:32.392138] I
[glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management:
Received my uuid as Friend
[2015-04-10 08:25:32.397032] I
[glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received
ACC from uuid: 7077773b-0c03-40be-ba30-2542b3022a22
[2015-04-10 08:25:34.402503] I
[glusterd-rpc-ops.c:633:__glusterd_friend_update_cbk] 0-management: Received
ACC from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d
[2015-04-10 08:25:34.408180] I
[glusterd-rpc-ops.c:436:__glusterd_friend_add_cbk] 0-glusterd: Received ACC
from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d, host:
gqas006.sbu.lab.eng.bos.redhat.com, port: 0
[2015-04-10 08:25:34.410366] I
[glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received
friend update from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d
[2015-04-10 08:25:34.410408] I
[glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management:
Received my uuid as Friend
[2015-04-10 08:25:35.833278] I
[glusterd-handler.c:2373:__glusterd_handle_friend_update] 0-glusterd: Received
friend update from uuid: 1be72175-4ccf-4f92-86d0-70c9d872362d
[2015-04-10 08:25:35.833323] I
[glusterd-handler.c:2416:__glusterd_handle_friend_update] 0-management:
Received my uuid as Friend
