[Bugs] [Bug 1209831] New: peer probe fails because of missing glusterd.info file

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 8 09:54:46 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1209831

            Bug ID: 1209831
           Summary: peer probe fails because of missing glusterd.info file
           Product: GlusterFS
           Version: 3.6.2
         Component: glusterd
          Severity: medium
          Assignee: bugs at gluster.org
          Reporter: ssamanta at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com



Description of problem:
Peer probe on a fresh cluster fails because of a missing glusterd.info file.


Version-Release number of selected component (if applicable):
[root at gqas009 ~]# rpm -qa | grep glusterfs
glusterfs-api-devel-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hive-0.1-11.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hbase-0.1-3.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_fs_counters-0.1-10.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multiuser_support-0.1-3.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_fileappend-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_hadoop-0.1-121.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_quota-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multiple_volumes-0.1-17.noarch
glusterfs-libs-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_dfsio_io_exception-0.1-8.noarch
glusterfs-fuse-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_shim_access_error_messages-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_sqoop-0.1-1.noarch
glusterfs-devel-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_gluster-0.2-77.noarch
glusterfs-resource-agents-3.5.3-1.fc20.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_brick_sorted_order_of_filenames-0.1-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_bigtop-0.2.1-23.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_erroneous_multivolume_filepaths-0.1-3.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gluster_selfheal-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_file_dir_permissions-0.1-8.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_selinux_persistently_disabled-0.1-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_user_mapred_job-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_generate_gridmix2_data-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_hadoop_security-0.0.1-7.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_dfsio-0.1-1.noarch
glusterfs-api-3.6.2-1.fc20.x86_64
glusterfs-extra-xlators-3.6.2-1.fc20.x86_64
glusterfs-server-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_common-0.2-111.noarch
glusterfs-hadoop-2.1.2-2.fc20.noarch
glusterfs-geo-replication-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_special_char_in_path-0.1-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_groovy_sync-0.1-23.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gluster_quota_selfheal-0.2-10.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_multifilewc_null_pointer_exception-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_pig-0.1-8.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_gridmix3-0.1-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_setting_working_directory-0.1-1.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-setup_rhs_georep-0.1-2.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_home_dir_listing-0.1-4.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_hcfs_testcli-0.2-6.noarch
glusterfs-hadoop-javadoc-2.1.2-2.fc20.noarch
glusterfs-debuginfo-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_missing_dirs_create-0.1-3.noarch
glusterfs-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_hadoop_mapreduce-0.1-5.noarch
glusterfs-cli-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_append_to_file-0.1-5.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop_mahout-0.1-5.noarch
glusterfs-rdma-3.6.2-1.fc20.x86_64
glusterfs-hadoop-distribution-glusterfs-hadoop-test_bigtop-0.1-7.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_default_block_size-0.1-3.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_ldap-0.1-6.noarch
glusterfs-hadoop-distribution-glusterfs-hadoop-test_junit_shim-0.1-12.noarch
[root at gqas009 ~]# 


How reproducible:
Tried once


Steps to Reproduce:
1. Install Fedora 20 and the GlusterFS 3.6.2 RPMs on two nodes.
2. Start the glusterd service after modifying the glusterd.vol file to allow
RPC requests from non-privileged ports.
3. From node1, run: gluster peer probe <node2-ip> (see the command sketch below)
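
A rough command-level sketch of the steps above (the service command and the
exact glusterd.vol line are assumptions based on the config shown further down,
not a captured transcript):

# on both nodes, after installing the 3.6.2 RPMs:
# add "option rpc-auth-allow-insecure on" to /etc/glusterfs/glusterd.vol
systemctl start glusterd

# on node1:
gluster peer probe <node2-ip>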


Actual results:
Peer probe fails as the /var/lib/glusterd/glusterd.info file is missing.

Expected results:
Peer probe should not fail.

Workaround: After creating a volume with bricks from node1, the peer probe
succeeds.
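
A minimal sketch of the workaround, assuming the bricks shown in the volume
info further down ("force" may be needed since all replica bricks sit on one
node; this is illustrative, not the exact command that was run):

gluster volume create testvol1 replica 2 \
    10.16.156.24:/rhs/brick1/new_testvol2 \
    10.16.156.24:/rhs/brick2/new_testvol2 \
    10.16.156.24:/rhs/brick3/new_testvol2 \
    10.16.156.24:/rhs/brick4/new_testvol2 force
gluster volume start testvol1
gluster peer probe <node2-ip>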

I think glusterd startup on the node should have failed if the glusterd.info
file is missing for some reason.

Additional info:
I will attach the sos-reports shortly.


[root at gqas009 ~]# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 30
    option rpc-auth-allow-insecure on
#   option base-port 49152
end-volume
[root at gqas009 ~]#

[root at gqas009 ~]# pgrep glusterd
 25489
[root at gqas009 ~]#

[root at gqas009 ~]# less /var/lib/glusterd/glusterd.info
/var/lib/glusterd/glusterd.info: No such file or directory
[root at gqas009 ~]#
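
For reference, on a node where glusterd has generated it, the file normally
holds at least the node's UUID (placeholder values below, not taken from this
system):

# cat /var/lib/glusterd/glusterd.info
UUID=<node-uuid>
operating-version=<op-version>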


Create a volume with a set of bricks hosted on the same node.

[root at gqas009 ~]# gluster volume info

Volume Name: testvol1
Type: Distributed-Replicate
Volume ID: 5ee47ecc-e22c-4099-acfa-53d5364a16cc
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.16.156.24:/rhs/brick1/new_testvol2
Brick2: 10.16.156.24:/rhs/brick2/new_testvol2
Brick3: 10.16.156.24:/rhs/brick3/new_testvol2
Brick4: 10.16.156.24:/rhs/brick4/new_testvol2
Options Reconfigured:
server.ssl: on
client.ssl: on
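
Once the volume exists, re-running the probe from node1 succeeds; the commands
involved would be along these lines (output not captured):

gluster peer probe <node2-ip>
gluster peer status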

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

