[Bugs] [Bug 1540249] Gluster is trying to use a port outside documentation and firewalld's glusterfs.xml

bugzilla at redhat.com bugzilla at redhat.com
Mon Feb 12 09:47:02 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1540249



--- Comment #23 from devianca at gmail.com ---
(In reply to devianca from comment #21)
> Both *were already* set, check
> https://bugzilla.redhat.com/show_bug.cgi?id=1540249#c1
> 
> Now, also disabled both clients, and still getting Disconnected,
> 
> node1:
> 
> [root@ProdigyX ~]# uptime
>  10:37:35 up 11 min,  1 user,  load average: 0,00, 0,01, 0,03
> [root@ProdigyX ~]# systemctl status gluster.mount
> ● gluster.mount - Mount Gluster
>    Loaded: loaded (/etc/systemd/system/gluster.mount; disabled; vendor preset: disabled)
>    Active: inactive (dead)
>     Where: /gluster
>      What: 127.0.0.1:/replica1
> [root@ProdigyX ~]# gluster pool list
> UUID                                    Hostname        State
> 2f6697f4-2529-4072-910c-8862fdc43562    10.250.1.1      Disconnected
> f7976943-b81a-4bb4-a1fb-06253bf064c4    localhost       Connected
> [root@ProdigyX ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.250.1.1
> Uuid: 2f6697f4-2529-4072-910c-8862fdc43562
> State: Peer in Cluster (Disconnected)
> [root@ProdigyX ~]# gluster volume status
> Status of volume: replica1
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.250.1.2:/array0/brick1             49152     0          Y       1390
> Self-heal Daemon on localhost               N/A       N/A        Y       1373
> 
> Task Status of Volume replica1
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> [root@ProdigyX ~]# gluster volume info
> 
> Volume Name: replica1
> Type: Replicate
> Volume ID: 5331fac2-42b6-4530-bf79-1ec0236efbc4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.250.1.2:/array0/brick1
> Brick2: 10.250.1.1:/raid0array1/brick2
> Options Reconfigured:
> client.bind-insecure: off
> performance.client-io-threads: off
> auth.allow: 10.250.1.1,10.250.1.2
> transport.address-family: inet
> nfs.disable: on
> server.event-threads: 8
> performance.io-thread-count: 64
> performance.cache-size: 32MB
> performance.write-behind-window-size: 64MB
> server.allow-insecure: off
> [root@ProdigyX ~]# netstat -tap | grep gluster
> tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      1390/glusterfsd
> tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      1361/glusterd
> tcp        0      0 ProdigyX:1019           10.250.1.1:49152        ESTABLISHED 1373/glusterfs
> tcp        0      0 ProdigyX:1020           ProdigyX:49152          ESTABLISHED 1373/glusterfs
> tcp        0      0 ProdigyX:49152          10.250.1.1:exp2         ESTABLISHED 1390/glusterfsd
> tcp        0      0 ProdigyX:24007          10.250.1.1:49151        ESTABLISHED 1361/glusterd
> tcp        0      1 ProdigyX:49151          10.250.1.1:24007        SYN_SENT    1361/glusterd
> tcp        0      0 ProdigyX:24007          ProdigyX:49149          ESTABLISHED 1361/glusterd
> tcp        0      0 ProdigyX:49152          ProdigyX:1020           ESTABLISHED 1390/glusterfsd
> tcp        0      0 ProdigyX:49149          ProdigyX:24007          ESTABLISHED 1390/glusterfsd
> tcp        0      0 localhost:49150         localhost:24007         ESTABLISHED 1373/glusterfs
> tcp        0      0 localhost:24007         localhost:49150         ESTABLISHED 1361/glusterd
> 
> node2:
> 
> [root@BUNKER ~]# uptime
>  10:37:34 up 26 min,  1 user,  load average: 0,00, 0,01, 0,05
> [root@BUNKER ~]# systemctl status gluster.mount
> ● gluster.mount - Mount Gluster
>    Loaded: loaded (/etc/systemd/system/gluster.mount; disabled; vendor preset: disabled)
>    Active: inactive (dead)
>     Where: /gluster
>      What: 127.0.0.1:/replica1
> [root@BUNKER ~]# gluster pool list
> UUID                                    Hostname        State
> f7976943-b81a-4bb4-a1fb-06253bf064c4    10.250.1.2      Connected
> 2f6697f4-2529-4072-910c-8862fdc43562    localhost       Connected
> [root@BUNKER ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.250.1.2
> Uuid: f7976943-b81a-4bb4-a1fb-06253bf064c4
> State: Peer in Cluster (Connected)
> [root@BUNKER ~]# gluster volume status
> Status of volume: replica1
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 10.250.1.2:/array0/brick1             49152     0          Y       1390
> Brick 10.250.1.1:/raid0array1/brick2        49152     0          Y       1334
> Self-heal Daemon on localhost               N/A       N/A        Y       1149
> Self-heal Daemon on 10.250.1.2              N/A       N/A        Y       1373
> 
> Task Status of Volume replica1
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> [root@BUNKER ~]# gluster volume info
> 
> Volume Name: replica1
> Type: Replicate
> Volume ID: 5331fac2-42b6-4530-bf79-1ec0236efbc4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: 10.250.1.2:/array0/brick1
> Brick2: 10.250.1.1:/raid0array1/brick2
> Options Reconfigured:
> client.bind-insecure: off
> performance.client-io-threads: off
> auth.allow: 10.250.1.1,10.250.1.2
> transport.address-family: inet
> nfs.disable: on
> server.event-threads: 8
> performance.io-thread-count: 64
> performance.cache-size: 32MB
> performance.write-behind-window-size: 64MB
> server.allow-insecure: off
> [root@BUNKER ~]# netstat -tap | grep gluster
> tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      1334/glusterfsd
> tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      866/glusterd
> tcp        0      0 BUNKER:49152            10.250.1.2:1019         ESTABLISHED 1334/glusterfsd
> tcp        0      0 BUNKER:exp2             10.250.1.2:49152        ESTABLISHED 1149/glusterfs
> tcp        0      0 BUNKER:24007            BUNKER:49149            ESTABLISHED 866/glusterd
> tcp        0      0 BUNKER:1020             BUNKER:49152            ESTABLISHED 1149/glusterfs
> tcp        0      0 BUNKER:49151            10.250.1.2:24007        ESTABLISHED 866/glusterd
> tcp        0      0 BUNKER:49152            BUNKER:1020             ESTABLISHED 1334/glusterfsd
> tcp        0      0 localhost:49150         localhost:24007         ESTABLISHED 1149/glusterfs
> tcp        0      0 localhost:24007         localhost:49150         ESTABLISHED 866/glusterd
> tcp        0      0 BUNKER:49149            BUNKER:24007            ESTABLISHED 1334/glusterfsd

Both *were already* set; I meant these two options:
server.allow-insecure: off
client.bind-insecure: off
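
For completeness, this is roughly how those two options can be checked and
re-applied on the replica1 volume (a minimal sketch, not output captured from
my nodes; it assumes "gluster volume get" and "gluster volume set" are
available in this release):

[root@ProdigyX ~]# gluster volume get replica1 server.allow-insecure
[root@ProdigyX ~]# gluster volume get replica1 client.bind-insecure
[root@ProdigyX ~]# gluster volume set replica1 server.allow-insecure off
[root@ProdigyX ~]# gluster volume set replica1 client.bind-insecure off

As far as I know, a change to either option only takes effect once the volume
is stopped and started again and the clients remount.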
