[Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5

Daniel Müller mueller at tropenklinik.de
Thu Jul 31 12:33:38 UTC 2014


Working, 172.17.2.30:
[root at centclust2 glusterfs]# telnet 172.17.2.30 49152
Trying 172.17.2.30...
Connected to 172.17.2.30.
Escape character is '^]'.

Working, 192.168.135.36:

[root at centclust1 ssl]# telnet 192.168.135.36 49152
Trying 192.168.135.36...
Connected to centclust1 (192.168.135.36).
Escape character is '^]'.

Working, 172.17.2.31:

[root at centclust1 ~]# telnet 172.17.2.31 49152
Trying 172.17.2.31...
Connected to centclust2 (172.17.2.31).
Escape character is '^]'.


Working, 192.168.135.46:

[root at centclust1 ~]# telnet 192.168.135.46 49152
Trying 192.168.135.46...
Connected to centclust2 (192.168.135.46).
Escape character is '^]'.

This worked even before!
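
Since the earlier client log showed a timeout on port 24007, it may also be worth repeating the same test against the glusterd management port from both interfaces (just a suggestion, not something I have run here; 24007 is the standard glusterd port):

telnet 172.17.2.30 24007
telnet 192.168.135.36 24007
telnet 172.17.2.31 24007
telnet 192.168.135.46 24007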

Greetings
Daniel


EDV Daniel Müller

Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen 
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de





-----Original Message-----
From: Vijaikumar M [mailto:vmallika at redhat.com]
Sent: Thursday, July 31, 2014 13:33
To: mueller at tropenklinik.de
Cc: 'Krishnan Parthasarathi'; gluster-devel-bounces at gluster.org; gluster-users at gluster.org
Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5

Hi Daniel,

Check if telnet works on the brick port from both interfaces.

telnet 172.17.2.30    <brick-port1>
telnet 192.168.135.36 <brick-port1>

telnet 172.17.2.31    <brick-port2>
telnet 192.168.135.46 <brick-port2>
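
If the brick ports are not known offhand, they can be read from the Port column of the volume status output, for example (using the volume name from this thread):

gluster volume status smbbackup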


Thanks,
Vijay


On Thursday 31 July 2014 04:37 PM, Daniel Müller wrote:
> So,
>
> [root at centclust1 ~]# ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:25:90:80:D9:E8
>            inet addr:172.17.2.30  Bcast:172.17.2.255  Mask:255.255.255.0
>            inet6 addr: fe80::225:90ff:fe80:d9e8/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:3506528 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:169905 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:476128477 (454.0 MiB)  TX bytes:18788266 (17.9 MiB)
>            Memory:fe860000-fe880000
>
> eth1      Link encap:Ethernet  HWaddr 00:25:90:80:D9:E9
>            inet addr:192.168.135.36  Bcast:192.168.135.255  Mask:255.255.255.0
>            inet6 addr: fe80::225:90ff:fe80:d9e9/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:381664693 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:380924973 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:477454156923 (444.6 GiB)  TX bytes:476729269342 (443.9 GiB)
>            Memory:fe8e0000-fe900000
>
> lo        Link encap:Local Loopback
>            inet addr:127.0.0.1  Mask:255.0.0.0
>            inet6 addr: ::1/128 Scope:Host
>            UP LOOPBACK RUNNING  MTU:16436  Metric:1
>            RX packets:93922879 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:93922879 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:0
>            RX bytes:462579764180 (430.8 GiB)  TX bytes:462579764180 (430.8 GiB)
>
>
> [root at centclust2 ~]# ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:25:90:80:EF:00
>            inet addr:172.17.2.31  Bcast:172.17.2.255  Mask:255.255.255.0
>            inet6 addr: fe80::225:90ff:fe80:ef00/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:1383117 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:45828 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:185634714 (177.0 MiB)  TX bytes:5357926 (5.1 MiB)
>            Memory:fe860000-fe880000
>
> eth1      Link encap:Ethernet  HWaddr 00:25:90:80:EF:01
>            inet addr:192.168.135.46  Bcast:192.168.135.255  Mask:255.255.255.0
>            inet6 addr: fe80::225:90ff:fe80:ef01/64 Scope:Link
>            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>            RX packets:340364283 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:59930672 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:1000
>            RX bytes:473823738544 (441.2 GiB)  TX bytes:9973035418 (9.2 GiB)
>            Memory:fe8e0000-fe900000
>
> lo        Link encap:Local Loopback
>            inet addr:127.0.0.1  Mask:255.0.0.0
>            inet6 addr: ::1/128 Scope:Host
>            UP LOOPBACK RUNNING  MTU:16436  Metric:1
>            RX packets:1102979 errors:0 dropped:0 overruns:0 frame:0
>            TX packets:1102979 errors:0 dropped:0 overruns:0 carrier:0
>            collisions:0 txqueuelen:0
>            RX bytes:126066547 (120.2 MiB)  TX bytes:126066547 (120.2 MiB)
>
>
> [root at centclust1 ~]# route
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.135.0   *               255.255.255.0   U     1      0        0 eth1
> 172.17.2.0      *               255.255.255.0   U     1      0        0 eth0
> default         s4master        0.0.0.0         UG    0      0        0 eth1
>
>
> [root at centclust2 ~]# route
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.135.0   *               255.255.255.0   U     0      0        0 eth1
> 172.17.2.0      *               255.255.255.0   U     0      0        0 eth0
> link-local      *               255.255.0.0     U     1002   0        0 eth0
> link-local      *               255.255.0.0     U     1003   0        0 eth1
> default         s4master        0.0.0.0         UG    0      0        0 eth1
>
> [root at centclust1 ~]# route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.135.0   0.0.0.0         255.255.255.0   U     1      0        0 eth1
> 172.17.2.0      0.0.0.0         255.255.255.0   U     1      0        0 eth0
> 0.0.0.0         192.168.135.230 0.0.0.0         UG    0      0        0 eth1
>
> [root at centclust2 ~]# route -n
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> 192.168.135.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
> 172.17.2.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
> 169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
> 169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
> 0.0.0.0         192.168.135.230 0.0.0.0         UG    0      0        0 eth1
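>
> For completeness, a quick way to see which source address and interface the kernel picks for each peer address (assuming iproute2 is installed; this was not part of the requested output):
>
> ip route get 192.168.135.46
> ip route get 172.17.2.31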
>
>
>
> EDV Daniel Müller
>
> Leitung EDV
> Tropenklinik Paul-Lechler-Krankenhaus
> Paul-Lechler-Str. 24
> 72076 Tübingen
> Tel.: 07071/206-463, Fax: 07071/206-499
> eMail: mueller at tropenklinik.de
> Internet: www.tropenklinik.de
>
>
>
>
>
> -----Original Message-----
> From: Krishnan Parthasarathi [mailto:kparthas at redhat.com]
> Sent: Thursday, July 31, 2014 12:55
> To: mueller at tropenklinik.de
> Cc: gluster-devel-bounces at gluster.org; gluster-users at gluster.org
> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>
> Daniel,
>
> Could you provide the following details from your original two-NIC setup, where you probed using the hostname?
> 1) Output of ifconfig for the two NICs on both nodes.
> 2) Output of route from both nodes.
>
> ~KP
> ----- Original Message -----
>> Hello and thank you so far,
>> What I have noticed is that running more than one NIC confuses
>> glusterfs 3.5. I never saw this on my glusterfs 3.4 and 3.2
>> systems, which are still working.
>> So I cleanly removed gluster with yum erase glusterfs* and did the following:
>> Logged in to both of my nodes in the 135 subnet, e.g.:
>> ssh 192.168.135.36 (centclust1)  (172.17.2.30 is the 2nd NIC)
>> ssh 192.168.135.46 (centclust2)  (172.17.2.31 is the 2nd NIC)
>> Started gluster on both nodes: service glusterd start
>> Did the peer probe on 192.168.135.36/centclust1:
>> gluster peer probe 192.168.135.46  // Previously I did: gluster peer probe centclust2
>> This results in:
>> [root at centclust1 ~]# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: 192.168.135.46
>> Uuid: c395c15d-5187-4e5b-b680-57afcb88b881
>> State: Peer in Cluster (Connected)
>>
>> [root at centclust2 backup]# gluster peer status
>> Number of Peers: 1
>>
>> Hostname: 192.168.135.36
>> Uuid: 94d5903b-ebe9-40d6-93bf-c2f2e92909a0
>> State: Peer in Cluster (Connected)
>> The significant difference: gluster now shows the IPs of both nodes.
>>
>> Now I created the replicated volume:
>> gluster volume create smbcluster replica 2 transport tcp 
>> 192.168.135.36:/sbu/glusterfs/export
>> 192.168.135.46:/sbu/glusterfs/export
>> started the volume
>> gluster volume status
>> Status of volume: smbcluster
>> Gluster process                                         Port    Online  Pid
>> ------------------------------------------------------------------------------
>> Brick 192.168.135.36:/sbu/glusterfs/export              49152   Y       27421
>> Brick 192.168.135.46:/sbu/glusterfs/export              49152   Y       12186
>> NFS Server on localhost                                 2049    Y       27435
>> Self-heal Daemon on localhost                           N/A     Y       27439
>> NFS Server on 192.168.135.46                            2049    Y       12200
>> Self-heal Daemon on 192.168.135.46                      N/A     Y       12204
>>
>> Task Status of Volume smbcluster
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>> Mounted the volumes:
>>
>> centclust1: mount -t glusterfs 192.168.135.36:/smbcluster /mntgluster -o acl
>> centclust2: mount -t glusterfs 192.168.135.46:/smbcluster /mntgluster -o acl
>>
>> And BINGO, up and running!
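>>
>> To make these mounts survive a reboot, an /etc/fstab entry along these lines should work (a sketch, not part of my original setup):
>>
>> 192.168.135.36:/smbcluster  /mntgluster  glusterfs  defaults,acl,_netdev  0 0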
>>
>>
>> EDV Daniel Müller
>>
>> Leitung EDV
>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>> 72076 Tübingen
>> Tel.: 07071/206-463, Fax: 07071/206-499
>> eMail: mueller at tropenklinik.de
>> Internet: www.tropenklinik.de
>>
>>
>>
>>
>> -----Original Message-----
>> From: Krishnan Parthasarathi [mailto:kparthas at redhat.com]
>> Sent: Wednesday, July 30, 2014 16:52
>> To: mueller at tropenklinik.de
>> Cc: gluster-devel-bounces at gluster.org; gluster-users at gluster.org
>> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>>
>> Daniel,
>>
>> I didn't get a chance to follow up with debugging this issue. I will
>> look into this and get back to you. I suspect that there is something
>> different about the network layer behaviour in your setup.
>>
>> ~KP
>>
>> ----- Original Message -----
>>> Just another test:
>>> [root at centclust1 sicherung]# getfattr -d -e hex -m . /sicherung/bu
>>> getfattr: Removing leading '/' from absolute path names
>>> # file: sicherung/bu
>>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
>>> trusted.afr.smbbackup-client-0=0x000000000000000000000000
>>> trusted.afr.smbbackup-client-1=0x000000000000000200000001
>>> trusted.gfid=0x00000000000000000000000000000001
>>> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>>> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
>>>
>>> [root at centclust2 glusterfs]# getfattr -d -e hex -m . /sicherung/bu
>>> getfattr: Removing leading '/' from absolute path names
>>> # file: sicherung/bu
>>> security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
>>> trusted.afr.smbbackup-client-0=0x000000000000000200000001
>>> trusted.afr.smbbackup-client-1=0x000000000000000000000000
>>> trusted.gfid=0x00000000000000000000000000000001
>>> trusted.glusterfs.dht=0x000000010000000000000000ffffffff
>>> trusted.glusterfs.volume-id=0x6f51d002e634437db58d9b952693f1df
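>>>
>>> A rough way to read these values, assuming the usual AFR changelog layout: each trusted.afr.* value packs three 32-bit counters for pending data, metadata and entry operations against the other brick, so 0x000000000000000200000001 would mean 0 data, 2 metadata and 1 entry operation still pending. With both bricks accusing each other, the heal state is worth checking, for example:
>>>
>>> gluster volume heal smbbackup info
>>> gluster volume heal smbbackup info split-brain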
>>>
>>> Is this ok?
>>>
>>> After long testing and doing a /etc/init.d/network restart, the
>>> replication started once for a short time and then stopped again!?
>>> Any idea?
>>>
>>>
>>> EDV Daniel Müller
>>>
>>> Leitung EDV
>>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>>> 72076 Tübingen
>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>> eMail: mueller at tropenklinik.de
>>> Internet: www.tropenklinik.de
>>>
>>> "Der Mensch ist die Medizin des Menschen"
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: Krishnan Parthasarathi [mailto:kparthas at redhat.com]
>>> Sent: Wednesday, July 30, 2014 11:09
>>> To: mueller at tropenklinik.de
>>> Cc: gluster-devel-bounces at gluster.org; gluster-users at gluster.org
>>> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>>>
>>> Could you provide the output of the following command?
>>>
>>> netstat -ntap | grep gluster
>>>
>>> This should tell us if glusterfsd processes (bricks) are listening
>>> on all interfaces.
>>>
>>> ~KP
>>>
>>> ----- Original Message -----
>>>> Just one idea:
>>>> I added a second NIC with a 172.17.2... address on both machines.
>>>> Could this be causing the trouble!?
>>>>
>>>> EDV Daniel Müller
>>>>
>>>> Leitung EDV
>>>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>>>> 72076 Tübingen
>>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>>> eMail: mueller at tropenklinik.de
>>>> Internet: www.tropenklinik.de
>>>>
>>>>
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Krishnan Parthasarathi [mailto:kparthas at redhat.com]
>>>> Sent: Wednesday, July 30, 2014 09:29
>>>> To: mueller at tropenklinik.de
>>>> Cc: gluster-devel-bounces at gluster.org; gluster-users at gluster.org
>>>> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>>>>
>>>> Daniel,
>>>>
>>>> From a quick look, I see that glustershd and the NFS client are
>>>> unable to connect to one of the bricks. This results in data
>>>> from the mounts being written to the local bricks only.
>>>> I should have asked this before: could you provide the brick logs
>>>> as well?
>>>>
>>>> Could you also try to connect to the bricks using telnet?
>>>> For example, from centclust1: telnet centclust2 <brick-port>.
>>>>
>>>> ~KP
>>>>
>>>> ----- Original Message -----
>>>>> Here are my logs. I disabled SSL in the meantime, but the situation is the same.
>>>>> No replication!?
>>>>>
>>>>>
>>>>>
>>>>> EDV Daniel Müller
>>>>>
>>>>> Leitung EDV
>>>>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>>>>> 72076 Tübingen
>>>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>>>> eMail: mueller at tropenklinik.de
>>>>> Internet: www.tropenklinik.de
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Krishnan Parthasarathi [mailto:kparthas at redhat.com]
>>>>> Sent: Wednesday, July 30, 2014 08:56
>>>>> To: mueller at tropenklinik.de
>>>>> Cc: gluster-users at gluster.org; gluster-devel-bounces at gluster.org
>>>>> Subject: Re: [Gluster-users] WG: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>>>>>
>>>>> Could you attach the entire mount and glustershd log files to
>>>>> this thread?
>>>>>
>>>>> ~KP
>>>>>
>>>>> ----- Original Message -----
>>>>>> NO ONE!??
>>>>>> This is an entry from my glustershd.log:
>>>>>> [2014-07-30 06:40:59.294334] W [client-handshake.c:1846:client_dump_version_cbk] 0-smbbackup-client-1: received RPC status error
>>>>>> [2014-07-30 06:40:59.294352] I [client.c:2229:client_rpc_notify] 0-smbbackup-client-1: disconnected from 172.17.2.31:49152. Client process will keep trying to connect to glusterd until brick's port is available
>>>>>>
>>>>>>
>>>>>> This is from mnt-sicherung.log:
>>>>>> [2014-07-30 06:40:38.259850] E [socket.c:2820:socket_connect] 1-smbbackup-client-0: connection attempt on 172.17.2.30:24007 failed, (Connection timed out)
>>>>>> [2014-07-30 06:40:41.275120] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 1-smbbackup-client-0: changing port to 49152 (from 0)
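>>>>>>
>>>>>> A "Connection timed out" on 24007 can also come from a host firewall; a quick check and a temporary workaround on CentOS 6 would be something like this (a generic suggestion, not taken from this setup):
>>>>>>
>>>>>> iptables -L -n --line-numbers
>>>>>> iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
>>>>>> iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
>>>>>> service iptables save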
>>>>>>
>>>>>> [root at centclust1 sicherung]# gluster --remote-host=centclust1 peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: centclust2
>>>>>> Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
>>>>>> State: Peer in Cluster (Connected)
>>>>>>
>>>>>> [root at centclust1 sicherung]# gluster --remote-host=centclust2 peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: 172.17.2.30
>>>>>> Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
>>>>>> State: Peer in Cluster (Connected)
>>>>>>
>>>>>> [root at centclust1 ssl]# ps aux | grep gluster
>>>>>> root     13655  0.0  0.0 413848 16872 ?        Ssl  08:10   0:00
>>>>>> /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
>>>>>> root     13958  0.0  0.0 12139920 44812 ?      Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfsd -s centclust1.tplk.loc --volfile-id
>>>>>> smbbackup.centclust1.tplk.loc.sicherung-bu -p
>>>>>> /var/lib/glusterd/vols/smbbackup/run/centclust1.tplk.loc-sicherung-bu.
>>>>>> pid -S /var/run/4c65260e12e2d3a9a5549446f491f383.socket
>>>>>> --brick-name /sicherung/bu -l
>>>>>> /var/log/glusterfs/bricks/sicherung-bu.log
>>>>>> --xlator-option
>>>>>> *-posix.glusterd-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
>>>>>> --brick-port
>>>>>> 49152 --xlator-option smbbackup-server.listen-port=49152
>>>>>> root     13972  0.0  0.0 815748 58252 ?        Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>>>>> /var/lib/glusterd/nfs/run/nfs.pid -l
>>>>>> /var/log/glusterfs/nfs.log -S /var/run/ee6f37fc79b9cb1968eca387930b39fb.socket
>>>>>> root     13976  0.0  0.0 831160 29492 ?        Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id
>>>>>> gluster/glustershd -p
>>>>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>>>>> /var/log/glusterfs/glustershd.log -S
>>>>>> /var/run/aa970d146eb23ba7124e6c4511879850.socket --xlator-option *replicate*.node-uuid=99fe6a2c-df7e-4475-a7bc-a35abba620fb
>>>>>> root     15781  0.0  0.0 105308   932 pts/1    S+   08:47   0:00 grep
>>>>>> gluster
>>>>>> root     29283  0.0  0.0 451116 56812 ?        Ssl  Jul29   0:21
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>>>>> /var/lib/glusterd/nfs/run/nfs.pid -l
>>>>>> /var/log/glusterfs/nfs.log -S /var/run/a7fcb1d1d3a769d28df80b85ae5d13c4.socket
>>>>>> root     29287  0.0  0.0 335432 25848 ?        Ssl  Jul29   0:21
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id
>>>>>> gluster/glustershd -p
>>>>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>>>>> /var/log/glusterfs/glustershd.log -S
>>>>>> /var/run/833e60f976365c2a307f92fb233942a2.socket --xlator-option *replicate*.node-uuid=64b1a7eb-2df3-47bd-9379-39c29e5a001a
>>>>>> root     31698  0.0  0.0 1438392 57952 ?       Ssl  Jul29   0:12
>>>>>> /usr/sbin/glusterfs --acl --volfile-server=centclust1.tplk.loc
>>>>>> --volfile-id=/smbbackup /mnt/sicherung
>>>>>>
>>>>>> [root at centclust2 glusterfs]#  ps aux | grep gluster
>>>>>> root      1561  0.0  0.0 1481492 60152 ?       Ssl  Jul29   0:12
>>>>>> /usr/sbin/glusterfs --acl --volfile-server=centclust2.tplk.loc
>>>>>> --volfile-id=/smbbackup /mnt/sicherung
>>>>>> root     15656  0.0  0.0 413848 16832 ?        Ssl  08:11   0:01
>>>>>> /usr/sbin/glusterd --pid-file=/var/run/glusterd.pid
>>>>>> root     15942  0.0  0.0 12508704 43860 ?      Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfsd -s centclust2.tplk.loc --volfile-id
>>>>>> smbbackup.centclust2.tplk.loc.sicherung-bu -p
>>>>>> /var/lib/glusterd/vols/smbbackup/run/centclust2.tplk.loc-sicherung-bu.
>>>>>> pid -S /var/run/40a554af3860eddd5794b524576d0520.socket
>>>>>> --brick-name /sicherung/bu -l
>>>>>> /var/log/glusterfs/bricks/sicherung-bu.log
>>>>>> --xlator-option
>>>>>> *-posix.glusterd-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11
>>>>>> --brick-port
>>>>>> 49152 --xlator-option smbbackup-server.listen-port=49152
>>>>>> root     15956  0.0  0.0 825992 57496 ?        Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>>>>> /var/lib/glusterd/nfs/run/nfs.pid -l
>>>>>> /var/log/glusterfs/nfs.log -S /var/run/602d1d8ba7b80ded2b70305ed7417cf5.socket
>>>>>> root     15960  0.0  0.0 841404 26760 ?        Ssl  08:11   0:00
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id
>>>>>> gluster/glustershd -p
>>>>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>>>>> /var/log/glusterfs/glustershd.log -S
>>>>>> /var/run/504d01c7f7df8b8306951cc2aaeaf52c.socket
>>>>>> --xlator-option
>>>>>> *replicate*.node-uuid=4f15e9bd-9b5a-435b-83d2-4ed202c66b11
>>>>>> root     17728  0.0  0.0 105312   936 pts/0    S+   08:48   0:00 grep
>>>>>> gluster
>>>>>> root     32363  0.0  0.0 451100 55584 ?        Ssl  Jul29   0:21
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p
>>>>>> /var/lib/glusterd/nfs/run/nfs.pid -l
>>>>>> /var/log/glusterfs/nfs.log -S /var/run/73054288d1cadfb87b4b9827bd205c7b.socket
>>>>>> root     32370  0.0  0.0 335432 26220 ?        Ssl  Jul29   0:21
>>>>>> /usr/sbin/glusterfs -s localhost --volfile-id
>>>>>> gluster/glustershd -p
>>>>>> /var/lib/glusterd/glustershd/run/glustershd.pid -l
>>>>>> /var/log/glusterfs/glustershd.log -S
>>>>>> /var/run/de1427ce373c792c76c38b12c106f029.socket
>>>>>> --xlator-option
>>>>>> *replicate*.node-uuid=83e6d78c-0119-4537-8922-b3e731718864
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Leitung EDV
>>>>>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>>>>>> 72076 Tübingen
>>>>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>>>>> eMail: mueller at tropenklinik.de
>>>>>> Internet: www.tropenklinik.de
>>>>>>
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Daniel Müller [mailto:mueller at tropenklinik.de]
>>>>>> Sent: Tuesday, July 29, 2014 16:02
>>>>>> To: 'gluster-users at gluster.org'
>>>>>> Subject: Strange issue concerning glusterfs 3.5.1 on centos 6.5
>>>>>>
>>>>>> Dear all,
>>>>>>
>>>>>> there is a strange issue with CentOS 6.5 and glusterfs 3.5.1:
>>>>>>
>>>>>>   glusterd -V
>>>>>> glusterfs 3.5.1 built on Jun 24 2014 15:09:41
>>>>>> Repository revision: git://git.gluster.com/glusterfs.git
>>>>>> Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
>>>>>> GlusterFS comes with ABSOLUTELY NO WARRANTY.
>>>>>> It is licensed to you under your choice of the GNU Lesser
>>>>>> General Public License, version 3 or any later version (LGPLv3
>>>>>> or later), or the GNU General Public License, version 2
>>>>>> (GPLv2), in all cases as published by the Free Software
>>>>>> Foundation
>>>>>>
>>>>>> I am trying to set up a replicated 2-brick volume on two CentOS 6.5 servers.
>>>>>> The peer probe works and my nodes report no errors:
>>>>>>   
>>>>>> [root at centclust1 mnt]# gluster peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: centclust2
>>>>>> Uuid: 4f15e9bd-9b5a-435b-83d2-4ed202c66b11
>>>>>> State: Peer in Cluster (Connected)
>>>>>>
>>>>>> [root at centclust2 sicherung]# gluster peer status
>>>>>> Number of Peers: 1
>>>>>>
>>>>>> Hostname: 172.17.2.30
>>>>>> Uuid: 99fe6a2c-df7e-4475-a7bc-a35abba620fb
>>>>>> State: Peer in Cluster (Connected)
>>>>>>
>>>>>> Now I set up a replicated volume on an XFS disk: /dev/sdb1 on
>>>>>> /sicherung type xfs (rw)
>>>>>>
>>>>>> gluster volume create smbbackup replica 2 transport tcp
>>>>>> centclust1.tplk.loc:/sicherung/bu
>>>>>> centclust2.tplk.loc:/sicherung/bu
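>>>>>>
>>>>>> To double-check the replica configuration after creating it, the volume info output should list both bricks and "Type: Replicate", for example:
>>>>>>
>>>>>> gluster volume info smbbackup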
>>>>>>
>>>>>> gluster volume status smbbackup reports OK:
>>>>>>
>>>>>> [root at centclust1 mnt]# gluster volume status smbbackup
>>>>>> Status of volume: smbbackup
>>>>>> Gluster process                                         Port    Online  Pid
>>>>>> ------------------------------------------------------------------------------
>>>>>> Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
>>>>>> Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
>>>>>> NFS Server on localhost                                 2049    Y       31983
>>>>>> Self-heal Daemon on localhost                           N/A     Y       31987
>>>>>> NFS Server on centclust2                                2049    Y       2138
>>>>>> Self-heal Daemon on centclust2                          N/A     Y       2142
>>>>>>
>>>>>> Task Status of Volume smbbackup
>>>>>> ------------------------------------------------------------------------------
>>>>>> There are no active volume tasks
>>>>>>
>>>>>> [root at centclust2 sicherung]# gluster volume status smbbackup
>>>>>> Status of volume: smbbackup
>>>>>> Gluster process                                         Port    Online  Pid
>>>>>> ------------------------------------------------------------------------------
>>>>>> Brick centclust1.tplk.loc:/sicherung/bu                 49152   Y       31969
>>>>>> Brick centclust2.tplk.loc:/sicherung/bu                 49152   Y       2124
>>>>>> NFS Server on localhost                                 2049    Y       2138
>>>>>> Self-heal Daemon on localhost                           N/A     Y       2142
>>>>>> NFS Server on 172.17.2.30                               2049    Y       31983
>>>>>> Self-heal Daemon on 172.17.2.30                         N/A     Y       31987
>>>>>>
>>>>>> Task Status of Volume smbbackup
>>>>>> ------------------------------------------------------------------------------
>>>>>> There are no active volume tasks
>>>>>>
>>>>>> I mounted the volume on both servers with:
>>>>>>
>>>>>> mount -t glusterfs centclust1.tplk.loc:/smbbackup /mnt/sicherung -o acl
>>>>>> mount -t glusterfs centclust2.tplk.loc:/smbbackup /mnt/sicherung -o acl
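>>>>>>
>>>>>> One way to see whether each FUSE mount actually connected to both bricks is the clients listing of volume status (assuming this sub-command is available in 3.5):
>>>>>>
>>>>>> gluster volume status smbbackup clients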
>>>>>>
>>>>>> But when I write to /mnt/sicherung, the files are not
>>>>>> replicated to the other node at all!??
>>>>>>
>>>>>> They stay on the local server in /mnt/sicherung and
>>>>>> /sicherung/bu, separately on each node:
>>>>>> [root at centclust1 sicherung]# pwd
>>>>>> /mnt/sicherung
>>>>>>
>>>>>> [root at centclust1 sicherung]# touch test.txt
>>>>>> [root at centclust1 sicherung]# ls
>>>>>> test.txt
>>>>>> [root at centclust2 sicherung]# pwd
>>>>>> /mnt/sicherung
>>>>>> [root at centclust2 sicherung]# ls
>>>>>> more.txt
>>>>>> [root at centclust1 sicherung]# ls -la /sicherung/bu
>>>>>> total 0
>>>>>> drwxr-xr-x.  3 root root  38 29. Jul 15:56 .
>>>>>> drwxr-xr-x.  3 root root  15 29. Jul 14:31 ..
>>>>>> drw-------. 15 root root 142 29. Jul 15:56 .glusterfs
>>>>>> -rw-r--r--.  2 root root   0 29. Jul 15:56 test.txt
>>>>>> [root at centclust2 sicherung]# ls -la /sicherung/bu
>>>>>> total 0
>>>>>> drwxr-xr-x. 3 root root 38 29. Jul 15:32 .
>>>>>> drwxr-xr-x. 3 root root 15 29. Jul 14:31 ..
>>>>>> drw-------. 7 root root 70 29. Jul 15:32 .glusterfs
>>>>>> -rw-r--r--. 2 root root  0 29. Jul 15:32 more.txt
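>>>>>>
>>>>>> Once the connectivity problem is fixed, the pending files can be listed and re-synchronised with the self-heal commands, for example:
>>>>>>
>>>>>> gluster volume heal smbbackup info
>>>>>> gluster volume heal smbbackup full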
>>>>>>
>>>>>>
>>>>>>
>>>>>> Greetings
>>>>>> Daniel
>>>>>>
>>>>>>
>>>>>>
>>>>>> EDV Daniel Müller
>>>>>>
>>>>>> Leitung EDV
>>>>>> Tropenklinik Paul-Lechler-Krankenhaus Paul-Lechler-Str. 24
>>>>>> 72076 Tübingen
>>>>>> Tel.: 07071/206-463, Fax: 07071/206-499
>>>>>> eMail: mueller at tropenklinik.de
>>>>>> Internet: www.tropenklinik.de
>>>>>>
>>>>>>
>>>>>>
>>>>>>




