[Gluster-users] AttributeError: python: undefined symbol: gf_changelog_register

Marco marco.brignoli at marcobaldo.ch
Wed May 27 11:56:03 UTC 2015


Hello.

Thanks for your answer.

I have installed libgfchangelog on all nodes; it is probably a missing
dependency of the Gluster RPM on OpenSuSE (the libgfchangelog RPM was
not installed automatically).
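For context on why a missing library surfaces as "undefined symbol": gsyncd's libgfchangelog.py loads libgfchangelog.so through ctypes and looks the function up as an attribute. A minimal stand-alone sketch of the same lookup mechanics, using libm as a stand-in library (libgfchangelog itself may not be installed on the machine running this):

```python
import ctypes
import ctypes.util

# Load a shared library the way gsyncd's libgfchangelog.py wraps
# libgfchangelog.so; libm is used here only as an illustration.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# A symbol the library exports resolves to a callable function pointer.
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]
print(cos(0.0))  # -> 1.0

# A symbol the library does not export raises the same AttributeError
# seen in the geo-replication log.
try:
    libm.gf_changelog_register
except AttributeError as exc:
    print("AttributeError:", exc)
```

So the traceback in my first mail means the symbol could not be resolved at lookup time, which fits a missing or stale libgfchangelog.so.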
 
However, I still have an issue with the changelog change-detection mode:

[2015-05-27 12:18:37.475994] I [monitor(monitor):222:distribute] <top>: slave bricks: [{'host': 'gluster3.marcobaldo.ch', 'dir': '/gluster_slave'}]
[2015-05-27 12:18:37.476838] I [monitor(monitor):238:distribute] <top>: worker specs: [('/gluster', 'ssh://root@gluster3.marcobaldo.ch:gluster://localhost:volume1_slave')]
[2015-05-27 12:18:37.477180] I [monitor(monitor):81:set_state] Monitor: new state: Initializing...
[2015-05-27 12:18:37.492440] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2015-05-27 12:18:37.492871] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2015-05-27 12:18:37.582231] I [gsyncd(/gluster):532:main_i] <top>: syncing: gluster://localhost:volume1 -> ssh://root@gluster3.marcobaldo.ch:gluster://localhost:volume1_slave
[2015-05-27 12:18:41.440675] I [master(/gluster):58:gmaster_builder] <top>: setting up xsync change detection mode
[2015-05-27 12:18:41.441387] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-27 12:18:41.443038] I [master(/gluster):58:gmaster_builder] <top>: setting up changelog change detection mode
[2015-05-27 12:18:41.443597] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-27 12:18:41.445401] I [master(/gluster):1103:register] _GMaster: xsync temp directory: /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync
[2015-05-27 12:18:51.463219] I [master(/gluster):682:fallback_xsync] _GMaster: falling back to xsync mode
[2015-05-27 12:18:51.478179] I [syncdutils(/gluster):192:finalize] <top>: exiting.
[2015-05-27 12:18:52.455018] I [monitor(monitor):157:monitor] Monitor: worker(/gluster) died in startup phase
[2015-05-27 12:18:52.455300] I [monitor(monitor):81:set_state] Monitor: new state: faulty
[2015-05-27 12:19:02.475231] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2015-05-27 12:19:02.475580] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2015-05-27 12:19:02.582557] I [gsyncd(/gluster):532:main_i] <top>: syncing: gluster://localhost:volume1 -> ssh://root@gluster3.marcobaldo.ch:gluster://localhost:volume1_slave
[2015-05-27 12:19:05.739321] I [master(/gluster):58:gmaster_builder] <top>: setting up xsync change detection mode
[2015-05-27 12:19:05.739571] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-27 12:19:05.740775] I [master(/gluster):58:gmaster_builder] <top>: setting up xsync change detection mode
[2015-05-27 12:19:05.741525] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
[2015-05-27 12:19:05.744210] I [master(/gluster):1103:register] _GMaster: xsync temp directory: /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync
[2015-05-27 12:19:05.744954] I [master(/gluster):1103:register] _GMaster: xsync temp directory: /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync
[2015-05-27 12:19:05.752686] I [master(/gluster):421:crawlwrap] _GMaster: primary master with volume id 0952d1ce-f62c-40b6-809a-4e193db0f1f9 ...
[2015-05-27 12:19:05.780159] W [master(/gluster):327:get_initial_crawl_data] _GMaster: Creating new gconf.state_detail_file.
[2015-05-27 12:19:05.780477] I [master(/gluster):432:crawlwrap] _GMaster: crawl interval: 60 seconds
[2015-05-27 12:19:05.781547] I [master(/gluster):912:update_worker_status] _GMaster: Creating new /var/lib/glusterd/geo-replication/volume1_gluster3.marcobaldo.ch_volume1_slave/_gluster.status
[2015-05-27 12:19:05.799737] I [master(/gluster):1124:crawl] _GMaster: starting hybrid crawl...
[2015-05-27 12:19:07.812315] I [master(/gluster):1133:crawl] _GMaster: processing xsync changelog /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync/XSYNC-CHANGELOG.1432721945
[2015-05-27 12:20:02.796933] I [monitor(monitor):81:set_state] Monitor: new state: Stable
[2015-05-27 12:24:41.907495] I [master(/gluster):1133:crawl] _GMaster: processing xsync changelog /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync/XSYNC-CHANGELOG.1432721948
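As an aside, the long directory names in these log lines are just the slave URL, percent-encoded so it can be used as a path component. A quick sketch (Python 3 shown for convenience, although gsyncd itself runs on Python 2) for decoding them when reading logs:

```python
from urllib.parse import unquote

# Directory-name component taken from the xsync temp path in the log.
encoded = ("ssh%3A%2F%2Froot%40192.168.178.233%3A"
           "gluster%3A%2F%2F127.0.0.1%3Avolume1_slave")

# Undo the percent-encoding to recover the readable slave URL.
print(unquote(encoded))
# -> ssh://root@192.168.178.233:gluster://127.0.0.1:volume1_slave
```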


With xsync (the fallback) it works:

# gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE                                    STATUS     CHECKPOINT STATUS    CRAWL STATUS
-----------------------------------------------------------------------------------------------------------------------------------
fs1            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Active     N/A                  Hybrid Crawl
fs2            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Passive    N/A                  N/A


The geo-replication session is created with this sequence:

gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave create
gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave config change-detector changelog
gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave start
gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status

And the volume info is:

Volume Name: volume1
Type: Replicate
Volume ID: 0952d1ce-f62c-40b6-809a-4e193db0f1f9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1.marcobaldo.ch:/gluster
Brick2: gluster2.marcobaldo.ch:/gluster
Options Reconfigured:
nfs.disable: off
geo-replication.indexing: on
geo-replication.ignore-pid-check: on
changelog.changelog: on
 
Volume Name: volume1_slave
Type: Distribute
Volume ID: 95610eee-84c2-4ac0-8f0f-7531e942e77c
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster3.marcobaldo.ch:/gluster_slave

I could not find a log file with useful debugging information (on
gluster3.marcobaldo.ch maybe? Which one?).

If this matters: gluster1 and gluster2 are 64-bit KVM VMs, while gluster3
is a 32-bit physical machine intended to be a backup server (all run
up-to-date OpenSuSE 13.2).

May I ask where I can find information to understand why the
geo-replication worker fails, enters the faulty state, and is then
switched to xsync?

Thanks

Marco

On 26.05.15 08:21, Kotresh Hiremath Ravishankar wrote:
> Hi Marco,
>
> 'gf_changelog_register' is an API exposed from the shared library 'libgfchangelog.so'.
> Please check whether 'libgfchangelog.so' is available to the linker using the following command.
>
> #ldconfig -p | grep libgfchangelog
>
> If it is not found, please find where the libgfchangelog.so is installed and run ldconfig
> on it.
>
> e.g., If found at /usr/local/lib/libgfchangelog.so,
>
> #ldconfig /usr/local/lib
>
> After this, confirm that the library is cached by running the first command above again, and try restarting
> geo-replication.
>
> Let us know if the library is cached and you still face this issue.
> Hope this helps!
>
> Thanks and Regards,
> Kotresh H R
>
> ----- Original Message -----
>> From: "Marco" <marco.brignoli at marcobaldo.ch>
>> To: Gluster-users at gluster.org
>> Sent: Tuesday, May 26, 2015 4:25:26 AM
>> Subject: [Gluster-users] AttributeError: python: undefined symbol: gf_changelog_register
>>
>> Hello all.
>>
>> I have an issue when I'm trying to populate a geo-replication volume:
>>
>> [2015-05-25 23:59:26.666712] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
>> [2015-05-25 23:59:26.667079] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
>> [2015-05-25 23:59:26.762124] I [gsyncd(/gluster):532:main_i] <top>: syncing: gluster://localhost:volume1 -> ssh://root@gluster3.marcobaldo.ch:gluster://localhost:volume1_slave
>> [2015-05-25 23:59:29.611541] I [master(/gluster):58:gmaster_builder] <top>: setting up xsync change detection mode
>> [2015-05-25 23:59:29.612349] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
>> [2015-05-25 23:59:29.613812] I [master(/gluster):58:gmaster_builder] <top>: setting up changelog change detection mode
>> [2015-05-25 23:59:29.614294] I [master(/gluster):357:__init__] _GMaster: using 'rsync' as the sync engine
>> [2015-05-25 23:59:29.616271] I [master(/gluster):1103:register] _GMaster: xsync temp directory: /var/run/gluster/volume1/ssh%3A%2F%2Froot%40192.168.178.233%3Agluster%3A%2F%2F127.0.0.1%3Avolume1_slave/1077eb0027f1f616115bcb74a330d1c2/xsync
>> [2015-05-25 23:59:29.648611] E [syncdutils(/gluster):240:log_raise_exception] <top>: FAIL:
>> Traceback (most recent call last):
>>   File "/usr/lib/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
>>     main_i()
>>   File "/usr/lib/glusterfs/python/syncdaemon/gsyncd.py", line 542, in main_i
>>     local.service_loop(*[r for r in [remote] if r])
>>   File "/usr/lib/glusterfs/python/syncdaemon/resource.py", line 1175, in service_loop
>>     g2.register()
>>   File "/usr/lib/glusterfs/python/syncdaemon/master.py", line 1077, in register
>>     workdir, logfile, 9, 5)
>>   File "/usr/lib/glusterfs/python/syncdaemon/resource.py", line 614, in changelog_register
>>     Changes.cl_register(cl_brick, cl_dir, cl_log, cl_level, retries)
>>   File "/usr/lib/glusterfs/python/syncdaemon/libgfchangelog.py", line 23, in cl_register
>>     ret = cls._get_api('gf_changelog_register')(brick, path,
>>   File "/usr/lib/glusterfs/python/syncdaemon/libgfchangelog.py", line 19, in _get_api
>>     return getattr(cls.libgfc, call)
>>   File "/usr/lib64/python2.7/ctypes/__init__.py", line 378, in __getattr__
>>     func = self.__getitem__(name)
>>   File "/usr/lib64/python2.7/ctypes/__init__.py", line 383, in __getitem__
>>     func = self._FuncPtr((name_or_ordinal, self))
>> AttributeError: python: undefined symbol: gf_changelog_register
>> [2015-05-25 23:59:29.650513] I [syncdutils(/gluster):192:finalize] <top>: exiting.
>> [2015-05-25 23:59:30.613435] I [monitor(monitor):157:monitor] Monitor: worker(/gluster) died in startup phase
>>
>>
>> COMMANDS
>> ************
>> # gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave start
>> Starting geo-replication session between volume1 & gluster3.marcobaldo.ch::volume1_slave has been successful
>>
>> # gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE                                    STATUS             CHECKPOINT STATUS    CRAWL STATUS
>> -------------------------------------------------------------------------------------------------------------------------------------------
>> fs2            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Initializing...    N/A                  N/A
>> fs1            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    Initializing...    N/A                  N/A
>>
>> and after a few seconds
>>
>> # gluster volume geo-replication volume1 gluster3.marcobaldo.ch::volume1_slave status
>>
>> MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE                                    STATUS    CHECKPOINT STATUS    CRAWL STATUS
>> ----------------------------------------------------------------------------------------------------------------------------------
>> fs2            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    faulty    N/A                  N/A
>> fs1            volume1       /gluster        gluster3.marcobaldo.ch::volume1_slave    faulty    N/A                  N/A
>>
>>
>> VOLUMES
>> **********
>>
>> Volume Name: volume1
>> Type: Replicate
>> Volume ID: 0952d1ce-f62c-40b6-809a-4e193db0f1f9
>> Status: Started
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1.marcobaldo.ch:/gluster
>> Brick2: gluster2.marcobaldo.ch:/gluster
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> nfs.disable: off
>>  
>> Volume Name: volume1_slave
>> Type: Distribute
>> Volume ID: b0b161d8-a642-4d41-808e-2bb076989f78
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster3.marcobaldo.ch:/gluster_slave
>>
>>
>> VERSION
>> *********
>>
>> # glusterd -V
>> glusterfs 3.5.2 built on *bleep*
>> Repository revision: git://git.gluster.com/glusterfs.git
>> Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
>> GlusterFS comes with ABSOLUTELY NO WARRANTY.
>> It is licensed to you under your choice of the GNU Lesser
>> General Public License, version 3 or any later version (LGPLv3
>> or later), or the GNU General Public License, version 2 (GPLv2),
>> in all cases as published by the Free Software Foundation.
>>
>>
>> I'm running OpenSuSE 13.2 and I have installed Glusterfs from the
>> standard OpenSuSE repos. Currently I don't have any known problem with
>> "Replicate" volumes.
>>
>> May I ask for your help? I have been googling but could not find
>> anything relevant.
>>
>> Thanks in advance and have a nice day
>>
>> Marco
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>


