[Gluster-users] Failed to Establish Geo-replication Session Please check gsync config file. Unable to get statefile's name

Strahil Nikolov hunter86_bg at yahoo.com
Wed Nov 26 10:30:14 UTC 2025


If you use gluster for hosting VM disks, you need to use the correct settings - there is a virt option group for exactly that purpose. GlusterFS was the primary storage for oVirt, and I can confirm that several people have reported a successful DR switchover.
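A minimal sketch of applying that group, assuming the volume name VMS used elsewhere in this thread (the group file ships with the packages at /var/lib/glusterd/groups/virt):

# Apply the predefined 'virt' option group to the VM-hosting volume
gluster volume set VMS group virt

It enables the usual VM-oriented options (sharding, eager locking, quorum settings) in one step.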

Best Regards,
Strahil Nikolov

On Monday, November 24, 2025 at 22:41:45 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
 
 Look!
Never mind, ok????
Sorry about that, but this geo-replication is crap. I tried with Gluster 10 on Bookworm, the first replication ran, and then it started showing Faulty status. I had installed a Debian VM using the gluster volume replicated to the other 2 nodes, and that went well. But when I lost the primary and tried to start the VM on the secondary, the VM disk was corrupted. I simply give up. And it is sad, because over the past 3 years I spent so much time saying GlusterFS is reliable, and it isn't. Sorry... I am really sorry.












On Mon, Nov 24, 2025 at 15:16, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Hi there
FYI, I tried with the Gluster deb packages from GitHub, i.e. Gluster version 11.2, and could create the geo-rep session, but got Faulty status again...
Some logs:

[2025-11-24 18:13:33.110898] W [gsyncd(config-get):299:main] <top>: Session config file not exists, using the default config [{path=/var/lib/glusterd/geo-replication/VMS_gluster3_VMS-REP/gsyncd.conf}]
[2025-11-24 18:13:35.199845] I [subcmds(monitor-status):29:subcmd_monitor_status] <top>: Monitor Status Change [{status=Created}]
[2025-11-24 18:14:39.398460] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2025-11-24 18:14:39.398788] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/mnt/pve/data1/vms}, {secondary_node=gluster4}]
[2025-11-24 18:14:39.491754] I [resource(worker /mnt/pve/data1/vms):1388:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2025-11-24 18:14:41.76723] I [resource(worker /mnt/pve/data1/vms):1436:connect_remote] SSH: SSH connection between primary and secondary established. [{duration=1.5848}]
[2025-11-24 18:14:41.76885] I [resource(worker /mnt/pve/data1/vms):1117:connect] GLUSTER: Mounting gluster volume locally...
[2025-11-24 18:14:42.108398] I [resource(worker /mnt/pve/data1/vms):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.0314}]
[2025-11-24 18:14:42.108612] I [subcmds(worker /mnt/pve/data1/vms):84:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2025-11-24 18:14:44.117002] I [primary(worker /mnt/pve/data1/vms):1662:register] _GPrimary: Working dir [{path=/var/lib/misc/gluster/gsyncd/VMS_gluster3_VMS-REP/mnt-pve-data1-vms}]
[2025-11-24 18:14:44.117345] I [resource(worker /mnt/pve/data1/vms):1292:service_loop] GLUSTER: Register time [{time=1764008084}]
[2025-11-24 18:14:44.124814] I [gsyncdstatus(worker /mnt/pve/data1/vms):280:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2025-11-24 18:14:44.218760] I [gsyncdstatus(worker /mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2025-11-24 18:14:44.219035] I [primary(worker /mnt/pve/data1/vms):1573:crawl] _GPrimary: starting history crawl [{turns=1}, {stime=None}, {etime=1764008084}, {entry_stime=None}]
[2025-11-24 18:14:44.219154] I [resource(worker /mnt/pve/data1/vms):1309:service_loop] GLUSTER: No stime available, using xsync crawl
[2025-11-24 18:14:44.225654] I [primary(worker /mnt/pve/data1/vms):1692:crawl] _GPrimary: starting hybrid crawl [{stime=None}]
[2025-11-24 18:14:44.227337] I [gsyncdstatus(worker /mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=Hybrid Crawl}]
[2025-11-24 18:14:45.227752] I [primary(worker /mnt/pve/data1/vms):1703:crawl] _GPrimary: processing xsync changelog [{path=/var/lib/misc/gluster/gsyncd/VMS_gluster3_VMS-REP/mnt-pve-data1-vms/xsync/XSYNC-CHANGELOG.1764008084}]
[2025-11-24 18:14:45.255496] I [primary(worker /mnt/pve/data1/vms):1430:process] _GPrimary: Entry Time Taken [{UNL=0}, {RMD=0}, {CRE=0}, {MKN=0}, {MKD=1}, {REN=0}, {LIN=0}, {SYM=0}, {duration=0.0071}]
[2025-11-24 18:14:45.255639] I [primary(worker /mnt/pve/data1/vms):1442:process] _GPrimary: Data/Metadata Time Taken [{SETA=1}, {meta_duration=0.0078}, {SETX=0}, {XATT=0}, {DATA=0}, {data_duration=0.0010}]
[2025-11-24 18:14:45.255797] I [primary(worker /mnt/pve/data1/vms):1452:process] _GPrimary: Batch Completed [{mode=xsync}, {duration=0.0277}, {changelog_start=1764008084}, {changelog_end=1764008084}, {num_changelogs=1}, {stime=None}, {entry_stime=None}]
[2025-11-24 18:14:45.260967] I [primary(worker /mnt/pve/data1/vms):1699:crawl] _GPrimary: finished hybrid crawl [{stime=(1764008084, 0)}]
[2025-11-24 18:14:45.266066] I [gsyncdstatus(worker /mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=Changelog Crawl}]
[2025-11-24 18:14:55.280644] I [primary(worker /mnt/pve/data1/vms):1525:crawl] _GPrimary: secondary's time [{stime=(1764008084, 0)}]
[2025-11-24 18:14:55.645504] I [primary(worker /mnt/pve/data1/vms):2010:syncjob] Syncer: Sync Time Taken [{job=1}, {num_files=2}, {return_code=12}, {duration=0.0327}]
[2025-11-24 18:14:55.645687] E [syncdutils(worker /mnt/pve/data1/vms):845:errlog] Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-1u81eezq/dc97de7ffea2b18802fbdbd0b783902c.sock -caes128-ctr gluster4:/proc/27188/cwd}, {error=12}]
[2025-11-24 18:14:56.111115] I [monitor(monitor):227:monitor] Monitor: worker died in startup phase [{brick=/mnt/pve/data1/vms}]
[2025-11-24 18:14:56.122446] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]






On Sat, Nov 22, 2025 at 16:20, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Here is the log about the Faulty status:
[2025-11-22 19:18:56.478297] I [gsyncdstatus(worker /mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2025-11-22 19:18:56.478521] I [primary(worker /mnt/pve/data1/vms):1572:crawl] _GPrimary: starting history crawl [{turns=1}, {stime=(1763838427, 0)}, {etime=1763839136}, {entry_stime=(1763838802, 0)}]
[2025-11-22 19:18:57.479278] I [primary(worker /mnt/pve/data1/vms):1604:crawl] _GPrimary: secondary's time [{stime=(1763838427, 0)}]
[2025-11-22 19:18:57.922752] I [primary(worker /mnt/pve/data1/vms):2009:syncjob] Syncer: Sync Time Taken [{job=1}, {num_files=2}, {return_code=12}, {duration=0.0272}]
[2025-11-22 19:18:57.922921] E [syncdutils(worker /mnt/pve/data1/vms):845:errlog] Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-c9h5okjo/5afb71218219138854b3c5a8eab300a4.sock -caes128-ctr gluster3:/proc/4004/cwd}, {error=12}]
[2025-11-22 19:18:58.394410] I [monitor(monitor):227:monitor] Monitor: worker died in startup phase [{brick=/mnt/pve/data1/vms}]
[2025-11-22 19:18:58.406208] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
[2025-11-22 19:19:08.408871] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2025-11-22 19:19:08.408998] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/mnt/pve/data1/vms}, {secondary_node=gluster3}]
[2025-11-22 19:19:08.475625] I [resource(worker /mnt/pve/data1/vms):1388:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2025-11-22 19:19:09.719658] I [resource(worker /mnt/pve/data1/vms):1436:connect_remote] SSH: SSH connection between primary and secondary established. [{duration=1.2439}]
[2025-11-22 19:19:09.719800] I [resource(worker /mnt/pve/data1/vms):1117:connect] GLUSTER: Mounting gluster volume locally...
[2025-11-22 19:19:10.740213] I [resource(worker /mnt/pve/data1/vms):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.0203}]
[2025-11-22 19:19:10.740427] I [subcmds(worker /mnt/pve/data1/vms):84:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2025-11-22 19:19:12.756579] I [primary(worker /mnt/pve/data1/vms):1661:register] _GPrimary: Working dir [{path=/var/lib/misc/gluster/gsyncd/VMS_gluster3_VMS-REP/mnt-pve-data1-vms}]
[2025-11-22 19:19:12.756854] I [resource(worker /mnt/pve/data1/vms):1292:service_loop] GLUSTER: Register time [{time=1763839152}]
[2025-11-22 19:19:12.771767] I [gsyncdstatus(worker /mnt/pve/data1/vms):280:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2025-11-22 19:19:12.834163] I [gsyncdstatus(worker /mnt/pve/data1/vms):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2025-11-22 19:19:12.834344] I [primary(worker /mnt/pve/data1/vms):1572:crawl] _GPrimary: starting history crawl [{turns=1}, {stime=(1763838427, 0)}, {etime=1763839152}, {entry_stime=(1763838802, 0)}]
[2025-11-22 19:19:13.835162] I [primary(worker /mnt/pve/data1/vms):1604:crawl] _GPrimary: secondary's time [{stime=(1763838427, 0)}]
[2025-11-22 19:19:14.270295] I [primary(worker /mnt/pve/data1/vms):2009:syncjob] Syncer: Sync Time Taken [{job=1}, {num_files=2}, {return_code=12}, {duration=0.0274}]
[2025-11-22 19:19:14.270466] E [syncdutils(worker /mnt/pve/data1/vms):845:errlog] Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-1retjpoh/5afb71218219138854b3c5a8eab300a4.sock -caes128-ctr gluster3:/proc/4076/cwd}, {error=12}]
[2025-11-22 19:19:14.741245] I [monitor(monitor):227:monitor] Monitor: worker died in startup phase [{brick=/mnt/pve/data1/vms}]
[2025-11-22 19:19:14.752452] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]



It seems to me that it failed to open something via the SSH session.
I don't know... something like that.
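For what it's worth, rsync exit status 12 means "error in rsync protocol data stream", which usually points at the rsync process on the far end aborting rather than at the SSH transport itself. A quick sanity check, with nothing gluster-specific assumed, is to compare the rsync builds on both nodes:

# rsync exit code 12 = error in the rsync protocol data stream,
# often the remote rsync dying early or a primary/secondary mismatch.
# Run on the primary and on the secondary and compare:
rsync --version | head -1
command -v rsync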

On Sat, Nov 22, 2025 at 16:17, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Hi
I had succeeded in creating the session with the other side, but I got a Faulty status. In the gsyncd.log I got this:

Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-2j2yeofa/5afb71218219138854b3c5a8eab300a4.sock -caes128-ctr gluster3:/proc/2662/cwd}, {error=12}]





This also happens with Gluster 12dev.

After compiling Gluster 12dev, I successfully created a geo-rep session but got the error above.




Any clue?




Best Regards






On Sat, Nov 22, 2025 at 15:12, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Hi Gilberto,
It should, as long as it's the same problem.
It would be nice if you shared your experience on the mailing list.
Best Regards,
Strahil Nikolov
 
 
On Sat, Nov 22, 2025 at 18:13, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Hi
Should it work with Debian Trixie (13) as well?
I will try it


---

Gilberto Nunes Ferreira
+55 (47) 99676-7530 - WhatsApp / Telegram











On Sat, Nov 22, 2025 at 12:37, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

 Hi Gilberto,

I think the Debian 12 packages don't have https://github.com/gluster/glusterfs/pull/4404/commits/c433a178e8208e1771fea4d61d0a22a95b8bc74b
Run this command on both source and destination and try again:
sed -i 's/readfp/read_file/g' /usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncdconfig.py
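For context: ConfigParser.readfp() was deprecated for years and removed in Python 3.12, so on releases that ship a newer Python (Trixie, for instance) the unpatched gsyncdconfig.py fails before it can even work out the statefile name; the sed above simply rewrites those calls to the read_file() replacement. A quick way to confirm the file really was patched (same path as in the command above):

# After the sed there should be no occurrences of the removed API left
grep -c readfp /usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncdconfig.py    # expect 0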

In my test setup, where source and destination are each a single Debian 12 node with a hackishly created gluster_shared_storage volume, after executing the sed I got:


# gluster volume geo-replication vol1 geoaccount at gluster2::georep create push-pem
Creating geo-replication session between vol1 & geoaccount at gluster2::georep has been successful

Best Regards,
Strahil Nikolov

On Saturday, November 22, 2025 at 16:34:30 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
 
Here is the story about the Gluster 12dev Faulty error:
https://github.com/gluster/glusterfs/issues/4632






On Sat, Nov 22, 2025 at 10:59, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Hello there
My testing was with Proxmox 9, which is based on Debian 13. I tried with Gluster 11.1 from the Debian repo and then version 11.2 from the git repo. I got the statefile's name issue with both. Then I compiled version 12dev and could create the geo-replication session successfully, but got Faulty status.
So that's it.
---
Gilberto Nunes Ferreira 
+55 (47) 99676-7530
Proxmox VE

On Sat, Nov 22, 2025 at 10:14, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

 Hi Gilberto,

What version of OS and Gluster do you use exactly?

Best Regards,
Strahil Nikolov

On Friday, November 21, 2025 at 14:08:19 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
 
 Hello there
If there is something else I could help with, please let me know.
Thanks
Best Regards










On Wed, Nov 19, 2025 at 15:21, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

Hi there
So there is no special script. First I tried using this: https://github.com/aravindavk/gluster-georep-tools, and then I noticed the issue. But after trying to do it by myself, I called for help. I tried:

gluster volume geo-replication MASTERVOL root@SLAVENODE::slavevol create push-pem
gluster volume geo-replication MASTERVOL root@SLAVENODE::slavevol start
And got the issue.
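For reference, with the same placeholder names the session state can then be checked with:

gluster volume geo-replication MASTERVOL root@SLAVENODE::slavevol status

It should move from Created to Active once the workers start.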
Thanks
---

Gilberto Nunes Ferreira
+55 (47) 99676-7530 - WhatsApp / Telegram











On Wed, Nov 19, 2025 at 15:09, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

 Hi Gilberto,

I have no idea why my previous message was not sent (sorry about that).
I suspect it's a bug. If you have some script or ansible playbook for the setup, it could help me reproduce it locally.


Best Regards,
Strahil Nikolov
On Tuesday, November 11, 2025 at 16:55:59 GMT+2, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:
 
 Any clue about this issue?











On Mon, Nov 10, 2025 at 14:47, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote:

I still don't get it, because with Debian Bookworm and Gluster 10, geo-rep works perfectly. It's something about Trixie and Gluster 11.x.




On Mon, Nov 10, 2025 at 14:45, Karl Kleinpaste <karl at kleinpaste.org> wrote:

  On 11/10/25 12:21 PM, Gilberto Ferreira wrote:
  
And yes. With Gluster 11.2 from the GitHub repo, the very same error:

gluster vol geo VMS gluster3::VMS-REP create push-pem
 Please check gsync config file. Unable to get statefile's name
 geo-replication command failed 
 
I had this problem a year ago, in Aug 2024. I went round and round with Strahil for a week, trying to find out why I couldn't cross the finish line of a successful georep. It always ended in:
 
 Please check gsync config file. Unable to get statefile's name
 geo-replication command failed
 
 The volumes were set up properly, the commands for georep were done correctly, per guidelines, but georep was left forever in a state of Created, never Active.
 
Finally I just gave up. I can't use gluster if it won't work with me. I found that gluster does not give adequate diagnostics to provide a (useful!) explanation of what is actually wrong.

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users



