[Gluster-users] Geo-rep failing initial sync
Saravanakumar Arumugam
sarumuga at redhat.com
Mon Oct 19 09:06:45 UTC 2015
Please check 'gluster volume status'. If the corresponding brick is down
or the glusterfsd process has crashed, the Faulty state is observed.
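One quick way to act on that (a sketch, not from the original thread: the awk
filter is mine, and the sample rows are condensed from Wade's `status detail`
output further down) is to filter the status output for Faulty rows:

```shell
# Sketch: flag Faulty geo-rep sessions. The sample input mimics the
# `status detail` output later in this thread; in practice you would pipe
#   gluster volume geo-replication static gluster-b1::static status detail
# into the same filter.
status_output='james   static   /data/gluster1/static/brick1   root   ssh://gluster-b1::static   N/A      Faulty
hilton  static   /data/gluster1/static/brick3   root   ssh://gluster-b1::static   palace   Active'

# Column 7 is STATUS; print the master node of any Faulty row.
printf '%s\n' "$status_output" | awk '$7 == "Faulty" { print $1 }'
```

From there, running 'gluster volume status' on the reported node shows whether
its brick process is actually online.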
Thanks,
Saravana
On 10/19/2015 09:59 AM, Wade Fitzpatrick wrote:
> I have now tried to re-initialise the whole geo-rep setup, but the
> replication slave went Faulty immediately. Any help here would be
> appreciated; I cannot even find how to recover a faulty node without
> recreating the geo-rep session.
>
> root@james:~# gluster volume geo-replication static gluster-b1::static stop
> Stopping geo-replication session between static & gluster-b1::static has been successful
> root@james:~# gluster volume geo-replication static gluster-b1::static delete
> Deleting geo-replication session between static & gluster-b1::static has been successful
>
> I then destroyed the volume and re-created the bricks for the
> gluster-b1::static slave volume.
>
> root@palace:~# gluster volume stop static
> Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
> volume stop: static: success
> root@palace:~# gluster volume delete static
> Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
> volume delete: static: success
>
> root@palace:~# gluster volume create static stripe 2 transport tcp palace:/data/gluster1/static/brick1 madonna:/data/gluster1/static/brick2
> volume create: static: success: please start the volume to access data
> root@palace:~# gluster volume info
>
> Volume Name: static
> Type: Stripe
> Volume ID: dc14cd83-2736-4faf-8e11-c6d711ff8f56
> Status: Created
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: palace:/data/gluster1/static/brick1
> Brick2: madonna:/data/gluster1/static/brick2
> Options Reconfigured:
> performance.readdir-ahead: on
> root@palace:~# gluster volume start static
> volume start: static: success
>
>
> Then I established the geo-rep session again:
>
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static create
> Creating geo-replication session between static & ssh://gluster-b1::static has been successful
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static config use_meta_volume true
> geo-replication config updated successfully
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static config use-tarssh true
> geo-replication config updated successfully
>
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static config
> special_sync_mode: partial
> state_socket_unencoded: /var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.socket
> gluster_log_file: /var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.gluster.log
> ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
> use_tarssh: true
> ignore_deletes: false
> change_detector: changelog
> gluster_command_dir: /usr/sbin/
> state_file: /var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.status
> remote_gsyncd: /nonexistent/gsyncd
> log_file: /var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.log
> changelog_log_file: /var/log/glusterfs/geo-replication/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic-changes.log
> socketdir: /var/run/gluster
> working_dir: /var/lib/misc/glusterfsd/static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic
> state_detail_file: /var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic-detail.status
> use_meta_volume: true
> ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
> pid_file: /var/lib/glusterd/geo-replication/static_gluster-b1_static/ssh%3A%2F%2Froot%40palace%3Agluster%3A%2F%2F127.0.0.1%3Astatic.pid
> georep_session_working_dir: /var/lib/glusterd/geo-replication/static_gluster-b1_static/
> gluster_params: aux-gfid-mount acl
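Individual options can also be read back one at a time. The filter below is a
sketch (the sample lines are copied from the config listing above, and the awk
pattern is mine); in practice you would pipe the `config` command's output into
the same filter:

```shell
# Sketch: pull a single option out of `geo-replication ... config` output,
# which is a list of "key: value" lines like the ones above.
config_output='use_tarssh: true
ignore_deletes: false
change_detector: changelog'

printf '%s\n' "$config_output" | awk -F': ' '$1 == "use_tarssh" { print $2 }'
```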
>
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static start
> Geo-replication session between static and ssh://gluster-b1::static does not exist.
> geo-replication command failed
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static status detail
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                       SLAVE NODE    STATUS    CRAWL STATUS
> ------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::static    N/A           N/A       N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://gluster-b1::static    N/A           N/A       N/A
> present        static        /data/gluster1/static/brick4    root          ssh://gluster-b1::static    N/A           N/A       N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://gluster-b1::static    N/A           N/A       N/A
> (LAST_SYNCED, ENTRY, DATA, META, FAILURES and all CHECKPOINT columns: N/A on every row)
>
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static start
> Starting geo-replication session between static & ssh://gluster-b1::static has been successful
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static status detail
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                       SLAVE NODE    STATUS             CRAWL STATUS    ENTRY    DATA    META    FAILURES
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::static    N/A           Initializing...    N/A             N/A      N/A     N/A     N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://gluster-b1::static    palace        Active             Hybrid Crawl    0        0       0       0
> present        static        /data/gluster1/static/brick4    root          ssh://gluster-b1::static    madonna       Passive            N/A             N/A      N/A     N/A     N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://gluster-b1::static    madonna       Active             Hybrid Crawl    0        0       0       0
> (LAST_SYNCED and all CHECKPOINT columns: N/A on every row)
> root@james:~# gluster volume geo-replication static ssh://gluster-b1::static status detail
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS    ENTRY    DATA    META    FAILURES
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::static    N/A           Faulty     N/A             N/A      N/A     N/A     N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://gluster-b1::static    palace        Active     Hybrid Crawl    8191     8187    0       0
> present        static        /data/gluster1/static/brick4    root          ssh://gluster-b1::static    madonna       Passive    N/A             N/A      N/A     N/A     N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://gluster-b1::static    madonna       Active     Hybrid Crawl    8191     8187    0       0
> (LAST_SYNCED and all CHECKPOINT columns: N/A on every row)
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users