[Gluster-users] Geo-rep failing initial sync

Aravinda avishwan at redhat.com
Thu Oct 15 11:27:54 UTC 2015


The status looks good: two master bricks are Active and participating in
syncing, and their replica peers are Passive, as expected. Please let us
know what issue you are observing.
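
In the meantime, the geo-replication worker logs usually show why an
initial sync is failing. Assuming the default log locations (these are
the stock paths; adjust if your installation differs):

    # on each Active master node (james and hilton, per the status output)
    tail -n 100 /var/log/glusterfs/geo-replication/static/*.log

    # on the slave nodes (palace and madonna)
    ls /var/log/glusterfs/geo-replication-slaves/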

regards
Aravinda

On 10/15/2015 11:40 AM, Wade Fitzpatrick wrote:
> I have now tried twice to configure geo-replication of our
> Striped-Replicate volume to a remote Stripe volume, but it always runs
> into problems.
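>
> For reference, I created each session with roughly the standard
> push-pem sequence (the exact invocation may have differed slightly):
>
>     # generate the common secret pem and distribute it, then create
>     # and start the session
>     gluster system:: execute gsec_create
>     gluster volume geo-replication static gluster-b1::static create push-pem
>     gluster volume geo-replication static gluster-b1::static start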
>
> root@james:~# gluster volume info
>
> Volume Name: gluster_shared_storage
> Type: Replicate
> Volume ID: 5f446a10-651b-4ce0-a46b-69871f498dbc
> Status: Started
> Number of Bricks: 1 x 4 = 4
> Transport-type: tcp
> Bricks:
> Brick1: james:/data/gluster1/geo-rep-meta/brick
> Brick2: cupid:/data/gluster1/geo-rep-meta/brick
> Brick3: hilton:/data/gluster1/geo-rep-meta/brick
> Brick4: present:/data/gluster1/geo-rep-meta/brick
> Options Reconfigured:
> performance.readdir-ahead: on
>
> Volume Name: static
> Type: Striped-Replicate
> Volume ID: 3f9f810d-a988-4914-a5ca-5bd7b251a273
> Status: Started
> Number of Bricks: 1 x 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: james:/data/gluster1/static/brick1
> Brick2: cupid:/data/gluster1/static/brick2
> Brick3: hilton:/data/gluster1/static/brick3
> Brick4: present:/data/gluster1/static/brick4
> Options Reconfigured:
> auth.allow: 10.x.*
> features.scrub: Active
> features.bitrot: on
> performance.readdir-ahead: on
> geo-replication.indexing: on
> geo-replication.ignore-pid-check: on
> changelog.changelog: on
>
> root@palace:~# gluster volume info
>
> Volume Name: static
> Type: Stripe
> Volume ID: 3de935db-329b-4876-9ca4-a0f8d5f184c3
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: palace:/data/gluster1/static/brick1
> Brick2: madonna:/data/gluster1/static/brick2
> Options Reconfigured:
> features.scrub: Active
> features.bitrot: on
> performance.readdir-ahead: on
>
> root@james:~# gluster vol geo-rep static ssh://gluster-b1::static status detail
>
> MASTER NODE    MASTER VOL    MASTER BRICK                    SLAVE USER    SLAVE                       SLAVE NODE    STATUS     CRAWL STATUS       LAST_SYNCED            ENTRY    DATA    META    FAILURES    CHECKPOINT TIME    CHECKPOINT COMPLETED    CHECKPOINT COMPLETION TIME
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> james          static        /data/gluster1/static/brick1    root          ssh://gluster-b1::static    10.37.1.11    Active     Changelog Crawl    2015-10-13 14:23:20    0        0       0       1952064     N/A                N/A                     N/A
> hilton         static        /data/gluster1/static/brick3    root          ssh://gluster-b1::static    10.37.1.11    Active     Changelog Crawl    N/A                    0        0       0       1008035     N/A                N/A                     N/A
> present        static        /data/gluster1/static/brick4    root          ssh://gluster-b1::static    10.37.1.12    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
> cupid          static        /data/gluster1/static/brick2    root          ssh://gluster-b1::static    10.37.1.12    Passive    N/A                N/A                    N/A      N/A     N/A     N/A         N/A                N/A                     N/A
>
>
> So just to clarify: data is striped over bricks 1 and 3, and bricks 2
> and 4 hold their replicas (the replica pairs are brick1/brick2 and
> brick3/brick4, which matches brick1 and brick3 being the Active workers
> above).
>
> Can someone help me diagnose the problem and find a solution?
>
> Thanks in advance,
> Wade.
