[Gluster-users] healing of disperse volume
Ashish Pandey
aspandey at redhat.com
Fri Jun 7 14:18:50 UTC 2019
Hi,
First of all, the following command is not for disperse volumes -
gluster volume heal elastic-volume info split-brain
It is applicable to replicate volumes only.
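For a disperse volume the generic heal commands still work; as a quick sketch (using your volume name, elastic-volume):

# gluster volume heal elastic-volume info      <- list the entries pending heal on each brick
# gluster volume heal elastic-volume           <- ask the self-heal daemon to start an index heal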
Could you please let us know what exactly you want to test?
If you want to test a disperse volume against the failure of bricks or servers, you can kill some of the brick processes - at most as many as the redundancy count. In 4+2, the redundancy count is 2.
After killing two brick processes with the kill command, write some data on the volume and then do a force start of the volume:
gluster volume start <volname> force
This will also restart the killed brick processes. In the end, the heal should be done by the self-heal daemon and the volume should become healthy again.
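As a rough sketch of the whole test, assuming a client mount at /mnt/elastic (the PIDs are placeholders you would read from the status output):

# gluster volume status elastic-volume           <- note the PID of each brick process
# kill <pid-of-brick1> <pid-of-brick2>           <- kill at most 2 bricks, i.e. the redundancy count
# dd if=/dev/urandom of=/mnt/elastic/testfile bs=1M count=100   <- write some data through the client mount
# gluster volume start elastic-volume force      <- restarts the killed brick processes
# gluster volume heal elastic-volume info        <- entries should drop to 0 once the self-heal daemon is done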
---
Ashish
----- Original Message -----
From: "fusillator" <fusillator at gmail.com>
To: gluster-users at gluster.org
Sent: Friday, June 7, 2019 2:09:01 AM
Subject: [Gluster-users] healing of disperse volume
Hi all, I'm pretty new to glusterfs. I managed to set up a dispersed
volume (4+2) using release 6.1 from the CentOS repository. Is it a stable release?
Then I forced the volume to stop while the application was writing on the
mount point, deliberately getting an inconsistent state, and I'm
wondering what the best practices are to resolve this kind of
situation. I found a detailed explanation of how to resolve the
split-brain state of a replicated volume at
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
but it does not seem applicable to the disperse volume type.
Am I missing some important piece of documentation? Please point
me to some reference.
Here's some command detail:
# gluster volume info elastic-volume
Volume Name: elastic-volume
Type: Disperse
Volume ID: 96773fef-c443-465b-a518-6630bcf83397
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: dev-netflow01.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick2: dev-netflow02.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick3: dev-netflow03.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick4: dev-netflow04.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick5: dev-netflow05.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick6: dev-netflow06.fineco.it:/data/gfs/lv_elastic/brick1/brick
Options Reconfigured:
performance.io-cache: off
performance.io-thread-count: 64
performance.write-behind-window-size: 100MB
performance.cache-size: 1GB
nfs.disable: on
transport.address-family: inet
# gluster volume heal elastic-volume info
Brick dev01:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12
Brick dev02:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12
Brick dev03:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12
Brick dev04:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12
Brick dev05:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12
Brick dev06:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12
# gluster volume heal elastic-volume info split-brain
Volume elastic-volume is not of type replicate
Any advice?
Best regards
Luca