Hi,

First of all, the following command is not for disperse volumes:

gluster volume heal elastic-volume info split-brain

It is applicable to replicate volumes only.

Could you please let us know what exactly you want to test?

If you want to test a disperse volume against failure of bricks or servers, you can kill some of the brick processes, at most as many as the redundancy count. In a 4+2 volume, the redundancy count is 2.
After killing two brick processes with the kill command, write some data on the volume and then do a force start of the volume:

gluster v <volname> start force

This will also restart the killed brick processes. At the end you should see that the heal is done by the self-heal daemon and the volume becomes healthy again.
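As a minimal sketch of that test sequence (the volume name elastic-volume is taken from your output below; the mount point /mnt/elastic, the <brick-pid> placeholder and the dd test write are just examples, adjust them to your setup):

1) Find the PIDs of the brick processes:
   gluster volume status elastic-volume

2) On the nodes hosting the two chosen bricks, kill those brick processes:
   kill <brick-pid>

3) Write some data through a client mount, for example:
   dd if=/dev/zero of=/mnt/elastic/testfile bs=1M count=100

4) Bring the killed bricks back:
   gluster volume start elastic-volume force

5) Watch the pending entries drain while the self-heal daemon repairs them:
   gluster volume heal elastic-volume info

If you do not want to wait for the self-heal daemon's next crawl, running "gluster volume heal elastic-volume" should trigger the heal right away.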
---
Ashish


From: "fusillator" <fusillator@gmail.com>
To: gluster-users@gluster.org
Sent: Friday, June 7, 2019 2:09:01 AM
Subject: [Gluster-users] healing of disperse volume

Hi all, I'm pretty new to glusterfs. I managed to set up a dispersed volume (4+2) using release 6.1 from the CentOS repository. Is it a stable release?
Then I forced the volume to stop while the application was writing on the mount point, deliberately producing an inconsistent state. I'm wondering what the best practices are for resolving this kind of situation. I found a detailed explanation of how to resolve the split-brain state of a replicated volume at
https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/
but it does not seem to be applicable to the disperse volume type.
Am I missing some important piece of documentation? Please point me to some reference.
Here's some command detail:

# gluster volume info elastic-volume

Volume Name: elastic-volume
Type: Disperse
Volume ID: 96773fef-c443-465b-a518-6630bcf83397
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: dev-netflow01.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick2: dev-netflow02.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick3: dev-netflow03.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick4: dev-netflow04.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick5: dev-netflow05.fineco.it:/data/gfs/lv_elastic/brick1/brick
Brick6: dev-netflow06.fineco.it:/data/gfs/lv_elastic/brick1/brick
Options Reconfigured:
performance.io-cache: off
performance.io-thread-count: 64
performance.write-behind-window-size: 100MB
performance.cache-size: 1GB
nfs.disable: on
transport.address-family: inet

# gluster volume heal elastic-volume info
Brick dev01:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev02:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12

Brick dev03:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12
Brick dev04:/data/gfs/lv_elastic/brick1/brick
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev05:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
Status: Connected
Number of entries: 12

Brick dev06:/data/gfs/lv_elastic/brick1/brick
/data/logs/20190606/ns-coreiol-iol-app-chart.2019060615.log
<gfid:5c577478-9a2c-4d99-9189-36e9afed1039>
<gfid:813ccd43-1578-4275-a342-416a658cd714>
<gfid:60c74f7e-bed3-44a1-9129-99541a83e71b>
<gfid:9417e4db-5c68-4812-9ab1-77b4f5ad7174>
<gfid:7d7d7292-76eb-430a-ac10-b4f5e9311a17>
/data/logs/20190606/ns-coreiol-iol-lib-managers.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-news.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-trkd.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-listini.2019060615.log
/data/logs/20190606/ns-coreiol-iol-app-fns.2019060615.log
/data/logs/20190606/ns-coreiol-iol-lib-httpwrapper.2019060615.log
Status: Connected
Number of entries: 12

# gluster volume heal elastic-volume info split-brain
Volume elastic-volume is not of type replicate

Any advice?

Best regards

Luca

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users