[Gluster-users] Healing issues

Strahil Nikolov hunter86_bg at yahoo.com
Fri Jul 30 21:39:34 UTC 2021


What is the output of 'gluster volume heal <VOLNAME> info summary'?
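For example, something like this (volume name taken from your output below):

    gluster volume heal MRIData info summary

That prints, per brick, the total number of entries and how many are in heal pending, split-brain, or possibly-healing state. It is also worth checking that the Self-heal Daemon shows Online 'Y' on every node in 'gluster volume status MRIData'.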
Best Regards,
Strahil Nikolov
 
 
  On Fri, Jul 30, 2021 at 23:44, Valerio Luccio <valerio.luccio at nyu.edu> wrote:
Hello all,
 
I have a Gluster setup (v. 5.13) on 4 CentOS 7.8 nodes. I recently had hardware problems on the RAIDs. I was able to bring everything back up, but I noticed some odd things, so I ran "gluster volume heal info" and found a ton of errors. When I then ran "gluster volume heal" I got this message:
 
 Launching heal operation to perform index self heal on volume MRIData has been unsuccessful:
Glusterd Syncop Mgmt brick op 'Heal' failed. Please check glustershd log file for details.
 
 
When I look at /var/log/glusterfs/glustershd.log it hasn't changed since this morning, so I'm not sure how to interpret the above message. What am I supposed to look for in the log file?
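For reference, these are the exact commands I ran (with the volume name filled in):

    gluster volume heal MRIData info    # this is what listed the ton of errors
    gluster volume heal MRIData         # this fails with the error above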
 
Here's a dump of the volume setup:
 
 Volume Name: MRIData
Type: Distributed-Replicate
Volume ID: e051ac20-ead1-4648-9ac6-a29b531515ca
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (2 + 1) = 18
Transport-type: tcp
Bricks:
Brick1: hydra1:/gluster1/data
Brick2: hydra1:/gluster2/data
Brick3: hydra1:/arbiter/1 (arbiter)
Brick4: hydra1:/gluster3/data
Brick5: hydra2:/gluster1/data
Brick6: hydra1:/arbiter/2 (arbiter)
Brick7: hydra2:/gluster2/data
Brick8: hydra2:/gluster3/data
Brick9: hydra2:/arbiter/1 (arbiter)
Brick10: hydra3:/gluster1/data
Brick11: hydra3:/gluster2/data
Brick12: hydra3:/arbiter/1 (arbiter)
Brick13: hydra3:/gluster3/data
Brick14: hydra4:/gluster1/data
Brick15: hydra3:/arbiter/2 (arbiter)
Brick16: hydra4:/gluster2/data
Brick17: hydra4:/gluster3/data
Brick18: hydra4:/arbiter/1 (arbiter)
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.eager-lock: enable
network.remote-dio: enable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
network.ping-timeout: 10
server.allow-insecure: on
cluster.quorum-type: auto
cluster.self-heal-daemon: on
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
features.cache-invalidation: off
transport.address-family: inet
nfs.disable: on
nfs.exports-auth-enable: on
 
 
Thanks in advance for any replies,
 
  -- 
 

 
 
 
Valerio Luccio                (212) 998-8736
Center for Brain Imaging      4 Washington Place, Room 158
New York University           New York, NY 10003

 
 
 
"In an open world, who needs windows or gates ?"
  