[Gluster-users] glusterfs disperse volume input output error
Alexey Shcherbakov
Alexey.Shcherbakov at kaspersky.com
Tue Apr 10 10:02:08 UTC 2018
Hi,
Could you help me?
I have a problem with a file on a disperse volume. When I try to read it from the mount point, I receive an error:
# md5sum /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2
md5sum: /mnt/glfs/vmfs/slake-test-bck-m1-d1.qcow2: Input/output error
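In case it helps with diagnosis: as far as I understand, the erasure-coding state of each fragment can be inspected directly on the bricks (a sketch only; the path is the fragment location on one brick node, and I am assuming the trusted.ec.* attributes are the relevant ones):

# getfattr -d -m. -e hex /data1/bricks/brick1/vmfs/slake-test-bck-m1-d1.qcow2

Comparing trusted.ec.version, trusted.ec.size and trusted.ec.dirty across all 15 bricks should show which fragments disagree.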
The configuration and status of the volume are as follows:
# gluster volume info vol1
Volume Name: vol1
Type: Disperse
Volume ID: a7d52933-fccc-4b07-9c3b-5b92f398aa79
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (13 + 2) = 15
Transport-type: tcp
Bricks:
Brick1: glfs-node11.local:/data1/bricks/brick1
Brick2: glfs-node12.local:/data1/bricks/brick1
Brick3: glfs-node13.local:/data1/bricks/brick1
Brick4: glfs-node14.local:/data1/bricks/brick1
Brick5: glfs-node15.local:/data1/bricks/brick1
Brick6: glfs-node16.local:/data1/bricks/brick1
Brick7: glfs-node17.local:/data1/bricks/brick1
Brick8: glfs-node18.local:/data1/bricks/brick1
Brick9: glfs-node19.local:/data1/bricks/brick1
Brick10: glfs-node20.local:/data1/bricks/brick1
Brick11: glfs-node21.local:/data1/bricks/brick1
Brick12: glfs-node22.local:/data1/bricks/brick1
Brick13: glfs-node23.local:/data1/bricks/brick1
Brick14: glfs-node24.local:/data1/bricks/brick1
Brick15: glfs-node25.local:/data1/bricks/brick1
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
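(If I read the disperse math right, with 13 + 2 the volume stores 15 fragments per file and needs any 15 - 2 = 13 of them to reconstruct it, so a read should survive up to 2 bad fragments; an Input/output error would then suggest that 3 or more fragments are inconsistent.)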
# gluster volume status vol1
Status of volume: vol1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick glfs-node11.local:/data1/bricks/brick1   49152     0          Y       1781
Brick glfs-node12.local:/data1/bricks/brick1   49152     0          Y       3026
Brick glfs-node13.local:/data1/bricks/brick1   49152     0          Y       1991
Brick glfs-node14.local:/data1/bricks/brick1   49152     0          Y       2029
Brick glfs-node15.local:/data1/bricks/brick1   49152     0          Y       1745
Brick glfs-node16.local:/data1/bricks/brick1   49152     0          Y       1841
Brick glfs-node17.local:/data1/bricks/brick1   49152     0          Y       3597
Brick glfs-node18.local:/data1/bricks/brick1   49152     0          Y       2035
Brick glfs-node19.local:/data1/bricks/brick1   49152     0          Y       1785
Brick glfs-node20.local:/data1/bricks/brick1   49152     0          Y       1755
Brick glfs-node21.local:/data1/bricks/brick1   49152     0          Y       1772
Brick glfs-node22.local:/data1/bricks/brick1   49152     0          Y       1757
Brick glfs-node23.local:/data1/bricks/brick1   49152     0          Y       1825
Brick glfs-node24.local:/data1/bricks/brick1   49152     0          Y       1963
Brick glfs-node25.local:/data1/bricks/brick1   49152     0          Y       2376
Self-heal Daemon on localhost N/A N/A Y 2018
Self-heal Daemon on glfs-node15.local N/A N/A Y 38261
Self-heal Daemon on glfs-node16.local N/A N/A Y 36005
Self-heal Daemon on glfs-node12.local N/A N/A Y 25785
Self-heal Daemon on glfs-node27.local N/A N/A Y 13248
Self-heal Daemon on glfs-node19.local N/A N/A Y 38535
Self-heal Daemon on glfs-node18.local N/A N/A Y 21067
Self-heal Daemon on glfs-node21.local N/A N/A Y 5926
Self-heal Daemon on glfs-node22.local N/A N/A Y 12980
Self-heal Daemon on glfs-node23.local N/A N/A Y 8368
Self-heal Daemon on glfs-node26.local N/A N/A Y 8268
Self-heal Daemon on glfs-node25.local N/A N/A Y 7872
Self-heal Daemon on glfs-node17.local N/A N/A Y 15884
Self-heal Daemon on glfs-node11.local N/A N/A Y 36075
Self-heal Daemon on glfs-node24.local N/A N/A Y 37905
Self-heal Daemon on glfs-node30.local N/A N/A Y 31820
Self-heal Daemon on glfs-node14.local N/A N/A Y 3236
Self-heal Daemon on glfs-node13.local N/A N/A Y 25817
Self-heal Daemon on glfs-node29.local N/A N/A Y 21261
Self-heal Daemon on glfs-node28.local N/A N/A Y 32641
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
And heal info shows the following:
# gluster volume heal vol1 info
Brick glfs-node11.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node12.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node13.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node14.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node15.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node16.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node17.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node18.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node19.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node20.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node21.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node22.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node23.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node24.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
Brick glfs-node25.local:/data1/bricks/brick1
/vmfs/slake-test-bck-m1-d1.qcow2
Status: Connected
Number of entries: 1
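The client log may show why the read fails; here is what I plan to check (the log file name is my assumption, derived from the mount point and the default log location):

# grep -i slake-test-bck-m1-d1 /var/log/glusterfs/mnt-glfs.log | tail -n 20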
Other data on the volume are accessible.
Given this situation, how can I recover just this one file (/vmfs/slake-test-bck-m1-d1.qcow2) from the volume?
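My best guess so far, assuming at least 13 fragments are still good, is to let self-heal rebuild the file, roughly like this (but I am not sure it is safe):

# gluster volume heal vol1

and, if the index heal does not pick the file up:

# gluster volume heal vol1 full

Is that the right approach, or does recovering a single file need something more targeted?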
Thank you so much!