[Gluster-users] I/O error for one folder within the mountpoint
Ravishankar N
ravishankar at redhat.com
Fri Jul 7 09:31:30 UTC 2017
On 07/07/2017 01:23 PM, Florian Leleu wrote:
>
> Hello everyone,
>
> first time on the ML, so excuse me if I'm not following the rules well;
> I'll improve if I get comments.
>
> We have one volume "applicatif" on three nodes (two data bricks and one
> arbiter); each of the following commands was run on node ipvr8.xxx:
>
> # gluster volume info applicatif
>
> Volume Name: applicatif
> Type: Replicate
> Volume ID: ac222863-9210-4354-9636-2c822b332504
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: ipvr7.xxx:/mnt/gluster-applicatif/brick
> Brick2: ipvr8.xxx:/mnt/gluster-applicatif/brick
> Brick3: ipvr9.xxx:/mnt/gluster-applicatif/brick (arbiter)
> Options Reconfigured:
> performance.read-ahead: on
> performance.cache-size: 1024MB
> performance.quick-read: off
> performance.stat-prefetch: on
> performance.io-cache: off
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: off
>
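> (For context: a replicate volume with this 1 x (2 + 1) brick layout is
> normally created with a command along the following lines. This is only a
> sketch for readers unfamiliar with arbiter volumes; the exact command used
> for this volume is not shown in this thread.
>
> # gluster volume create applicatif replica 3 arbiter 1 \
>     ipvr7.xxx:/mnt/gluster-applicatif/brick \
>     ipvr8.xxx:/mnt/gluster-applicatif/brick \
>     ipvr9.xxx:/mnt/gluster-applicatif/brick
> )
>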
> # gluster volume status applicatif
> Status of volume: applicatif
> Gluster process                                TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick ipvr7.xxx:/mnt/gluster-applicatif/brick  49154     0          Y       2814
> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick  49154     0          Y       2672
> Brick ipvr9.xxx:/mnt/gluster-applicatif/brick  49154     0          Y       3424
> NFS Server on localhost                        2049      0          Y       26530
> Self-heal Daemon on localhost                  N/A       N/A        Y       26538
> NFS Server on ipvr9.xxx                        2049      0          Y       12238
> Self-heal Daemon on ipvr9.xxx                  N/A       N/A        Y       12246
> NFS Server on ipvr7.xxx                        2049      0          Y       2234
> Self-heal Daemon on ipvr7.xxx                  N/A       N/A        Y       2243
>
> Task Status of Volume applicatif
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> The volume is mounted via autofs (NFS) at /home/applicatif, and one
> folder is "broken":
>
> l /home/applicatif/services/
> ls: cannot access /home/applicatif/services/snooper: Input/output error
> total 16
> lrwxrwxrwx 1 applicatif applicatif 9 Apr 6 15:53 config -> ../config
> lrwxrwxrwx 1 applicatif applicatif 7 Apr 6 15:54 .pwd -> ../.pwd
> drwxr-xr-x 3 applicatif applicatif 4096 Apr 12 10:24 querybuilder
> d????????? ? ? ? ? ? snooper
> drwxr-xr-x 3 applicatif applicatif 4096 Jul 6 02:57 snooper_new
> drwxr-xr-x 16 applicatif applicatif 4096 Jul 6 02:58 snooper_old
> drwxr-xr-x 4 applicatif applicatif 4096 Jul 4 23:45 ssnooper
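>
> (The autofs map entry itself is not shown in this thread; for an indirect
> map covering /home it would be something along these lines, assuming the
> built-in Gluster NFS server and NFSv3. The file name and options here are
> only illustrative:
>
> # /etc/auto.home, referenced from auto.master for /home
> applicatif  -fstype=nfs,vers=3,nolock  ipvr8.xxx:/applicatif
> )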
>
> I checked whether there was a heal pending, and it seems so:
>
> # gluster volume heal applicatif statistics heal-count
> Gathering count of entries to be healed on volume applicatif has been
> successful
>
> Brick ipvr7.xxx:/mnt/gluster-applicatif/brick
> Number of entries: 8
>
> Brick ipvr8.xxx:/mnt/gluster-applicatif/brick
> Number of entries: 29
>
> Brick ipvr9.xxx:/mnt/gluster-applicatif/brick
> Number of entries: 8
>
> But the folder "snooper" is actually fine on the brick of each server.
>
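> (For reference, checking a directory directly on a brick can be done with
> something like the following; the brick path is taken from the volume info
> above and the relative path from the listing, so adjust as needed:
>
> # stat /mnt/gluster-applicatif/brick/services/snooper
> # getfattr -d -m . -e hex /mnt/gluster-applicatif/brick/services/snooper
> )
>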
> I tried rebooting the servers and restarting gluster after killing every
> process using it, but it's not working.
>
> Has anyone experienced this before? Any help would be nice.
>
Can you share the output of `gluster volume heal <volname> info` and
`gluster volume heal <volname> info split-brain`? If the second command
shows entries, please also share the getfattr output from the bricks for
these files (getfattr -d -m . -e hex /brick/path/to/file).
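For example (the heal info commands can be run from any one node; the
getfattr should be run on each brick, with the brick path taken from the
volume info above, adjusted if needed):

# gluster volume heal applicatif info
# gluster volume heal applicatif info split-brain
# getfattr -d -m . -e hex /mnt/gluster-applicatif/brick/services/snooper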
-Ravi
>
> Thanks a lot!
>
> --
>
> Kind regards,
>
>
> Florian LELEU
> Hosting Manager, Cognix Systems
>
> Rennes | Brest | Saint-Malo | Paris
> florian.leleu at cognix-systems.com
>
> Tel.: 02 99 27 75 92