[Gluster-users] 'No data available' at clients, brick xattr ops errors on small I/O -- XFS stripe issue or repeat bug?
LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Owen.M.LaGarde at erdc.dren.mil
Sat Nov 14 01:08:46 UTC 2015
I've now tried the same repeater scenario against EXT2-, EXT3-, EXT4-, and XFS-formatted bricks. There's no change in behavior; the discriminating detail is still only whether the build-pgfid volume option is on. Brick count, distribution over servers, transport protocol, etc., can all be varied over a wide range without affecting the scope or nature of the failure.
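In case anyone wants to poke at this, the repeater is nothing exotic; any fine-grained create/write/rename/unlink churn on a FUSE mount of the volume seems to do it. A rough Python sketch of the shape of it (mount path and counts are placeholders, not my exact harness):

    # Small-file churn of the kind that triggers the errors for me.
    # MOUNT is a FUSE mount of the test volume; path names and counts
    # here are illustrative only.
    import errno, os

    MOUNT = "/mnt/testvol"
    for i in range(10000):
        path = os.path.join(MOUNT, "f%05d.o" % i)
        with open(path, "wb") as f:
            f.write(b"x" * 512)             # small write, compiler-style
        try:
            os.rename(path, path + ".tmp")  # rename/unlink churn, like a build
            os.unlink(path + ".tmp")
        except OSError as e:
            if e.errno == errno.ENODATA:    # 'No data available' seen at the client
                print("ENODATA on", path)
            raise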
Is there anyone using build-pgfid=on, doing any fine-grained small-file I/O (such as building a sizable project from source), and *not* getting xattr errors in the brick logs / undeletable files due to incomplete xattr ops? Anyone? Anyone? Bueller? Bueller?
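For what it's worth, the brick-side damage is easy to spot. With build-pgfid on, each file on a brick should (as I understand the feature) carry a trusted.pgfid.<parent-gfid> xattr for each parent directory of its links; the undeletable leftovers are the ones missing theirs. A quick check, run as root on a brick server (brick path is a placeholder):

    # Walk a brick and flag files carrying no trusted.pgfid.* xattr.
    # Needs root, since trusted.* xattrs are CAP_SYS_ADMIN-only; BRICK is illustrative.
    import os

    BRICK = "/data/brick1/testvol"
    for dirpath, dirs, files in os.walk(BRICK):
        dirs[:] = [d for d in dirs if d != ".glusterfs"]  # skip gluster's internal gfid tree
        for name in files:
            p = os.path.join(dirpath, name)
            xattrs = os.listxattr(p, follow_symlinks=False)
            if not any(x.startswith("trusted.pgfid.") for x in xattrs):
                print("no pgfid xattr:", p)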
________________________________
From: LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Sent: Friday, November 13, 2015 4:36 PM
To: gluster-users at gluster.org; LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Subject: RE: 'No data available' at clients, brick xattr ops errors on small I/O -- XFS stripe issue or repeat bug?
Looks like the errors occur only when the gfid-to-path translation volume option (build-pgfid) is on. Is anyone else seeing this? Anyone else using 3.6.6-1 with XFS-formatted bricks?
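For context, the translation I mean is the one build-pgfid exists to support: resolving a GFID back to a path through the gfid-access namespace. As I read the documented procedure, you mount the client with -o aux-gfid-mount and read the ancestry xattr off the virtual .gfid tree, roughly like this (the GFID below is a placeholder):

    # gfid-to-path lookup as I understand it from the docs; requires
    # build-pgfid=on and a client mounted with:
    #   mount -t glusterfs -o aux-gfid-mount srv:/testvol /mnt/testvol
    import os

    gfid = "00000000-0000-0000-0000-000000000001"    # placeholder GFID
    virt = os.path.join("/mnt/testvol/.gfid", gfid)  # gfid-access virtual path
    print(os.getxattr(virt, "glusterfs.ancestry.path").decode())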
________________________________
From: LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Sent: Tuesday, November 10, 2015 4:24 PM
To: gluster-users at gluster.org; LaGarde, Owen M ERDC-RDE-ITL-MS Contractor
Subject: RE: 'No data available' at clients, brick xattr ops errors on small I/O -- XFS stripe issue or repeat bug?
Update: I've tried a second cluster with an (AFAIK) identical backing-storage configuration from the LUNs up, identical gluster/xfsprogs/kernel on the servers, an identical volume setup, and identical kernel/gluster on the clients. The reproducer does not fail on the new system. So far the only delta I can find between the two clusters' setups is the brick count (28 bricks across 8 servers on the failing one, 14 bricks across 4 servers on the new one).