[Gluster-users] How reliable is XFS under Gluster?

Franco Broi Franco.Broi at iongeo.com
Sat Dec 7 08:04:08 UTC 2013


Been using ZFS for about 9 months and am about to add another 400TB, no issues so far.

On 7 Dec 2013 04:23, Brian Foster <bfoster at redhat.com> wrote:
On 12/06/2013 01:57 PM, Kal Black wrote:
> Hello,
> I am at the point of picking a FS for new brick nodes. I liked and used
> ext4 until now, but I recently read about an issue introduced by a
> patch in ext4 that breaks the distributed translator. At the same time, it
> looks like the recommended FS for a brick is no longer ext4 but XFS, which
> apparently will also be the default FS in the upcoming Red Hat 7. On the
> other hand, XFS is known as a file system that can be easily
> corrupted (zeroing files) in case of a power failure. Supporters of the
> file system claim that this should never happen if an application has been
> properly coded (properly committing/fsync-ing data to storage) and the
> storage itself has been properly configured (disk cache disabled on
> individual disks and battery-backed cache used on the controllers). My
> question is: should I be worried about losing data in a power failure or
> similar scenarios using GlusterFS and XFS? Are there best
> practices for setting up a Gluster brick + XFS? Has the ext4 issue been
> reliably fixed? (My understanding is that this is impossible unless
> ext4 is modified to work properly with Gluster.)
>

Hi Kal,

You are correct that Red Hat recommends using XFS for gluster bricks.
I'm sure there are plenty of ext4 (and other fs) users as well, so other
users should chime in with their real-world experience with various
brick filesystems. Also, I believe the dht/ext issue has been resolved
for some time now.

With regard to "XFS zeroing files on power failure," I'd suggest you
check out the following blog post:

http://sandeen.net/wordpress/computers/xfs-does-not-null-files-and-requires-no-flux/

My cursory understanding is that there were apparently situations where
the inode size of a recently extended file would be written to the log
before the actual extending data was written to disk, thus creating a
crash window where the updated size would be seen, but not the actual
data. In other words, this isn't a "zeroing files" behavior so much
as it is an ordering issue with logging the inode size. This is probably
why you've encountered references to fsync(): even with the fix, your
data is still likely lost (unless/until you've run an fsync to flush it
to disk); you just shouldn't see the extended inode size unless the
actual data made it to disk.
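To make the "properly coded application" point concrete, here is a minimal sketch (not from the thread; the function name `durable_append` is illustrative) of how an application can ensure extended file data actually reaches stable storage before it relies on the new file size:

```python
import os

def durable_append(path, data):
    """Append data to a file and ensure it reaches stable storage.

    Without the fsync, a crash after the write could leave the file
    extended on disk without the corresponding data (the ordering
    window Brian describes, fixed in XFS in 2007 but still relevant
    to applications that assume durability without syncing).
    """
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # flush file data and metadata (including size) to disk
    finally:
        os.close(fd)
    # Also fsync the parent directory so a newly created directory
    # entry survives a crash.
    dirpath = os.path.dirname(os.path.abspath(path)) or "."
    dfd = os.open(dirpath, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

The same pattern applies in any language: write, fsync the file descriptor, and (for new files) fsync the containing directory.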

Also note that this was fixed in 2007. ;)

Brian

> Best regards
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
