[Gluster-users] The continuing story ...

Stephan von Krawczynski skraw at ithnet.com
Fri Sep 18 11:51:18 UTC 2009


On Fri, 18 Sep 2009 10:35:22 +0200
Peter Gervai <grinapo at gmail.com> wrote:

> Funny thread we have.
> 
> Just a sidenote on the last week part about userspace cannot lock up
> the system: blocking resource waits / I/O waits can stall _all_ disk
> access, and try to imagine what you can do with a system without disk
> access. Obviously, you cannot log in, cannot start new programs,
> cannot load dynamic libraries. Yet the system pings, and your already
> logged in shells may function more or less, especially if you have a
> statically linked one (like sash).
> 
> As a bitter sidenote: google for 'xtreemfs'; it may be interesting if
> you only need shared redundant access with extreme network fault
> tolerance. (And yes, it can stall the system, too. :-))

I would not want to use it for exactly this reason (from the docs):

-----------------------------
XtreemFS implements an object-based file system architecture (Fig. 2.1). The
name of this architecture comes from the fact that an object-based file system
splits file content into a series of fixed-size objects and stores them on its
storage servers. In contrast to block-based file systems, the size of such an
object can vary from file to file.

The metadata of a file (such as the file name or file size) is stored
separately from the file content on a metadata server. This metadata server
organizes file system metadata as a set of volumes, each of which implements a
separate file system namespace in the form of a directory tree.
-----------------------------

That's exactly what we don't want. We want a disk layout that is accessible
even if glusterfs (or call it the "network fs") has a bad day and doesn't want
to start.
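
To make the point concrete, here is a rough sketch (the paths are made up,
and it assumes a plain distribute/replicate setup without the stripe
translator, where the backend export holds each file as an ordinary file):
you can walk the brick and copy everything off with nothing but POSIX calls,
with no glusterfs process running at all. With an object-based design like
XtreemFS the storage servers only hold anonymous objects and the names live
on the metadata server, so there is nothing comparable to salvage by hand.

-----------------------------
#!/usr/bin/env python3
# Rough sketch only -- the paths below are hypothetical, for illustration.
# Assumes the backend export stores each file as an ordinary file
# (no stripe translator).
import os
import shutil

BRICK = "/data/export"    # hypothetical glusterfs backend export directory
RESCUE = "/mnt/rescue"    # hypothetical destination for the copied data

def rescue_brick(brick, dest):
    """Copy everything off the brick with plain filesystem calls,
    no glusterfs process involved."""
    for root, _dirs, files in os.walk(brick):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, brick)
            target = os.path.join(dest, rel)
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(src, target)

if __name__ == "__main__":
    rescue_brick(BRICK, RESCUE)
-----------------------------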

> Another sidenote: I tend to see FUSE as a low-speed toy nowadays. It
> doesn't seem to be able to handle any serious I/O load.

Really, I can't judge. I haven't opened (this) Pandora's box up to now ...

> -- 
>  byte-byte,
>     grin

-- 
Regards,
Stephan



