[Gluster-users] dm-glusterfs (was Re: IO performance cut down when VM on Gluster)

Jeff Darcy jdarcy at redhat.com
Mon Jan 14 11:53:58 UTC 2013


On 1/13/13 11:25 PM, Bharata B Rao wrote:
> Just wondering if there is value in doing dm-glusterfs along the lines of
> dm-nfs
> (https://blogs.oracle.com/OTNGarage/entry/simplify_your_storage_management_with).
> 
> I understand that GlusterFS, due to its stackable translator nature and
> having to deal with multiple translators at the client end, might not easily
> fit this model, but maybe it is something to think about?


It's an interesting idea.  You're also right that there are some issues with
the stackable translator model and so on.  Porting all of that code into the
kernel would require an almost suicidal suspension of all other development
activity while competitors continue to catch up on manageability or add other
features, so that's not very appealing.  Keeping it all out in user space with
a minimal kernel-interception layer would give us something better than FUSE (I
did something like this in a previous life BTW), but probably not enough better
to be compelling.  A hybrid "fast path, slow path" approach might work: keep
the code for common-case reads and writes in the kernel, and punt everything
else back up to user space, with hooks to disable the fast path when necessary
(e.g. during a config change).  OTOH, how would this be better than, say, an
iSCSI target, which is deployable today with essentially the same functionality
and even greater generality (e.g. to non-Linux platforms)?
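
To make that hybrid idea concrete, here is a minimal sketch of what a
dm-glusterfs fast-path target might look like.  Everything in it is
hypothetical: the glfs_* names, the "glusterfs" target name, the
single-backing-device table format, and the atomic flag are all illustrative,
and it's written against the device-mapper API as found in early-2013 kernels
(the .map signature has changed across versions).

    /* Hypothetical dm-glusterfs fast-path target -- a sketch, not a design. */
    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/bio.h>
    #include <linux/device-mapper.h>

    struct glfs_dev {
            struct dm_dev *backing;   /* local block device holding the image */
            atomic_t fast_path_ok;    /* cleared from user space on config change */
    };

    /* Constructor: parse the table line and open the backing device. */
    static int glfs_ctr(struct dm_target *ti, unsigned int argc, char **argv)
    {
            struct glfs_dev *gd;

            if (argc != 1) {
                    ti->error = "usage: <backing device path>";
                    return -EINVAL;
            }
            gd = kzalloc(sizeof(*gd), GFP_KERNEL);
            if (!gd) {
                    ti->error = "out of memory";
                    return -ENOMEM;
            }
            if (dm_get_device(ti, argv[0], dm_table_get_mode(ti->table),
                              &gd->backing)) {
                    kfree(gd);
                    ti->error = "backing device lookup failed";
                    return -EINVAL;
            }
            atomic_set(&gd->fast_path_ok, 1);
            ti->private = gd;
            return 0;
    }

    static void glfs_dtr(struct dm_target *ti)
    {
            struct glfs_dev *gd = ti->private;

            dm_put_device(ti, gd->backing);
            kfree(gd);
    }

    static int glfs_map(struct dm_target *ti, struct bio *bio,
                        union map_info *map_context)
    {
            struct glfs_dev *gd = ti->private;

            if (atomic_read(&gd->fast_path_ok)) {
                    /* Common-case I/O: remap straight to the backing device
                     * (1:1 sector mapping assumed for simplicity). */
                    bio->bi_bdev = gd->backing->bdev;
                    return DM_MAPIO_REMAPPED;
            }
            /* Slow path: a real implementation would queue the bio for the
             * user-space daemon here; this sketch just fails the I/O. */
            bio_endio(bio, -EIO);
            return DM_MAPIO_SUBMITTED;
    }

    static struct target_type glfs_target = {
            .name    = "glusterfs",
            .version = {0, 0, 1},
            .module  = THIS_MODULE,
            .ctr     = glfs_ctr,
            .dtr     = glfs_dtr,
            .map     = glfs_map,
    };

    static int __init glfs_init(void)
    {
            return dm_register_target(&glfs_target);
    }

    static void __exit glfs_exit(void)
    {
            dm_unregister_target(&glfs_target);
    }

    module_init(glfs_init);
    module_exit(glfs_exit);
    MODULE_LICENSE("GPL");

If such a target existed, it would be wired up with something like
dmsetup create vm0 --table "0 <sectors> glusterfs /dev/xxx", with the
management daemon clearing fast_path_ok before any graph change and
re-enabling it afterward.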

It's good to think about these things.  We could implement ten other
alternative access mechanisms (Apache/nginx modules, anyone?) and still burn
fewer resources than we would with "just put it all in the kernel" inanity.  I
tried one of our much-touted alternatives recently and, despite its having a
kernel client, it achieved less than 1/3 of our performance on this kind of
workload.  If we want to eliminate sources of overhead, we need to address more
than just that one (the kernel/user-space crossing).


