[Gluster-devel] Sharding - Inode write fops - recoverability from failures - design

Krutika Dhananjay kdhananj at redhat.com
Tue Feb 24 05:06:06 UTC 2015


----- Original Message -----

> From: "Vijay Bellur" <vbellur at redhat.com>
> To: "Krutika Dhananjay" <kdhananj at redhat.com>, "Gluster Devel"
> <gluster-devel at gluster.org>
> Sent: Monday, February 23, 2015 5:25:57 PM
> Subject: Re: [Gluster-devel] Sharding - Inode write fops - recoverability
> from failures - design

> On 02/22/2015 06:08 PM, Krutika Dhananjay wrote:
> > Hi,
> >
> > Please find the design doc for one of the problems in sharding which
> > Pranith and I are trying to solve and its solution @
> > http://review.gluster.org/#/c/9723/1.
> > Reviews and feedback are much appreciated.
> >

> Can this feature be made optional? I think there are use cases like
> virtual machine image storage, hdfs etc. where the number of metadata
> queries might not be very high. It would be an acceptable tradeoff in
> such cases to not be very efficient for answering metadata queries but
> be very efficient for data operations.

> IOW, can we have two possible modes of operation for the sharding
> translator to answer metadata queries?

> 1. One that behaves like a regular filesystem where we expect a mix of
> data and metadata operations. Your document seems to cover that part
> well. We can look at optimizing behavior for multi-threaded single
> writer use cases after an initial implementation is in place. Techniques
> like eager locking can be applied here.

> 2. Another mode where we do not expect a lot of metadata queries. In
> this mode, we can visit all nodes where we have shards to answer these
> queries.
But for the sharding translator to be able to visit all shards, it needs to know the index of the last shard.
Without that, it would never know when to stop looking up the individual shards. So we would still need to
maintain the size attribute on each file.
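To make the dependency concrete, here is a minimal sketch (plain C, not actual shard translator code): given
the file size maintained as an attribute on the base file and the configured shard block size, the translator
can derive the index of the last shard and bound its "visit all shards" loop. The block size and the
/.shard/<gfid>.<n> path layout below are illustrative assumptions, not the finalized design.

    /*
     * Sketch only: derive the highest shard index from the stored file
     * size so a metadata query that walks all shards knows when to stop.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    #define SHARD_BLOCK_SIZE (4ULL * 1024 * 1024)   /* e.g. 4MB shards (assumed) */

    /* Highest shard index for a file of 'size' bytes; block 0 is the base file. */
    static uint64_t
    shard_last_index (uint64_t size, uint64_t block_size)
    {
            if (size == 0)
                    return 0;
            return (size - 1) / block_size;
    }

    int
    main (void)
    {
            uint64_t size = 21ULL * 1024 * 1024;  /* pretend this came from the size attribute */
            uint64_t last = shard_last_index (size, SHARD_BLOCK_SIZE);
            uint64_t i;

            /* Without the stored size, this loop has no upper bound. */
            for (i = 1; i <= last; i++)
                    printf ("/.shard/<gfid>.%" PRIu64 "\n", i);

            return 0;
    }

The point being: the loop bound comes entirely from the size attribute, which is why we cannot drop it even
in a mode that is willing to fan out to all shards for metadata queries.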

-Krutika 

> -Vijay