[Gluster-users] sharding in glusterfs

Pranith Kumar Karampuri pkarampu at redhat.com
Tue Sep 18 05:30:31 UTC 2018


On Mon, Sep 17, 2018 at 4:14 AM Ashayam Gupta <ashayam.gupta at alpha-grep.com>
wrote:

> Hi All,
>
> We are currently using GlusterFS to store large files with a
> write-once, multiple-concurrent-reads access pattern, and we are
> interested in understanding the sharding feature for this use case.
>
> So far, from the talk given by the developer [
> https://www.youtube.com/watch?v=aAlLy9k65Gw] and the GitHub issue [
> https://github.com/gluster/glusterfs/issues/290], we know that it was
> developed with large VM images as the primary use case. The second link
> does discuss more general-purpose usage, but it is not clear to us
> whether there are issues when sharding is used for large non-VM files
> (which is our use case).
>
> Therefore it would be helpful to have some pointers or more information
> about the general-purpose use of sharding, and any shortcomings we
> should expect when using it for large non-VM files with write-once and
> multiple concurrent reads. It would also be very helpful if you could
> suggest the best approach/settings for our use case.
>

Sharding was developed for big-file use cases and at the moment it only
supports single-writer workloads. I have also added the sharding
maintainers to this thread. It would help if you could give a bit more
detail about the access pattern, i.e. how many mounts are used for
writing and how many for reading. I am assuming that write-once and
multiple concurrent reads means reads will not happen until the file is
completely written. Could you explain the workload in a bit more detail?
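
For reference, here is a minimal sketch of how sharding is enabled per
volume (the volume name "testvol" and the block size below are
placeholders; tune the block size to your file sizes):

    # Enable the shard translator on a volume
    gluster volume set testvol features.shard on

    # Shard block size (default is 64MB; VM workloads often use 512MB)
    gluster volume set testvol features.shard-block-size 64MB

    # Verify the settings
    gluster volume get testvol features.shard
    gluster volume get testvol features.shard-block-size

Note that only files created after sharding is enabled are split into
shards; the shards beyond the first block are stored under the hidden
.shard directory on the bricks.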


>
> Thanks
> Ashayam Gupta



-- 
Pranith

