<div dir="ltr">Please find our workload details as requested:<div><br></div><div>* Write mounts: only 1 write mount point as of now.</div><div>* Read mounts: since we auto-scale our machines, this can grow to 300-400 machines during peak times.</div><div>* Regarding "multiple concurrent reads means that reads will not happen until the file is completely written to": yes, in our current scenario we can ensure that this is indeed the case.</div><div><br></div><div>Since you say sharding only supports single-writer workloads, we would like to understand the following scenarios with respect to multiple writers and the current behaviour of glusterfs with sharding:</div><div><ul><li>Multiple writers write to different files</li><li>Multiple writers write to the same file</li><ul><li>they write to the same file but to different shards of that file</li><li>they write to the same file (no guarantee that they write to different shards)</li></ul></ul><div>There might be more cases known to you; it would be helpful if you could describe those scenarios as well, or point us to the relevant documentation.<br>It would also be helpful if you could suggest the most stable version of glusterfs with the sharding feature, since we would like to use this in production.</div><div><br></div><div>Thanks</div><div>Ashayam Gupta</div></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Sep 18, 2018 at 11:00 AM Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Mon, Sep 17, 2018 at 4:14 AM Ashayam Gupta <<a href="mailto:ashayam.gupta@alpha-grep.com" target="_blank">ashayam.gupta@alpha-grep.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">Hi 
All,<div><br></div><div>We are currently using glusterfs to store large files with a write-once, multiple-concurrent-reads access pattern, and we are interested in understanding whether the sharding feature of glusterfs fits our use case.<br><br>From the talk given by the developer [<a href="https://www.youtube.com/watch?v=aAlLy9k65Gw" target="_blank">https://www.youtube.com/watch?v=aAlLy9k65Gw</a>] and the git issue [<a href="https://github.com/gluster/glusterfs/issues/290" target="_blank">https://github.com/gluster/glusterfs/issues/290</a>], we know that sharding was developed with large VM images as the primary use case. The second link does discuss more general-purpose usage, but it is not clear to us whether there are issues when sharding is used for large non-VM-image files [which is our use case].</div><div><br></div><div>It would therefore be helpful to have some pointers or more information about the general use-case scenario for sharding, and any shortcomings, should we use it for our scenario of large non-VM files with write-once and multiple concurrent reads. It would also be very helpful if you could suggest the best approach/settings for our use case.</div></div></div></div></blockquote><div><br></div><div>Sharding was developed for big-file use cases and at the moment only supports single-writer workloads. I have also added the sharding maintainers to this thread. It would help if you could give a bit of detail about the access pattern with respect to the number of mounts used for writing/reading. I am assuming that "write-once and multiple concurrent reads" means that reads will not happen until the file is completely written. Could you explain a bit more about the workload?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><br></div><div>Thanks</div><div>Ashayam Gupta</div></div></div></div>
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="m_546772857348082071gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div></div>
</blockquote></div>
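[Editor's note for readers of this thread: sharding is enabled per volume through the gluster CLI. A minimal sketch, assuming a hypothetical volume named <code>gv0</code>; the shard size shown is an illustrative choice, not a recommendation from this thread:]

```shell
# Enable the sharding translator on an existing volume (hypothetical name: gv0).
gluster volume set gv0 features.shard on

# Optionally set the shard size; files larger than this are split into
# shard-block-size pieces. 64MB is the default; VM-image deployments
# commonly use larger values such as 512MB.
gluster volume set gv0 features.shard-block-size 512MB

# Inspect the resulting volume options.
gluster volume info gv0
```

Note that sharding only affects files created after it is enabled; existing files on the volume are not retroactively split into shards.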