Georep exposes another problem: when using Gluster as storage for VMs, each VM disk is saved as a qcow file. Changes happen inside the qcow image, so rsync has to sync the whole file every time.

A workaround would be sharding, since rsync then only has to sync the shards that actually changed, but I don't think this is a good solution.
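To see the difference sharding makes, a back-of-the-envelope sketch (the image size, daily change rate, and shard size below are made-up assumptions, not measurements):

# Rough estimate of the bytes one sync pass must move when the unit of
# transfer is the whole qcow image vs. individual shards.
# Image size, daily change, and shard size are illustrative assumptions.
import math

GiB = 1024 ** 3
MiB = 1024 ** 2

image_size   = 100 * GiB   # assumed size of one VM image
daily_change = 2 * GiB     # assumed bytes rewritten inside it per day
shard_size   = 64 * MiB    # gluster's default shard block size

# Whole-file sync: the entire image is shipped again.
whole_file_bytes = image_size

# Shard-level sync: only dirty shards move; worst case, every changed
# run of bytes lands in its own shard.
dirty_shards  = min(math.ceil(daily_change / shard_size),
                    math.ceil(image_size / shard_size))
sharded_bytes = dirty_shards * shard_size

print(f"whole file: {whole_file_bytes / GiB:6.1f} GiB per pass")
print(f"sharded   : {sharded_bytes / GiB:6.1f} GiB per pass "
      f"({dirty_shards} shards of {shard_size // MiB} MiB)")

Under those assumptions a pass moves 2 GiB instead of 100 GiB, which is why sharding helps georep even though it doesn't fix the underlying problem.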
On 23 Mar 2017 at 20:33, Joe Julian <joe@julianfamily.org> wrote:
In many cases, a full backup set is just not feasible. Georep to the same or a different DC may be an option if the bandwidth can keep up with the change set. If not, consider breaking the data up into smaller, more manageable volumes where you keep only a smaller set of critical data, and back just that up. Perhaps an object store (Swift?) might handle fault-tolerant distribution better for some workloads.

There's no one right answer.
On 03/23/17 12:23, Gandalf Corvotempesta wrote:
Backing up from inside each VM doesn't solve the problem. If you have to back up 500 VMs you need more than a day, and what if you have to restore the whole Gluster storage?

How many days do you need to restore 1PB?
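To put rough numbers on that, a minimal sketch of restore time as a function of aggregate bandwidth (the rates below are illustrative assumptions):

# How many days does restoring 1 PB take at a given aggregate bandwidth?
# The bandwidth figures below are illustrative assumptions.

PB = 10 ** 15  # one petabyte in bytes

def restore_days(total_bytes: int, gb_per_sec: float) -> float:
    """Days needed to move total_bytes at a sustained rate of gb_per_sec GB/s."""
    return total_bytes / (gb_per_sec * 10 ** 9) / 86400

# 1.25 GB/s = one saturated 10GbE link; 10 and 25 GB/s = 8 and 20 links
for rate in (1.25, 10.0, 25.0):
    print(f"{rate:5.2f} GB/s -> {restore_days(PB, rate):4.1f} days")

Even with twenty saturated 10GbE links that's about half a day of outage; with a single link, over a week.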
Probably the only solution is georep to a similar cluster in the same datacenter/rack, ready to become the master storage. In that case you don't need to restore anything, since the data are already there, only a little bit back in time, but this doubles the TCO.
On 23 Mar 2017 at 18:39, Serkan Çoban <cobanserkan@gmail.com> wrote:
Assuming a backup window of 12 hours, you need to send data at 25GB/s to the backup solution. Using 10G Ethernet on the hosts, you need at least 25 hosts to handle 25GB/s. You can create an EC Gluster cluster that can handle these rates, or you can just back up the valuable data from inside the VMs using open source backup tools like borg, attic, restic, etc.
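The arithmetic behind those figures, as a quick sketch (the 80% usable-line-rate factor is an assumption; the rest follows the numbers above):

# Bandwidth needed to push 1 PB through a 12-hour backup window, and
# how many 10GbE hosts that takes. The 80% usable-line-rate factor is
# an assumption; the window and link speed follow the numbers above.
import math

PB       = 10 ** 15      # one petabyte in bytes
window_s = 12 * 3600     # 12-hour backup window
overhead = 0.8           # assumed usable fraction of a 10GbE link

required_gbs = PB / window_s / 10 ** 9        # aggregate GB/s needed
per_host_gbs = 10 / 8 * overhead              # usable GB/s per 10GbE host
hosts        = math.ceil(required_gbs / per_host_gbs)

print(f"required aggregate: {required_gbs:.1f} GB/s")  # ~23.1, call it 25
print(f"usable per host   : {per_host_gbs:.2f} GB/s")
print(f"hosts needed      : {hosts}")                  # ~24, so "at least 25"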
On Thu, Mar 23, 2017 at 7:48 PM, Gandalf Corvotempesta <gandalf.corvotempesta@gmail.com> wrote:
> Let's assume 1PB of storage full of VM images, with each brick on ZFS,
> replica 3, sharding enabled.
>
> How do you backup/restore that amount of data?
>
> Backing up daily is impossible: you'll never finish one backup before
> the next one starts (in other words, you need more than 24 hours).
>
> Restoring is even worse: you need more than 24 hours with the whole
> cluster down.
>
> You can't rely on ZFS snapshots because of sharding (a snapshot taken
> from one node is useless without the matching shards from all the
> other nodes), and you still have the same restore speed.
>
> How do you back this up?
>
> Even georep isn't enough if you have to restore the whole storage in
> case of disaster.
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users