<html>
<head>
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi Yuhao,<br>
<br>
Since RAM is relatively inexpensive, if you have another 64GB
lying around, why not add it to bring the total to 128GB to serve
your 482TB?<br>
<br>
From what I have read, there appears to be a general recommendation
of 1GB of RAM per 1TB of disk.<br>
</p>
In our setup we use a RAID card to build a RAID 10 array and present
it to Gluster as a single brick. I have seen throughput drop at each
layer: writing directly to the RAID 10 array gives around 250MB/sec,
writing through the Gluster-mounted volume about 100MB/sec, and with
NFS on top it falls to 30MB/sec.<br>
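<br>
For what it's worth, this is roughly how each layer can be measured
with dd (the mount points below are illustrative examples, not our
actual paths):<br>
<pre>
# assumed layout: /mnt/raid10 = direct RAID 10 mount,
# /mnt/gluster = FUSE-mounted Gluster volume, /mnt/nfs = NFS on top
# conv=fsync forces a flush at the end so cached writes are counted
dd if=/dev/zero of=/mnt/raid10/testfile bs=1M count=4096 conv=fsync
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=4096 conv=fsync
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=4096 conv=fsync
</pre>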
<br>
Regards,<br>
Edy<br>
<br>
<div class="moz-cite-prefix">On 8/8/2018 1:49 PM, Yuhao Zhang wrote:<br>
</div>
<blockquote type="cite"
cite="mid:AEE92B0D-ED09-45DD-BE6E-B254EBF8BD8B@gmail.com">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
Hi Xavi,
<div class=""><br class="">
</div>
<div class="">Thank you for the suggestions, these are extremely
helpful. I haven't thought it could be ZFS problem. I went back
and checked a longer monitoring window and now I can see a
pattern. Please see this attached Grafana screenshot (also
available here:�<a href="https://cl.ly/070J2y3n1u0F" class=""
moz-do-not-send="true">https://cl.ly/070J2y3n1u0F</a>�. Note
that the data gaps were when I took down the server for
rebooting):</div>
<div class=""><br class="">
</div>
<div class=""><img apple-inline="yes"
id="E2C98C60-010D-41C6-A758-54A51DE54118"
src="cid:part2.44048C42.E1F87799@edylie.net" class=""
height="586" width="1249"><br class="">
<div><br class="">
</div>
<div>Between 8/4 and 8/6, I ran two transfer tests and
experienced two Gluster hangs: one during the first transfer,
and another shortly after the second transfer. I marked both
with pink lines.</div>
<div><br class="">
</div>
<div>It looks like free memory was almost exhausted during my
transfer tests. The system shows very high cached memory, which
I think is due to the ZFS ARC. However, I am under the
impression that ZFS will release space from the ARC when it
observes low available system memory. I am not sure why it
didn't do that.</div>
<div><br class="">
</div>
<div>I didn't tweak any related ZFS parameters. zfs_arc_max was
set to 0 (the default value). According to the documentation, it
is the "Max arc size of ARC in bytes. If set to 0 then it will
consume 1/2 of system RAM." So it appears this default didn't
work as expected.</div>
<div><br class="">
</div>
<div>When the server was under heavy IO, used memory instead
decreased, which I can't explain.</div>
<div><br class="">
</div>
<div>May I ask if you, or anyone else in this group, has
recommendations on ZFS settings for my setup? My server has 64GB
of physical memory and 150GB of SSD space reserved for L2ARC.
The zpool has 6 raidz2 vdevs, each with 10 x 12TB hard drives.
Total usable space in the zpool is 482TB.</div>
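<div><br class="">
</div>
<div>(As a sanity check on the numbers: each 10-drive raidz2
vdev stores data on roughly 8 of its drives, so 6 x 8 x 12TB is
about 576TB of raw data capacity; after TB-to-TiB conversion and
raidz allocation overhead, the reported ~482TB usable is in the
expected range.)</div>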
<div><br class="">
</div>
<div>Thank you,</div>
<div>Yuhao</div>
<div><br class="">
</div>
<div>
<blockquote type="cite" class="">
<div class="">On Aug 7, 2018, at 01:36, Xavi Hernandez <<a
href="mailto:jahernan@redhat.com" class=""
moz-do-not-send="true">jahernan@redhat.com</a>>
wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="auto" class="">
<div class="">Hi Yuhao,�<br class="">
<br class="">
<div class="gmail_quote">
<div dir="ltr" class="">On Mon, 6 Aug 2018, 15:26
Yuhao Zhang, <<a href="mailto:zzyzxd@gmail.com"
class="" moz-do-not-send="true">zzyzxd@gmail.com</a>>
wrote:<br class="">
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div
style="word-wrap:break-word;line-break:after-white-space"
class="">
<div class="">Hello,</div>
<div class=""><br class="">
</div>
I just experienced another hang one hour ago,
and the server was not even under heavy IO.
<div class=""><br class="">
</div>
<div class="">Atin, I attached the process
monitoring results and another statedump.</div>
<div class=""><br class="">
</div>
<div class="">Xavi, ZFS was fine, during the
hanging, I can still write directly to the ZFS
volume. My ZFS version: ZFS: Loaded module
v0.6.5.6-0ubuntu16, ZFS pool version 5000, ZFS
filesystem version 5</div>
</div>
</blockquote>
</div>
</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">I highly recommend you to
upgrade to version 0.6.5.8 at least. It fixes a kernel
panic that can happen when used with gluster. However
this is not your current problem.</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">Top statistics show low
available memory and high CPU utilization of kswapd
process (along with one of the gluster processes).
I've seen frequent memory management problems with
ZFS. Have you configured any ZFS parameters? It's
highly recommendable to tweak some memory limits.</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">If that were the problem,
there's one thing that should alleviate it (and see if
it could be related):</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">echo 3
>/proc/sys/vm/drop_caches</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">This should be done on all
bricks from time to time. You can wait until the
problem appears, but in this case the recovery time
can be larger.�</div>
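<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">For example, one way to automate it
(the hourly schedule is only an illustration; adjust it to your
workload):</div>
<pre>
# /etc/cron.d/drop-caches (example): flush caches hourly on each brick host
0 * * * * root /bin/sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
</pre>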
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">I think this should fix the
high CPU usage of kswapd. If so, we'll need to tweak
some ZFS parameters.</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">I'm not sure if the high CPU
usage of gluster could be related to this or not.</div>
<div dir="auto" class=""><br class="">
</div>
<div dir="auto" class="">Xavi</div>
<div dir="auto" class="">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div
style="word-wrap:break-word;line-break:after-white-space"
class="">
<div class="">
<div class=""><br class="">
</div>
<div class="">Thank you,</div>
<div class="">Yuhao</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</div>
<br class="">
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Gluster-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>
<a class="moz-txt-link-freetext" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a></pre>
</blockquote>
<br>
</body>
</html>