<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Gagan:</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
As a separate experiment, I actually tried what Alexander mentions below.
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
I have two AMD Ryzen 9 5950X compute nodes and one AMD Ryzen 9 7950X compute node. Each node has 128 GB of RAM and a Mellanox ConnectX-4 100 Gbps InfiniBand network card, and for storage I was using (I believe) one Intel 670p 1 TB NVMe SSD and two Silicon Power US70
1 TB NVMe SSDs.
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
From the Ceph perspective, giving it "this much hardware" didn't significantly improve performance.<br>
<br>
Of course, 100 Gbps InfiniBand is faster than 1 GbE, but Ceph barely even noticed.<br>
<br>
Where it played a more significant role was in live migrations of VMs and LXC containers, where the 100 Gbps IB network was used mainly for transferring RAM states between nodes rather than the VMs' virtual disks. Ceph won't help
with the RAM state. And since the VM disks were already sitting on the shared, distributed Ceph storage, I wasn't moving any disks over 100 Gbps IB, just the RAM state.
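<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
For what it's worth, if anyone wants that migration traffic to reliably use the fast network, Proxmox lets you pin it to a subnet in /etc/pve/datacenter.cfg. A minimal sketch; the 10.10.10.0/24 subnet is just a placeholder for whatever your IB network uses:</div>
<pre style="font-family: Consolas, 'Courier New', monospace; font-size: 10pt;">
# /etc/pve/datacenter.cfg  (sketch; substitute your own IB subnet)
# Send live-migration traffic (mostly the RAM state) over the IB network.
# "insecure" skips the SSH tunnel for more throughput on a trusted,
# isolated network; use "secure" otherwise.
migration: insecure,network=10.10.10.0/24
</pre>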
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
So depending on how you've set it up, throwing more hardware at Ceph might not improve performance much.
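<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
A quick way to check whether Ceph actually notices extra hardware is to run the rados bench tool that ships with it, before and after the change. A rough sketch, using a throwaway pool name of my own choosing:</div>
<pre style="font-family: Consolas, 'Courier New', monospace; font-size: 10pt;">
# Write to a scratch pool for 30 seconds, keeping the objects for the read test.
ceph osd pool create benchpool
rados bench -p benchpool 30 write --no-cleanup
# Sequential read of the objects written above.
rados bench -p benchpool 30 seq
# Remove the benchmark objects and drop the pool when done.
rados -p benchpool cleanup
ceph osd pool delete benchpool benchpool --yes-i-really-really-mean-it
</pre>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
If those numbers barely move after a network upgrade, the bottleneck is likely elsewhere (OSD latency, replication overhead, CPU).</div>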
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Thanks.</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Sincerely,</div>
<div class="elementToProof" style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Ewen</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Gluster-users <gluster-users-bounces@gluster.org> on behalf of Alexander Schreiber <als@thangorodrim.ch><br>
<b>Sent:</b> April 17, 2025 7:54 AM<br>
<b>To:</b> gagan tiwari <gagan.tiwari@mathisys-india.com><br>
<b>Cc:</b> gluster-users@gluster.org <gluster-users@gluster.org><br>
<b>Subject:</b> Re: [Gluster-users] Gluster with ZFS</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">On Thu, Apr 17, 2025 at 02:44:28PM +0530, gagan tiwari wrote:<br>
> Hi Alexander,<br>
> Thanks for the update. Initially, I also<br>
> thought of deploying Ceph, but Ceph is quite difficult to set up and manage.<br>
> Moreover, it's also hardware demanding. I think it's most suitable for a<br>
> very large set-up with hundreds of clients.<br>
<br>
I strongly disagree. I run a small (3 nodes) Ceph cluster in my homelab<br>
and following the official docs it's pretty easy to set up. The hardware<br>
demands mostly depend on what performance one needs - the more performance<br>
(e.g. NVMe storage and 100 GBit networking) one wants, the more powerful<br>
hardware one has to provide, as usual. My nodes are Intel D-1521 with<br>
64G of (ECC, of course) RAM and ConnectX-4 cards running at 10 GBit,<br>
with storage on HDDs, which provides reasonable performance for my needs - not<br>
an HPC setup, of course.<br>
<br>
> What do you think of MooseFS ? Have you or anyone else tried MooseFS. If<br>
> yes, how was its performance?<br>
<br>
Last time I looked, MooseFS needs a commercial license for the full feature<br>
set (e.g. highly available metadata (_that_ is not negotiable!), erasure<br>
coding, Windows clients), which killed it for my non-commercial use case.<br>
<br>
Kind regards,<br>
Alex.<br>
-- <br>
"Opportunity is missed by most people because it is dressed in overalls and<br>
looks like work." -- Thomas A. Edison<br>
</div>
</span></font></div>
</body>
</html>