[Gluster-users] Gluster with ZFS

Karli Sjöberg karli at inparadise.se
Thu Apr 17 14:38:42 UTC 2025


On Thu, 2025-04-17 at 14:17 +0000, Ewen Chan wrote:
> Gagan:
> 
> Throwing my $0.02 in --
> 
> It depends on the system environment in which you are planning to
> deploy Gluster (and/or Ceph).
> 
> I have Ceph running on my three-node HA Proxmox cluster, built from
> three OASLOA mini PCs that only have an Intel N95 processor
> (4-core/4-thread), 16 GB of RAM, and a cheap Micro Center
> store-brand 512 GB NVMe M.2 2230 SSD each, and the Ceph cluster has
> been running without any issues.
> 
> As someone else mentioned, to state or claim that Ceph is "hardware
> demanding" isn't wholly accurate.
> 
> As for management, you can install the ceph-mgr-dashboard package
> (apalrd's adventures has put together a video on YouTube which goes
> over the installation process for this package if you're running
> Debian and/or Proxmox, which runs on top of Debian anyway).
> 
> From there, you can use said Ceph manager dashboard to do everything
> else, so that you don't have to deploy Ceph via the CLI.
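> 
> In case it saves you a search, it's only a handful of commands (a
> rough sketch from memory; the exact steps can vary a bit between
> Ceph releases, and the user name and password file below are just
> placeholders):
> 
>     # on a node running a ceph-mgr daemon (Debian/Proxmox)
>     apt install ceph-mgr-dashboard
> 
>     # enable the dashboard module and give it a self-signed cert
>     ceph mgr module enable dashboard
>     ceph dashboard create-self-signed-cert
> 
>     # create an admin user; the password is read from a file
>     echo 'changeme' > /root/dashboard-pass
>     ceph dashboard ac-user-create admin -i /root/dashboard-pass administrator
> 
>     # show the URL the dashboard is listening on
>     ceph mgr services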
> 
> I was able to create my erasure-coded CRUSH rules using the
> dashboard, and then create my RBD pool and my CephFS pools. (The
> metadata pool needs a replicated CRUSH rule, but the data pool
> itself can use an erasure-coded CRUSH rule.)
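> 
> For reference, the CLI equivalent of what the dashboard does there
> is roughly the following (pool, profile and rule names are made up,
> and k/m have to fit the number of hosts you actually have; k=2, m=1
> is about the minimum for a three-node cluster):
> 
>     # erasure-code profile and a matching CRUSH rule
>     ceph osd erasure-code-profile set ec21 k=2 m=1
>     ceph osd crush rule create-erasure ec21-rule ec21
> 
>     # EC data pool (overwrites must be allowed for CephFS/RBD),
>     # replicated metadata pool, then the filesystem itself
>     ceph osd pool create cephfs_data 64 64 erasure ec21
>     ceph osd pool set cephfs_data allow_ec_overwrites true
>     ceph osd pool create cephfs_metadata 64 64 replicated
>     ceph fs new cephfs cephfs_metadata cephfs_data --force
> 
> (RBD on an EC pool works the same way: the image itself sits in a
> replicated pool and its data is placed on the EC pool via the
> --data-pool option of "rbd create".)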
> 
> If your environment is such that you can do this, then Ceph might be
> a better option for you.
> 
> If you look at the benchmarks that tech YouTuber ElectronicsWizardry
> ran, ZFS is actually not all that performant. What ZFS is good for
> are some of its other features, like snapshots, replication, and its
> copy-on-write scheme for modifying files (which, again, based on the
> testing that ElectronicsWizardry ran, does indeed create a write
> amplification effect as a result of the copy-on-write architecture).
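> 
> For completeness, snapshots and replication on the ZFS side boil
> down to something like this (just a sketch; pool and dataset names
> are made up):
> 
>     # point-in-time snapshot of a dataset
>     zfs snapshot tank/data@2025-04-17
> 
>     # replicate it to another box (later runs can use "zfs send -i"
>     # for incremental sends)
>     zfs send tank/data@2025-04-17 | ssh backuphost zfs receive -F backup/data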

And just adding to this, in case you didn't know: Ceph also has
support for both snapshots and replication.
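
For example (off the top of my head; pool, image and path names are
made up):

    # snapshot an RBD image
    rbd snap create mypool/myimage@before-upgrade

    # CephFS snapshots are just a mkdir in the hidden .snap directory
    mkdir /mnt/cephfs/somedir/.snap/before-upgrade

    # and the replication level within the cluster is one pool setting
    ceph osd pool set mypool size 3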

/K

> 
> Conversely, if you're looking for reliability: the more nodes you
> have in the Ceph cluster, the more reliable and resilient to
> failures the Ceph backend will be.
> 
> Thanks.
> 
> Sincerely,
> Ewen
> ________________________________
> From: Gluster-users <gluster-users-bounces at gluster.org> on behalf of
> gagan tiwari <gagan.tiwari at mathisys-india.com>
> Sent: April 17, 2025 2:14 AM
> To: Alexander Schreiber <als at thangorodrim.ch>
> Cc: gluster-users at gluster.org <gluster-users at gluster.org>
> Subject: Re: [Gluster-users] Gluster with ZFS
> 
> Hi Alexander,
>                               Thanks for the update. Initially, I
> also thought of deploying Ceph, but Ceph is quite difficult to set
> up and manage. Moreover, it's also hardware demanding. I think it's
> most suitable for a very large set-up with hundreds of clients.
> 
> What do you think of MooseFS? Have you or anyone else tried
> MooseFS? If yes, how was its performance?
> 
> Thanks,
> Gagan
> 
> 
> 
> On Thu, Apr 17, 2025 at 1:45 PM Alexander Schreiber
> <als at thangorodrim.ch> wrote:
> On Thu, Apr 17, 2025 at 09:40:08AM +0530, gagan tiwari wrote:
> > Hi Guys,
> >                  We have been using OpenZFS in our HPC environment
> > for quite some time, and OpenZFS has been working fine.
> > 
> > But we are now running into scalability issues, since OpenZFS
> > can't be scaled out.
> 
> Since ZFS is a local FS, you are essentially limited to how much
> storage you can stick into one machine, yes.
> 
> > So, I am planning to use Gluster on top of OpenZFS.
> 
> I don't think that will give you the kind of long-term scalability
> you might expect.
> 
> > So, I wanted to know if anyone has tried it. If yes, how did it
> > go, and is there any deployment guide for it?
> 
> I'm running GlusterFS in a small cluster for backup storage.
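> 
> There isn't much ZFS-specific about it, by the way: Gluster bricks
> are just directories, so on top of an existing pool the setup is
> roughly the following (only a sketch; host, pool and volume names
> are made up):
> 
>     # on every node: one dataset per brick (xattr=sa and posix ACLs
>     # are commonly recommended for Gluster bricks on ZFS)
>     zfs create -o xattr=sa -o acltype=posixacl tank/brick1
> 
>     # from one node: form the trusted pool, then create and start
>     # a 3-way replicated volume
>     gluster peer probe node2
>     gluster peer probe node3
>     gluster volume create gv0 replica 3 \
>         node1:/tank/brick1/data node2:/tank/brick1/data \
>         node3:/tank/brick1/data
>     gluster volume start gv0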
> 
> > We have an HPC environment. Data security and extremely fast read
> > performance are very important for us.
> > 
> > So, please advise.
> 
> For that use case I would actually recommend Ceph over GlusterFS,
> since that can be pretty easily scaled out to very large setups,
> e.g. CERN is using multiple Ceph clusters sized at several PB, and
> their use cases usually include very fast I/O.
> 
> Another concern is that Ceph is being quite actively developed,
> whereas GlusterFS development seems to have slowed down to ... not
> much, these days.
> 
> Kind regards,
>             Alex.
> --
> "Opportunity is missed by most people because it is dressed in
> overalls and
>  looks like work."                                      -- Thomas A.
> Edison