[Gluster-users] Ceph or Gluster for implementing big NAS

Vlad Kopylov vladkopy at gmail.com
Mon Nov 12 19:22:18 UTC 2018


The good thing about Gluster is that your files are stored as plain files. Whatever
happens, good old file access is still there - if you need a backup, or to rebuild
volumes - every replica brick has your files.
Contrast that with object (blue..something) storage that keeps metadata
separately: if that metadata gets lost or mixed up, you will be recovering your
data with a magnifying glass...
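To make the point concrete: on any replica brick host, the volume's files are ordinary files on disk. A small illustrative sketch (the brick path /data/brick1/gv0 is a hypothetical one chosen at volume-create time, not a Gluster default):

```shell
# The brick directory holds the volume's files as plain files,
# readable with normal tools even if Gluster itself is down:
ls -l /data/brick1/gv0/projects/

# So an ordinary file-level backup of the brick still captures the data:
rsync -a /data/brick1/gv0/ backup-host:/backups/gv0/
```

(Internal .glusterfs/ metadata lives alongside, but the user data itself is directly accessible.)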

If you go with the monster-VM approach, the hypervisor can use gfapi, which is a
little faster than Ceph in all simple tests. In truly distributed environments
(multiple buildings or datacenters), Ceph's read performance will kill the
cluster.
Ceph's CPU and memory consumption will surprise you compared to Gluster as well.
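For reference, gfapi access means qemu opens the image through libgfapi instead of going through a FUSE mount. In libvirt this is expressed as a network disk; a minimal config fragment, where the volume name gv0, image name, and host are assumptions:

```xml
<!-- libvirt domain XML fragment: qemu talks to Gluster via libgfapi,
     bypassing the FUSE client entirely. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gv0/vm1.qcow2'>
    <host name='gluster1.example.com' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```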

For a local file NAS (everything sitting in one room), something like BeeGFS or
LizardFS would be the best option.

v


On Mon, Nov 12, 2018 at 6:51 AM Premysl Kouril <premysl.kouril at gmail.com>
wrote:

> Hi,
>
> We are planning to build a NAS solution which will be primarily used via NFS
> and CIFS, with workloads ranging from various archival applications to more
> “real-time processing”. The NAS will not be used as block storage for
> virtual machines, so the access really will always be file-oriented.
>
> We are considering primarily two designs and I’d like to kindly ask for
> any thoughts, views, insights, experiences.
>
> Both designs utilize “distributed storage software at some level”. Both
> designs would be built from commodity servers and should scale as we grow.
> Both designs involve virtualization for instantiating "access virtual
> machines" which will be serving the NFS and CIFS protocol - so in this
> sense the access layer is decoupled from the data layer itself.
>
> First design is based on a distributed filesystem like Gluster or CephFS.
> We would deploy this software on those commodity servers and mount the
> resultant filesystem on the “access virtual machines” and they would be
> serving the mounted filesystem via NFS/CIFS.
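The first design above amounts to a few lines on each access VM: mount the distributed filesystem, then re-export it. A sketch, with all host, volume, path, and network names being illustrative assumptions:

```shell
# Mount a Gluster volume on the access VM via the FUSE client:
mount -t glusterfs gluster1.example.com:/gv0 /export/gv0

# /etc/exports - re-export the mounted tree over kernel NFS
# (fsid= is required when exporting a FUSE filesystem):
#   /export/gv0  10.0.0.0/24(rw,sync,no_subtree_check,fsid=1)

# smb.conf - and over CIFS via Samba:
#   [gv0]
#   path = /export/gv0
#   read only = no
```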
>
> Second design is based on distributed block storage using CEPH. So we
> would build distributed block storage on those commodity servers, and then,
> via virtualization (like OpenStack Cinder) we would allocate the block
> storage into the access VM. Inside the access VM we would deploy ZFS which
> would aggregate block storage into a single filesystem. And this filesystem
> would be served via NFS/CIFS from the very same VM.
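The second design, seen from inside the access VM once Cinder has attached the RBD-backed volumes, could look like the sketch below (device names, pool name, and the export network are assumptions):

```shell
# Cinder-attached Ceph RBD volumes typically appear as virtio disks,
# e.g. /dev/vdb and /dev/vdc. Aggregate them into one ZFS pool:
zpool create tank /dev/vdb /dev/vdc
zfs create tank/share

# ZFS can manage the NFS export itself:
zfs set sharenfs='rw=@10.0.0.0/24' tank/share
# (CIFS via Samba pointing at /tank/share, or "zfs set sharesmb=on".)
```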
>
> Any advice and insights highly appreciated
>
> Cheers,
>
> Prema
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
