[Gluster-users] State of Gluster project

Strahil Nikolov hunter86_bg at yahoo.com
Mon Jun 22 16:19:57 UTC 2020


Hey Erik,

I actually meant that there is no point in using RAID controllers with fast storage like SAS SSDs or NVMe drives.
They (the controllers) usually have 1-2 GB of RAM to buffer writes until the onboard RISC processor analyzes the requests and reorders them. That reordering helps spinning disks, but flash doesn't need it - thus JBOD (in 'replica 3') makes much more sense for any kind of software-defined storage (no matter whether Gluster, Ceph or Lustre).
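For illustration, a 'replica 3' volume over plain JBOD bricks is as simple as this (the hostnames and brick paths are made-up examples):

    gluster volume create gv0 replica 3 \
        server1:/data/brick1/gv0 \
        server2:/data/brick1/gv0 \
        server3:/data/brick1/gv0
    gluster volume start gv0

And a failed disk is then handled at the Gluster layer with replace-brick (followed by self-heal) instead of a controller rebuild:

    gluster volume replace-brick gv0 \
        server1:/data/brick1/gv0 server1:/data/brick2/gv0 \
        commit force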

Of course, I could be wrong and I would be glad to read benchmark results on this topic.
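If someone does test it, a simple fio job against the mounted volume - once with JBOD bricks and once with the same disks behind a controller - would already tell a lot (the mount path and job parameters below are only an example, not a tuned profile):

    fio --name=randwrite --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
        --size=1G --runtime=60 --time_based --group_reporting \
        --filename=/mnt/gv0/fio.test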

Best Regards,
Strahil Nikolov




On 22 June 2020 18:48:43 GMT+03:00, Erik Jacobson <erik.jacobson at hpe.com> wrote:
>> For NVMe/SSD - a RAID controller is pointless, so JBOD makes the most
>> sense.
>
>I am game for an education lesson here. We're still using spinning
>drives with big RAID caches, but we keep discussing SSD in the context
>of RAID. I have read that for many real-world workloads, RAID0 makes
>no sense with modern SSDs. I get that part. But if your concern is
>reliability and reducing the need to mess with Gluster to recover from
>a drive failure, a RAID1 or RAID10 (or some other level with
>redundancy) would seem to at least make sense from that perspective.
>
>Was your answer about performance? Or am I missing something that
>makes RAID for redundancy a bad choice with SSDs?
>
>Thanks again as always,
>
>Erik

