[Gluster-users] A year's worth of Gluster
Nguyen Viet Cuong
mrcuongnv at gmail.com
Mon Dec 8 07:01:44 UTC 2014
Did you tune any of the performance translator options, such as
io-thread-count? If not, try increasing it from the default of 16 to 64.
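For example (a minimal sketch, assuming the volume is named "data"; substitute
your own volume name):

    gluster volume set data performance.io-thread-count 64
    # confirm the reconfigured value
    gluster volume info data | grep io-thread-count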
On Mon, Dec 8, 2014 at 12:10 PM, Andrew Smith <smith.andrew.james at gmail.com> wrote:
> QDR InfiniBand has a theoretical maximum of 40 Gbit/s, or about 4 GB/s.
> My LSI RAID controllers typically deliver about 0.5-1.0 GB/s
> for direct disk access.
> I have tested it many ways. I typically start jobs on many clients and
> measure the total network bandwidth on the servers by monitoring the
> totals in /proc/net/dev, or by counting the bytes on the clients. I can't
> get more than about 300 MB/s from each server. With a single job on
> a single client, I can't get more than about 100-150 MB/s.
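For reference, a rough way to sample those /proc/net/dev counters; this is only
a sketch, assuming bash and an interface named ib0 (adjust the interface name
and interval as needed):

    IF=ib0   # assumption: the interface carrying the Gluster traffic (IPoIB here)
    snap() { grep "$IF:" /proc/net/dev | sed 's/.*://' | awk '{print $1, $9}'; }
    read RX1 TX1 <<< "$(snap)"; sleep 10; read RX2 TX2 <<< "$(snap)"
    # columns 1 and 9 after the colon are the RX and TX byte counters
    echo "RX $(( (RX2-RX1)/10/1048576 )) MB/s  TX $(( (TX2-TX1)/10/1048576 )) MB/s"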
> On Dec 7, 2014, at 9:15 PM, Franco Broi <franco.broi at iongeo.com> wrote:
> > Our theoretical peak throughput is about 4 GBytes/sec (4 x 10 Gbit/sec); you
> > can see from the graph that the maximum recorded is 3.6 GB/sec. This
> > was probably during periods of large sequential IO.
> > We have a small cluster of clients (10) with 10Gbit Ethernet, but the
> > majority of our machines (130) have gigabit. The throughput maximum for
> > the 10Gbit-connected machines was just over 3 GBytes/sec, with individual
> > machines recording about 800 MB/sec.
> > We can easily saturate our 10Gbit links on the servers, as each JBOD is
> > capable of better than 500 MB/sec, but with mixed sequential/random access
> > it seems like a good compromise.
> > We have another 2-server Gluster system with the same specs, and we get
> > 1.8 GB/sec reads and 1.1 GB/sec writes.
> > What are you using to measure your throughput?
> > On Sun, 2014-12-07 at 20:52 -0500, Andrew Smith wrote:
> >> I have a similar system with 4 nodes and 2 bricks per node, where
> >> each brick is a single large filesystem (4TB x 24, RAID 6). The
> >> computers are all on QDR InfiniBand, with Gluster using IPoIB. I
> >> have a cluster of InfiniBand clients that access the data on the
> >> servers. I can only get about 1.0 to 1.2 GB/s throughput with my
> >> system, though. Can you tell us the peak throughput that you are
> >> getting? I just don't have a sense of what I should expect from
> >> my system. A similar Lustre setup could achieve 2-3 GB/s, which
> >> I attributed to the fact that it didn't use IPoIB, but instead used
> >> RDMA. I'd really like to know if I am wrong here and there is
> >> some configuration I can tweak to make things faster.
> >> Andy
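For what it's worth, Gluster clients can use RDMA as the transport instead of
IPoIB, provided the volume was created with an rdma transport. A rough sketch
with placeholder host, volume, and mount-point names:

    # hypothetical names; the volume must have been created with transport rdma or tcp,rdma
    mount -t glusterfs -o transport=rdma server1:/data /mnt/data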
> >> On Dec 7, 2014, at 8:43 PM, Franco Broi <franco.broi at iongeo.com> wrote:
> >>> On Fri, 2014-12-05 at 14:22 +0000, Kiebzak, Jason M. wrote:
> >>>> May I ask why you chose to go with 4 separate bricks per server
> >>>> rather than one large brick per server?
> >>> Each brick is a JBOD with 16 disks running RAIDZ2. It just seemed more
> >>> logical to keep the bricks and ZFS filesystems confined to physical
> >>> hardware units, i.e. I could disconnect a brick and move it to another
> >>> server.
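For context, a brick of that shape might be built roughly as follows; this is
only a sketch, with made-up pool and device names:

    # one 16-disk raidz2 pool per JBOD, one filesystem per pool, used as the brick
    zpool create brick1 raidz2 /dev/disk/by-id/jbod1-disk{1..16}
    zfs create brick1/gvol   # the brick path on this server would then be /brick1/gvol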
> >>>> Thanks
> >>>> Jason
> >>>> -----Original Message-----
> >>>> From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Franco Broi
> >>>> Sent: Thursday, December 04, 2014 7:56 PM
> >>>> To: gluster-users at gluster.org
> >>>> Subject: [Gluster-users] A year's worth of Gluster
> >>>> 1 DHT volume comprising 16 x 50TB bricks spread across 4 servers. Each
> >>>> server has 10Gbit Ethernet.
> >>>> Each brick is a ZFS-on-Linux (ZoL) RAIDZ2 pool with a single filesystem.
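For illustration, a distribute-only (DHT) volume of that shape could be created
along these lines; the hostnames and brick paths below are placeholders, not
the actual ones:

    # 4 servers x 4 bricks = 16 bricks; no replica/stripe keyword gives pure distribute
    gluster volume create data transport tcp \
        nas1:/data1/gvol nas1:/data2/gvol nas1:/data3/gvol nas1:/data4/gvol \
        nas2:/data1/gvol nas2:/data2/gvol nas2:/data3/gvol nas2:/data4/gvol \
        nas3:/data1/gvol nas3:/data2/gvol nas3:/data3/gvol nas3:/data4/gvol \
        nas4:/data1/gvol nas4:/data2/gvol nas4:/data3/gvol nas4:/data4/gvol
    gluster volume start data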
Nguyen Viet Cuong