[Gluster-users] [External] Re: anyone using gluster-block?

Amar Tumballi Suryanarayan atumball at redhat.com
Mon May 6 18:16:25 UTC 2019


Davide,

With release 0.4, gluster-block now has more functionality, and we have
made many stability fixes. Feel free to try it out and let us know how it
goes.
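For anyone giving the release a try, the basic CLI flow looks roughly like this (the hosting-volume name, block name, host addresses, and size below are illustrative):

```shell
# Create a 1GiB block volume with 3-way HA on an existing
# block-hosting gluster volume (all names/addresses are examples).
gluster-block create blockhost/sample-block ha 3 \
    192.168.10.11,192.168.10.12,192.168.10.13 1GiB

# List the block volumes on the hosting volume and inspect one.
gluster-block list blockhost
gluster-block info blockhost/sample-block

# Delete it when done.
gluster-block delete blockhost/sample-block
```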

-Amar

On Fri, Nov 9, 2018 at 3:36 AM Davide Obbi <davide.obbi at booking.com> wrote:

> Hi Vijay,
>
> The volume was created using the heketi-cli blockvolume create command.
> The block config is what heketi applies out of the box, which in my case
> ended up being:
> - 3 nodes each with 1 brick
> - the brick is carved from a VG with a single PV
> - the PV consists of a 1.2TB SSD, not partitioned and no HW RAID behind
> - the volume does not have any custom settings aside from what is
> configured by default in /etc/glusterfs/group-gluster-block:
> performance.quick-read=off
> performance.read-ahead=off
> performance.io-cache=off
> performance.stat-prefetch=off
> performance.open-behind=off
> performance.readdir-ahead=off
> performance.strict-o-direct=on
> network.remote-dio=disable
> cluster.eager-lock=enable
> cluster.quorum-type=auto
> cluster.data-self-heal-algorithm=full
> cluster.locking-scheme=granular
> cluster.shd-max-threads=8
> cluster.shd-wait-qlength=10000
> features.shard=on
> features.shard-block-size=64MB
> user.cifs=off
> server.allow-insecure=on
> cluster.choose-local=off
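(For reference, the whole set of options above can be applied in one step with the group profile that ships with gluster-block; the volume name here is illustrative:)

```shell
# Apply the gluster-block option group to a block-hosting volume.
gluster volume set vol_blockhost group gluster-block

# Spot-check a few of the resulting settings.
gluster volume get vol_blockhost features.shard
gluster volume get vol_blockhost performance.strict-o-direct
```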
>
> Kernel: 3.10.0-862.11.6.el7.x86_64
> OS: Centos 7.5.1804
> tcmu-runner: 0.2rc4.el7
>
> Each node has 32 cores and 128GB RAM and 10Gb connection.
>
> What I am trying to understand is what the performance expectations for
> gluster-block should be, since I couldn't find many benchmarks online.
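(In the absence of published benchmarks, fio tends to give more comparable numbers than a single dd stream; a sketch of a run against the multipath device — the device path, queue depth, and runtime are illustrative:)

```shell
# Random 4k direct-I/O writes with some queue depth, which is closer to
# what the replica-3 block path is designed for than one dd stream.
fio --name=gluster-block-test --filename=/dev/mapper/mpatha \
    --rw=randwrite --bs=4k --iodepth=16 --numjobs=1 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```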
>
> Regards
> Davide
>
>
> On Fri, Nov 9, 2018 at 7:07 AM Vijay Bellur <vbellur at redhat.com> wrote:
>
>> Hi Davide,
>>
>> Can you please share the block hosting volume configuration?
>>
>> Also, more details about the kernel and tcmu-runner versions could help
>> in understanding the problem better.
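(The requested details can be captured with something like the following; the volume name is illustrative:)

```shell
# Block-hosting volume configuration and version details.
gluster volume info vol_blockhost
uname -r
rpm -q tcmu-runner gluster-block glusterfs-server
```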
>>
>> Thanks,
>> Vijay
>>
>> On Tue, Nov 6, 2018 at 6:16 AM Davide Obbi <davide.obbi at booking.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am testing gluster-block and I am wondering if anyone who has used it
>>> can share feedback on its performance, just to set some expectations.
>>> For example:
>>> - I have deployed a block volume using heketi on a 3-node gluster 4.1
>>> cluster; it's a replica 3 volume.
>>> - I have mounted it via iSCSI using the suggested multipath config, then
>>> created a VG/LV and put XFS on it.
>>> - All done without touching any volume settings or customizing XFS
>>> parameters.
>>> - All bare metal running on 10Gb networking; gluster has a single block
>>> device, the SSD in use by heketi.
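(The attach sequence described above would look roughly like this; the portal address, device names, and mount point are illustrative:)

```shell
# Discover and log in to the iSCSI targets exported by gluster-block.
iscsiadm -m discovery -t sendtargets -p 192.168.10.11
iscsiadm -m node -l

# Confirm the multipath device, then layer LVM and XFS on top.
multipath -ll
pvcreate /dev/mapper/mpatha
vgcreate vg_block /dev/mapper/mpatha
lvcreate -l 100%FREE -n lv_block vg_block
mkfs.xfs /dev/vg_block/lv_block
mount /dev/vg_block/lv_block /mnt/blockvol
```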
>>>
>>> So I tried a dd and I get 4.7 MB/s?
>>> - On the gluster nodes I see ~200 write IOPS, ~15MB/s, a steady 75%
>>> utilization, and spiky await times of up to 100ms alternating between
>>> the servers. CPUs are mostly idle but there is some I/O wait.
>>> - glusterd and glusterfsd utilization is below 1%.
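(One thing worth checking: dd numbers vary hugely with flags, since the iSCSI block path forces synchronous, direct I/O while a plain dd is largely absorbed by the page cache. A local illustration, writing to a scratch file rather than the real device:)

```shell
# Cached writes complete at near memory speed; fsync'd and per-block
# synchronous writes expose the real device latency. On the block
# volume you would point of= at a file under the XFS mount instead.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=32 conv=fsync 2>&1 | tail -n 1
dd if=/dev/zero of="$scratch" bs=4k count=1024 oflag=dsync 2>&1 | tail -n 1
rm -f "$scratch"
```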
>>>
>>> The thing is that a gluster FUSE mount on the same platform does not
>>> show this slowness, so there must be something wrong with my
>>> understanding of gluster-block?
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
> Davide Obbi
> System Administrator
>
> Booking.com B.V.
> Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
> Direct +31207031558



-- 
Amar Tumballi (amarts)

