[Gluster-users] Glusterfs performance tweaks

Prasun Gera prasun.gera at gmail.com
Mon Apr 13 20:46:34 UTC 2015


I meant to ask whether you are using the same underlying filesystem and
parameters for your A/B comparison; it isn't clear what A and B are. You said
that your performance is different with and without gluster even when you
access the bricks directly, which is puzzling. Adding a brick to glusterfs
shouldn't affect write performance to the brick itself on the same system,
unless the system is bogged down by client writes coming in through glusterfs.
Can you stop the gluster volume and measure write performance to the brick on
its own?
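Something along these lines would take gluster out of the picture entirely;
the volume name and brick path below are placeholders for your setup:

gluster volume stop <volname>      # stop the volume
systemctl stop glusterd            # stop the management daemon as well
cd /bricks/1                       # write straight to the brick's backing filesystem
dd if=/dev/zero of=ddtest bs=64k count=4k oflag=dsync
rm -f ddtest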

On Sun, Apr 12, 2015 at 10:00 PM, Punit Dambiwal <hypunit at gmail.com> wrote:

> Hi Prasun,
>
> I prepared the bricks with the following commands :-
>
> mkfs.xfs -i size=512 /dev/sdb -f
>
> echo "/dev/sdb /brick1  xfs  defaults 1 2" >> /etc/fstab
>
> Please suggest any modifications if required... it's an SSD disk with
> 256 GB capacity.
>
> Thanks,
> Punit
>
> On Sat, Apr 11, 2015 at 12:19 PM, Prasun Gera <prasun.gera at gmail.com>
> wrote:
>
>> There is something that's not clear in what you are describing. Gluster
>> doesn't come into play until you access your data through the glusterfs
>> mount. You can even stop your gluster volume and the glusterfs daemon to
>> confirm that they are not interfering with your writes to the brick in any
>> way. What you are describing sounds like an issue with the way you have
>> partitioned your drive or set up the filesystem, which is probably xfs in
>> the case of glusterfs if you are using the defaults. Are you comparing the
>> same filesystem in both of your cases?
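>>
>> Something like this on each machine would show what the brick is actually
>> sitting on (the mount point is just an example):
>>
>> df -hT /brick1          # filesystem type and size
>> mount | grep brick1     # mount options in effect
>> xfs_info /brick1        # xfs geometry, if it is xfs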
>>
>> On Fri, Apr 10, 2015 at 11:45 AM, Punit Dambiwal <hypunit at gmail.com>
>> wrote:
>>
>>> Hi Ben,
>>>
>>> What I mean is: if I don't attach the SSD to a brick, and don't even
>>> install glusterfs on the server, I get a throughput of about 300 MB/s. But
>>> once I install glusterfs and add this SSD to a glusterfs volume, I get
>>> 16 MB/s...
>>>
>>> On Fri, Apr 10, 2015 at 9:32 PM, Ben Turner <bturner at redhat.com> wrote:
>>>
>>>> ----- Original Message -----
>>>> > From: "Punit Dambiwal" <hypunit at gmail.com>
>>>> > To: "Ben Turner" <bturner at redhat.com>
>>>> > Cc: "Vijay Bellur" <vbellur at redhat.com>, gluster-users at gluster.org
>>>> > Sent: Thursday, April 9, 2015 9:36:59 PM
>>>> > Subject: Re: [Gluster-users] Glusterfs performance tweaks
>>>> >
>>>> > Hi Ben,
>>>> >
>>>> > But without glusterfs, if I run the same command with dsync on the same
>>>> > SSD, it gives me good throughput. The whole setup (CPU, RAM, network) is
>>>> > the same; the only difference is no glusterfs.
>>>> >
>>>> > [root at cpu09 mnt]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > 4096+0 records in
>>>> > 4096+0 records out
>>>> > 268435456 bytes (268 MB) copied, 0.935646 s, 287 MB/s
>>>> > [root at cpu09 mnt]#
>>>> >
>>>> > [image: Inline image 1]
>>>> >
>>>> > But on top of glusterfs the performance is far too slow. I run an SSD
>>>> > trim every night to clean up after garbage collection. I think something
>>>> > needs to be done on the gluster or OS side to improve performance;
>>>> > otherwise there is no point using all SSDs with gluster, because with all
>>>> > SSDs you still get performance slower than SATA.
>>>> >
>>>> >
>>>> >
>>>> > > On Fri, Apr 10, 2015 at 2:12 AM, Ben Turner <bturner at redhat.com> wrote:
>>>> >
>>>> > > ----- Original Message -----
>>>> > > > From: "Punit Dambiwal" <hypunit at gmail.com>
>>>> > > > To: "Vijay Bellur" <vbellur at redhat.com>
>>>> > > > Cc: gluster-users at gluster.org
>>>> > > > Sent: Wednesday, April 8, 2015 9:55:38 PM
>>>> > > > Subject: Re: [Gluster-users] Glusterfs performance tweaks
>>>> > > >
>>>> > > > Hi Vijay,
>>>> > > >
>>>> > > > If i run the same command directly on the brick...
>>>>
>>>> What does this mean, then?  Running directly on the brick, to me, means
>>>> running directly on the SSD.  The command below is the same as the one
>>>> above; what changed?
>>>>
>>>> -b
>>>>
>>>> > > >
>>>> > > > [root at cpu01 1]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > > 4096+0 records in
>>>> > > > 4096+0 records out
>>>> > > > 268435456 bytes (268 MB) copied, 16.8022 s, 16.0 MB/s
>>>> > > > [root at cpu01 1]# pwd
>>>> > > > /bricks/1
>>>> > > > [root at cpu01 1]#
>>>> > > >
>>>> > >
>>>> > > This is your problem.  Gluster is only as fast as its slowest piece, and
>>>> > > here your storage is the bottleneck.  Given that you get 16 MB/s to the
>>>> > > brick and 12 MB/s through gluster, that works out to roughly 25% overhead
>>>> > > ((16 - 12) / 16 = 0.25), which is what I would expect in a single-thread,
>>>> > > single-brick, single-client scenario.  This may have something to do with
>>>> > > the way SSDs write?  On the SSD in my desk I only get 11.4 MB/sec when I
>>>> > > run that dd command:
>>>> > >
>>>> > > # dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > 4096+0 records in
>>>> > > 4096+0 records out
>>>> > > 268435456 bytes (268 MB) copied, 23.065 s, 11.4 MB/s
>>>> > >
>>>> > > My thought is that maybe using dsync is forcing the SSD to clean the data
>>>> > > or something else before writing to it:
>>>> > >
>>>> > > http://www.blog.solidstatediskshop.com/2012/how-does-an-ssd-write/
>>>> > >
>>>> > > Do your drives support fstrim?  It may be worth trimming before you run
>>>> > > the test and seeing what results you get.  Other than tuning the SSD / OS
>>>> > > to perform better on the back end, there isn't much we can do from the
>>>> > > gluster perspective for that specific dd with the dsync flag.
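>>>> > >
>>>> > > For example, roughly (assuming the brick device is /dev/sdb and it is
>>>> > > mounted at /bricks/1):
>>>> > >
>>>> > > lsblk --discard /dev/sdb   # non-zero DISC-GRAN/DISC-MAX means TRIM is advertised
>>>> > > fstrim -v /bricks/1        # trim the mounted brick before re-running dd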
>>>> > >
>>>> > > -b
>>>> > >
>>>> > > >
>>>> > > > On Wed, Apr 8, 2015 at 6:44 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > Hi,
>>>> > > >
>>>> > > > I am getting very slow throughput on glusterfs (dead slow... even SATA
>>>> > > > is better), and I am using all SSDs in my environment.
>>>> > > >
>>>> > > > I have the following setup :-
>>>> > > > A. 4* host machine with Centos 7(Glusterfs 3.6.2 | Distributed
>>>> > > > Replicated | replica=2)
>>>> > > > B. Each server has 24 SSD as bricks…(Without HW Raid | JBOD)
>>>> > > > C. Each server has 2 Additional ssd for OS…
>>>> > > > D. Network 2*10G with bonding…(2*E5 CPU and 64GB RAM)
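>>>> > > >
>>>> > > > (For reference, a distributed-replicated replica=2 volume across hosts
>>>> > > > and bricks like these would be created with something along these lines;
>>>> > > > the volume, host, and brick names are placeholders:)
>>>> > > >
>>>> > > > gluster volume create <volname> replica 2 \
>>>> > > >   host1:/bricks/1 host2:/bricks/1 \
>>>> > > >   host3:/bricks/1 host4:/bricks/1 ...
>>>> > > > gluster volume start <volname>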
>>>> > > >
>>>> > > > Note: performance/throughput is slower than a normal SATA 7200 RPM
>>>> > > > disk, even though I am using all SSDs in my environment.
>>>> > > >
>>>> > > > Gluster Volume options :-
>>>> > > >
>>>> > > > +++++++++++++++
>>>> > > > Options Reconfigured:
>>>> > > > performance.nfs.write-behind-window-size: 1024MB
>>>> > > > performance.io-thread-count: 32
>>>> > > > performance.cache-size: 1024MB
>>>> > > > cluster.quorum-type: auto
>>>> > > > cluster.server-quorum-type: server
>>>> > > > diagnostics.count-fop-hits: on
>>>> > > > diagnostics.latency-measurement: on
>>>> > > > nfs.disable: on
>>>> > > > user.cifs: enable
>>>> > > > auth.allow: *
>>>> > > > performance.quick-read: off
>>>> > > > performance.read-ahead: off
>>>> > > > performance.io-cache: off
>>>> > > > performance.stat-prefetch: off
>>>> > > > cluster.eager-lock: enable
>>>> > > > network.remote-dio: enable
>>>> > > > storage.owner-uid: 36
>>>> > > > storage.owner-gid: 36
>>>> > > > server.allow-insecure: on
>>>> > > > network.ping-timeout: 0
>>>> > > > diagnostics.brick-log-level: INFO
>>>> > > > +++++++++++++++++++
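>>>> > > >
>>>> > > > (For reference, these reconfigured values are applied with "gluster
>>>> > > > volume set"; an illustrative example with a placeholder volume name:)
>>>> > > >
>>>> > > > gluster volume set <volname> performance.io-thread-count 32
>>>> > > > gluster volume set <volname> performance.cache-size 1024MB
>>>> > > > gluster volume info <volname>    # lists the options reconfigured above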
>>>> > > >
>>>> > > > Test with SATA and Glusterfs SSD….
>>>> > > > ———————
>>>> > > > Dell EQL (SATA disk 7200 RPM)
>>>> > > > —-
>>>> > > > [root at mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > > 4096+0 records in
>>>> > > > 4096+0 records out
>>>> > > > 268435456 bytes (268 MB) copied, 20.7763 s, 12.9 MB/s
>>>> > > > [root at mirror ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > > 4096+0 records in
>>>> > > > 4096+0 records out
>>>> > > > 268435456 bytes (268 MB) copied, 23.5947 s, 11.4 MB/s
>>>> > > >
>>>> > > > GlusterFS SSD
>>>> > > > —
>>>> > > > [root at sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > > 4096+0 records in
>>>> > > > 4096+0 records out
>>>> > > > 268435456 bytes (268 MB) copied, 66.2572 s, 4.1 MB/s
>>>> > > > [root at sv-VPN1 ~]# dd if=/dev/zero of=test bs=64k count=4k oflag=dsync
>>>> > > > 4096+0 records in
>>>> > > > 4096+0 records out
>>>> > > > 268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s
>>>> > > > ————————
>>>> > > >
>>>> > > > Please let me know what I should do to improve the performance of
>>>> > > > my glusterfs setup.
>>>> > > >
>>>> > > >
>>>> > > > What is the throughput that you get when you run these commands on
>>>> > > > the disks directly, without gluster in the picture?
>>>> > > >
>>>> > > > By running dd with dsync you are ensuring that there is no buffering
>>>> > > > anywhere in the stack, and that is the reason why low throughput is
>>>> > > > being observed.
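>>>> > > >
>>>> > > > As a rough illustration, comparing a run that flushes only once at the
>>>> > > > end with the fully synchronous run shows how much buffering matters
>>>> > > > (same placeholder file name as the earlier tests):
>>>> > > >
>>>> > > > dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync  # buffered, single flush at the end
>>>> > > > dd if=/dev/zero of=test bs=64k count=4k oflag=dsync     # synchronous write for every block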
>>>> > > >
>>>> > > > -Vijay
>>>> > > >
>>>> > > >
>>>> > > >
>>>> > > > _______________________________________________
>>>> > > > Gluster-users mailing list
>>>> > > > Gluster-users at gluster.org
>>>> > > > http://www.gluster.org/mailman/listinfo/gluster-users
>>>> > >
>>>> >
>>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>

