[Gluster-users] Reg Performance issue in GlusterFS

Satheesaran Sundaramoorthi sasundar at redhat.com
Mon Oct 7 05:38:55 UTC 2019


On Mon, Oct 7, 2019 at 1:24 AM Soumya Koduri <skoduri at redhat.com> wrote:

> Hi Pratik,
>
> Offhand I do not see any issue with the configuration. But I think for
> a VM image store, using gfapi may give better performance compared to
> fuse. CC'ing Krutika and Gobinda, who have been working on this
> use case and may be able to guide you.
>
> Thanks,
> Soumya
>
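Just to illustrate the gfapi point: when qemu is built with gluster support,
disk images can be reached directly over libgfapi with a gluster:// URL
instead of going through the FUSE mount. A minimal sketch, assuming the
volume and host names from the config quoted below and a hypothetical image
name vm01.qcow2:

    # create an image directly on the volume over libgfapi (no FUSE mount in the path)
    qemu-img create -f qcow2 gluster://storagenode51/vmstore5152-v2/vm01.qcow2 50G

    # inspect an existing image the same way
    qemu-img info gluster://storagenode51/vmstore5152-v2/vm01.qcow2

Whether CloudStack can be configured to hand qemu a gluster:// URL for its
primary storage is something to check on the CloudStack side; the commands
above are only meant to show the access path.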
> On 10/5/19 11:25 AM, Pratik Chandrakar wrote:
> > Hello Soumya,
> >
> > This is Pratik from India. I am writing this mail because I am facing a
> performance issue in my cluster, and despite searching extensively for
> tuning advice I have not succeeded. It would be great if you could suggest
> whether I should stick with GlusterFS or move to another technology.
> Currently I am using GlusterFS with FUSE on CentOS to store Virtual
> Machine images in a CloudStack setup. The majority of the workload is SQL
> Server and MariaDB database servers, plus some web servers. The issue is
> slow booting and slow UI response in the VMs, and also a lot of timeouts
> in SQL Server even on small databases. I have a dedicated 10G network for
> storage in my setup.
> >
> > Request you to please guide me on whether I have misconfigured the
> cluster or need to change the storage layer.
> >
> > Below is the configuration for your reference...
> >
> > Volume Name: vmstore5152-v2
> > Type: Replicate
> > Volume ID: aa27a2cb-c0f5-41b9-a50f-fdce4d4d8358
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x (2 + 1) = 3
> > Transport-type: tcp
> > Bricks:
> > Brick1: storagenode51:/datav2/brick51-v2/brick
> > Brick2: storagenode52:/datav2/brick52-v2/brick
> > Brick3: indphyserver2:/arbitator/arbrick5152-v2/brick (arbiter)
> > Options Reconfigured:
> > cluster.choose-local: off
> > user.cifs: off
> > features.shard: on
> > cluster.shd-wait-qlength: 10000
> > cluster.shd-max-threads: 8
> > cluster.locking-scheme: granular
> > cluster.data-self-heal-algorithm: full
> > cluster.server-quorum-type: server
> > cluster.quorum-type: auto
> > cluster.eager-lock: enable
> > network.remote-dio: enable
>

Hi Krutika,

Do you think turning off remote-dio and enabling strict-o-direct will
improve performance?
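
If you want to experiment with that combination, a rough sketch on the
volume from the quoted config would be:

    gluster volume set vmstore5152-v2 network.remote-dio off
    gluster volume set vmstore5152-v2 performance.strict-o-direct on

The intent is that with strict-o-direct on and remote-dio off, O_DIRECT
writes issued by the guest databases are honoured end to end rather than
being absorbed by caching, which tends to matter for SQL Server and
MariaDB style workloads.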

@Sahina, @Gobinda, are you aware of any performance optimizations for the
DB workload in the VMs?

-- Satheesaran

> > performance.low-prio-threads: 32
> > performance.io-cache: off
> > performance.read-ahead: off
> > performance.quick-read: off
> > storage.owner-gid: 107
> > storage.owner-uid: 107
> > cluster.lookup-optimize: on
> > client.event-threads: 4
> > transport.address-family: inet
> > nfs.disable: on
> > performance.client-io-threads: on
> >
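As an aside, the options quoted above largely match what the virt group
profile applies; on a test volume the same baseline can usually be set in
one step (sketch, using this volume's name):

    gluster volume set vmstore5152-v2 group virt

So the base configuration itself looks reasonable, and the open questions
are mostly the O_DIRECT handling and gfapi-vs-FUSE points above.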
> >
> > --
> > प्रतीक चंद्राकर | Pratik Chandrakar
> > वैज्ञानिक - सी | Scientist-C
> > एन.आई.सी - छत्तीसगढ़ राज्य केंद्र | NIC - Chhattisgarh State Centre
> > हॉल क्र. एडी2-14 , मंत्रालय | Hall no.-AD2-14, Mantralaya
> > महानदी भवन | Mahanadi Bhavan
> > नवा रायपुर अटल नगर | Nava Raipur Atal Nagar