[Gluster-users] Convert to Shard - Setting Guidance
Gambit15
dougti+gluster at gmail.com
Fri Jan 20 14:07:54 UTC 2017
It's a given, but test it well before going into production. People have
occasionally had problems with corruption when converting to shards.
In my initial tests, enabling sharding took our I/O down to 15Kbps, from
300Mbps without it.
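If you want to reproduce that sort of comparison yourself, a quick sequential-write
probe from inside a guest is enough to spot it. A minimal sketch (the test path is
just an example):

  # Sequential-write probe inside a VM whose disk sits on the Gluster volume.
  # oflag=direct bypasses the guest page cache so the number reflects the
  # storage path rather than RAM.
  dd if=/dev/zero of=/var/tmp/ioprobe bs=1M count=1024 oflag=direct
  rm -f /var/tmp/ioprobe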
> data-self-heal-algorithm full
>
That could be painful. Any particular reason you've chosen full?
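For reference, the option can be checked and changed per volume from the CLI.
A minimal sketch, assuming a volume named vmstore (substitute your own name):

  # The full option name is cluster.data-self-heal-algorithm; show the current value
  gluster volume get vmstore cluster.data-self-heal-algorithm

  # "diff" heals only the blocks whose checksums differ, rather than copying
  # whole files/shards, which is usually lighter for big images
  gluster volume set vmstore cluster.data-self-heal-algorithm diff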

> All Bricks 1TB SSD
> Image Sizes – Up to 300GB
>
If your images easily fit within the bricks, why do you need sharding in
the first place? It adds an extra layer of complexity and removes the nice
property of having entire files on each brick, which makes DR and general
recovery work a lot easier.
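To illustrate the difference for DR: with whole files you can pull an image
straight off any replica brick, whereas a sharded image is split into
GFID-named pieces under the brick's hidden .shard directory. Roughly (brick
paths are examples):

  # Unsharded: the whole image sits on each replica brick as an ordinary file
  cp /bricks/brick1/vmstore/images/vm01.qcow2 /backup/vm01.qcow2

  # Sharded: only the first block keeps the original name; the rest live under
  # .shard at the brick root, named <gfid>.1, <gfid>.2, ...
  getfattr -n trusted.gfid -e hex /bricks/brick1/vmstore/images/vm01.qcow2
  ls /bricks/brick1/vmstore/.shard/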
Doug
On 20 January 2017 at 00:11, Gustave Dahl <gustave at dahlfamily.net> wrote:
> I am looking for guidance on the recommended settings as I convert to
> shards. I have read most of the list back through last year, and I think
> the conclusion I came to was to keep it simple.
>
>
>
> One: It may take months to convert my current VM images to shards. Do you
> see any issues with this? My priority is to make sure future images are
> distributed as shards.
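Worth noting here: enabling the option only shards files created after the
fact; existing images stay whole until they are rewritten through the client
mount. A rough sketch of converting a single image (mount path and file names
are examples, and the VM has to be shut down first):

  # Copy the image through the FUSE mount so the new copy is written as shards,
  # then swap it into place; do this with the VM powered off
  cp /mnt/vmstore/images/vm01.qcow2 /mnt/vmstore/images/vm01.qcow2.sharded
  mv /mnt/vmstore/images/vm01.qcow2.sharded /mnt/vmstore/images/vm01.qcow2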
>
> Two: Settings. My intent is to set them as follows, based on guidance on the
> Red Hat site and what I have been reading here. Do these look okay? Any
> additional suggestions?
>
>
>
> Modified Settings
>
> =====================
>
> features.shard enable
>
> features.shard-block-size 512MB
>
> data-self-heal-algorithm full
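(As a CLI sketch, with a placeholder volume name, those map to:

  gluster volume set vmstore features.shard enable
  gluster volume set vmstore features.shard-block-size 512MB
  gluster volume set vmstore cluster.data-self-heal-algorithm full
)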
>
>
>
> Current Hardware
>
> =====================
>
> Hyper-converged. The Gluster nodes run as VMs.
>
> Currently across three servers. Distributed-Replicate - All Bricks 1TB
> SSD
>
> Network - 10GB Connections
>
> Image Sizes – Up to 300GB
>
>
>
> Current Gluster Version
>
> =======================
>
> 3.8.4
>
>
>
> Current Settings
>
> =====================
>
> Type: Distributed-Replicate
>
> Number of Bricks: 4 x 3 = 12
>
> Transport-type: tcp
>
> Options Reconfigured:
>
> cluster.server-quorum-type: server
>
> cluster.quorum-type: auto
>
> network.remote-dio: enable
>
> cluster.eager-lock: enable
>
> performance.stat-prefetch: off
>
> performance.io-cache: off
>
> performance.read-ahead: off
>
> performance.quick-read: off
>
> server.allow-insecure: on
>
> performance.readdir-ahead: on
>
> performance.cache-size: 1GB
>
> performance.io-thread-count: 64
>
> nfs.disable: on
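(To confirm how the volume ends up, the effective options can be dumped per
volume; again, the volume name below is a placeholder:

  # Reconfigured options and the brick/replica layout
  gluster volume info vmstore
  # Every option with its current value (supported on recent 3.x releases)
  gluster volume get vmstore all
)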
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>