[Gluster-users] Upgrade 3.6 to 3.8

Atin Mukherjee amukherj at redhat.com
Sat Aug 6 02:34:25 UTC 2016


On Saturday 6 August 2016, David Gossage <dgossage at carouselchecks.com>
wrote:

> I have 2 RHEL6 servers running gluster 3.6, and was thinking of moving to
> 3.8.  Is there any reason I would need to stop off at 3.7 on the way, or
> can I just move straight past?
>

You can directly upgrade to 3.8.
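
Since you already plan to bring the volumes down for the update, a simple
offline upgrade on each RHEL6 node would look roughly like the sketch below
(repository setup and package names are assumptions; adjust for wherever you
pull your gluster packages from):

    # stop gluster and any remaining brick/client processes
    service glusterd stop
    pkill glusterfs
    pkill glusterfsd

    # with a 3.8 yum repo enabled, pull in the new packages
    yum update glusterfs-server

    service glusterd start
    gluster volume status

Once every node is running 3.8, bump the cluster op-version so the newer
options (granular locking, sharding, etc.) become settable:

    gluster volume set all cluster.op-version 30800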


>
>
> It's a 2-brick replica volume, and after the update I am planning to add a
> 3rd node, create a replica 3 volume, and migrate the VM storage from one to
> the other.  I plan to bring the volumes down during the update, then apply
> more current settings before mounting back to RHEV.
>
> current 3.6
> Options Reconfigured:
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.server-quorum-type: server
> cluster.server-quorum-ratio: 60
>
> eventual 3.8 settings (will probably not add the sharding settings to the
> old volume, but will on the new one)
> Options Reconfigured:
> cluster.locking-scheme: granular
> diagnostics.brick-log-level: WARNING
> features.shard-block-size: 64MB
> features.shard: on
> performance.readdir-ahead: on
> storage.owner-uid: 36
> storage.owner-gid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: on
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> server.allow-insecure: on
> cluster.self-heal-window-size: 1024
> cluster.background-self-heal-count: 16
> performance.strict-write-ordering: off
> nfs.disable: on
> nfs.addr-namelookup: off
> nfs.enable-ino32: off
>
> *David Gossage*
> *Carousel Checks Inc. | System Administrator*
> *Office* 708.613.2284
>
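
For the replica 3 volume and the option changes you describe, the rough shape
would be something like this (the peer name, volume name, and brick paths are
just placeholders, not your actual layout):

    # bring the new node into the pool
    gluster peer probe node3

    # create the new replica 3 volume for the VM images
    gluster volume create vmstore replica 3 \
        node1:/bricks/vmstore/brick node2:/bricks/vmstore/brick node3:/bricks/vmstore/brick
    gluster volume start vmstore

    # apply the settings from your list, for example:
    gluster volume set vmstore cluster.locking-scheme granular
    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB
    gluster volume set vmstore cluster.quorum-type auto
    gluster volume set vmstore storage.owner-uid 36
    gluster volume set vmstore storage.owner-gid 36

Note that sharding only applies to files written after it is enabled, so
turning it on before you migrate the VM images onto the new volume (rather
than enabling it on the old volume) is the right call.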




-- 
--Atin