[Gluster-users] [Gluster-devel] Gluster 3.7.0 released

Niels de Vos ndevos at redhat.com
Sat May 16 09:07:40 UTC 2015


On Fri, May 15, 2015 at 11:20:42AM +0100, Kingsley wrote:
> On Fri, 2015-05-15 at 10:18 +0200, Niels de Vos wrote:
> > Users of packages from the
> > distributions should not be worried that the version they are running
> > suddenly gets replaced with the brand new 3.7.0.
> 
> I thought the generally advised upgrade path was to stop all volumes and
> server daemons before applying an update, else it's necessary to do a
> volume heal between upgrading each server?

Yes, that is the general advice.
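
For reference, a rough sketch of that per-server procedure (the volume
name "myvol" is made up; commands assume an EL-style system, so adjust
for your init system and package set):

    # on the server about to be upgraded
    systemctl stop glusterd        # 'service glusterd stop' on pre-systemd systems
    killall glusterfsd glusterfs   # stop remaining brick and self-heal processes
    yum update 'glusterfs*'
    systemctl start glusterd

    # let self-heal catch up before upgrading the next server
    gluster volume heal myvol info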

> That being the case, unless someone notices that a yum update is about
> to upgrade their gluster installation as well, they might not perform
> the above steps first. Would that not be a problem?

This is normally not a problem. Still, it is definitely advisable to
stop all services on the system before doing an update; that is (or
should be) common practice for servers running production workloads.
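
If you want a plain "yum update" to leave the Gluster packages alone
until you explicitly decide to upgrade them, one way (a sketch, not the
only option) is to exclude them in yum's configuration:

    # /etc/yum.conf -- skip glusterfs packages during routine updates
    exclude=glusterfs*

    # or only for a single run:
    yum update --exclude='glusterfs*'

    # when the planned upgrade is due, bypass the exclusion once:
    yum update 'glusterfs*' --disableexcludes=main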

I expect that admins want to update and verify one server at a time
(automated or manual). Things like automatic/nightly updates through
yum-cron or other tools should at minimum have an offset in their
schedule; it is pretty fatal for your services if all servers update and
reboot or restart services at the same time.
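
As a sketch of such staggering, assuming plain cron rather than
yum-cron's built-in scheduling (the file name below is made up), each
server could simply run its update job at a different hour:

    # /etc/cron.d/auto-update on server1
    0 2 * * * root yum -y update

    # /etc/cron.d/auto-update on server2, two hours later
    0 4 * * * root yum -y update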

> From my own perspective, I'm keen on some of the new features in 3.7.0,
> but would prefer to not put a brand new release into production until
> the dust has settled. We're running 3.6.3 currently, but had waited for
> that release before pushing it out into production.

The 3.7.0 release should be ready for usual production deployments. The
newly introduced features have seen little real-world workload; most of
the tests that I am aware of were run in labs and were not very user
facing. As with all new software and functionality, you should check
whether it matches your workload and fulfills your expectations.
Feedback and suggestions for improvement are always much appreciated.
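
As a minimal example of such a check, one could stage the release on a
test cluster and run a representative slice of the workload before
touching production (volume name "testvol", hosts and paths are all
made up):

    gluster --version
    gluster volume create testvol replica 2 server1:/bricks/b1 server2:/bricks/b1
    gluster volume start testvol
    mkdir -p /mnt/testvol
    mount -t glusterfs server1:/testvol /mnt/testvol
    # ... exercise the workload against /mnt/testvol ...
    gluster volume status testvol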

Thanks,
Niels
