[Gluster-users] Brick Reboot => VMs slowdown, client crashes
Darrell Budic
budic at onholyground.com
Mon Aug 19 19:58:51 UTC 2019
You want sharding for sure: it keeps the entire disk image from being locked while it heals, so you usually don’t notice anything when you reboot a system, say.
It’s fine to enable after the fact, but existing files won’t be sharded. You can work around this by stopping the VM, copying the file to a new location, and then renaming it over the old version. If you’re running something that lets you migrate volumes live, you can create a new share with sharding enabled and migrate the volume over.
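Concretely, the commands look something like this (a sketch, not verbatim from this thread; "vmvol" and the image paths are hypothetical, adjust to your setup):

    # Apply the whole virt tuning group (it includes features.shard=on
    # plus other settings recommended for VM workloads):
    gluster volume set vmvol group virt

    # Or enable just sharding:
    gluster volume set vmvol features.shard on

    # Re-shard an existing image: stop the VM, then copy and rename.
    # The copy is written through the shard translator, so the new file
    # ends up sharded; the rename swaps it in over the unsharded original.
    cp /mnt/vmvol/images/vm1.qcow2 /mnt/vmvol/images/vm1.qcow2.new
    mv /mnt/vmvol/images/vm1.qcow2.new /mnt/vmvol/images/vm1.qcow2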
> On Aug 19, 2019, at 12:01 PM, Carl Sirotic <csirotic at evoqarchitecture.com> wrote:
>
> No, I didn't.
>
> I am very interested about these settings.
>
> Also, is it possible to turn on the shard feature AFTER the volume has already been put into use?
>
>
> Carl
>
> On 2019-08-19 12:08 p.m., Darrell Budic wrote:
>> You also need to make sure your volume is set up properly for best performance. Did you apply the gluster virt group to your volumes, or at least set features.shard = on on your VM volume?
>>
>>> On Aug 19, 2019, at 11:05 AM, Carl Sirotic <csirotic at evoqarchitecture.com> wrote:
>>>
>>> Yes, I made sure there were no pending heals.
>>> This is what I suspect: that shutting down a host isn't the right way to go.
>>>
>>> Hi Carl,
>>>
>>> Did you check for any pending heals before rebooting the gluster server? Also, it was discussed that shutting down the node does not stop the bricks properly, and thus the clients will wait for a timeout before restoring full functionality. You can stop glusterd, and actually all gluster processes, by using a script in /usr/share/gluster/scripts (the path is based on memory and could be wrong).
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>> On Aug 19, 2019 18:34, Carl Sirotic wrote:
>>> >
>>> > Hi,
>>> >
>>> > We have a replica 3 cluster.
>>> >
>>> > Two other servers are clients that run VMs stored on the Gluster volumes.
>>> >
>>> > I had to reboot one of the bricks for maintenance.
>>> >
>>> > The whole VM setup went super slow and some of the clients crashed.
>>> >
>>> > I think there is some timeout setting for KVM/QEMU vs. the Glusterd timeout that could fix this.
>>> >
>>> > Does anyone have an idea?
>>> >
>>> > The whole point of having gluster for me was to be able to shut down one of the hosts while the VMs stay running.
>>> >
>>> > Carl
>
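For reference, the heal check and clean-shutdown sequence Strahil describes above looks roughly like this ("vmvol" is again a hypothetical volume name; the script path below is where recent glusterfs packages install it, so verify it on your distribution):

    # Check that no heals are pending before taking a node down:
    gluster volume heal vmvol info

    # Stopping glusterd alone leaves the brick processes running, so a
    # hard shutdown makes clients wait out a timeout. The packaged helper
    # script stops everything cleanly so clients fail over immediately:
    systemctl stop glusterd
    /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

    # The timeout clients otherwise sit through is network.ping-timeout,
    # 42 seconds by default; it can be lowered, cautiously:
    gluster volume set vmvol network.ping-timeout 30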