[Gluster-users] [ovirt-users] Tracking down high writes in GlusterFS volume
Krutika Dhananjay
kdhananj at redhat.com
Tue Feb 26 07:01:17 UTC 2019
On Fri, Feb 15, 2019 at 12:30 AM Jayme <jaymef at gmail.com> wrote:
> Running an oVirt 4.3 HCI 3-way replica cluster with SSD-backed storage.
> I've noticed that my SSD writes (SMART Total_LBAs_Written) are quite high
> on one particular drive. Specifically, I've noticed that one volume has
> much higher total bytes written than the others (despite using less
> overall space).
>
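(One quick sanity check on the SMART numbers, assuming the drive reports
512-byte logical sectors: bytes written ~= Total_LBAs_Written * 512, so
roughly 2 billion new LBAs per day would match 1 TB/day. `smartctl -i
/dev/sdX` prints the logical sector size if you want to confirm the
multiplier before trusting a per-day delta.)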
Are writes higher on one particular volume, or did one brick see more
writes than its two replicas within the same volume? Could you share the
`gluster volume info` output for the affected volume and, if the issue is
with a single brick, the name of that brick?
Also, did you check whether the volume was undergoing any heals (`gluster
volume heal <VOLNAME> info`)? Pending self-heals write data to the
out-of-sync brick and can account for extra writes on one replica.
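For example, assuming the affected volume is named "data" (a placeholder):

    # replica layout and brick list for the volume
    gluster volume info data

    # files/entries still pending heal; a long queue here can explain
    # extra writes landing on one brick
    gluster volume heal data info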
-Krutika
> My volume is writing over 1 TB of data per day (by my manual calculation,
> and confirmed with GlusterFS profiling) and wearing my SSDs quickly. How
> can I best determine which VM or process is at fault here?
>
> There are 5 low-use VMs on the volume in question. I'm attempting to
> track iostats on each of the VMs individually, but so far I'm not seeing
> anything obvious that would account for the 1 TB of writes per day that
> the Gluster volume is reporting.
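If it helps, here is a rough sketch of narrowing this down with the
gluster profiling tools you mentioned; the volume name "data" and the
brick path below are placeholders:

    # enable per-brick I/O statistics on the volume
    gluster volume profile data start

    # after some representative load, dump cumulative fop counts/latencies
    gluster volume profile data info

    # list the files receiving the most write calls on one brick; with one
    # disk image per VM this maps the writes back to a specific VM
    gluster volume top data write brick host1:/bricks/brick1/data list-cnt 10

    # and inside each guest, sysstat's iostat shows sustained write rates
    iostat -dmx 60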