[Gluster-users] High I/O And Processor Utilization
Kyle Harris
kyle.harris98 at gmail.com
Sat Jan 9 15:44:36 UTC 2016
I can make the change to sharding and then export/import the VMs to give it
a try. Just to be clear, I am using v3.7.6-1; is that sufficient? I would
rather not have to compile from source and would probably wait for the next
RPMs if that is needed.

Also, given the output below, what would you recommend I use for the shard
block size, and how do you determine this? (The commands I assume I would
be running are sketched after the listing.)
-rw-r--r-- 1 root root  53G Jan  9 09:34 03070877-9cf4-4d55-a66c-fbd3538eedb9.vhd
-rw-r--r-- 1 root root 2.1M Jan  8 12:27 0b16f938-e859-41e3-bb33-fefba749a578.vhd
-rw-r--r-- 1 root root 1.6G Jan  7 16:39 3d77b504-3109-4c34-a803-e9236e35d8bf.vhd
-rw-r--r-- 1 root root 497M Jan  7 17:27 715ddb6c-67af-4047-9fa0-728019b49d63.vhd
-rw-r--r-- 1 root root 341M Jan  7 16:17 72a33878-59f7-4f6e-b3e1-e137aeb19ced.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 7b7c8d8a-d223-4a47-bd35-8d72ee6927b9.vhd
-rw-r--r-- 1 root root 8.1M Dec 28 11:07 8b49029c-7e55-4569-bb73-88c3360d6a0c.vhd
-rw-r--r-- 1 root root 2.2G Jan  8 12:25 8c524ed9-e382-40cd-9361-60c23a2c1ae2.vhd
-rw-r--r-- 1 root root 3.2G Jan  9 09:34 930196aa-0b85-4482-97ab-3d05e9928884.vhd
-rw-r--r-- 1 root root 2.0G Jan  8 12:27 940ee016-8288-4369-9fb8-9c64cb3af256.vhd
-rw-r--r-- 1 root root  12G Jan  9 09:34 b0cdf43c-7e6b-44bf-ab2d-efb14e9d2156.vhd
-rw-r--r-- 1 root root 6.8G Jan  7 16:39 b803f735-cf7f-4568-be83-aedd746f6cec.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 be18622b-042a-48cb-ab94-51541ffe24eb.vhd
-rw-r--r-- 1 root root 2.6G Jan  9 09:34 c2645723-efd9-474b-8cce-fe07ac9fbba9.vhd
-rw-r--r-- 1 root root 2.1G Jan  9 09:34 d2873b74-f6be-43a9-bdf1-276761e3e228.vhd
-rw-r--r-- 1 root root 1.4G Jan  7 17:27 db881623-490d-4fd8-8f12-9c82eea3c53c.vhd
-rw-r--r-- 1 root root 2.1M Jan  8 12:33 eb21c443-6381-4a25-ac7c-f53a82289f10.vhd
-rw-r--r-- 1 root root  13G Jan  7 16:39 f6b9cfba-09ba-478d-b8e0-543dd631e275.vhd
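
For what it's worth, here is a minimal sketch of the change I am assuming
I would make; "gv0" stands in for my actual volume name, and 512MB is only
a placeholder value, not a recommendation:

  # enable the shard translator (affects newly created files only)
  gluster volume set gv0 features.shard on
  # set the shard size; the value here is just an example
  gluster volume set gv0 features.shard-block-size 512MB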
Thanks again.
On Fri, Jan 8, 2016 at 8:34 PM, Ravishankar N <ravishankar at redhat.com>
wrote:
> On 01/09/2016 07:42 AM, Krutika Dhananjay wrote:
>
> *From: *"Ravishankar N" <ravishankar at redhat.com>
> *To: *"Kyle Harris" <kyle.harris98 at gmail.com>, gluster-users at gluster.org
> *Sent: *Saturday, January 9, 2016 7:06:04 AM
> *Subject: *Re: [Gluster-users] High I/O And Processor Utilization
>
> On 01/09/2016 01:44 AM, Kyle Harris wrote:
>
> It’s been a while since I last ran GlusterFS, so I thought I might give
> it another try here at home in my lab. I am using the 3.7 branch on two
> systems, with a third acting as an arbiter node. Much like the last time
> I tried GlusterFS, I keep running into issues with the glusterfsd process
> eating up so many resources that the systems sometimes become all but
> unusable. A quick Google search tells me I am not the only one to run
> into this issue, but I have yet to find a cure. The last time I ran
> GlusterFS, it was to host web sites, and I chalked the problem up to a
> large number of small files. This time I am using it to host VMs; there
> are only seven of them, and while they are running, they are not doing
> anything else.
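>
> In case it is useful, this is roughly how I have been watching the load;
> "gv0" is just a placeholder for my actual volume name:
>
>   # see which gluster processes are consuming CPU
>   top -b -n 1 | grep -i gluster
>   # check whether a self-heal backlog is building up
>   gluster volume heal gv0 info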
>
>
> The performance improvements for self-heal are still a
> (stalled_at_the_moment)-work-in-progress. But for VM use cases, you can
> turn on sharding [1], which will drastically reduce data self-heal time.
> Why don't you give it a spin on your lab setup and let us know how it goes?
> You might have to create the VMs again, though, since only files created
> after enabling the feature will be sharded. (A quick way to confirm that
> new images are being sharded is sketched below.)
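>
> A minimal way to check that sharding has kicked in, assuming a brick at
> /bricks/brick1 (the path is only illustrative), is to look on one of the
> bricks for the hidden .shard directory the translator populates:
>
>   # shards beyond the first block are stored as <gfid>.1, <gfid>.2, ...
>   ls -lh /bricks/brick1/.shard | head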
>
> -Ravi
>
> [1] http://blog.gluster.org/2015/12/introducing-shard-translator/
>
>
> Kyle,
> I would recommend using glusterfs-3.7.6 if you intend to try sharding,
> because it contains some crucial bug fixes.
>
>
> If you're trying arbiter, it would be good if you could compile the 3.7
> branch and use it, since it has an important fix
> (http://review.gluster.org/#/c/12479/) that will only make it into
> glusterfs-3.7.7. That way you'd get this fix and the sharding ones right
> away. (A rough outline of the build steps is below.)
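>
> Building from the branch is the standard autotools flow; this is only a
> sketch, and the dependencies and configure options will vary by distro:
>
>   git clone https://github.com/gluster/glusterfs.git
>   cd glusterfs
>   git checkout release-3.7
>   ./autogen.sh && ./configure
>   make && sudo make install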
>
> -Krutika
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users