[Gluster-users] libgfapi failover problem on replica bricks

Pranith Kumar Karampuri pkarampu at redhat.com
Mon Sep 1 06:41:35 UTC 2014


On 09/01/2014 12:08 PM, Roman wrote:
> Well, as for me, the VMs are not much impacted by the healing process.
> At least the munin server, which runs with a pretty high load (the
> average rarely goes below 0.9 :) ), had no problems. To create some
> more load I made a copy of a 590 MB file on the VM's disk; it took
> 22 seconds, which is about 27 MB/s, or roughly 214 Mbit/s.
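>
> (A back-of-envelope check of those figures, using nothing but the
> numbers quoted above -- a rough sketch, not a new measurement:)
>
>     echo "scale=1; 590/22" | bc   # ~26.8 MB/s
>     echo "590*8/22" | bc          # 214 Mbit/s (integer result)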
>
> The servers are connected via a 10 Gbit network. The Proxmox client is
> connected to the server over a separate 1 Gbps interface. We are
> thinking of moving it to 10 Gbps as well.
>
> Here is some heal info output, which is pretty confusing.
>
> Right after the 1st server restored its connection, it was pretty clear:
>
> root at stor1:~# gluster volume heal HA-2TB-TT-Proxmox-cluster info
> Brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
> /images/124/vm-124-disk-1.qcow2 - Possibly undergoing heal
> Number of entries: 1
>
> Brick stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
> /images/124/vm-124-disk-1.qcow2 - Possibly undergoing heal
> /images/112/vm-112-disk-1.raw - Possibly undergoing heal
> Number of entries: 2
>
>
> Some time later it says:
> root at stor1:~# gluster volume heal HA-2TB-TT-Proxmox-cluster info
> Brick stor1:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
> Number of entries: 0
>
> Brick stor2:/exports/HA-2TB-TT-Proxmox-cluster/2TB/
> Number of entries: 0
>
> while I could still see traffic between the servers, and there were
> still no messages about the healing process completing.
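>
> (For reference, one way to keep an eye on this -- a rough sketch, not
> something from this thread, and the exact wording of the completion
> messages in glustershd.log varies between releases:)
>
>     # re-run the heal status every 10 seconds
>     watch -n 10 gluster volume heal HA-2TB-TT-Proxmox-cluster info
>     # look for recent self-heal messages from the self-heal daemon
>     grep -iE 'self-?heal' /var/log/glusterfs/glustershd.log | tail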
On which machine do we have the mount?

Pranith
>
>
>
> 2014-08-29 10:02 GMT+03:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>
>     Wow, this is great news! Thanks a lot for sharing the results :-).
>     Did you get a chance to test the performance of the applications
>     in the VM during self-heal?
>     May I know more about your use case? i.e. how many VMs, what is
>     the average size of each VM, etc.?
>
>     Pranith
>
>
>     On 08/28/2014 11:27 PM, Roman wrote:
>>     Here are the results.
>>     1. I still have a problem with log rotation: logs are being
>>     written to the .log.1 file, not the .log file. Any hints on how
>>     to fix this? (see the sketch right after this list)
>>     2. The healing logs are now much better; I can see the success
>>     message.
>>     3. Both volumes, with HD off and with HD on, synced successfully.
>>     The volume with HD on synced much faster.
>>     4. Both VMs on the volumes survived the outage, even though new
>>     files were added and deleted during the outage.
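>>
>>     (On item 1, one possible direction -- only a sketch, not a
>>     confirmed fix: after logrotate renames a log file, the gluster
>>     daemons keep writing to the old, renamed file until they reopen
>>     their logs. Two things worth trying:)
>>
>>         # in the glusterfs entries under /etc/logrotate.d/,
>>         # rotate in place instead of renaming:
>>         copytruncate
>>
>>         # and/or ask gluster to rotate the brick logs itself:
>>         gluster volume log rotate HA-2TB-TT-Proxmox-cluster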
>>
>>     So replication works well for VM volumes with HD both on and off,
>>     and with HD it is even faster. We still need to solve the logging
>>     issue.
>>
>>     Seems we could start the production storage from this moment :)
>>     The whole company will use it, some volumes distributed and some
>>     replicated. Thanks for a great product.
>>
>>
>>     2014-08-27 16:03 GMT+03:00 Roman <romeo.r at gmail.com>:
>>
>>         Installed the new packages. Will run some tests tomorrow. Thanks.
>>
>>
>>         2014-08-27 14:10 GMT+03:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>>
>>
>>             On 08/27/2014 04:38 PM, Kaleb KEITHLEY wrote:
>>
>>                 On 08/27/2014 03:09 AM, Humble Chirammal wrote:
>>
>>
>>
>>                     ----- Original Message -----
>>                     | From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
>>                     | To: "Humble Chirammal" <hchiramm at redhat.com>
>>                     | Cc: "Roman" <romeo.r at gmail.com>, gluster-users at gluster.org, "Niels de Vos" <ndevos at redhat.com>
>>                     | Sent: Wednesday, August 27, 2014 12:34:22 PM
>>                     | Subject: Re: [Gluster-users] libgfapi failover problem on replica bricks
>>                     |
>>                     |
>>                     | On 08/27/2014 12:24 PM, Roman wrote:
>>                     | > root at stor1:~# ls -l /usr/sbin/glfsheal
>>                     | > ls: cannot access /usr/sbin/glfsheal: No such file or directory
>>                     | > Seems like not.
>>                     | Humble,
>>                     |       Seems like the binary is still not packaged?
>>
>>                     Checking with Kaleb on this.
>>
>>                 ...
>>
>>                     | >>> |
>>                     | >>> | Humble/Niels,
>>                     | >>> |      Do we have debs available for 3.5.2? In 3.5.1 there was a
>>                     | >>> | packaging issue where /usr/bin/glfsheal was not packaged along
>>                     | >>> | with the deb. I think that should be fixed now as well?
>>                     | >>> |
>>                     | >>> Pranith,
>>                     | >>>
>>                     | >>> The 3.5.2 packages for Debian are not available yet. We are
>>                     | >>> coordinating internally to get them processed. I will update the
>>                     | >>> list once they are available.
>>                     | >>>
>>                     | >>> --Humble
>>
>>
>>                 glfsheal isn't in our 3.5.2-1 DPKGs either. We
>>                 (meaning I) started with the 3.5.1 packaging bits
>>                 from Semiosis. Perhaps he fixed 3.5.1 after giving me
>>                 his bits.
>>
>>                 I'll fix it and spin 3.5.2-2 DPKGs.
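>>
>>                 (Once the fixed packages land, a quick way to confirm
>>                 the binary made it in -- just a suggested check, using
>>                 the path from Roman's earlier mail:)
>>
>>                     ls -l /usr/sbin/glfsheal
>>                     dpkg -S /usr/sbin/glfsheal   # shows which package owns it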
>>
>>             That is great, Kaleb. Please notify Semiosis as well, in
>>             case he has yet to fix it.
>>
>>             Pranith
>>
>>
>>                 -- 
>>
>>                 Kaleb
>>
>>
>>
>>
>>
>>         -- 
>>         Best regards,
>>         Roman.
>>
>>
>>
>>
>>     -- 
>>     Best regards,
>>     Roman.
>
>
>
>
> -- 
> Best regards,
> Roman.
