[Gluster-users] back to problems: gluster 3.5.4, qemu and debian 8

Roman romeo.r at gmail.com
Sun Jul 19 10:42:24 UTC 2015


I see! Thanks.

2015-07-19 7:34 GMT+03:00 Michael Mol <mikemol at gmail.com>:

> That's because they're playing out of Gluster's own playbook:
> http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
>
> The point is that your data corruption issues are vastly more likely to
> have come from having write-behind enabled than having read-ahead enabled.
> Having write-behind enabled is like juggling your data with a partner.
> Having write-behind disabled is like you and your partner handing data to
> each other rather than tossing it. Having read-ahead disabled is like
> asking your partner for a page of data and having him give you exactly that
> page. Having read-ahead enabled is like asking your partner for a page of
> data and having him give you a fifty-page report because he thinks you may
> need the extra information--except you already made allowances yourself in
> asking for the full page; the only data you *knew* you needed was a single
> table on that page.
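>
> (For concreteness, both knobs are ordinary per-volume options; "myvol" below
> is just an example volume name:)
>
>   gluster volume set myvol performance.write-behind off   # the risky one
>   gluster volume set myvol performance.read-ahead on      # harmless, if redundant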
>
> As another example of why you wouldn't normally need read-ahead enabled in
> gluster, I could easily write a small book's worth of theory into an
> email detailing the concept further, but I've already given sufficient
> information to illustrate the relevant concepts; anything further would be
> unnecessary detail I'm only guessing you might need. ;)
>
> The read-ahead setting is about performance, not about data integrity.
> Virtual machines will be running an operating system. That operating system
> will be running block-device drivers and filesystem drivers. Both of those
> types of drivers have their own tunable concepts of read-ahead, so any
> further read-ahead at the gluster layer is unnecessary.
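>
> (As a sketch, the guest's own block-device read-ahead can be inspected and
> tuned from inside the VM; the device name is an example:)
>
>   blockdev --getra /dev/vda       # current read-ahead, in 512-byte sectors
>   blockdev --setra 256 /dev/vda   # 256 sectors = 128 KiB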
>
> (I'm not suggesting you enable read-ahead in prod; it's pointless. I'm
> just trying to point out that read-ahead shouldn't *break* anything for
> you. At the same time, if you enable it, making no other changes, and it
> *does* break things, that's something people would want to know about.)
>
>
> On Sat, Jul 18, 2015 at 2:42 PM Roman <romeo.r at gmail.com> wrote:
>
>> Hi!
>> Thanks for the reply.
>>
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtual_Preparation.html
>> Reading this, RH recommends keeping read-ahead off for gluster volumes
>> used for VMs.
>>
>> 2015-07-18 18:56 GMT+03:00 Michael Mol <mikemol at gmail.com>:
>>
>>> I think you'll find it's the write-behind that was killing you.
>>> Write-behind opens you up to a number of data-consistency issues, and I
>>> strongly recommend against it unless you have rock-solid infrastructure from
>>> the writer all the way to the disk the data ultimately sits on.
>>>
>>> I bet that if you re-enable read-ahead, you won't see the problem. Just
>>> leave write-behind off.
>>>
>>> On Sat, Jul 18, 2015, 10:44 AM Roman <romeo.r at gmail.com> wrote:
>>>
>>> Solved after I added (thanks to Niels de Vos) these options to the
>>> volumes:
>>>
>>> performance.read-ahead: off
>>>
>>> performance.write-behind: off
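>>>
>>> (Applied with the standard volume-set command; "myvol" is an example
>>> volume name:)
>>>
>>>   gluster volume set myvol performance.read-ahead off
>>>   gluster volume set myvol performance.write-behind off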
>>>
>>>
>>> 2015-07-15 17:23 GMT+03:00 Roman <romeo.r at gmail.com>:
>>>
>>> hey,
>>>
>>> I've updated the bug; if someone has ideas, please share.
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1242913
>>>
>>>
>>> 2015-07-14 19:14 GMT+03:00 Kaushal M <kshlmster at gmail.com>:
>>>
>>> Just a wild guess: what filesystem is used for the Debian 8
>>> installation? It could be the culprit.
>>>
>>> On Tue, Jul 14, 2015 at 7:27 PM, Roman <romeo.r at gmail.com> wrote:
>>> > I did it this way: installed Debian 8 on local disks using the netinstall
>>> > ISO, created a template of it, and then cloned it (full clone) to the
>>> > glusterfs storage backend. The VM boots and runs fine... until I start to
>>> > install something large (a desktop environment, for instance). Last time
>>> > MATE failed to install due to python-gtk2 package problems (complaining
>>> > that it could not compile it).
>>> >
>>>
>>> > 2015-07-14 16:37 GMT+03:00 Scott Harvanek <scott.harvanek at login.com>:
>>> >>
>>> >> What happens if you install from a full CD and not a net-install?
>>> >>
>>> >> Limit the variables.  Currently you are relying on remote mirrors and
>>> >> Internet connectivity.
>>> >>
>>> >> It's either a Proxmox or Debian issue; I really don't think it's
>>> >> Gluster.
>>> >> We have hundreds of Jessie installs running on GlusterFS backends.
>>> >>
>>> >> --
>>> >> Scott H.
>>> >> Login, LLC.
>>> >>
>>> >>
>>> >>
>>> >> Roman
>>> >> July 14, 2015 at 9:30 AM
>>> >> Hey,
>>> >>
>>> >> Thanks for the reply.
>>> >> If it were networking-related, it would affect everything, but it is
>>> >> only Debian 8 that won't install.
>>> >> And yes, I did an iperf test between the gluster and proxmox nodes; it's
>>> >> fine (see the commands below). Installation fails on every node where I
>>> >> try to install Debian 8. Sometimes it goes well (today 1 of 6 tries was
>>> >> fine). Other distros install fine.
>>> >> Sometimes the installation process finishes, but the VM won't start; it
>>> >> just hangs with errors like those attached.
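>>> >>
>>> >> The iperf check was along these lines (hostnames are examples):
>>> >>
>>> >>   iperf -s                      # on the gluster node
>>> >>   iperf -c gluster-node -t 30   # on the proxmox node, 30-second run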
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Roman.
>>> >> Scott Harvanek
>>> >> July 14, 2015 at 9:17 AM
>>> >> We don't have this issue; I'll take a stab, though:
>>> >>
>>> >> Have you confirmed everything is good on the network side of things?
>>> >> MTU/Loss/Errors?
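>>> >>
>>> >> For example (interface and peer names are placeholders):
>>> >>
>>> >>   ip -s link show eth0           # RX/TX error and drop counters
>>> >>   ping -M do -s 8972 peer-node   # 9000-byte MTU path check, if you run jumbo frames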
>>> >>
>>> >> Is your inconsistency linked to one specific brick? Have you tried
>>> >> running a replica instead of distributed?
>>> >>
>>> >>
>>> >>
>>> >> Roman
>>> >> July 14, 2015 at 6:38 AM
>>> >> Here is one example of the errors. It looks like files that the Debian
>>> >> installer copies to the virtual disk located on glusterfs storage are
>>> >> getting corrupted.
>>> >> in-target is /dev/vda1
>>> >>
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Roman.
>>> >> Roman
>>> >> July 14, 2015 at 4:50 AM
>>> >> The Ubuntu 14.04 LTS base install and a subsequent MATE install went fine!
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Roman.
>>> >> Roman
>>> >> July 13, 2015 at 7:35 PM
>>> >> Bah... the randomness of this issue is killing me.
>>> >> Not only HA volumes are affected. I got an error during installation of
>>> >> Debian 8 with MATE (on the python-gtk2 package) on a distributed volume
>>> >> as well.
>>> >> I've checked the MD5 sum of the installation ISO; it's OK.
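>>> >>
>>> >> (Verified with something like the following; the filename is an example:)
>>> >>
>>> >>   md5sum debian-8.1.0-amd64-netinst.iso   # compare against MD5SUMS from the mirror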
>>> >>
>>> >> Shortly after that, on the same VE node, I installed Debian 7 with GNOME
>>> >> without any problem on the HA glusterfs volume.
>>> >>
>>> >> And on the same VE node I've installed Debian 8 with both MATE and GNOME
>>> >> on local storage disks without problems. There is a bug somewhere in
>>> >> gluster or qemu... Proxmox uses a RH kernel, by the way:
>>> >>
>>> >> Linux services 2.6.32-37-pve
>>> >> QEMU emulator version 2.2.1
>>> >> glusterfs 3.6.4
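>>> >>
>>> >> (A sketch of how to confirm those versions on a node:)
>>> >>
>>> >>   uname -r                       # kernel
>>> >>   qemu-system-x86_64 --version   # QEMU
>>> >>   glusterfs --version            # GlusterFS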
>>> >>
>>> >> Any ideas?
>>> >> I'm ready to help investigate this bug.
>>> >> When the sun comes up, I'll try the latest Ubuntu as well. But now I'm
>>> >> going to sleep.
>>> >>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Best regards,
>>> >> Roman.
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Best regards,
>>> > Roman.
>>> >
>>>
>>>
>>>
>>> --
>>>
>>> Best regards,
>>> Roman.
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Roman.
>>
>


-- 
Best regards,
Roman.