[Gluster-devel] AFR problem with 2.0rc4
Amar Tumballi
amar at gluster.com
Wed Mar 18 08:33:28 UTC 2009
Hi Nicolas,
Sure. We are in the process of internal testing; it should be out as a
release soon. Meanwhile, you can pull from git and test it out.
Regards,
On Wed, Mar 18, 2009 at 1:30 AM, nicolas prochazka <
prochazka.nicolas at gmail.com> wrote:
> Hello,
> I see the fix for the AFR self-heal bug in the git tree.
> Can we test that version? Is it stable enough compared to the rc releases?
> nicolas
>
> On Tue, Mar 17, 2009 at 9:39 PM, nicolas prochazka
> <prochazka.nicolas at gmail.com> wrote:
> > My test is:
> > Set up two servers in AFR mode.
> > Copy files to the mount point (/mnt/vdisk): OK, synchronisation works on
> > both servers.
> > Then delete (rm) all files from the storage on server 1 (/mnt/disks/export)
> > and wait for resynchronisation.
> > With rc2 and rc4 => files have the correct size (ls -l) but nothing is
> > there (df shows no disk usage) and the files are corrupt.
> > With rc1: everything is OK, the servers resynchronise perfectly. I think
> > that is the right behaviour ;)
> >
> > nicolas
> >
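[Editor's note] The symptom described above, correct length in ls -l but no disk usage, is what a sparse file looks like: the length has been set but no data blocks were ever written. A generic way to see the effect (plain shell, no gluster required; the /tmp path is just an example) is:

```shell
# Create a 10 MB sparse file: the length is set, but no data blocks are written.
truncate -s 10M /tmp/sparse-demo

# Apparent size in bytes, as ls -l would report it.
stat -c %s /tmp/sparse-demo    # prints 10485760

# Allocated size on disk; a fully sparse file occupies 0 blocks.
du -k /tmp/sparse-demo

rm /tmp/sparse-demo
```

If the healed files on a brick show this pattern, AFR has recreated them with the right length but never copied the data.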
> > On Tue, Mar 17, 2009 at 6:49 PM, Amar Tumballi <amar at gluster.com> wrote:
> >> Hi Nicolas,
> >> When you say you 'add' a server here, do you mean adding another server
> >> to the replicate subvolume (i.e. going from 2 to 3)? Or did you have one
> >> of the 2 servers down while copying data, and then bring it back up and
> >> trigger the AFR self-heal?
> >>
> >> Regards,
> >> Amar
> >>
> >> On Tue, Mar 17, 2009 at 7:22 AM, nicolas prochazka
> >> <prochazka.nicolas at gmail.com> wrote:
> >>>
> >>> Yes, I tried without any performance translators, but the bug persists.
> >>>
> >>> In the logs I cannot see anything interesting; the size of the file
> >>> always seems to be correct once synchronisation begins.
> >>> As I wrote before, if I cp files during normal operation (both servers
> >>> up) everything is fine. The problem appears only when I try to
> >>> resynchronise (rm everything in the storage/posix directory on one
> >>> server): gluster recreates the files, but they are empty or contain bad
> >>> data.
> >>>
> >>> I also noticed that with RC1, during resynchronisation, an ls on the
> >>> mount point blocks until synchronisation finishes; with RC2, ls does
> >>> not block.
> >>>
> >>> Regards,
> >>> Nicolas
> >>>
> >>>
> >>>
> >>>
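[Editor's note] One way to confirm whether a heal produced correct data is to compare checksums of the file as seen through the mount and in each brick's backing directory. The directories below are local stand-ins for the real mount (/mnt/vdisk) and backing store (/mnt/disks/export), so the commands run anywhere:

```shell
# Stand-in directories for the gluster mount and a brick's backing store.
mkdir -p /tmp/afr-demo/mount /tmp/afr-demo/brick

# A file written through the mount, and what a correct self-heal
# should leave on the brick: byte-identical content.
echo "payload" > /tmp/afr-demo/mount/file
cp /tmp/afr-demo/mount/file /tmp/afr-demo/brick/file

# Compare checksums; any mismatch means the heal corrupted the copy.
sum_mount=$(md5sum /tmp/afr-demo/mount/file | cut -d' ' -f1)
sum_brick=$(md5sum /tmp/afr-demo/brick/file | cut -d' ' -f1)
if [ "$sum_mount" = "$sum_brick" ]; then
    echo "heal OK"
else
    echo "heal produced corrupt data"
fi
```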
> >>> On Tue, Mar 17, 2009 at 2:50 PM, Gordan Bobic <gordan at bobich.net>
> wrote:
> >>> > Have you tried the later versions (rc2/rc4) without the performance
> >>> > translators? Does the problem persist without them? Anything
> >>> > interesting in the logs?
> >>> >
> >>> > On Tue, 17 Mar 2009 14:46:41 +0100, nicolas prochazka
> >>> > <prochazka.nicolas at gmail.com> wrote:
> >>> >> Hello again,
> >>> >> So this bug does not occur with RC1.
> >>> >>
> >>> >> RC2 and RC4 contain the bug described below, but RC1 does not. Any
> >>> >> ideas?
> >>> >> Nicolas
> >>> >>
> >>> >> On Tue, Mar 17, 2009 at 12:55 PM, nicolas prochazka
> >>> >> <prochazka.nicolas at gmail.com> wrote:
> >>> >>> I just tried rc2; same bug as with rc4.
> >>> >>> Regards,
> >>> >>> Nicolas
> >>> >>>
> >>> >>> On Tue, Mar 17, 2009 at 12:06 PM, Gordan Bobic <gordan at bobich.net>
> >>> > wrote:
> >>> >>>> Can you check if it works correctly with 2.0rc2 and/or 2.0rc1?
> >>> >>>>
> >>> >>>> On Tue, 17 Mar 2009 12:04:33 +0100, nicolas prochazka
> >>> >>>> <prochazka.nicolas at gmail.com> wrote:
> >>> >>>>> Oops, same problem in fact with a simple 8-byte text file; the
> >>> >>>>> file seems to be corrupt.
> >>> >>>>>
> >>> >>>>> Regards,
> >>> >>>>> Nicolas Prochazka
> >>> >>>>>
> >>> >>>>> On Tue, Mar 17, 2009 at 11:20 AM, Gordan Bobic <
> gordan at bobich.net>
> >>> >>>>> wrote:
> >>> >>>>>> Are you sure this is rc4-specific? I've seen assorted weirdness
> >>> >>>>>> when adding and removing servers in all versions up to and
> >>> >>>>>> including rc2 (rc4 seems to lock up when starting udev, so I'm
> >>> >>>>>> not using it).
> >>> >>>>>>
> >>> >>>>>> On Tue, 17 Mar 2009 11:15:30 +0100, nicolas prochazka
> >>> >>>>>> <prochazka.nicolas at gmail.com> wrote:
> >>> >>>>>>> Hello guys,
> >>> >>>>>>>
> >>> >>>>>>> Strange problem:
> >>> >>>>>>> with rc4, AFR synchronisation does not seem to work:
> >>> >>>>>>> - If I copy a file onto the gluster mount, everything is OK on
> >>> >>>>>>> all servers.
> >>> >>>>>>> - If I add a new server to gluster, that server creates my files
> >>> >>>>>>> (10G size): they appear on XFS as 10G files, but do not contain
> >>> >>>>>>> the original data, just some bytes.
> >>> >>>>>>> Gluster then does not resynchronise, perhaps because the size is
> >>> >>>>>>> the same.
> >>> >>>>>>>
> >>> >>>>>>> regards,
> >>> >>>>>>> NP
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> volume brickless
> >>> >>>>>>> type storage/posix
> >>> >>>>>>> option directory /mnt/disks/export
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>> volume brickthread
> >>> >>>>>>> type features/posix-locks
> >>> >>>>>>> option mandatory-locks on # enables mandatory locking on all files
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>> volume brick
> >>> >>>>>>> type performance/io-threads
> >>> >>>>>>> option thread-count 4
> >>> >>>>>>> subvolumes brickthread
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> volume server
> >>> >>>>>>> type protocol/server
> >>> >>>>>>> subvolumes brick
> >>> >>>>>>> option transport-type tcp
> >>> >>>>>>> option auth.addr.brick.allow 10.98.98.*
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> -------------------------------------------
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> volume brick_10.98.98.1
> >>> >>>>>>> type protocol/client
> >>> >>>>>>> option transport-type tcp/client
> >>> >>>>>>> option transport-timeout 120
> >>> >>>>>>> option remote-host 10.98.98.1
> >>> >>>>>>> option remote-subvolume brick
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> volume brick_10.98.98.2
> >>> >>>>>>> type protocol/client
> >>> >>>>>>> option transport-type tcp/client
> >>> >>>>>>> option transport-timeout 120
> >>> >>>>>>> option remote-host 10.98.98.2
> >>> >>>>>>> option remote-subvolume brick
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> volume last
> >>> >>>>>>> type cluster/replicate
> >>> >>>>>>> subvolumes brick_10.98.98.1 brick_10.98.98.2
> >>> >>>>>>> option read-subvolume brick_10.98.98.1
> >>> >>>>>>> option favorite-child brick_10.98.98.1
> >>> >>>>>>> end-volume
> >>> >>>>>>> volume iothreads
> >>> >>>>>>> type performance/io-threads
> >>> >>>>>>> option thread-count 4
> >>> >>>>>>> subvolumes last
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>> volume io-cache
> >>> >>>>>>> type performance/io-cache
> >>> >>>>>>> option cache-size 2048MB # default is 32MB
> >>> >>>>>>> option page-size 128KB #128KB is default option
> >>> >>>>>>> option cache-timeout 2 # default is 1
> >>> >>>>>>> subvolumes iothreads
> >>> >>>>>>> end-volume
> >>> >>>>>>>
> >>> >>>>>>> volume writebehind
> >>> >>>>>>> type performance/write-behind
> >>> >>>>>>> option aggregate-size 128KB # default is 0bytes
> >>> >>>>>>> option window-size 512KB
> >>> >>>>>>> option flush-behind off # default is 'off'
> >>> >>>>>>> subvolumes io-cache
> >>> >>>>>>> end-volume
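
[Editor's note] To rule out the performance translators, as suggested earlier in the thread, the client side can be reduced to just the protocol/client bricks and the replicate volume. A sketch, reusing the volume names and options from the config above (not a verified configuration):

```
volume brick_10.98.98.1
  type protocol/client
  option transport-type tcp/client
  option transport-timeout 120
  option remote-host 10.98.98.1
  option remote-subvolume brick
end-volume

volume brick_10.98.98.2
  type protocol/client
  option transport-type tcp/client
  option transport-timeout 120
  option remote-host 10.98.98.2
  option remote-subvolume brick
end-volume

volume last
  type cluster/replicate
  subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume
```

If the heal works correctly with this volfile, the problem likely sits in one of the removed translators rather than in AFR itself.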
> >>> >>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> _______________________________________________
> >>> >>>>>>> Gluster-devel mailing list
> >>> >>>>>>> Gluster-devel at nongnu.org
> >>> >>>>>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >>> >>>>>>
> >>> >>>>>>
> >>> >>>>
> >>> >>>>
> >>> >>>
> >>> >
> >>> >
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Amar Tumballi
> >>
> >>
> >
>
>
--
Amar Tumballi