[Gluster-users] Gluster Self Heal

Toby Corkindale toby.corkindale at strategicdata.com.au
Wed Jul 10 08:01:16 UTC 2013


On 09/07/13 18:17, 符永涛 wrote:
> Hi Toby,
>
> What's the bug #? I want to have a look and backport it to our
> production server if it helps. Thank you.

I think it was this one:
https://bugzilla.redhat.com/show_bug.cgi?id=947824

The bug being that the daemons were crashing out if you had a lot of 
volumes defined, I think?

Toby
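
PS: regarding the original question quoted below about getting the two
nodes back in sync -- when the SHD has died, the usual workaround is to
restart glusterd (which respawns the self-heal daemon) and then kick off
a full heal. Roughly, and untested here ("myvol" is again a placeholder,
and the service name varies by distro):

  # Restarting the management daemon respawns glustershd.
  service glusterd restart

  # Alternatively, "start ... force" should respawn any missing
  # brick / self-heal daemon processes without touching the data.
  gluster volume start myvol force

  # Then trigger a full self-heal and watch its progress.
  gluster volume heal myvol full
  gluster volume heal myvol info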

> 2013/7/9 Toby Corkindale <toby.corkindale at strategicdata.com.au>
>
>     On 09/07/13 15:38, Bobby Jacob wrote:
>
>         Hi,
>
>         I have a 2-node gluster with 3 TB storage.
>
>         1) I believe “glusterfsd” is responsible for the self-healing
>         between the 2 nodes.
>
>         2) Due to some network error, the replication stopped for some
>         reason, but the application was accessing the data from node1.
>         When I manually try to start the “glusterfsd” service, it does
>         not start.
>
>         Please advise on how I can maintain the integrity of the data
>         so that we have all the data in both locations.
>
>
>     There were some bugs in the self-heal daemon present in 3.3.0 and
>     3.3.1. Our systems see the SHD crash out with segfaults quite often,
>     and it does not recover.
>
>     I reported this bug a long time ago, and it was fixed in trunk
>     relatively quickly -- however version 3.3.2 has still not been
>     released, despite the fix being found six months ago.
>
>     I find this quite disappointing.
>
>     T
>     _________________________________________________
>     Gluster-users mailing list
>     Gluster-users at gluster.org
>     http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
> 符永涛



