[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks
vbellur at redhat.com
Mon Nov 14 16:01:19 UTC 2016
On Mon, Nov 14, 2016 at 10:38 AM, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:
> 2016-11-14 15:54 GMT+01:00 Niels de Vos <ndevos at redhat.com>:
>> Obviously this is unacceptable for versions that have sharding as a
>> functional (not experimental) feature. All supported features are
>> expected to function without major problems (like corruption) for all
>> standard Gluster operations. Add-brick/replace-brick are surely such
>> Gluster operations.
> Is sharding an experimental feature even in 3.8 ?
> Because in 3.8 announcement, it's declared stable:
> "Sharding is now stable for VM image storage. "
Sharding was an experimental feature in 3.7. Based on the feedback
that we received during testing, we declared it stable in 3.8. The
add-brick related issue is something that none of us encountered in
testing, and we will determine how to avoid missing such problems
in the future.
>> FWIW sharding has several open bugs (like any other component), but it
>> is not immediately clear to me if the problem reported in this email is
>> in Bugzilla yet. These are the bugs that are expected to get fixed in
>> upcoming minor releases:
> My issue with sharding was reported in Bugzilla on 2016-07-12.
> Four months for what is, IMHO, a critical bug.
> If you disable sharding on a sharded volume with existing sharded data,
> you corrupt every existing file.
Accessing sharded data after disabling sharding is something that we
did not envision as a valid use case at any point in time. Also, you
can access the contents again by re-enabling sharding. Given these
factors, this particular problem has not been prioritized by the
developers.
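To make the workaround concrete, here is a sketch of the commands involved; the volume name "gv0" is hypothetical and should be replaced with your own:

```shell
# Hypothetical volume name "gv0". Disabling sharding on a volume that
# already holds sharded files makes those files appear corrupt from
# the client's point of view:
gluster volume set gv0 features.shard off

# Re-enabling the feature makes the existing sharded files readable again:
gluster volume set gv0 features.shard on
```

Note that this only restores access to data that was sharded before the option was turned off; it is a recovery step, not a supported way to toggle the feature on a populated volume.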
As with many other projects, we are in a stage today where the number
of users and testers far outweigh the number of developers
contributing code. With this state it becomes hard to prioritize
problems from a long todo list for developers. If valuable community
members like you feel strongly about a bug or feature that needs the
attention of developers, please call such issues out on the mailing
list. We will be more than happy to help.
Having explained the developer perspective, I do apologize for any
inconvenience you might have encountered from this particular bug.