[Gluster-users] 3.7.16 with sharding corrupts VMDK files when adding and removing bricks

Gandalf Corvotempesta gandalf.corvotempesta at gmail.com
Mon Nov 14 16:20:03 UTC 2016


2016-11-14 16:55 GMT+01:00 Krutika Dhananjay <kdhananj at redhat.com>:
> The only way to fix it is to have sharding be part of the graph *even* if
> disabled,
> except that in this case, its job should be confined to aggregating the
> already
> sharded files during reads but NOT shard new files that are created, since
> it is
> supposed to "act" disabled. This is a slightly bigger change and this is why
> I had
> suggested the workaround at
> https://bugzilla.redhat.com/show_bug.cgi?id=1355846#c1
> back then.

Why not keep the shard xlator always on, but set the shard size to a very
high value so that sharding never actually happens? Something like 100GB
(just as a proof of concept).
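A rough sketch of what I mean, using the standard `gluster volume set` CLI
(the volume name "myvol" is just a placeholder, and I haven't checked whether
100GB is within the range gluster accepts for this option):

```shell
# Hypothetical volume name "myvol" -- substitute your own.
# Keep the shard xlator enabled so already-sharded files stay readable:
gluster volume set myvol features.shard on

# Raise the shard block size so new files are effectively never split
# (assuming the value is accepted by the option's validation):
gluster volume set myvol features.shard-block-size 100GB

# Verify the setting took effect:
gluster volume get myvol features.shard-block-size
```

That way the aggregation logic stays in the graph for reads, but in practice
no new file would ever cross the shard threshold.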

> FWIW, the documentation [1] does explain how to disable sharding the right
> way and has been in existence ever since sharding was first released in
> 3.7.0.
>
> [1] -
> http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/

Ok, but:
1) that's for 3.7 *beta1*. I'm using 3.8.
2) "advisable" doesn't mean "you have to". It's advice, not the only
way to disable a feature.
3) I'm talking about adding a confirmation prompt to the CLI, nothing
strange. All software asks for confirmation when bad things could happen.
