[Gluster-users] Ownership changed to root
Stephan von Krawczynski
skraw at ithnet.com
Mon Aug 27 13:08:21 UTC 2012
On Sun, 26 Aug 2012 20:01:20 +0100
Brian Candler <B.Candler at pobox.com> wrote:
> On Sun, Aug 26, 2012 at 03:50:16PM +0200, Stephan von Krawczynski wrote:
> > I'd like to point you to "[Gluster-devel] Specific bug question" dated few
> > days ago, where I describe a trivial situation when owner changes on a brick
> > can occur, asking if someone can point me to a patch for that.
> I guess this is
> This could be helpful but as far as I can see a lot of important information
> is missing: e.g. what glusterfs version you are using, what operating
> system and kernel version, what underlying filesystem is used for the
> bricks. Is the volume mounted on a separate client machine, or on one of
> the brick servers? "gluster volume info" would be useful too.
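(As an aside, the diagnostics asked for above could be gathered with a short script
like the sketch below. The `gluster` calls are guarded so it still runs on a host
without the client tools installed; the exact output format varies by version.)

```shell
# Collect the basic environment details requested in the thread:
# kernel version, distribution, glusterfs version, and volume layout.
uname -sr                                          # kernel name and release
grep -m1 . /etc/os-release 2>/dev/null || true     # distribution, if present
if command -v gluster >/dev/null 2>&1; then
    gluster --version | head -1                    # glusterfs client version
    gluster volume info                            # brick/replica layout
fi
```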
In fact I wrote down the pieces of information that seemed really important to me;
apparently they were unclear. The setup has two independent hardware bricks and one
client (on separate hardware). It is an all-Linux setup with ext4 on the
bricks. The kernel versions are of little use because I tested quite a few
and the behaviour is always the same.
The problem is related to the load on the client, which is about the only
thing I can say for sure.
The gluster version is 2.X and cannot be changed. AFAIK the glusterfsd
versions are not backward compatible to the point where one could build a setup
with one brick on 2.X and the other on 3.X, which is - if true - a general design
flaw among others.
I did in fact not intend to enter a big discussion about the point. I thought
there must be at least one person who knows the code well enough to answer my
question immediately, in one sentence. All you have to know is how it can be
possible that a "mv" command overruns an earlier one that should already have
completed its job, since it exited successfully.
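(To make the claimed contradiction concrete, here is a minimal sketch of what a
successfully exited "mv" guarantees on an ordinary POSIX filesystem. The paths are
hypothetical stand-ins for the GlusterFS mount point; the report is that under
load on the 2.X client these guarantees can be violated and ownership flips to root.)

```shell
# After rename(2) returns successfully: the destination exists, the source
# is gone, and ownership is untouched. A later mv "overrunning" an earlier
# one that already exited 0 would break exactly these invariants.
set -e
dir=$(mktemp -d)                 # stand-in for the GlusterFS mount point
echo data > "$dir/file"
mv "$dir/file" "$dir/renamed"    # exit status 0 means the rename completed
test -e "$dir/renamed"           # destination must exist
test ! -e "$dir/file"            # source must be gone
owner=$(stat -c '%U' "$dir/renamed" 2>/dev/null || stat -f '%Su' "$dir/renamed")
echo "owner: $owner"             # should be the invoking user, never root
rm -rf "$dir"
```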