[Gluster-users] Expanding a replicated volume

Sjors Gielen sjors at sjorsgielen.nl
Fri Jul 3 15:46:25 UTC 2015


Hi all,

By attaching a debugger to the live glusterfsd process, I think I have at
least figured out why the files only appear when the `stat` or `du` is done
as root. Curro Rodriguez, do you run the stats as root? As I mentioned,
that was necessary for the heal to kick in.

It seems to be because of the function acl_permits in
xlators/system/posix-acl/src/posix-acl.c. The goal of this function is, I
think, to decide whether glusterfsd will allow a certain operation to take
place. In this case, I try to list a directory that doesn't exist on the
new brick. This causes a "mkdir" heal operation to happen on the local
brick. "acl_permits" decides whether that mkdir is allowed to take place,
and because I'm not the owner of the directory, it is denied. However, if I
run the initial "ls" as root, frame_is_super_user(frame) is true, so
anything is allowed to take place, even the mkdir.
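
To make that control flow concrete, here is a small standalone toy model of
the decision as I understand it. This is not the actual posix-acl.c code;
the toy_* structs, the main() scenario and the numbers in it are made up
for illustration, only the shape of the check mirrors what I saw in the
debugger:

    /* Toy model of the permission check; compile with e.g. cc -o toy toy.c */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>

    /* stand-ins for the caller's credentials (call_frame_t) and the
     * target inode's ownership and mode bits (inode_t) */
    struct toy_frame { unsigned int uid; unsigned int gid; };
    struct toy_inode { unsigned int uid; unsigned int gid; mode_t mode; };

    static int
    toy_frame_is_super_user (struct toy_frame *frame)
    {
            return frame->uid == 0;
    }

    /* want is an rwx bitmask (4/2/1) of the access the operation needs */
    static int
    toy_acl_permits (struct toy_frame *frame, struct toy_inode *inode, int want)
    {
            int perm;

            if (toy_frame_is_super_user (frame))
                    return 1;                        /* root bypasses everything */

            if (frame->uid == inode->uid)
                    perm = (inode->mode >> 6) & 7;   /* owner bits */
            else if (frame->gid == inode->gid)
                    perm = (inode->mode >> 3) & 7;   /* group bits */
            else
                    perm = inode->mode & 7;          /* other bits */

            return (perm & want) == want;
    }

    int
    main (void)
    {
            /* example values: directory owned by root, mode 0755 (made up) */
            struct toy_inode dir  = { .uid = 0,    .gid = 0,    .mode = 0755 };
            struct toy_frame user = { .uid = 1000, .gid = 1000 };
            struct toy_frame root = { .uid = 0,    .gid = 0 };

            /* the healed mkdir needs write+execute (2|1 = 3) on the parent */
            printf ("user may mkdir: %d\n", toy_acl_permits (&user, &dir, 3)); /* 0 */
            printf ("root may mkdir: %d\n", toy_acl_permits (&root, &dir, 3)); /* 1 */
            return 0;
    }

The point is that the only thing letting the healed mkdir through is the
uid == 0 shortcut; everything else falls through to the ordinary
owner/group/other comparison and gets denied.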

IMO, acl_permits should always permit healing operations, even if the user
whose original request triggered the heal would not be allowed to perform
the healing operation itself.
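
In terms of the toy model above, the change I have in mind would look
roughly like this. I don't know whether glusterfs attaches any marker to
internal heal operations that acl_permits could actually test, so the
is_internal_heal field below is purely hypothetical:

    /* Hypothetical variant: a frame marked as an internal heal operation
     * bypasses the ACL check just like root does. "is_internal_heal" does
     * not exist in glusterfs as far as I know; it only illustrates the idea. */
    struct toy_frame2 { unsigned int uid; unsigned int gid; int is_internal_heal; };

    static int
    toy_acl_permits2 (struct toy_frame2 *frame, struct toy_inode *inode, int want)
    {
            int perm;

            if (frame->uid == 0 || frame->is_internal_heal)
                    return 1;                        /* root and heals both pass */

            if (frame->uid == inode->uid)
                    perm = (inode->mode >> 6) & 7;   /* owner bits */
            else if (frame->gid == inode->gid)
                    perm = (inode->mode >> 3) & 7;   /* group bits */
            else
                    perm = inode->mode & 7;          /* other bits */

            return (perm & want) == want;
    }

Whether the real fix is a flag like that, a reserved pid, or something else
entirely is of course up to the developers.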

Sjors

On Fri, 3 Jul 2015 at 13:04, M S Vishwanath Bhat <msvbhat at gmail.com> wrote:

> On 3 July 2015 at 15:02, Sjors Gielen <sjors at sjorsgielen.nl> wrote:
>
>> Hi Vishwanath,
>>
>> On Thu, 2 Jul 2015 at 21:51, M S Vishwanath Bhat <msvbhat at gmail.com> wrote:
>>
>>> AFAIK there are two ways you can trigger the self-heal
>>>
>>> 1. Use the gluster CLI "heal" command. I'm not sure why it didn't work
>>> for you; that needs to be investigated.
>>>
>>
>> Do you think I should file a bug for this? I can reliably reproduce it
>> using the steps in my original e-mail. (This is Gluster 3.7.2, by the way.)
>>
> Yes, you should file a bug if it's not working.
>
> Meanwhile, Pranith or Xavi (the self-heal developers) might be able to help
> you.
>
> Best Regards,
> Vishwanath
>
>
>>
>>> 2. Running 'stat' on files on the gluster volume mountpoint. So if you
>>> run stat on the entire mountpoint, the files should be properly synced
>>> across all the replica bricks.
>>>
>>
>> This indeed seems to do the same as the `du`: when run as root on the
>> server running the complete brick, the file appears on the incomplete
>> brick as well. It appears as an empty file at first, but after a few
>> seconds the complete file exists. When the `stat` is not run as root,
>> this doesn't happen, which I still think is bizarre.
>>
>> Thanks,
>> Sjors
>>
>