[Gluster-devel] File Clobbering Bug

Gordan Bobic gordan at bobich.net
Tue Mar 3 11:12:13 UTC 2009


On Tue, 3 Mar 2009 16:37:27 +0530, Anand Avati <avati at zresearch.com> wrote:
> On Tue, Feb 17, 2009 at 7:15 AM, Gordan Bobic <gordan at bobich.net> wrote:
>> I've done a bit more digging on this one and there is some extra
>> weirdness happening. If a directory gets deleted via Samba on the
>> client, when the other server rejoins, the file/directory can be seen
>> with permissions 000. But the file ends up still being there. It also
>> seems to end up being owned by root.
>>
>> This sounds suspiciously similar to the weirdness I was seeing with
>> the ~/.openoffice directory. It looks like something doesn't replicate
>> correctly when a server node rejoins. In this particular case, files
>> were moved or deleted, but the deletes don't get healed correctly.
>>
>> Trawling back through the logs, I can actually see entries from a few
>> days ago:
>>
>> 2009-01-28 14:15:56 W [afr-self-heal-entry.c:471:afr_sh_entry_expunge_rmdir] home: removing directory /foo/bar on home2
>> 2009-01-28 14:15:56 E [afr-self-heal-entry.c:449:afr_sh_entry_expunge_remove_cbk] home: removing /foo/bar on home2 failed (Directory not empty)
>> 2009-01-28 14:15:56 W [afr-self-heal-entry.c:495:afr_sh_entry_expunge_unlink] home: unlinking file /foo/bar/baz on home2
>>
>> This was when the entry in question was deleted while the server was
>> down (I think it was, at least).
>>
>> The files end up with 000 permissions, owned by user 0 (root), group 0
>> (root).
>>
>> When I repair the ownership and permissions on the files and delete
>> them, this appears in the logs:
>>
>> E [posix.c:2434:posix_xattrop] home-store: /foo/bar: Numerical result out of range
> 
> 
> Did you change the number of subvolumes in replicate?

Yes, I believe I had temporarily changed the number of subvolumes (added a
new node), but only while the data was being replicated to the new node.
All the configs were updated and the daemons restarted, of course. I seem
to remember, however, that this behaviour was observed after one of the old
nodes was removed, so there were again only two nodes in the replicate set
(except that instead of home1 and home2, the shares were home2 and home3,
with home1 removed).
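
In case it helps narrow it down: my guess (unverified) is that the
trusted.afr changelog xattrs on the backend still carry entries keyed by
the old subvolume set after the reconfiguration, which would explain
posix_xattrop failing with "Numerical result out of range" (ERANGE). This
is roughly how I've been poking at the bricks directly to check; the
/export/home path and the trusted.afr.home1 key name are just from my
setup, so substitute whatever your vol files actually define:

  # On the backend (server) filesystem directly, not through the
  # glusterfs mount. Find the clobbered entries (mode 000, owned by root):
  find /export/home -perm 000 -user root

  # Dump all extended attributes (including the trusted.afr.* changelogs)
  # for an affected directory:
  getfattr -d -m . -e hex /export/home/foo/bar

  # If a key for a subvolume that no longer exists shows up
  # (e.g. trusted.afr.home1 after home1 was dropped), remove it:
  setfattr -x trusted.afr.home1 /export/home/foo/bar

For reference, the relevant part of the client vol file now looks roughly
like this (home1 gone, home2 and home3 remaining):

  volume home
    type cluster/replicate
    subvolumes home2 home3
  end-volume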

Gordan
