[Gluster-users] posix_handle_hard [file exists]

Krutika Dhananjay kdhananj at redhat.com
Tue Nov 6 05:33:00 UTC 2018


I think this is because preallocation works by sending a lot of
writes. In newer versions of oVirt, this has been changed to use fallocate
for faster allocation.
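To illustrate the difference (this is only a sketch, not the actual oVirt code, and the function names are made up): streaming zero-filled writes issues many I/O requests, while posix_fallocate asks the filesystem to reserve all blocks in a single call.

```python
import os

CHUNK = 4 * 1024 * 1024  # 4 MiB write size, chosen arbitrarily for the sketch

# Old approach: preallocate by streaming zero-filled writes (slow on HDDs,
# since every chunk is a real write that must reach the bricks).
def prealloc_by_writes(path, size):
    zeros = bytes(CHUNK)
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(CHUNK, remaining)
            f.write(zeros[:n])
            remaining -= n

# Newer approach: one syscall reserves the blocks and extends the file.
def prealloc_by_fallocate(path, size):
    with open(path, "wb") as f:
        os.posix_fallocate(f.fileno(), 0, size)
```

Both end with a file of the requested size; the fallocate path simply avoids pushing terabytes of zeroes through the write path.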

Adding Sahina and Gobinda to help with the oVirt version number that has
this fix.

-Krutika

On Mon, Nov 5, 2018 at 8:23 PM Jorick Astrego <jorick at netbulae.eu> wrote:

> Hi Krutika,
>
> Thanks for the info.
>
> After a long time the preallocated disk was finally created properly. It
> was a 1 TB disk on an HDD pool, so some delay was expected.
>
> Still, it took a bit longer than expected, and there were no other virtual
> disks on that storage. Is there something I can tweak or check for this?
>
> Regards, Jorick
>
> On 10/31/2018 01:10 PM, Krutika Dhananjay wrote:
>
> These log messages represent a transient state; they are harmless and can
> be ignored. They appear when a lookup and an mknod to create a shard run in
> parallel.
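A minimal sketch of why the racing link() is harmless (hypothetical helper, not GlusterFS code): the loser of the race gets EEXIST, but the hard link it wanted already exists, so the operation can be treated as a success.

```python
import os

def ensure_hard_link(src, dst):
    """Create dst as a hard link to src, tolerating a racing creator."""
    try:
        os.link(src, dst)
    except FileExistsError:
        # Another thread/process won the race. If dst already points at the
        # same inode as src, the end state is exactly what we wanted anyway.
        if os.stat(src).st_ino != os.stat(dst).st_ino:
            raise
```

The brick logs the EEXIST as a warning, but as long as the existing link resolves to the same inode nothing is actually wrong.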
>
> Regarding the preallocated disk creation issue, could you check whether
> there are any errors/warnings in the fuse mount logs? These are named after
> the hyphenated mount point followed by ".log" and are found under
> /var/log/glusterfs.
>
> -Krutika
>
>
> On Wed, Oct 31, 2018 at 4:58 PM Jorick Astrego <jorick at netbulae.eu> wrote:
>
>> Hi,
>>
>> I have the similar issues with ovirt 4.2 on a glusterfs-3.8.15 cluster.
>> This was a new volume and I created first a thin provisioned disk, then I
>> tried to create a preallocated disk but it hangs after 4MB. The only issue
>> I can find in the logs sofar are the [File exists] errors with the sharding.
>>
>>
>> The message "W [MSGID: 113096] [posix-handle.c:761:posix_handle_hard]
>> 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125365 ->
>> /data/hdd2/brick1/.glusterfs/16/a1/16a18a01-4f77-4c37-923d-9f0bc59f5cc7 failed
>> [File exists]" repeated 2 times between [2018-10-31 10:46:33.810987] and
>> [2018-10-31 10:46:33.810988]
>> [2018-10-31 10:46:33.970949] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366 ->
>> /data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86d failed
>> [File exists]
>> [2018-10-31 10:46:33.970950] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366 ->
>> /data/hdd2/brick1/.glusterfs/90/85/9085ea11-4089-4d10-8848-fa2d518fd86d failed
>> [File exists]
>> [2018-10-31 10:46:35.601064] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369 ->
>> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8 failed
>> [File exists]
>> [2018-10-31 10:46:35.601065] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125369 ->
>> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8 failed
>> [File exists]
>> [2018-10-31 10:46:36.040564] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370 ->
>> /data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69 failed
>> [File exists]
>> [2018-10-31 10:46:36.040565] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125370 ->
>> /data/hdd2/brick1/.glusterfs/30/93/3093fdb6-e62c-48b8-90e7-d4d72036fb69 failed
>> [File exists]
>> [2018-10-31 10:46:36.319247] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 ->
>> /data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39 failed
>> [File exists]
>> [2018-10-31 10:46:36.319250] W [MSGID: 113096]
>> [posix-handle.c:761:posix_handle_hard] 0-hdd2-posix: link
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 ->
>> /data/hdd2/brick1/.glusterfs/c3/c2/c3c272f5-50af-4e82-94bb-b76eaa7a9a39 failed
>> [File exists]
>> [2018-10-31 10:46:36.319309] E [MSGID: 113020] [posix.c:1407:posix_mknod]
>> 0-hdd2-posix: setting gfid on
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125372 failed
>>
>>
>>         -rw-rw----. 2 root root 4194304 Oct 31 11:46
>> /data/hdd2/brick1/.shard/6573a019-dba5-4f97-bca9-0a00ce537318.125366
>>
>> -rw-rw----. 2 root root 4194304 Oct 31 11:46
>> /data/hdd2/brick1/.glusterfs/9b/eb/9bebaaac-f460-496f-b30d-aabe77bffbc8
>>
>> On 10/01/2018 12:36 PM, Jose V. Carrión wrote:
>>
>> Hi,
>>
>> I have a gluster 3.12.6-1 installation with 2 configured volumes.
>>
>> Several times a day, some bricks report the lines below:
>>
>> [2018-09-30 20:36:27.348015] W [MSGID: 113096]
>> [posix-handle.c:770:posix_handle_hard] 0-volumedisk0-posix: link
>> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 ->
>> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277 failed
>> [File exists]
>> [2018-09-30 20:36:27.383957] E [MSGID: 113020]
>> [posix.c:3162:posix_create] 0-volumedisk0-posix: setting gfid on
>> /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5 failed
>>
>> I can access both /mnt/glusterfs/vol0/brick1/6349/20180921/20180921.h5
>> and
>> /mnt/glusterfs/vol0/brick1/.glusterfs/3b/1c/3b1c5fe1-b141-4687-8eaf-2c28f9505277;
>> the two paths are hard links to the same file.
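One way to confirm that two paths really are hard links to the same file (a generic sketch; substitute the paths from the logs above) is to compare their device and inode numbers:

```python
import os

def same_inode(path_a, path_b):
    """True when the two paths are hard links to the same underlying file."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    # Hard links share both the device and the inode; st_nlink on either
    # path shows how many links the inode has in total.
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)
```

This is the same check `stat -c '%d %i %h' <path>` would let you do by hand on the brick.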
>>
>> What is the meaning of the error lines?
>>
>> Thanks in advance.
>>
>> Cheers.
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>>
>>
>> Met vriendelijke groet, With kind regards,
>>
>> Jorick Astrego
>>
>> * Netbulae Virtualization Experts *
>> ------------------------------
>> Tel: 053 20 30 270 info at netbulae.eu Staalsteden 4-3A KvK 08198180
>> Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
>> ------------------------------
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
>
>
>
>
