[Gluster-users] [ovirt-users] open error -13 = sanlock

paf1 at email.cz paf1 at email.cz
Thu Mar 3 15:12:03 UTC 2016


OK,
I will extend replica 2 to replica 3 (arbiter) ASAP.
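For reference, a rough sketch of that conversion (VOLNAME and the arbiter
brick path are placeholders, and the exact syntax depends on the gluster
version):

  # add a third, arbiter brick to turn replica 2 into replica 3 arbiter 1
  gluster volume add-brick VOLNAME replica 3 arbiter 1 arbiter-host:/bricks/arb1

  # with three bricks the quorum can go back to the usual setting
  gluster volume set VOLNAME cluster.quorum-type auto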

If the stale "ids" file (the one that is not being updated) is deleted
on the brick, healing of this file does not work.
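(The standard way to trigger and inspect a heal, with VOLNAME again a
placeholder:

  gluster volume heal VOLNAME full
  gluster volume heal VOLNAME info
)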

regs.Pa.

On 3.3.2016 12:19, Nir Soffer wrote:
> On Thu, Mar 3, 2016 at 11:23 AM, paf1 at email.cz <paf1 at email.cz> wrote:
>
>     This is replica 2, only , with following settings
>
>
> Replica 2 is not supported. Even if you "fix" this now, you will have 
> the same issue
> soon.
>
>
>     Options Reconfigured:
>     performance.quick-read: off
>     performance.read-ahead: off
>     performance.io-cache: off
>     performance.stat-prefetch: off
>     cluster.eager-lock: enable
>     network.remote-dio: enable
>     cluster.quorum-type: fixed
>     cluster.server-quorum-type: none
>     storage.owner-uid: 36
>     storage.owner-gid: 36
>     cluster.quorum-count: 1
>     cluster.self-heal-daemon: enable
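>     (These are from "gluster volume info"; each such option is applied
>     with "gluster volume set", for example, VOLNAME being a placeholder:
>
>       gluster volume set VOLNAME cluster.quorum-type fixed
>       gluster volume set VOLNAME cluster.quorum-count 1
>     )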
>
>     If I create the "ids" file manually (e.g. " sanlock direct init -s
>     3c34ad63-6c66-4e23-ab46-084f3d70b147:0:/STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids:0
>     " ) on both bricks,
>     vdsm writes to only one of them (the one with 2 links = correct).
>     The "ids" file has correct permissions, owner and size on both bricks.
>     brick 1:  -rw-rw---- 1 vdsm kvm 1048576  2 Mar 18:56
>     /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>     - not updated
>     brick 2:  -rw-rw---- 2 vdsm kvm 1048576  3 Mar 10:16
>     /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>     - continually updated
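>     (For what it's worth, the lockspace in each brick's copy of "ids" can
>     be inspected directly with sanlock, using the same path as above:
>
>       sanlock direct dump /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>
>     and only the copy that is being updated should show recent renewals.)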
>
>     What happens when I restart vdsm? Will the oVirt storage domains go to
>     a "disabled" state, i.e. will the VMs' storage be disconnected?
>
>
>  Nothing will happen; the VMs will continue to run normally.
>
> On block storage, stopping vdsm will prevent automatic extending of VM
> disks when a disk becomes too full, but on file-based storage (like
> gluster) there is no issue.
>
>
>     regs.Pa.
>
>
>     On 3.3.2016 02:02, Ravishankar N wrote:
>>     On 03/03/2016 12:43 AM, Nir Soffer wrote:
>>>
>>>         PS:  # find /STORAGES -samefile
>>>         /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>>         -print
>>>         /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>>         = the hard link ("shadow file") is missing in the ".glusterfs" dir.
>>>         How can I fix it? Online, if possible!
>>>
>>>
>>>     Ravi?
>>     Is this the case in all 3 bricks of the replica?
>>     BTW, you can just stat the file on the brick and see the link
>>     count (it must be 2) instead of running the more expensive find
>>     command.
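>>     (Something along these lines, using the path from this thread;
>>     "%h" is the hard-link count reported by GNU stat:
>>
>>       stat -c '%h %n' /STORAGES/g1r5p3/GFS/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/ids
>>     )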
>>
>
>
