<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 11, 2017 at 5:39 PM, Niels de Vos <span dir="ltr"><<a href="mailto:ndevos@redhat.com" target="_blank">ndevos@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Thu, May 11, 2017 at 12:35:42PM +0530, Krutika Dhananjay wrote:<br>
> Niels,<br>
><br>
> Allesandro's configuration does not have shard enabled. So it has<br>
> definitely not got anything to do with shard not supporting seek fop.<br>
<br>
</span>Yes, but in case sharding would have been enabled, the seek FOP would be<br>
handled correctly (detected as not supported at all).<br>
<br>
I'm still not sure how arbiter prevents doing shards though. We normally<br>
advise to use sharding *and* (optional) arbiter for VM workloads,<br>
arbiter without sharding has not been tested much. In addition, the seek<br>
functionality is only available in recent kernels, so there has been<br>
little testing on CentOS or similar enterprise Linux distributions.<br></blockquote><div><br></div>That is not true. Both are independent. There are quite a few questions we answered in the past ~1 year on gluster-users which don't use sharding+arbiter but plain old 2+1 configuration.<br></div><div class="gmail_quote"> <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>
> HTH,
> Niels
>
>
> > Copy-pasting volume-info output from the first mail:
> >
> > Volume Name: datastore2
> > Type: Replicate
> > Volume ID: c95ebb5f-6e04-4f09-91b9-bbbe63d83aea
> > Status: Started
> > Snapshot Count: 0
> > Number of Bricks: 1 x (2 + 1) = 3
> > Transport-type: tcp
> > Bricks:
> > Brick1: srvpve2g:/data/brick2/brick
> > Brick2: srvpve3g:/data/brick2/brick
> > Brick3: srvpve1g:/data/brick2/brick (arbiter)
> > Options Reconfigured:
> > nfs.disable: on
> > performance.readdir-ahead: on
> > transport.address-family: inet
> >
> >
> > -Krutika
> >
> >
> > On Tue, May 9, 2017 at 7:40 PM, Niels de Vos <ndevos@redhat.com> wrote:
> >
> > > ...
> > > > > client from
> > > > > srvpve2-162483-2017/05/08-10:01:06:189720-datastore2-client-0-0-0
> > > > > (version: 3.8.11)
> > > > > [2017-05-08 10:01:06.237433] E [MSGID: 113107] [posix.c:1079:posix_seek]
> > > > > 0-datastore2-posix: seek failed on fd 18 length 42957209600 [No such
> > > > > device or address]
> > >
> > > The SEEK procedure translates to lseek() in the posix xlator. This can
> > > return with "No such device or address" (ENXIO) in only one case:
> > >
> > >     ENXIO  whence is SEEK_DATA or SEEK_HOLE, and the file offset is
> > >            beyond the end of the file.
> > >
> > > This means that an lseek() was executed with an offset higher than the
> > > size of the file. I'm not sure how that could happen... Sharding
> > > prevents using SEEK at all atm.
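
For reference, the ENXIO case quoted from the lseek(2) man page above can
be reproduced outside of Gluster with a small standalone test program. This
is only a hypothetical sketch (not part of the Gluster sources), assuming a
Linux system where SEEK_DATA is available:

/* seek_enxio.c - minimal sketch of when lseek() returns ENXIO.
 * Not Gluster code; it only demonstrates the lseek(2) semantics quoted
 * above. Build with: cc -o seek_enxio seek_enxio.c
 */
#define _GNU_SOURCE             /* for SEEK_DATA */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Give the file 4 KiB of real data. */
    char buf[4096];
    memset(buf, 'x', sizeof(buf));
    if (write(fd, buf, sizeof(buf)) < 0) {
        perror("write");
        return 1;
    }

    /* Seeking for data at an offset beyond the end of the file fails
     * with ENXIO, i.e. "No such device or address". */
    if (lseek(fd, 1024 * 1024, SEEK_DATA) == -1)
        printf("lseek(SEEK_DATA) beyond EOF: %s\n", strerror(errno));

    close(fd);
    return 0;
}
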
> > >
> > > ...
> > > > > The strange part is that I cannot seem to find any other error.
> > > > > If I restart the VM everything works as expected (it stopped at
> > > > > ~9:51 UTC and was started at ~10:01 UTC).
> > > > >
> > > > > This is not the first time that this happened, and I do not see any
> > > > > problems with networking or the hosts.
> > > > >
> > > > > Gluster version is 3.8.11.
> > > > > This is the affected volume (though it happened on a different one
> > > > > too):
> > > > >
> > > > > Volume Name: datastore2
> > > > > Type: Replicate
> > > > > Volume ID: c95ebb5f-6e04-4f09-91b9-bbbe63d83aea
> > > > > Status: Started
> > > > > Snapshot Count: 0
> > > > > Number of Bricks: 1 x (2 + 1) = 3
> > > > > Transport-type: tcp
> > > > > Bricks:
> > > > > Brick1: srvpve2g:/data/brick2/brick
> > > > > Brick2: srvpve3g:/data/brick2/brick
> > > > > Brick3: srvpve1g:/data/brick2/brick (arbiter)
> > > > > Options Reconfigured:
> > > > > nfs.disable: on
> > > > > performance.readdir-ahead: on
> > > > > transport.address-family: inet
> > > > >
> > > > > Any hint on how to dig more deeply into the reason would be greatly
> > > > > appreciated.
> > >
> > > Probably the problem is with SEEK support in the arbiter functionality.
> > > Just like with a READ or a WRITE on the arbiter brick, SEEK can only
> > > succeed on bricks where the files with content are located. It does not
> > > look like arbiter handles SEEK, so the offset in lseek() will likely be
> > > higher than the size of the file on the arbiter brick (an empty, 0-size
> > > file). I don't know how the replication xlator responds to an error
> > > return from SEEK on one of the bricks, but I doubt it likes it.
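
To illustrate why the arbiter brick is the likely culprit, here is another
hypothetical sketch (again not Gluster code, and the file names are made up
for the demo): the same SEEK_DATA offset that succeeds on a file holding the
real contents fails with ENXIO on a zero-byte file, which is all an arbiter
brick stores.

/* arbiter_seek.c - sketch of why SEEK_DATA fails on a 0-byte arbiter file.
 * Hypothetical demo, not Gluster code: "data_brick_file" stands in for a
 * brick holding the real contents, "arbiter_brick_file" for the empty
 * arbiter copy. Build with: cc -o arbiter_seek arbiter_seek.c
 */
#define _GNU_SOURCE             /* for SEEK_DATA */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Create a file with 'nbytes' of data, then try SEEK_DATA at 'offset'. */
static void try_seek(const char *path, size_t nbytes, off_t offset)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return;
    }

    char buf[4096];
    memset(buf, 'x', sizeof(buf));
    for (size_t written = 0; written < nbytes; written += sizeof(buf)) {
        if (write(fd, buf, sizeof(buf)) < 0) {
            perror("write");
            break;
        }
    }
    fsync(fd);

    off_t ret = lseek(fd, offset, SEEK_DATA);
    if (ret == -1)
        printf("%s: SEEK_DATA at %lld failed: %s\n",
               path, (long long)offset, strerror(errno));
    else
        printf("%s: SEEK_DATA at %lld returned %lld\n",
               path, (long long)offset, (long long)ret);
    close(fd);
}

int main(void)
{
    /* Data brick: 64 KiB of contents, seeking for data at 4 KiB succeeds. */
    try_seek("data_brick_file", 64 * 1024, 4096);

    /* Arbiter brick: 0-byte file, the same offset is beyond EOF -> ENXIO. */
    try_seek("arbiter_brick_file", 0, 4096);

    return 0;
}
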
> > >
> > > We have https://bugzilla.redhat.com/show_bug.cgi?id=1301647 to support
> > > SEEK for sharding. I suggest you open a bug for getting SEEK in the
> > > arbiter xlator as well.
> > >
> > > HTH,
> > > Niels
> > >
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-users</a><br></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Pranith<br></div></div>