<div dir="ltr">Hi Lindolfo,<div><br></div><div>Can you now share the 'gluster volume info' from your setup?</div><div><br></div><div>Please note some basic documentation on shard is available @ <a href="https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/" target="_blank">https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/</a></div><div><br></div><div>-Amar</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 23, 2019 at 7:55 PM Lindolfo Meira <<a href="mailto:meira@cesup.ufrgs.br">meira@cesup.ufrgs.br</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Does this remark have anything to do with the problem I'm talking about? <br>
Because I took the time to recreate the volume, changing its type and <br>
enabling shard, and the problem persists :/<br>
<br>
<br>
Lindolfo Meira, MSc<br>
Diretor Geral, Centro Nacional de Supercomputação<br>
Universidade Federal do Rio Grande do Sul<br>
+55 (51) 3308-3139<br>
<br>
On Wed, 23 Jan 2019, Raghavendra Gowdappa wrote:<br>
<br>
> On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira <<a href="mailto:meira@cesup.ufrgs.br" target="_blank">meira@cesup.ufrgs.br</a>> wrote:<br>
> <br>
> > Dear all,<br>
> ><br>
> > I've been trying to benchmark a gluster file system using the MPIIO API of<br>
> > IOR. Almost every time I try to run the application with more than 6<br>
> > tasks performing I/O (mpirun -n N, for N > 6) I get the error: "writev:<br>
> > Transport endpoint is not connected". And then each one of the N tasks<br>
> > returns "ERROR: cannot open file to get file size, MPI MPI_ERR_FILE:<br>
> > invalid file, (aiori-MPIIO.c:488)".<br>
> ><br>
> > Does anyone have any idea what's going on?<br>
> ><br>
> > I'm writing from a single node, to a system configured for stripe over 6<br>
> > bricks. The volume is mounted with the options _netdev and transport=rdma.<br>
> > I'm using OpenMPI 2.1.2 (I tested version 4.0.0 and nothing changed). IOR<br>
> > arguments used: -B -E -F -q -w -k -z -i=1 -t=2m -b=1g -a=MPIIO. Running<br>
> > OpenSUSE Leap 15.0 and GlusterFS 5.3. Output of "gluster volume info"<br>
> > follows below:<br>
> ><br>
> > Volume Name: gfs<br>
> > Type: Stripe<br>
> ><br>
> <br>
> +Dhananjay, Krutika <<a href="mailto:kdhananj@redhat.com" target="_blank">kdhananj@redhat.com</a>><br>
> stripe has been deprecated. You can use sharded volumes.<br>
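> For reference, recreating it as a sharded volume would look roughly like
> the sketch below. This is only an illustration based on the brick names
> quoted in this thread; the shard block size shown is an arbitrary example
> value, not a recommendation for your workload:<br>

```shell
# Sketch: create a plain distributed volume over the same six bricks
# (instead of the deprecated "stripe" type), then enable sharding on it.
# Hostnames and paths mirror the setup quoted in this thread; the
# shard-block-size value is just an example.
gluster volume create gfs transport rdma \
  pfs01-ib:/mnt/data/gfs pfs02-ib:/mnt/data/gfs pfs03-ib:/mnt/data/gfs \
  pfs04-ib:/mnt/data/gfs pfs05-ib:/mnt/data/gfs pfs06-ib:/mnt/data/gfs

# Turn on the shard translator and pick a shard size before writing data
# (changing it later does not re-shard existing files).
gluster volume set gfs features.shard on
gluster volume set gfs features.shard-block-size 64MB

gluster volume start gfs
```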
> <br>
> <br>
> > Volume ID: ea159033-5f7f-40ac-bad0-6f46613a336b<br>
> > Status: Started<br>
> > Snapshot Count: 0<br>
> > Number of Bricks: 1 x 6 = 6<br>
> > Transport-type: rdma<br>
> > Bricks:<br>
> > Brick1: pfs01-ib:/mnt/data/gfs<br>
> > Brick2: pfs02-ib:/mnt/data/gfs<br>
> > Brick3: pfs03-ib:/mnt/data/gfs<br>
> > Brick4: pfs04-ib:/mnt/data/gfs<br>
> > Brick5: pfs05-ib:/mnt/data/gfs<br>
> > Brick6: pfs06-ib:/mnt/data/gfs<br>
> > Options Reconfigured:<br>
> > nfs.disable: on<br>
> ><br>
> ><br>
> > Thanks in advance,<br>
> ><br>
> > Lindolfo Meira, MSc<br>
> > Diretor Geral, Centro Nacional de Supercomputação<br>
> > Universidade Federal do Rio Grande do Sul<br>
> > +55 (51) 3308-3139_______________________________________________<br>
> > Gluster-users mailing list<br>
> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div>Amar Tumballi (amarts)<br></div></div></div></div></div>