[Gluster-users] writev: Transport endpoint is not connected

Lindolfo Meira meira at cesup.ufrgs.br
Wed Jan 23 14:43:04 UTC 2019


Hi Amar.

Yeah, I've taken a look at the documentation. Below is the output of 
volume info on the new volume. Pretty standard.

Volume Name: gfs
Type: Distribute
Volume ID: b5ef065f-1ba2-481f-8108-e8f6d2d3f036
Status: Started
Snapshot Count: 0
Number of Bricks: 6
Transport-type: rdma
Bricks:
Brick1: pfs01-ib:/mnt/data
Brick2: pfs02-ib:/mnt/data
Brick3: pfs03-ib:/mnt/data
Brick4: pfs04-ib:/mnt/data
Brick5: pfs05-ib:/mnt/data
Brick6: pfs06-ib:/mnt/data
Options Reconfigured:
features.shard: on
nfs.disable: on
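
For reference, a volume like this can be created and sharding enabled with
commands along these lines. This is only a sketch: the brick paths are taken
from the output above, and the 64MB shard block size is just the GlusterFS
default, shown for illustration:

  gluster volume create gfs transport rdma \
      pfs01-ib:/mnt/data pfs02-ib:/mnt/data pfs03-ib:/mnt/data \
      pfs04-ib:/mnt/data pfs05-ib:/mnt/data pfs06-ib:/mnt/data
  gluster volume set gfs features.shard on
  gluster volume set gfs features.shard-block-size 64MB
  gluster volume start gfs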



Lindolfo Meira, MSc
Diretor Geral, Centro Nacional de Supercomputação
Universidade Federal do Rio Grande do Sul
+55 (51) 3308-3139

On Wed, 23 Jan 2019, Amar Tumballi Suryanarayan wrote:

> Hi Lindolfo,
> 
> Can you now share the 'gluster volume info' from your setup?
> 
> Please note some basic documentation on shard is available @
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/
> 
> -Amar
> 
> On Wed, Jan 23, 2019 at 7:55 PM Lindolfo Meira <meira at cesup.ufrgs.br> wrote:
> 
> > Does this remark have anything to do with the problem I'm talking about?
> > Because I took the time to recreate the volume, changing its type and
> > enabling shard, and the problem persists :/
> >
> >
> > Lindolfo Meira, MSc
> > Diretor Geral, Centro Nacional de Supercomputação
> > Universidade Federal do Rio Grande do Sul
> > +55 (51) 3308-3139
> >
> > On Wed, 23 Jan 2019, Raghavendra Gowdappa wrote:
> >
> > > On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira <meira at cesup.ufrgs.br>
> > > wrote:
> > >
> > > > Dear all,
> > > >
> > > > I've been trying to benchmark a gluster file system using the MPIIO
> > > > API of IOR. Almost every time I try to run the application with more
> > > > than 6 tasks performing I/O (mpirun -n N, for N > 6) I get the error
> > > > "writev: Transport endpoint is not connected", and then each of the N
> > > > tasks returns "ERROR: cannot open file to get file size, MPI
> > > > MPI_ERR_FILE: invalid file, (aiori-MPIIO.c:488)".
> > > >
> > > > Does anyone have any idea what's going on?
> > > >
> > > > I'm writing from a single node to a volume configured for stripe
> > > > over 6 bricks. The volume is mounted with the options _netdev and
> > > > transport=rdma. I'm using OpenMPI 2.1.2 (I tested version 4.0.0 and
> > > > nothing changed). IOR arguments used: -B -E -F -q -w -k -z -i=1 -t=2m
> > > > -b=1g -a=MPIIO (a reconstructed invocation is sketched after the
> > > > volume info below). Running OpenSUSE Leap 15.0 and GlusterFS 5.3.
> > > > Output of "gluster volume info" follows below:
> > > >
> > > > Volume Name: gfs
> > > > Type: Stripe
> > > >
> > >
> > > +Dhananjay, Krutika <kdhananj at redhat.com>
> > > stripe has been deprecated. You can use sharded volumes.
> > >
> > >
> > > > Volume ID: ea159033-5f7f-40ac-bad0-6f46613a336b
> > > > Status: Started
> > > > Snapshot Count: 0
> > > > Number of Bricks: 1 x 6 = 6
> > > > Transport-type: rdma
> > > > Bricks:
> > > > Brick1: pfs01-ib:/mnt/data/gfs
> > > > Brick2: pfs02-ib:/mnt/data/gfs
> > > > Brick3: pfs03-ib:/mnt/data/gfs
> > > > Brick4: pfs04-ib:/mnt/data/gfs
> > > > Brick5: pfs05-ib:/mnt/data/gfs
> > > > Brick6: pfs06-ib:/mnt/data/gfs
> > > > Options Reconfigured:
> > > > nfs.disable: on
> > > >
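> > > > For concreteness, the mount and the failing run would look roughly
> > > > like this. This is only a reconstruction: the mount point, the server
> > > > name in the fstab entry, the test file path and the task count are
> > > > illustrative, while the IOR flags are the ones listed above:
> > > >
> > > >   # /etc/fstab entry giving the _netdev and transport=rdma options
> > > >   pfs01-ib:/gfs  /mnt/gfs  glusterfs  _netdev,transport=rdma  0 0
> > > >
> > > >   # any N > 6 triggers the error; 8 shown here
> > > >   mpirun -n 8 ior -B -E -F -q -w -k -z -i=1 -t=2m -b=1g -a=MPIIO \
> > > >       -o /mnt/gfs/testfile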
> > > >
> > > > Thanks in advance,
> > > >
> > > > Lindolfo Meira, MSc
> > > > Diretor Geral, Centro Nacional de Supercomputação
> > > > Universidade Federal do Rio Grande do Sul
> > > > +55 (51) 3308-3139
> 
> 
> 
> -- 
> Amar Tumballi (amarts)
> 

