[Gluster-users] "Input/output error" on mkdir for PPC64 based client
Walter Deignan
WDeignan at uline.com
Tue Sep 19 21:10:26 UTC 2017
I recently compiled the 3.10-5 client from source on a few PPC64 systems
running RHEL 7.3. They are mounting a Gluster volume which is hosted on
more traditional x86 servers.
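For reference, the clients use the plain FUSE mount. It amounts to something like this (the /hafsdev1_gv0 mount point is illustrative, matching the prompts below; the options are just the defaults):

mount -t glusterfs dc-hafsdev1a.ulinedm.com:/gv0 /hafsdev1_gv0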
Everything seems to be working properly except for creating new
directories from the PPC64 clients. The mkdir command fails with an
"Input/output error", and for the first few minutes the new directory is
inaccessible. I checked the backend bricks and confirmed the directory was
created properly on all of them. After 2-5 minutes the directory magically
becomes accessible.
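If it helps anyone reproduce the brick check: the gfid xattr can be compared
across the replicas with getfattr (from the attr package), roughly like this
on each server, using the brick path from the volume info below:

getfattr -d -m . -e hex /gluster/bricks/brick1/data/testdir2

On a healthy replica the trusted.gfid value should be identical on all three
bricks.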
This inaccessible-directory issue only appears on the client which
created it. When I create the directory from client #1, I can immediately
see it with no errors from client #2.
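To be explicit about that test, it was the equivalent of the following
(the mount path and directory name here are placeholders, not our real ones):

# on client #1 (PPC64)
mkdir /hafsdev1_gv0/testdir_x
# on client #2, immediately afterwards
ls -ld /hafsdev1_gv0/testdir_x    # visible right away, no error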
Using a pre-compiled 3.10-5 package on an x86 client doesn't show the
issue.
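The obvious difference between the two client types is byte order; the PPC64
boxes here are big-endian. For reference, this is how I'd confirm that on each
client:

lscpu | grep 'Byte Order'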
I poked around Bugzilla but couldn't find anything that matches this.
[root@mqdev1 hafsdev1_gv0]# ls -lh
total 8.0K
drwxrwxr-x. 4 mqm mqm 4.0K Sep 19 15:47 data
drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
[root@mqdev1 hafsdev1_gv0]# mkdir testdir2
mkdir: cannot create directory ‘testdir2’: Input/output error
[root@mqdev1 hafsdev1_gv0]# ls
ls: cannot access testdir2: No such file or directory
data testdir testdir2
[root@mqdev1 hafsdev1_gv0]# ls -lht
ls: cannot access testdir2: No such file or directory
total 8.0K
drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
drwxrwxr-x. 4 mqm mqm 4.0K Sep 19 15:47 data
d?????????? ? ? ? ? ? testdir2
[root@mqdev1 hafsdev1_gv0]# cd testdir2
-bash: cd: testdir2: No such file or directory
*Wait a few minutes...*
[root@mqdev1 hafsdev1_gv0]# ls -lht
total 12K
drwxr-xr-x. 2 root root 4.0K Sep 19 15:50 testdir2
drwxr-xr-x. 2 root root 4.0K Sep 19 15:47 testdir
drwxrwxr-x. 4 mqm mqm 4.0K Sep 19 15:47 data
[root@mqdev1 hafsdev1_gv0]#
My volume config...
[root@dc-hafsdev1a bricks]# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: a2d37705-05cb-4700-8ed8-2cb89376faf0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: dc-hafsdev1a.ulinedm.com:/gluster/bricks/brick1/data
Brick2: dc-hafsdev1b.ulinedm.com:/gluster/bricks/brick1/data
Brick3: dc-hafsdev1c.ulinedm.com:/gluster/bricks/brick1/data
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
network.ping-timeout: 2
features.bitrot: on
features.scrub: Active
cluster.server-quorum-ratio: 51%
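In case self-heal or quorum is suspected, I'm happy to post the output of the
standard health checks as well, e.g.:

gluster volume heal gv0 info
gluster volume status gv0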
-Walter Deignan
-Uline IT, Systems Architect