[Gluster-users] Glusterfs : unable to copy/move files to the replicated volume
Gopu Krishnan
gopukrishnantec@gmail.com
Thu May 28 13:59:35 UTC 2015
Hi all,
I am having an issue copying/moving files to my replicated volume, which
resides on AWS EBS volumes. Let me explain my current setup. I created two
instances in AWS and attached a 50GB volume to each of them. I mounted
those volumes at /gluster_brick1 and /gluster_brick2 respectively, and
created the directories /gluster_brick1/data/ and /gluster_brick2/data/ to
serve as the two bricks for GlusterFS replication. I followed the link
below for the replication setup, and it worked nicely for me:
https://gopukrish.wordpress.com/glusterfs/
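For reference, the brick setup on each instance was roughly the following
(device name taken from server 1; I am assuming xfs for the brick
filesystem here, and server 2 uses /gluster_brick2 instead):

# format and mount the attached EBS volume, then create the brick directory
mkfs.xfs /dev/xvdb
mkdir -p /gluster_brick1
mount /dev/xvdb /gluster_brick1
mkdir -p /gluster_brick1/data

# create and start the replicated volume (run once, from server 1)
gluster peer probe mytestlocal2.com
gluster volume create datavol replica 2 \
    mytestlocal1.com:/gluster_brick1/data \
    mytestlocal2.com:/gluster_brick2/data
gluster volume start datavol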
My volume is named datavol and is mounted on each server at
/home/mytestlo/public_html.
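The mount itself is along these lines; since the error below shows up in
nfs.log, it is presumably going through the gluster NFS server rather
than the native client:

mount -t nfs -o vers=3 mytestlocal1.com:/datavol /home/mytestlo/public_html
# native-client equivalent would be:
# mount -t glusterfs mytestlocal1.com:/datavol /home/mytestlo/public_html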
Gluster replication works fine as expected when I create files, but when I
copy or move files to the datavol volume, I get this error: Remote I/O
error.
[root@mytestlocal1 ~]# df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/xvda1                 9.9G  7.1G  2.4G  76% /
none                       1.8G     0  1.8G   0% /dev/shm
/dev/xvdb                   50G  7.1G   40G  16% /gluster_brick1
mytestlocal1.com:/datavol   50G  7.1G   40G  16% /home/mytestlo/public_html
[root@mytestlocal1 ~]# cd /home/mytestlo/public_html
[root@mytestlocal1 public_html]# mv /root/phpmyadmin/ /home/mytestlo/public_html/
mv: cannot create directory `/home/mytestlo/public_html/phpmyadmin': Remote I/O error
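If it helps, I can also strace the mv (assuming strace is installed) to
see exactly which syscall returns EREMOTEIO, the errno behind "Remote I/O
error":

strace -f mv /root/phpmyadmin/ /home/mytestlo/public_html/ 2>&1 | grep EREMOTEIO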
The only log entry I was able to find was in nfs.log:
[2015-05-28 13:29:26.713278] I [MSGID: 109036]
[dht-common.c:6689:dht_log_new_layout_for_dir_selfheal] 0-datavol-dht:
Setting layout of /phpmyadmin with [Subvol_name: datavol-replicate-0, Err:
-1 , Start: 0 , Stop: 4294967295 , Hash: 1 ],
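In case it helps with the failed layout, I can also dump the dht layout
xattrs directly from the bricks on both servers (assuming the directory
was at least partially created on a brick before the error):

getfattr -m . -d -e hex /gluster_brick1/data/phpmyadmin   # on server 1
getfattr -m . -d -e hex /gluster_brick2/data/phpmyadmin   # on server 2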
Here are my Gluster details:
[root@mytestlocal1 public_html]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@mytestlocal1 public_html]# glusterfs -V
glusterfs 3.7.0 built on May 20 2015 13:30:40
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@mytestlocal1 ~]# gluster volume info
Volume Name: datavol
Type: Replicate
Volume ID: 1c77f0cf-c3b7-49a3-bcb4-e6fe950c0b6a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: mytestlocal1.com:/gluster_brick1/data
Brick2: mytestlocal2.com:/gluster_brick2/data
Options Reconfigured:
performance.readdir-ahead: on
auth.allow: IP1,IP2
[root@mytestlocal1 public_html]# gluster volume status
Status of volume: datavol
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick mytestlocal1.com:/gluster_brick1/data  49152     0          Y       27422
Brick mytestlocal2.com:/gluster_brick2/data  49152     0          Y       26015
NFS Server on localhost                      2049      0          Y       27407
Self-heal Daemon on localhost                N/A       N/A        Y       27417
NFS Server on mytestlocal2.com               2049      0          Y       26004
Self-heal Daemon on mytestlocal2.com         N/A       N/A        Y       26014

Task Status of Volume datavol
------------------------------------------------------------------------------
There are no active volume tasks
[root@mytestlocal1 public_html]# gluster peer status
Number of Peers: 1
Hostname: mytestlocal2.com
Uuid: 2a6a1341-9e8c-4c3a-980c-d0592e4aeeca
State: Peer in Cluster (Connected)
[root@mytestlocal1 public_html]# gluster volume heal datavol info split-brain
Brick mytestlocal1.com:/gluster_brick1/data/
Number of entries in split-brain: 0

Brick mytestlocal2.com:/gluster_brick2/data/
Number of entries in split-brain: 0
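For completeness, I can also list all pending heal entries, not just the
split-brain ones, with:

gluster volume heal datavol info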
I am not sure whether the AWS volume is causing the error.
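One test I can try to rule out the EBS volume itself is copying the same
directory straight onto the underlying mount, bypassing gluster entirely
(outside the brick's data directory, since writing into a brick directly
is unsafe):

mkdir /gluster_brick1/ebs_test
cp -r /root/phpmyadmin /gluster_brick1/ebs_test/

If that succeeds, the Remote I/O error is coming from the gluster layer
rather than the AWS volume. Please help!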
Thanks,
Gopu