[Gluster-users] After reboot, one brick is not being seen by clients
Ravishankar N
ravishankar at redhat.com
Thu Nov 28 04:21:59 UTC 2013
On 11/28/2013 03:12 AM, Pat Haley wrote:
>
> Hi,
>
> We are currently using gluster with 3 bricks. We just
> rebooted one of the bricks (mseas-data, also identified
> as gluster-data), which is actually the main server. After
> rebooting this brick, our client machine (mseas) sees only
> the files on the other 2 bricks. Note that if I mount
> the gluster filespace (/gdata) on the brick I rebooted,
> it sees the entire space.
>
> The last time I had this problem, there was an error in
> one of our /etc/hosts files. That does not seem to be the
> case now.
>
> What else can I look at to debug this problem?
>
> Some information I have from the gluster server
>
> [root@mseas-data ~]# gluster --version
> glusterfs 3.3.1 built on Oct 11 2012 22:01:05
>
> [root@mseas-data ~]# gluster volume info
>
> Volume Name: gdata
> Type: Distribute
> Volume ID: eccc3a90-212d-4563-ae8d-10a77758738d
> Status: Started
> Number of Bricks: 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster-0-0:/mseas-data-0-0
> Brick2: gluster-0-1:/mseas-data-0-1
> Brick3: gluster-data:/data
>
> [root@mseas-data ~]# ps -ef | grep gluster
>
> root 2781 1 0 15:16 ? 00:00:00 /usr/sbin/glusterd -p
> /var/run/glusterd.pid
> root 2897 1 0 15:16 ? 00:00:00 /usr/sbin/glusterfsd
> -s localhost --volfile-id gdata.gluster-data.data -p
> /var/lib/glusterd/vols/gdata/run/gluster-data-data.pid -S
> /tmp/e3eac7ce95e786a3d909b8fc65ed2059.socket --brick-name /data -l
> /var/log/glusterfs/bricks/data.log --xlator-option
> *-posix.glusterd-uuid=22f1102a-08e6-482d-ad23-d8e063cf32ed
> --brick-port 24009 --xlator-option gdata-server.listen-port=24009
> root 2903 1 0 15:16 ? 00:00:00 /usr/sbin/glusterfs -s
> localhost --volfile-id gluster/nfs -p
> /var/lib/glusterd/nfs/run/nfs.pid -l /var/log/glusterfs/nfs.log -S
> /tmp/d5c892de43c28a1ee7481b780245b789.socket
> root 4258 1 0 15:52 ? 00:00:00 /usr/sbin/glusterfs
> --volfile-id=/gdata --volfile-server=mseas-data /gdata
> root 4475 4033 0 16:35 pts/0 00:00:00 grep gluster
>
From the ps output, the brick process (glusterfsd) does not appear to
be running on the gluster-data server. Run `gluster volume status` and
check whether that is indeed the case. If so, you can either restart
glusterd on the brick node (`service glusterd restart`) or force-start
the volume (`gluster volume start gdata force`); either should bring
the brick process back online.
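A minimal sequence, assuming the standard 3.3.x init scripts and that
you run this on gluster-data (volume name taken from your `gluster
volume info` output above):

    # Check whether the /data brick is listed as online:
    gluster volume status gdata

    # If it is offline, restart glusterd...
    service glusterd restart

    # ...or force-start the volume to respawn any missing brick
    # processes without disturbing the bricks that are already up:
    gluster volume start gdata force

    # Re-check, then re-test the mount from the client (mseas):
    gluster volume status gdata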
I'm not sure why glusterd did not start the brick process when the
machine was rebooted in the first place. You could check the glusterd
log for clues.
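On 3.3.x the glusterd log is typically
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log (the exact filename
can vary with how glusterd is invoked, so treat that path as an
assumption); the brick's own log, per your ps output, is
/var/log/glusterfs/bricks/data.log. Something like:

    # Look for errors around boot time in the glusterd log (assumed path):
    grep -iE 'error|failed' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

    # And in the brick log (path taken from your ps output):
    grep -iE 'error|failed' /var/log/glusterfs/bricks/data.log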
Hope this helps,
Ravi
> -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
> Pat Haley Email: phaley at mit.edu
> Center for Ocean Engineering Phone: (617) 253-6824
> Dept. of Mechanical Engineering Fax: (617) 253-8125
> MIT, Room 5-213 http://web.mit.edu/phaley/www/
> 77 Massachusetts Avenue
> Cambridge, MA 02139-4301
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users