[Gluster-users] Glusterfs self-heal on devices minor/major numbers
Vadim S. Khondar
v.khondar at o3.ua
Fri Mar 18 17:55:51 UTC 2011
Hello everyone,
I have a setup with two servers running glusterfs 3.1.2 (Repository
revision: v3.1.1-64-gf2a067c) with fuse init (API version 7.10) on
CentOS 5.5.
Glusterfs provides a replicated storage volume for OpenVZ images.
When I restore a VPS image on one server, the device entries in the
VPS's /dev are created with the correct minor/major numbers. These are
also synced to the other server and appear there soon afterwards, but
with wrong minors/majors.
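For reference, this is roughly how I compare the raw device numbers on
the two bricks (just a sketch from my setup; <CTID> stands for the
container's directory under /store and is a placeholder here):

  # %t and %T print a special file's major and minor numbers in hex
  stat -c '%n %t:%T' /store/<CTID>/dev/null
  # the same check on the second brick
  ssh srv2 "stat -c '%n %t:%T' /store/<CTID>/dev/null"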
Here are parts of `ls -la' within /store (this directory acts as the
storage backend for the gluster volume) on both servers:
srv1:
...
crw-rw-rw- 1 root root 5, 1 Apr 13 2006 console
crw------- 1 root root 1, 6 Apr 13 2006 core
crw-r----- 1 root kmem 1, 2 Apr 13 2006 kmem
crw------- 1 root root 1, 11 Apr 13 2006 kmsg
crw-r----- 1 root kmem 1, 1 Apr 13 2006 mem
crw-rw-rw- 1 root root 1, 3 Apr 13 2006 null
crw-r----- 1 root kmem 1, 4 Apr 13 2006 port
crw-rw-rw- 1 root tty 3, 176 Apr 13 2006 ttya0
crw-rw-rw- 1 root tty 3, 177 Apr 13 2006 ttya1
crw-rw-rw- 1 root tty 3, 178 Apr 13 2006 ttya2
...
srv2:
...
crw-rw-rw- 1 root root 253, 1 Apr 13 2006 console
crw------- 1 root root 253, 1 Apr 13 2006 core
crw-r----- 1 root kmem 253, 1 Apr 13 2006 kmem
crw------- 1 root root 253, 1 Apr 13 2006 kmsg
crw-r----- 1 root kmem 253, 1 Apr 13 2006 mem
crw-rw-rw- 1 root root 253, 1 Apr 13 2006 null
crw-r----- 1 root kmem 253, 1 Apr 13 2006 port
crw-rw-rw- 1 root root 253, 1 Apr 13 2006 ptmx
crw-rw-rw- 1 root root 253, 1 Apr 13 2006 tty
crw-rw-rw- 1 root tty 253, 1 Apr 13 2006 ttya0
crw-rw-rw- 1 root tty 253, 1 Apr 13 2006 ttya1
crw-rw-rw- 1 root tty 253, 1 Apr 13 2006 ttya2
...
As you can see, the device numbers on srv2 are wrong: every entry ends
up as 253, 1.
Note that I restore the VPS not into /store directly, but into the
mounted glusterfs volume.
Moreover, if I delete everything from the wrong /dev and trigger
self-heal with something like `touch -a -c dev/*', the device entries
are recreated with the same wrong majors/minors.
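So far the only workaround I see is to recreate the nodes by hand
directly on the affected brick, along these lines (a sketch for
/dev/null with the correct 1, 3 numbers from the srv1 listing; <CTID>
is again a placeholder):

  # on srv2: replace the broken node with one carrying the right numbers
  rm /store/<CTID>/dev/null
  mknod -m 666 /store/<CTID>/dev/null c 1 3
  chown root:root /store/<CTID>/dev/null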
Is this a bug, and is there any solution besides manually fixing the
entries on the backend storage of the gluster volume?
--
With best regards,
Vadim