[Gluster-users] Issue recreating volumes
Amar Tumballi
amarts at redhat.com
Fri Jun 8 05:04:08 UTC 2012
Hi Brian,
Answers inline.
> Here are a couple of wrinkles I have come across while trying gluster 3.3.0
> under ubuntu-12.04.
>
> (1) At one point I decided to delete some volumes and recreate them. But
> it would not let me recreate them:
>
> root at dev-storage2:~# gluster volume create fast dev-storage1:/disk/storage1/fast dev-storage2:/disk/storage2/fast
> /disk/storage2/fast or a prefix of it is already part of a volume
>
> This is even though "gluster volume info" showed no volumes.
>
> Restarting glusterd didn't help either. Nor indeed did a complete reinstall
> of glusterfs, even with apt-get remove --purge and rm -rf'ing the state
> directories.
>
> Digging around, I found some hidden state files:
>
> # ls -l /disk/storage1/*/.glusterfs/00/00
> /disk/storage1/fast/.glusterfs/00/00:
> total 0
> lrwxrwxrwx 1 root root 8 Jun 7 14:23 00000000-0000-0000-0000-000000000001 -> ../../..
>
> /disk/storage1/safe/.glusterfs/00/00:
> total 0
> lrwxrwxrwx 1 root root 8 Jun 7 14:21 00000000-0000-0000-0000-000000000001 -> ../../..
>
> I deleted them on both machines:
>
> rm -rf /disk/*/.glusterfs
>
> Problem solved? No, not even with glusterd restart :-(
>
> root at dev-storage2:~# gluster volume create safe replica 2 dev-storage1:/disk/storage1/safe dev-storage2:/disk/storage2/safe
> /disk/storage2/safe or a prefix of it is already part of a volume
>
> In the end, what I needed was to delete the actual data bricks themselves:
>
> rm -rf /disk/*/fast
> rm -rf /disk/*/safe
>
> That allowed me to recreate the volumes.
>
> This is probably an understanding/documentation issue. I'm sure there's a
> lot of magic going on in the gluster 3.3 internals (is that long ID some
> sort of replica update sequence number?) which if it were fully documented
> would make it easier to recover from these situations.
>
Preventing the 're-creation' of a volume (internally, it actually just
prevents you from 're-using' the bricks; you can create a volume with the
same name using different bricks) is very much intentional, to prevent
disasters such as data loss.
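For example, something like the below should work even while the old
brick directories still exist (the 'fast2' paths here are just
placeholders for fresh, empty directories):
bash# gluster volume create fast dev-storage1:/disk/storage1/fast2 dev-storage2:/disk/storage2/fast2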
We treat data separately from the volume's configuration information.
Hence, when a volume is deleted, only the configuration details of the
volume are removed; the data belonging to the volume remains on its
bricks as-is. It is left to the admin's discretion to handle that data
later.
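The brick's volume membership is recorded as extended attributes on the
brick directory itself (not under /var/lib/glusterd), which is why the
check survived your package reinstall. You can inspect them with
something like:
bash# getfattr -d -m . -e hex /disk/storage1/fast
and look for trusted.glusterfs.volume-id and trusted.gfid.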
Considering the above, if we allowed 're-using' a brick which was earlier
part of some volume, it could lead to data being placed in the wrong
brick, internal GFID/inode clashes, etc. From the client's perspective
that can trigger a 'heal' of the data, which in the worst case could end
up deleting files that are important.
If the admin is aware of this, and knows that there is no data inside
the brick, then the easier option is to delete the export dir; it gets
created again by 'gluster volume create'. If you want to fix it without
deleting the export directory, that is also possible, by removing the
extended attributes on the brick like below.
bash# setfattr -x trusted.glusterfs.volume-id $brickdir
bash# setfattr -x trusted.gfid $brickdir
And now, creating the volume should succeed.
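Putting it together, a rough sketch of the cleanup for your case (run on
each server, for each brick, assuming the bricks hold no data you care
about; clearing .glusterfs also removes the stale gfid links you found):
bash# setfattr -x trusted.glusterfs.volume-id /disk/storage1/fast
bash# setfattr -x trusted.gfid /disk/storage1/fast
bash# rm -rf /disk/storage1/fast/.glusterfs
After that, 'gluster volume create' should go through without having to
delete the data itself.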
>
> (2) Minor point: the FUSE client no longer seems to understand or need the
> "_netdev" option, however it still invokes it if you use "defaults" in
> /etc/fstab, and so you get a warning about an unknown option:
>
> root at dev-storage1:~# grep gluster /etc/fstab
> storage1:/safe /gluster/safe glusterfs defaults,nobootwait 0 0
> storage1:/fast /gluster/fast glusterfs defaults,nobootwait 0 0
>
> root at dev-storage1:~# mount /gluster/safe
> unknown option _netdev (ignored)
>
Will look into this.
Regards,
Amar