[Gluster-users] SETVOLUME failed
Raghavendra G
raghavendra at gluster.com
Tue Jan 12 07:46:22 UTC 2010
Hi,
Is the volume spec file (by default /etc/glusterfs/glusterfs.vol) the
same on all the servers? If not, either delete the file from the servers
that are not volfile-servers (everything other than 192.168.2.11 here), or
make sure the same file is present on all the servers.
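The error below ("volume-file checksum varies from earlier access") means the
server compared a checksum of its volfile against one seen earlier and found a
mismatch. A quick way to check for drift is to checksum the file on every
server and compare. Here is a minimal local sketch of that comparison; the
/tmp paths and file contents are illustrative stand-ins for the per-server
copies (glusterfs uses its own internal checksum, not necessarily md5, but
the idea is the same):

```shell
#!/bin/sh
# Create two stand-in copies of a volfile (in practice these would be
# /etc/glusterfs/glusterfs.vol fetched from each server, e.g. via scp).
printf 'volume brick\n  type storage/posix\nend-volume\n' > /tmp/server-a.vol
printf 'volume brick\n  type storage/posix\nend-volume\n' > /tmp/server-b.vol

# Checksum each copy; any difference, even whitespace, changes the sum.
sum_a=$(md5sum /tmp/server-a.vol | cut -d' ' -f1)
sum_b=$(md5sum /tmp/server-b.vol | cut -d' ' -f1)

if [ "$sum_a" = "$sum_b" ]; then
    echo "volfiles match"
else
    echo "volfiles differ: re-sync before mounting"
fi
```

If the sums differ, copy one authoritative volfile to all servers (or remove
it from the non-volfile-servers) and remount.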
regards,
On Mon, Jan 11, 2010 at 7:06 PM, <j.bittner at nbu.cz> wrote:
>
> Hi all,
>
> I have glusterFS at version 3.0.0 on debian testing and I have this
> problem:
>
> [2010-01-11 15:58:45] W [socket.c:1407:socket_init] trans: disabling
> non-blocking IO
> [2010-01-11 15:58:45] W [socket.c:1407:socket_init] trans: disabling
> non-blocking IO
>
> ================================================================================
> Version : glusterfs 3.0.0 built on Dec 9 2009 12:15:42
> git: 2.0.1-886-g8379edd
> Starting Time: 2010-01-11 15:58:45
> Command line : /usr/sbin/glusterfs --volfile-server=192.168.2.11 /mnt/
> PID : 3070
> System name : Linux
> Nodename : B125-GMHI-XXX
> Kernel Release : 2.6.31-14-generic
> Hardware Identifier: i686
>
> Given volfile:
>
> +------------------------------------------------------------------------------+
> 1: # RAID 1
> 2: # TRANSPORT-TYPE tcp
> 3: volume prema-1
> 4: type protocol/client
> 5: option transport-type tcp
> 6: option remote-host prema
> 7: option transport.socket.nodelay on
> 8: option remote-port 6996
> 9: option remote-subvolume brick
> 10: end-volume
> 11:
> 12: volume britain-1
> 13: type protocol/client
> 14: option transport-type tcp
> 15: option remote-host britain
> 16: option transport.socket.nodelay on
> 17: option remote-port 6996
> 18: option remote-subvolume brick
> 19: end-volume
> 20:
> 21: volume mirror-0
> 22: type cluster/replicate
> 23: subvolumes britain-1 prema-1
> 24: end-volume
> 25:
> 26: volume writebehind
> 27: type performance/write-behind
> 28: option cache-size 4MB
> 29: subvolumes mirror-0
> 30: end-volume
> 31:
> 32: volume readahead
> 33: type performance/read-ahead
> 34: option page-count 4
> 35: subvolumes writebehind
> 36: end-volume
> 37:
> 38: volume iocache
> 39: type performance/io-cache
> 40: option cache-size 1GB
> 41: option cache-timeout 1
> 42: subvolumes readahead
> 43: end-volume
> 44:
> 45: volume quickread
> 46: type performance/quick-read
> 47: option cache-timeout 1
> 48: option max-file-size 64kB
> 49: subvolumes iocache
> 50: end-volume
> 51:
> 52: volume statprefetch
> 53: type performance/stat-prefetch
> 54: subvolumes quickread
> 55: end-volume
> 56:
>
>
> +------------------------------------------------------------------------------+
> [2010-01-11 15:58:45] N [glusterfsd.c:1361:main] glusterfs: Successfully
> started
> [2010-01-11 15:58:45] N [client-protocol.c:6224:client_setvolume_cbk]
> britain-1: Connected to 192.168.2.11:6996, attached to remote volume
> 'brick'.
> [2010-01-11 15:58:45] N [afr.c:2625:notify] mirror-0: Subvolume 'britain-1'
> came back up; going online.
> [2010-01-11 15:58:45] N [client-protocol.c:6224:client_setvolume_cbk]
> britain-1: Connected to 192.168.2.11:6996, attached to remote volume
> 'brick'.
> [2010-01-11 15:58:45] N [afr.c:2625:notify] mirror-0: Subvolume 'britain-1'
> came back up; going online.
> [2010-01-11 15:58:45] E [client-protocol.c:6187:client_setvolume_cbk]
> prema-1: SETVOLUME on remote-host failed: volume-file checksum varies from
> earlier access
> [2010-01-11 15:58:45] C [fuse-bridge.c:3300:notify] fuse: Remote volume
> file changed, try re-mounting.
> [2010-01-11 15:58:45] W [glusterfsd.c:928:cleanup_and_exit] glusterfs:
> shutting down
> [2010-01-11 15:58:45] N [fuse-bridge.c:3519:fini] fuse: Unmounting '/mnt/'.
> [2010-01-11 15:58:45] N [fuse-bridge.c:3119:fuse_thread_proc]
> glusterfs-fuse: terminating upon getting EBADF when reading /dev/fuse
> [2010-01-11 15:58:45] W [glusterfsd.c:928:cleanup_and_exit] glusterfs:
> shutting down
>
> But if I mount it like this instead: "mount -t glusterfs /etc/glusterfs/glusterfs.vol
> /mnt/" (with the same glusterfs.vol settings), it works correctly. I don't know
> where the problem is. Does anyone know?
>
> ----------------------------------------------------------------
> This message was sent using IMP, the Internet Messaging Program.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
--
Raghavendra G