[Gluster-users] Howto Unify Storage from other server without replication

Christeddy Parapat christeddy.parapat at mtbintl.com
Mon Aug 9 07:46:34 UTC 2010


Hi,

	I really need somebody's help here. I am trying to set up 3 machines: 2 as servers and 1 as a client, using "cluster/unify". But when I try to run it, the client always reports "not connected". If I comment out the "cluster/unify" configuration, it connects. Is there a way to make GlusterFS unify all the storage resources from the other servers into a single data pool?
Let me share my configuration here:

Server 1 Configuration (glusterfsd.vol) 

[root at fs-lb1 glusterfs]# cat glusterfsd.vol 
----------------------------------------
volume brick
  type storage/posix
  option directory /data
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 192.168.0.10     # Default is to listen on all interfaces
  option transport.socket.listen-port 6996              # Default is 6996
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes brick
  option auth.addr.brick.allow 192.168.0.* # Allow access to "brick" volume
end-volume

volume brick-ns
  type storage/posix                    # POSIX FS translator
  option directory /data/export-ns      # Export this directory
end-volume

volume servers
  type protocol/server
  option transport-type tcp     # For TCP/IP transport
  option transport.socket.listen-port 6999              # Default is 6996
  subvolumes brick-ns
  option auth.addr.brick-ns.allow *             # Allow access to "brick-ns" volume
end-volume
----------------------------------------
Server 2 Configuration (glusterfsd.vol)

[root at fs1 glusterfs]# cat glusterfsd.vol
----------------------------------------
volume brick2
  type storage/posix                   # POSIX FS translator
  option directory /Data        # Export this directory
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option transport.socket.bind-address 192.168.0.11     # Default is to listen on all interfaces
  option transport.socket.listen-port 6996              # Default is 6996
  subvolumes brick2
  option auth.addr.brick2.allow * # Allow access to "brick2" volume
end-volume
----------------------------------------

Client Configuration (glusterfs.vol); 

[root at appman glusterfs]# cat glusterfs.vol
----------------------------------------
volume client
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.10         # IP address of the remote brick
  option remote-subvolume brick        # name of the remote volume
end-volume

volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.11
  option remote-subvolume brick2
end-volume

volume client-ns
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 192.168.0.10         # IP address of the remote brick
  option transport.socket.remote-port 6999              # default server port is 6996
  option remote-subvolume brick-ns     # name of the remote volume
end-volume

volume unify
  type cluster/unify
#  option scheduler rr
  option self-heal background # foreground off # default is foreground
  option scheduler alu
  option alu.limits.min-free-disk  5% #%
  option alu.limits.max-open-files 10000
  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 2GB
  option alu.disk-usage.exit-threshold  128MB
  option alu.open-files-usage.entry-threshold 1024
  option alu.open-files-usage.exit-threshold 32
  option alu.read-usage.entry-threshold 20 #%
  option alu.read-usage.exit-threshold 4 #%
  option alu.write-usage.entry-threshold 20 #%
  option alu.write-usage.exit-threshold 4 #%
  option alu.disk-speed-usage.entry-threshold 0 # DO NOT SET IT. SPEED IS CONSTANT!!!.
  option alu.disk-speed-usage.exit-threshold 0 # DO NOT SET IT. SPEED IS CONSTANT!!!.
  option alu.stat-refresh.interval 10sec
  option alu.stat-refresh.num-file-create 10
  option namespace client-ns
  subvolumes client client2
end-volume
----------------------------------------
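In case it helps narrow things down, here is a minimal version of the unify volume with only the required options, using the simple rr scheduler (the one commented out above) instead of alu. This is a sketch based on my reading of the documentation, not something I have confirmed works:

```
volume unify
  type cluster/unify
  option scheduler rr              # round-robin instead of alu
  option namespace client-ns      # namespace volume is still required
  subvolumes client client2
end-volume
```

If this minimal version also crashes, the problem is presumably not in the alu options.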

I tried running it in debug mode and got the output below (I don't know how to interpret it):

+------------------------------------------------------------------------------+
[2010-08-09 14:31:07] D [glusterfsd.c:1382:main] glusterfs: running in pid 25827
[2010-08-09 14:31:07] D [unify.c:4347:init] unify: namespace node specified as client-ns
[2010-08-09 14:31:07] D [scheduler.c:53:get_scheduler] scheduler: attempt to load file alu.so
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.stat-refresh.interval' is deprecated, preferred is 'scheduler.refresh-interval', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:554:_volume_option_value_validate] unify: no range check required for 'option scheduler.refresh-interval 10sec'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.write-usage.exit-threshold' is deprecated, preferred is 'scheduler.alu.write-usage.exit-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.write-usage.exit-threshold 4'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.write-usage.entry-threshold' is deprecated, preferred is 'scheduler.alu.write-usage.entry-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.write-usage.entry-threshold 20'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.read-usage.exit-threshold' is deprecated, preferred is 'scheduler.alu.read-usage.exit-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.read-usage.exit-threshold 4'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.read-usage.entry-threshold' is deprecated, preferred is 'scheduler.alu.read-usage.entry-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.read-usage.entry-threshold 20'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.open-files-usage.exit-threshold' is deprecated, preferred is 'scheduler.alu.open-files-usage.exit-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:285:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.open-files-usage.exit-threshold 32'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.open-files-usage.entry-threshold' is deprecated, preferred is 'scheduler.alu.open-files-usage.entry-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:285:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.open-files-usage.entry-threshold 1024'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.disk-usage.exit-threshold' is deprecated, preferred is 'scheduler.alu.disk-usage.exit-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.disk-usage.exit-threshold 128MB'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.disk-usage.entry-threshold' is deprecated, preferred is 'scheduler.alu.disk-usage.entry-threshold', continuing with correction
[2010-08-09 14:31:07] D [xlator.c:317:_volume_option_value_validate] unify: no range check required for 'option scheduler.alu.disk-usage.entry-threshold 2GB'
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.order' is deprecated, preferred is 'scheduler.alu.order', continuing with correction
[2010-08-09 14:31:07] W [xlator.c:656:validate_xlator_volume_options] unify: option 'alu.limits.min-free-disk' is deprecated, preferred is 'scheduler.limits.min-free-disk', continuing with correction
[2010-08-09 14:31:07] D [unify.c:4379:init] unify: Child node count is 2
[2010-08-09 14:31:07] D [alu.c:145:alu_parse_options] alu: alu_init: order string: disk-usage
[2010-08-09 14:31:07] D [alu.c:197:alu_parse_options] alu: alu_init: = 2147483648,134217728
[2010-08-09 14:31:07] D [alu.c:145:alu_parse_options] alu: alu_init: order string: read-usage
[2010-08-09 14:31:07] D [alu.c:309:alu_parse_options] alu: alu_init: = 20,4
[2010-08-09 14:31:07] D [alu.c:145:alu_parse_options] alu: alu_init: order string: write-usage
[2010-08-09 14:31:07] D [alu.c:250:alu_parse_options] unify: alu_init: = 20,4
[2010-08-09 14:31:07] D [alu.c:145:alu_parse_options] alu: alu_init: order string: open-files-usage
[2010-08-09 14:31:07] D [alu.c:370:alu_parse_options] alu: alu.c->alu_init: = 1024,32
[2010-08-09 14:31:07] D [alu.c:145:alu_parse_options] alu: alu_init: order string: disk-speed-usage
[2010-08-09 14:31:07] D [alu.c:466:alu_init] alu: alu.limit.min-disk-free = 5
pending frames:

patchset: v3.0.4
signal received: 11
time of crash: 2010-08-09 14:31:07
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.4
/lib64/libc.so.6[0x33016302d0]
/usr/lib64/glusterfs/3.0.4/xlator/protocol/client.so(notify+0x228)[0x2b4c1e1ce6d8]
/usr/lib64/libglusterfs.so.0(xlator_notify+0x43)[0x2b4c1d7183e3]
/usr/lib64/glusterfs/3.0.4/xlator/cluster/unify.so(init+0x2e8)[0x2b4c1e3e7718]
/usr/lib64/libglusterfs.so.0(xlator_init+0x2b)[0x2b4c1d71821b]
/usr/lib64/libglusterfs.so.0(xlator_tree_init+0x69)[0x2b4c1d7182a9]
glusterfs(glusterfs_graph_init+0xc0)[0x403390]
glusterfs(main+0x9c1)[0x404211]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x330161d994]
glusterfs[0x402749]
---------
Segmentation fault (core dumped)
----------------------------------------
And if I look at the log files on the client, here is the output:
----------------------------------------
[root at appman glusterfs]# tail /var/log/glusterfs/glusterfs.log 
/usr/lib64/glusterfs/3.0.4/xlator/cluster/unify.so(init+0x2e8)[0x2ab98f215718]
/usr/lib64/libglusterfs.so.0(xlator_init+0x2b)[0x2ab98e54621b]
/usr/lib64/libglusterfs.so.0(xlator_tree_init+0x69)[0x2ab98e5462a9]
glusterfs(glusterfs_graph_init+0xc0)[0x403390]
glusterfs(main+0x9c1)[0x404211]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x330161d994]
glusterfs[0x402749]
---------
[2010-08-09 14:28:13] E [glusterfsd.c:202:gf_daemon] glusterfs: end of file- Inappropriate ioctl for device
[2010-08-09 14:28:13] E [glusterfsd.c:1314:main] glusterfs: unable to run in daemon mode: Inappropriate ioctl for device
----------------------------------------

And here is the log output, which is the same on Server 1 and Server 2:

----------------------------------------
[root at fs-lb1 glusterfs]# tail /var/log/glusterfs/glusterfsd.log 
 21: volume servers
 22:   type protocol/server
 23:   option transport-type tcp     # For TCP/IP transport
 24:   option transport.socket.listen-port 6999              # Default is 6996
 25:   subvolumes brick-ns
 26:   option auth.addr.brick-ns.allow * 		# access to "brick" volume
 27: end-volume

+------------------------------------------------------------------------------+
[2010-08-08 21:27:09] N [glusterfsd.c:1408:main] glusterfs: Successfully started
----------------------------------------
Please help me figure out how to unify the storage into a single storage pool, and please correct my configuration if there is a mistake in it.
Thank you very much in advance for your kind response.


Regards,

Christeddy


