[Gluster-devel] Some gluster questions

Reinis Rozitis r at roze.lv
Tue Apr 1 11:27:15 UTC 2008


Hello,
we are playing around with Gluster and some technical and theoretical 
questions came up:

1. Is it wise to let gluster do the local striping over disks/bricks?

We wanted to use ZFS (for its simplified management and nice features), but 
as there are issues installing OpenSolaris on an HP DL380 (the OS doesn't 
support installing from USB through iLO), we ended up making a bunch of 
RAID1 (mirror) volumes and mounting them together with Gluster unify. But I 
am not sure which way would be better: to make one large RAID 6/10 volume 
(I personally don't like hardware RAIDs that involve more than six disks), 
to bind the mirrors together with LVM and then export through Gluster, or 
to use it this way?
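For comparison, with LVM concatenating the mirrors underneath, the Gluster 
side would collapse to a single posix brick (a sketch only; /data/lvm is a 
hypothetical mountpoint for the combined logical volume):

```
volume data
  type storage/posix
  option directory /data/lvm
end-volume
```

The trade-off seems to be that unify keeps whole files on individual 
mirrors (so losing one mirror only loses that brick's files), while LVM 
gives one big filesystem but ties the fate of all the mirrors together.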


2. What is the correct way of adding a new local brick to an (already 
existing/filled) unify storage?

I have defined a new brick volume, added it to the unify subvolumes, and 
restarted both the client and the server, but the ALU scheduler still 
doesn't create any new files on the fresh brick.


The unify config:

volume c1d1
  type storage/posix
  option directory /data/raw_c1d1
end-volume

....
....

volume unify
  type cluster/unify
  subvolumes c1d1 c1d2 c1d3 c1d4 c1d5 c1d6 c1d7 c1d8 c2d0 c2d1 c2d2 c2d3 c2d4 c2d5 c2d6
  option namespace ns
  option scheduler alu
  option alu.limits.min-free-disk  5%
  option alu.limits.max-open-files 10000
  option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
  option alu.disk-usage.entry-threshold 10GB
  option alu.disk-usage.exit-threshold  60MB
  option alu.open-files-usage.entry-threshold 1024
  option alu.open-files-usage.exit-threshold 32
  option alu.stat-refresh.interval 10sec
end-volume


Could it be that the alu.disk-usage.entry-threshold is too high, and the 
new volume doesn't get used until there is a 10GB difference to the others? 
(I probably have to play around first rather than asking questions.)
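If the threshold is the cause, lowering it should make the ALU scheduler 
start picking the fresh brick much sooner (the 1GB below is just a guess at 
a more sensible value, not a recommendation from the docs):

```
  option alu.disk-usage.entry-threshold 1GB
  option alu.disk-usage.exit-threshold  60MB
```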



3. Is there a way to mount the volume locally without using TCP?

We are transferring files from an older disk storage box, and setting up 
the FUSE kernel module and the Gluster client there is problematic. So we 
have mounted the exported Gluster volume back on the new storage server 
itself and are transferring the files to that mountpoint (tar/ssh).
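The tar pipe we use looks roughly like this (host and paths below are 
placeholders, not our real setup; the local pipe just demonstrates the 
mechanism):

```shell
# Over ssh from the old box (placeholder host/paths):
#   ssh user@oldbox 'tar cf - -C /data .' | tar xf - -C /mnt/gluster
# The same pipe, demonstrated locally with temporary directories:
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file.txt"
tar cf - -C "$src" . | tar xf - -C "$dst"
cat "$dst/file.txt"    # prints "hello"
```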

The client config on the new box is just:

volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume storage
end-volume


As I have seen people (and the docs) say that you can load the translators 
anywhere: is it possible to just copy all the volumes over to the client 
config (without the "protocol/server" volume) and "mount" the unify 
partition without 'glusterfsd' at all?
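In other words, something like the following single-process spec (a sketch 
only; I have not verified that unify and the namespace brick behave 
correctly when loaded client-side like this):

```
# Hypothetical server-less client spec: posix bricks and unify are
# loaded directly in the glusterfs client process, so no TCP hop.
volume c1d1
  type storage/posix
  option directory /data/raw_c1d1
end-volume

# ... one posix volume per brick, plus the namespace volume ...

volume unify
  type cluster/unify
  subvolumes c1d1 ...     # plus the other bricks
  option namespace ns
  option scheduler alu
end-volume
```

mounted with something like: glusterfs -f /etc/glusterfs/local.vol 
/mnt/gluster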


In this scenario (mounting locally over TCP), should the performance 
translators like io-threads / write-behind also be loaded on the client, or 
is it enough to have them only on the server (as they are the same box 
anyway)? Should the translators be on both client and server when they are 
different boxes?
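For reference, loading them on the client side would look something like 
this (the option names and values follow the 1.3-style examples I have 
seen and are guesses, not tested):

```
volume remote
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume storage
end-volume

volume wb
  type performance/write-behind
  option aggregate-size 1MB    # guessed value
  subvolumes remote
end-volume

volume iot
  type performance/io-threads
  option thread-count 4        # guessed value
  subvolumes wb
end-volume
```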


wbr
Reinis Rozitis





