[Gluster-users] Running Gluster client/server on single process

Roberto Franchini ro.franchini at gmail.com
Wed May 19 11:59:47 UTC 2010


On Sat, May 15, 2010 at 10:06 PM, Craig Carl <craig at gluster.com> wrote:
> Robert -
>       NUFA has been deprecated and doesn't apply to any recent version of
> Gluster. What version are you running? ('glusterfs --version')

We run 3.0.4 on Ubuntu 9.10 and 10.04 server.
Is there a way to mimic NUFA behaviour?

We are using Gluster to store Lucene indexes. The indexes are built
locally from millions of small files and then copied to the storage.
I tried reading these little files straight from Gluster, but it was too slow.
So maybe a NUFA-like approach, e.g. preferring the local disk for reads,
could improve performance (see the sketch after the volfiles below).
Let me know :)

At the moment we use dht/replicate:


#CLIENT

volume remote1
 type protocol/client
 option transport-type tcp
 option remote-host zeus
 option remote-subvolume brick
end-volume

volume remote2
 type protocol/client
 option transport-type tcp
 option remote-host hera
 option remote-subvolume brick
end-volume

volume remote3
 type protocol/client
 option transport-type tcp
 option remote-host apollo
 option remote-subvolume brick
end-volume

volume remote4
 type protocol/client
 option transport-type tcp
 option remote-host demetra
 option remote-subvolume brick
end-volume

volume remote5
 type protocol/client
 option transport-type tcp
 option remote-host ade
 option remote-subvolume brick
end-volume

volume remote6
 type protocol/client
 option transport-type tcp
 option remote-host athena
 option remote-subvolume brick
end-volume

volume replicate1
 type cluster/replicate
 subvolumes remote1 remote2
end-volume

volume replicate2
 type cluster/replicate
 subvolumes remote3 remote4
end-volume

volume replicate3
 type cluster/replicate
 subvolumes remote5 remote6
end-volume

volume distribute
 type cluster/distribute
 subvolumes replicate1 replicate2 replicate3
end-volume

volume writebehind
 type performance/write-behind
 option window-size 1MB
 subvolumes distribute
end-volume

volume quickread
 type performance/quick-read
 option cache-timeout 1         # default 1 second
#  option max-file-size 256KB        # default 64KB
 subvolumes writebehind
end-volume

### Add io-threads for parallel requests
volume iothreads
 type performance/io-threads
 option thread-count 16 # default is 16
 subvolumes quickread
end-volume


#SERVER

volume posix
 type storage/posix
 option directory /data/export
end-volume

volume locks
 type features/locks
 subvolumes posix
end-volume

volume brick
 type performance/io-threads
 option thread-count 8
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume
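
What I have in mind is roughly the following, a minimal sketch: assuming
cluster/replicate in 3.0.x still honours the read-subvolume option, each
node would carry its own client volfile that points reads at its local
brick. For example, on zeus (whose brick is remote1) the first replicate
would become:

volume replicate1
 type cluster/replicate
 option read-subvolume remote1   # prefer the local brick for reads when it is in sync
 subvolumes remote1 remote2
end-volume

The same change, with the matching local subvolume, would go on hera,
apollo, demetra, ade and athena; writes would still go to both replicas
as usual. Does that sound right?
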
-- 
Roberto Franchini
http://www.celi.it
http://www.blogmeter.it
http://www.memesphere.it
Tel +39.011.562.71.15
jabber:ro.franchini at gmail.com skype:ro.franchini


