[Gluster-users] Running Gluster client/server on single process

Bryan McGuire bmcguire at newnet66.org
Mon Jun 21 00:56:49 UTC 2010


Tejas,

I have done the following in order to test with the 3.0.5 release
candidate. Please correct me if I am wrong.

Unmounted storage on both servers.
Stopped glusterfsd.
Downloaded glusterfs-3.0.5rc6.tar.gz from http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/
Extracted
./configure
make
make install
Started glusterfsd
Mounted storage on both servers.
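
For reference, here is a command-line sketch of those steps (the mount
point, install prefix, and volfile path are my assumptions; adjust to
your layout):

    # on both servers
    umount /mnt/glusterfs
    killall glusterfsd

    # build and install the release candidate
    wget http://ftp.gluster.com/pub/gluster/glusterfs/qa-releases/glusterfs-3.0.5rc6.tar.gz
    tar xzf glusterfs-3.0.5rc6.tar.gz
    cd glusterfs-3.0.5rc6
    ./configure
    make
    make install

    # restart the server, then remount (see the mount sketch further down)
    glusterfsd -f /etc/glusterfs/glusterfsd.vol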

Do I need to make any changes to my configuration files?

<<<<glusterfsd.vol>>>>

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

volume posix1
   type storage/posix
   option directory /fs/gluster
end-volume

volume locks1
     type features/locks
     subvolumes posix1
end-volume

volume brick1
     type performance/io-threads
     option thread-count 8
     subvolumes locks1
end-volume

volume server-tcp
     type protocol/server
     option transport-type tcp
     option auth.addr.brick1.allow *
     option transport.socket.listen-port 6996
     option transport.socket.nodelay on
     subvolumes brick1
end-volume
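
Since glusterfsd.vol pins the listen port to 6996, a quick sanity check
after restarting the server is to confirm something is listening there
(netstat flags vary by distro; this is just one way to look):

    netstat -tlnp | grep 6996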


<<<<glusterfs.vol>>>>

## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.16
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.15
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume mirror-0
     type cluster/replicate
     subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

#volume readahead
#    type performance/read-ahead
#    option page-count 4
#    subvolumes mirror-0
#end-volume

#volume iocache
#    type performance/io-cache
#    option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
#    option cache-timeout 1
#    subvolumes readahead
#end-volume
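# (Note: the backtick expression above sizes the cache to one fifth of
#  system RAM: MemTotal is reported in kB, and kB / 5120 = (kB/1024)/5 MB;
#  e.g. 4096000 kB of RAM -> 4096000 / 5120 = 800 -> cache-size 800MB.)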

volume quickread
     type performance/quick-read
     option cache-timeout 1
     option max-file-size 1024kB
    # subvolumes iocache
     subvolumes mirror-0
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 4MB
     subvolumes quickread
end-volume

volume statprefetch
     type performance/stat-prefetch
     subvolumes writebehind
end-volume
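
With the client volfile above, remounting on each server looks something
like this (the mount point and volfile path are assumptions; the mount
helper form should be equivalent on 3.0.x):

    glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs
    # or:
    mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs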



Bryan McGuire





On Jun 20, 2010, at 12:59 PM, Tejas N. Bhise wrote:

> Hi Bryan,
>
> 3.0.5 should be out soon. If you want to do some testing before it's  
> officially out, you can try the latest release candidate. You don't  
> need to patch at this stage. Let me know if you know how to get the  
> release candidates and use them.
>
> Regards,
> Tejas.
>
> ----- Original Message -----
> From: "Bryan McGuire" <bmcguire at newnet66.org>
> To: "Tejas N. Bhise" <tejas at gluster.com>
> Cc: "gluster-users" <gluster-users at gluster.org>
> Sent: Sunday, June 20, 2010 7:46:55 PM
> Subject: Re: [Gluster-users] Running Gluster client/server on single  
> process
>
> Tejas,
>
> Any idea when 3.0.5 will be released? I am very anxious for these
> patches to be in production.
>
> On another note, I am very new to Gluster, let alone Linux. Could you,
> or someone else, give me some guidance (a how-to) on applying the
> patches? I would like to test them for now.
>
> Bryan McGuire
>
>
> On May 19, 2010, at 8:06 AM, Tejas N. Bhise wrote:
>
>> Roberto,
>>
>> We recently made some code changes we think will considerably help
>> small file performance -
>>
>> selective readdirp - http://patches.gluster.com/patch/3203/
>> dht lookup revalidation optimization - http://patches.gluster.com/patch/3204/
>> updated write-behind default values - http://patches.gluster.com/patch/3223/
>>
>> These are tentatively scheduled to go into 3.0.5.
>> If it's possible for you, I would suggest you test them in a non-
>> production environment and see if it helps with the distribute config
>> itself.
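>>
>> For such a test, a rough sketch of pulling one of these patches into a
>> 3.0.4 source tree before building (the raw-download path and the -p1
>> strip level are assumptions about the patch tracker; adjust as needed):
>>
>>     cd glusterfs-3.0.4
>>     wget -O 3203.patch http://patches.gluster.com/patch/3203/raw/
>>     patch -p1 < 3203.patch
>>     ./configure && make && make install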
>>
>> Please do not use them in production; for that, wait for the release
>> these patches go into.
>>
>> Do let me know if you have any questions about this.
>>
>> Regards,
>> Tejas.
>>
>>
>> ----- Original Message -----
>> From: "Roberto Franchini" <ro.franchini at gmail.com>
>> To: "gluster-users" <gluster-users at gluster.org>
>> Sent: Wednesday, May 19, 2010 5:29:47 PM
>> Subject: Re: [Gluster-users] Running Gluster client/server on single
>> process
>>
>> On Sat, May 15, 2010 at 10:06 PM, Craig Carl <craig at gluster.com>
>> wrote:
>>> Robert -
>>>      NUFA has been deprecated and doesn't apply to any recent
>>> version of
>>> Gluster. What version are you running? ('glusterfs --version')
>>
>> We run 3.0.4 on ubuntu 9.10 and 10.04 server.
>> Is there a way to mimic NUFA behaviour?
>>
>> We are using gluster to store Lucene indexes. Indexes are created
>> locally from millions of small files and then copied to the storage.
>> I tried reading these little files from gluster but it was too slow.
>> So maybe a NUFA-like approach, e.g. preferring the local disk for
>> reads, could improve performance.
>> Let me know :)
>>
>> At the moment we use dht/replicate:
>>
>>
>> #CLIENT
>>
>> volume remote1
>> type protocol/client
>> option transport-type tcp
>> option remote-host zeus
>> option remote-subvolume brick
>> end-volume
>>
>> volume remote2
>> type protocol/client
>> option transport-type tcp
>> option remote-host hera
>> option remote-subvolume brick
>> end-volume
>>
>> volume remote3
>> type protocol/client
>> option transport-type tcp
>> option remote-host apollo
>> option remote-subvolume brick
>> end-volume
>>
>> volume remote4
>> type protocol/client
>> option transport-type tcp
>> option remote-host demetra
>> option remote-subvolume brick
>> end-volume
>>
>> volume remote5
>> type protocol/client
>> option transport-type tcp
>> option remote-host ade
>> option remote-subvolume brick
>> end-volume
>>
>> volume remote6
>> type protocol/client
>> option transport-type tcp
>> option remote-host athena
>> option remote-subvolume brick
>> end-volume
>>
>> volume replicate1
>> type cluster/replicate
>> subvolumes remote1 remote2
>> end-volume
>>
>> volume replicate2
>> type cluster/replicate
>> subvolumes remote3 remote4
>> end-volume
>>
>> volume replicate3
>> type cluster/replicate
>> subvolumes remote5 remote6
>> end-volume
>>
>> volume distribute
>> type cluster/distribute
>> subvolumes replicate1 replicate2 replicate3
>> end-volume
>>
>> volume writebehind
>> type performance/write-behind
>> option window-size 1MB
>> subvolumes distribute
>> end-volume
>>
>> volume quickread
>> type performance/quick-read
>> option cache-timeout 1         # default 1 second
>> #  option max-file-size 256KB        # default 64KB
>> subvolumes writebehind
>> end-volume
>>
>> ### Add io-threads for parallel requests
>> volume iothreads
>> type performance/io-threads
>> option thread-count 16 # default is 16
>> subvolumes quickread
>> end-volume
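>>
>> A NUFA-flavoured tweak that might be worth experimenting with here:
>> replicate in the 3.0.x line has a read-subvolume option, so on the box
>> that hosts a given brick you could pin reads to the local side. A
>> sketch, not tested config (option name as in 3.0.x cluster/replicate):
>>
>> volume replicate1
>> type cluster/replicate
>> option read-subvolume remote1   # prefer this subvolume for reads
>> subvolumes remote1 remote2
>> end-volume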
>>
>>
>> #SERVER
>>
>> volume posix
>> type storage/posix
>> option directory /data/export
>> end-volume
>>
>> volume locks
>> type features/locks
>> subvolumes posix
>> end-volume
>>
>> volume brick
>> type performance/io-threads
>> option thread-count 8
>> subvolumes locks
>> end-volume
>>
>> volume server
>> type protocol/server
>> option transport-type tcp
>> option auth.addr.brick.allow *
>> subvolumes brick
>> end-volume
>> -- 
>> Roberto Franchini
>> http://www.celi.it
>> http://www.blogmeter.it
>> http://www.memesphere.it
>> Tel +39.011.562.71.15
>> jabber:ro.franchini at gmail.com skype:ro.franchini
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


