[Gluster-users] 2.0.8: server on tmpfs for small files
Jeremy Enos
jenos at ncsa.uiuc.edu
Sat Nov 20 02:11:28 UTC 2010
For an experiment, I tried putting a loopback filesystem on top of the
tmpfs filesystem, and then exporting that one w/ Gluster. It seems to
have worked for any single client, though I haven't gotten IOR to work
against it w/ MPI yet. I'm expecting performance to stink, but we'll see.
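
In case it helps anyone trying the same thing, the setup was roughly
this (a minimal sketch; the sizes and paths are made up):

mount -t tmpfs -o size=8g tmpfs /mnt/ram    # RAM-backed staging area
dd if=/dev/zero of=/mnt/ram/brick.img bs=1M count=7168
mke2fs -F /mnt/ram/brick.img                # ext2 supports xattrs; -F since it's a plain file
mkdir -p /mnt/ost
mount -o loop,user_xattr /mnt/ram/brick.img /mnt/ost   # export this dir w/ Gluster
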
Jeremy
On 11/19/2010 7:12 PM, Jeremy Enos wrote:
> Looks like RAMFS has the same issue TMPFS does... I'm looking into
> RNA Networks. Thanks-
>
> Jeremy
>
> On 11/18/2010 6:55 PM, Craig Carl wrote:
>> On 11/18/2010 04:33 PM, Jeremy Enos wrote:
>>> Post is almost a year old... ever any response here? Is it
>>> possible to export tmpfs locations w/ gluster?
>>> thx-
>>>
>>> Jeremy
>>>
>>> On 12/1/2009 8:14 AM, Alexander Beregalov wrote:
>>>> Hi
>>>>
>>>> Is it possible to start the server on tmpfs?
>>>> It has been announced that stripe can be used over tmpfs, but stripe
>>>> is a client-side plugin, and the server cannot start on tmpfs because
>>>> of the lack of xattrs.
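>>>>
>>>> (A quick way to check whether a filesystem supports the extended
>>>> attributes Gluster needs; the path is just an example:)
>>>>
>>>> touch /mnt/ram/t
>>>> setfattr -n user.test -v 1 /mnt/ram/t   # 'Operation not supported' on tmpfs
>>>> getfattr -n user.test /mnt/ram/t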
>>>>
>>>> I am trying to set up a small, fast storage area for small files
>>>> (for compiling).
>>>> I made ext2 filesystems with xattr support on ramdisks on 4 hosts,
>>>> joined them with the replicate plugin, and mounted the volume on one
>>>> client. io-cache, write-behind, quick-read, and io-threads were also
>>>> used on the client side.
>>>> When I compiled the Linux kernel, performance was 10 times worse than
>>>> tmpfs exported over NFS from one node.
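>>>>
>>>> (For comparison, the single-node NFS baseline was just something like
>>>> this; the export options here are a guess:)
>>>>
>>>> mount -t tmpfs tmpfs /srv/ram
>>>> exportfs -o rw,async,no_root_squash client1:/srv/ram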
>>>>
>>>> Any ideas?
>>>>
>>>> Servers:
>>>>
>>>> volume posix
>>>> type storage/posix # POSIX FS translator
>>>> option directory /mnt/ost # Export this directory
>>>> end-volume
>>>>
>>>> volume locks
>>>> type features/locks
>>>> option mandatory-locks on
>>>> subvolumes posix
>>>> end-volume
>>>>
>>>> volume brick
>>>> type performance/io-threads
>>>> option thread-count 4 # Four CPUs
>>>> subvolumes locks
>>>> end-volume
>>>>
>>>> volume server
>>>> type protocol/server
>>>> option transport-type tcp
>>>> option transport.socket.nodelay on
>>>> subvolumes brick
>>>> option auth.addr.brick.allow * # Allow access to "brick" volume
>>>> end-volume
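>>>>
>>>> (Note: /mnt/ost must sit on an xattr-capable filesystem; on the
>>>> ramdisks that means something like the following before starting the
>>>> server:)
>>>>
>>>> mke2fs /dev/ram0
>>>> mount -o user_xattr /dev/ram0 /mnt/ost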
>>>>
>>>>
>>>> Client:
>>>>
>>>> volume server1
>>>> type protocol/client
>>>> option transport-type tcp
>>>> option remote-host <IP>
>>>> option transport.socket.nodelay on
>>>> option remote-subvolume brick # name of the remote volume
>>>> end-volume
>>>>
>>>> <the same for server[2-4]>
>>>>
>>>> volume replicated
>>>> type cluster/replicate
>>>> subvolumes server1 server2 server3 server4
>>>> end-volume
>>>>
>>>> volume iocache
>>>> type performance/io-cache
>>>> option cache-size 1000MB # default is 32MB
>>>> option priority *.h:3,*.o:2,*:1 # default is '*:0'
>>>> option cache-timeout 1 # default is 1 second
>>>> subvolumes replicated
>>>> end-volume
>>>>
>>>> volume writeback
>>>> type performance/write-behind
>>>> option cache-size 500MB # default is equal to aggregate-size
>>>> option flush-behind off # default is 'off'
>>>> subvolumes iocache
>>>> end-volume
>>>>
>>>> volume quickread
>>>> type performance/quick-read
>>>> option cache-timeout 1 # default 1 second
>>>> option max-file-size 256KB # default is 64KB
>>>> subvolumes writeback # stack quick-read on top of write-behind
>>>> end-volume
>>>>
>>>> volume iothreads
>>>> type performance/io-threads
>>>> option thread-count 16 # default is 16
>>>> subvolumes quickread
>>>> end-volume
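>>>>
>>>> (mounted on the client with something like:)
>>>>
>>>> glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs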
>>
>> Jeremy -
>> xattrs are required for any Gluster setup regardless of the volume
>> design, and tmpfs doesn't support user xattrs. RamFS works well. If
>> you have the budget, Fusion-io cards are very fast and work well with
>> Gluster, as does the solution from RNA Networks
>> (http://www.rnanetworks.com/).
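>>
>> (e.g., to try a ramfs brick; the mount point is an example, and it is
>> worth verifying xattr support before pointing Gluster at it:)
>>
>> mount -t ramfs ramfs /mnt/rambrick
>> setfattr -n trusted.test -v 1 /mnt/rambrick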
>>
>> Thanks,
>>
>> Craig
>>
>> --
>> Craig Carl
>> Senior Systems Engineer
>> Gluster