[Gluster-users] Ideal GlusterFS Setup/config?
Raghavendra G
raghavendra.hg at gmail.com
Mon Aug 11 03:31:51 UTC 2008
Hi sal,
Some of the optimizations that can be performed are:
* Since the files are almost static, a read-ahead translator can be loaded
on the client side, which will boost the efficiency of reading files (see the
sketch below).
* If your systems are SMP or multicore, you can load an io-threads translator
with as many threads as you have cores or processors, which will let multiple
filesystem operations run in parallel (see the sketch below).
regards,
On Sun, Aug 10, 2008 at 8:31 AM, sal poliandro <popsikle at gmail.com> wrote:
> Hello all!
>
> Sorry in advance for the noob questions to follow.
>
> I have implemented Gluster over private GigE to sync files between my two
> web servers. The plan is to eventually add more webservers and set these
> two boxes up as dedicated GlusterFS boxes, but for now they also need to
> serve the content they are duplicating. I am noticing a small lag when we
> have more than a few hundred active visitors to the site, and I need to figure
> out where it's coming from before football season officially gets into swing,
> when we will have 8000+ unique visitors at any given time. I am currently
> exporting /dev/sda7 as /mnt/gluster and then mounting it with the client on
> /home. I am wondering if I am doing it in the most efficient way and
> what io/thread settings I should look into. Each of these servers has 4GB
> of ram, with plenty of extra memory for cache if needed. The files don't
> change very often (in fact really only when a new image or avatar is
> uploaded via the forums) but I don't really know how to tweak gluster yet.
> Here are my current configs; they are exactly the same on both servers.
>
> [root at web2 glusterfs]# cat glusterfs-server.vol
> volume brick
> type storage/posix
> option directory /mnt/gluster
> end-volume
>
> volume server
> type protocol/server
> option transport-type tcp/server
> subvolumes brick
> option auth.ip.brick.allow 172.16.1.18* # Allow access to brick
> end-volume
>
>
> [root at web2 glusterfs]# cat glusterfs-client.vol
> volume brick1
> type protocol/client
> option transport-type tcp/client # for TCP/IP transport
> option remote-host 172.16.1.181 # IP address of server1
> option remote-subvolume brick # name of the remote volume on server1
> option transport-timeout 10
> end-volume
>
> volume brick2
> type protocol/client
> option transport-type tcp/client # for TCP/IP transport
> option remote-host 172.16.1.182 # IP address of server2
> option remote-subvolume brick # name of the remote volume on server2
> option transport-timeout 10
> end-volume
>
> volume afr
> type cluster/afr
> subvolumes brick1 brick2
> end-volume
>
> Any help/suggestions would be very much appreciated!
>
>
> --
> Salvatore "Popsikle" Poliandro
> Founder - CaffeineLAN.net
>
> Wanna help the LAN?
>
>
>
--
Raghavendra G
A centipede was happy quite, until a toad in fun,
Said, "Prey, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous