[Gluster-devel] Optimization for proxy storage

Guido Smit guido at comlog.nl
Thu Dec 6 07:50:38 UTC 2007


Tsukasa,

I set up the same thing following the "GlusterFS High Availability Storage 
with GlusterFS" document, and the performance was really bad.
After moving the cluster and performance translators back into the client 
config, so the servers only export the raw posix bricks, performance was 
back to normal.

My server config:

volume brick
  type storage/posix                   # POSIX FS translator
  option directory /home/export/webroot        # Export this directory
end-volume

volume brick-ns
  type storage/posix
  option directory /home/export/namespace
end-volume

### Add network serving capability to the above bricks.
volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
  subvolumes brick brick-ns
  option auth.ip.brick.allow 192.168.1.* # Allow access to "brick" volume
  option auth.ip.brick-ns.allow 192.168.1.* # Allow access to "brick-ns" volume
end-volume
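
On each storage server, glusterfsd is started with this spec file. Something
like the following should do it (assuming the spec is saved as
/etc/glusterfs-server.vol; adjust the path to wherever you keep it):

glusterfsd -f /etc/glusterfs-server.vol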

My Client config:

### File: /etc/glusterfs-client.vol - GlusterFS Client Volume Specification

### Add client feature and attach to remote subvolume of server1
volume pnp-www1
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.1.159      # IP address of the remote brick
# option remote-port 6996              # default server port is 6996
  option remote-subvolume brick        # name of the remote volume
end-volume

### Add client feature and attach to remote subvolume of server2
volume pnp-www2
  type protocol/client
  option transport-type tcp/client     # for TCP/IP transport
  option remote-host 192.168.1.169      # IP address of the remote brick
# option remote-port 6996              # default server port is 6996
  option remote-subvolume brick        # name of the remote volume
end-volume

volume client-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.159
  option remote-subvolume brick-ns
end-volume

volume afr
  type cluster/afr
  subvolumes pnp-www1 pnp-www2
  option replicate *:2                 # keep two copies of every file
end-volume
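
Note that client-ns above points at 192.168.1.159 only, so the namespace
itself is a single point of failure. If you need failover for the namespace
too, a sketch along the same lines (assuming the second server also exports
a brick-ns volume) would be:

volume client-ns2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.1.169
  option remote-subvolume brick-ns
end-volume

volume afr-ns
  type cluster/afr
  subvolumes client-ns client-ns2
  option replicate *:2
end-volume

and then "option namespace afr-ns" instead of client-ns in unify0 below.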

### Add unify on top of the replicated afr volume. Associate an
### appropriate scheduler that matches your I/O demand.
volume unify0
  type cluster/unify
  subvolumes afr
  ### ** Round Robin (RR) Scheduler **
  option scheduler rr                  # unify needs a scheduler even with one subvolume
  ### ** define a namespace server, which is not listed in 'subvolumes'
  option namespace client-ns
end-volume

volume iot
  type performance/io-threads          # parallelize I/O across worker threads
  subvolumes unify0
  option thread-count 8
end-volume

volume wb
  type performance/write-behind        # aggregate small writes
  subvolumes iot
end-volume

volume ra
  type performance/read-ahead          # prefetch sequential reads
  subvolumes wb
end-volume

volume ioc
  type performance/io-cache            # cache read data on the client
  subvolumes ra
end-volume
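
With mostly small files, the write-behind and io-cache options are worth
experimenting with. A sketch of what I would try first (the values are
starting points to tune from, not measured numbers):

volume wb
  type performance/write-behind
  subvolumes iot
  option aggregate-size 128KB          # flush writes in bigger chunks
end-volume

volume ioc
  type performance/io-cache
  subvolumes ra
  option cache-size 256MB              # client-side read cache
  option page-size 128KB               # cache page granularity
end-volume

The client then mounts the topmost volume, something like (again assuming
the spec file path):

glusterfs -f /etc/glusterfs-client.vol /mnt/glusterfs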

Hope this helps you.

Guido


Tsukasa Morii wrote:
> Hello.
>
> I set up an environment with GlusterFS by following the site below, but
> disk performance is not good enough for web proxy storage. Could
> somebody tell me how to make the performance better?
>
> "GlusterFS High Availability Storage with GlusterFS"
> http://www.gluster.org/docs/index.php/GlusterFS_High_Availability_Storage_with_GlusterFS
>
> My goal and environment are as indicated below. Please let me know if
> other information is needed.
>
> [GOAL]
> - Three web proxy servers keep a lot of web contents (html, css, js,
> images) under the gluster file system.
> - Three servers can be used to build GlusterFS for storage.
> - Automatic failover is necessary.
>
> [ENVIRONMENT]
> - 3 web proxy servers (CPU: 2.4GHz, RAM: 1GB)
> - 3 gluster servers (CPU: 2.4GHz, RAM: 4GB)
> - Most contents are smaller than 50 KB.

-- 
Met vriendelijke groet,

Guido Smit
ComLog B.V.

Televisieweg 133
1322 BE Almere
T. 036 5470500
F. 036 5470481





