[Gluster-devel] Re: FW: IO Errors
Anand Avati
avati at zresearch.com
Wed Feb 27 03:02:03 UTC 2008
Scott,
Can you give the output of glusterfs --version? Have you tried
glusterfs-1.3.8pre1?
avati
2008/2/27, Scott McNally <smcnally at pensaworks.com>:
>
> Tried to send this to the mailing list but it denied me.
>
>
>
> *From:* Scott McNally [mailto:smcnally at pensaworks.com]
> *Sent:* Tuesday, February 26, 2008 5:01 PM
> *To:* 'gluster-devel at nongnu.org'
> *Subject:* IO Errors
>
>
>
> I am running into IOError exceptions when accessing the cluster through
> Mono. This is a heavily multithreaded app that reads approximately 4 KB
> from the head of each file, visiting the files in a random order.
>
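The failing access pattern can be approximated with a short Python sketch (the real app is in Mono/C#; the file count, thread count, and temp-directory paths here are my own stand-ins, not from the report). Pointing it at files on the GlusterFS mount instead of a temp directory would make it a minimal reproducer:

```python
import os, random, tempfile, threading

# Stand-in for the Mono app: several threads each read the first ~4 KB
# of every file, visiting the files in a random order.
def read_heads(paths, errors):
    order = list(paths)
    random.shuffle(order)          # random order, as in the real workload
    for p in order:
        try:
            with open(p, "rb") as f:
                f.read(4096)       # ~4 KB from the head of the file
        except IOError as e:
            errors.append((p, e))  # the exceptions being chased

# Create sample files (on the GlusterFS mount in a real test).
tmp = tempfile.mkdtemp()
paths = []
for i in range(20):
    p = os.path.join(tmp, "file%d" % i)
    with open(p, "wb") as f:
        f.write(b"x" * 8192)
    paths.append(p)

errors = []
threads = [threading.Thread(target=read_heads, args=(paths, errors))
           for _ in range(8)]      # 8 threads, an arbitrary choice
for t in threads:
    t.start()
for t in threads:
    t.join()
print("IO errors:", len(errors))   # 0 on local disk; nonzero would reproduce the bug
```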
>
>
> Nothing appears in glusterfs.log when the errors happen.
>
>
>
> Any suggestions?
>
>
>
>
>
> I have the following Setup:
>
>
>
> 21 servers.
>
>
>
> All running the following config file.
>
>
>
> Server:
>
> volume base
>   type storage/posix
>   option directory /storage
> end-volume
>
> volume brick-ns
>   type storage/posix
>   option directory /storage-ns
> end-volume
>
> # now let's do some performance stuff
> volume brick
>   type performance/io-threads
>   option thread-count 8
>   option cache-size 32MB
>   subvolumes base
> end-volume
>
> ## add network serving capability to the brick above
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   subvolumes brick brick-ns
>   option auth.ip.brick.allow 192.168.7.*     # allow access to brick volume
>   option auth.ip.brick-ns.allow 192.168.7.*  # etc.
> end-volume
>
>
>
>
>
> Client config:
>
> volume brick1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.7.68
>   option remote-subvolume brick
> end-volume
>
> volume brick2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.7.69
>   option remote-subvolume brick
> end-volume
>
> …
>
> volume brick21
>
> # now we use the first 3 blades for namespace
>
> volume brick-ns1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.7.68
>   option remote-subvolume brick-ns
> end-volume
>
> volume brick-ns2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.7.69
>   option remote-subvolume brick-ns
> end-volume
>
> volume brick-ns3
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.7.70
>   option remote-subvolume brick-ns
> end-volume
>
>
>
>
>
> Then an AFR for every 3 bricks, like so:
>
>
>
> # set up the AFRs (3 replicas)
> volume afr1
>   type cluster/afr
>   subvolumes brick1 brick2 brick3
> end-volume
>
> volume afr-ns
>   type cluster/afr
>   subvolumes brick-ns1 brick-ns2 brick-ns3
> end-volume
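For 21 bricks, the pattern above repeats seven times (afr1 through afr7). A short illustrative script (not part of the original setup; the brickN/afrN names just follow the convention in this volfile) can emit the stanzas:

```python
# Emit cluster/afr stanzas grouping 21 client bricks into 7 triples,
# matching the brick1..brick21 / afr1..afr7 naming used above.
def afr_stanzas(num_bricks=21, replicas=3):
    out = []
    for g in range(num_bricks // replicas):
        subs = " ".join("brick%d" % (g * replicas + i + 1)
                        for i in range(replicas))
        out.append("volume afr%d\n"
                   "  type cluster/afr\n"
                   "  subvolumes %s\n"
                   "end-volume\n" % (g + 1, subs))
    return "\n".join(out)

print(afr_stanzas())
```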
>
>
>
>
>
> # unify all this into one big happy family
> volume unify
>   type cluster/unify
>   option namespace afr-ns
>   option scheduler alu
>   option alu.limits.min-free-disk 5%
>   option alu.limits.max-open-files 1000
>   option alu.order read-usage:disk-usage:write-usage:open-files-usage:disk-speed-usage
>   option alu.disk-usage.entry-threshold 1GB
>   option alu.disk-usage.exit-threshold 200MB
>   option alu.open-files-usage.entry-threshold 200
>   option alu.open-files-usage.exit-threshold 32
>   option alu.read-usage.entry-threshold 20%
>   option alu.read-usage.exit-threshold 4%
>   option alu.write-usage.entry-threshold 20%
>   option alu.write-usage.exit-threshold 4%
>   option alu.stat-refresh.interval 15sec
>   subvolumes afr1 afr2 afr3 afr4 afr5 afr6 afr7
> end-volume
>
> volume wb
>   type performance/write-behind
>   option aggregate-size 128KB
>   option flush-behind on
>   subvolumes unify
> end-volume
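A side note on the entry/exit pairs in the unify stanza: as I understand the ALU scheduler (my reading, not stated in this thread), each pair acts as a hysteresis band; a metric that crosses its entry threshold keeps influencing scheduling until it falls back below the exit threshold. A toy model of that behavior:

```python
# Toy hysteresis model of an ALU-style entry/exit threshold pair.
# Illustrative only; the class and semantics are my assumption, not
# code from GlusterFS.
class Hysteresis:
    def __init__(self, entry, exit_):
        self.entry, self.exit_ = entry, exit_
        self.active = False          # True = metric is "over budget"

    def update(self, value):
        if not self.active and value > self.entry:
            self.active = True       # crossed the entry threshold
        elif self.active and value < self.exit_:
            self.active = False      # fell back below the exit threshold
        return self.active

h = Hysteresis(entry=200, exit_=32)  # the open-files thresholds above
states = [h.update(v) for v in (10, 250, 100, 50, 20, 40)]
print(states)
```

Note how the metric stays "active" at 100 and 50 even though both are below the entry threshold of 200; only dropping below 32 clears it.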
>
>
>
> I am running the current package for Fedora Core 8.
>
> It shows a build date of 2/3/2008 and version 1.3.8.
>
>
>
>
>
--
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.