[Gluster-devel] excessive inode consumption in namespace?

Rhesa Rozendaal gluster at rhesa.com
Mon Jul 2 15:31:40 UTC 2007


I'm having a bit of a puzzle here.

I've set up a one-server, one-client test. The server exports 4 bricks and a 
namespace volume, and the client unifies those (see specs at the bottom).
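
For reference, this is roughly how both sides are started; the spec file
paths and the mount point below are just the ones used here, nothing special:

# glusterfsd -f /etc/glusterfs/server.vol
# glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs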

The namespace directory actually sits on the first partition, so I would 
expect that partition to show roughly twice the number of consumed inodes 
compared to the other partitions. But what I'm seeing instead is threefold 
consumption:

# df -i
Filesystem              Inodes   IUsed     IFree IUse% Mounted on
/dev/etherd/e3.1     488407040 1603131 486803909    1% /mnt/e31
/dev/etherd/e0.3     366313472 1603130 364710342    1% /mnt/e03
/dev/etherd/e0.1     488407040 4821312 483585728    1% /mnt/e01
/dev/etherd/e0.2     488407040 1603130 486803910    1% /mnt/e02

# df -H
Filesystem             Size Used  Avail Use% Mounted on
/dev/etherd/e3.1       2.0T    97G   1.8T   6% /mnt/e31
/dev/etherd/e0.3       1.5T    97G   1.3T   7% /mnt/e03
/dev/etherd/e0.1       2.0T   101G   1.8T   6% /mnt/e01
/dev/etherd/e0.2       2.0T    96G   1.8T   6% /mnt/e02

As far as I can tell, the directory structures of /mnt/e01/gfs and /mnt/e01/ns 
are identical, so I'd expect about 3.2M inodes used instead of the 4.8M reported.
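
To sanity-check that, I can count entries directly on the underlying 
filesystem and compare with df. This is only a rough check, since it ignores 
hard links and any bookkeeping files the translators might create:

# find /mnt/e01/gfs | wc -l
# find /mnt/e01/ns | wc -l
# df -i /mnt/e01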

Any thoughts on what could cause this, and what I can do to prevent it?

Thanks in advance,

Rhesa Rozendaal
ExposureManager.com


### server spec
volume brick01
   type storage/posix
   option directory /mnt/e01/gfs
end-volume

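# namespace volume; note it lives on the same partition (/mnt/e01) as brick01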
volume ns
   type storage/posix
   option directory /mnt/e01/ns
end-volume

volume brick02
   type storage/posix
   option directory /mnt/e02/gfs
end-volume

volume brick03
   type storage/posix
   option directory /mnt/e03/gfs
end-volume

volume brick31
   type storage/posix
   option directory /mnt/e31/gfs
end-volume

volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes ns brick01 brick02 brick03 brick31
   option auth.ip.ns.allow *
   option auth.ip.brick01.allow *
   option auth.ip.brick02.allow *
   option auth.ip.brick03.allow *
   option auth.ip.brick31.allow *
end-volume
### end server spec

### client spec
volume ns
   type protocol/client
   option transport-type tcp/client
   option remote-host nfs-deb-03
   option remote-subvolume ns
end-volume

volume client01
   type protocol/client
   option transport-type tcp/client
   option remote-host nfs-deb-03
   option remote-subvolume brick01
end-volume

volume client02
   type protocol/client
   option transport-type tcp/client
   option remote-host nfs-deb-03
   option remote-subvolume brick02
end-volume

volume client03
   type protocol/client
   option transport-type tcp/client
   option remote-host nfs-deb-03
   option remote-subvolume brick03
end-volume

volume client31
   type protocol/client
   option transport-type tcp/client
   option remote-host nfs-deb-03
   option remote-subvolume brick31
end-volume

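# unify the four data bricks, with "ns" providing the shared namespace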
volume export
   type cluster/unify
   subvolumes client01 client02 client03 client31
   option namespace ns
   option scheduler alu
   option alu.limits.min-free-disk 1GB
   option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
end-volume

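# client-side performance translators, stacked: write-behind -> read-ahead -> io-threads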
volume writeback
   type performance/write-behind
   option aggregate-size 131072
   subvolumes export
end-volume

volume readahead
   type performance/read-ahead
   option page-size 65536
   option page-count 16
   subvolumes writeback
end-volume

volume iothreads
   type performance/io-threads
   option thread-count 8
   subvolumes readahead
end-volume
### end client spec





