[Gluster-users] mod_glusterfs

jvanwanrooy at chatventure.nl jvanwanrooy at chatventure.nl
Sun May 24 13:40:34 UTC 2009


Hi, 

At the moment I'm testing GlusterFS 2.0.0 for use in the storage cluster of our "Joint Product eXperience" product. 
I have set up two storage servers that replicate each other. The client consists of two parts: lighttpd with mod_glusterfs for serving files to the world, and a mount point that our application saves files to. 

The config files look like this: 
##################################### 
### GlusterFS Server Volume File ## 
##################################### 

### Export volume "posix1" with the contents of "/home/jpx/glustervolume" directory. 
volume posix1 
  type storage/posix 
  option directory /home/jpx/glustervolume 
end-volume 

volume locks 
  type features/locks 
  subvolumes posix1 
end-volume 

volume brick 
  type performance/io-threads 
  option thread-count 8 
  subvolumes locks 
end-volume 

volume server 
  type protocol/server 
  option transport-type tcp 
  option auth.login.brick.allow xxxx 
  option auth.login.dikkestorageuser.password xxxx 
  subvolumes brick 
end-volume 
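As a side note on the io-threads section above: the translator dispatches each file operation to a pool of worker threads (here `option thread-count 8`) so that slow disk I/O does not stall the server's main loop. A rough conceptual sketch of that dispatch model (plain Python, not GlusterFS internals):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch only (not GlusterFS code): performance/io-threads with
# "option thread-count 8" behaves like a pool of 8 workers that blocking
# file operations are handed to, keeping the main loop responsive.
pool = ThreadPoolExecutor(max_workers=8)

def read_op(path, offset, size):
    # a blocking file operation that runs on a worker thread
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

# stand-in for a file on the brick
fd, path = tempfile.mkstemp()
os.write(fd, b"hello from the brick")
os.close(fd)

future = pool.submit(read_op, path, 6, 4)  # dispatched, not run inline
result = future.result()                   # -> b"from"
pool.shutdown()
os.remove(path)
```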

##################################### 
### GlusterFS Client Volume File ## 
##################################### 

### Add client feature and attach to remote subvolume of server1 
volume brick1 
  type protocol/client 
  option transport-type tcp 
  option remote-host xxx.xxx.xxx.xxx 
  option remote-subvolume locks 
  option username xxxx 
  option password xxxx 
end-volume 

### Add client feature and attach to remote subvolume of server2 
volume brick2 
  type protocol/client 
  option transport-type tcp 
  option remote-host xxx.xxx.xxx.xxx 
  option remote-subvolume locks 
  option username xxxx 
  option password xxxx 
end-volume 

volume afr 
  type cluster/replicate 
  subvolumes brick1 brick2 

  option data-self-heal on 
  option metadata-self-heal on 
  option entry-self-heal on 

  option data-change-log on 
  option metadata-change-log on 
  option entry-change-log on 

  option data-lock-server-count 2 
  option metadata-lock-server-count 2 
  option entry-lock-server-count 2 
end-volume 
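For context on the replicate section: cluster/replicate (AFR) fans every write out to all subvolumes and serves a read from one healthy replica; the self-heal options repair a replica that missed updates. A toy illustration of that write/read fan-out (hypothetical Python, not AFR itself):

```python
import os
import tempfile

# Toy illustration (not GlusterFS code): cluster/replicate sends every
# write to all subvolumes (brick1, brick2) and serves reads from one.
bricks = [tempfile.mkdtemp(prefix="brick1-"), tempfile.mkdtemp(prefix="brick2-")]

def replicated_write(name, data):
    for brick in bricks:  # write goes to every replica
        with open(os.path.join(brick, name), "wb") as f:
            f.write(data)

def replicated_read(name):
    # read is served from the first replica that has the file
    for brick in bricks:
        path = os.path.join(brick, name)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return f.read()

replicated_write("hello.txt", b"replicated")
data = replicated_read("hello.txt")

# both replicas hold identical copies -- this is the state that
# self-heal restores when one brick misses a write
copies = [open(os.path.join(b, "hello.txt"), "rb").read() for b in bricks]
```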

##################################### 
### Lighttpd include file for gluster ## 
##################################### 
$HTTP["host"] == "gluster3.vh1.royalfish.nl" { 
  glusterfs.document-root = "" 
  glusterfs.prefix = "" 
  glusterfs.logfile = "/var/log/glusterfs/lighttpd.log" 
  glusterfs.volume-specfile = "/etc/glusterfs/client-volume.vol" 
  glusterfs.loglevel = "debug" 
  glusterfs.cache-timeout = 300 
  glusterfs.xattr-interface-size-limit = "65536" 
} 

I have a few problems: 

    1. The /var/log/glusterfs/lighttpd.log file gives a few warnings/errors: 

       2009-05-24 15:18:44 W [client-protocol.c:6783:init] brick1: WARNING: Failed to set 'ulimit -n 1M': Operation not permitted 
       2009-05-24 15:18:44 E [client-protocol.c:6791:init] brick1: Failed to set max open fd to 64k: Operation not permitted 
       2009-05-24 15:18:44 W [client-protocol.c:6783:init] brick2: WARNING: Failed to set 'ulimit -n 1M': Operation not permitted 
       2009-05-24 15:18:44 E [client-protocol.c:6791:init] brick2: Failed to set max open fd to 64k: Operation not permitted 
       2009-05-24 15:18:44 E [name.c:420:client_bind] brick1: cannot bind inet socket (11) to port less than 1024 (Permission denied) 
       2009-05-24 15:18:44 E [name.c:420:client_bind] brick1: cannot bind inet socket (12) to port less than 1024 (Permission denied) 
       2009-05-24 15:18:44 E [name.c:420:client_bind] brick2: cannot bind inet socket (13) to port less than 1024 (Permission denied) 
       2009-05-24 15:18:44 E [name.c:420:client_bind] brick2: cannot bind inet socket (14) to port less than 1024 (Permission denied) 
       2009-05-24 15:18:44 E [client-protocol.c:3733:client_lookup] brick1: LOOKUP 1/zzz (/zzz): failed to get remote inode number for parent 
       2009-05-24 15:18:44 E [client-protocol.c:3733:client_lookup] brick1: LOOKUP 1/zzz (/zzz): failed to get remote inode number for parent 
       2009-05-24 15:18:44 D [socket.c:654:__socket_proto_state_machine] brick2: partial data read on NB socket 
       2009-05-24 15:18:44 D [socket.c:654:__socket_proto_state_machine] brick2: partial data read on NB socket 
       2009-05-24 15:18:44 D [socket.c:654:__socket_proto_state_machine] brick2: partial data read on NB socket 
       2009-05-24 15:18:44 D [socket.c:654:__socket_proto_state_machine] brick2: partial data read on NB socket 
    2. Small files download fine; no problem at all. 
    3. When downloading 1mb.bin, 10mb.bin and larger files, lighttpd's CPU usage climbs to 99%. 
    4. When stracing lighttpd, it is constantly polling: 

       00:46:55.528608 writev(6, [{"", 0}], 1) = 0 
       00:46:55.528732 time(NULL) = 1243032415 
       00:46:55.528869 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLOUT}], 2, 1000) = 1 ([{fd=6, revents=POLLOUT}]) 
       00:46:55.529005 writev(6, [{"", 0}], 1) = 0 
       00:46:55.529135 time(NULL) = 1243032415 
       00:46:55.529250 poll([{fd=4, events=POLLIN}, {fd=6, events=POLLOUT}], 2, 1000) = 1 ([{fd=6, revents=POLLOUT}]) 
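The strace output above is the signature of a non-blocking busy loop: a connected socket almost always reports POLLOUT, so a loop that keeps issuing zero-byte writev() calls never blocks and burns a full core. A minimal sketch of the same pattern (hypothetical, not mod_glusterfs code):

```python
import select
import socket

# Hypothetical reproduction of the busy loop in the strace above: a
# writable socket reports POLLOUT immediately, and a zero-byte write
# transfers nothing, so poll()/writev() spin without ever blocking.
a, b = socket.socketpair()

p = select.poll()
p.register(a.fileno(), select.POLLOUT)

events = p.poll(1000)   # returns at once: the socket is writable
sent = a.send(b"")      # like writev(6, [{"", 0}], 1) = 0

# sent == 0: nothing moved, so nothing changed -- the next poll() would
# again report POLLOUT instantly, hence the constant polling and 99% CPU
# when the data that should fill the write buffer never arrives.
a.close()
b.close()
```

If the server loop keeps asking for writability while it has nothing to write yet, exactly this spin would result, which would fit small files (served in one go) being fine while large files stall.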

Does anyone know why lighttpd is using so much CPU? Am I perhaps doing something terribly wrong? 

Thanks in advance for your replies. 

Best regards, 
Jasper 



