[Gluster-users] 503 Service unavailable load balancing with nginx
Gangalwar
gaurav at gluster.com
Mon Aug 1 17:59:21 UTC 2011
Cool, nice to hear that from you.
________________________________
From: Gabriel-Adrian Samfira [samfiragabriel at gmail.com]
Sent: Monday, August 01, 2011 9:01 PM
To: Gangalwar
Cc: gluster-users at gluster.org
Subject: RE: [Gluster-users] 503 Service unavailable load balancing with nginx
That's good to know! I will increase the limit as needed. For now, at least, 5 GB is fine. The default value in nginx is much lower (a few MB). I have to say, so far, gluster with object storage is awesome. Good job to you and the rest of the gluster team!
On 2011 8 1 18:25, "Gangalwar" <gaurav at gluster.com> wrote:
> Gluster-Object has no such limit on the object size; we can directly store an object of any size without splitting it. It is only limited by your backend capacity, i.e. the size of the glusterfs volume used as the account.
> Swift supports large objects (>5 GB) by splitting them into multiple smaller segments and storing those segments in a pseudo directory. But since Gluster-Object maintains the namespace for objects as files and directories, we store each object as a single big file.
> You can remove this limit from the config file.
> You can use the "st" command without the -S option to avoid segmentation.
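>
> For example, something along these lines (the auth URL, credentials and
> names are only placeholders; adjust them to your setup):
>
> # plain upload -- one object per file, no segmentation, fine on Gluster-Object
> st -A https://192.168.5.5:443/auth/v1.0 -U test:tester -K testing \
>     upload mycontainer bigfile.img
>
> # on stock Swift, objects over 5 GB need -S <segment size in bytes>
> st -A https://192.168.5.5:443/auth/v1.0 -U test:tester -K testing \
>     upload -S 1073741824 mycontainer bigfile.img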
>
> Regards,
> Gaurav
>
> ________________________________
> From: Gabriel-Adrian Samfira [samfiragabriel at gmail.com]
> Sent: Monday, August 01, 2011 7:37 PM
> To: Gangalwar
> Cc: gluster-users at gluster.org
> Subject: RE: [Gluster-users] 503 Service unavailable load balancing with nginx
>
>
> As far as I can tell, gluster uses Swift for the object storage part. Swift has an arbitrarily set maximum object size of 5 GB. Above that you are supposed to split the file into pieces and create a manifest file that is used to concatenate the pieces when downloading. The "st" command does this automatically. That's the reason for setting client_max_body_size to 5G.
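>
> If it helps to picture what "st" does behind the scenes with -S, here is a
> rough sketch in Python 2 against the Swift HTTP API (host, token, account
> and container names are placeholders, and both containers are assumed to
> already exist): each piece is uploaded as an ordinary object under a common
> prefix, and a zero-byte manifest object ties them together.
>
> import httplib
>
> HOST, PORT = "192.168.5.5", 443     # proxy node (placeholder)
> TOKEN = "AUTH_tk_placeholder"       # token from the /auth/v1.0 handshake
> ACCOUNT = "AUTH_test"               # storage account (placeholder)
>
> def put(path, body, headers):
>     # one request per connection keeps the sketch simple
>     conn = httplib.HTTPSConnection(HOST, PORT)
>     conn.request("PUT", path, body, headers)
>     print conn.getresponse().status, path
>     conn.close()
>
> # 1. each piece becomes an ordinary object under a common prefix
> for i, piece in enumerate(["first piece", "second piece"]):
>     put("/v1/%s/segments/bigfile/%08d" % (ACCOUNT, i),
>         piece, {"X-Auth-Token": TOKEN})
>
> # 2. a zero-byte manifest object points at that prefix; a GET on the
> #    manifest streams the segments back concatenated in order
> put("/v1/%s/files/bigfile" % ACCOUNT, "",
>     {"X-Auth-Token": TOKEN,
>      "X-Object-Manifest": "segments/bigfile/"})
>
> Without -S the whole file goes up as a single PUT, which is exactly the
> request size that client_max_body_size has to allow for.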
>
> On 2011 8 1 09:15, "Gangalwar" <gaurav at gluster.com> wrote:
>> Hi,
>> Thanks for reporting this issue; it will be fixed in the next release.
>> Also, could I ask why you are using "client_max_body_size 5G;" in the config file?
>>
>> Thanks,
>> Gaurav
>>
>> ________________________________
>>
>> Hello,
>>
>> I don't know if this is the best way to report a bug, but here goes :).
>>
>> I have two gluster servers running glusterfs-3.3beta1 on which I have
>> configured the Object Storage platform. The servers are on a private
>> network with no public IPs, and I was trying to load balance the
>> object storage system using nginx. It worked great except that every
>> other request would be answered with a 503 error. Upon inspection of
>> /var/log/swift/proxy.error I found the following traceback:
>>
>> Jul 29 13:28:53 storage05 proxy-server ERROR 500 Traceback (most recent call last):
>>   File "/usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py", line 891, in __call__
>>     res = getattr(self, req.method)(req)
>>   File "/usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py", line 733, in GET
>>     if file_obj.metadata[X-ETAG] in request.if_none_match:
>> NameError: global name 'X' is not defined
>> From Object Server 127.0.0.1:6010
>> (txn: tx2abf0954-1043-4976-a692-39da260d9271)
>>
>> It seems that line 733 in
>> /usr/local/lib/python2.6/dist-packages/swift-1.4_dev-py2.6.egg/swift/obj/server.py
>> references X-ETAG instead of X_ETAG (I think it is a typo). Replacing the
>> dash with an underscore takes care of the error on my system.
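>>
>> To make the typo concrete, here is a tiny standalone snippet (Python 2;
>> the names are made up and only the shape of the expression matches
>> server.py):
>>
>> X_ETAG = 'ETag'                 # metadata key constant (made-up stand-in)
>> metadata = {'ETag': 'd41d8cd98f00b204e9800998ecf8427e'}
>>
>> # X-ETAG is parsed as the subtraction "X - ETAG"; Python fails on the
>> # first undefined name, which is the NameError about 'X' in the traceback
>> try:
>>     print metadata[X-ETAG]
>> except NameError, err:
>>     print "broken:", err
>>
>> # with the underscore it is a single identifier that resolves to 'ETag'
>> print "fixed:", metadata[X_ETAG]
>>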
>> If it is of any help, here is the nginx config I used:
>>
>>
>> worker_processes 1;
>>
>> events {
>>     worker_connections 1024;
>> }
>>
>> http {
>>     include mime.types;
>>     default_type application/octet-stream;
>>
>>     sendfile on;
>>     keepalive_timeout 65;
>>
>>     upstream backend-secure {
>>         server 192.168.5.5:443;
>>         server 192.168.5.6:443;
>>     }
>>
>>     server {
>>         listen 80;
>>         client_max_body_size 5G;
>>         location / {
>>             proxy_pass https://backend-secure;
>>             proxy_set_header Host $host;
>>             proxy_set_header X-Real-IP $remote_addr;
>>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>>             proxy_set_header X-Forwarded-Proto https;
>>             proxy_redirect off;
>>         }
>>     }
>>
>>     server {
>>         listen 443 ssl;
>>         client_max_body_size 5G;
>>         ssl_certificate /etc/nginx/ssl/cert.crt;
>>         ssl_certificate_key /etc/nginx/ssl/key.key;
>>         location / {
>>             proxy_pass https://backend-secure;
>>             proxy_set_header Host $host;
>>             proxy_set_header X-Real-IP $remote_addr;
>>             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
>>             proxy_set_header X-Forwarded-Proto https;
>>             proxy_redirect off;
>>         }
>>     }
>> }
>>
>>
>> Best regards,
>> Gabriel
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>