[Gluster-users] Gluster swift

符永涛 yongtaofu at gmail.com
Sat Dec 8 14:53:20 UTC 2012


It looks like you're using GlusterFS UFO. For performance you should
set object-only to yes (/etc/swift/fs.conf). You should also avoid
putting too many files under one container: in GlusterFS UFO the
container is actually a directory, and putting a huge number of files
under one directory obviously hurts performance. (Unlike original
OpenStack Swift, which hashes files into a flat architecture.)
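
For reference, a minimal fs.conf sketch. I am assuming the option is
spelled object_only, as in the sample config shipped with the
3.3-series glusterfs-swift packages; please check against your
installed file:

    # /etc/swift/fs.conf -- sketch, not a complete file
    [DEFAULT]
    # where gluster-swift mounts the gluster volumes
    mount_path = /mnt/gluster-object
    # trade container/account listings for speed: my understanding is
    # that with this on, UFO no longer crawls the directory tree to
    # build listings on every request
    object_only = yes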
I have the following suggestions and hope they help:
1. Turn object-only on (see the config sketch above).
2. Hash the files across several subfolders; if the object path
contains a "/", it maps to a directory in the file system (sketched
below).
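
Here is a rough sketch of suggestion 2, reworking Andrew's upload loop
from further down the thread so the 10000 files are spread across 256
subfolders (the two-character md5 prefix is just one arbitrary
choice):

    for file in *; do
        # the first two hex characters of the file name's md5 select
        # one of 256 buckets
        prefix=$(echo -n "$file" | md5sum | cut -c1-2)
        # a "/" in the object path becomes a subdirectory on the
        # gluster volume, so no single directory gets all the files
        curl -X PUT -T "$file" \
            -H 'X-Auth-Token: AUTH_tk289d8ebe3ff44c97a9721970a4251f02' \
            "https://cheese25:443/v1/AUTH_gv0/new_container/$prefix/$file" -k
    done

With 10000 files that is roughly 40 objects per directory instead of
one directory holding all of them.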

2012/12/4, Peter Portante <pportant at redhat.com>:
> Hi Andrew,
>
> We have seen this kind of scaling issue with the RHS 2.0 code base, which
> appears to be the version you are using. We are currently in the process of
> making some changes that we think might alleviate this problem, or fix it
> entirely, but we have not reached a solid footing to verify that yet.
>
> -peter
>
>
> ----- Original Message -----
>> From: "Andrew Holway" <a.holway at syseleven.de>
>> To: "Peter Portante" <pportant at redhat.com>
>> Cc: gluster-users at gluster.org, kaleb at keithley.org
>> Sent: Friday, November 30, 2012 4:09:44 PM
>> Subject: Re: [Gluster-users] Gluster swift
>>
>> :D
>>
>> Maybe this is more useful.
>>
>> [root at cheese25 ~]# yum list installed | grep gluster
>> glusterfs.x86_64                    3.3.1-3.el6    @epel-glusterfs
>> glusterfs-fuse.x86_64               3.3.1-3.el6    @epel-glusterfs
>> glusterfs-server.x86_64             3.3.1-3.el6    @epel-glusterfs
>> glusterfs-swift.noarch              3.3.1-3.el6    @epel-glusterfs-swift
>> glusterfs-swift-account.noarch      3.3.1-3.el6    @epel-glusterfs-swift
>> glusterfs-swift-container.noarch    3.3.1-3.el6    @epel-glusterfs-swift
>> glusterfs-swift-object.noarch       3.3.1-3.el6    @epel-glusterfs-swift
>> glusterfs-swift-plugin.noarch       3.3.1-3.el6    @epel-glusterfs-swift
>> glusterfs-swift-proxy.noarch        3.3.1-3.el6    @epel-glusterfs-swift
>>
>>
>> On Nov 29, 2012, at 9:03 PM, Andrew Holway wrote:
>>
>> > Hi,
>> >
>> > glusterfs.x86_64                        3.3.1-3.el6    @epel-glusterfs
>> > glusterfs-fuse.x86_64                   3.3.1-3.el6    @epel-glusterfs
>> > glusterfs-server.x86_64                 3.3.1-3.el6    @epel-glusterfs
>> > glusterfs-swift.noarch                  3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs-swift-account.noarch          3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs-swift-container.noarch        3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs-swift-object.noarch           3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs-swift-plugin.noarch           3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs-swift-proxy.noarch            3.3.1-3.el6    @epel-glusterfs-swift
>> > glusterfs.i686                          3.2.7-1.el6    epel
>> > glusterfs-debuginfo.x86_64              3.3.1-3.el6    epel-glusterfs
>> > glusterfs-devel.i686                    3.2.7-1.el6    epel
>> > glusterfs-devel.x86_64                  3.3.1-3.el6    epel-glusterfs
>> > glusterfs-geo-replication.x86_64        3.3.1-3.el6    epel-glusterfs
>> > glusterfs-rdma.x86_64                   3.3.1-3.el6    epel-glusterfs
>> > glusterfs-swift-doc.noarch              3.3.1-3.el6    epel-glusterfs-swift
>> > glusterfs-vim.x86_64                    3.2.7-1.el6    epel
>> >
>> > Ta,
>> >
>> > Andrew
>> >
>> > On Nov 29, 2012, at 8:52 PM, Peter Portante wrote:
>> >
>> >> Hi Andrew,
>> >>
>> >> What version of Gluster are you using?
>> >>
>> >> -peter
>> >>
>> >>
>> >> ----- Original Message -----
>> >>> From: "Andrew Holway" <a.holway at syseleven.de>
>> >>> To: gluster-users at gluster.org
>> >>> Cc: kaleb at keithley.org
>> >>> Sent: Thursday, November 29, 2012 1:02:15 PM
>> >>> Subject: Re: [Gluster-users] Gluster swift
>> >>>
>> >>> In addition,
>> >>>
>> >>> Requests to view the contents of containers that have been filled in
>> >>> this manner fail.
>> >>>
>> >>> [root at cheese25 free]# ls -l | wc
>> >>>  3654   32879  193695
>> >>> [root at cheese25 free]#
>> >>>
>> >>> [root at bright60 lots_of_little_files]# curl --verbose -H
>> >>> 'X-Auth-Token: AUTH_tk289d8ebe3ff44c97a9721970a4251f02'
>> >>> https://cheese25:443/v1/AUTH_gv0/free/ -k
>> >>> * About to connect() to cheese25 port 443 (#0)
>> >>> *   Trying 10.141.105.25... connected
>> >>> * Connected to cheese25 (10.141.105.25) port 443 (#0)
>> >>> * Initializing NSS with certpath: sql:/etc/pki/nssdb
>> >>> * warning: ignoring value of ssl.verifyhost
>> >>> * skipping SSL peer certificate verification
>> >>> * SSL connection using TLS_RSA_WITH_AES_256_CBC_SHA
>> >>> * Server certificate:
>> >>> * 	subject: CN=cheese25,O=Default Company Ltd,L=Default City,C=XX
>> >>> * 	start date: Nov 29 16:27:49 2012 GMT
>> >>> * 	expire date: Dec 29 16:27:49 2012 GMT
>> >>> * 	common name: cheese25
>> >>> * 	issuer: CN=cheese25,O=Default Company Ltd,L=Default City,C=XX
>> >>>> GET /v1/AUTH_gv0/free/ HTTP/1.1
>> >>>> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
>> >>>> NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
>> >>>> Host: cheese25
>> >>>> Accept: */*
>> >>>> X-Auth-Token: AUTH_tk289d8ebe3ff44c97a9721970a4251f02
>> >>>>
>> >>> < HTTP/1.1 503 Internal Server Error
>> >>> < Content-Type: text/html; charset=UTF-8
>> >>> < Content-Length: 0
>> >>> < Date: Thu, 29 Nov 2012 18:00:04 GMT
>> >>> <
>> >>> * Connection #0 to host cheese25 left intact
>> >>> * Closing connection #0
>> >>>
>> >>> But non-full volumes are ok :)
>> >>>
>> >>> [root at bright60 lots_of_little_files]# curl --verbose -H
>> >>> 'X-Auth-Token: AUTH_tk289d8ebe3ff44c97a9721970a4251f02'
>> >>> https://cheese25:443/v1/AUTH_gv0/stuff/ -k
>> >>> * About to connect() to cheese25 port 443 (#0)
>> >>> *   Trying 10.141.105.25... connected
>> >>> * Connected to cheese25 (10.141.105.25) port 443 (#0)
>> >>> * Initializing NSS with certpath: sql:/etc/pki/nssdb
>> >>> * warning: ignoring value of ssl.verifyhost
>> >>> * skipping SSL peer certificate verification
>> >>> * SSL connection using TLS_RSA_WITH_AES_256_CBC_SHA
>> >>> * Server certificate:
>> >>> * 	subject: CN=cheese25,O=Default Company Ltd,L=Default City,C=XX
>> >>> * 	start date: Nov 29 16:27:49 2012 GMT
>> >>> * 	expire date: Dec 29 16:27:49 2012 GMT
>> >>> * 	common name: cheese25
>> >>> * 	issuer: CN=cheese25,O=Default Company Ltd,L=Default City,C=XX
>> >>>> GET /v1/AUTH_gv0/stuff/ HTTP/1.1
>> >>>> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
>> >>>> NSS/3.13.1.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
>> >>>> Host: cheese25
>> >>>> Accept: */*
>> >>>> X-Auth-Token: AUTH_tk289d8ebe3ff44c97a9721970a4251f02
>> >>>>
>> >>> < HTTP/1.1 200 OK
>> >>> < X-Container-Object-Count: 15
>> >>> < X-Container-Bytes-Used: 0
>> >>> < Accept-Ranges: bytes
>> >>> < Content-Length: 84
>> >>> < Content-Type: text/plain; charset=utf-8
>> >>> < Date: Thu, 29 Nov 2012 18:00:49 GMT
>> >>> <
>> >>> a1
>> >>> a10
>> >>> a100
>> >>> a1000
>> >>> a10000
>> >>> a1001
>> >>> a1002
>> >>> a1003
>> >>> a1004
>> >>> a1005
>> >>> a1006
>> >>> a1007
>> >>> a1008
>> >>> a1009
>> >>> a101
>> >>> * Connection #0 to host cheese25 left intact
>> >>> * Closing connection #0
>> >>>
>> >>> Thanks,
>> >>>
>> >>> Andrew
>> >>>
>> >>> On Nov 29, 2012, at 6:38 PM, Andrew Holway wrote:
>> >>>
>> >>>> Hi,
>> >>>>
>> >>>> I'm mooperd on IRC.
>> >>>>
>> >>>> After lots of swearing and learning I think I am getting the hang
>> >>>> of it. Today, I created 10000 files and squirted them into UFO.
>> >>>>
>> >>>> for file in $(ls); do curl -X PUT -T $file -H 'X-Auth-Token:
>> >>>> AUTH_tk289d8ebe3ff44c97a9721970a4251f02'
>> >>>> https://cheese25:443/v1/AUTH_gv0/new_container/ -k; done
>> >>>>
>> >>>> It works perfectly until you get to about 3500 files... then:
>> >>>>
>> >>>> </body>
>> >>>> </html><html>
>> >>>> <head>
>> >>>> <title>201 Created</title>
>> >>>> </head>
>> >>>> <body>
>> >>>> <h1>201 Created</h1>
>> >>>> <br /><br />
>> >>>>
>> >>>>
>> >>>>
>> >>>> </body>
>> >>>> </html><html>
>> >>>> <head>
>> >>>> <title>201 Created</title>
>> >>>> </head>
>> >>>> <body>
>> >>>> <h1>201 Created</h1>
>> >>>> <br /><br />
>> >>>>
>> >>>>
>> >>>>
>> >>>> </body>
>> >>>> </html><html>
>> >>>> <head>
>> >>>> <title>404 Not Found</title>
>> >>>> </head>
>> >>>> <body>
>> >>>> <h1>404 Not Found</h1>
>> >>>> The resource could not be found.<br /><br />
>> >>>>
>> >>>>
>> >>>> This seems to be a hard limit on the number of files in a dir.
>> >>>> Any ideas?
>> >>>>
>> >>>> Thanks,
>> >>>>
>> >>>> Andrew
>> >>>>
>> >
>>
>>
>>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>


-- 
符永涛


