[Gluster-users] Gluster-swift and keystone

Thiago da Silva thiago at redhat.com
Mon Feb 10 18:59:42 UTC 2014


Hi Antonio,
In your case, "demo" is the tenant name; you will need to use the tenant
id, which you can get with this command: "keystone tenant-get demo".
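
For example, a minimal sequence might look like this (the tenant id
below is the one that shows up in your logs; the brick paths and
hostnames are only illustrative, and gluster-swift-gen-builders should
be passed every volume you want in the ring):

  # look up the tenant id for the "demo" tenant
  keystone tenant-get demo

  # create and start a gluster volume named after that tenant id
  gluster volume create a9b091f85e04499eb2282733ff7d183e \
      node1:/export/brick1 node2:/export/brick1
  gluster volume start a9b091f85e04499eb2282733ff7d183e

  # regenerate the ring (list all volumes) and restart swift
  gluster-swift-gen-builders default a9b091f85e04499eb2282733ff7d183e
  swift-init main restart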

We are working to remove this limitation; it is currently targeted for
the Icehouse release. You can follow all the work being planned for
gluster-swift on the Launchpad page:
https://blueprints.launchpad.net/gluster-swift

To use the Havana release (latest stable release), you can download all
the files from here:
https://launchpad.net/gluster-swift/havana

For Icehouse (upcoming release), they are here:
https://launchpad.net/gluster-swift/icehouse
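
If you would rather install from git, something like this should get
you the matching branch (I am assuming the upstream repository URL
here; adjust it if you use a different mirror):

  git clone https://github.com/gluster/gluster-swift.git
  cd gluster-swift
  git checkout havana   # or: git checkout icehouse
  python setup.py install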

Regards,

Thiago

On Mon, 2014-02-10 at 19:02 +0100, Antonio Messina wrote:
> Some more information:
> 
> I also tried with swift master branch and gluster-swift master branch
> without success.
> 
> .a.
> 
> On Mon, Feb 10, 2014 at 6:21 PM, Antonio Messina
> <antonio.s.messina at gmail.com> wrote:
> > Hi again Thiago,
> >
> > I did try to create a volume called "demo" and connect as tenant
> > "demo", but I am still getting the error:
> >
> > Feb 10 18:18:17 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> > account-server STDOUT: ERROR:root:No export found in ['default',
> > 'demo'] matching drive, volume_not_in_ring (txn:
> > tx7952d689cc334122bc200-0052f909d9)
> > Feb 10 18:18:17 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> > proxy-server ERROR Insufficient Storage
> > 127.0.0.1:6012/volume_not_in_ring (txn:
> > tx7952d689cc334122bc200-0052f909d9) (client_ip: 130.60.24.12)
> >
> > I should probably describe my environment:
> > Ubuntu 12.04
> > OpenStack Havana from http://ubuntu-cloud.archive.canonical.com
> > swift 1.10
> > gluster-swift "havana" branch
> >
> > I tried with the "master" branch but it doesn't work with swift 1.10
> >
> > .a.
> >
> > On Mon, Feb 10, 2014 at 5:35 PM, Thiago da Silva <thiago at redhat.com> wrote:
> >> Hi Antonio,
> >>
> >> The current version of gluster-swift has a limitation: each swift
> >> account must map to a gluster volume.
> >>
> >> When using keystone, you will need to create a gluster volume named
> >> after the tenant id (not the tenant name). Then regenerate the ring
> >> using 'gluster-swift-gen-builders' and restart swift.
> >>
> >> Thiago
> >>
> >>
> >> On Mon, 2014-02-10 at 13:54 +0100, Antonio Messina wrote:
> >>> Hi all,
> >>>
> >>> I am testing gluster and gluster-swift. I currently have a cluster of
> >>> 8 nodes plus a frontend node, which is a peer but doesn't have any
> >>> bricks.
> >>> On the frontend node I have installed swift (from debian packages,
> >>> havana version) and gluster-swift from git.
> >>>
> >>> I tested it *without* authentication and it basically worked, but
> >>> since I need to enable keystone authentication (and possibly also s3
> >>> tokens eventually) I tried to just add the configuration options for
> >>> the proxy-server that I used in my "standard" swift installation (i.e.
> >>> without gluster), but it didn't work.
> >>>
> >>> The error I am getting is "503", the relevant logs (I'm running the
> >>> daemons with loglevel DEBUG) are:
> >>>
> >>> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Authenticating user token
> >>> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Removing headers from request environment:
> >>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
> >>> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Storing 484f080f8436324ea6be721dee58cd0f token in
> >>> memcache
> >>> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Using identity: {'roles': [u'_member_', u'Member'],
> >>> 'user': u'antonio', 'tenant': (u'a9b091f85e04499eb2282733ff7d183e',
> >>> u'demo')} (txn: txaaf80571c35d49818e757-0052f8cae0)
> >>> Feb 10 13:49:36 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server allow user with role member as account admin (txn:
> >>> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> >>> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> account-server STDOUT: ERROR:root:No export found in ['default']
> >>> matching drive, volume_not_in_ring (txn:
> >>> txaaf80571c35d49818e757-0052f8cae0)
> >>> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> account-server 127.0.0.1 - - [10/Feb/2014:12:49:37 +0000] "GET
> >>> /volume_not_in_ring/0/AUTH_a9b091f85e04499eb2282733ff7d183e" 507 -
> >>> "txaaf80571c35d49818e757-0052f8cae0" "GET
> >>> http://130.60.24.55:8080/v1/AUTH_a9b091f85e04499eb2282733ff7d183e?format=json"
> >>> "proxy-server 5215" 0.1409 ""
> >>> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server ERROR Insufficient Storage
> >>> 127.0.0.1:6012/volume_not_in_ring (txn:
> >>> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> >>> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Node error limited 127.0.0.1:6012 (volume_not_in_ring)
> >>> (txn: txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> >>> Feb 10 13:49:37 server-b11f5969-8c3c-4b26-8e8d-defb4d272ce9
> >>> proxy-server Account GET returning 503 for [507] (txn:
> >>> txaaf80571c35d49818e757-0052f8cae0) (client_ip: 130.60.24.12)
> >>>
> >>> From the account-server log lines it seems that gluster-swift is
> >>> trying to match the gluster volume (which in my case is called
> >>> 'default') against a "drive" that is apparently called
> >>> "volume_not_in_ring", but I don't really understand where this comes
> >>> from: what is a drive in this context? Is it related to swift? Is it
> >>> something inserted by the "authtoken" or "keystone" filters I added in
> >>> the pipeline?
> >>>
> >>> In proxy-server.conf I basically replaced the pipeline with:
> >>>
> >>> [pipeline:main]
> >>> pipeline = catch_errors healthcheck proxy-logging cache proxy-logging
> >>> authtoken keystoneauth proxy-server
> >>>
> >>> i.e. I added the "authtoken" and "keystoneauth" filters, and then added
> >>> the following two stanzas:
> >>>
> >>>
> >>> # Addition to make it work with keystone
> >>> [filter:authtoken]
> >>> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> >>> auth_host = keystone-host
> >>> auth_port = 35357
> >>> auth_protocol = http
> >>> auth_uri = http://keystone-host:5000/
> >>> admin_tenant_name = service
> >>> admin_user = swift
> >>> admin_password = swift-password
> >>> delay_auth_decision = 1
> >>>
> >>> [filter:keystoneauth]
> >>> use = egg:swift#keystoneauth
> >>> operator_roles = Member, admin
> >>>
> >>> Thank you in advance to anyone willing to spend some time helping me
> >>> with this :)
> >>>
> >>> .a.
> >>>
> >>
> >>
> >
> >
> >
> > --
> > antonio.s.messina at gmail.com
> > antonio.messina at uzh.ch                     +41 (0)44 635 42 22
> > GC3: Grid Computing Competence Center      http://www.gc3.uzh.ch/
> > University of Zurich
> > Winterthurerstrasse 190
> > CH-8057 Zurich Switzerland
> 
> 
> 




