[Gluster-devel] auth.brick.ip.allow restrictions
Sebastien LELIEVRE
slelievre at tbs-internet.com
Mon Apr 30 15:46:12 UTC 2007
Majied Najjar wrote:
> Hi,
>
> You can create a list of IP addresses which is comma delimited. For example:
>
> option auth.ip.brick.allow 192.168.28.6,192.168.28.7
>
I've tried this and it didn't seem to work; I will try again and take a
closer look at the logs.
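
Looking at the log below, the server reports "IP addr = 192.168.28.7"
while the connecting client is 192.168.28.6, so it seems only the second
auth.ip.brick.allow line was registered, as if repeating the option
replaces the previous value rather than appending to it. If the
comma-delimited form does work, the whole spec should presumably reduce
to this (just a sketch, assuming the subvolume is still named "brick" as
in my config below):

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  option auth.ip.brick.allow 192.168.28.6,192.168.28.7
end-volume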
Cheers!
> Hope that helps.
>
> Majied
>
>
> On Mon, 30 Apr 2007 17:38:07 +0200
> Sebastien LELIEVRE <slelievre at tbs-internet.com> wrote:
>
>> Hello everyone
>>
>> I'm posting this here because I need some advice from you.
>>
>> In all the examples presented on the website, and in the user configs
>> posted here, you always allow a wide range of IP addresses (something
>> like option auth.ip.brick.allow 192.168.28.*).
>>
>> My question is: how do I configure the servers so that they accept only
>> a specific list of IP addresses?
>>
>> I've tried this config :
>>
>> volume server
>>   type protocol/server
>>   option transport-type tcp/server
>>   subvolumes brick
>>   option auth.ip.brick.allow 192.168.28.6
>>   option auth.ip.brick.allow 192.168.28.7
>> end-volume
>>
>> and client 192.168.28.6 wasn't able to connect.
>> The error message on the client side is: Transport endpoint is not connected
>>
>> and on the server side, the log gives us this:
>>
>> [Apr 30 17:16:57] [DEBUG/tcp-server.c:134/tcp_server_notify()]
>> tcp/server:Registering socket (5) for new transport object of 192.168.28.6
>> [Apr 30 17:17:08] [DEBUG/proto-srv.c:2418/mop_setvolume()]
>> server-protocol:mop_setvolume: received port = 1022
>> [Apr 30 17:17:08] [DEBUG/proto-srv.c:2434/mop_setvolume()]
>> server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
>> = 192.168.28.6
>> [Apr 30 17:17:08] [ERROR/common-utils.c:55/full_rw()]
>> libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=17)
>> [Apr 30 17:17:08]
>> [DEBUG/protocol.c:244/gf_block_unserialize_transport()]
>> libglusterfs/protocol:gf_block_unserialize_transport: full_read of
>> header failed
>> [Apr 30 17:17:08] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
>> protocol/server:cleaned up xl_private of 0x8050998
>> [Apr 30 17:17:08] [CRITICAL/tcp.c:82/tcp_disconnect()]
>> transport/tcp:closing socket: 5 priv->connected = 1
>> [Apr 30 17:17:08] [DEBUG/tcp-server.c:229/gf_transport_fini()]
>> tcp/server:destroying transport object for 192.168.28.6:1022 (fd=5)
>> [Apr 30 17:17:19] [DEBUG/tcp-server.c:134/tcp_server_notify()]
>> tcp/server:Registering socket (5) for new transport object of 192.168.28.6
>> [Apr 30 17:17:28] [DEBUG/proto-srv.c:2418/mop_setvolume()]
>> server-protocol:mop_setvolume: received port = 1021
>> [Apr 30 17:17:28] [DEBUG/proto-srv.c:2434/mop_setvolume()]
>> server-protocol:mop_setvolume: IP addr = 192.168.28.7, received ip addr
>> = 192.168.28.6
>> [Apr 30 17:17:28] [ERROR/common-utils.c:55/full_rw()]
>> libglusterfs:full_rw: 0 bytes r/w instead of 113 (errno=9)
>> [Apr 30 17:17:28]
>> [DEBUG/protocol.c:244/gf_block_unserialize_transport()]
>> libglusterfs/protocol:gf_block_unserialize_transport: full_read of
>> header failed
>> [Apr 30 17:17:28] [DEBUG/proto-srv.c:2868/proto_srv_cleanup()]
>> protocol/server:cleaned up xl_private of 0x804b1e8
>> [Apr 30 17:17:28] [CRITICAL/tcp.c:82/tcp_disconnect()]
>> transport/tcp:closing socket: 5 priv->connected = 1
>> [Apr 30 17:17:28] [DEBUG/tcp-server.c:229/gf_transport_fini()]
>> tcp/server:destroying transport object for 192.168.28.6:1021 (fd=5)
>>
>> On the other hand, client 192.168.28.7 is successfully connected.
>>
>> Both server and client versions are 1.3.0-pre3 from the latest tla checkout.
>>
>> Regards,
>>
>> Enkahel
>>
>> Sebastien LELIEVRE
>> slelievre at tbs-internet.com Services to ISP
>> TBS-internet http://www.TBS-internet.com
>>
>>
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at nongnu.org
>> http://lists.nongnu.org/mailman/listinfo/gluster-devel