[Gluster-devel] Incorrect return values from glfs_init()

Anand Avati anand.avati at gmail.com
Thu Jul 26 06:44:27 UTC 2012


A return of 1 is intentional: it indicates that glfs_init() could not
complete yet. 0 indicates success, and you can start issuing fops right
away; -1 is a definitive failure. When ret is positive, initialization
could not complete, but the glfs_t is still retrying to connect to the
server and can succeed in the future (e.g., if you 'gluster volume
start' the volume).

I do understand that this is currently of limited use, as there is no
way to get notified of asynchronous success, but it is part of the
asynchronous initialization support (see glfs_init_async in glfs.c)
which will be completed in the future.

Avati

On Wed, Jul 25, 2012 at 10:27 PM, Bharata B Rao <bharata.rao at gmail.com> wrote:

> On Wed, Jul 25, 2012 at 2:08 PM, Bharata B Rao <bharata.rao at gmail.com>
> wrote:
> > glfs_init() is supposed to return 0 on success and -1 on failure.
> >
> > When I specify a volume that's not yet "started" from gluster CLI,
> > glfs_init() returns 1 with errno 98.
>
> Client volfile
> -----------------
> volume test-client-0
>     type protocol/client
>     option remote-host bharata
>     option remote-subvolume /test
>     option transport-type tcp
> end-volume
>
> volume test-dht
>     type cluster/distribute
>     subvolumes test-client-0
> end-volume
>
> volume test-write-behind
>     type performance/write-behind
>     subvolumes test-dht
> end-volume
>
> volume test-read-ahead
>     type performance/read-ahead
>     subvolumes test-write-behind
> end-volume
>
> volume test-io-cache
>     type performance/io-cache
>     subvolumes test-read-ahead
> end-volume
>
> volume test-quick-read
>     type performance/quick-read
>     subvolumes test-io-cache
> end-volume
>
> volume test-md-cache
>     type performance/md-cache
>     subvolumes test-quick-read
> end-volume
>
> volume test
>     type debug/io-stats
>     option latency-measurement off
>     option count-fop-hits off
>     subvolumes test-md-cache
> end-volume
>
> Client side log
> ---------------------
> [2012-07-26 05:17:57.602104] I [socket.c:3221:socket_init] 0-gfapi:
> SSL support is NOT enabled
> [2012-07-26 05:17:57.602215] I [socket.c:3236:socket_init] 0-gfapi:
> using system polling thread
> [2012-07-26 05:17:57.618120] I [socket.c:3221:socket_init]
> 0-test-client-0: SSL support is NOT enabled
> [2012-07-26 05:17:57.618160] I [socket.c:3236:socket_init]
> 0-test-client-0: using system polling thread
> [2012-07-26 05:17:57.618188] I [glfs-master.c:61:notify] 0-gfapi: New
> graph 62686172-6174-612d-3930-34332d323031 (0) coming up
> [2012-07-26 05:17:57.618206] I [client.c:2141:notify] 0-test-client-0:
> parent translators are ready, attempting connect on transport
> [2012-07-26 05:17:57.621582] E
> [client-handshake.c:1693:client_query_portmap_cbk] 0-test-client-0:
> failed to get the port number for remote subvolume
> [2012-07-26 05:17:57.621636] W [socket.c:390:__socket_rwv]
> 0-test-client-0: readv failed (No data available)
> [2012-07-26 05:17:57.621662] I [client.c:2089:client_rpc_notify]
> 0-test-client-0: disconnected
> [2012-07-26 05:17:57.621684] I [glfs-master.c:42:glfs_graph_setup]
> 0-glfs-master: switched to graph 62686172-6174-612d-3930-34332d323031
> (0)
> [2012-07-26 05:17:57.631304] E [dht-common.c:1372:dht_lookup]
> 0-test-dht: Failed to get hashed subvol for /
> [2012-07-26 05:17:57.631563] E [dht-common.c:1372:dht_lookup]
> 0-test-dht: Failed to get hashed subvol for /dir1
>
> Server
> ----------
> Not running since volume isn't yet started.
>
> Last glusterd log
> (/usr/local/var/log/glusterfs/usr-local-etc-glusterfs-glusterd.vol.log)
> ------------------------
> [2012-07-26 05:17:08.317357] E
> [glusterd-store.c:2212:glusterd_store_retrieve_volume] 0-: Unknown
> key: brick-0
> [2012-07-26 05:17:08.317375] E
> [glusterd-store.c:2212:glusterd_store_retrieve_volume] 0-: Unknown
> key: brick-1
> [2012-07-26 05:17:08.317654] E
> [glusterd-store.c:2212:glusterd_store_retrieve_volume] 0-: Unknown
> key: brick-0
> [2012-07-26 05:17:08.317664] E
> [glusterd-store.c:2212:glusterd_store_retrieve_volume] 0-: Unknown
> key: brick-1
> [2012-07-26 05:17:08.318514] I [glusterd.c:94:glusterd_uuid_init]
> 0-glusterd: retrieved UUID: b5b47193-5b5e-49da-b28c-8f0aa995e744
> [2012-07-26 05:17:08.319091] I
> [glusterd-utils.c:1222:glusterd_volume_start_glusterfs] 0-: About to
> start glusterfs for brick bharata:/rep1
> [2012-07-26 05:17:08.330673] I [socket.c:3221:socket_init]
> 0-management: SSL support is NOT enabled
> [2012-07-26 05:17:08.330726] I [socket.c:3236:socket_init]
> 0-management: using system polling thread
> [2012-07-26 05:17:08.331211] I
> [glusterd-utils.c:1222:glusterd_volume_start_glusterfs] 0-: About to
> start glusterfs for brick bharata:/rep2
> [2012-07-26 05:17:08.339197] I [socket.c:3221:socket_init]
> 0-management: SSL support is NOT enabled
> [2012-07-26 05:17:08.339219] I [socket.c:3236:socket_init]
> 0-management: using system polling thread
> [2012-07-26 05:17:08.339660] I
> [glusterd-utils.c:868:glusterd_volume_brickinfo_get] 0-management:
> Found brick
> [2012-07-26 05:17:08.339899] I
> [glusterd-utils.c:868:glusterd_volume_brickinfo_get] 0-management:
> Found brick
> [2012-07-26 05:17:08.347865] I [socket.c:3221:socket_init]
> 0-management: SSL support is NOT enabled
> [2012-07-26 05:17:08.347895] I [socket.c:3236:socket_init]
> 0-management: using system polling thread
> [2012-07-26 05:17:08.353960] I [socket.c:3221:socket_init]
> 0-management: SSL support is NOT enabled
> [2012-07-26 05:17:08.353983] I [socket.c:3236:socket_init]
> 0-management: using system polling thread
> Given volfile:
>
> +------------------------------------------------------------------------------+
>   1: volume management
>   2:     type mgmt/glusterd
>   3:     option working-directory /var/lib/glusterd
>   4:     option transport-type socket,rdma
>   5:     option transport.socket.keepalive-time 10
>   6:     option transport.socket.keepalive-interval 2
>   7:     option transport.socket.read-fail-log off
>   8: end-volume
>
>
> +------------------------------------------------------------------------------+
> [2012-07-26 05:17:08.354215] I [socket.c:2081:socket_event_handler]
> 0-transport: disconnecting now
> [2012-07-26 05:17:08.354244] I [socket.c:2081:socket_event_handler]
> 0-transport: disconnecting now
> [2012-07-26 05:17:08.354265] I [socket.c:2081:socket_event_handler]
> 0-transport: disconnecting now
> [2012-07-26 05:17:08.354283] I [socket.c:2081:socket_event_handler]
> 0-transport: disconnecting now
> [2012-07-26 05:17:08.359061] I
> [glusterd-pmap.c:237:pmap_registry_bind] 0-pmap: adding brick /rep1 on
> port 49154
> [2012-07-26 05:17:08.359611] I
> [glusterd-pmap.c:237:pmap_registry_bind] 0-pmap: adding brick /rep2 on
> port 49155
> [2012-07-26 05:17:08.374653] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-07-26 05:17:08.374715] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-07-26 05:17:08.376131] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-07-26 05:17:08.376176] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-07-26 05:17:14.867131] I
> [glusterd-handler.c:852:glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req
> [2012-07-26 05:17:14.867970] I
> [glusterd-handler.c:852:glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req
> [2012-07-26 05:17:14.868468] I
> [glusterd-handler.c:852:glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req
> [2012-07-26 05:17:14.869070] I
> [glusterd-handler.c:852:glusterd_handle_cli_get_volume] 0-glusterd:
> Received get vol req
> [2012-07-26 05:17:57.621649] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
> [2012-07-26 05:17:57.636813] W [socket.c:390:__socket_rwv]
> 0-socket.management: readv failed (No data available)
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel
>