[Gluster-users] GlusterFS 3.6.2: Can't mount GlusterFS volume

Andreas Hollaus Andreas.Hollaus at ericsson.com
Tue Jun 23 08:38:03 UTC 2015


Hi,

I'm not sure which log file you are referring to, but since log files are stored on a
RAM disk in my case, I'm afraid it's gone.

I use a workaround for this problem: to avoid having to start the volume (again)
with the 'force' option, I wait until the local brick specified in the volume file
is available before I start the glusterfs service. This strange state seems to arise
when I start the service without any bricks being available. That would make sense
if 'gluster volume info' confirmed that the volume is not started, but in my case it
actually claims the volume is started, so you would expect to be able to mount it.
I guess it's quite normal not to have all bricks available when the service starts
on a node, but all bricks missing may be a corner case, right?
However, if the brick is not available, I don't understand why or how that attribute
could be removed. Maybe some pending operation is executed once the brick becomes
available.
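For reference, my workaround boils down to something like the following. This is only a sketch, not my exact init script: wait_for_dir is a helper I made up for illustration, and the brick path is the one from my setup.

```shell
#!/bin/sh
# Wait until a directory exists, polling once per second, up to a
# timeout (in seconds). Returns 0 if the directory appeared, 1 otherwise.
wait_for_dir() {
    dir=$1
    tries=${2:-60}
    while [ "$tries" -gt 0 ]; do
        if [ -d "$dir" ]; then
            return 0
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Usage at boot, before starting the service (path from my volfile):
#   wait_for_dir /opt/lvmdir/c2/brick && /etc/init.d/glusterd start
```

This way the brick process never gets started while its backing directory is still missing, which is the situation that seems to trigger the bad state.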

I don't know whether there is a reason for this behaviour or whether it should be
considered a bug(?).
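For anyone hitting the same "Extended attribute trusted.glusterfs.volume-id is absent" error: the attribute can be inspected, and with care restored, from the volume id that 'gluster volume info' (or the Final graph in the brick log) reports. The uuid_to_hex helper below is just something sketched for illustration; only run setfattr if you are sure it's the right brick and volume.

```shell
# Check whether the brick root carries the volume-id xattr (hex output):
#   getfattr -n trusted.glusterfs.volume-id -e hex /opt/lvmdir/c2/brick
#
# setfattr expects the value as hex; this helper (hypothetical, for
# illustration) converts the UUID reported by 'gluster volume info':
uuid_to_hex() {
    printf '0x%s\n' "$(printf '%s' "$1" | tr -d '-')"
}

uuid_to_hex e052572c-c7ad-4d66-986f-621fbc48999e   # the id from my volfile
# Restoring the attribute (use with care!):
#   setfattr -n trusted.glusterfs.volume-id \
#       -v "$(uuid_to_hex e052572c-c7ad-4d66-986f-621fbc48999e)" \
#       /opt/lvmdir/c2/brick
```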


Regards
Andreas


On 06/23/15 07:32, Atin Mukherjee wrote:
>
> On 06/22/2015 09:55 PM, Atin Mukherjee wrote:
>> Sent from one plus one
>> On Jun 22, 2015 7:31 PM, "Andreas Hollaus" <andreas.hollaus at ericsson.com> wrote:
>>> Hi,
>>>
>>> Well, I don't really know what to expect, but there actually are some errors:
>>> Could it be due to that missing extended attribute? I don't understand why it's missing (yet).
>> You are correct. Missing volume-id could be the problem. I will check the code and confirm tomorrow.
> Could you attach the glusterd log file? Missing this attribute means the
> brick doesn't have its associated volume id. We normally set this
> attribute at the time of volume creation/add-brick.
>>> Regards
>>> Andreas
>>>
>>> # more /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log
>>> [2015-06-22 13:23:47.924071] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.6.2 (args: /usr/sbin/glusterfsd -s 10.32.0.64 --volfile-id c_glstr.10.32.0.64.opt-lvmdir-c2-brick -p /system/glusterd/vols/c_glstr/run/10.32.0.64-opt-lvmdir-c2-brick.pid -S /var/run/67a3053b5f738d0e72fb517a245687f1.socket --brick-name /opt/lvmdir/c2/brick -l /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option *-posix.glusterd-uuid=c7de0c97-ca3a-47a4-bc7d-21fc33b88fee --brick-port 49152 --xlator-option c_glstr-server.listen-port=49152)
>>> [2015-06-22 13:23:47.959891] I [graph.c:269:gf_add_cmdline_options] 0-c_glstr-server: adding option 'listen-port' for volume 'c_glstr-server' with value '49152'
>>> [2015-06-22 13:23:47.959949] I [graph.c:269:gf_add_cmdline_options] 0-c_glstr-posix: adding option 'glusterd-uuid' for volume 'c_glstr-posix' with value 'c7de0c97-ca3a-47a4-bc7d-21fc33b88fee'
>>> [2015-06-22 13:23:47.961541] I [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
>>> [2015-06-22 13:23:47.961656] W [options.c:898:xl_opt_validate] 0-c_glstr-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
>>> [2015-06-22 13:23:47.964258] E [posix.c:5626:init] 0-c_glstr-posix: Extended attribute trusted.glusterfs.volume-id is absent
>>> [2015-06-22 13:23:47.964291] E [xlator.c:425:xlator_init] 0-c_glstr-posix: Initialization of volume 'c_glstr-posix' failed, review your volfile again
>>> [2015-06-22 13:23:47.964310] E [graph.c:322:glusterfs_graph_init] 0-c_glstr-posix: initializing translator failed
>>> [2015-06-22 13:23:47.964330] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
>>> [2015-06-22 13:23:47.965149] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
>>> [2015-06-22 13:25:48.579077] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.6.2 (args: /usr/sbin/glusterfsd -s 10.32.0.64 --volfile-id c_glstr.10.32.0.64.opt-lvmdir-c2-brick -p /system/glusterd/vols/c_glstr/run/10.32.0.64-opt-lvmdir-c2-brick.pid -S /var/run/67a3053b5f738d0e72fb517a245687f1.socket --brick-name /opt/lvmdir/c2/brick -l /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log --xlator-option *-posix.glusterd-uuid=c7de0c97-ca3a-47a4-bc7d-21fc33b88fee --brick-port 49152 --xlator-option c_glstr-server.listen-port=49152)
>>> [2015-06-22 13:25:48.592801] I [graph.c:269:gf_add_cmdline_options] 0-c_glstr-server: adding option 'listen-port' for volume 'c_glstr-server' with value '49152'
>>> [2015-06-22 13:25:48.592855] I [graph.c:269:gf_add_cmdline_options] 0-c_glstr-posix: adding option 'glusterd-uuid' for volume 'c_glstr-posix' with value 'c7de0c97-ca3a-47a4-bc7d-21fc33b88fee'
>>> [2015-06-22 13:25:48.594307] I [rpcsvc.c:2142:rpcsvc_set_outstanding_rpc_limit] 0-rpc-service: Configured rpc.outstanding-rpc-limit with value 64
>>> [2015-06-22 13:25:48.594420] W [options.c:898:xl_opt_validate] 0-c_glstr-server: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
>>> [2015-06-22 13:25:48.596705] W [graph.c:344:_log_if_unknown_option] 0-c_glstr-server: option 'rpc-auth.auth-glusterfs' is not recognized
>>> [2015-06-22 13:25:48.596741] W [graph.c:344:_log_if_unknown_option] 0-c_glstr-server: option 'rpc-auth.auth-unix' is not recognized
>>> [2015-06-22 13:25:48.596770] W [graph.c:344:_log_if_unknown_option] 0-c_glstr-server: option 'rpc-auth.auth-null' is not recognized
>>> [2015-06-22 13:25:48.596809] W [graph.c:344:_log_if_unknown_option] 0-c_glstr-quota: option 'timeout' is not recognized
>>> Final graph:
>>> +------------------------------------------------------------------------------+
>>>   1: volume c_glstr-posix
>>>   2:     type storage/posix
>>>   3:     option glusterd-uuid c7de0c97-ca3a-47a4-bc7d-21fc33b88fee
>>>   4:     option directory /opt/lvmdir/c2/brick
>>>   5:     option volume-id e052572c-c7ad-4d66-986f-621fbc48999e
>>>   6: end-volume
>>>   7:
>>>   8: volume c_glstr-changelog
>>>   9:     type features/changelog
>>>  10:     option changelog-brick /opt/lvmdir/c2/brick
>>>  11:     option changelog-dir /opt/lvmdir/c2/brick/.glusterfs/changelogs
>>>  12:     option changelog-barrier-timeout 120
>>>  13:     subvolumes c_glstr-posix
>>>  14: end-volume
>>>  15:
>>>  16: volume c_glstr-access-control
>>>  17:     type features/access-control
>>>  18:     subvolumes c_glstr-changelog
>>>  19: end-volume
>>>  20:
>>>  21: volume c_glstr-locks
>>>  22:     type features/locks
>>>  23:     subvolumes c_glstr-access-control
>>>  24: end-volume
>>>  25:
>>>  26: volume c_glstr-io-threads
>>>  27:     type performance/io-threads
>>>  28:     subvolumes c_glstr-locks
>>>  29: end-volume
>>>  30:
>>>  31: volume c_glstr-barrier
>>>  32:     type features/barrier
>>>  33:     option barrier disable
>>>  34:     option barrier-timeout 120
>>>  35:     subvolumes c_glstr-io-threads
>>>  36: end-volume
>>>  37:
>>>  38: volume c_glstr-index
>>>  39:     type features/index
>>>  40:     option index-base /opt/lvmdir/c2/brick/.glusterfs/indices
>>>  41:     subvolumes c_glstr-barrier
>>>  42: end-volume
>>>  43:
>>>  44: volume c_glstr-marker
>>>  45:     type features/marker
>>>  46:     option volume-uuid e052572c-c7ad-4d66-986f-621fbc48999e
>>>  47:     option timestamp-file /system/glusterd/vols/c_glstr/marker.tstamp
>>>  48:     option xtime off
>>>  49:     option gsync-force-xtime off
>>>  50:     option quota off
>>>  51:     subvolumes c_glstr-index
>>>  52: end-volume
>>>  53:
>>>  54: volume c_glstr-quota
>>>  55:     type features/quota
>>>  56:     option volume-uuid c_glstr
>>>  57:     option server-quota off
>>>  58:     option timeout 0
>>>  59:     option deem-statfs off
>>>  60:     subvolumes c_glstr-marker
>>>  61: end-volume
>>>  62:
>>>  63: volume /opt/lvmdir/c2/brick
>>>  64:     type debug/io-stats
>>>  65:     option latency-measurement off
>>>  66:     option count-fop-hits off
>>>  67:     subvolumes c_glstr-quota
>>>  68: end-volume
>>>  69:
>>>  70: volume c_glstr-server
>>>  71:     type protocol/server
>>>  72:     option transport.socket.listen-port 49152
>>>  73:     option rpc-auth.auth-glusterfs on
>>>  74:     option rpc-auth.auth-unix on
>>>  75:     option rpc-auth.auth-null on
>>>  76:     option transport-type tcp
>>>  77:     option auth.login./opt/lvmdir/c2/brick.allow b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>>  78:     option auth.login.b860abc7-dfeb-402c-baf3-ef13f1c3bb52.password db5d45c9-58f7-49ad-8080-9197eb69695e
>>>  79:     option auth.addr./opt/lvmdir/c2/brick.allow *
>>>  80:     subvolumes /opt/lvmdir/c2/brick
>>>  81: end-volume
>>>  82:
>>> +------------------------------------------------------------------------------+
>>> [2015-06-22 13:25:49.738346] I [login.c:82:gf_auth] 0-auth/login: allowed user names: b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>> [2015-06-22 13:25:49.738399] I [server-handshake.c:585:server_setvolume] 0-c_glstr-server: accepted client from oamhost-1994-2015/06/22-13:23:51:485466-c_glstr-client-1-0-0 (version: 3.6.2)
>>> [2015-06-22 13:25:49.744199] I [login.c:82:gf_auth] 0-auth/login: allowed user names: b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>> [2015-06-22 13:25:49.744238] I [server-handshake.c:585:server_setvolume] 0-c_glstr-server: accepted client from oamhost-1898-2015/06/22-13:23:46:623908-c_glstr-client-1-0-0 (version: 3.6.2)
>>> [2015-06-22 13:25:49.762796] I [login.c:82:gf_auth] 0-auth/login: allowed user names: b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>> [2015-06-22 13:25:49.762834] I [server-handshake.c:585:server_setvolume] 0-c_glstr-server: accepted client from oamhost-3395-2015/06/22-13:25:49:733813-c_glstr-client-1-0-0 (version: 3.6.2)
>>> [2015-06-22 13:25:49.882096] I [server.c:518:server_rpc_notify] 0-c_glstr-server: disconnecting connection from oamhost-1898-2015/06/22-13:23:46:623908-c_glstr-client-1-0-0
>>> [2015-06-22 13:25:49.882150] I [client_t.c:417:gf_client_unref] 0-c_glstr-server: Shutting down connection oamhost-1898-2015/06/22-13:23:46:623908-c_glstr-client-1-0-0
>>> [2015-06-22 13:25:50.465176] I [login.c:82:gf_auth] 0-auth/login: allowed user names: b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>> [2015-06-22 13:25:50.465214] I [server-handshake.c:585:server_setvolume] 0-c_glstr-server: accepted client from oamhost-2130-2015/06/22-13:23:48:74682-c_glstr-client-1-0-0 (version: 3.6.2)
>>> [2015-06-22 13:25:50.912886] I [login.c:82:gf_auth] 0-auth/login: allowed user names: b860abc7-dfeb-402c-baf3-ef13f1c3bb52
>>> [2015-06-22 13:25:50.912931] I [server-handshake.c:585:server_setvolume] 0-c_glstr-server: accepted client from oamhost-5400-2015/06/22-13:25:57:519410-c_glstr-client-1-0-0 (version: 3.6.2)
>>>
>>>
>>> Regards
>>> Andreas
>>>
>>>
>>> On 6/22/2015 3:45 PM, Atin Mukherjee wrote:
>>>> Sent from one plus one
>>>> On Jun 22, 2015 7:06 PM, "Andreas Hollaus" <Andreas.Hollaus at ericsson.com> wrote:
>>>>> Hi,
>>>>>
>>>>> I keep having this situation where I have to start the volume using the 'force' option. Why isn't the volume started without it?
>>>>> It seems to have these problems after a node restart, but I expected the volume to be restarted properly whenever the service is restarted.
>>>> Do you see anything abnormal in glusterd/brick logs pre and post restart?
>>>>>
>>>>> Regards
>>>>> Andreas
>>>>>
>>>>> On 06/22/15 10:40, Andreas Hollaus wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Well that did the trick. Thanks!
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Andreas
>>>>>>
>>>>>>
>>>>>> On 06/22/15 10:07, Sakshi Bansal wrote:
>>>>>>> Both the bricks are down. Can you run -
>>>>>>> $ gluster volume start <volume-name> force
>>>>>> _______________________________________________
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users at gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>>
>>>
>>
>>
>>


