[Gluster-users] [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17

Ramesh Nachimuthu rnachimu at redhat.com
Tue Dec 20 08:16:24 UTC 2016





----- Original Message -----
> From: "Giuseppe Ragusa" <giuseppe.ragusa at hotmail.com>
> To: "Ramesh Nachimuthu" <rnachimu at redhat.com>
> Cc: users at ovirt.org, gluster-users at gluster.org, "Ravishankar Narayanankutty" <ranaraya at redhat.com>
> Sent: Tuesday, December 20, 2016 4:15:18 AM
> Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring GlusterFS volumes in HC HE oVirt 3.6.7 /
> GlusterFS 3.7.17
> 
> On Fri, Dec 16, 2016, at 05:44, Ramesh Nachimuthu wrote:
> > ----- Original Message -----
> > > From: "Giuseppe Ragusa" <giuseppe.ragusa at hotmail.com>
> > > To: "Ramesh Nachimuthu" <rnachimu at redhat.com>
> > > Cc: users at ovirt.org
> > > Sent: Friday, December 16, 2016 2:42:18 AM
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 /
> > > GlusterFS 3.7.17
> > > 
> > > Giuseppe Ragusa has shared a OneDrive file. To view it, click on the
> > > following link.
> > > 
> > > 
> > > <https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > vols.tar.gz<https://1drv.ms/u/s!Am_io8oW4r10bw5KMtEtKgpcRoI>
> > > 
> > > 
> > > 
> > > From: Ramesh Nachimuthu <rnachimu at redhat.com>
> > > Sent: Monday, December 12, 2016 09:32
> > > To: Giuseppe Ragusa
> > > Cc: users at ovirt.org
> > > Subject: Re: [ovirt-users] Python stack trace for VDSM while monitoring
> > > GlusterFS volumes in HC HE oVirt 3.6.7 / GlusterFS 3.7.17
> > > 
> > > On 12/09/2016 08:50 PM, Giuseppe Ragusa wrote:
> > > > Hi all,
> > > >
> > > > I'm writing to ask about the following problem (in a HC HE oVirt 3.6.7
> > > > GlusterFS 3.7.17 3-hosts-replica-with-arbiter sharded-volumes setup all
> > > > on
> > > > CentOS 7.2):
> > > >
> > > >  From /var/log/messages:
> > > >
> > > > Dec  9 15:27:46 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:Extracting
> > > > Engine
> > > > VM OVF from the OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore:OVF_STORE volume
> > > > path:
> > > > /rhev/data-center/mnt/glusterSD/shockley.gluster.private:_enginedomain/1d60fd45-507d-4a78-8294-d642b3178ea3/images/22a172de-698e-4cc5-bff0-082882fb3347/8738287c-8a25-4a2a-a53a-65c366a972a1
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Found
> > > > an OVF for HE VM, trying to convert
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config:Got
> > > > vm.conf from OVF_STORE
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Current
> > > > state
> > > > EngineUp (score: 3400)
> > > > Dec  9 15:27:47 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Best
> > > > remote
> > > > host read.mgmt.private (id: 2, score: 3400)
> > > > Dec  9 15:27:48 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > established
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > closed
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > established
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > closed
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > established
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > closed
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > established
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > closed
> > > > Dec  9 15:27:48 shockley ovirt-ha-broker:
> > > > INFO:mem_free.MemFree:memFree:
> > > > 7392
> > > > Dec  9 15:27:50 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:52 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:54 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:55 shockley ovirt-ha-broker:
> > > > INFO:cpu_load_no_engine.EngineHealth:System load total=0.1234,
> > > > engine=0.0364, non-engine=0.0869
> > > > Dec  9 15:27:57 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Initializing
> > > > VDSM
> > > > Dec  9 15:27:57 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Connecting
> > > > the storage
> > > > Dec  9 15:27:58 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > > Dec  9 15:27:58 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > > storage server
> > > > Dec  9 15:27:58 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Connecting
> > > > storage server
> > > > Dec  9 15:27:59 shockley ovirt-ha-agent:
> > > > INFO:ovirt_hosted_engine_ha.lib.storage_server.StorageServer:Refreshing
> > > > the storage domain
> > > > Dec  9 15:27:59 shockley ovirt-ha-broker:
> > > > INFO:ovirt_hosted_engine_ha.broker.listener.ConnectionHandler:Connection
> > > > established
> > > > Dec  9 15:27:59 shockley journal: vdsm jsonrpc.JsonRpcServer ERROR
> > > > Internal
> > > > server error#012Traceback (most recent call last):#012  File
> > > > "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 533, in
> > > > _serveRequest#012    res = method(**params)#012  File
> > > > "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod#012
> > > > result
> > > > = fn(*methodArgs)#012  File "/usr/share/vdsm/gluster/apiwrapper.py",
> > > > line
> > > > 117, in status#012    return self._gluster.volumeStatus(volumeName,
> > > > brick,
> > > > statusOption)#012  File "/usr/share/vdsm/gluster/api.py", line 86, in
> > > > wrapper#012    rv = func(*args, **kwargs)#012  File
> > > > "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus#012
> > > > statusOption)#012  File "/usr/share/vdsm/supervdsm.py", line 50, in
> > > > __call__#012    return callMethod()#012  File
> > > > "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>#012
> > > > **kwargs)#012
> > > > File "<string>", line 2, in glusterVolumeStatus#012  File
> > > > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _ca
> > > >   llmethod#012    raise convert_to_error(kind, result)#012KeyError:
> > > >   'device'
> > > >
> > > >  From /var/log/vdsm/vdsm.log:
> > > >
> > > > jsonrpc.Executor/1::ERROR::2016-12-09
> > > > 15:27:46,870::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/5::ERROR::2016-12-09
> > > > 15:27:48,627::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/7::ERROR::2016-12-09
> > > > 15:27:50,164::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/0::ERROR::2016-12-09
> > > > 15:27:52,804::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/5::ERROR::2016-12-09
> > > > 15:27:54,679::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/2::ERROR::2016-12-09
> > > > 15:27:58,349::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > > jsonrpc.Executor/4::ERROR::2016-12-09
> > > > 15:27:59,169::__init__::538::jsonrpc.JsonRpcServer::(_serveRequest)
> > > > Internal server error
> > > > Traceback (most recent call last):
> > > >    File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line
> > > >    533,
> > > >    in _serveRequest
> > > >      res = method(**params)
> > > >    File "/usr/share/vdsm/rpc/Bridge.py", line 275, in _dynamicMethod
> > > >      result = fn(*methodArgs)
> > > >    File "/usr/share/vdsm/gluster/apiwrapper.py", line 117, in status
> > > >      return self._gluster.volumeStatus(volumeName, brick, statusOption)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 86, in wrapper
> > > >      rv = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/api.py", line 407, in volumeStatus
> > > >      statusOption)
> > > >    File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> > > >      return callMethod()
> > > >    File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> > > >      **kwargs)
> > > >    File "<string>", line 2, in glusterVolumeStatus
> > > >    File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773,
> > > >    in
> > > >    _callmethod
> > > >      raise convert_to_error(kind, result)
> > > > KeyError: 'device'
> > > >
> > > >  From /var/log/vdsm/supervdsm.log:
> > > >
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/5::ERROR::2016-12-09
> > > > 15:27:48,625::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/7::ERROR::2016-12-09
> > > > 15:27:50,163::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/0::ERROR::2016-12-09
> > > > 15:27:52,803::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/5::ERROR::2016-12-09
> > > > 15:27:54,677::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/2::ERROR::2016-12-09
> > > > 15:27:58,348::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > > MainProcess|jsonrpc.Executor/4::ERROR::2016-12-09
> > > > 15:27:59,168::supervdsmServer::120::SuperVdsm.ServerCallback::(wrapper)
> > > > Error in wrapper
> > > > Traceback (most recent call last):
> > > >    File "/usr/share/vdsm/supervdsmServer", line 118, in wrapper
> > > >      res = func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/supervdsmServer", line 534, in wrapper
> > > >      return func(*args, **kwargs)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 352, in volumeStatus
> > > >      return _parseVolumeStatusDetail(xmltree)
> > > >    File "/usr/share/vdsm/gluster/cli.py", line 216, in
> > > >    _parseVolumeStatusDetail
> > > >      'device': value['device'],
> > > > KeyError: 'device'
> > > >
> > > > Please note that the whole oVirt cluster is (apparently) working as it
> > > > should, but due to a known limitation with the split-GlusterFS-network
> > > > setup (http://lists.ovirt.org/pipermail/users/2016-August/042119.html,
> > > > solved in https://gerrit.ovirt.org/#/c/60083/ but maybe not backported
> > > > to 3.6.x, or present only in nightlies later than 3.6.7, right?) the
> > > > GlusterFS volumes are being managed from the hosts' command line only,
> > > > while the oVirt Engine web UI is used only to monitor them.
> > > >
> > > > The GlusterFS part is currently experiencing some recurring NFS crashes
> > > > (using the internal GlusterFS NFS support, not NFS-Ganesha), as reported
> > > > on the Gluster users mailing list and in Bugzilla
> > > > (http://www.gluster.org/pipermail/gluster-users/2016-December/029357.html
> > > > and https://bugzilla.redhat.com/show_bug.cgi?id=1381970, without any
> > > > feedback so far...) but only on non-oVirt-related volumes.
> > > >
> > > > Finally, I can confirm that checking all oVirt-related and
> > > > non-oVirt-related GlusterFS volumes from the hosts' command line with:
> > > >
> > > > vdsClient -s localhost glusterVolumeStatus volumeName=<volume-name>
> > > 
> > > Can you post the output of 'gluster volume status <vol-name> detail
> > > --xml'?
> > > 
> > > Regards,
> > > Ramesh
> > > 
> > > Hi Ramesh,
> > > 
> > > Please find attached all the output produced with the following command:
> > > 
> > > for vol in $(gluster volume list); do gluster volume status ${vol} detail
> > > --xml > ${vol}.xml; res=$?; echo "Exit ${res} for volume ${vol}"; done
> > > 
> > > Please note that the exit code was always zero.
> > > 
> > 
> > +gluster-users
> > 
> > This seems to be a bug in GlusterFS 3.7.17. The output of 'gluster volume
> > status <vol-name> detail --xml' should have a <device> element for all
> > the bricks in the volume, but it is missing for the arbiter brick. This
> > issue is not reproducible in GlusterFS 3.8.
> 
> Do I need to open a GlusterFS bug for this on 3.7?
> Looking at the changelog, it does not seem to have been fixed in 3.7.18 nor
> to be among the already known issues.
> 

Please open a bug against GlusterFS 3.7.17.
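
For reference, the crash comes from vdsm's _parseVolumeStatusDetail assuming that every brick's <node> carries a <device> element. Until a fixed GlusterFS build is in place, that XML can be parsed defensively instead. The sketch below is only illustrative: the element layout mirrors the 'gluster volume status <vol-name> detail --xml' output, but the helper function itself is hypothetical, not the actual vdsm code.

```python
import xml.etree.ElementTree as ET

# Trimmed-down sample of the --xml status output; the second (arbiter)
# brick lacks <device>, as observed on GlusterFS 3.7.17.
SAMPLE = """<cliOutput>
  <volStatus><volumes><volume>
    <node><hostname>host1</hostname><path>/brick1</path>
          <device>/dev/sda1</device><blockSize>4096</blockSize></node>
    <node><hostname>host3</hostname><path>/arbiter</path>
          <blockSize>4096</blockSize></node>
  </volume></volumes></volStatus>
</cliOutput>"""

def parse_brick_details(xml_text):
    """Collect per-brick details, tolerating a missing <device> element."""
    root = ET.fromstring(xml_text)
    bricks = []
    for node in root.findall('volStatus/volumes/volume/node'):
        bricks.append({
            'hostname': node.findtext('hostname'),
            'path': node.findtext('path'),
            # findtext() returns the default instead of raising KeyError
            'device': node.findtext('device', default='N/A'),
            'blockSize': node.findtext('blockSize'),
        })
    return bricks

bricks = parse_brick_details(SAMPLE)
print(bricks[1]['device'])  # arbiter brick: prints 'N/A' instead of crashing
```

The vdsm code instead builds its dict with value['device'], which is exactly where the KeyError: 'device' in the tracebacks above originates.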

> On the oVirt side: is GlusterFS 3.8 compatible with oVirt 3.6.x (maybe with x
> > 7, i.e. using nightly snapshots)?
> 

You can upgrade to GlusterFS 3.8. It is compatible with oVirt 3.6.

Note: You may have to add the GlusterFS 3.8 repo manually from https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/.
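
On CentOS 7 something along these lines should work; this is only a sketch, and the exact repo file name under that directory may differ, so check the URL above first:

```shell
# Hypothetical sketch for CentOS 7: fetch the 3.8 repo definition and
# update the gluster packages. Verify the exact file name/path under
# https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/ first.
curl -o /etc/yum.repos.d/glusterfs-38.repo \
    https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/CentOS/glusterfs-epel.repo
yum clean metadata
yum update 'glusterfs*'   # on an existing node; stop/heal bricks as appropriate
```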

Regards,
Ramesh

> Many thanks.
> 
> Regards,
> Giuseppe
> 
> > Regards,
> > Ramesh
> > 
> > 
> > > Many thanks for your help.
> > > 
> > > Best regards,
> > > Giuseppe
> > > 
> > > 
> > > >
> > > > always succeeds without errors.
> > > >
> > > > Many thanks in advance for any advice (please note that I'm planning to
> > > > upgrade from 3.6.7 to the latest nightly 3.6.10.x as soon as the
> > > > corresponding RHEV gets announced, then later on all the way up to
> > > > 4.1.0 as soon as it stabilizes; on the GlusterFS side I'd like to
> > > > upgrade ASAP to 3.8.x but I cannot find any hint about oVirt 3.6.x
> > > > compatibility...).
> > > >
> > > > Best regards,
> > > > Giuseppe
> > > >
> > > > PS: please keep my address in to/copy since I still have problems
> > > > receiving
> > > > oVirt mailing list messages on Hotmail.
> > > >
> > > >
> > > > _______________________________________________
> > > > Users mailing list
> > > > Users at ovirt.org
> > > > http://lists.phx.ovirt.org/mailman/listinfo/users
> > > 
> > > 
> > > 
> 


More information about the Gluster-users mailing list