[Bugs] [Bug 1532868] gluster upgrade causes vm disk errors
bugzilla at redhat.com
Fri Feb 23 07:00:10 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1532868
--- Comment #5 from Krutika Dhananjay <kdhananj at redhat.com> ---
(In reply to David Galloway from comment #4)
> I just hit this bug a few hours after upgrading from
> glusterfs-3.8.4-18.el7rhgs.x86_64 to 3.8.4-54.el7rhgs.x86_64.
>
> The VM started back up fine after manually setting it to Up.
>
> I have sosreports from the RHV Manager, the hypervisor, and one of the
> Gluster nodes if it'd help.
>
> === RHV Manager ===
>
> 2018-02-22 19:29:17,200Z INFO
> [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
> (ForkJoinPool-1-worker-4) [] VM
> 'da38e6f8-9ccb-4ce0-95f0-afc98a171386'(teuthology) moved from 'Up' -->
> 'Paused'
> 2018-02-22 19:29:17,243Z INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_PAUSED(1,025), Correlation ID:
> null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM
> teuthology has been paused.
> 2018-02-22 19:29:17,290Z ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_PAUSED_ERROR(139), Correlation ID:
> null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VM
> teuthology has been paused due to unknown storage error.
>
> === Hypervisor Gluster log ===
>
> [2018-02-22 19:29:17.197805] E
> [shard.c:426:shard_modify_size_and_block_count]
> (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/distribute.so(+0x6a26d)
> [0x7f854855926d]
> -->/usr/lib64/glusterfs/3.8.4/xlator/features/shard.so(+0xb38e)
> [0x7f85482d838e]
> -->/usr/lib64/glusterfs/3.8.4/xlator/features/shard.so(+0xac3b)
> [0x7f85482d7c3b] ) 0-ssdstorage-shard: Failed to get
> trusted.glusterfs.shard.file-size for 4a9b76ea-b373-4129-adcb-3a9b04b14ea1
> [2018-02-22 19:29:17.197840] W [fuse-bridge.c:767:fuse_attr_cbk]
> 0-glusterfs-fuse: 686161812: STAT()
> /ba9d818f-aa63-4a96-a9be-8d50d04fe44e/images/e4493c34-2805-41ed-8a1b-
> bfc1a6adad93/6f0ab3fe-e64f-4ad8-8a25-a0881e9713c3 => -1 (Invalid argument)
>
>
> Nothing relevant in vdsm.log aside from this one line:
>
> 2018-02-22 20:29:17,108+0000 INFO (periodic/314) [vdsm.api] START
> getVolumeSize(sdUUID=u'ba9d818f-aa63-4a96-a9be-8d50d04fe44e',
> spUUID=u'28fc87ad-2e28-44d2-8ce4-2e63b9bad4c6',
> imgUUID=u'e4493c34-2805-41ed-8a1b-bfc1a6adad93',
> volUUID=u'6f0ab3fe-e64f-4ad8-8a25-a0881e9713c3', options=None)
> from=internal, task_id=15aec0cf-8854-4959-9e14-3bb14a00dc63 (api:46)
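Thanks. The "Failed to get trusted.glusterfs.shard.file-size" error above means the shard translator could not read the file-size xattr off the base file. As a quick sanity check (the <brick-root> below is a placeholder; substitute your actual brick path), it may be worth confirming the xattr is still present directly on one of the bricks:

getfattr -n trusted.glusterfs.shard.file-size -e hex \
  <brick-root>/ba9d818f-aa63-4a96-a9be-8d50d04fe44e/images/e4493c34-2805-41ed-8a1b-bfc1a6adad93/6f0ab3fe-e64f-4ad8-8a25-a0881e9713c3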
If this bug is consistently reproducible, could you please capture tcpdump
output on the gluster client machine and share it along with the newly
generated client and brick logs?
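(By default the client mount log is under /var/log/glusterfs/ on the client, named after the mount point, and the brick logs are under /var/log/glusterfs/bricks/ on each server; the exact filenames depend on your mount and brick paths.)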
Here's how to capture the tcpdump output:
tcpdump -i <network interface name> -w <output-filename>.pcap
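For example, assuming the client's interface is eth0 and the bricks listen on GlusterFS's default 49152-49251 port range (both are assumptions; adjust to your setup), a capture limited to gluster traffic with full packet payloads would look like:

tcpdump -i eth0 -s 0 -w /tmp/gluster-client.pcap port 24007 or portrange 49152-49251

Port 24007 is the glusterd management port, and -s 0 keeps full packets rather than truncating them at the default snap length.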