[Bugs] [Bug 1672076] New: chrome / chromium crash on gluster, sqlite issue?

bugzilla at redhat.com bugzilla at redhat.com
Sun Feb 3 15:12:43 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1672076

            Bug ID: 1672076
           Summary: chrome / chromium crash on gluster, sqlite issue?
           Product: GlusterFS
           Version: 5
            Status: NEW
         Component: glusterd
          Assignee: bugs at gluster.org
          Reporter: mjc at avtechpulse.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



I run Fedora 29 clients and servers, with user home folders mounted on gluster.
This worked fine with Fedora 27 clients, but on F29 clients the chrome and
chromium browsers crash. The backtrace info (see below) suggests problems with
sqlite. 

Firefox runs just fine, even though it uses sqlite too.

Chromium works fine on clients where the home folder is on a local drive.

- Mike


clients: glusterfs-5.3-1.fc29.x86_64,
chromium-71.0.3578.98-1.fc29.x86_64
server: glusterfs-server-5.3-1.fc29.x86_64
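
For reference, the clients fuse-mount the volume with something like the
following fstab entry (approximate, from memory; the real mount point is
/fileserver2, as the mount log further down shows, with the user home folders
under that mount):

gluster1:/volume1   /fileserver2   glusterfs   defaults,_netdev,backup-volfile-servers=gluster2   0 0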

[root at gluster1 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: 91ef5aed-94be-44ff-a19d-c41682808159
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster/brick1/data
Brick2: gluster2:/gluster/brick2/data
Options Reconfigured:
nfs.disable: on
server.allow-insecure: on
cluster.favorite-child-policy: mtime



[mjc at daisy ~]$ chromium-browser
[18826:18826:0130/094436.431828:ERROR:sandbox_linux.cc(364)]
InitializeSandbox() called with multiple threads in process gpu-process.
[18785:18785:0130/094440.905900:ERROR:x11_input_method_context_impl_gtk.cc(144)]
Not implemented reached in virtual void
libgtkui::X11InputMethodContextImplGtk::SetSurroundingText(const string16&,
const gfx::Range&)
Received signal 7 BUS_ADRERR 7fc30e9bd000
#0 0x7fc34b008261 base::debug::StackTrace::StackTrace()
#1 0x7fc34b00869b base::debug::(anonymous namespace)::StackDumpSignalHandler()
#2 0x7fc34b008cb7 base::debug::(anonymous namespace)::StackDumpSignalHandler()
#3 0x7fc3401fe030 <unknown>
#4 0x7fc33f5820f0 __memmove_avx_unaligned_erms
#5 0x7fc346099491 unixRead
#6 0x7fc3460d2784 readDbPage
#7 0x7fc3460d5e4f getPageNormal
#8 0x7fc3460d5f01 getPageMMap
#9 0x7fc3460958f5 btreeGetPage
#10 0x7fc3460ec47b sqlite3BtreeBeginTrans
#11 0x7fc3460fd1e8 sqlite3VdbeExec
#12 0x7fc3461056af chrome_sqlite3_step
#13 0x7fc3464071c7 sql::Statement::StepInternal()
#14 0x7fc3464072de sql::Statement::Step()
#15 0x555fd21699d7 autofill::AutofillTable::GetAutofillProfiles()
#16 0x555fd2160808
autofill::AutofillProfileSyncableService::MergeDataAndStartSyncing()
#17 0x555fd1d25207 syncer::SharedChangeProcessor::StartAssociation()
#18 0x555fd1d09652
_ZN4base8internal7InvokerINS0_9BindStateIMN6syncer21SharedChangeProcessorEFvNS_17RepeatingCallbackIFvNS3_18DataTypeController15ConfigureResultERKNS3_15SyncMergeResultESA_EEEPNS3_10SyncClientEPNS3_29GenericChangeProcessorFactoryEPNS3_9UserShareESt10unique_ptrINS3_20DataTypeErrorHandlerESt14default_deleteISK_EEEJ13scoped_refptrIS4_ESC_SE_SG_SI_NS0_13PassedWrapperISN_EEEEEFvvEE3RunEPNS0_13BindStateBaseE
#19 0x7fc34af4309d base::debug::TaskAnnotator::RunTask()
#20 0x7fc34afcda86 base::internal::TaskTracker::RunOrSkipTask()
#21 0x7fc34b01b6a2 base::internal::TaskTrackerPosix::RunOrSkipTask()
#22 0x7fc34afd07d6 base::internal::TaskTracker::RunAndPopNextTask()
#23 0x7fc34afca5e7 base::internal::SchedulerWorker::RunWorker()
#24 0x7fc34afcac84 base::internal::SchedulerWorker::RunSharedWorker()
#25 0x7fc34b01aa09 base::(anonymous namespace)::ThreadFunc()
#26 0x7fc3401f358e start_thread
#27 0x7fc33f51d6a3 __GI___clone
  r8: 00000cbfd93d4a00  r9: 00000000cbfd93d4 r10: 000000000000011c r11: 0000000000000000
 r12: 00000cbfd940eb00 r13: 0000000000000000 r14: 0000000000000000 r15: 00000cbfd9336c00
  di: 00000cbfd93d4a00  si: 00007fc30e9bd000  bp: 00007fc30faff7e0  bx: 0000000000000800
  dx: 0000000000000800  ax: 00000cbfd93d4a00  cx: 0000000000000800  sp: 00007fc30faff788
  ip: 00007fc33f5820f0 efl: 0000000000010287 cgf: 002b000000000033 erf: 0000000000000004
 trp: 000000000000000e msk: 0000000000000000 cr2: 00007fc30e9bd000
[end of stack trace]
Calling _exit(1). Core file will not be generated. 
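
The fault is a SIGBUS (signal 7, BUS_ADRERR) inside __memmove_avx_unaligned_erms,
called from SQLite's unixRead()/getPageMMap() path, i.e. while copying database
pages out of an mmap()ed file, so it looks like an mmap-backed read on the fuse
mount is faulting. In case it helps triage, here is a minimal standalone sketch
of that access pattern (plain C, not Chromium code; the default path is only a
placeholder, point it at any file on the gluster mount):

/* mmap_read_test.c - maps a file and copies it out page by page,
 * roughly the same pattern as SQLite's getPageMMap()/unixRead().
 * Build: gcc -O2 -o mmap_read_test mmap_read_test.c
 * The default path below is only a placeholder. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* placeholder path -- pass any file that lives on the gluster mount */
    const char *path = argc > 1 ? argv[1] : "/fileserver2/somefile.db";

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) {
        fprintf(stderr, "fstat failed or empty file\n");
        return 1;
    }

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* copy each page out of the mapping; a SIGBUS here would match the
     * __memmove frame in the chromium backtrace above */
    char buf[4096];
    for (off_t off = 0; off < st.st_size; off += (off_t)sizeof(buf)) {
        size_t n = (st.st_size - off) < (off_t)sizeof(buf)
                       ? (size_t)(st.st_size - off) : sizeof(buf);
        memcpy(buf, (char *)map + off, n);
    }

    printf("read %lld bytes via mmap without SIGBUS\n", (long long)st.st_size);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}

If something like that dies with SIGBUS on the gluster mount but not on a local
drive, it would point at mmap() over fuse rather than at sqlite itself.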



A client mount log is below, although the full log contains megabytes of messages like:

The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 20178 times between [2019-01-31
13:44:14.962950] and [2019-01-31 13:46:00.013310]

and

[2019-01-31 13:46:07.470163] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]

so I've only included the start of the log. I guess those warnings are related to
https://bugzilla.redhat.com/show_bug.cgi?id=1651246.
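
(If it helps narrow that down, the two translators in those dict_ref backtraces
can be toggled per volume, e.g. "gluster volume set volume1 performance.quick-read off"
and "gluster volume set volume1 performance.io-cache off"; they are still at
their defaults in the volume info above.)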

- Mike




Mount log:

[2019-01-31 13:44:00.775353] I [MSGID: 100030] [glusterfsd.c:2715:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 5.3 (args:
/usr/sbin/glusterfs --process-name fuse --volfile-server=gluster1
--volfile-server=gluster2 --volfile-id=/volume1 /fileserver2)
[2019-01-31 13:44:00.817140] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2019-01-31 13:44:00.926491] I [MSGID: 101190]
[event-epoll.c:622:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 2
[2019-01-31 13:44:00.928102] I [MSGID: 114020] [client.c:2354:notify]
0-volume1-client-0: parent translators are ready, attempting connect on
transport
[2019-01-31 13:44:00.931063] I [MSGID: 114020] [client.c:2354:notify]
0-volume1-client-1: parent translators are ready, attempting connect on
transport
[2019-01-31 13:44:00.932144] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-volume1-client-0: changing port to 49152 (from 0)
Final graph:
+------------------------------------------------------------------------------+
  1: volume volume1-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host gluster1
  5:     option remote-subvolume /gluster/brick1/data
  6:     option transport-type socket
  7:     option transport.tcp-user-timeout 0
  8:     option transport.socket.keepalive-time 20
  9:     option transport.socket.keepalive-interval 2
 10:     option transport.socket.keepalive-count 9
 11:     option send-gids true
 12: end-volume
 13:
 14: volume volume1-client-1
 15:     type protocol/client
 16:     option ping-timeout 42
 17:     option remote-host gluster2
 18:     option remote-subvolume /gluster/brick2/data
 19:     option transport-type socket
 20:     option transport.tcp-user-timeout 0
 21:     option transport.socket.keepalive-time 20
 22:     option transport.socket.keepalive-interval 2
 23:     option transport.socket.keepalive-count 9
 24:     option send-gids true
 25: end-volume
 26:
 27: volume volume1-replicate-0
 28:     type cluster/replicate
 29:     option afr-pending-xattr volume1-client-0,volume1-client-1
 30:     option favorite-child-policy mtime
 31:     option use-compound-fops off
 32:     subvolumes volume1-client-0 volume1-client-1
 33: end-volume
 34:
 35: volume volume1-dht
 36:     type cluster/distribute
[2019-01-31 13:44:00.932495] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
 37:     option lock-migration off
 38:     option force-migration off
 39:     subvolumes volume1-replicate-0
 40: end-volume
 41:
 42: volume volume1-write-behind
 43:     type performance/write-behind
 44:     subvolumes volume1-dht
 45: end-volume
 46:
 47: volume volume1-read-ahead
 48:     type performance/read-ahead
 49:     subvolumes volume1-write-behind
 50: end-volume
 51:
 52: volume volume1-readdir-ahead
 53:     type performance/readdir-ahead
 54:     option parallel-readdir off
 55:     option rda-request-size 131072
 56:     option rda-cache-limit 10MB
 57:     subvolumes volume1-read-ahead
 58: end-volume
 59:
 60: volume volume1-io-cache
 61:     type performance/io-cache
 62:     subvolumes volume1-readdir-ahead
 63: end-volume
 64:
 65: volume volume1-quick-read
 66:     type performance/quick-read
 67:     subvolumes volume1-io-cache
 68: end-volume
 69:
 70: volume volume1-open-behind
 71:     type performance/open-behind
 72:     subvolumes volume1-quick-read
 73: end-volume
 74:
 75: volume volume1-md-cache
 76:     type performance/md-cache
 77:     subvolumes volume1-open-behind
 78: end-volume
 79:
 80: volume volume1
 81:     type debug/io-stats
 82:     option log-level INFO
 83:     option latency-measurement off
 84:     option count-fop-hits off
 85:     subvolumes volume1-md-cache
 86: end-volume
 87:
 88: volume meta-autoload
 89:     type meta
 90:     subvolumes volume1
 91: end-volume
 92:
+------------------------------------------------------------------------------+
[2019-01-31 13:44:00.933375] I [rpc-clnt.c:2042:rpc_clnt_reconfig]
0-volume1-client-1: changing port to 49152 (from 0)
[2019-01-31 13:44:00.933549] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-31 13:44:00.934170] I [MSGID: 114046]
[client-handshake.c:1107:client_setvolume_cbk] 0-volume1-client-0: Connected to
volume1-client-0, attached to remote volume '/gluster/brick1/data'.
[2019-01-31 13:44:00.934210] I [MSGID: 108005]
[afr-common.c:5237:__afr_handle_child_up_event] 0-volume1-replicate-0:
Subvolume 'volume1-client-0' came back up; going online.
[2019-01-31 13:44:00.935291] I [MSGID: 114046]
[client-handshake.c:1107:client_setvolume_cbk] 0-volume1-client-1: Connected to
volume1-client-1, attached to remote volume '/gluster/brick2/data'.
[2019-01-31 13:44:00.937661] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse:
FUSE inited with protocol versions: glusterfs 7.24 kernel 7.28
[2019-01-31 13:44:00.937691] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse:
switched to graph 0
[2019-01-31 13:44:14.852144] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:14.962950] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2019-01-31 13:44:15.038615] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.040956] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.041044] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.041467] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.471018] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.477003] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.482380] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.487047] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.603624] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.607726] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]
[2019-01-31 13:44:15.607906] W [dict.c:761:dict_ref]
(-->/usr/lib64/glusterfs/5.3/xlator/performance/quick-read.so(+0x7c45)
[0x7fb0e0b49c45]
-->/usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xaba1)
[0x7fb0e0b5cba1] -->/lib64/libglusterfs.so.0(dict_ref+0x60) [0x7fb0f2457c40] )
0-dict: dict is NULL [Invalid argument]

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

