[Gluster-devel] glusterfsd crash when using quota without io-threads

Sanoj Unnikrishnan sunnikri at redhat.com
Thu Jun 8 11:02:51 UTC 2017


I would still be worried about the invalid reads/writes. IMO, whether an
illegal access causes a crash depends on whether the page it touches is
currently mapped.
So it could be that there is a use-after-free / out-of-bounds access in
the code, and the location it touches happens to land on an unmapped
page only when io-threads is not loaded.
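
For illustration, a minimal standalone sketch (not GlusterFS code) of
that point: valgrind flags the bad access either way, but the process
only dies if the page has actually been unmapped.

/* uaf-demo.c -- gcc -g uaf-demo.c -o uaf-demo && valgrind ./uaf-demo */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
        char *p = malloc (64);

        strcpy (p, "still mapped");
        free (p);

        /* Use after free: valgrind reports "Invalid read" here, but
         * the process usually does not crash, because glibc keeps
         * small freed chunks on a free list and the page stays
         * mapped.  The same access would SIGSEGV only if the
         * allocator had returned the page to the kernel -- which is
         * why a change in allocation pattern (such as loading or
         * unloading io-threads) can hide or expose the bug. */
        printf ("%s\n", p);
        return 0;
}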

Could you please share the valgrind logs as well?

On Wed, Jun 7, 2017 at 8:22 PM, Kinglong Mee <kinglongmee at gmail.com> wrote:

> After deleting io-threads from the volfiles, quota operations
> (list/set/modify) make glusterfsd crash.
> I am using CentOS 7 (CentOS Linux release 7.3.1611) with glusterfs 3.8.12.
> It looks like stack corruption; when testing with the following diff,
> glusterfsd runs correctly.
>
> There are two questions:
> 1. When run under valgrind, glusterfsd shows many "Invalid read/write"
>    errors with io-threads loaded.
>    Why does glusterfsd run correctly with io-threads but crash without it?
>
> 2. With the following diff applied, valgrind still shows many "Invalid
>    read/write" errors without io-threads,
>    but there is no crash at all.
>
> Any comments are welcome.
>
> Reverting http://review.gluster.org/11499 seems better than the diff.
>
> diff --git a/xlators/features/marker/src/marker-quota.c b/xlators/features/marker/src/marker-quota.c
> index 902b8e5..f3d2507 100644
> --- a/xlators/features/marker/src/marker-quota.c
> +++ b/xlators/features/marker/src/marker-quota.c
> @@ -1075,7 +1075,7 @@ mq_synctask1 (xlator_t *this, synctask_fn_t task, gf_boolean_t spawn,
>          }
>
>          if (spawn) {
> -                ret = synctask_new1 (this->ctx->env, 1024 * 16, task,
> +                ret = synctask_new1 (this->ctx->env, 0, task,
>                                       mq_synctask_cleanup, NULL, args);
>                  if (ret) {
>                          gf_log (this->name, GF_LOG_ERROR, "Failed to spawn "
>
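> (For context: the second argument to synctask_new1() is the stack size
> for the synctask; 1024 * 16 gives each quota synctask a fixed 16 KB
> stack, while 0 falls back to the syncenv default.  Below is a
> standalone sketch of how a too-small ucontext stack, the mechanism
> synctask uses underneath, corrupts neighbouring heap memory.  It is
> not GlusterFS code, and the guard layout and sizes are illustrative
> assumptions; exact behaviour depends on the compiler and allocator.)
>
> /* tiny-stack-demo.c -- gcc tiny-stack-demo.c -o tiny-stack-demo */
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
> #include <ucontext.h>
>
> #define GUARD     64
> #define STACKSIZE (16 * 1024)           /* like 1024 * 16 in the diff */
>
> static ucontext_t main_ctx, task_ctx;
> static unsigned char *guard;            /* bytes just below the stack */
>
> static void
> task_fn (void)
> {
>         char big[STACKSIZE + 2048];     /* deeper than the stack itself */
>
>         /* This frame already extends below ss_sp; filling it smashes
>          * whatever happens to live there.  Whether that crashes now,
>          * later, or never depends on what is mapped below the stack --
>          * the same lottery the glusterfsd runs are playing. */
>         memset (big, 0xaa, sizeof (big));
>
>         swapcontext (&task_ctx, &main_ctx);
> }
>
> int
> main (void)
> {
>         unsigned char *buf = malloc (GUARD + STACKSIZE);
>         int i, smashed = 0;
>
>         guard = buf;
>         memset (guard, 0x55, GUARD);
>
>         getcontext (&task_ctx);
>         task_ctx.uc_stack.ss_sp = buf + GUARD;
>         task_ctx.uc_stack.ss_size = STACKSIZE;
>         task_ctx.uc_link = &main_ctx;
>         makecontext (&task_ctx, task_fn, 0);
>         swapcontext (&main_ctx, &task_ctx);
>
>         for (i = 0; i < GUARD; i++)
>                 if (guard[i] != 0x55)
>                         smashed = 1;
>         printf ("guard bytes %s\n", smashed ?
>                 "overwritten: the task overflowed its 16 KB stack" :
>                 "intact");
>
>         /* free() will now most likely abort inside glibc's heap
>          * consistency checks -- the same kind of failure as the
>          * __gf_free() abort in the backtraces below. */
>         free (buf);
>         return 0;
> }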
> ----------------------------------- test steps ----------------------------------
> 1. gluster volume create gvtest node1:/test/ node2:/test/
> 2. gluster volume start gvtest
> 3. gluster volume quota enable gvtest
>
> 4. "deletes io-threads from all vols"
> 5. reboot node1 and node2.
> 6. sh quota-set.sh
>
> # cat quota-set.sh
> gluster volume quota gvtest list
> gluster volume quota gvtest limit-usage / 10GB
> gluster volume quota gvtest limit-usage /1234 1GB
> gluster volume quota gvtest limit-usage /hello 1GB
> gluster volume quota gvtest limit-usage /test 1GB
> gluster volume quota gvtest limit-usage /xyz 1GB
> gluster volume quota gvtest list
> gluster volume quota gvtest remove /hello
> gluster volume quota gvtest remove /test
> gluster volume quota gvtest list
> gluster volume quota gvtest limit-usage /test 1GB
> gluster volume quota gvtest remove /xyz
> gluster volume quota gvtest list
>
> ----------------------- glusterfsd crash without the diff --------------------------------
>
> /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf5)[0x7f6e1e950af1]
> /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x21f)[0x7f6e1e956943]
> /usr/local/sbin/glusterfsd(glusterfsd_print_trace+0x1f)[0x409c83]
> /lib64/libc.so.6(+0x35250)[0x7f6e1d025250]
> /lib64/libc.so.6(gsignal+0x37)[0x7f6e1d0251d7]
> /lib64/libc.so.6(abort+0x148)[0x7f6e1d0268c8]
> /lib64/libc.so.6(+0x74f07)[0x7f6e1d064f07]
> /lib64/libc.so.6(+0x7baf5)[0x7f6e1d06baf5]
> /lib64/libc.so.6(+0x7c3e6)[0x7f6e1d06c3e6]
> /usr/local/lib/libglusterfs.so.0(__gf_free+0x311)[0x7f6e1e981327]
> /usr/local/lib/libglusterfs.so.0(synctask_destroy+0x82)[0x7f6e1e995c20]
> /usr/local/lib/libglusterfs.so.0(synctask_done+0x25)[0x7f6e1e995c47]
> /usr/local/lib/libglusterfs.so.0(synctask_switchto+0xcf)[0x7f6e1e996585]
> /usr/local/lib/libglusterfs.so.0(syncenv_processor+0x60)[0x7f6e1e99663d]
> /lib64/libpthread.so.0(+0x7dc5)[0x7f6e1d7a2dc5]
> /lib64/libc.so.6(clone+0x6d)[0x7f6e1d0e773d]
>
> or
>
> package-string: glusterfs 3.8.12
> /usr/local/lib/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xf5)[0x7fa15e623af1]
> /usr/local/lib/libglusterfs.so.0(gf_print_trace+0x21f)[0x7fa15e629943]
> /usr/local/sbin/glusterfsd(glusterfsd_print_trace+0x1f)[0x409c83]
> /lib64/libc.so.6(+0x35250)[0x7fa15ccf8250]
> /lib64/libc.so.6(gsignal+0x37)[0x7fa15ccf81d7]
> /lib64/libc.so.6(abort+0x148)[0x7fa15ccf98c8]
> /lib64/libc.so.6(+0x74f07)[0x7fa15cd37f07]
> /lib64/libc.so.6(+0x7dd4d)[0x7fa15cd40d4d]
> /lib64/libc.so.6(__libc_calloc+0xb4)[0x7fa15cd43a14]
> /usr/local/lib/libglusterfs.so.0(__gf_calloc+0xa7)[0x7fa15e653a5f]
> /usr/local/lib/libglusterfs.so.0(iobref_new+0x2b)[0x7fa15e65875a]
> /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xa98c)[0x7fa153a8398c]
> /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xacbc)[0x7fa153a83cbc]
> /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xad10)[0x7fa153a83d10]
> /usr/local/lib/glusterfs/3.8.12/rpc-transport/socket.so(+0xb2a7)[0x7fa153a842a7]
> /usr/local/lib/libglusterfs.so.0(+0x97ea9)[0x7fa15e68eea9]
> /usr/local/lib/libglusterfs.so.0(+0x982c6)[0x7fa15e68f2c6]
> /lib64/libpthread.so.0(+0x7dc5)[0x7fa15d475dc5]
> /lib64/libc.so.6(clone+0x6d)[0x7fa15cdba73d]
>