[Bugs] [Bug 1369349] enable trash, then truncate a large file leads to glusterfsd segfault

bugzilla at redhat.com
Wed Mar 22 08:38:42 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1369349

WuVT <wzmvincent at sina.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |wzmvincent at sina.com



--- Comment #4 from WuVT <wzmvincent at sina.com> ---
(In reply to Jiffin from comment #3)
> We cannot use the syncop infra here; it is usually used in the glusterfs
> client code path. Thanks for pointing out this issue. I will try to
> reproduce it and let you know.

Hi Jiffin, I've run into the same problem.
1) gluster volume set v1 features.trash on
2) gluster volume set v1 features.trash-max-filesize 1GB
3) mount -t glusterfs 127.0.0.1:v1 /mnt/test
4) dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150
5) dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150 (second time)
After the fifth step, glusterfsd crashed.
I changed the stack size to unlimited, but the problem still occurs.
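For anyone trying to reproduce this, here are the same steps as a single
script (volume name, mount point, and file size are from my setup; adjust
as needed):

#!/bin/sh
# Reproducer for the trash/truncate crash, assuming volume v1 already
# exists and is started. Stack size was raised with 'ulimit -s unlimited'
# beforehand; the crash persists regardless.
gluster volume set v1 features.trash on
gluster volume set v1 features.trash-max-filesize 1GB
mkdir -p /mnt/test
mount -t glusterfs 127.0.0.1:v1 /mnt/test
dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150
# Without conv=notrunc, the second dd opens d1 with O_TRUNC; the truncate
# of the existing 150MB file is what takes the brick process (glusterfsd)
# down when trash is enabled.
dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150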
Here's some info:
[root@node12 7]# gluster v info v1

Volume Name: v1
Type: Distribute
Volume ID: 50429860-d368-49fe-aa8e-1b06a1ec5a44
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node12:/data/2fe9ae62-3e0c-4f7f-be1d-d023732e4c36/v2/brick
Options Reconfigured:
features.trash-max-filesize: 1GB
diagnostics.brick-log-level: INFO
nfs.disable: on
user.smb: disable
auth.allow: node12,node13,,
performance.client-io-threads: on
performance.io-thread-count: 16
performance.write-behind: on
performance.flush-behind: on
performance.strict-o-direct: on
performance.write-behind-window-size: 32MB
performance.io-cache: on
performance.cache-size: 64MB
performance.cache-refresh-timeout: 1
features.trash: on
diagnostics.client-log-level: INFO

[root@node12 7]# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 3878
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 3878
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Is there any workaround to avoid this problem, or any plan to fix it?
Thanks!
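For now, the only mitigation I can think of is to keep files out of the
trash path entirely. This is untested speculation on my side, based on
the assumption that files larger than features.trash-max-filesize bypass
the trash translator:

# Option 1: disable trash until the crash is fixed.
gluster volume set v1 features.trash off
# Option 2 (assumption: oversized files skip the trash copy on truncate):
# drop the cap below the size of files that get overwritten, e.g. 100MB.
gluster volume set v1 features.trash-max-filesize 100MB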
