[Bugs] [Bug 1369349] enabling trash, then truncating a large file leads to glusterfsd segfault

bugzilla at redhat.com bugzilla at redhat.com
Wed Mar 22 12:37:24 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1369349



--- Comment #5 from Jiffin <jthottan at redhat.com> ---
(In reply to WuVT from comment #4)
> (In reply to Jiffin from comment #3)
> > We cannot use the syncop infra here; it is usually used in the glusterfs
> > client code path. Thanks for pointing out this issue. I will try to
> > reproduce it and let you know.
> 
> Hi Jiffin, I've run into the same problem.
> 1) gluster volume set v1 features.trash on
> 2) gluster volume set v1 features.trash-max-filesize 1GB
> 3) mount -t glusterfs 127.0.0.1:v1 /mnt/test
> 4) dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150
> 5) dd if=/dev/zero of=/mnt/test/d1 bs=1M count=150 (second time)
> After the fifth step, glusterfsd went down.
> I changed the stack size to unlimited, but the problem still exists.
> Here's some info:
> [root at node12 7]# gluster v info v1
>  
> Volume Name: v1
> Type: Distribute
> Volume ID: 50429860-d368-49fe-aa8e-1b06a1ec5a44
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: node12:/data/2fe9ae62-3e0c-4f7f-be1d-d023732e4c36/v2/brick
> Options Reconfigured:
> features.trash-max-filesize: 1GB
> diagnostics.brick-log-level: INFO
> nfs.disable: on
> user.smb: disable
> auth.allow: node12,node13,,
> performance.client-io-threads: on
> performance.io-thread-count: 16
> performance.write-behind: on
> performance.flush-behind: on
> performance.strict-o-direct: on
> performance.write-behind-window-size: 32MB
> performance.io-cache: on
> performance.cache-size: 64MB
> performance.cache-refresh-timeout: 1
> features.trash: on
> diagnostics.client-log-level: INFO
> 
> [root at node12 7]# ulimit -a
> core file size          (blocks, -c) unlimited
> data seg size           (kbytes, -d) unlimited
> scheduling priority             (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 3878
> max locked memory       (kbytes, -l) 64
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> real-time priority              (-r) 0
> stack size              (kbytes, -s) unlimited
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 3878
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited
> 
> Is there any workaround to avoid this problem, or any plan to solve it?
> Thanks!

IMO the findings of jiademing.dd (iesool at 126.com) are correct. The crash
happens only when truncating very large files (files up to about 20M are
fine). The proper fix requires a lot of change in the code base (maybe a
target for 3.11). As a workaround I can send a patch which won't store files
larger than 10M in the trash directory during the truncate operation.
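
For illustration only, here is a minimal sketch of that size check. This is
not the actual trash xlator code: the names should_preserve_in_trash and
GF_TRASH_TRUNCATE_INTERNAL_LIMIT are made up, and the real patch would apply
the check inside the brick-side truncate path of the trash xlator.

/*
 * Hypothetical sketch of the workaround, not actual GlusterFS code:
 * during truncate, only preserve the file in the trash directory when
 * its current size stays below a hard internal ceiling (10M here), in
 * addition to the configured features.trash-max-filesize.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Made-up internal ceiling for trash preservation during truncate. */
#define GF_TRASH_TRUNCATE_INTERNAL_LIMIT (10ULL * 1024 * 1024) /* 10M */

/*
 * Return true when the data about to be truncated should first be
 * copied into the trash directory.
 */
static bool
should_preserve_in_trash(uint64_t file_size, uint64_t trash_max_filesize)
{
    /* Existing behaviour: honour the configured maximum. */
    if (file_size > trash_max_filesize)
        return false;

    /*
     * Workaround: skip very large files entirely so the brick never
     * walks the deep truncate/copy path that brings it down.
     */
    if (file_size > GF_TRASH_TRUNCATE_INTERNAL_LIMIT)
        return false;

    return true;
}

int
main(void)
{
    uint64_t trash_max = 1ULL << 30; /* 1GB, as in the volume above */
    uint64_t sizes[] = { 5ULL << 20, 20ULL << 20, 150ULL << 20 }; /* 5M, 20M, 150M */

    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        printf("%3llu MB -> %s\n",
               (unsigned long long)(sizes[i] >> 20),
               should_preserve_in_trash(sizes[i], trash_max)
                   ? "copy to trash, then truncate"
                   : "truncate directly, skip trash");

    return 0;
}

With these numbers, the 150M file from the dd reproducer above would be
truncated directly without a trash copy, which is the behaviour the
workaround patch aims for.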

-- 
You are receiving this mail because:
You are on the CC list for the bug.
Unsubscribe from this bug https://bugzilla.redhat.com/token.cgi?t=4vzImT85al&a=cc_unsubscribe

