[Gluster-users] deleted files make bricks full ?

Shehjar Tikoo shehjart at gluster.com
Wed Aug 17 06:06:55 UTC 2011


Thanks for providing the exact steps. This is a bug. We're on it.

-Shehjar
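Background for list readers: on POSIX filesystems, unlinking a file does not free its blocks while any process (here, the brick's glusterfsd) still keeps an open descriptor to it; the space comes back only when the last descriptor is closed. A minimal Python sketch of that mechanism, purely illustrative:

```python
import os
import tempfile

# Create a file, keep a descriptor open, then unlink the name (as the NFS
# client's rm does while glusterfsd still holds the brick-side fd).
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * (1 << 20))      # 1 MiB stand-in for the 1GB 'foo'
os.unlink(path)                      # the directory entry is gone ...

st = os.fstat(fd)
assert st.st_nlink == 0              # no name left anywhere
assert st.st_size == 1 << 20         # ... but the data blocks are still held

os.close(fd)                         # only now can the filesystem free them
```

This is why du on the brick sees almost nothing while df keeps growing in the logs below.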

Tomoaki Sato wrote:
> a simple way to reproduce the issue:
> 1) NFS mount, create 'foo', and umount.
> 2) NFS mount, delete 'foo', and umount.
> 3) repeat 1) and 2) until ENOSPC.
> 
> the command logs follow:
> [root at vhead-010 ~]# rpm -qa | grep gluster
> glusterfs-fuse-3.1.5-1
> glusterfs-core-3.1.5-1
> [root at vhead-010 ~]# cat /etc/issue
> CentOS release 5.6 (Final)
> Kernel \r on an \m
> 
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 17.8419 seconds, 60.2 MB/s
> [root at vhead-010 ~]# ls -l /mnt/foo
> -rw-r--r-- 1 root root 1073741824 Aug  2 08:14 /mnt/foo
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   1241864 101970456   2% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# rm -f /mnt/foo
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   1241864 101970456   2% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# ssh small-1-4-private
> [root at localhost ~]# du /mnt/brick
> 16      /mnt/brick/lost+found
> 24      /mnt/brick
> [root at localhost ~]# ps ax | grep glusterfsd | grep -v grep
>  7246 ?        Ssl    0:03 /opt/glusterfs/3.1.5/sbin/glusterfsd --xlator-option small-server.listen-port=24009 -s localhost --volfile-id small.small-1-4-private.mnt-brick -p /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid --brick-name /mnt/brick --brick-port 24009 -l /var/log/glusterfs/bricks/mnt-brick.log
> 
> [root at localhost ~]# ls -l /proc/7246/fd
> total 0
> lrwx------ 1 root root 64 Aug  2 08:18 0 -> /dev/null
> lrwx------ 1 root root 64 Aug  2 08:18 1 -> /dev/null
> lrwx------ 1 root root 64 Aug  2 08:18 10 -> socket:[153304]
> lrwx------ 1 root root 64 Aug  2 08:18 11 -> socket:[153306]
> lrwx------ 1 root root 64 Aug  2 08:18 12 -> socket:[153388]
> lrwx------ 1 root root 64 Aug  2 08:18 13 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug  2 08:18 2 -> /dev/null
> lr-x------ 1 root root 64 Aug  2 08:18 3 -> eventpoll:[153252]
> l-wx------ 1 root root 64 Aug  2 08:18 4 -> /var/log/glusterfs/bricks/mnt-brick.log
> lrwx------ 1 root root 64 Aug  2 08:18 5 -> /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid
> lrwx------ 1 root root 64 Aug  2 08:18 6 -> socket:[153257]
> lrwx------ 1 root root 64 Aug  2 08:18 7 -> socket:[153301]
> lrwx------ 1 root root 64 Aug  2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
> lrwx------ 1 root root 64 Aug  2 08:18 9 -> socket:[153297]
> [root at localhost ~]# exit
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# dd if=/dev/zero of=/mnt/foo bs=1M count=1024
> 1024+0 records in
> 1024+0 records out
> 1073741824 bytes (1.1 GB) copied, 21.4717 seconds, 50.0 MB/s
> [root at vhead-010 ~]# ls -l /mnt/foo
> -rw-r--r-- 1 root root 1073741824 Aug  2 08:19 /mnt/foo
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   2291472 100920848   3% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# mount small:/small /mnt
> [root at vhead-010 ~]# rm -f /mnt/foo
> [root at vhead-010 ~]# ls /mnt
> [root at vhead-010 ~]# umount /mnt
> [root at vhead-010 ~]# for i in small-{1..4}-4-private; do ssh $i df /mnt/brick; done
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cde:00002cdf:8
>                      103212320   2291472 100920848   3% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002ceb:00002cec:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002cf8:00002cf9:8
>                      103212320    192256 103020064   1% /mnt/brick
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/mapper/00002d05:00002d06:8
>                      103212320    192256 103020064   1% /mnt/brick
> [root at vhead-010 ~]# ssh small-1-4-private ls -l /proc/7246/fd
> total 0
> lrwx------ 1 root root 64 Aug  2 08:18 0 -> /dev/null
> lrwx------ 1 root root 64 Aug  2 08:18 1 -> /dev/null
> lrwx------ 1 root root 64 Aug  2 08:18 10 -> socket:[153304]
> lrwx------ 1 root root 64 Aug  2 08:18 11 -> socket:[153306]
> lrwx------ 1 root root 64 Aug  2 08:18 12 -> socket:[153388]
> lrwx------ 1 root root 64 Aug  2 08:18 13 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug  2 08:21 14 -> /mnt/brick/foo (deleted) <====
> lrwx------ 1 root root 64 Aug  2 08:18 2 -> /dev/null
> lr-x------ 1 root root 64 Aug  2 08:18 3 -> eventpoll:[153252]
> l-wx------ 1 root root 64 Aug  2 08:18 4 -> /var/log/glusterfs/bricks/mnt-brick.log
> lrwx------ 1 root root 64 Aug  2 08:18 5 -> /etc/glusterd/vols/small/run/small-1-4-private-mnt-brick.pid
> lrwx------ 1 root root 64 Aug  2 08:18 6 -> socket:[153257]
> lrwx------ 1 root root 64 Aug  2 08:18 7 -> socket:[153301]
> lrwx------ 1 root root 64 Aug  2 08:18 8 -> /tmp/tmpfpuXk7N (deleted)
> lrwx------ 1 root root 64 Aug  2 08:18 9 -> socket:[153297]
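Each create/delete cycle leaves another deleted-but-open descriptor behind (fd 13, then fd 14 in the listing above). A Linux-only Python sketch of how one could measure the space such stale descriptors pin on a brick (the pid and the size in the comment are just this log's values):

```python
import os

def pinned_bytes(pid):
    """Total size of files a process still holds open after they were unlinked."""
    fd_dir = "/proc/%d/fd" % pid
    total = 0
    for name in os.listdir(fd_dir):
        link = os.path.join(fd_dir, name)
        try:
            # Linux marks unlinked targets with a " (deleted)" suffix.
            if os.readlink(link).endswith(" (deleted)"):
                total += os.stat(link).st_size   # stat through the fd link
        except OSError:
            pass  # fd closed (or process exited) mid-scan
    return total

# For the glusterfsd above this would count both deleted copies of foo,
# i.e. roughly 2 GiB still pinned on the brick.
```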
> 
> Tomo Sato
> 
> (2011/08/02 7:14), Tomoaki Sato wrote:
>> Hi,
>>
>> My simple test program, which repeatedly creates, writes, reads, and
>> deletes 64 1GB files on 100GB x 4 bricks from 4 NFS clients, fails with
>> ENOSPC. I found that some glusterfsd processes hold many file
>> descriptors for the same deleted files, and these files fill the bricks.
>> Is this a known issue?
>>
>> Best,
>>
>> Tomo Sato
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 



