[Gluster-users] Problem: rsync files to glusterfs fail randomly~
joel vennin
joel.vennin at gmail.com
Tue May 11 07:24:57 UTC 2010
In my configuration file, I just removed the read-ahead translator
definition. So yes, as you said, you have to remove the volume
definition of read-ahead.
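For reference, a minimal sketch of what the surrounding volfile looks
like after the removal. Every translator name here except read-ahead
is an assumption about a typical client stack; the one fixed rule is
that whatever volume declared "subvolumes read-ahead" must be
repointed at read-ahead's former subvolume (cache):

volume cache
  type performance/io-cache
  # "distribute" is an assumed lower translator name
  subvolumes distribute
end-volume

# the read-ahead block is deleted entirely; nothing replaces it

# hypothetical parent translator: whatever volume used to declare
# "subvolumes read-ahead" now points at cache instead
volume writebehind
  type performance/write-behind
  subvolumes cache
end-volume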
Good luck!
On Mon, May 10, 2010 at 4:28 PM, bonn deng <bonndeng at gmail.com> wrote:
>
> Hi Joel, thanks for your helpful reply! But how can I remove the
> read-ahead translator? Do I simply remove the volume definition of
> read-ahead, as follows? Just want to make sure, thanks~
>
> volume read-ahead
>   type performance/read-ahead
>   option force-atime-update no
>   option page-count 4
>   subvolumes cache
> end-volume
>
> On Mon, May 10, 2010 at 9:58 PM, joel vennin <joel.vennin at gmail.com> wrote:
>
>> Hi,
>>
>> I ran into a similar problem and found a workaround: deactivating
>> the read-ahead translator. Once we removed it, everything worked
>> fine. However, we still have one remaining issue: with distribute, a
>> program is not able to open a file using the fopen function.
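>>
>> If it helps, one way to pin down the exact error is to trace the
>> open() call that fopen() makes under the mount (./your_program below
>> is just a placeholder for the failing binary):
>>
>>   # log only open() syscalls; the errno coming back through the
>>   # FUSE mount shows up at the end of each traced line
>>   strace -e trace=open -o /tmp/open.trace ./your_program
>>   tail /tmp/open.trace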
>>
>> Good luck
>>
>> On Mon, May 10, 2010 at 3:12 PM, bonn deng <bonndeng at gmail.com> wrote:
>>
>>> Hello, everyone~
>>> We're using GlusterFS as our data storage tool. After we upgraded
>>> from version 2.0.7 to 3.0.3, we ran into some weird problems: we
>>> rsync some files to the GlusterFS cluster every five minutes, but
>>> randomly some files are not transferred correctly, or even not
>>> transferred at all. I ssh'd to the machine where the rsync operation
>>> failed and checked the log under /var/log/glusterfs, which reads:
>>>
>>> ……
>>> [2010-05-10 20:32:05] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 4499440: /uigs/sugg/.sugg_access_log.2010051012.10.11.89.102.nginx1.cMi7LW => -1 (No such file or directory)
>>> [2010-05-10 20:32:13] W [fuse-bridge.c:1719:fuse_create_cbk] glusterfs-fuse: 4499542: /sogou-logs/nginx-logs/proxy/.proxy_access_log.2010051019.10.11.89.102.nginx1.MnUaIR => -1 (No such file or directory)
>>>
>>> [2010-05-10 20:35:12] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaaac010fb0, ino=183475774468, gen=5467705122580597717) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
>>> [2010-05-10 20:35:16] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x1d783b0, ino=245151107323, gen=5467705122580597722) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
>>>
>>> [2010-05-10 20:40:08] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaab806cca0, ino=183475774468, gen=5467705122580597838) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
>>> [2010-05-10 20:40:12] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x1d7c190, ino=245151107323, gen=5467705122580597843) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
>>>
>>> [2010-05-10 20:45:10] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/bdweb/201005/20100510) inode (ptr=0x2aaab00a6a90, ino=183475774468, gen=5467705122580597838) found conflict (ptr=0x1d75640, ino=183475774468, gen=5467705122580599136)
>>> [2010-05-10 20:45:14] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/uigs/pblog/suggweb/201005/20100510) inode (ptr=0x2aaab80960e0, ino=245151107323, gen=5467705122580597669) found conflict (ptr=0x2aaaac0bc4b0, ino=245151107323, gen=5467705122580598133)
>>> ……
>>>
>>> Does anybody know what's wrong with our GlusterFS setup? And another
>>> question: to help trace the problem, we want to know which machine a
>>> failed file should have been placed on. Where can I get this
>>> information, or what can I do?
>>> By the way, we're now running GlusterFS 3.0.3, and we have nearly
>>> 200 data servers in the cluster (in distribute mode, not replicate).
>>> Is there anything else I should provide here to make the problem
>>> clearer?
>>> Thanks for your help! Any suggestion would be appreciated~
>>>
>>
>
>
> --
> Quansong Deng(bonnndeng/邓泉松)
>
> Web and Software Technology R&D center
> Dept. CST
> Tsinghua University,
> P.R. China
> 100084
>
> CELL: +86 135 524 20008
>
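On the other question above, which server a given file should land on
under distribute: the hash layout is stored as an extended attribute
on each backend directory, so you can read it directly on the data
servers. A sketch, assuming the brick export path is /data/export
(adjust to your own):

# run on each data server; prints the DHT hash range assigned to
# this brick for the given directory
getfattr -e hex -n trusted.glusterfs.dht /data/export/uigs/sugg

The file's name is hashed into one of those ranges, and the brick
whose range contains the hash is where the file should be created.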