[Gluster-users] Directory writes yes, directory reads no

Scott Larson stl at wiredrive.com
Thu Jun 12 23:50:17 UTC 2008


      The setup is very basic for testing: a single FreeBSD 7.0 machine
functioning as both server and client.  The behavior is quite strange.
As I mentioned, writes work, and if I pass a filename to ls explicitly
(`ls -l foo`) it displays the file with the proper attributes, but a
plain `ls` with no arguments returns nothing whatsoever.  Reads work
too: if I do `vi foo` I'm able to edit the file and save it back.
Anyway, here are the spec files and the logs for client and server,
both run with DEBUG.  To keep this from being a complete mess, the
logs cover just the period right after startup, where all I have done
is mount the share and attempt an `ls` on the directory.  There is a
"transport init failed" error in the client log; I'm not sure what to
make of that, since some of the file operations actually work.
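
For reference, this is the exact sequence I'm testing, assuming the
share is mounted at /mnt/gluster as in the logs below:

cd /mnt/gluster
touch foo        # write succeeds; foo shows up in /home/export on the backend
ls               # returns nothing at all
ls -l foo        # displays foo with the proper attributes
vi foo           # editing the file and saving it back works
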
      Considering what a huge win this would be for us if it were
working, I could likely provide a FreeBSD machine or two if that's
what stands between the current state of things and getting it fully
functional.

server spec:

volume brick
   type storage/posix                 # POSIX backend: export a local directory
   option directory /home/export
end-volume

volume server
   type protocol/server
   option transport-type tcp/server
   subvolumes brick
   option auth.ip.brick.allow *       # allow any client IP to mount the brick
end-volume
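
For completeness, the server daemon is launched with something like
this (a sketch from memory; -f names the spec file, -l the log file,
-L the log level):

glusterfsd -f /usr/local/etc/glusterfs/glusterfs-server.vol \
    -l /var/log/glusterfs/glusterfsd.log -L DEBUG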


client spec:

volume client
   type protocol/client
   option transport-type tcp/client
   option remote-host 127.0.0.1       # server and client are the same box
   option remote-subvolume brick
end-volume
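
The client side is mounted the same way, with the mount point as the
final argument (again a sketch; adjust paths as needed):

glusterfs -f /usr/local/etc/glusterfs/glusterfs-client.vol \
    -l /var/log/glusterfs/glusterfs.log -L DEBUG /mnt/gluster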


server log:

2008-06-12 16:37:58 D [glusterfs.c:166:get_spec_fp] glusterfs: loading spec from /usr/local/etc/glusterfs/glusterfs-server.vol
2008-06-12 16:37:58 D [spec.y:107:new_section] parser: New node for 'brick'
2008-06-12 16:37:58 D [xlator.c:115:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/1.3.9/xlator/storage/posix.so
2008-06-12 16:37:58 D [spec.y:127:section_type] parser: Type:brick:storage/posix
2008-06-12 16:37:58 D [spec.y:141:section_option] parser: Option:brick:directory:/home/export
2008-06-12 16:37:58 D [spec.y:198:section_end] parser: end:brick
2008-06-12 16:37:58 D [spec.y:107:new_section] parser: New node for 'server'
2008-06-12 16:37:58 D [xlator.c:115:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/1.3.9/xlator/protocol/server.so
2008-06-12 16:37:58 D [spec.y:127:section_type] parser: Type:server:protocol/server
2008-06-12 16:37:58 D [spec.y:141:section_option] parser: Option:server:transport-type:tcp/server
2008-06-12 16:37:58 D [spec.y:185:section_sub] parser: child:server->brick
2008-06-12 16:37:58 D [spec.y:141:section_option] parser: Option:server:auth.ip.brick.allow:*
2008-06-12 16:37:58 D [spec.y:198:section_end] parser: end:server
2008-06-12 16:37:58 D [server-protocol.c:6299:init] server: protocol/server xlator loaded
2008-06-12 16:37:58 D [transport.c:80:transport_load] transport: attempt to load file /usr/local/lib/glusterfs/1.3.9/transport/tcp/server.so
2008-06-12 16:37:58 D [server-protocol.c:6340:init] server: defaulting limits.transaction-size to 4194304
2008-06-12 16:38:48 D [tcp-server.c:145:tcp_server_notify] server: Registering socket (5) for new transport object of 127.0.0.1
2008-06-12 16:38:48 D [ip.c:120:gf_auth] brick: allowed = "*", received ip addr = "127.0.0.1"
2008-06-12 16:38:48 D [server-protocol.c:5664:mop_setvolume] server: accepted client from 127.0.0.1:1023
2008-06-12 16:38:48 D [server-protocol.c:5707:mop_setvolume] server: creating inode table with lru_limit=1024, xlator=brick
2008-06-12 16:38:48 D [inode.c:1163:inode_table_new] brick: creating new inode table with lru_limit=1024, sizeof(inode_t)=154
2008-06-12 16:38:48 D [inode.c:577:__create_inode] brick/inode: create inode(1)
2008-06-12 16:38:48 D [inode.c:367:__active_inode] brick/inode: activating inode(140733193388033), lru=0/1024
2008-06-12 16:38:48 D [inode.c:577:__create_inode] brick/inode: create inode(19725238606894080)
2008-06-12 16:38:48 D [inode.c:367:__active_inode] brick/inode: activating inode(19725238606894080), lru=0/1024
2008-06-12 16:38:48 D [inode.c:367:__active_inode] brick/inode: activating inode(19725238606894080), lru=0/1024
2008-06-12 16:38:48 D [inode.c:367:__active_inode] brick/inode: activating inode(19725238606894080), lru=0/1024
2008-06-12 16:38:48 D [inode.c:367:__active_inode] brick/inode: activating inode(19725238606894080), lru=0/1024


client log file:

2008-06-12 16:38:20 D [glusterfs.c:166:get_spec_fp] glusterfs: loading spec from /usr/local/etc/glusterfs/glusterfs-client.vol
2008-06-12 16:38:20 D [spec.y:107:new_section] parser: New node for 'client'
2008-06-12 16:38:20 D [xlator.c:115:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so
2008-06-12 16:38:20 D [spec.y:127:section_type] parser: Type:client:protocol/client
2008-06-12 16:38:20 D [spec.y:141:section_option] parser: Option:client:transport-type:tcp/client
2008-06-12 16:38:20 D [spec.y:141:section_option] parser: Option:client:remote-host:127.0.0.1
2008-06-12 16:38:20 D [spec.y:141:section_option] parser: Option:client:remote-subvolume:brick
2008-06-12 16:38:20 D [spec.y:198:section_end] parser: end:client
2008-06-12 16:38:20 D [glusterfs.c:128:fuse_graph] glusterfs: setting option mount-point to /mnt/gluster
2008-06-12 16:38:20 D [xlator.c:115:xlator_set_type] xlator: attempt to load file /usr/local/lib/glusterfs/1.3.9/xlator/mount/fuse.so
2008-06-12 16:38:20 D [client-protocol.c:5313:notify] client: transport init failed
2008-06-12 16:38:20 D [client-protocol.c:5006:init] client: defaulting transport-timeout to 42
2008-06-12 16:38:20 D [transport.c:80:transport_load] transport: attempt to load file /usr/local/lib/glusterfs/1.3.9/transport/tcp/client.so
2008-06-12 16:38:20 D [client-protocol.c:5033:init] client: defaulting limits.transaction-size to 268435456
2008-06-12 16:38:20 D [inode.c:1163:inode_table_new] fuse: creating new inode table with lru_limit=1024, sizeof(inode_t)=154
2008-06-12 16:38:20 D [inode.c:577:__create_inode] fuse/inode: create inode(1)
2008-06-12 16:38:20 D [inode.c:367:__active_inode] fuse/inode: activating inode(140733193388033), lru=0/1024
2008-06-12 16:38:48 D [tcp-client.c:77:tcp_connect] client: socket fd = 5
2008-06-12 16:38:48 D [tcp-client.c:107:tcp_connect] client: finalized on port `1023'
2008-06-12 16:38:48 D [tcp-client.c:128:tcp_connect] client: defaulting remote-port to 6996
2008-06-12 16:38:48 D [common-utils.c:179:gf_resolve_ip] resolver: DNS cache not present, freshly probing hostname: 127.0.0.1
2008-06-12 16:38:48 D [common-utils.c:204:gf_resolve_ip] resolver: returning IP:127.0.0.1[0] for hostname: 127.0.0.1
2008-06-12 16:38:48 D [common-utils.c:212:gf_resolve_ip] resolver: flushing DNS cache
2008-06-12 16:38:48 D [tcp-client.c:161:tcp_connect] client: connect on 5 in progress (non-blocking)
2008-06-12 16:38:48 D [tcp-client.c:205:tcp_connect] client: connection on 5 success
2008-06-12 16:38:48 D [client-protocol.c:5342:notify] client: got GF_EVENT_CHILD_UP
2008-06-12 16:38:48 W [client-protocol.c:280:client_protocol_xfer] client: attempting to pipeline request type(1) op(34) with handshake
2008-06-12 16:38:48 D [client-protocol.c:5096:client_protocol_handshake_reply] client: reply frame has callid: 424242
2008-06-12 16:38:48 D [client-protocol.c:5130:client_protocol_handshake_reply] client: SETVOLUME on remote-host succeeded
2008-06-12 16:38:48 D [fuse-bridge.c:375:fuse_entry_cbk] glusterfs-fuse: 0: (34) / => 515396075521
2008-06-12 16:38:48 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 0: (34) / => 515396075521 Rehashing 0/0
2008-06-12 16:38:48 D [fuse-bridge.c:1751:fuse_opendir] glusterfs-fuse: 0: OPEN /
2008-06-12 16:38:48 D [fuse-bridge.c:678:fuse_fd_cbk] glusterfs-fuse: 0: (22) / => 0x80139e0e0
2008-06-12 16:38:48 D [fuse-bridge.c:2056:fuse_statfs] glusterfs-fuse: 0: STATFS
2008-06-12 16:38:48 D [fuse-bridge.c:1931:fuse_readdir] glusterfs-fuse: 0: READDIR (0x80139e0e0, size=4096, offset=0)
2008-06-12 16:38:48 D [fuse-bridge.c:1899:fuse_readdir_cbk] glusterfs-fuse: 0: READDIR => 568/4096,0
2008-06-12 16:38:48 D [fuse-bridge.c:1958:fuse_releasedir] glusterfs-fuse: 0: CLOSEDIR 0x80139e0e0
2008-06-12 16:38:48 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse: 0: (24) ERR => 0
2008-06-12 16:38:48 D [fuse-bridge.c:375:fuse_entry_cbk] glusterfs-fuse: 0: (34) / => 140733193388033
2008-06-12 16:38:48 D [fuse-bridge.c:1751:fuse_opendir] glusterfs-fuse: 0: OPEN /
2008-06-12 16:38:48 D [fuse-bridge.c:678:fuse_fd_cbk] glusterfs-fuse: 0: (22) / => 0x80139e0e0
2008-06-12 16:38:48 D [fuse-bridge.c:2056:fuse_statfs] glusterfs-fuse: 0: STATFS
2008-06-12 16:38:48 D [fuse-bridge.c:1931:fuse_readdir] glusterfs-fuse: 0: READDIR (0x80139e0e0, size=4096, offset=0)
2008-06-12 16:38:48 D [fuse-bridge.c:1899:fuse_readdir_cbk] glusterfs-fuse: 0: READDIR => 568/4096,0
2008-06-12 16:38:48 D [fuse-bridge.c:1958:fuse_releasedir] glusterfs-fuse: 0: CLOSEDIR 0x80139e0e0
2008-06-12 16:38:48 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse: 0: (24) ERR => 0
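
If it's useful for narrowing this down, the syscall side can be
checked on FreeBSD with ktrace(1); something like the following should
show whether getdirentries() is actually handing entries back to ls
(file paths here are just examples):

ktrace -f /tmp/ls.trace ls /mnt/gluster
kdump -f /tmp/ls.trace | grep -B1 -A2 getdirentries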

-- 
Scott Larson
Network Administrator

Wiredrive
4216 3/4 Glencoe Ave
Marina Del Rey, CA 90292
t 310.823.8238
stl at wiredrive.com
http://www.wiredrive.com

On Jun 3, 2008, at 1:25 AM, KE Liew wrote:

> It would be useful to know which version you're using, and what your  
> setup is. Posting your spec file and logs can help too.
>
>
> KwangErn
>
> On Tue, Jun 3, 2008 at 1:17 AM, Scott Larson <stl at wiredrive.com>  
> wrote:
>     The quick question: Has anyone else run into the issue where
> they can write files to a directory but are then unable to see them
> with something like `ls`?  After starting the server and client and
> mounting the glusterfs share to /mnt/gluster, if I cd into it and run
> `touch foo && ls`, no files are visible.  However, if I then look at
> the actual directory being shared, /usr/export, the file foo is
> present.  This seems counterintuitive.  I can provide logs and the
> config if there isn't an obvious answer; nothing in the logs
> immediately grabs me, however.
>     As background, I'm looking at alternatives to our Isilon
> cluster, and all of our client servers are FreeBSD 7.0.  I know that
> is not currently a supported client OS; however, GlusterFS seems like
> a good candidate for our needs, and the fact that writes actually
> work is promising.
>
> --
> Scott Larson
> Network Administrator
> IOWA Interactive
> 4216 3/4 Glencoe Ave
> Marina Del Rey, CA 90292
>
> t 310.823.8238
> f 310.823.7108
> stl at iowainteractive.com
> http://www.iowainteractive.com
> http://www.wiredrive.com
