[Gluster-users] Problems with x86_64 servers and ppc64 client
Steven Truelove
truelove at array.ca
Mon May 25 18:57:27 UTC 2009
Hi,
I am trying to use glusterfs with two x86_64 servers and a ppc64
client. An x86_64 client works without any problems, but the ppc64
client does not.
At first, I could not get gluster to mount on the client at all. I
resolved this by correcting the endian swapping done in byte-order.h
(I have already filed a bug report for this). I can now mount the
filesystem, but almost any operation, such as "echo blah > blah" or
"cat blah", fails. Mostly I get errors claiming that 'blah' is a
directory (it isn't).
Here is the client log for "cat blah", where blah is a file that
already exists (note there is some extra debug output I added to show
the flags passed to the 'open' call):
[2009-05-25 20:35:02] N [glusterfsd.c:1152:main] glusterfs: Successfully started
[2009-05-25 20:35:02] D [client-protocol.c:6301:notify] remote2: got GF_EVENT_CHILD_UP
[2009-05-25 20:35:02] D [client-protocol.c:6301:notify] remote2: got GF_EVENT_CHILD_UP
[2009-05-25 20:35:02] D [client-protocol.c:6301:notify] remote1: got GF_EVENT_CHILD_UP
[2009-05-25 20:35:02] D [client-protocol.c:6301:notify] remote1: got GF_EVENT_CHILD_UP
[2009-05-25 20:35:02] N [client-protocol.c:5562:client_setvolume_cbk] remote2: Connected to 192.168.12.43:6996, attached to remote volume 'threaded-locked-brick'.
[2009-05-25 20:35:02] N [client-protocol.c:5562:client_setvolume_cbk] remote2: Connected to 192.168.12.43:6996, attached to remote volume 'threaded-locked-brick'.
[2009-05-25 20:35:02] N [client-protocol.c:5562:client_setvolume_cbk] remote1: Connected to 192.168.12.41:6996, attached to remote volume 'threaded-locked-brick'.
[2009-05-25 20:35:02] N [client-protocol.c:5562:client_setvolume_cbk] remote1: Connected to 192.168.12.41:6996, attached to remote volume 'threaded-locked-brick'.
[2009-05-25 20:35:12] D [client-protocol.c:803:client_open] remote1: flags = 65536, req->flags = 65536
[2009-05-25 20:35:12] D [client-protocol.c:803:client_open] remote2: flags = 65536, req->flags = 65536
[2009-05-25 20:35:12] W [stripe.c:1871:stripe_open_cbk] stripe: remote2 returned error Not a directory
[2009-05-25 20:35:12] W [stripe.c:1871:stripe_open_cbk] stripe: remote1 returned error Not a directory
[2009-05-25 20:35:12] W [fuse-bridge.c:641:fuse_fd_cbk] glusterfs-fuse: 4: OPEN() /blah => -1 (Not a directory)
Here is the server log:
[2009-05-25 20:36:34] N [server-protocol.c:7040:mop_setvolume] tcp-server: accepted client from 192.168.12.31:1019
[2009-05-25 20:36:49] D [server-protocol.c:3852:server_open] threaded-locked-brick: req->flags = 256, state->flags = 65536
[2009-05-25 20:36:49] E [posix.c:1447:posix_open] brick: open on /raid/glusterfs/blah with flags 65536: Not a directory
[2009-05-25 20:36:49] D [server-protocol.c:2012:server_open_cbk] tcp-server: 8: OPEN /blah (129302530) ==> -1 (Not a directory)
[2009-05-25 20:37:37] D [server-protocol.c:3852:server_open] threaded-locked-brick: req->flags = 256, state->flags = 65536
[2009-05-25 20:37:37] E [posix.c:1447:posix_open] brick: open on /raid/glusterfs/blah with flags 65536: Not a directory
[2009-05-25 20:37:37] D [server-protocol.c:2012:server_open_cbk] tcp-server: 12: OPEN /blah (129302530) ==> -1 (Not a directory)
[2009-05-25 20:37:49] D [server-protocol.c:3852:server_open] threaded-locked-brick: req->flags = 256, state->flags = 65536
[2009-05-25 20:37:49] E [posix.c:1447:posix_open] brick: open on /raid/glusterfs/blah with flags 65536: Not a directory
[2009-05-25 20:37:49] D [server-protocol.c:2012:server_open_cbk] tcp-server: 16: OPEN /blah (129302530) ==> -1 (Not a directory)
Here is the client vol file:
### Add client feature and attach to remote subvolume
volume remote1
  type protocol/client
  option transport-type tcp
# option transport-type unix
# option transport-type ib-sdp
# option remote-host 127.0.0.1                 # IP address of the remote brick
  option remote-host 192.168.12.41
# option transport.socket.remote-port 6996     # default server port is 6996
# option transport-type ib-verbs
# option transport.ib-verbs.remote-port 6996   # default server port is 6996
# option transport.ib-verbs.work-request-send-size 1048576
# option transport.ib-verbs.work-request-send-count 16
# option transport.ib-verbs.work-request-recv-size 1048576
# option transport.ib-verbs.work-request-recv-count 16
# option transport-timeout 30                  # seconds to wait for a reply from server for each request
  option remote-subvolume threaded-locked-brick  # name of the remote volume
end-volume

volume remote2
  type protocol/client
  option remote-host 192.168.12.43
# option transport.socket.remote-port 6996     # default server port is 6996
  option transport-type tcp
  option remote-subvolume threaded-locked-brick
end-volume

volume stripe
  type cluster/stripe
  option block-size 1MB
  subvolumes remote1 remote2
end-volume
Here is the server vol file (note that I allow client access over both
IB Verbs and TCP; in this case the client is connecting over TCP):
volume brick
  type storage/posix                # POSIX FS translator
  option directory /raid/glusterfs  # Export this directory
end-volume

volume locked-brick
  type features/locks
  subvolumes brick
end-volume

volume threaded-locked-brick
  type performance/io-threads
# option thread-count 8
  subvolumes locked-brick
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
# option transport-type tcp
# option transport-type unix
# option transport-type ib-sdp
# option transport.socket.bind-address 192.168.1.10    # Default is to listen on all interfaces
# option transport.socket.listen-port 6996             # Default is 6996
  option transport-type ib-verbs
# option transport.ib-verbs.bind-address 192.168.1.10  # Default is to listen on all interfaces
# option transport.ib-verbs.listen-port 6996           # Default is 6996
# option transport.ib-verbs.work-request-send-size 131072
# option transport.ib-verbs.work-request-send-count 64
# option transport.ib-verbs.work-request-recv-size 131072
# option transport.ib-verbs.work-request-recv-count 64
# option client-volume-filename /usr/local/etc/glusterfs/glusterfs-client.vol
  subvolumes threaded-locked-brick
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through the "auth" option.
  option auth.addr.threaded-locked-brick.allow *  # Allow access to the "threaded-locked-brick" volume
end-volume

volume tcp-server
  type protocol/server
  option transport-type tcp
# option transport.socket.listen-port 6997  # Default is 6996
  option auth.addr.threaded-locked-brick.allow *  # Allow access to the "threaded-locked-brick" volume
  subvolumes threaded-locked-brick
end-volume
Any assistance that could be offered would be appreciated!
Thanks,
Steven Truelove
--
Steven Truelove
Array Systems Computing, Inc.
1120 Finch Avenue West, 7th Floor
Toronto, Ontario
M3J 3H7
CANADA
http://www.array.ca
truelove at array.ca
Phone: (416) 736-0900 x307
Fax: (416) 736-4715