[Gluster-users] 3.7.0-2 Another transaction is in progress. Please try again after sometime.

Ryan Clough <ryan.clough@dsic.com>
Sat May 30 18:23:43 UTC 2015


I cannot run any "gluster volume" commands.

This is a two-brick distributed volume.
[root@hgluster01 ~]# gluster peer status
Number of Peers: 1

Hostname: hgluster02.red.dsic.com
Uuid: d85ec083-34f2-458c-9b31-4786462ca48e
State: Peer in Cluster (Connected)

[root@hgluster02 ~]# gluster peer status
Number of Peers: 1

Hostname: hgluster01.red.dsic.com
Uuid: 875dbae1-82bd-485f-98e4-b7c5562e4da1
State: Peer in Cluster (Connected)

Here is my current config:
Volume Name: export_volume
Type: Distribute
Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: hgluster01:/gluster_data
Brick2: hgluster02:/gluster_data
Options Reconfigured:
performance.cache-size: 1GB
diagnostics.brick-log-level: ERROR
performance.stat-prefetch: on
performance.write-behind: on
performance.flush-behind: on
features.quota-deem-statfs: on
performance.quick-read: off
performance.client-io-threads: on
performance.read-ahead: on
performance.io-thread-count: 24
features.quota: off
cluster.eager-lock: on
nfs.disable: on
auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
server.allow-insecure: on
performance.write-behind-window-size: 1MB
network.ping-timeout: 60
features.quota-timeout: 0
performance.io-cache: off
server.root-squash: on
performance.readdir-ahead: on

I am getting the following error messages on both servers every 3 seconds:
[2015-05-30 17:50:34.810126] W [socket.c:642:__socket_rwv] 0-nfs: readv on /var/run/gluster/692e2a3fcfe7221b623fcc6eb9a843c0.socket failed (Invalid argument)
[2015-05-30 17:50:37.810463] W [socket.c:3059:socket_connect] 0-nfs: Ignore failed connection attempt on /var/run/gluster/692e2a3fcfe7221b623fcc6eb9a843c0.socket, (No such file or directory)

NFS is disabled.
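If it helps with diagnosis: since nfs.disable is on, I assume glusterd is simply retrying the (intentionally absent) NFS service socket. Something like this should confirm that on each node (the socket path is copied from the log above; the grep pattern is just mine):

[root@hgluster01 ~]# ls -l /var/run/gluster/692e2a3fcfe7221b623fcc6eb9a843c0.socket   # the socket glusterd keeps polling
[root@hgluster01 ~]# ps aux | grep '[g]luster' | grep nfs   # any gluster NFS process? there should be none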

When I try to run "gluster volume status", it returns:
[root@hgluster01 glusterd]# gluster volume status
Locking failed on d85ec083-34f2-458c-9b31-4786462ca48e. Please check log file for details.

and the following is logged:
[2015-05-30 18:17:44.026491] E [glusterd-utils.c:164:glusterd_lock] 0-management: Unable to get lock for uuid: 875dbae1-82bd-485f-98e4-b7c5562e4da1, lock held by: 875dbae1-82bd-485f-98e4-b7c5562e4da1
[2015-05-30 18:17:44.026554] E [glusterd-syncop.c:1736:gd_sync_task_begin] 0-management: Unable to acquire lock
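If I am reading this right, 875dbae1-82bd-485f-98e4-b7c5562e4da1 is hgluster01's own UUID (it is the one hgluster02 reports in its peer status above), so hgluster01 appears to be holding a stale cluster lock against itself. From what I have read, restarting glusterd on the lock-holding node should drop the in-memory lock without touching the brick processes, so I am tempted to try the following (assuming systemd; older setups would use "service glusterd restart"):

[root@hgluster01 ~]# systemctl restart glusterd   # drops glusterd's in-memory cluster lock; bricks keep running
[root@hgluster01 ~]# gluster volume status        # retry once glusterd is back up

Is that safe on a live volume, or is there a better way to clear the lock?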

I am unable to turn off root squash so that I can create new base project
directories. Any help would be appreciated. This seems like a pretty nasty
bug: although we can read and write to the volume, I am unable to
administer it.
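For reference, the blocked command is nothing exotic, just a volume set (volume and option names exactly as in the config above), which is presumably what returns the "Another transaction is in progress" error from the subject line:

[root@hgluster01 ~]# gluster volume set export_volume server.root-squash off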

Thank you, in advance, for your time.
___________________________________________
¯\_(ツ)_/¯
Ryan Clough
Information Systems
Decision Sciences International Corporation
<http://www.decisionsciencescorp.com/>
