[Gluster-users] 3.7.0-2 Another transaction is in progress. Please try again after sometime.
Atin Mukherjee
atin.mukherjee83 at gmail.com
Sun May 31 06:28:16 UTC 2015
Then it's a different issue. Could you check whether you are executing
concurrent CLI commands on the same volume (refer to the command log history
in /var/log/glusterfs)? If that's the case then this is expected; otherwise,
could you restart the glusterd instance on hgluster02 and see if the problem
persists? Even if the problem resurfaces, restarting all the glusterd
instances should solve it.
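
Here is a minimal sketch of those checks, assuming the default log directory,
that the command history is written to cmd_history.log, and that glusterd is
managed by systemd (adjust to your setup, e.g. "service glusterd restart" on
SysV init):

# Look for overlapping CLI commands on the volume; glusterd records every
# CLI request it receives in the command history log:
grep export_volume /var/log/glusterfs/cmd_history.log | tail -n 20

# If nothing overlaps, restart only the management daemon on hgluster02.
# Brick processes normally keep running, so client I/O is not interrupted:
systemctl restart glusterd
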
HTH,
Atin
On 31 May 2015 00:23, "Ryan Clough" <ryan.clough at dsic.com> wrote:
> We upgraded from 3.6.3 to 3.7.0
>
> ___________________________________________
> ¯\_(ツ)_/¯
> Ryan Clough
> Information Systems
> Decision Sciences International Corporation
> <http://www.decisionsciencescorp.com/>
>
> On Sat, May 30, 2015 at 11:29 AM, Atin Mukherjee <
> atin.mukherjee83 at gmail.com> wrote:
>
>>
>> On 30 May 2015 23:54, "Ryan Clough" <ryan.clough at dsic.com> wrote:
>> >
>> > Cannot run "gluster volume" commands.
>> >
>> > Two brick distribute volume.
>> > [root@hgluster01 ~]# gluster peer status
>> > Number of Peers: 1
>> >
>> > Hostname: hgluster02.red.dsic.com
>> > Uuid: d85ec083-34f2-458c-9b31-4786462ca48e
>> > State: Peer in Cluster (Connected)
>> >
>> > [root@hgluster02 ~]# gluster peer status
>> > Number of Peers: 1
>> >
>> > Hostname: hgluster01.red.dsic.com
>> > Uuid: 875dbae1-82bd-485f-98e4-b7c5562e4da1
>> > State: Peer in Cluster (Connected)
>> >
>> > Here is my current config:
>> > Volume Name: export_volume
>> > Type: Distribute
>> > Volume ID: c74cc970-31e2-4924-a244-4c70d958dadb
>> > Status: Started
>> > Number of Bricks: 2
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: hgluster01:/gluster_data
>> > Brick2: hgluster02:/gluster_data
>> > Options Reconfigured:
>> > performance.cache-size: 1GB
>> > diagnostics.brick-log-level: ERROR
>> > performance.stat-prefetch: on
>> > performance.write-behind: on
>> > performance.flush-behind: on
>> > features.quota-deem-statfs: on
>> > performance.quick-read: off
>> > performance.client-io-threads: on
>> > performance.read-ahead: on
>> > performance.io-thread-count: 24
>> > features.quota: off
>> > cluster.eager-lock: on
>> > nfs.disable: on
>> > auth.allow: 192.168.10.*,10.0.10.*,10.8.0.*,10.2.0.*,10.0.60.*
>> > server.allow-insecure: on
>> > performance.write-behind-window-size: 1MB
>> > network.ping-timeout: 60
>> > features.quota-timeout: 0
>> > performance.io-cache: off
>> > server.root-squash: on
>> > performance.readdir-ahead: on
>> >
>> > I am getting the following error messages on both bricks every 3
>> seconds:
>> > [2015-05-30 17:50:34.810126] W [socket.c:642:__socket_rwv] 0-nfs: readv
>> on /var/run/gluster/692e2a3fcfe7221b623fcc6eb9a843c0.socket failed (Invalid
>> argument)
>> > [2015-05-30 17:50:37.810463] W [socket.c:3059:socket_connect] 0-nfs:
>> Ignore failed connection attempt on
>> /var/run/gluster/692e2a3fcfe7221b623fcc6eb9a843c0.socket, (No such file or
>> directory)
>> This is also a bug and will be fixed in 3.7.2. No functional impact
>> though.
>> >
>> > NFS is disabled.
>> >
>> > When I try to run "gluster volume status" it returns:
>> > [root@hgluster01 glusterd]# gluster volume status
>> > Locking failed on d85ec083-34f2-458c-9b31-4786462ca48e. Please check
>> log file for details.
>> >
>> > and the following is logged:
>> > [2015-05-30 18:17:44.026491] E [glusterd-utils.c:164:glusterd_lock]
>> 0-management: Unable to get lock for uuid:
>> 875dbae1-82bd-485f-98e4-b7c5562e4da1, lock held by:
>> 875dbae1-82bd-485f-98e4-b7c5562e4da1
>> > [2015-05-30 18:17:44.026554] E
>> [glusterd-syncop.c:1736:gd_sync_task_begin] 0-management: Unable to acquire
>> lock
>> >
>> > I am unable to turn off root squash so that I can create new base
>> project directories. Any help would be appreciated. It seems like a pretty
>> nasty bug: although we can read and write to the volume, I am unable to
>> administer it.
>> Did you upgrade your cluster from 3.5 to 3.7? If yes, this is a known bug
>> and the fix is planned for release in 3.7.1.
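>> Once the lock clears, the change itself is a single volume-set command. A
>> sketch, with the volume and option names taken from the config above:
>> gluster volume set export_volume server.root-squash off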
>> >
>> > Thank you, in advance, for your time.
>> > ___________________________________________
>> > ¯\_(ツ)_/¯
>> > Ryan Clough
>> > Information Systems
>> > Decision Sciences International Corporation
>> >
>> > This email and its contents are confidential. If you are not the
>> intended recipient, please do not disclose or use the information within
>> this email or its attachments. If you have received this email in error,
>> please report the error to the sender by return email and delete this
>> communication from your records.
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users at gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>>
>>
>