[Gluster-devel] Problems with graph switch in disperse

lidi at perabytes.com
Sat Dec 27 12:43:57 UTC 2014


I tracked this problem and found that loc.parent and loc.pargfid are both NULL in the following call sequence:

ec_manager_writev() -> ec_get_size_version() -> ec_lookup()

This can cause server_resolve() to return EINVAL.

A replace-brick causes all opened fds and the inode table to be recreated, but ec_lookup() builds its loc from fd->_ctx, so loc.parent and loc.pargfid are missing once the fd has changed. Other xlators always do a lookup from the root directory, so they never hit this problem. It seems that a recursive lookup from the root directory may address this issue.
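For illustration only, here is a rough sketch of the kind of fallback this suggests: if the loc rebuilt from an fd has lost its parent after the graph switch, degrade it to a nameless (gfid-only) loc so the server can still resolve the inode by gfid instead of returning EINVAL. The helper name ec_loc_make_nameless() is made up, and the uuid_*() helpers are assumed from the 3.6-era tree; this is not the actual ec code.

#include "xlator.h"   /* loc_t, inode_t, fd_t, inode_ref(); assumes GlusterFS tree include paths */

/* Hypothetical helper: if a loc rebuilt from an fd lost its parent and
 * pargfid (e.g. after a graph switch triggered by replace-brick), drop the
 * name-based part and keep only inode + gfid, so the lookup becomes a
 * nameless lookup that server_resolve() can satisfy by gfid. */
static void
ec_loc_make_nameless(loc_t *loc, fd_t *fd)
{
    if (loc->parent != NULL || !uuid_is_null(loc->pargfid))
        return;  /* parent information is intact, nothing to do */

    loc->name = NULL;  /* no parent to resolve the name against */

    if (loc->inode == NULL)
        loc->inode = inode_ref(fd->inode);

    if (uuid_is_null(loc->gfid))
        uuid_copy(loc->gfid, fd->inode->gfid);
}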

----- Original Message -----
From: Raghavendra Gowdappa <rgowdapp at redhat.com>
Sent: 2014-12-24 21:48:56
To: Xavier Hernandez <xhernandez at datalab.es>
Cc: Gluster Devel <gluster-devel at gluster.org>
Subject: Re: [Gluster-devel] Problems with graph switch in disperse

Do you know the origins of EIO? fuse-bridge only fails a lookup fop with EIO (when a NULL gfid is received in a successful lookup reply). So there might be another xlator that is sending EIO.
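As a rough illustration of the check being described (not the actual fuse-bridge source; the function name is made up, only the callback parameters follow the usual GlusterFS lookup cbk convention): a lookup that succeeds but carries a null gfid in the returned iatt is failed with EIO.

#include <errno.h>
#include "xlator.h"   /* call_frame_t, xlator_t, inode_t, struct iatt, dict_t */

/* Illustrative callback: turn a "successful" lookup reply that carries a
 * null gfid into an EIO error, which is the only case where fuse-bridge
 * is said to generate EIO for a lookup fop. */
static int32_t
example_lookup_cbk(call_frame_t *frame, void *cookie, xlator_t *this,
                   int32_t op_ret, int32_t op_errno, inode_t *inode,
                   struct iatt *buf, dict_t *xdata, struct iatt *postparent)
{
    if (op_ret == 0 && uuid_is_null(buf->ia_gfid)) {
        op_ret   = -1;
        op_errno = EIO;
    }

    /* ...unwind op_ret/op_errno to the application... */
    return 0;
}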



> ----- Original Message -----
> From: "Xavier Hernandez" <xhernandez at datalab.es>
> To: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Wednesday, December 24, 2014 6:25:17 PM
> Subject: [Gluster-devel] Problems with graph switch in disperse
> 
> Hi,
> 
> I'm experiencing a problem when the gluster graph is changed as a result of
> a replace-brick operation (probably with any other operation that
> changes the graph) while the client is also doing other tasks, like
> writing a file.
> 
> When the operation starts, I see that the replaced brick is disconnected,
> but writes continue working normally with one brick less.
> 
> At some point, another graph is created and comes online. The remaining
> bricks on the old graph are disconnected and the old graph is destroyed.
> I see how new write requests are sent to the new graph.
> 
> This seems correct. However, there's a point where I see this:
> 
> [2014-12-24 11:29:58.541130] T [fuse-bridge.c:2305:fuse_write_resume]
> 0-glusterfs-fuse: 2234: WRITE (0x16dcf3c, size=131072, offset=255721472)
> [2014-12-24 11:29:58.541156] T [ec-helpers.c:101:ec_trace] 2-ec:
> WIND(INODELK) 0x7f8921b7a9a4(0x7f8921b78e14) [refs=5, winds=3, jobs=1]
> frame=0x7f8932e92c38/0x7f8932e9e6b0, min/exp=3/3, err=0 state=1
> {111:000:000} idx=0
> [2014-12-24 11:29:58.541292] T [rpc-clnt.c:1384:rpc_clnt_record]
> 2-patchy-client-0: Auth Info: pid: 0, uid: 0, gid: 0, owner:
> d025e932897f0000
> [2014-12-24 11:29:58.541296] T [io-cache.c:133:ioc_inode_flush]
> 2-patchy-io-cache: locked inode(0x16d2810)
> [2014-12-24 11:29:58.541354] T
> [rpc-clnt.c:1241:rpc_clnt_record_build_header] 2-rpc-clnt: Request
> fraglen 152, payload: 84, rpc hdr: 68
> [2014-12-24 11:29:58.541408] T [io-cache.c:137:ioc_inode_flush]
> 2-patchy-io-cache: unlocked inode(0x16d2810)
> [2014-12-24 11:29:58.541493] T [io-cache.c:133:ioc_inode_flush]
> 2-patchy-io-cache: locked inode(0x16d2810)
> [2014-12-24 11:29:58.541536] T [io-cache.c:137:ioc_inode_flush]
> 2-patchy-io-cache: unlocked inode(0x16d2810)
> [2014-12-24 11:29:58.541537] T [rpc-clnt.c:1577:rpc_clnt_submit]
> 2-rpc-clnt: submitted request (XID: 0x17 Program: GlusterFS 3.3,
> ProgVers: 330, Proc: 29) to rpc-transport (patchy-client-0)
> [2014-12-24 11:29:58.541646] W [fuse-bridge.c:2271:fuse_writev_cbk]
> 0-glusterfs-fuse: 2234: WRITE => -1 (Input/output error)
> 
> It seems that fuse still has a write request pending for graph 0. It is
> resumed but it returns EIO without calling the xlator stack (operations
> seen between the two log messages are from other operations and they are
> sent to graph 2). I'm not sure why this happens or how I should avoid it.
> 
> I tried the same scenario with replicate and it seems to work, so there
> must be something wrong in disperse, but I don't see where the problem
> could be.
> 
> Any ideas?
> 
> Thanks,
> 
> Xavi
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
> 

_______________________________________________
Gluster-devel mailing list
Gluster-devel at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


