<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Feb 21, 2017 at 9:47 PM, Shyam <span dir="ltr">&lt;<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Update from week of: (2017-02-13 to 2017-02-21)<br>
>
> This week we have 3 problems from fstat to report, as follows:
>
> 1) ./tests/features/lock_revocation.t
> - *Pranith*, request you take a look at this
> - This seems to be hanging on CentOS runs, causing *aborted* test runs
> - Some of these test runs are:
>   - https://build.gluster.org/job/centos6-regression/3256/console
>   - https://build.gluster.org/job/centos6-regression/3196/console
>
> 2) tests/basic/quota-anon-fd-nfs.t
> - This had one spurious failure in 3.10
> - I think it is because the test does not check whether the NFS mount is
>   available (which is anyway a good check to have in the test, to avoid
>   spurious failures); a minimal sketch of such a check follows below
> - I have filed a bug and posted a fix for the same:
>   - Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1425515
>   - Possible fix: https://review.gluster.org/16701
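>
> For illustration, a minimal sketch of such a check, assuming the stock
> test-harness helpers (EXPECT_WITHIN from include.rc, and
> is_nfs_export_available plus mount_nfs from nfs.rc); the actual change
> is in the review above:
>
>   #!/bin/bash
>   . $(dirname $0)/../include.rc
>   . $(dirname $0)/../nfs.rc
>
>   # Wait (up to the harness timeout) for the NFS export to become
>   # available before mounting; mounting too early is what produces
>   # the spurious failure.
>   EXPECT_WITHIN $NFS_EXPORT_TIMEOUT "1" is_nfs_export_available
>   TEST mount_nfs $H0:/$V0 $N0 nolock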
>
> 3) ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
> - *Milind/Hari*, request you take a look at this
> - This seems to have had about 8 failures in the last week, on master and
>   release-3.10
> - The failure seems to stem from the rebalance_run_time function in
>   tier.rc (line 133); a sketch of the suspected failure mode follows
>   after the logs
> - Logs follow:
>
> <snip>
>   02:36:38 [10:36:38] Running tests in file ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t
>   02:36:45 No volumes present
>   02:37:36 Tiering Migration Functionality: patchy: failed: Tier daemon is not running on volume patchy
>   02:37:36 ./tests/bugs/glusterd/../../tier.rc: line 133: * 3600 +  * 60 + : syntax error: operand expected (error token is "* 3600 +  * 60 + ")
>   02:37:36 ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t: line 23: [: : integer expression expected
>   02:37:41 Tiering Migration Functionality: patchy: failed: Tier daemon is not running on volume patchy
>   02:37:41 ./tests/bugs/glusterd/../../tier.rc: line 133: * 3600 +  * 60 + : syntax error: operand expected (error token is "* 3600 +  * 60 + ")
>   02:37:41 ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t: line 23: [: : integer expression expected
>   02:37:41 ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t: line 23: [: -: integer expression expected
>   02:37:41 ./tests/bugs/glusterd/bug-1303028-Rebalance-glusterd-rpc-connection-issue.t ..
>   ...
>   02:37:41 ok 14, LINENUM:69
>   02:37:41 not ok 15 Got "1" instead of "0", LINENUM:70
>   02:37:41 FAILED COMMAND: 0 tier_daemon_check
>   02:37:41 not ok 16 Got "1" instead of "0", LINENUM:72
>   02:37:41 FAILED COMMAND: 0 non_zero_check
>   02:37:41 not ok 17 Got "1" instead of "0", LINENUM:75
>   02:37:41 FAILED COMMAND: 0 non_zero_check
>   02:37:41 not ok 18 Got "1" instead of "0", LINENUM:77
>   02:37:41 FAILED COMMAND: 0 non_zero_check -
>   02:37:41 Failed 4/18 subtests
> </snip>

http://lists.gluster.org/pipermail/gluster-devel/2017-February/052137.html

Hari mentioned that he has identified the issue and will be sending a patch soon.
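For anyone picking this up before the patch lands: the "operand expected" message is the classic bash failure mode where empty variables are expanded inside $(( )). A hedged sketch of the suspected failure (hypothetical code and names, not the actual tier.rc function):

  #!/bin/bash
  # Hypothetical reconstruction, illustrative only, not tier.rc itself.
  rebalance_run_time () {
          # Normally this field is parsed out of the rebalance status
          # output (e.g. "0:4:17"); when the tier daemon is not running,
          # the CLI prints an error instead and the field stays empty.
          local run_time=""

          local hrs=$(echo "$run_time" | cut -d ':' -f1)    # -> ""
          local min=$(echo "$run_time" | cut -d ':' -f2)    # -> ""
          local sec=$(echo "$run_time" | cut -d ':' -f3)    # -> ""

          # Empty fields expand to nothing, leaving bare operators,
          # which is exactly the reported error:
          #   line 133: * 3600 +  * 60 + : syntax error: operand expected
          echo $(( $hrs * 3600 + $min * 60 + $sec ))
  }

The resulting empty value then reaches the .t file, whose line 23 compares it numerically; that is where the accompanying "[: : integer expression expected" errors come from.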
>
> Shyam
>
> On 02/15/2017 09:25 AM, Shyam wrote:
>> Update from week of: (2017-02-06 to 2017-02-13)
>>
>> No major failures to report this week; things look fine from a
>> regression suite failure stats perspective.
>>
>> Do we have any updates on the older cores? Specifically:
>>   - https://build.gluster.org/job/centos6-regression/3046/consoleText
>>     (./tests/basic/tier/tier.t -- tier rebalance)
>>   - https://build.gluster.org/job/centos6-regression/2963/consoleFull
>>     (./tests/basic/volume-snapshot.t -- glusterd)
>>
>> Shyam
>>
>> On 02/06/2017 02:21 PM, Shyam wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Update from week of: (2017-01-30 to 2017-02-06)<br>
<br>
Failure stats and actions:<br>
<br>
1) ./tests/basic/tier/tier.t<br>
Core dump needs attention<br>
<a href="https://build.gluster.org/job/centos6-regression/3046/consoleText" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/3046/consol<wbr>eText</a><br>
<br>
Looks like the tier rebalance process has crashed (see below for the<br>
stack details)<br>
<br>
2) ./tests/basic/ec/ec-background<wbr>-heals.t<br>
Marked as bad in master, not in release-3.10. May cause unwanted<br>
failures in 3.10 and as a result marked this as bad in 3.10 as well.<br>
<br>
Commit: <a href="https://review.gluster.org/16549" rel="noreferrer" target="_blank">https://review.gluster.org/165<wbr>49</a><br>
<br>
3) ./tests/bitrot/bug-1373520.t<br>
Marked as bad in master, not in release-3.10. May cause unwanted<br>
failures in 3.10 and as a result marked this as bad in 3.10 as well.<br>
<br>
Commit: <a href="https://review.gluster.org/16549" rel="noreferrer" target="_blank">https://review.gluster.org/165<wbr>49</a><br>
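>>>
>>> For reference, a hedged sketch of how this masking typically works,
>>> assuming the bad-tests list that run-tests.sh consults (illustrative
>>> only; the exact change is in the commit above):
>>>
>>>   # In run-tests.sh (sketch): tests named here are treated as known
>>>   # bad, so their failures do not fail the regression run.
>>>   function is_bad_test ()
>>>   {
>>>       local name=$1
>>>       for bt in ./tests/basic/ec/ec-background-heals.t \
>>>                 ./tests/bitrot/bug-1373520.t; do
>>>           if [ x"$name" = x"$bt" ]; then
>>>               return 0    # known-bad: caller skips/ignores this test
>>>           fi
>>>       done
>>>       return 1
>>>   }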
>>>
>>> Thanks,
>>> Shyam
>>>
>>> On 01/30/2017 03:00 PM, Shyam wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi,<br>
<br>
The following is a list of spurious(?) regression failures in the 3.10<br>
branch last week (from <a href="http://fstat.gluster.org" rel="noreferrer" target="_blank">fstat.gluster.org</a>).<br>
<br>
Request component owner or other devs to take a look at the failures,<br>
and weed out real issues.<br>
<br>
Regression failures 3.10:<br>
<br>
Summary:<br>
1) <a href="https://build.gluster.org/job/centos6-regression/2960/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/2960/consol<wbr>eFull</a><br>
  ./tests/basic/ec/ec-background<wbr>-heals.t<br>
<br>
2) <a href="https://build.gluster.org/job/centos6-regression/2963/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/2963/consol<wbr>eFull</a><br>
  &lt;glusterd Core dumped&gt;<br>
  ./tests/basic/volume-snapshot.<wbr>t<br>
<br>
3) <a href="https://build.gluster.org/job/netbsd7-regression/2694/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>netbsd7-regression/2694/consol<wbr>eFull</a><br>
  ./tests/basic/afr/self-heald.t<br>
<br>
4) <a href="https://build.gluster.org/job/centos6-regression/2954/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/2954/consol<wbr>eFull</a><br>
  ./tests/basic/tier/legacy-many<wbr>.t<br>
<br>
5) <a href="https://build.gluster.org/job/centos6-regression/2858/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/2858/consol<wbr>eFull</a><br>
  ./tests/bugs/bitrot/bug-124598<wbr>1.t<br>
<br>
6) <a href="https://build.gluster.org/job/netbsd7-regression/2637/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>netbsd7-regression/2637/consol<wbr>eFull</a><br>
  ./tests/basic/afr/self-heal.t<br>
<br>
7) <a href="https://build.gluster.org/job/netbsd7-regression/2624/consoleFull" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>netbsd7-regression/2624/consol<wbr>eFull</a><br>
  ./tests/encryption/crypt.t<br>
<br>
Thanks,<br>
Shyam<br>
</blockquote>
>>>
>>> Core details from
>>> https://build.gluster.org/job/centos6-regression/3046/consoleText
>>>
>>> Core was generated by `/build/install/sbin/glusterfs -s localhost
>>> --volfile-id tierd/patchy -p /var/li'.
>>> Program terminated with signal 11, Segmentation fault.
>>> #0  0x00007ffb62c2c4c4 in __strchr_sse42 () from /lib64/libc.so.6
>>>
>>> Thread 1 (Thread 0x7ffb5a169700 (LWP 467)):
>>> #0  0x00007ffb62c2c4c4 in __strchr_sse42 () from /lib64/libc.so.6
>>> No symbol table info available.
>>> #1  0x00007ffb56b7789f in dht_filter_loc_subvol_key
>>> (this=0x7ffb50015930, loc=0x7ffb2c002de4, new_loc=0x7ffb2c413f80,
>>> subvol=0x7ffb2c413fc0) at
>>> /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-helper.c:307
>>>         new_name = 0x0
>>>         new_path = 0x0
>>>         trav = 0x0
>>>         key = '\000' <repeats 1023 times>
>>>         ret = 0
>>> #2  0x00007ffb56bb2ce4 in dht_lookup (frame=0x7ffb4c00623c,
>>> this=0x7ffb50015930, loc=0x7ffb2c002de4, xattr_req=0x7ffb4c00949c) at
>>> /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-common.c:2494
>>>         subvol = 0x0
>>>         hashed_subvol = 0x0
>>>         local = 0x7ffb4c00636c
>>>         conf = 0x7ffb5003f380
>>>         ret = -1
>>>         op_errno = -1
>>>         layout = 0x0
>>>         i = 0
>>>         call_cnt = 0
>>>         new_loc = {path = 0x0, name = 0x0, inode = 0x0, parent = 0x0,
>>> gfid = '\000' <repeats 15 times>, pargfid = '\000' <repeats 15 times>}
>>>         __FUNCTION__ = "dht_lookup"
>>> #3  0x00007ffb63ff6f5c in syncop_lookup (subvol=0x7ffb50015930,
>>> loc=0x7ffb2c002de4, iatt=0x7ffb2c415af0, parent=0x0,
>>> xdata_in=0x7ffb4c00949c, xdata_out=0x7ffb2c415a50) at
>>> /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/syncop.c:1223
>>>         _new = 0x7ffb4c00623c
>>>         old_THIS = 0x7ffb50019490
>>>         tmp_cbk = 0x7ffb63ff69b3 <syncop_lookup_cbk>
>>>         task = 0x7ffb2c009790
>>>         frame = 0x7ffb2c001b3c
>>>         args = {op_ret = 0, op_errno = 0, iatt1 = {ia_ino = 0, ia_gfid =
>>> '\000' <repeats 15 times>, ia_dev = 0, ia_type = IA_INVAL, ia_prot =
>>> {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0
>>> '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000',
>>> write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0
>>> '\000', exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev
>>> = 0, ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0,
>>> ia_atime_nsec = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0,
>>> ia_ctime_nsec = 0}, iatt2 = {ia_ino = 0, ia_gfid = '\000' <repeats 15
>>> times>, ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid
>>> = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0
>>> '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000',
>>> exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0
>>> '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size =
>>> 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0,
>>> ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}, xattr
>>> = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, f_bfree =
>>> 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0,
>>> f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector =
>>> 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type
>>> = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len =
>>> 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0, lease_type =
>>> NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0}, uuid =
>>> '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, lock_dict = {__data
>>> = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0,
>>> __spins = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000'
>>> <repeats 39 times>, __align = 0}, barrier = {guard = {__data = {__lock =
>>> 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0,
>>> __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39
>>> times>, __align = 0}, cond = {__data = {__lock = 0, __futex = 0,
>>> __total_seq = 0, __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0,
>>> __nwaiters = 0, __broadcast_seq = 0}, __size = '\000' <repeats 47
>>> times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0}, task
>>> = 0x7ffb2c009790, mutex = {__data = {__lock = 0, __count = 0, __owner =
>>> 0, __nusers = 0, __kind = 0, __spins = 0, __list = {__prev = 0x0, __next
>>> = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond =
>>> {__data = {__lock = 0, __futex = 0, __total_seq = 0, __wakeup_seq = 0,
>>> __woken_seq = 0, __mutex = 0x0, __nwaiters = 0, __broadcast_seq = 0},
>>> __size = '\000' <repeats 47 times>, __align = 0}, done = 0, entries =
>>> {{list = {next = 0x0, prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0,
>>> d_off = 0, d_len = 0, d_type = 0, d_stat = {ia_ino = 0, ia_gfid = '\000'
>>> <repeats 15 times>, ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0
>>> '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000',
>>> write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0
>>> '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000',
>>> exec = 0 '\000'}}, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0,
>>> ia_size = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec
>>> = 0, ia_mtime = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0},
>>> dict = 0x0, inode = 0x0, d_name = 0x7ffb2c414100 ""}, offset = 0,
>>> locklist = {list = {next = 0x0, prev = 0x0}, flock = {l_type = 0,
>>> l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0,
>>> data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}}
>>>         __FUNCTION__ = "syncop_lookup"
>>> #4  0x00007ffb568b96c7 in dht_migrate_file (this=0x7ffb50019490,
>>> loc=0x7ffb2c002de4, from=0x7ffb50015930, to=0x7ffb500184a0, flag=1) at
>>> /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-rebalance.c:1375
>>>         ret = 0
>>>         new_stbuf = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>,
>>> ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0
>>> '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000',
>>> exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0
>>> '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}},
>>> ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0,
>>> ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime
>>> = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}
>>>         stbuf = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>, ia_dev
>>> = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000',
>>> sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0
>>> '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'},
>>> other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}, ia_nlink
>>> = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0, ia_blksize = 0,
>>> ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime = 0,
>>> ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}
>>>         empty_iatt = {ia_ino = 0, ia_gfid = '\000' <repeats 15 times>,
>>> ia_dev = 0, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0
>>> '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000',
>>> exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0
>>> '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}},
>>> ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_rdev = 0, ia_size = 0,
>>> ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_atime_nsec = 0, ia_mtime
>>> = 0, ia_mtime_nsec = 0, ia_ctime = 0, ia_ctime_nsec = 0}
>>>         src_ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0
>>> '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'},
>>> group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other =
>>> {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}
>>>         src_fd = 0x0
>>>         dst_fd = 0x0
>>>         dict = 0x7ffb4c00949c
>>>         xattr = 0x0
>>>         xattr_rsp = 0x0
>>>         file_has_holes = 0
>>>         conf = 0x7ffb5002acd0
>>>         rcvd_enoent_from_src = 0
>>>         flock = {l_type = 1, l_whence = 0, l_start = 0, l_len = 0, l_pid
>>> = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}
>>>         plock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid
>>> = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}
>>>         tmp_loc = {path = 0x7ffb4c0083f0 "", name = 0x0, inode =
>>> 0x7ffb2c00cf6c, parent = 0x0, gfid =
>>> "\365\267[t\277\205N\370\232\262\206\341o\253:E", pargfid = '\000'
>>> <repeats 15 times>}
>>>         locked = _gf_true
>>>         p_locked = _gf_false
>>>         lk_ret = -1
>>>         defrag = 0x7ffb5002b1f0
>>>         clean_src = _gf_false
>>>         clean_dst = _gf_false
>>>         log_level = 9
>>>         delete_src_linkto = _gf_true
>>>         locklist = {list = {next = 0x0, prev = 0x0}, flock = {l_type =
>>> 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0,
>>> data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}
>>>         meta_dict = 0x0
>>>         meta_locked = _gf_false
>>>         __FUNCTION__ = "dht_migrate_file"
>>> #5  0x00007ffb568bb198 in rebalance_task (data=0x7ffb2c00171c) at
>>> /home/jenkins/root/workspace/centos6-regression/xlators/cluster/dht/src/dht-rebalance.c:1915
>>>         ret = -1
>>>         local = 0x7ffb2c002ddc
>>>         frame = 0x7ffb2c00171c
>>> #6  0x00007ffb63ff4fa3 in synctask_wrap (old_task=0x7ffb2c009790) at
>>> /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/syncop.c:375
>>>         task = 0x7ffb2c009790
>>> #7  0x00007ffb62b478b0 in ?? () from /lib64/libc.so.6
>>> No symbol table info available.
>>> #8  0x0000000000000000 in ?? ()
>>> No symbol table info available.

-- 
~ Atin (atinm)