[Gluster-devel] [Gluster-users] GlusterFS (3.3.1) - performance issues - large number of LOOKUP calls & high CPU usage

Song gluster at 163.com
Fri Jun 7 05:34:16 UTC 2013


We have met the same performance issue when opening a file. Sometimes it takes
more than 10 seconds for a file to open.

 

We added some debug info to locate this problem and tested again and again; we
found that executing the 'xdr_callmsg' function in 'rpc_request_to_xdr' can
take a few seconds.

 

A typical example of the log is below. From [2013-06-06 13:34:59.471004] to
[2013-06-06 13:35:04.890363], the 'xdr_callmsg' function took more than 5
seconds.

 

[2013-06-06 13:34:59.470991] I
[rpc-clnt.c:1175:rpc_clnt_record_build_record] 0-gfs1-client-51: (thread_id
is 140257410492160 )add for open_slow rpc_fill_request_end

[2013-06-06 13:34:59.471004] I [xdr-rpcclnt.c:87:rpc_request_to_xdr] 0-rpc:
(thread_id is 140257410492160 len = 131072 )add for open_slow
xdrmem_create_end

[2013-06-06 13:34:59.570044] I [client.c:124:client_submit_request]
0-gfs1-client-86: (thread_id is 140257819739904 )add for open_slow
rpc_clnt_submit

[2013-06-06 13:34:59.570091] I [rpc-clnt.c:1363:rpc_clnt_submit]
0-gfs1-client-86: (thread_id is 140257819739904 )add for open_slow callid
end

 

......

 

[2013-06-06 13:34:59.579865] I [client3_1-fops.c:2235:client3_1_lookup_cbk]
0-gfs1-client-5: (thread_id is 140257819739904)add for open_slow lookup_cbk
path=/xmail_dedup/gfs1_000/1FA/1B1

[2013-06-06 13:34:59.579917] I [client3_1-fops.c:2235:client3_1_lookup_cbk]
0-gfs1-client-6: (thread_id is 140257819739904)add for open_slow lookup_cbk
path=/xmail_dedup/gfs1_000/1FA/1B1

[2013-06-06 13:35:04.890363] I [xdr-rpcclnt.c:92:rpc_request_to_xdr] 0-rpc:
(thread_id is 140257410492160 )add for open_slow xdr_callmsg_end

[2013-06-06 13:35:04.890366] I [client.c:110:client_submit_request]
0-gfs1-client-44: (thread_id is 140257785079552 )add for open_slow create
the xdr payload

 

 

We use the native client, with 5 glusterfs processes on one server. When the
performance issue appears, the CPU usage is as below:

 

top - 13:45:37 up 57 days, 14:04,  4 users,  load average: 6.98, 5.38, 4.67

Tasks: 712 total,   8 running, 704 sleeping,   0 stopped,   0 zombie

Cpu(s):  3.2%us, 63.5%sy,  0.0%ni, 31.5%id,  1.4%wa,  0.0%hi,  0.4%si,
0.0%st

Mem:  65956748k total, 55218008k used, 10738740k free,  3362972k buffers

Swap:  8388600k total,    41448k used,  8347152k free, 37370840k cached

 

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND

13905 root      20   0  554m 363m 2172 R 244.8  0.6   9:51.01 glusterfs

13650 root      20   0  766m 610m 2056 R 184.8  0.9  18:24.37 glusterfs

13898 root      20   0  545m 356m 2176 R 179.2  0.6  12:04.87 glusterfs

13919 root      20   0  547m 360m 2172 R 111.6  0.6   9:16.89 glusterfs

22460 root      20   0  486m 296m 2200 S 100.4  0.5 194:59.10 glusterfs 

13878 root      20   0  545m 361m 2176 R 99.7  0.6   8:35.88 glusterfs 

 

 

-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of Stephan von
Krawczynski
Sent: Thursday, June 06, 2013 10:07 PM
To: Pablo
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] GlusterFS (3.3.1) - performance issues - large
number of LOOKUP calls & high CPU usage

 

On Thu, 06 Jun 2013 10:39:21 -0300

Pablo <paa.listas at gmail.com> wrote:

 

> I have never tried this (in fact I'm just learning a bit more about how to
> administer a Gluster server), but you may find it useful.
>
> http://download.gluster.org/pub/gluster/glusterfs/doc/HA%20and%20Load%20Balancing%20for%20NFS%20and%20SMB.html
>
> Pablo.

 

The thing with this way of failover, though, is that you will likely corrupt a
file that is currently being written. If your NFS-server (gluster) node dies
while you are writing, your file will be corrupt. If you use native glusterfs
mounts it will not (or should not) be. This is why I consider the NFS server
feature nothing more than a bad hack: it does not deliver the safety that
glusterfs promises, even if you solve the failover problem somehow.

 

--

Regards,

Stephan

_______________________________________________

Gluster-users mailing list

Gluster-users at gluster.org

http://supercolony.gluster.org/mailman/listinfo/gluster-users

