[Gluster-devel] glusterfsd memory leak issue found after enable ssl

Zhou, Cynthia (NSB - CN/Hangzhou) cynthia.zhou at nokia-sbell.com
Mon Apr 22 05:15:31 UTC 2019


Why is there no BIO_free() called in ssl_teardown_connection() then?
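
One relevant detail, per the OpenSSL documentation: SSL_free() also frees the read and write BIOs attached to the SSL object, so an explicit BIO_free() is usually unnecessary once ownership has been handed over via SSL_set_bio(). A minimal illustration (not glusterfs code):

#include <openssl/bio.h>
#include <openssl/ssl.h>

/* SSL_set_bio() transfers ownership of the BIO to the SSL object,
 * so SSL_free() releases it; an explicit BIO_free() would only be
 * needed for an extra reference still held by the caller. */
static void
teardown_example(SSL_CTX *ctx, int sockfd)
{
    SSL *ssl = SSL_new(ctx);
    BIO *bio = BIO_new_socket(sockfd, BIO_NOCLOSE);
    SSL_set_bio(ssl, bio, bio); /* ssl now owns bio for read and write */
    /* ... handshake and I/O would happen here ... */
    SSL_free(ssl);              /* frees ssl and drops bio's refcount */
}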

cynthia

From: Milind Changire <mchangir at redhat.com>
Sent: Monday, April 22, 2019 10:21 AM
To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com>
Cc: Atin Mukherjee <amukherj at redhat.com>; gluster-devel at gluster.org
Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

After patch 22334 <https://review.gluster.org/c/glusterfs/+/22334>, priv->ssl_ctx is now maintained per socket connection and is no longer shared.
So you might want to free priv->ssl_ctx as well (with SSL_CTX_free(), since it is an SSL_CTX object rather than an SSL object) and set priv->ssl_ctx to NULL.

There might be some strings that are duplicated (gf_strdup()) via the socket_init() code path. Please take a look at those as well.
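
A minimal sketch of that extra cleanup in fini(), assuming priv->ssl_ctx is the per-connection SSL_CTX from patch 22334 and that ssl_own_cert is one of the gf_strdup()ed fields in socket_private_t (verify the field names against your tree):

/* Sketch only: SSL_CTX_free() is the matching release call for an
 * SSL_CTX object (the counterpart of SSL_CTX_new()); SSL_free() is
 * only for SSL objects. */
if (priv->ssl_ctx) {
    SSL_CTX_free(priv->ssl_ctx);
    priv->ssl_ctx = NULL;
}
/* Likewise for any string gf_strdup()ed in socket_init(), e.g.: */
if (priv->ssl_own_cert) {
    GF_FREE(priv->ssl_own_cert);
    priv->ssl_own_cert = NULL;
}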

Sorry about that. I missed it.


On Mon, Apr 22, 2019 at 7:25 AM Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com> wrote:

Hi,
From my code study it seems priv->ssl_ssl is not properly released. I made a patch, and the glusterfsd memory leak is alleviated with it, but something else is still leaking, and I have no clue about the remaining leak points.

--- a/rpc/rpc-transport/socket/src/socket.c
+++ b/rpc/rpc-transport/socket/src/socket.c
@@ -1019,7 +1019,16 @@ static void __socket_reset(rpc_transport_t *this) {
   memset(&priv->incoming, 0, sizeof(priv->incoming));
   event_unregister_close(this->ctx->event_pool, priv->sock, priv->idx);
-
+  if (priv->use_ssl && priv->ssl_ssl)
+  {
+    gf_log(this->name, GF_LOG_TRACE,
+           "clear and reset for socket(%d), free ssl",
+           priv->sock);
+    SSL_shutdown(priv->ssl_ssl);
+    SSL_clear(priv->ssl_ssl);
+    SSL_free(priv->ssl_ssl);
+    priv->ssl_ssl = NULL;
+  }
   priv->sock = -1;
   priv->idx = -1;
   priv->connected = -1;
@@ -4238,6 +4250,16 @@ void fini(rpc_transport_t *this) {
     pthread_mutex_destroy(&priv->out_lock);
     pthread_mutex_destroy(&priv->cond_lock);
     pthread_cond_destroy(&priv->cond);
+    if (priv->use_ssl && priv->ssl_ssl)
+    {
+        gf_log(this->name, GF_LOG_TRACE,
+               "clear and reset for socket(%d), free ssl",
+               priv->sock);
+        SSL_shutdown(priv->ssl_ssl);
+        SSL_clear(priv->ssl_ssl);
+        SSL_free(priv->ssl_ssl);
+        priv->ssl_ssl = NULL;
+    }
     if (priv->ssl_private_key) {
       GF_FREE(priv->ssl_private_key);
     }

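Since the same teardown block now appears in both __socket_reset() and fini(), a small helper could avoid the duplication. A sketch under the same assumptions as the patch above (ssl_free_connection is a hypothetical name, not a function in the tree):

/* Hypothetical helper: releases the per-connection SSL object.
 * SSL_free() also frees the BIOs attached to the SSL, and the
 * SSL_clear() call is not strictly required before freeing. */
static void
ssl_free_connection(socket_private_t *priv)
{
    if (priv->use_ssl && priv->ssl_ssl) {
        SSL_shutdown(priv->ssl_ssl);
        SSL_free(priv->ssl_ssl);
        priv->ssl_ssl = NULL;
    }
}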

From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: Thursday, April 18, 2019 5:31 PM
To: 'Atin Mukherjee' <amukherj at redhat.com>
Cc: 'Raghavendra Gowdappa' <rgowdapp at redhat.com>; 'gluster-devel at gluster.org' <gluster-devel at gluster.org>
Subject: RE: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

We scanned it with a memory-leak tool; the output is below. We suspect some OpenSSL library allocations are not being properly freed by the glusterfs code.
er+0x2af [libglusterfs.so.0.0.1]\n\t\tstart_thread+0xda [libpthread-2.27.so]'
        13580 bytes in 175 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]'
        232904 bytes in 14 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
[15:41:56] Top 10 stacks with outstanding allocations:
        8792 bytes in 14 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
        9408 bytes in 42 allocations from stack
                b'CRYPTO_realloc+0x4d [libcrypto.so.1.0.2p]'
        9723 bytes in 14 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
        10696 bytes in 21 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
        11319 bytes in 602 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
        11431 bytes in 518 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'
        11704 bytes in 371 allocations from stack
                b'CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]\n\t\t[unknown]'


cynthia
From: Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: Thursday, April 18, 2019 5:25 PM
To: 'Atin Mukherjee' <amukherj at redhat.com>
Cc: Raghavendra Gowdappa <rgowdapp at redhat.com>; gluster-devel at gluster.org
Subject: RE: [Gluster-devel] glusterfsd memory leak issue found after enable ssl

I've tested on glusterfs 3.12.15 and glusterfs 5.5; both have this issue. After enabling the TLS/SSL socket option, executing gluster v heal <vol-name> info triggers glfsheal to connect to the glusterfsd process, which causes the glusterfsd process to leak memory. Could you please try it in your environment?

cynthia


From: Atin Mukherjee <amukherj at redhat.com>
Sent: Thursday, April 18, 2019 1:19 PM
To: Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com>
Cc: Raghavendra Gowdappa <rgowdapp at redhat.com>; gluster-devel at gluster.org
Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enable ssl



On Wed, 17 Apr 2019 at 10:53, Zhou, Cynthia (NSB - CN/Hangzhou) <cynthia.zhou at nokia-sbell.com> wrote:
Hi,
In my recent tests, I found a very severe glusterfsd memory leak when the socket SSL option is enabled.

What gluster version are you testing? Would you be able to continue your investigation and share the root cause?

--
- Atin (atinm)
_______________________________________________
Gluster-devel mailing list
Gluster-devel at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel


--
Milind

