[Bugs] [Bug 1146200] New: Memory is exhausted quickly when handling a message with multiple fragments in a single record

bugzilla at redhat.com bugzilla at redhat.com
Wed Sep 24 18:17:50 UTC 2014


https://bugzilla.redhat.com/show_bug.cgi?id=1146200

            Bug ID: 1146200
           Summary: Memory is exhausted quickly when handling a message
                    with multiple fragments in a single record
           Product: GlusterFS
           Version: 3.6.0
         Component: rpc
          Keywords: Triaged
          Severity: low
          Assignee: gluster-bugs at redhat.com
          Reporter: vbellur at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com,
                    jacke406 at 163.com, ndevos at redhat.com
        Depends On: 1136221, 1139598



Clone for GlusterFS 3.6

+++ This bug was initially created as a clone of Bug #1136221 +++

Description of problem:
    We construct certain RPC messages and send them to the IP and port on
which glusterfsd listens; the memory usage of the process grows quickly until
it is exhausted.

Version-Release number of selected component (if applicable):
    3.3.0, 3.4.1, 3.5.0


Steps to Reproduce:
1. Start the glusterfs services and note the IP and port on which one
glusterfsd process listens

2. Run the attached python script, which connects to that IP and port and
sends the four bytes 00 00 00 00 to the glusterfsd process (a minimal sketch
of such a script appears after these steps)

3. Watch the memory usage of the glusterfsd process; it grows quickly
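
The original attachment is not included in this mail; a minimal sketch of such
a reproducer, with placeholder HOST and PORT values standing in for the
brick's listening address, might look like:

    #!/usr/bin/env python
    # Sketch of the reproducer described above; HOST and PORT are
    # placeholders for the address one glusterfsd process listens on.
    import socket
    import time

    HOST = "127.0.0.1"   # assumed: the brick's IP
    PORT = 49152         # assumed: the brick's listening port

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    # Four zero bytes: an RPC fragment header whose last-fragment bit is
    # clear and whose length field is 0, i.e. an empty, non-final fragment.
    s.sendall(b"\x00\x00\x00\x00")
    # Keep the connection open while watching glusterfsd's memory usage.
    time.sleep(600)
    s.close()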

Actual results:
   Memory usage of the glusterfsd process grows quickly until it is exhausted

Expected results:
   Glusterfsd simply ignores the messages


Additional info:
   The bug appears to be in __socket_proto_state_machine(), which enters an
infinite loop allocating memory when it handles this particular message. The
message in question has multiple fragments in a single record, and some state
values are not reset when the next fragment is handled.
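
   For context, ONC RPC record marking (RFC 5531) prefixes every fragment
with a 4-byte header: the high bit is set on the last fragment of a record
and the remaining 31 bits give the fragment length. The four zero bytes sent
by the script thus form a zero-length fragment that announces more fragments
to follow. A small Python illustration (the fraghdr() helper is ours, not
part of the code base):

    import struct

    def fraghdr(length, last):
        # High bit set => last fragment of the record;
        # lower 31 bits => fragment length in bytes.
        return struct.pack(">I", (0x80000000 if last else 0) | length)

    # The bytes sent by the reproducer: length 0, last-fragment bit clear.
    assert fraghdr(0, False) == b"\x00\x00\x00\x00"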

   We tested the fix below and it seems to work:

          if (!RPC_LASTFRAG (in->fraghdr)) {
+                 /* Reset the pending vector so that the next fragment
+                  * header is read into in->fraghdr again. */
+                 in->pending_vector = in->vector;
+                 in->pending_vector->iov_base = &in->fraghdr;
+                 in->pending_vector->iov_len  = sizeof (in->fraghdr);
                  in->record_state = SP_STATE_READING_FRAGHDR;
                  break;
          }
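
   For reference, a legitimate record split into two fragments, which the fix
above aims to parse, could be built like this (a sketch reusing the
hypothetical fraghdr() helper from the illustration above, with made-up
payload bytes):

    # A record split into two fragments: a non-final fragment carrying the
    # first half of the payload, then a final fragment with the rest.
    payload = b"\x12\x34\x56\x78" * 4
    half = len(payload) // 2
    record = (fraghdr(half, False) + payload[:half] +
              fraghdr(len(payload) - half, True) + payload[half:])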

--- Additional comment from jiangkai on 2014-09-04 06:35:44 EDT ---

Handling "multi fragments in a single record" messages involves more issues
than imagined. The proposal is to reject such records:


 if (!RPC_LASTFRAG (in->fraghdr)) {
       gf_log (this->name, GF_LOG_ERROR,
               "multiple fragments per record not supported now");
       ret = -1;
       goto out;
 }
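
(Presumably the ret = -1 / goto out path makes the state machine bail out,
so the connection carrying the unsupported record is closed instead of
allocating memory indefinitely.)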

--- Additional comment from jiangkai on 2014-09-05 04:45:08 EDT ---

It happens after 3.4; 3.3.1 reports error messages instead.

It seems to have been introduced by change Icd9f256bb2fd8c6266a7abefdff16936b4f8922d,
which added SSL support.

--- Additional comment from Anand Avati on 2014-09-09 04:30:12 EDT ---

REVIEW: http://review.gluster.org/8662 (socket: Fixed parsing RPC records
containing multi fragments) posted (#1) for review on master by Gu Feng
(flygoast at 126.com)

--- Additional comment from Niels de Vos on 2014-09-11 05:54:58 EDT ---

http://review.gluster.org/8662 is for mainline bug 1139598; we can backport
the change once it has been merged in the master branch.


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1136221
[Bug 1136221] Memory is exhausted quickly when handling a message with
multiple fragments in a single record
https://bugzilla.redhat.com/show_bug.cgi?id=1139598
[Bug 1139598] Memory is exhausted quickly when handling a message with
multiple fragments in a single record