[Bugs] [Bug 1438966] Multiple bricks WILL crash after TCP port probing

bugzilla at redhat.com bugzilla at redhat.com
Thu Apr 13 17:33:24 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1438966

Skyler Vock <skyler.vock at eschat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|needinfo?(skyler.vock at eschat.com) |



--- Comment #3 from Skyler Vock <skyler.vock at eschat.com> ---
(In reply to Niels de Vos from comment #1)
> I've been running a test on glusterfs-3.10.1-1.el7.x86_64 for over half a
> day now, and have not noticed any problem yet. This is what is running (and
> will be running for a longer period):
> 
> - single brick volume
> - mounted with fuse, and recursively cp'ing and rm'ing /usr in a loop
> - running 'nmap -p49152 127.0.0.1' in a loop
> 
> Is this bug still reproducible with the current version of Gluster for you?

Niels,

This is a difficult bug to replicate. We see the outage in production anywhere
from once a week to once every three weeks. For us, the outage occurs only
during a

'find <path> -depth -type f -name "*" -mtime +31 -delete' 

command run from a backup cron job. To speed up the test process, we have a
script that repeatedly builds the environment, runs the backup cron command,
and then destroys the environment. While this is running on a gluster client,
we are repeatedly executing

'while true; do nmap -Pn -sT -p49150-49160 <ip>; done' 

on a node outside of the gluster architecture, with no glusterfs packages
installed. This is similar to what a polling system such as OpenNMS would do.
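
To make the test concrete, here is a rough sketch of the client-side loop (the
mount point and subdirectory are placeholders, not our production paths):

    # on the gluster client: build the environment, run the backup cron
    # command, then destroy the environment, in a loop
    while true; do
        cp -r /usr /mnt/repvol/archive
        find /mnt/repvol/archive -depth -type f -name "*" -mtime +31 -delete
        rm -rf /mnt/repvol/archive
    done

The nmap loop above runs in parallel on the outside node for the whole test.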

- one volume
- replicated
- two clients
- three servers, one brick per server
- mounted with fuse (see the command sketch below)
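
For completeness, a hypothetical set of commands matching that layout (the
hostnames, brick paths, and volume name are assumptions, not our real ones):

    # on one of the three servers
    gluster volume create repvol replica 3 \
        server1:/bricks/repvol server2:/bricks/repvol server3:/bricks/repvol
    gluster volume start repvol

    # on each of the two clients (FUSE mount)
    mount -t glusterfs server1:/repvol /mnt/repvol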

Our test takes anywhere between one hour and 24 hours to reproduce the issue.
We can reproduce it consistently, and we continue to see this bug in
production.

Let me know if you have other questions.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

