[Gluster-users] Files on Brick not showing up in ls command

Nithya Balachandran nbalacha at redhat.com
Thu Feb 14 05:03:14 UTC 2019


Let me know if you still see problems.

Thanks,
Nithya

On Thu, 14 Feb 2019 at 09:05, Patrick Nixon <pnixon at gmail.com> wrote:

> Thanks for the follow-up.  After reviewing the logs Vijay mentioned,
> nothing useful was found.
>
> I removed and wiped the brick tonight.  I'm in the process of
> rebalancing onto the new brick and will resync the files onto the full
> gluster volume when that completes.
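>
> In case it's useful to anyone following along, this is roughly the
> sequence I'm using (gfsNN below is a placeholder for the affected node,
> not the exact hostname from my cluster):
>
>    # gfsNN = the affected node (placeholder)
>    # take the suspect brick out of the distribute volume
>    gluster volume remove-brick gfs gfsNN:/data/brick1/gv0 start
>    gluster volume remove-brick gfs gfsNN:/data/brick1/gv0 status
>    gluster volume remove-brick gfs gfsNN:/data/brick1/gv0 commit
>
>    # add the wiped brick back, then rebalance the layout and data
>    gluster volume add-brick gfs gfsNN:/data/brick1/gv0
>    gluster volume rebalance gfs start
>    gluster volume rebalance gfs status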
>
> On Wed, Feb 13, 2019, 10:28 PM Nithya Balachandran <nbalacha at redhat.com>
> wrote:
>
>>
>>
>> On Tue, 12 Feb 2019 at 08:30, Patrick Nixon <pnixon at gmail.com> wrote:
>>
>>> The files are being written via the glusterfs mount (and read on the
>>> same client and on a different client). I try not to do anything on the
>>> nodes directly because I understand that can cause weirdness.  As far as
>>> I can tell, there haven't been any network disconnections, but I'll
>>> review the client log to see if there is any indication.  I don't recall
>>> any issues the last time I was in there.
>>>
>>>
>> If I understand correctly, the files are written to the volume from the
>> client, but when the same client tries to list them again, those entries
>> are not listed. Is that right?
>>
>> Do the files exist on the bricks?
>> Would you be willing to provide a tcpdump of the client when doing this?
>> If yes, please do the following:
>>
>> On the client system:
>>
>>    - tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
>>    - Copy the files to the volume using the client
>>    - List the contents of the directory in which the files should exist
>>    - Stop the tcpdump capture and send it to us.
>>
>>
>> Also provide the name of the directory and the missing files.
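>>
>> For reference, the whole capture sequence would look something like this
>> on the client (the mount point and file name below are just placeholders
>> for your own):
>>
>>    # terminal 1: start the capture
>>    tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
>>
>>    # terminal 2: reproduce the problem through the glusterfs mount
>>    # (/mnt/gfs and testfile are placeholders)
>>    cp /tmp/testfile /mnt/gfs/somedir/
>>    ls -l /mnt/gfs/somedir/
>>
>>    # back in terminal 1: stop tcpdump with Ctrl-C, then send us
>>    # /var/tmp/dirls.pcap along with the directory and file names used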
>>
>> Regards,
>> Nithya
>>
>>
>>
>>
>>
>>> Thanks for the response!
>>>
>>> On Mon, Feb 11, 2019 at 7:35 PM Vijay Bellur <vbellur at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Feb 10, 2019 at 5:20 PM Patrick Nixon <pnixon at gmail.com> wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> I have an 8-node distribute volume setup.  I have one node that
>>>>> accepts files and stores them on disk, but when doing an ls, none of
>>>>> the files on that specific node are being returned.
>>>>>
>>>>> Can someone give some guidance on the best place to start
>>>>> troubleshooting this?
>>>>>
>>>>
>>>>
>>>> Are the files being written from a glusterfs mount? If so, it might be
>>>> worth checking whether the network connectivity is fine between the
>>>> client (the one running the ls) and the server/brick that contains these
>>>> files. You could check the client log file for any messages related to
>>>> rpc disconnections.
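>>>>
>>>> For example, something along these lines on the client (assuming the
>>>> volume is mounted at /mnt/gfs, in which case the fuse client log would
>>>> typically be /var/log/glusterfs/mnt-gfs.log):
>>>>
>>>>    # look for rpc/connection errors around the time of the writes
>>>>    grep -iE 'disconnect|connection .* failed' /var/log/glusterfs/mnt-gfs.log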
>>>>
>>>> Regards,
>>>> Vijay
>>>>
>>>>
>>>>> # gluster volume info
>>>>>
>>>>> Volume Name: gfs
>>>>> Type: Distribute
>>>>> Volume ID: 44c8c4f1-2dfb-4c03-9bca-d1ae4f314a78
>>>>> Status: Started
>>>>> Snapshot Count: 0
>>>>> Number of Bricks: 8
>>>>> Transport-type: tcp
>>>>> Bricks:
>>>>> Brick1: gfs01:/data/brick1/gv0
>>>>> Brick2: gfs02:/data/brick1/gv0
>>>>> Brick3: gfs03:/data/brick1/gv0
>>>>> Brick4: gfs05:/data/brick1/gv0
>>>>> Brick5: gfs06:/data/brick1/gv0
>>>>> Brick6: gfs07:/data/brick1/gv0
>>>>> Brick7: gfs08:/data/brick1/gv0
>>>>> Brick8: gfs04:/data/brick1/gv0
>>>>> Options Reconfigured:
>>>>> cluster.min-free-disk: 10%
>>>>> nfs.disable: on
>>>>> performance.readdir-ahead: on
>>>>>
>>>>> # gluster peer status
>>>>> Number of Peers: 7
>>>>> Hostname: gfs03
>>>>> Uuid: 4a2d4deb-f8dd-49fc-a2ab-74e39dc25e20
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs08
>>>>> Uuid: 17705b3a-ed6f-4123-8e2e-4dc5ab6d807d
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs07
>>>>> Uuid: dd699f55-1a27-4e51-b864-b4600d630732
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs06
>>>>> Uuid: 8eb2a965-2c1e-4a64-b5b5-b7b7136ddede
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs04
>>>>> Uuid: cd866191-f767-40d0-bf7b-81ca0bc032b7
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs02
>>>>> Uuid: 6864c6ac-6ff4-423a-ae3c-f5fd25621851
>>>>> State: Peer in Cluster (Connected)
>>>>> Hostname: gfs05
>>>>> Uuid: dcecb55a-87b8-4441-ab09-b52e485e5f62
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> All gluster nodes are running glusterfs 4.0.2
>>>>> The clients accessing the files are also running glusterfs 4.0.2
>>>>> Both are Ubuntu
>>>>>
>>>>> Thanks!

