[Gluster-users] missing files on FUSE mount
Benedikt Kaleß
benedikt.kaless at forumZFD.de
Wed Nov 18 16:33:08 UTC 2020
Hi Martín,
my volume is a full replica.
I get messages like this in /var/log/glusterfs/bricks:
gluster-<volume>-brick.log:[2020-11-18 13:57:41.434070] I [MSGID:
115071] [server-rpc-fops_v2.c:1492:server4_create_cbk] 0-gv-ho-server:
CREATE info [{frame=194885}, {path=/user/Documents/HbciLog.txt},
{uuid_utoa=3d4eca3b-72f3-44da-956f-f072d16ed92e},
{bname=HbciLog.txt},
{client=CTX_ID:afd96693-cfd3-4fd6-9c00-b0c8916765fb-GRAPH_ID:0-PID:1134-HOST:cluster-ho-PC_NAME:gv-ho-
client-3-RECON_NO:-4}, {error-xlator=-}, {errno=17}, {error=file already
exists}]
gluster-<volume>-brick.log:[2020-11-18 13:57:41.465911] I [MSGID:
115071] [server-rpc-fops_v2.c:1492:server4_create_cbk] 0-gv-ho-server:
CREATE info [{frame=194893}, {path=/user/Documents/HbciLog.txt},
{uuid_utoa=3d4eca3b-72f3-44da-956f-f072d16ed92e},
{bname=HbciLog.txt},
{client=CTX_ID:afd96693-cfd3-4fd6-9c00-b0c8916765fb-GRAPH_ID:0-PID:1134-HOST:cluster-ho-PC_NAME:gv-ho-
client-3-RECON_NO:-4}, {error-xlator=-}, {errno=17}, {error=file already
exists}]
gluster-<volume>-brick.log:[2020-11-18 14:04:13.274582] I [MSGID:
115071] [server-rpc-fops_v2.c:1492:server4_create_cbk] 0-gv-ho-server:
CREATE info [{frame=212398}, {path=/user/Documents/HbciLog.txt},
{uuid_utoa=3d4eca3b-72f3-44da-956f-f072d16ed92e},
{bname=HbciLog.txt},
{client=CTX_ID:afd96693-cfd3-4fd6-9c00-b0c8916765fb-GRAPH_ID:0-PID:1134-HOST:cluster-ho-PC_NAME:gv-ho-
client-3-RECON_NO:-4}, {error-xlator=-}, {errno=17}, {error=file already
exists}]
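For what it's worth, errno=17 here is EEXIST ("file already exists"). A quick way to gauge how frequent these failing CREATEs are is to count them per brick log; a minimal sketch (it writes two sample lines to a temp file so it is self-contained; on a real node you would point the path at /var/log/glusterfs/bricks/*.log instead):

```shell
# Sketch: count CREATE calls that failed with errno=17 (EEXIST) in a brick log.
# Sample lines stand in for the real /var/log/glusterfs/bricks/*.log content.
logdir=$(mktemp -d)
cat > "$logdir/gluster-vol-brick.log" <<'EOF'
[2020-11-18 13:57:41.434070] I [MSGID: 115071] CREATE info [{errno=17}, {error=file already exists}]
[2020-11-18 13:57:41.465911] I [MSGID: 115071] CREATE info [{errno=17}, {error=file already exists}]
EOF
# Count matching lines; on a real node use /var/log/glusterfs/bricks/*.log
count=$(grep -c 'errno=17' "$logdir/gluster-vol-brick.log")
echo "EEXIST creates: $count"
rm -rf "$logdir"
```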
Best
Bene
On 18.11.20 at 17:25, Martín Lorenzo wrote:
> Hi Benedikt,
> You are right, disabling performance.readdir-ahead didn't solve the
> issue for me.
> It took a little longer to find out, and I wasn't sure if the errors
> were already there before turning off the setting.
> Is your volume full replica or are you using an arbiter?
>
>
>
> On Wed, Nov 18, 2020 at 1:16 PM Benedikt Kaleß
> <benedikt.kaless at forumzfd.de> wrote:
>
> Dear Martin,
>
> Do you have any new observations regarding this issue?
>
> I just found your thread. This error of missing files on a FUSE
> mount is also appearing on my setup with 3 replicated bricks on
> gluster 8.2.
>
> I set performance.readdir-ahead: off, but the error still occurs
> quite frequently.
>
> Best regards
>
> Bene
>
> On 04.11.20 at 12:07, Martín Lorenzo wrote:
>> Thanks Mahdi, I'll try that option, I hope it doesn't come with a
>> big performance penalty.
>> I recently upgraded to 7.8 on Strahil's advice, but before that, I
>> had the feeling that restarting the brick processes on one node
>> in particular (the one with the most user connections) helped a lot.
>>
>> I've set up an experiment/workaround on a frequently used dir. A
>> cron script creates a directory there every minute, sleeps 2
>> seconds and removes it. At the same time, on a different node
>> /mount, I am listing (long format) the same base directory every
>> minute. On the latter, in the mount logs, I am constantly getting
>> this message every 2-3 minutes (checkglus12 is the dir I am
>> creating/removing):
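The probe described above can be sketched roughly like this (a temp directory stands in for the watched base directory on the FUSE mount; the real script runs from cron every minute):

```shell
# Sketch of the probe: create a directory, wait 2 seconds, remove it,
# while another node runs a long-format listing of the same base dir.
# $base is a stand-in for the watched directory on the FUSE mount.
base=$(mktemp -d)
probe="$base/checkglus12"   # same name as the dir in the log message
mkdir "$probe"
sleep 2
ls -l "$base" > /dev/null   # what the listing node's cron job does
rmdir "$probe"
```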
>> [2020-11-04 09:53:02.087991] I [MSGID: 109063]
>> [dht-layout.c:647:dht_layout_normalize] 0-tapeless-dht: Found
>> anomalies in /interno/checkglus12 (gfid =
>> 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
>>
>> The other issue I found on the logs, is that I find self-heal
>> entries all the time, during "normal" operations. Here is an
>> excerpt (grepping 'heal')
>> [2020-11-03 21:33:39.189343] I [MSGID: 108026]
>> [afr-self-heal-common.c:1744:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata selfheal on
>> c1e69788-7211-40d4-a38c-8d21786b0438. sources=0 [1] sinks=2
>> [2020-11-03 21:33:47.870217] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata selfheal on
>> df127689-98ba-4a35-9bb7-32067de57615
>> [2020-11-03 21:33:47.875594] I [MSGID: 108026]
>> [afr-self-heal-common.c:1744:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata selfheal on
>> df127689-98ba-4a35-9bb7-32067de57615. sources=0 [1] sinks=2
>> [2020-11-03 21:50:01.331224] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata selfheal on
>> 3c316533-5f47-4267-ac19-58b3be305b94
>> [2020-11-03 21:50:01.340247] I [MSGID: 108026]
>> [afr-self-heal-common.c:1744:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata selfheal on
>> 3c316533-5f47-4267-ac19-58b3be305b94. sources=0 [1] sinks=2
>> [2020-11-03 21:52:45.269751] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata selfheal on
>> f2e404e2-0550-4a2e-9a79-1724e7e4c8f0
>> Thanks again for your help
>> Regards,
>> Martin
>>
>> On Wed, Nov 4, 2020 at 4:59 AM Mahdi Adnan <mahdi at sysmin.io> wrote:
>>
>> Hello Martín,
>>
>> Try to disable "performance.readdir-ahead", we had a similar
>> issue, and disabling "performance.readdir-ahead" solved our
>> issue.
>> gluster volume set tapeless performance.readdir-ahead off
>>
>> On Tue, Oct 27, 2020 at 8:23 PM Martín Lorenzo
>> <mlorenzo at gmail.com> wrote:
>>
>> Hi Strahil, today we have the same number of clients on all
>> nodes, but the problem persists. I have the impression
>> that it gets more frequent as the server capacity fills
>> up; we are now having at least one incident per day.
>> Regards,
>> Martin
>>
>> On Mon, Oct 26, 2020 at 8:09 AM Martín Lorenzo
>> <mlorenzo at gmail.com> wrote:
>>
>> Hi Strahil, thanks for your reply.
>> I had one node with 13 clients, the rest with 14.
>> I've just restarted the services on that node; now I
>> have 14, let's see what happens.
>> Regarding the samba repos, I wasn't aware of that; I
>> was using the CentOS main repo. I'll check them out.
>> Best Regards,
>> Martin
>>
>>
>> On Tue, Oct 20, 2020 at 3:19 PM Strahil Nikolov
>> <hunter86_bg at yahoo.com> wrote:
>>
>> Do you have the same number of clients connected
>> to each brick?
>>
>> I guess something like this can show it:
>>
>> gluster volume status VOL clients
>> gluster volume status VOL client-list
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Tuesday, 20 October 2020 at 15:41:45 GMT+3,
>> Martín Lorenzo <mlorenzo at gmail.com> wrote:
>>
>> Hi, I have the following problem: I have a
>> distributed-replicated cluster set up with Samba
>> and CTDB over FUSE mount points.
>> I am seeing inconsistencies across the FUSE
>> mounts; users report that files disappear
>> after being copied/moved. When I look at the
>> mount points on each node, they don't display
>> the same data.
>>
>> #### faulty mount point####
>> [root@gluster6 ARRIBA GENTE martes 20 de octubre]# ll
>> ls: cannot access PANEO VUELTA A CLASES CON
>> TAPABOCAS.mpg: No such file or directory
>> ls: cannot access PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg: No such file or directory
>> total 633723
>> drwxr-xr-x. 5 arribagente PN 4096 Oct 19
>> 10:52 COMERCIAL AG martes 20 de octubre
>> -rw-r--r--. 1 arribagente PN 648927236 Jun 3
>> 07:16 PANEO FACHADA PALACIO LEGISLATIVO DRONE DIA
>> Y NOCHE.mpg
>> -?????????? ? ? ? ?
>> ? PANEO NIÑOS ESCUELAS CON TAPABOCAS.mpg
>> -?????????? ? ? ? ?
>> ? PANEO VUELTA A CLASES CON TAPABOCAS.mpg
>>
>>
>> ###healthy mount point###
>> [root@gluster7 ARRIBA GENTE martes 20 de octubre]# ll
>> total 3435596
>> drwxr-xr-x. 5 arribagente PN 4096 Oct 19
>> 10:52 COMERCIAL AG martes 20 de octubre
>> -rw-r--r--. 1 arribagente PN 648927236 Jun 3
>> 07:16 PANEO FACHADA PALACIO LEGISLATIVO DRONE DIA
>> Y NOCHE.mpg
>> -rw-r--r--. 1 arribagente PN 2084415492 Aug 18
>> 09:14 PANEO NIÑOS ESCUELAS CON TAPABOCAS.mpg
>> -rw-r--r--. 1 arribagente PN 784701444 Sep 4
>> 07:23 PANEO VUELTA A CLASES CON TAPABOCAS.mpg
>>
>> - So far the only way to solve this is to create
>> a directory in the healthy mount point, on the
>> same path:
>> [root@gluster7 ARRIBA GENTE martes 20 de
>> octubre]# mkdir hola
>>
>> - When you refresh the other mount point, the
>> issue is resolved:
>> [root@gluster6 ARRIBA GENTE martes 20 de octubre]# ll
>> total 3435600
>> drwxr-xr-x. 5 arribagente PN 4096 Oct 19
>> 10:52 COMERCIAL AG martes 20 de octubre
>> drwxr-xr-x. 2 root root 4096 Oct 20
>> 08:45 hola
>> -rw-r--r--. 1 arribagente PN 648927236 Jun 3
>> 07:16 PANEO FACHADA PALACIO LEGISLATIVO DRONE DIA
>> Y NOCHE.mpg
>> -rw-r--r--. 1 arribagente PN 2084415492 Aug 18
>> 09:14 PANEO NIÑOS ESCUELAS CON TAPABOCAS.mpg
>> -rw-r--r--. 1 arribagente PN 784701444 Sep 4
>> 07:23 PANEO VUELTA A CLASES CON TAPABOCAS.mpg
>>
>> Interestingly, the error occurs on the mount
>> point where the files were copied. They don't
>> show up as pending heal entries. I have around 15
>> people using the mounts over Samba, and I get
>> this issue reported about every two days.
>>
>> I have an older cluster with similar issues:
>> a different gluster version, but a very similar
>> topology (4 bricks, initially two bricks, then
>> expanded).
>> Please note, the bricks aren't the same size
>> (but their replicas are), so my other suspicion
>> is that rebalancing has something to do with it.
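Both suspicions above (pending heals, rebalance state) can be cross-checked with the standard CLI. This is an ops fragment that needs a live cluster, using the volume name from the details below:

```shell
# Ops fragment; requires a live Gluster cluster, volume name "tapeless".
gluster volume heal tapeless info          # entries pending heal, per brick
gluster volume rebalance tapeless status   # rebalance progress and state
```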
>>
>> I'm trying to reproduce it on a small
>> virtualized cluster, so far with no results.
>>
>> Here are the cluster details:
>> four nodes, replica 2, plus one arbiter node hosting 2
>> bricks.
>>
>> I have 2 bricks with ~20 TB capacity and the
>> other pair is ~48 TB.
>> Volume Name: tapeless
>> Type: Distributed-Replicate
>> Volume ID: 53bfa86d-b390-496b-bbd7-c4bba625c956
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1:
>> gluster6.glustersaeta.net:/data/glusterfs/tapeless/brick_6/brick
>> Brick2:
>> gluster7.glustersaeta.net:/data/glusterfs/tapeless/brick_7/brick
>> Brick3:
>> kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_1a/brick
>> (arbiter)
>> Brick4:
>> gluster12.glustersaeta.net:/data/glusterfs/tapeless/brick_12/brick
>> Brick5:
>> gluster13.glustersaeta.net:/data/glusterfs/tapeless/brick_13/brick
>> Brick6:
>> kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_2a/brick
>> (arbiter)
>> Options Reconfigured:
>> features.quota-deem-statfs: on
>> performance.client-io-threads: on
>> nfs.disable: on
>> transport.address-family: inet
>> features.quota: on
>> features.inode-quota: on
>> features.cache-invalidation: on
>> features.cache-invalidation-timeout: 600
>> performance.cache-samba-metadata: on
>> performance.stat-prefetch: on
>> performance.cache-invalidation: on
>> performance.md-cache-timeout: 600
>> network.inode-lru-limit: 200000
>> performance.nl-cache: on
>> performance.nl-cache-timeout: 600
>> performance.readdir-ahead: on
>> performance.parallel-readdir: on
>> performance.cache-size: 1GB
>> client.event-threads: 4
>> server.event-threads: 4
>> performance.normal-prio-threads: 16
>> performance.io-thread-count: 32
>> performance.write-behind-window-size: 8MB
>> storage.batch-fsync-delay-usec: 0
>> cluster.data-self-heal: on
>> cluster.metadata-self-heal: on
>> cluster.entry-self-heal: on
>> cluster.self-heal-daemon: on
>> performance.write-behind: on
>> performance.open-behind: on
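For reference, the readdir-related options in the list above are the ones discussed earlier in this thread. Note that performance.parallel-readdir is documented to depend on performance.readdir-ahead being enabled, so if readdir-ahead is turned off it is worth toggling both. A sketch, requiring a live cluster:

```shell
# Ops fragment; requires a live Gluster cluster, volume name "tapeless".
# Toggle the readdir caching pair together (parallel-readdir builds on
# readdir-ahead).
gluster volume set tapeless performance.parallel-readdir off
gluster volume set tapeless performance.readdir-ahead off
gluster volume get tapeless performance.readdir-ahead   # verify the change
```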
>>
>> Log section from the faulty mount point. I think the
>> [File exists] entries are from people trying to
>> copy the missing files over and over:
>>
>>
>> [2020-10-20 11:31:03.034220] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:32:06.684329] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:33:02.191863] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:34:05.841608] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:35:20.736633] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata
>> selfheal on 958dbd7a-3cd7-4b66-9038-76e5c5669644
>> [2020-10-20 11:35:20.741213] I [MSGID: 108026]
>> [afr-self-heal-common.c:1750:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata
>> selfheal on 958dbd7a-3cd7-4b66-9038-76e5c5669644.
>> sources=[0] 1 sinks=2
>> [2020-10-20 11:35:04.278043] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> The message "I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata
>> selfheal on 958dbd7a-3cd7-4b66-9038-76e5c5669644"
>> repeated 3 times between [2020-10-20
>> 11:35:20.736633] and [2020-10-20 11:35:26.733298]
>> The message "I [MSGID: 108026]
>> [afr-self-heal-common.c:1750:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata
>> selfheal on 958dbd7a-3cd7-4b66-9038-76e5c5669644.
>> sources=[0] 1 sinks=2 " repeated 3 times between
>> [2020-10-20 11:35:20.741213] and [2020-10-20
>> 11:35:26.737629]
>> [2020-10-20 11:36:02.548350] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:36:57.365537] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata
>> selfheal on f4907af2-1775-4c46-89b5-e9776df6d5c7
>> [2020-10-20 11:36:57.370824] I [MSGID: 108026]
>> [afr-self-heal-common.c:1750:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata
>> selfheal on f4907af2-1775-4c46-89b5-e9776df6d5c7.
>> sources=[0] 1 sinks=2
>> [2020-10-20 11:37:01.363925] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-1: performing metadata
>> selfheal on f4907af2-1775-4c46-89b5-e9776df6d5c7
>> [2020-10-20 11:37:01.368069] I [MSGID: 108026]
>> [afr-self-heal-common.c:1750:afr_log_selfheal]
>> 0-tapeless-replicate-1: Completed metadata
>> selfheal on f4907af2-1775-4c46-89b5-e9776df6d5c7.
>> sources=[0] 1 sinks=2
>> The message "I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0" repeated 3 times
>> between [2020-10-20 11:36:02.548350] and
>> [2020-10-20 11:37:36.389208]
>> [2020-10-20 11:38:07.367113] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:39:01.595981] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:40:04.184899] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:41:07.833470] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:42:01.871621] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:43:04.399194] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:44:04.558647] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:44:15.953600] W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-5: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]
>> [2020-10-20 11:44:15.953819] W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-2: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]
>> [2020-10-20 11:44:15.954072] W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-3: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]
>> [2020-10-20 11:44:15.954680] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043294:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> [2020-10-20 11:44:15.963175] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043306:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> [2020-10-20 11:44:15.971839] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043318:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> [2020-10-20 11:44:16.010242] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043403:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> [2020-10-20 11:44:16.020291] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043415:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> [2020-10-20 11:44:16.028857] W
>> [fuse-bridge.c:2606:fuse_create_cbk]
>> 0-glusterfs-fuse: 31043427:
>> /PN/arribagente/PLAYER 2020/ARRIBA GENTE martes
>> 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg => -1 (File exists)
>> The message "W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-5: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]" repeated 5 times
>> between [2020-10-20 11:44:15.953600] and
>> [2020-10-20 11:44:16.027785]
>> The message "W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-2: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]" repeated 5 times
>> between [2020-10-20 11:44:15.953819] and
>> [2020-10-20 11:44:16.028331]
>> The message "W [MSGID: 114031]
>> [client-rpc-fops_v2.c:2114:client4_0_create_cbk]
>> 0-tapeless-client-3: remote operation failed.
>> Path: /PN/arribagente/PLAYER 2020/ARRIBA GENTE
>> martes 20 de octubre/PANEO NIÑOS ESCUELAS CON
>> TAPABOCAS.mpg [File exists]" repeated 5 times
>> between [2020-10-20 11:44:15.954072] and
>> [2020-10-20 11:44:16.028355]
>> [2020-10-20 11:45:03.572106] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:45:40.080010] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> The message "I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0" repeated 2 times
>> between [2020-10-20 11:45:40.080010] and
>> [2020-10-20 11:47:10.871801]
>> [2020-10-20 11:48:03.913129] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:49:05.082165] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:50:06.725722] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:51:04.254685] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:52:07.903617] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:53:01.420513] I [MSGID: 108026]
>> [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
>> 0-tapeless-replicate-0: performing metadata
>> selfheal on 3c316533-5f47-4267-ac19-58b3be305b94
>> [2020-10-20 11:53:01.428657] I [MSGID: 108026]
>> [afr-self-heal-common.c:1750:afr_log_selfheal]
>> 0-tapeless-replicate-0: Completed metadata
>> selfheal on 3c316533-5f47-4267-ac19-58b3be305b94.
>> sources=[0] sinks=1 2
>> The message "I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0" repeated 3 times
>> between [2020-10-20 11:52:07.903617] and
>> [2020-10-20 11:53:12.037835]
>> [2020-10-20 11:54:02.208354] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:55:04.360284] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:56:09.508092] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:57:02.580970] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>> [2020-10-20 11:58:06.230698] I [MSGID: 108031]
>> [afr-common.c:2581:afr_local_discovery_cbk]
>> 0-tapeless-replicate-0: selecting local
>> read_child tapeless-client-0
>>
>>
>> Let me know if you need something else. Thank you
>> for your support!
>> Best Regards,
>> Martin Lorenzo
>>
>>
>> ________
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>>
>>
>> --
>> Respectfully
>> Mahdi
>>
>>
>
--
forumZFD
Entschieden für Frieden|Committed to Peace
Benedikt Kaleß
Leiter Team IT|Head team IT
Forum Ziviler Friedensdienst e.V.|Forum Civil Peace Service
Am Kölner Brett 8 | 50825 Köln | Germany
Tel 0221 91273233 | Fax 0221 91273299 |
http://www.forumZFD.de
Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
Oliver Knabe (Vorsitz|Chair), Sonja Wiekenberg-Mlalandle, Alexander Mauz
VR 17651 Amtsgericht Köln
Spenden|Donations: IBAN DE37 3702 0500 0008 2401 01 BIC BFSWDE33XXX