<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi all,</p>
<p>We set the following options today:</p>
<p>performance.read-ahead=on<br>
performance.write-behind-window-size=4MB<br>
performance.cache-max-file-size=10<br>
performance.write-behind=off<br>
performance.cache-invalidation=on<br>
server.event-threads=4<br>
client.event-threads=4<br>
performance.parallel-readdir=on<br>
performance.readdir-ahead=on<br>
performance.nl-cache-timeout=600<br>
performance.nl-cache=on<br>
network.inode-lru-limit=200000<br>
performance.md-cache-timeout=600<br>
performance.stat-prefetch=on<br>
performance.cache-samba-metadata=on<br>
features.cache-invalidation-timeout=600<br>
features.cache-invalidation=on<br>
nfs.disable=on<br>
cluster.self-heal-daemon=enable<br>
cluster.data-self-heal=on<br>
cluster.metadata-self-heal=on<br>
cluster.entry-self-heal=on<br>
cluster.force-migration=on<br>
network.ping-timeout=10<br>
performance.cache-size=512MB<br>
</p>
<p>and for the past two hours, no ??? files have appeared.</p>
<p>Is it possible that an update of gluster resets options to their defaults?
<br>
</p>
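<p>To verify that after the next update, a before/after diff of the
effective volume options should show any option that was reset (a
minimal sketch; VOLNAME and the file paths are placeholders):</p>
<pre># snapshot all effective volume options before the update
gluster volume get VOLNAME all &gt; /root/volopts-before.txt
# ... run the gluster update ...
gluster volume get VOLNAME all &gt; /root/volopts-after.txt
# any option reset by the update shows up in the diff
diff /root/volopts-before.txt /root/volopts-after.txt</pre>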
<p>Best</p>
<p>Benedikt<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">Am 18.11.20 um 17:25 schrieb Martín
Lorenzo:<br>
</div>
<blockquote type="cite"
cite="mid:CAPtH=ok2Zi1bW08xwFZN7vC9R_xGgVTpjQxDrE_F-qu2Rg8hJw@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Hi Benedikt,
<div>
<div>You are right, disabling performance.readdir-ahead
didn't solve the issue for me.</div>
<div>It took a little longer to find out, and I wasn't sure if
the errors were already there before turning off the
setting.</div>
</div>
<div>Is your volume full replica or are you using an arbiter?</div>
<div><br>
</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Nov 18, 2020 at 1:16
PM Benedikt Kaleß <<a
href="mailto:benedikt.kaless@forumzfd.de"
moz-do-not-send="true">benedikt.kaless@forumzfd.de</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Dear Martin,</p>
<p>Do you have any new observations regarding this issue?</p>
<p>I just found your thread. This error of missing files on
FUSE mounts is appearing on my setup with 3 replicated
bricks on gluster 8.2, too.</p>
<p>I set performance.readdir-ahead: off but the error still
occurs quite frequently.</p>
<p>Best regards</p>
<p>Bene<br>
</p>
<div>On 04.11.20 at 12:07, Martín Lorenzo wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">Thanks Mahdi, I'll try that option, I
hope it doesn't come with a big performance penalty. </div>
<div dir="ltr">Recently upgraded to 7.8 by Strahil's
advice, but before that, I had the feeling that
restarting the brick processes in one node in
particular (the one with the most user connections)
helped a lot.
<div><br>
</div>
<div>I've set up an experiment/workaround on a
frequently used dir. A cron script creates a
directory there every minute, sleeps 2 seconds and
removes it. At the same time, on a different node's
mount, I am listing (long format) the same base
directory every minute. On the latter, in the mount
logs, I am constantly getting this message every
2-3 minutes (checkglus12 is the dir I am
creating/removing; a sketch of the cron entries
follows the log line):</div>
<div>[2020-11-04 09:53:02.087991] I [MSGID: 109063]
[dht-layout.c:647:dht_layout_normalize]
0-tapeless-dht: Found anomalies in
/interno/checkglus12 (gfid =
00000000-0000-0000-0000-000000000000). Holes=1
overlaps=0<br>
</div>
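<div><br>
</div>
<div>For reference, the cron entries look roughly like this (a
sketch in /etc/crontab format; the mount path /mnt/tapeless is
an example):</div>
<pre># node A: create a test dir, wait 2 seconds, remove it, every minute
* * * * * root mkdir /mnt/tapeless/interno/checkglus12; sleep 2; rmdir /mnt/tapeless/interno/checkglus12
# node B: long-format listing of the same base directory, every minute
* * * * * root ls -l /mnt/tapeless/interno &gt; /dev/null</pre>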
<div><br>
</div>
<div>The other issue I found in the logs is that I
see self-heal entries all the time during "normal"
operations. Here is an excerpt (grepping 'heal'):</div>
<div>[2020-11-03 21:33:39.189343] I [MSGID: 108026]
[afr-self-heal-common.c:1744:afr_log_selfheal]
0-tapeless-replicate-1: Completed metadata selfheal
on c1e69788-7211-40d4-a38c-8d21786b0438. sources=0
[1] sinks=2<br>
[2020-11-03 21:33:47.870217] I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing metadata selfheal
on df127689-98ba-4a35-9bb7-32067de57615<br>
[2020-11-03 21:33:47.875594] I [MSGID: 108026]
[afr-self-heal-common.c:1744:afr_log_selfheal]
0-tapeless-replicate-1: Completed metadata selfheal
on df127689-98ba-4a35-9bb7-32067de57615. sources=0
[1] sinks=2<br>
[2020-11-03 21:50:01.331224] I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing metadata selfheal
on 3c316533-5f47-4267-ac19-58b3be305b94<br>
[2020-11-03 21:50:01.340247] I [MSGID: 108026]
[afr-self-heal-common.c:1744:afr_log_selfheal]
0-tapeless-replicate-1: Completed metadata selfheal
on 3c316533-5f47-4267-ac19-58b3be305b94. sources=0
[1] sinks=2<br>
[2020-11-03 21:52:45.269751] I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing metadata selfheal
on f2e404e2-0550-4a2e-9a79-1724e7e4c8f0<br>
Thanks again for your help</div>
<div>Regards,</div>
<div>Martin</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Wed, Nov 4, 2020
at 4:59 AM Mahdi Adnan <<a
href="mailto:mahdi@sysmin.io" target="_blank"
moz-do-not-send="true">mahdi@sysmin.io</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">Hello Martín,
<div><br>
</div>
<div> Try disabling "performance.readdir-ahead"; we
had a similar issue, and disabling it solved ours:</div>
<div>gluster volume set tapeless performance.readdir-ahead off</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue, Oct 27,
2020 at 8:23 PM Martín Lorenzo <<a
href="mailto:mlorenzo@gmail.com"
target="_blank" moz-do-not-send="true">mlorenzo@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px
0px 0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">Hi Strahil, today we have the
same number clients on all nodes, but the
problem persists. I have the impression that
it gets more frequent as the server capacity
fills up, now we are having at least one
incident per day.
<div>Regards,</div>
<div>Martin</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, Oct
26, 2020 at 8:09 AM Martín Lorenzo <<a
href="mailto:mlorenzo@gmail.com"
target="_blank" moz-do-not-send="true">mlorenzo@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div dir="ltr">HI Strahil, thanks for your
reply,
<div>I had one node with 13 clients, the
rest with 14. I've just restarted the
services on that node, now I have 14,
let's see what happens.</div>
<div>Regarding the samba repos, I wasn't
aware of that, I was using centos main
repo. I'll check the out</div>
<div>Best Regards,</div>
<div>Martin</div>
<div><br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue,
Oct 20, 2020 at 3:19 PM Strahil Nikolov
&lt;<a
href="mailto:hunter86_bg@yahoo.com"
target="_blank" moz-do-not-send="true">hunter86_bg@yahoo.com</a>&gt;
wrote:<br>
</div>
<blockquote class="gmail_quote"
style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">Do
you have the same number of clients
connected to each brick?<br>
<br>
I guess something like this can show it:<br>
<br>
gluster volume status VOL clients<br>
gluster volume status VOL client-list<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
On Tuesday, 20 October 2020 at 15:41:45
GMT+3, Martín Lorenzo &lt;<a
href="mailto:mlorenzo@gmail.com"
target="_blank" moz-do-not-send="true">mlorenzo@gmail.com</a>&gt;
wrote: <br>
<br>
Hi, I have the following problem: I have
a distributed replicated cluster set up
with samba and CTDB, over FUSE mount
points.<br>
I am seeing inconsistencies across the
FUSE mounts; users report that files are
disappearing after being copied/moved.
When I look at the mount points on each
node, they don't display the same
data:<br>
<br>
#### faulty mount point ####<br>
[root@gluster6 ARRIBA GENTE martes 20 de
octubre]# ll<br>
ls: cannot access PANEO VUELTA A CLASES
CON TAPABOCAS.mpg: No such file or
directory<br>
ls: cannot access PANEO NIÑOS ESCUELAS
CON TAPABOCAS.mpg: No such file or
directory<br>
total 633723<br>
drwxr-xr-x. 5 arribagente PN 4096
Oct 19 10:52 COMERCIAL AG martes 20 de
octubre<br>
-rw-r--r--. 1 arribagente PN 648927236
Jun 3 07:16 PANEO FACHADA PALACIO
LEGISLATIVO DRONE DIA Y NOCHE.mpg<br>
-?????????? ? ? ? ?
? PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg<br>
-?????????? ? ? ? ?
? PANEO VUELTA A CLASES CON
TAPABOCAS.mpg<br>
<br>
<br>
### healthy mount point ###<br>
[root@gluster7 ARRIBA GENTE martes 20 de
octubre]# ll<br>
total 3435596<br>
drwxr-xr-x. 5 arribagente PN 4096
Oct 19 10:52 COMERCIAL AG martes 20 de
octubre<br>
-rw-r--r--. 1 arribagente PN 648927236
Jun 3 07:16 PANEO FACHADA PALACIO
LEGISLATIVO DRONE DIA Y NOCHE.mpg<br>
-rw-r--r--. 1 arribagente PN 2084415492
Aug 18 09:14 PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg<br>
-rw-r--r--. 1 arribagente PN 784701444
Sep 4 07:23 PANEO VUELTA A CLASES CON
TAPABOCAS.mpg<br>
<br>
- So far the only way to solve this is
to create a directory in the healthy
mount point, on the same path:<br>
[root@gluster7 ARRIBA GENTE martes 20 de
octubre]# mkdir hola<br>
<br>
- When you refresh the other mount point,
the issue is resolved:<br>
[root@gluster6 ARRIBA GENTE martes 20 de
octubre]# ll<br>
total 3435600<br>
drwxr-xr-x. 5 arribagente PN
4096 Oct 19 10:52 COMERCIAL AG martes 20
de octubre<br>
drwxr-xr-x. 2 root root
4096 Oct 20 08:45 hola<br>
-rw-r--r--. 1 arribagente PN
648927236 Jun 3 07:16 PANEO FACHADA
PALACIO LEGISLATIVO DRONE DIA Y
NOCHE.mpg<br>
-rw-r--r--. 1 arribagente PN
2084415492 Aug 18 09:14 PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg<br>
-rw-r--r--. 1 arribagente PN
784701444 Sep 4 07:23 PANEO VUELTA A
CLASES CON TAPABOCAS.mpg<br>
<br>
Interestingly, the error occurs on the
mount point where the files were copied.
They don't show up as pending heal
entries. I have around 15 people using
them over samba; I get this issue
reported about every two days. <br>
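For reference, I check for pending heal
entries with something like this (a
sketch; "info summary" needs a recent
gluster release):<br>
gluster volume heal tapeless info<br>
gluster volume heal tapeless info summary<br>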
<br>
I have an older cluster
with similar issues, a different gluster
version, but a very similar topology (4
bricks, initially two bricks, then
expanded).<br>
Please note, the bricks aren't the same
size (but their replicas are), so my
other suspicion is that rebalancing has
something to do with it.<br>
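If rebalancing is involved, its state
should be visible with something like
(sketch):<br>
gluster volume rebalance tapeless status<br>
A layout-only repair (gluster volume
rebalance tapeless fix-layout start) might
also be relevant here.<br>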
<br>
I'm trying to reproduce it on a small
virtualized cluster, so far without results.<br>
<br>
Here are the cluster details:<br>
four nodes, replica 2, plus one arbiter
hosting 2 bricks<br>
<br>
I have 2 bricks with ~20 TB capacity and
the other pair with ~48 TB<br>
Volume Name: tapeless<br>
Type: Distributed-Replicate<br>
Volume ID:
53bfa86d-b390-496b-bbd7-c4bba625c956<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 2 x (2 + 1) = 6<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1:
gluster6.glustersaeta.net:/data/glusterfs/tapeless/brick_6/brick<br>
Brick2:
gluster7.glustersaeta.net:/data/glusterfs/tapeless/brick_7/brick<br>
Brick3:
kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_1a/brick
(arbiter)<br>
Brick4:
gluster12.glustersaeta.net:/data/glusterfs/tapeless/brick_12/brick<br>
Brick5:
gluster13.glustersaeta.net:/data/glusterfs/tapeless/brick_13/brick<br>
Brick6:
kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_2a/brick
(arbiter)<br>
Options Reconfigured:<br>
features.quota-deem-statfs: on<br>
performance.client-io-threads: on<br>
nfs.disable: on<br>
transport.address-family: inet<br>
features.quota: on<br>
features.inode-quota: on<br>
features.cache-invalidation: on<br>
features.cache-invalidation-timeout: 600<br>
performance.cache-samba-metadata: on<br>
performance.stat-prefetch: on<br>
performance.cache-invalidation: on<br>
performance.md-cache-timeout: 600<br>
network.inode-lru-limit: 200000<br>
performance.nl-cache: on<br>
performance.nl-cache-timeout: 600<br>
performance.readdir-ahead: on<br>
performance.parallel-readdir: on<br>
performance.cache-size: 1GB<br>
client.event-threads: 4<br>
server.event-threads: 4<br>
performance.normal-prio-threads: 16<br>
performance.io-thread-count: 32<br>
performance.write-behind-window-size:
8MB<br>
storage.batch-fsync-delay-usec: 0<br>
cluster.data-self-heal: on<br>
cluster.metadata-self-heal: on<br>
cluster.entry-self-heal: on<br>
cluster.self-heal-daemon: on<br>
performance.write-behind: on<br>
performance.open-behind: on<br>
<br>
Log section from the faulty mount point. I
think the [File exists] entries are from
people trying to copy the missing files
over and over:<br>
<br>
<br>
[2020-10-20 11:31:03.034220] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:32:06.684329] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:33:02.191863] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:34:05.841608] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:35:20.736633] I [MSGID:
108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing
metadata selfheal on
958dbd7a-3cd7-4b66-9038-76e5c5669644 <br>
[2020-10-20 11:35:20.741213] I [MSGID:
108026]
[afr-self-heal-common.c:1750:afr_log_selfheal]
0-tapeless-replicate-1: Completed
metadata selfheal on
958dbd7a-3cd7-4b66-9038-76e5c5669644.
sources=[0] 1 sinks=2 <br>
[2020-10-20 11:35:04.278043] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
The message "I [MSGID: 108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing
metadata selfheal on
958dbd7a-3cd7-4b66-9038-76e5c5669644"
repeated 3 times between [2020-10-20
11:35:20.736633] and [2020-10-20
11:35:26.733298]<br>
The message "I [MSGID: 108026]
[afr-self-heal-common.c:1750:afr_log_selfheal]
0-tapeless-replicate-1: Completed
metadata selfheal on
958dbd7a-3cd7-4b66-9038-76e5c5669644.
sources=[0] 1 sinks=2 " repeated 3
times between [2020-10-20
11:35:20.741213] and [2020-10-20
11:35:26.737629]<br>
[2020-10-20 11:36:02.548350] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:36:57.365537] I [MSGID:
108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing
metadata selfheal on
f4907af2-1775-4c46-89b5-e9776df6d5c7 <br>
[2020-10-20 11:36:57.370824] I [MSGID:
108026]
[afr-self-heal-common.c:1750:afr_log_selfheal]
0-tapeless-replicate-1: Completed
metadata selfheal on
f4907af2-1775-4c46-89b5-e9776df6d5c7.
sources=[0] 1 sinks=2 <br>
[2020-10-20 11:37:01.363925] I [MSGID:
108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-1: performing
metadata selfheal on
f4907af2-1775-4c46-89b5-e9776df6d5c7 <br>
[2020-10-20 11:37:01.368069] I [MSGID:
108026]
[afr-self-heal-common.c:1750:afr_log_selfheal]
0-tapeless-replicate-1: Completed
metadata selfheal on
f4907af2-1775-4c46-89b5-e9776df6d5c7.
sources=[0] 1 sinks=2 <br>
The message "I [MSGID: 108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0" repeated 3
times between [2020-10-20
11:36:02.548350] and [2020-10-20
11:37:36.389208]<br>
[2020-10-20 11:38:07.367113] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:39:01.595981] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:40:04.184899] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:41:07.833470] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:42:01.871621] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:43:04.399194] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:44:04.558647] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:44:15.953600] W [MSGID:
114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-5: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]<br>
[2020-10-20 11:44:15.953819] W [MSGID:
114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-2: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]<br>
[2020-10-20 11:44:15.954072] W [MSGID:
114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-3: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]<br>
[2020-10-20 11:44:15.954680] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043294:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
[2020-10-20 11:44:15.963175] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043306:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
[2020-10-20 11:44:15.971839] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043318:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
[2020-10-20 11:44:16.010242] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043403:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
[2020-10-20 11:44:16.020291] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043415:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
[2020-10-20 11:44:16.028857] W
[fuse-bridge.c:2606:fuse_create_cbk]
0-glusterfs-fuse: 31043427:
/PN/arribagente/PLAYER 2020/ARRIBA GENTE
martes 20 de octubre/PANEO NIÑOS
ESCUELAS CON TAPABOCAS.mpg => -1
(File exists)<br>
The message "W [MSGID: 114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-5: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]" repeated 5
times between [2020-10-20
11:44:15.953600] and [2020-10-20
11:44:16.027785]<br>
The message "W [MSGID: 114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-2: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]" repeated 5
times between [2020-10-20
11:44:15.953819] and [2020-10-20
11:44:16.028331]<br>
The message "W [MSGID: 114031]
[client-rpc-fops_v2.c:2114:client4_0_create_cbk]
0-tapeless-client-3: remote operation
failed. Path: /PN/arribagente/PLAYER
2020/ARRIBA GENTE martes 20 de
octubre/PANEO NIÑOS ESCUELAS CON
TAPABOCAS.mpg [File exists]" repeated 5
times between [2020-10-20
11:44:15.954072] and [2020-10-20
11:44:16.028355]<br>
[2020-10-20 11:45:03.572106] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:45:40.080010] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
The message "I [MSGID: 108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0" repeated 2
times between [2020-10-20
11:45:40.080010] and [2020-10-20
11:47:10.871801]<br>
[2020-10-20 11:48:03.913129] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:49:05.082165] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:50:06.725722] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:51:04.254685] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:52:07.903617] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:53:01.420513] I [MSGID:
108026]
[afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do]
0-tapeless-replicate-0: performing
metadata selfheal on
3c316533-5f47-4267-ac19-58b3be305b94 <br>
[2020-10-20 11:53:01.428657] I [MSGID:
108026]
[afr-self-heal-common.c:1750:afr_log_selfheal]
0-tapeless-replicate-0: Completed
metadata selfheal on
3c316533-5f47-4267-ac19-58b3be305b94.
sources=[0] sinks=1 2 <br>
The message "I [MSGID: 108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0" repeated 3
times between [2020-10-20
11:52:07.903617] and [2020-10-20
11:53:12.037835]<br>
[2020-10-20 11:54:02.208354] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:55:04.360284] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:56:09.508092] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:57:02.580970] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
[2020-10-20 11:58:06.230698] I [MSGID:
108031]
[afr-common.c:2581:afr_local_discovery_cbk]
0-tapeless-replicate-0: selecting local
read_child tapeless-client-0 <br>
<br>
<br>
Let me know if you need something else.
Thank you for your support!<br>
Best Regards,<br>
Martin Lorenzo<br>
<br>
<br>
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST /
09:00 UTC<br>
Bridge: <a
href="https://bluejeans.com/441850968"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://bluejeans.com/441850968</a><br>
<br>
Gluster-users mailing list<br>
<a
href="mailto:Gluster-users@gluster.org"
target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a><br>
<a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
rel="noreferrer" target="_blank"
moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr">
<div dir="ltr">Respectfully
<div>Mahdi</div>
</div>
</div>
</blockquote>
</div>
</div>
<br>
<fieldset></fieldset>
<pre>________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: <a href="https://meet.google.com/cpu-eiue-hvk" target="_blank" moz-do-not-send="true">https://meet.google.com/cpu-eiue-hvk</a>
Gluster-users mailing list
<a href="mailto:Gluster-users@gluster.org" target="_blank" moz-do-not-send="true">Gluster-users@gluster.org</a>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" target="_blank" moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a>
</pre>
</blockquote>
<pre cols="72">--
forumZFD
Entschieden für Frieden|Committed to Peace
Benedikt Kaleß
Leiter Team IT|Head team IT
Forum Ziviler Friedensdienst e.V.|Forum Civil Peace Service
Am Kölner Brett 8 | 50825 Köln | Germany
Tel 0221 91273233 | Fax 0221 91273299 |
<a href="http://www.forumZFD.de" target="_blank" moz-do-not-send="true">http://www.forumZFD.de</a>
Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
Oliver Knabe (Vorsitz|Chair), Sonja Wiekenberg-Mlalandle, Alexander Mauz
VR 17651 Amtsgericht Köln
Spenden|Donations: IBAN DE37 3702 0500 0008 2401 01 BIC BFSWDE33XXX</pre>
</div>
</blockquote>
</div>
</blockquote>
<pre class="moz-signature" cols="72">--
forumZFD
Entschieden für Frieden|Committed to Peace
Benedikt Kaleß
Leiter Team IT|Head team IT
Forum Ziviler Friedensdienst e.V.|Forum Civil Peace Service
Am Kölner Brett 8 | 50825 Köln | Germany
Tel 0221 91273233 | Fax 0221 91273299 |
<a class="moz-txt-link-freetext" href="http://www.forumZFD.de">http://www.forumZFD.de</a>
Vorstand nach § 26 BGB, einzelvertretungsberechtigt|Executive Board:
Oliver Knabe (Vorsitz|Chair), Sonja Wiekenberg-Mlalandle, Alexander Mauz
VR 17651 Amtsgericht Köln
Spenden|Donations: IBAN DE37 3702 0500 0008 2401 01 BIC BFSWDE33XXX</pre>
</body>
</html>