From Marcin.Proniakin at grupawp.pl  Mon Oct 3 13:08:22 2022
From: Marcin.Proniakin at grupawp.pl (Proniakin Marcin)
Date: Mon, 3 Oct 2022 13:08:22 +0000
Subject: [Gluster-users] ODP: Issue with disk import and disk upload - ovirt 4.5.2
In-Reply-To: 
References: 
Message-ID: <0a1c9ba40a0147f7bac2873cdd6d1b45@grupawp.pl>

I'm sorry - wrong email. Kindly asking to ignore the previous mail.

________________________________
From: Proniakin Marcin
Sent: Monday, 3 October 2022 15:02:01
To: gluster-users at gluster.org
Cc: Milewski Daniel
Subject: Issue with disk import and disk upload - ovirt 4.5.2
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Marcin.Proniakin at grupawp.pl  Mon Oct 3 13:02:01 2022
From: Marcin.Proniakin at grupawp.pl (Proniakin Marcin)
Date: Mon, 3 Oct 2022 13:02:01 +0000
Subject: [Gluster-users] Issue with disk import and disk upload - ovirt 4.5.2
Message-ID: 

Hello,

After upgrading oVirt to version 4.5.2, I've run into an issue with the import function in the "Import Disk" window of a storage domain after attaching the domain to a data center. Logs are in attachment (#1).

The second issue is with uploading disks from the storage domain window. Using the example from attachment (#2): when choosing to upload a disk to the domain portal-1 via the upload function in the storage domain's disk window, oVirt picks the wrong data center, dev-1 (the dev-1 data center holds the domain dev-1, the portal-1 data center holds the domain portal-1), and the wrong host for the upload. Accepting this upload always fails. Uploading the disk from the Storage -> Disks window works fine.

Both issues are confirmed on two independent oVirt servers (both on version 4.5.2).
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-ui.log
Type: text/x-log
Size: 2792 bytes
Desc: ovirt-ui.log
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: storage-domain-disk-upload-window.png
Type: image/png
Size: 40041 bytes
Desc: storage-domain-disk-upload-window.png
URL: 
From jstrunk at redhat.com  Mon Oct 10 12:04:53 2022
From: jstrunk at redhat.com (jstrunk at redhat.com)
Date: Mon, 10 Oct 2022 12:04:53 +0000
Subject: [Gluster-users] Updated invitation: Gluster Community Meeting @ Monthly from 05:00 to 06:00 on the second Tuesday from Tue Jul 12 to Mon Oct 10 (EDT) (gluster-users@gluster.org)
Message-ID: <000000000000a0443805eaacf744@google.com>

This event has been updated
Changed: time

Gluster Community Meeting
Monthly from 05:00 to 06:00 on the second Tuesday, from Tuesday Jul 12 to Monday Oct 10 (Eastern Time - New York)
Bridge: meet.google.com/cpu-eiue-hvk
Schedule: every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Meeting minutes: https://hackmd.io/@wUUmx3WsTRerQyHUZbuy6A/SkR6Otqsq/edit
Notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Organizer: nladha at redhat.com
Your attendance is optional.

From jstrunk at redhat.com  Mon Oct 10 12:04:53 2022
From: jstrunk at redhat.com (jstrunk at redhat.com)
Date: Mon, 10 Oct 2022 12:04:53 +0000
Subject: [Gluster-users] Updated invitation: Gluster Community Meeting @ Monthly from 05:00 to 06:00 on the second Tuesday (EDT) (gluster-users@gluster.org)
Message-ID: <000000000000a2b25705eaacf7b6@google.com>

This event has been updated

Gluster Community Meeting
Monthly from 05:00 to 06:00 on the second Tuesday (Eastern Time - New York)
Bridge: meet.google.com/cpu-eiue-hvk
Schedule: every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Meeting minutes: https://hackmd.io/@wUUmx3WsTRerQyHUZbuy6A/SkR6Otqsq/edit
Notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Organizer: nladha at redhat.com
Your attendance is optional.

From niryadav at redhat.com  Tue Oct 11 03:42:31 2022
From: niryadav at redhat.com (niryadav at redhat.com)
Date: Tue, 11 Oct 2022 03:42:31 +0000
Subject: [Gluster-users] Updated invitation: Gluster Community Meeting @ Tue Oct 11, 2022 2:30pm - 3:30pm (IST) (gluster-users@gluster.org)
Message-ID: <000000000000e0301b05eaba1041@google.com>

This event has been updated
Changed: description

Gluster Community Meeting
Tuesday Oct 11, 2022, 2:30pm - 3:30pm (India Standard Time - Kolkata)
Bridge: meet.google.com/cpu-eiue-hvk
Schedule: every 2nd Tuesday at 14:30 IST / 09:00 UTC
Meeting minutes: https://hackmd.io/fB7S_jpZQ7K-d3ROFTUtFw?view
Notes: https://docs.google.com/document/d/1gnan3tNRsv09wGyADjhxnWFxyO4vA6VkkVzEF5kTIb0/edit
Organizer: nladha at redhat.com
Your attendance is optional.
From stefan.solbrig at ur.de  Mon Oct 17 10:55:37 2022
From: stefan.solbrig at ur.de (Stefan Solbrig)
Date: Mon, 17 Oct 2022 12:55:37 +0200
Subject: [Gluster-users] link files not being created
Message-ID: 

Dear all,

I was doing some testing regarding GlusterFS link files (as they are created by a "move" operation). According to this document: https://www.gluster.org/glusterfs-algorithms-distribution/ if a link file is missing, it should be created after accessing the file.
However, I don't see this behaviour. If I delete (by hand) a link file on the brick, the file is still accessible, but the link file is never recreated. I can do an "open" or a "stat" on the file without getting an error, but the link file is not created.
Is this the intended behaviour? Or am I misunderstanding the above mentioned document?

best wishes,
Stefan

--
Dr. Stefan Solbrig
Universität Regensburg, Fakultät für Physik,
93040 Regensburg, Germany
Tel +49-941-943-2097

From jahernan at redhat.com  Mon Oct 17 11:43:23 2022
From: jahernan at redhat.com (Xavi Hernandez)
Date: Mon, 17 Oct 2022 13:43:23 +0200
Subject: [Gluster-users] link files not being created
In-Reply-To: 
References: 
Message-ID: 

Hi Stefan,

On Mon, Oct 17, 2022 at 1:03 PM Stefan Solbrig wrote:

> Dear all,
>
> I was doing some testing regarding GlusterFS link files (as they are
> created by a "move" operation). According to this document:
> https://www.gluster.org/glusterfs-algorithms-distribution/ if a link file
> is missing, it should be created after accessing the file.
> However, I don't see this behaviour. If I delete (by hand) a link file on
> the brick, the file is still accessible, but the link file is never
> recreated. I can do an "open" or a "stat" on the file without getting an
> error, but the link file is not created.
> Is this the intended behaviour? Or am I misunderstanding the above
> mentioned document?
>

You shouldn't access or modify the backend filesystems manually; you can accidentally create unexpected problems if you don't fully understand what you are doing.

That said, most probably the access to the file is still working because Gluster is using its cached information to locate the file. If the client mount is restarted, probably the file won't be accessible anymore unless you disable the "lookup-optimize" option (and this should recreate the link file).

Regards,

Xavi

> best wishes,
> Stefan
>
> --
> Dr. Stefan Solbrig
> Universität Regensburg, Fakultät für Physik,
> 93040 Regensburg, Germany
> Tel +49-941-943-2097
>
> ________
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
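A minimal sketch of how such link files can be inspected directly on a brick (the paths below are placeholders, not taken from this thread; as noted further down in this discussion, a DHT link file is a zero-byte entry with mode ---------T and a trusted.glusterfs.dht.linkto xattr naming the subvolume that actually holds the file):

    # find /path/to/brick -type f -perm 1000 -size 0c
    # getfattr -n trusted.glusterfs.dht.linkto -e text /path/to/brick/some/dir/file

The first command lists candidate link files (sticky bit only, zero bytes); the second shows which client subvolume a given link file points to.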
From stefan.solbrig at ur.de  Tue Oct 18 08:34:18 2022
From: stefan.solbrig at ur.de (Stefan Solbrig)
Date: Tue, 18 Oct 2022 10:34:18 +0200
Subject: [Gluster-users] [EXT] link files not being created
In-Reply-To: 
References: 
Message-ID: 

Hi Xavi,

> On Mon, Oct 17, 2022 at 1:03 PM Stefan Solbrig wrote:
> Dear all,
>
> I was doing some testing regarding GlusterFS link files (as they are created by a "move" operation). According to this document: https://www.gluster.org/glusterfs-algorithms-distribution/ if a link file is missing, it should be created after accessing the file.
> However, I don't see this behaviour. If I delete (by hand) a link file on the brick, the file is still accessible, but the link file is never recreated. I can do an "open" or a "stat" on the file without getting an error, but the link file is not created.
> Is this the intended behaviour? Or am I misunderstanding the above mentioned document?
>
> You shouldn't access or modify the backend filesystems manually; you can accidentally create unexpected problems if you don't fully understand what you are doing.
>
> That said, most probably the access to the file is still working because Gluster is using its cached information to locate the file. If the client mount is restarted, probably the file won't be accessible anymore unless you disable the "lookup-optimize" option (and this should recreate the link file).
>
> Regards,
>
> Xavi

Thanks for the quick reply! Maybe I should explain my motivation for the above mentioned experiments a bit better. I have a large production system running GlusterFS with almost 5 PB of data (in approx. 100G of inodes). It's a distributed-only system (no sharding, not dispersed). On this system, the users sometimes experience the problem that they cannot delete a seemingly empty directory. The cause of this problem is that the directory contains leftover link files, i.e. DHT link files whose target is gone. I haven't identified yet why this happens and I don't have a method to provoke this error (otherwise I would have mentioned it on this list already). But my quick & dirty fix is to delete these leftover link files by hand. (These leftover link files are not being cleaned up by a "rebalance".)

The reason for my experiments with link files is: what happens if for some reason I accidentally delete a link file where the target still exists?

In the experiments (not on the production system) I also tried umounting and remounting the system, and I already tried setting "lookup-optimize = off". It doesn't affect the outcome of the experiments.

best wishes,
Stefan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jahernan at redhat.com Tue Oct 18 09:26:22 2022 From: jahernan at redhat.com (Xavi Hernandez) Date: Tue, 18 Oct 2022 11:26:22 +0200 Subject: [Gluster-users] [EXT] link files not being created In-Reply-To: References: Message-ID: Hi Stefan, On Tue, Oct 18, 2022 at 10:34 AM Stefan Solbrig wrote: > Hi Xavi, > > On Mon, Oct 17, 2022 at 1:03 PM Stefan Solbrig > wrote: > >> Dear all, >> >> I was doing some testing regarding to GlusterFS link files (as they are >> created by a "move" operation). According to this document: >> https://www.gluster.org/glusterfs-algorithms-distribution/ If a link >> file is missing, it should be created after accessing the file. >> However, I don't see this behaviour. If I delete (by hand) a link file >> on the brick, the file is still accessible, but the link file is never >> recreated. I can do an "open" or a "stat" on the file without getting an >> error, but the link file is not created. >> Is this the intended behaviour? Or am I misunderstanding the above >> mentioned document? >> > > You shouldn't access or modify the backend filesystems manually, you can > accidentally create unexpected problems if you don't fully understand what > you are doing. > > That said, most probably the access to the file is still working because > Gluster is using its cached information to locate the file. If the client > mount is restarted, probably the file won't be accessible anymore unless > you disable the "lookup-optimize" option (and this should recreate the link > file). > > Regards, > > Xavi > > > Thanks for the quick reply! Maybe I should explain better my motivation > for the above mentioned experiments. I have a large production system > running GlusterFS with almost 5 PB of data (in approx 100G of inodes). It's > a distributed-only system (no sharding, not dispersed). In this system, > the users sometimes experience the problem that they cannot delete a > seemingly empty directory. The cause of this problem is, that the > directory contains leftover link files, i.e. dht link files where the > target is gone. I haven't identified yet why this happens and I don't have > a method to provoke this error (otherwise I would have mentioned it on this > list already.) > What version of Gluster are you using ? if I remember correctly, there was a fix in 3.10.2 (and some other following patches) to delete stale link files when deleting empty directories to avoid precisely this problem. Recently there have also been some patches to avoid leaving some of those stale entries. If you are still using 3.x I would recommend you to upgrade to a newer version, which have many issues already fixed. > But my quick & dirty fix is, to delete these leftover link files by hand. > (These leftover link files are not being cleaned up by a "rebalance".) > If you only remove the file, you are leaving some data behind that should also be removed. Each file is associated with an entry inside .glusterfs/xx/yy in the brick, called gfid. This entry has the format of an uuid and can be determined by reading (in hex) the "trusted.gfid" xattr of the file you are going to delete: # getfattr -n trusted.gfid -e hex If you manually remove files, you should also remove the gfid. > The reason for my experiments with link files is: what happens if for some > reason I accidentally delete a link file where the target still exists? > > In the experiments (not on the production system) I also tried umounting > and remounting the system, and I already tried setting "loopup-optmize = > off". 
It doesn't affect the outcome of the experiments. > If after remounting the volume you are still able to access the file but the link file is not created, then it means that it's not needed. Maybe it was one of those stale link files. Can you give me one example of those link files (I need the name) and the trusted.glusterfs.dht xattr of the parent directory from all bricks ? # getfattr -n trusted.glusterfs.dht -e hex Regards, Xavi > best wishes, > Stefan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.solbrig at ur.de Tue Oct 18 10:22:17 2022 From: stefan.solbrig at ur.de (Stefan Solbrig) Date: Tue, 18 Oct 2022 12:22:17 +0200 Subject: [Gluster-users] [EXT] link files not being created In-Reply-To: References: Message-ID: <44A61998-64EB-4678-ACB2-5E4F163CC028@ur.de> Hi Xavi, > Hi Stefan, > > On Tue, Oct 18, 2022 at 10:34 AM Stefan Solbrig > wrote: > Hi Xavi, > >> On Mon, Oct 17, 2022 at 1:03 PM Stefan Solbrig > wrote: >> Dear all, >> >> I was doing some testing regarding to GlusterFS link files (as they are created by a "move" operation). According to this document: https://www.gluster.org/glusterfs-algorithms-distribution/ If a link file is missing, it should be created after accessing the file. >> However, I don't see this behaviour. If I delete (by hand) a link file on the brick, the file is still accessible, but the link file is never recreated. I can do an "open" or a "stat" on the file without getting an error, but the link file is not created. >> Is this the intended behaviour? Or am I misunderstanding the above mentioned document? >> >> You shouldn't access or modify the backend filesystems manually, you can accidentally create unexpected problems if you don't fully understand what you are doing. >> >> That said, most probably the access to the file is still working because Gluster is using its cached information to locate the file. If the client mount is restarted, probably the file won't be accessible anymore unless you disable the "lookup-optimize" option (and this should recreate the link file). >> >> Regards, >> >> Xavi > > Thanks for the quick reply! Maybe I should explain better my motivation for the above mentioned experiments. I have a large production system running GlusterFS with almost 5 PB of data (in approx 100G of inodes). It's a distributed-only system (no sharding, not dispersed). In this system, the users sometimes experience the problem that they cannot delete a seemingly empty directory. The cause of this problem is, that the directory contains leftover link files, i.e. dht link files where the target is gone. I haven't identified yet why this happens and I don't have a method to provoke this error (otherwise I would have mentioned it on this list already.) > > What version of Gluster are you using ? if I remember correctly, there was a fix in 3.10.2 (and some other following patches) to delete stale link files when deleting empty directories to avoid precisely this problem. Recently there have also been some patches to avoid leaving some of those stale entries. > > If you are still using 3.x I would recommend you to upgrade to a newer version, which have many issues already fixed. I'm using 9.4 for the servers but my client (fuse) is still on 6.0. I know that's not optimal and I hope to change this soon, migrating everything to 9.6 > > But my quick & dirty fix is, to delete these leftover link files by hand. (These leftover link files are not being cleaned up by a "rebalance".) 
> > If you only remove the file, you are leaving some data behind that should also be removed. Each file is associated with an entry inside .glusterfs/xx/yy in the brick, called gfid. This entry has the format of an uuid and can be determined by reading (in hex) the "trusted.gfid" xattr of the file you are going to delete: > > # getfattr -n trusted.gfid -e hex > > If you manually remove files, you should also remove the gfid Yes, I'm aware of these files. Once I remove the (named) link file, the .glusterfs/xx/yy/.... will be the ones that have zero size and no other hard link. As far as I understand, every file on the bricks has a hard link to .glusterfs/xx/yy/... with the full name representing its gfid. I tend to remove these as well. > > The reason for my experiments with link files is: what happens if for some reason I accidentally delete a link file where the target still exists? > > In the experiments (not on the production system) I also tried umounting and remounting the system, and I already tried setting "loopup-optmize = off". It doesn't affect the outcome of the experiments. > > If after remounting the volume you are still able to access the file but the link file is not created, then it means that it's not needed. Maybe it was one of those stale link files. Not really... This was the case of the experiment, where I tried to delete the link file and the corresponding .glusterfs/x/yy, stopped the volume, umounted, restarted the volume, remounted, but the link file is still not being recreated. > Can you give me one example of those link files (I need the name) and the trusted.glusterfs.dht xattr of the parent directory from all bricks ? > > # getfattr -n trusted.glusterfs.dht -e hex > > Regards, > > Xavi Here's one of the stale files: [root at glubs-01 testvol]# getfattr -d -m. 
-e hex /gl/lv1lucuma/glurchbrick/scratch/analysis/CLS/N302/N302r001/run11/XMLOUT/N302r001n631_sto100.out.xml getfattr: Removing leading '/' from absolute path names # file: gl/lv1lucuma/glurchbrick/scratch/analysis/CLS/N302/N302r001/run11/XMLOUT/N302r001n631_sto100.out.xml trusted.gfid=0x6155412f6ade4009bcb92d839c2ad8b3 trusted.gfid2path.428e23fc0d37fc71=0x33343536636634622d336436642d346331622d386331622d6662616466643266356239302f4e333032723030316e3633315f73746f3130302e6f75742e786d6c trusted.glusterfs.dht.linkto=0x676c757263682d636c69656e742d3900 trusted.pgfid.3456cf4b-3d6d-4c1b-8c1b-fbadfd2f5b90=0x00000001 And here is the trusted.glusterfs.dht of the top level directory of each brick: trusted.glusterfs.dht=0x0888f55900000000b9ec78f7c58cd403 trusted.glusterfs.dht=0x0888f55900000000e59527f2f148c5e9 trusted.glusterfs.dht=0x0888f55900000000c58cd404ce451686 trusted.glusterfs.dht=0x0888f55900000000f148c5eafa0f7a9c trusted.glusterfs.dht=0x0888f55900000000ce451687d6fd5909 trusted.glusterfs.dht=0x0888f5590000000008e8c7fe11af7cb0 trusted.glusterfs.dht=0x0888f55900000000d6fd590ae29ce547 trusted.glusterfs.dht=0x0888f55900000000209640c72c05d3e5 trusted.glusterfs.dht=0x0888f55900000000e29ce548e419069c trusted.glusterfs.dht=0x0888f55900000000e419069de59527f1 trusted.glusterfs.dht=0x0888f55900000000fa0f7a9dfb8b9bf1 trusted.glusterfs.dht=0x0888f55900000000fb8b9bf2fd07bd46 trusted.glusterfs.dht=0x0888f55900000000fd07bd47fe83de9b trusted.glusterfs.dht=0x0888f55900000000fe83de9cffffffff trusted.glusterfs.dht=0x0888f5590000000000000000017c2154 trusted.glusterfs.dht=0x0888f55900000000017c215502f842a9 trusted.glusterfs.dht=0x0888f5590000000002f842aa047463fe trusted.glusterfs.dht=0x0888f55900000000047463ff05f08553 trusted.glusterfs.dht=0x0888f5590000000005f08554076ca6a8 trusted.glusterfs.dht=0x0888f55900000000076ca6a908e8c7fd trusted.glusterfs.dht=0x0888f5590000000011af7cb1132b9e05 trusted.glusterfs.dht=0x0888f55900000000132b9e0614a7bf5a trusted.glusterfs.dht=0x0888f5590000000014a7bf5b1623e0af trusted.glusterfs.dht=0x0888f559000000001623e0b017a35fb5 trusted.glusterfs.dht=0x0888f5590000000017a35fb6191f810a trusted.glusterfs.dht=0x0888f55900000000191f810b1a9f0010 trusted.glusterfs.dht=0x0888f559000000001a9f00111c1b2165 trusted.glusterfs.dht=0x0888f559000000001c1b21661d9aa06b trusted.glusterfs.dht=0x0888f559000000001d9aa06c1f16c1c0 trusted.glusterfs.dht=0x0888f559000000001f16c1c1209640c6 trusted.glusterfs.dht=0x0888f559000000002c05d3e62d81f53a trusted.glusterfs.dht=0x0888f559000000002d81f53b509f7813 trusted.glusterfs.dht=0x0888f55900000000509f781473bcfaec trusted.glusterfs.dht=0x0888f5590000000073bcfaed96da7dc5 trusted.glusterfs.dht=0x0888f5590000000096da7dc6b9ec78f6 Thank you a lot! -Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.vasilopoulos at uoc.gr Tue Oct 18 10:52:32 2022 From: g.vasilopoulos at uoc.gr (=?UTF-8?B?zpPOuc+Oz4HOs86/z4IgzpLOsc+DzrnOu8+Mz4DOv8+FzrvOv8+C?=) Date: Tue, 18 Oct 2022 13:52:32 +0300 Subject: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages. Message-ID: <2980c9a8-4672-cb46-33e7-ba117f9902a8@uoc.gr> Hello I am seeking consoultation regarding a problem with files not healing after multiple (3) power outages on the servers. The configuration is like this : There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2, volume3) with replica 2 + arbiter. Glusterfs is ? 6.10. 
Volume1 and volume2 are OK. On volume3 there are about 12403 heal entries that are not healing, and some virtual drives of oVirt VMs are not starting; I cannot copy them either.

For volume3 the data bricks are on g3 and g1 and the arbiter brick is on g2.

There are .prob-uuid-something files which are identical on the 2 servers (g1, g3) holding the data bricks of volume3. On g2 (the arbiter brick) there are no such files.

I have stopped the volume, unmounted the bricks, ran xfs_repair on all of them, remounted the bricks and started the volume. It did not fix the problem.

Is there anything I can do to fix the problem?

From hunter86_bg at yahoo.com  Tue Oct 18 22:40:23 2022
From: hunter86_bg at yahoo.com (Strahil Nikolov)
Date: Tue, 18 Oct 2022 22:40:23 +0000 (UTC)
Subject: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.
In-Reply-To: <2980c9a8-4672-cb46-33e7-ba117f9902a8@uoc.gr>
References: <2980c9a8-4672-cb46-33e7-ba117f9902a8@uoc.gr>
Message-ID: <598989121.143254.1666132823647@mail.yahoo.com>

Usually, I would run a full heal and check if it improves the situation:

gluster volume heal <VOLNAME> full

Best Regards,
Strahil Nikolov

On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος wrote:

Hello, I am seeking consultation regarding a problem with files not healing after multiple (3) power outages on the servers.

The configuration is like this:
There are 3 servers (g1, g2, g3) with 3 volumes (volume1, volume2, volume3) with replica 2 + arbiter.
Glusterfs is 6.10.
Volume1 and volume2 are OK. On volume3 there are about 12403 heal entries that are not healing, and some virtual drives of oVirt VMs are not starting; I cannot copy them either.
For volume3 the data bricks are on g3 and g1 and the arbiter brick is on g2.
There are .prob-uuid-something files which are identical on the 2 servers (g1, g3) holding the data bricks of volume3. On g2 (the arbiter brick) there are no such files.
I have stopped the volume, unmounted the bricks, ran xfs_repair on all of them, remounted the bricks and started the volume. It did not fix the problem.
Is there anything I can do to fix the problem?

________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From g.vasilopoulos at uoc.gr  Wed Oct 19 07:00:21 2022
From: g.vasilopoulos at uoc.gr (Γιώργος Βασιλόπουλος)
Date: Wed, 19 Oct 2022 10:00:21 +0300
Subject: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages.
In-Reply-To: <598989121.143254.1666132823647@mail.yahoo.com>
References: <2980c9a8-4672-cb46-33e7-ba117f9902a8@uoc.gr> <598989121.143254.1666132823647@mail.yahoo.com>
Message-ID: 

I have already done this; it didn't seem to help. Could resetting the arbiter brick be a solution?

On 10/19/22 01:40, Strahil Nikolov wrote:
> Usually, I would run a full heal and check if it improves the situation:
>
> gluster volume heal <VOLNAME> full
>
> Best Regards,
> Strahil Nikolov
>
> On Tue, Oct 18, 2022 at 14:01, Γιώργος Βασιλόπουλος wrote:
> Hello, I am seeking consultation regarding a problem with files not
> healing after multiple (3) power outages on the servers.
>
> The configuration is like this:
> There are 3 servers (g1, g2, g3) with 3 volumes (volume1, volume2,
> volume3) with replica 2 + arbiter.
>
> Glusterfs is 6.10.
> > Volume 1 and volume2 are ok on Volume3 on there are about 12403 > healing > entries who are not healing and some virtual drives on ovirt vm > are not > starting and I cannot copy them either. > > For volume 3 data bricks are on g3 and g1 and arbiter brick is on g2 > > There are .prob-uuid-something files which are identical on the 2 > servers (g1,g3) with the data bricks of volume3 . On g2 (arbiter > brick > there are no such files.) > > I have stopped the volume unmounted and runned xfs_repair on all > bricks, > remounted the bricks and started the volume. it did not fix the > problem > > Is there anything I can do to fix the problem ? > > > > > ________ > > > > Community Meeting Calendar: > > Schedule - > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC > Bridge: https://meet.google.com/cpu-eiue-hvk > Gluster-users mailing list > Gluster-users at gluster.org > https://lists.gluster.org/mailman/listinfo/gluster-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahernan at redhat.com Wed Oct 19 09:08:40 2022 From: jahernan at redhat.com (Xavi Hernandez) Date: Wed, 19 Oct 2022 11:08:40 +0200 Subject: [Gluster-users] [EXT] link files not being created In-Reply-To: <44A61998-64EB-4678-ACB2-5E4F163CC028@ur.de> References: <44A61998-64EB-4678-ACB2-5E4F163CC028@ur.de> Message-ID: On Tue, Oct 18, 2022 at 12:22 PM Stefan Solbrig wrote: > Hi Xavi, > > Hi Stefan, > > On Tue, Oct 18, 2022 at 10:34 AM Stefan Solbrig > wrote: > >> Hi Xavi, >> >> On Mon, Oct 17, 2022 at 1:03 PM Stefan Solbrig >> wrote: >> >>> Dear all, >>> >>> I was doing some testing regarding to GlusterFS link files (as they are >>> created by a "move" operation). According to this document: >>> https://www.gluster.org/glusterfs-algorithms-distribution/ If a link >>> file is missing, it should be created after accessing the file. >>> However, I don't see this behaviour. If I delete (by hand) a link file >>> on the brick, the file is still accessible, but the link file is never >>> recreated. I can do an "open" or a "stat" on the file without getting an >>> error, but the link file is not created. >>> Is this the intended behaviour? Or am I misunderstanding the above >>> mentioned document? >>> >> >> You shouldn't access or modify the backend filesystems manually, you can >> accidentally create unexpected problems if you don't fully understand what >> you are doing. >> >> That said, most probably the access to the file is still working because >> Gluster is using its cached information to locate the file. If the client >> mount is restarted, probably the file won't be accessible anymore unless >> you disable the "lookup-optimize" option (and this should recreate the link >> file). >> >> Regards, >> >> Xavi >> >> >> Thanks for the quick reply! Maybe I should explain better my motivation >> for the above mentioned experiments. I have a large production system >> running GlusterFS with almost 5 PB of data (in approx 100G of inodes). It's >> a distributed-only system (no sharding, not dispersed). In this system, >> the users sometimes experience the problem that they cannot delete a >> seemingly empty directory. The cause of this problem is, that the >> directory contains leftover link files, i.e. dht link files where the >> target is gone. I haven't identified yet why this happens and I don't have >> a method to provoke this error (otherwise I would have mentioned it on this >> list already.) >> > > What version of Gluster are you using ? 
if I remember correctly, there was > a fix in 3.10.2 (and some other following patches) to delete stale link > files when deleting empty directories to avoid precisely this problem. > Recently there have also been some patches to avoid leaving some of those > stale entries. > > If you are still using 3.x I would recommend you to upgrade to a newer > version, which have many issues already fixed. > > > I'm using 9.4 for the servers but my client (fuse) is still on 6.0. I know > that's not optimal and I hope to change this soon, migrating everything to > 9.6 > In that case you shouldn't have had stale link files during rmdir. All the files you removed were link files with rights set to "---------T", size 0 and an xattr named "trusted.glusterfs.dht.linkto" ? > > >> But my quick & dirty fix is, to delete these leftover link files by >> hand. (These leftover link files are not being cleaned up by a >> "rebalance".) >> > > If you only remove the file, you are leaving some data behind that should > also be removed. Each file is associated with an entry inside > .glusterfs/xx/yy in the brick, called gfid. This entry has the format of an > uuid and can be determined by reading (in hex) the "trusted.gfid" xattr of > the file you are going to delete: > > # getfattr -n trusted.gfid -e hex > > > If you manually remove files, you should also remove the gfid > > > Yes, I'm aware of these files. Once I remove the (named) link file, the > .glusterfs/xx/yy/.... will be the ones that have zero size and no other > hard link. As far as I understand, every file on the bricks has a hard link > to .glusterfs/xx/yy/... with the full name representing its gfid. I tend > to remove these as well. > Correct, except for directories, which are represented by a symlink with a single hardlink. > > >> The reason for my experiments with link files is: what happens if for >> some reason I accidentally delete a link file where the target still exists? >> >> In the experiments (not on the production system) I also tried umounting >> and remounting the system, and I already tried setting "loopup-optmize = >> off". It doesn't affect the outcome of the experiments. >> > > If after remounting the volume you are still able to access the file but > the link file is not created, then it means that it's not needed. Maybe it > was one of those stale link files. > > > Not really... This was the case of the experiment, where I tried to delete > the link file and the corresponding .glusterfs/x/yy, stopped the volume, > umounted, restarted the volume, remounted, but the link file is still not > being recreated. > If the file is accessible after remounting and the link file is not created, it means that dht doesn't need it. It could be a leftover from a previous operation (rename, rebalance, add-brick, remove-brick ...). > Can you give me one example of those link files (I need the name) and the > trusted.glusterfs.dht xattr of the parent directory from all bricks ? > > # getfattr -n trusted.glusterfs.dht -e hex > > > Regards, > > Xavi > > > Here's one of the stale files: > > [root at glubs-01 testvol]# getfattr -d -m. 
-e hex > /gl/lv1lucuma/glurchbrick/scratch/analysis/CLS/N302/N302r001/run11/XMLOUT/N302r001n631_sto100.out.xml > getfattr: Removing leading '/' from absolute path names > # file: > gl/lv1lucuma/glurchbrick/scratch/analysis/CLS/N302/N302r001/run11/XMLOUT/N302r001n631_sto100.out.xml > trusted.gfid=0x6155412f6ade4009bcb92d839c2ad8b3 > > trusted.gfid2path.428e23fc0d37fc71=0x33343536636634622d336436642d346331622d386331622d6662616466643266356239302f4e333032723030316e3633315f73746f3130302e6f75742e786d6c > trusted.glusterfs.dht.linkto=0x676c757263682d636c69656e742d3900 > trusted.pgfid.3456cf4b-3d6d-4c1b-8c1b-fbadfd2f5b90=0x00000001 > > And here is the trusted.glusterfs.dht of the top level directory of each > brick: > > trusted.glusterfs.dht=0x0888f55900000000b9ec78f7c58cd403 > trusted.glusterfs.dht=0x0888f55900000000e59527f2f148c5e9 > trusted.glusterfs.dht=0x0888f55900000000c58cd404ce451686 > trusted.glusterfs.dht=0x0888f55900000000f148c5eafa0f7a9c > trusted.glusterfs.dht=0x0888f55900000000ce451687d6fd5909 > trusted.glusterfs.dht=0x0888f5590000000008e8c7fe11af7cb0 > trusted.glusterfs.dht=0x0888f55900000000d6fd590ae29ce547 > trusted.glusterfs.dht=0x0888f55900000000209640c72c05d3e5 > trusted.glusterfs.dht=0x0888f55900000000e29ce548e419069c > trusted.glusterfs.dht=0x0888f55900000000e419069de59527f1 > trusted.glusterfs.dht=0x0888f55900000000fa0f7a9dfb8b9bf1 > trusted.glusterfs.dht=0x0888f55900000000fb8b9bf2fd07bd46 > trusted.glusterfs.dht=0x0888f55900000000fd07bd47fe83de9b > trusted.glusterfs.dht=0x0888f55900000000fe83de9cffffffff > trusted.glusterfs.dht=0x0888f5590000000000000000017c2154 > trusted.glusterfs.dht=0x0888f55900000000017c215502f842a9 > trusted.glusterfs.dht=0x0888f5590000000002f842aa047463fe > trusted.glusterfs.dht=0x0888f55900000000047463ff05f08553 > trusted.glusterfs.dht=0x0888f5590000000005f08554076ca6a8 > trusted.glusterfs.dht=0x0888f55900000000076ca6a908e8c7fd > trusted.glusterfs.dht=0x0888f5590000000011af7cb1132b9e05 > trusted.glusterfs.dht=0x0888f55900000000132b9e0614a7bf5a > trusted.glusterfs.dht=0x0888f5590000000014a7bf5b1623e0af > trusted.glusterfs.dht=0x0888f559000000001623e0b017a35fb5 > trusted.glusterfs.dht=0x0888f5590000000017a35fb6191f810a > trusted.glusterfs.dht=0x0888f55900000000191f810b1a9f0010 > trusted.glusterfs.dht=0x0888f559000000001a9f00111c1b2165 > trusted.glusterfs.dht=0x0888f559000000001c1b21661d9aa06b > trusted.glusterfs.dht=0x0888f559000000001d9aa06c1f16c1c0 > trusted.glusterfs.dht=0x0888f559000000001f16c1c1209640c6 > trusted.glusterfs.dht=0x0888f559000000002c05d3e62d81f53a > trusted.glusterfs.dht=0x0888f559000000002d81f53b509f7813 > trusted.glusterfs.dht=0x0888f55900000000509f781473bcfaec > trusted.glusterfs.dht=0x0888f5590000000073bcfaed96da7dc5 > trusted.glusterfs.dht=0x0888f5590000000096da7dc6b9ec78f6 > > I see that there are bricks with the same value for this xattr. On a pure distribute volume this shouldn't happen. The file name is hashed and the result indicates the brick that should contain the file. These xattr define the range of hashes that will go to each brick. The size of these ranges is also different on some bricks. These ranges should be proportional to the size of the brick's disk. If disks are of the same size, the size of the ranges should be equal. Ignoring for now these potential issues, the file you mentioned should be in the brick that has trusted.glusterfs.dht = 0x0888f55900000000d6fd590ae29ce547 (note that there are two bricks with this value). The only valid link file should be in this brick, or it could also be the real file. 
Any link file in other brick is stale and not really required. Regards, Xavi > Thank you a lot! > -Stefan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hunter86_bg at yahoo.com Sun Oct 23 13:19:35 2022 From: hunter86_bg at yahoo.com (Strahil Nikolov) Date: Sun, 23 Oct 2022 13:19:35 +0000 (UTC) Subject: [Gluster-users] glusterfs on ovirt 4.3 problem after multiple power outages. In-Reply-To: References: <2980c9a8-4672-cb46-33e7-ba117f9902a8@uoc.gr> <598989121.143254.1666132823647@mail.yahoo.com> Message-ID: <1073991470.1284592.1666531175431@mail.yahoo.com> I wouldn't do that without having a more clearer picture on the problem. Usually those '.prob-uuid' files are nothing more than probe files : https://github.com/oVirt/ioprocess/blob/ae379c8de83b28d73b6bd42d84e4e942821a7753/src/exported-functions.c#L867-L873 and deleting the old entries should not affect oVirt at all (should != will not). If your oVirt is used for Prod, I would delete them in low traffic hours/planned maintenance window. Can you provide the output of 'gluster volume heal volumeX info' in separate files ? Best Regards, Strahil Nikolov ? ?????, 19 ???????? 2022 ?., 10:00:33 ?. ???????+3, ??????? ???????????? ??????: I have allready done this it didn't seem to help could reseting the arbiter brick be a solution? On 10/19/22 01:40, Strahil Nikolov wrote: Usually, I would run a full heal and check if it improves the situation: gluster volume heal full Best Regards, Strahil Nikolov? On Tue, Oct 18, 2022 at 14:01, ??????? ???????????? wrote: Hello I am seeking consoultation regarding a problem with files not healing after multiple (3) power outages on the servers. The configuration is like this : There are 3 servers (g1,g2,g3) with 3 volumes (volume1, volume2, volume3) with replica 2 + arbiter. Glusterfs is ? 6.10. Volume 1 and volume2 are ok on Volume3 on there are about 12403 healing entries who are not healing and some virtual drives on ovirt vm are not starting and I cannot copy them either. For volume 3 data bricks are on g3 and g1 and arbiter brick is on g2 There are .prob-uuid-something files which are identical on the 2 servers (g1,g3) with the data bricks of volume3 . On g2 (arbiter brick there are no such files.) I have stopped the volume unmounted and runned xfs_repair on all bricks, remounted the bricks and started the volume. it did not fix the problem Is there anything I can do to fix the problem ? ________ Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-users mailing list Gluster-users at gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users ________ Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-users mailing list Gluster-users at gluster.org https://lists.gluster.org/mailman/listinfo/gluster-users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From diego.zuccato at unibo.it Thu Oct 27 10:31:14 2022 From: diego.zuccato at unibo.it (Diego Zuccato) Date: Thu, 27 Oct 2022 12:31:14 +0200 Subject: [Gluster-users] bitd.log and quotad.log flooding /var In-Reply-To: <166136661.581569.1660335515346@mail.yahoo.com> References: <223066796.1645097.1660075716545@mail.yahoo.com> <1784357447.127151.1660194767503@mail.yahoo.com> <2943b4c3-101c-1c90-1b2e-9040dc857624@unibo.it> <166136661.581569.1660335515346@mail.yahoo.com> Message-ID: Seems it's accumulating again. ATM it's like this: root 2134553 2.1 11.2 23071940 22091644 ? Ssl set23 1059:58 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/run/gluster/quotad/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/321cad6822171c64.socket --process-name quotad Uptime is 77d. The other 2 nodes are in the same situation. Gluster is 9.5-1 amd64. Is it latest enough or should I plan a migration to 10? Hints? Diego Il 12/08/2022 22:18, Strahil Nikolov ha scritto: > 75GB -> that's definately a memory leak. > What version do you use ? > > If latest - open a github issue. > > Best Regards, > Strahil Nikolov > > On Thu, Aug 11, 2022 at 10:06, Diego Zuccato > wrote: > Yup. > > Seems the /etc/sysconfig/glusterd setting got finally applied and I now > have a process like this: > root? ? 4107315? 0.0? 0.0 529244 40124 ?? ? ? ? Ssl? ago08? 2:44 > /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level ERROR > but bitd still spits out (some) 'I' lines > [2022-08-11 07:02:21.072943 +0000] I [MSGID: 118016] > [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > Triggering > signing [{path=/extra/some/other/dirs/file.dat}, > {gfid=3e35b158-35a6-4e63-adbd-41075a11022e}, > {Brick-path=/srv/bricks/00/d}] > > Moreover I've had to disable quota, since quota processes were eating > more than *75GB* RAM on each storage node! :( > > Il 11/08/2022 07:12, Strahil Nikolov ha scritto: > > Have you decreased glusterd log level via: > > glusterd --log-level WARNING|ERROR > > > > It seems that bitrot doesn't have it's own log level. > > > > As a workaround, you can configure syslog to send the logs only > remotely > > and thus preventing the overfill of the /var . > > > > > > Best Regards, > > Strahil Nikolov > > > >? ? On Wed, Aug 10, 2022 at 7:52, Diego Zuccato > >? ? > wrote: > >? ? Hi Strahil. > > > >? ? Sure. Luckily I didn't delete 'em all :) > > > >? ? ? From bitd.log: > >? ? -8<-- > >? ? [2022-08-09 05:58:12.075999 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.dat}, > >? ? {gfid=5956af24-5efc-496c-8d7e-ea6656f298de}, > >? ? {Brick-path=/srv/bricks/10/d}] > >? ? [2022-08-09 05:58:12.082264 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.txt}, > >? ? {gfid=afb75c03-0d29-414e-917a-ff718982c849}, > >? ? {Brick-path=/srv/bricks/13/d}] > >? ? [2022-08-09 05:58:12.082267 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.dat}, > >? ? {gfid=982bc7a8-d4ba-45d7-9104-044e5d446802}, > >? ? {Brick-path=/srv/bricks/06/d}] > >? ? [2022-08-09 05:58:12.084960 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/atmos/...omisis.../file}, > >? ? 
{gfid=17e4dfb0-1f64-47a3-9aa8-b3fa05b7cd4e}, > >? ? {Brick-path=/srv/bricks/15/d}] > >? ? [2022-08-09 05:58:12.089357 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.txt}, > >? ? {gfid=e70bf289-5aeb-43c2-aadd-d18979cf62b5}, > >? ? {Brick-path=/srv/bricks/00/d}] > >? ? [2022-08-09 05:58:12.094440 +0000] I [MSGID: 100011] > >? ? [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the > volume file > >? ? from server... [] > >? ? [2022-08-09 05:58:12.096299 +0000] I > >? ? [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: > Received list of > >? ? available volfile servers: clustor00:24007 clustor02:24007 > >? ? [2022-08-09 05:58:12.096653 +0000] I [MSGID: 101221] > >? ? [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: > duplicate > >? ? entry for volfile-server [{errno=17}, {error=File gi? esistente}] > >? ? [2022-08-09 05:58:12.096853 +0000] I > >? ? [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No > change in > >? ? volfile,continuing > >? ? [2022-08-09 05:58:12.096702 +0000] I [MSGID: 101221] > >? ? [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: > duplicate > >? ? entry for volfile-server [{errno=17}, {error=File gi? esistente}] > >? ? [2022-08-09 05:58:12.102176 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.dat}, > >? ? {gfid=45f59e3f-eef4-4ccf-baac-bc8bf10c5ced}, > >? ? {Brick-path=/srv/bricks/09/d}] > >? ? [2022-08-09 05:58:12.106120 +0000] I [MSGID: 118016] > >? ? [bit-rot.c:1052:bitd_oneshot_crawl] 0-cluster_data-bit-rot-0: > >? ? Triggering > >? ? signing [{path=/astro/...omisis.../file.txt}, > >? ? {gfid=216832dd-0a1c-4593-8a9e-f54d70efc637}, > >? ? {Brick-path=/srv/bricks/13/d}] > >? ? -8<-- > > > >? ? And from quotad.log: > >? ? -<-- > >? ? [2022-08-09 05:58:12.291030 +0000] I > >? ? [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: > Received list of > >? ? available volfile servers: clustor00:24007 clustor02:24007 > >? ? [2022-08-09 05:58:12.291143 +0000] I [MSGID: 101221] > >? ? [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: > duplicate > >? ? entry for volfile-server [{errno=17}, {error=File gi? esistente}] > >? ? [2022-08-09 05:58:12.291653 +0000] I > >? ? [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No > change in > >? ? volfile,continuing > >? ? [2022-08-09 05:58:12.292990 +0000] I > >? ? [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: > Received list of > >? ? available volfile servers: clustor00:24007 clustor02:24007 > >? ? [2022-08-09 05:58:12.293204 +0000] I > >? ? [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: > Received list of > >? ? available volfile servers: clustor00:24007 clustor02:24007 > >? ? [2022-08-09 05:58:12.293500 +0000] I > >? ? [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No > change in > >? ? volfile,continuing > >? ? [2022-08-09 05:58:12.293663 +0000] I > >? ? [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No > change in > >? ? volfile,continuing > >? ? The message "I [MSGID: 100011] [glusterfsd.c:1511:reincarnate] > >? ? 0-glusterfsd: Fetching the volume file from server... []" > repeated 2 > >? ? times between [2022-08-09 05:58:12.094470 +0000] and [2022-08-09 > >? ? 05:58:12.291149 +0000] > >? ? The message "I [MSGID: 101221] > >? ? [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: > duplicate > >? ? 
> >    entry for volfile-server [{errno=17}, {error=File già esistente}]"
> >    repeated 5 times between [2022-08-09 05:58:12.291143 +0000] and
> >    [2022-08-09 05:58:12.293234 +0000]
> >    [2022-08-09 06:00:23.180856 +0000] I
> >    [glusterfsd-mgmt.c:77:mgmt_cbk_spec] 0-mgmt: Volume file changed
> >    [2022-08-09 06:00:23.324981 +0000] I
> >    [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> >    available volfile servers: clustor00:24007 clustor02:24007
> >    [2022-08-09 06:00:23.325025 +0000] I [MSGID: 101221]
> >    [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> >    entry for volfile-server [{errno=17}, {error=File già esistente}]
> >    [2022-08-09 06:00:23.325498 +0000] I
> >    [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> >    volfile,continuing
> >    [2022-08-09 06:00:23.325046 +0000] I [MSGID: 101221]
> >    [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> >    entry for volfile-server [{errno=17}, {error=File già esistente}]
> >    [2022-08-09 22:00:07.364719 +0000] I [MSGID: 100011]
> >    [glusterfsd.c:1511:reincarnate] 0-glusterfsd: Fetching the volume file
> >    from server... []
> >    [2022-08-09 22:00:07.374040 +0000] I
> >    [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> >    available volfile servers: clustor00:24007 clustor02:24007
> >    [2022-08-09 22:00:07.374099 +0000] I [MSGID: 101221]
> >    [common-utils.c:3851:gf_set_volfile_server_common] 0-gluster: duplicate
> >    entry for volfile-server [{errno=17}, {error=File già esistente}]
> >    [2022-08-09 22:00:07.374569 +0000] I
> >    [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> >    volfile,continuing
> >    [2022-08-09 22:00:07.385610 +0000] I
> >    [glusterfsd-mgmt.c:2170:mgmt_getspec_cbk] 0-glusterfs: Received list of
> >    available volfile servers: clustor00:24007 clustor02:24007
> >    [2022-08-09 22:00:07.386119 +0000] I
> >    [glusterfsd-mgmt.c:2203:mgmt_getspec_cbk] 0-glusterfs: No change in
> >    volfile,continuing
> >    -8<--
> >
> >    I've now used
> >        gluster v set cluster_data diagnostics.brick-sys-log-level CRITICAL
> >    and the fill rate decreased, but I still see many 'I' lines :(
> >
> >    Using Gluster 9.5 packages from
> >    deb [arch=amd64]
> >    https://download.gluster.org/pub/gluster/glusterfs/9/LATEST/Debian/bullseye/amd64/apt
> >    bullseye main
> >
> >    Tks,
> >        Diego
> >
> >    On 09/08/2022 22:08, Strahil Nikolov wrote:
> >    > Hey Diego,
> >    >
> >    > Can you show a sample of such Info entries?
> >    >
> >    > Best Regards,
> >    > Strahil Nikolov
> >    >
> >    >    On Mon, Aug 8, 2022 at 15:59, Diego Zuccato
> >    >    <diego.zuccato at unibo.it> wrote:
> >    >    Hello all.
> >    >
> >    >    Lately, I noticed some hiccups in our Gluster volume. It's a
> >    >    "replica 3 arbiter 1" with many bricks (currently 90 data
> >    >    bricks over 3 servers).
> >    >
> >    >    I tried to reduce the log level by setting
> >    >    diagnostics.brick-log-level: ERROR
> >    >    diagnostics.client-log-level: ERROR
> >    >    and creating /etc/default/glusterd containing "LOG_LEVEL=ERROR".
> >    >    But I still see a lot of 'I' lines in the logs and have to
> >    >    manually run logrotate way too often or /var gets too full.
> >    >
> >    >    Any hints? What did I forget?
> >    >
> >    >    Tks.
> >    >
> >    >    --
> >    >    Diego Zuccato
> >    >    DIFA - Dip. di Fisica e Astronomia
> >    >    Servizi Informatici
> >    >    Alma Mater Studiorum - Università di Bologna
> >    >    V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> >    >    tel.: +39 051 20 95786
> >    >    ________
> >    >
> >    >
> >    >
> >    >    Community Meeting Calendar:
> >    >
> >    >    Schedule -
> >    >    Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> >    >    Bridge: https://meet.google.com/cpu-eiue-hvk
> >    >    Gluster-users mailing list
> >    >    Gluster-users at gluster.org
> >    >    https://lists.gluster.org/mailman/listinfo/gluster-users
> >    >
> >
> >    --
> >    Diego Zuccato
> >    DIFA - Dip. di Fisica e Astronomia
> >    Servizi Informatici
> >    Alma Mater Studiorum - Università di Bologna
> >    V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> >    tel.: +39 051 20 95786
> 
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786
> 

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786


From shreyansh.shah at alpha-grep.com  Fri Oct 28 06:10:33 2022
From: shreyansh.shah at alpha-grep.com (Shreyansh Shah)
Date: Fri, 28 Oct 2022 11:40:33 +0530
Subject: [Gluster-users] Gluster 5.10 rebalance stuck
Message-ID: 

Hi,
We are running a glusterfs 5.10 server volume. Recently we added a few new
bricks and started a rebalance operation. After a couple of days the
rebalance operation was stuck: one peer showed In-Progress with no files
being read or transferred, while the rest showed Failed/Completed, so we
stopped it using "gluster volume rebalance data stop". Now, when we try to
start it again, we get the errors below. Any assistance would be
appreciated.

root@gluster-11:~# gluster volume rebalance data status
> volume rebalance: data: failed: Rebalance not started for volume data.
> root@gluster-11:~# gluster volume rebalance data start
> volume rebalance: data: failed: Rebalance on data is already started
> root@gluster-11:~# gluster volume rebalance data stop
> volume rebalance: data: failed: Rebalance not started for volume data.

-- 
Regards,
Shreyansh Shah
AlphaGrep Securities Pvt. Ltd.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mzieba at gelbergroup.com  Wed Oct 26 18:22:05 2022
From: mzieba at gelbergroup.com (Marcin Zieba)
Date: Wed, 26 Oct 2022 18:22:05 +0000
Subject: [Gluster-users] Gluster issues rebalancing
Message-ID: 

Hi,

I have a distributed replicated cluster with an arbiter. I added an
additional 2 pairs of bricks to the cluster and ran a rebalance. It
completed successfully; however, the data balanced only between the
original bricks and one pair of the new bricks, while the third pair
received only a very small amount of data.

Has anyone come across this, and is there a way to balance the data
across all 3 pairs?

New data written to the share also seems to behave this way.

Thanks,

Marcin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
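
The two recurring problems in the messages above (logs filling /var, and a rebalance that is stuck or uneven) usually come down to a handful of CLI steps. The sketch below is untested and hedged: it reuses the volume names from the threads ("cluster_data" and "data"), assumes the default /var/log/glusterfs log locations and the /srv/bricks mount points mentioned above, invents the logrotate drop-in name "glusterfs-extra" purely as an example, and treats the glusterd restart as a workaround commonly suggested on this list for stale rebalance state, not as a documented fix.

# Per-volume log levels (as seen above, bitd may keep emitting 'I' lines,
# so log rotation remains the safety net for /var):
gluster volume set cluster_data diagnostics.brick-log-level ERROR
gluster volume set cluster_data diagnostics.client-log-level ERROR
gluster volume set cluster_data diagnostics.brick-sys-log-level CRITICAL

# Rotate the noisy daemon logs daily if the packaged logrotate rules do not
# already cover them ("glusterfs-extra" is an arbitrary file name):
cat > /etc/logrotate.d/glusterfs-extra <<'EOF'
/var/log/glusterfs/bitd.log /var/log/glusterfs/quotad.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
EOF

# Stuck rebalance ("already started" vs "not started"): check the status and
# the rebalance log on the peer still marked in progress, then restart only
# the management daemon there (the service may be called glusterfs-server on
# older Debian/Ubuntu packages; brick processes are not touched):
gluster volume rebalance data status
less /var/log/glusterfs/data-rebalance.log
systemctl restart glusterd

# Then retry: either a fix-layout pass (recomputes directory layouts so new
# bricks become eligible for new files) or, once no rebalance is running, a
# full "start force", which also migrates files a plain rebalance skips:
gluster volume rebalance data fix-layout start
gluster volume rebalance data start force

# Verify data is actually spreading across all brick pairs:
df -h /srv/bricks/*

Whether the uneven distribution clears up depends on how the directory layouts were built before the new bricks were added and on how full the original bricks are; comparing per-brick usage with df before and after a full "start force" run is the quickest check.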