[Gluster-devel] GlusterFs: Problems with Memory Mapped Files and "apt-get" on debian
ul at enas.net
Tue May 15 09:08:20 UTC 2007
I have a problem with "mmap" and "apt" on my GlusterFs test environment.
My setup:
- 2 different servers for storage
- 1 server as client
On top of the servers I use a virtual server setup (details below):
Debian Sarge with a self-compiled kernel 220.127.116.11 (uname -r 18.104.22.168-vs2.2.0) and the
latest stable virtual server patch.
GlusterFs: latest mainline 2.4 from the repository
What I'm trying to do:
- Create an AFR mirror over the 2 servers.
- Mount the volume on server 3 (the client).
- Install the whole virtual server (Apache, MySql and so on) on the
mounted volume.
So I have a fully redundant virtual server mirrored over two bricks.
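A setup like this is usually described in a client-side volume spec. The following is only a minimal sketch, assuming the protocol/client and cluster/afr translator syntax from the mainline 2.4 era docs; the hostnames and brick names are placeholders, since the poster's actual spec files are not shown:

```
# client.vol -- hypothetical sketch, not the poster's real spec
volume remote1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1        # placeholder hostname
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2        # placeholder hostname
  option remote-subvolume brick
end-volume

volume mirror
  type cluster/afr
  subvolumes remote1 remote2
end-volume
```

The client would then mount it with something like "glusterfs -f client.vol /mnt/glusterfs", so every write is replicated to both bricks.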
After some help from Avati Anand last week, the above setup works just
fine. I tried out MySql and it works normally (I will make some more
tests in the future).
But now I have the problem with apt.
For example, when I try to update the package lists within the virtual
server I get the following error:
mastersql:/# apt-get update
Get:1 http://security.debian.org etch/updates Release.gpg
Hit http://ftp.de.debian.org etch/non-free
Fetched 2B in 7s
Reading package lists... Error!
E: Couldn't make mmap of 12582912 bytes - mmap (19 No such device)
W: Unable to munmap
E: The package lists or status file could not be parsed or opened.
After some googling I found out that apt uses "memory mapped files",
and it seems that apt can't find some device.
But I'm not able to find out which device it can't find.
Have you any idea what can cause this problem? Without GlusterFs as the
underlying filesystem the problem does not occur.
Krishna Srinivas wrote:
> Hi Danson,
> Updating the replica only on the next access is not the best
> solution for the reasons mentioned by you. We will announce
> the AFR's auto-sync design to the list soon. It is scheduled
> for 1.4.
> On 5/12/07, Danson Michael Joseph <danson.joseph at baobabelectric.com> wrote:
>> Hi Again,
>> I just read that the CODA filesystem can replicate to multiple servers,
>> and when a server goes down and comes back up some time after files on
>> the remaining server have changed, the policy for repair is to repair on
>> next access. Now I don't believe that this policy is ideal, because if
>> the second server then fails before a file has been accessed, the file
>> is not up to date. But it does highlight a possible repair technique
>> whereby the client does a loopback write for all files with different
>> timestamps or some other marked difference.
>> Gluster-devel mailing list
>> Gluster-devel at nongnu.org