[Gluster-users] Replicating data files is causing issue with postgres

Jeff Lord jlord at mediosystems.com
Fri Mar 27 22:03:52 UTC 2009

We are attempting to run a Postgres cluster composed of two nodes, each
mirroring the data on the other. The Gluster config is identical on
each node:

volume posix
  type storage/posix
  option directory /mnt/sdb1
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume gfs01-hq
  type protocol/client
  option transport-type tcp
  option remote-host gfs01-hq
  option remote-subvolume brick
end-volume

volume gfs02-hq
  type protocol/client
  option transport-type tcp
  option remote-host gfs02-hq
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  option favorite-child gfs01-hq
  subvolumes gfs01-hq gfs02-hq
end-volume

volume writebehind
  type performance/write-behind
  option page-size 128KB
  option cache-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
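For completeness, a volfile like the one above is then mounted on each node and can be sanity-checked by writing through the mount (a sketch only; the volfile path, mount point, and use of ssh are assumptions, not from the original post):

```shell
# Mount the client graph described by the volfile; paths are assumptions.
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/gfs

# Write through the mount, then confirm the backend copy on this node
# (and, over ssh, on the peer) to verify that replication is working.
echo hello > /mnt/gfs/repl-test
ls -l /mnt/sdb1/repl-test
ssh gfs02-hq ls -l /mnt/sdb1/repl-test
```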

The basic problem is that whenever I try to import a database created
on a different cluster, I run into these errors:

-bash-3.2$ pg_restore -U entitystore -d entitystore --no-owner -n public entitystore
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1829; 0 147089 TABLE DATA entity_medio-canon-all-0 entitystore
pg_restore: [archiver (db)] COPY failed: ERROR:  unexpected data beyond EOF in block 77309 of relation "entity_medio-canon-all-0"
HINT:  This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT:  COPY entity_medio-canon-all-0, line 1022934: "medio-canon-all-0	1.mut_2572632518437988628	\\340\\000\\000\\001\\0008\\317\
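The error means COPY found data past the file's expected end, i.e. the file size Postgres sees did not keep up with its writes. As a rough self-check (a sketch; MOUNT defaults to a temp dir, and the Gluster mount path is an assumption), one can verify that reported file sizes track appends on the mount:

```shell
# Sketch of the invariant Postgres relies on: stat's size must keep up
# with appended writes. Set MOUNT to the Gluster mount point (an
# assumption, e.g. /mnt/gfs) to exercise the replicated volume; it
# defaults to a local temp dir so the script runs anywhere.
MOUNT="${MOUNT:-$(mktemp -d)}"
F="$MOUNT/eof-test"
dd if=/dev/zero of="$F" bs=8192 count=1 2>/dev/null
SIZE1=$(stat -c %s "$F")                    # size after initial 8 KB block
dd if=/dev/zero bs=8192 count=2 2>/dev/null >> "$F"
SIZE2=$(stat -c %s "$F")                    # size after appending 16 KB more
echo "$SIZE1 $SIZE2"                        # expect: 8192 24576
rm -f "$F"
```

If the second size lags behind on the Gluster mount but not on local disk, a caching translator in the client stack is the likely suspect.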

The issue seems to be related to Gluster: when I attempt the same
restore to a local (non-replicated) disk, it works fine.
Is there something amiss in our Gluster config? Should we be doing
something different?

Thanks for taking the time to read.
