[Gluster-users] FORTRAN Codes and File I/O

Brian Smith brs at usf.edu
Fri Feb 12 16:22:05 UTC 2010


Hi, Harshavardhana,

Thanks for the reply.  My volume files are below.  Unfortunately, there
is no helpful information in the logs as it seems the log verbosity is
set too low.  I'll update that and hopefully get some more information.
I'm using distribute and not NUFA.
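
In the meantime, one quick way to get more detail is to remount the
client with a higher log level.  A sketch, assuming the stock glusterfs
CLI and that the volgen-generated client volfile lives at
/etc/glusterfs/work-tcp.vol with the volume mounted at /mnt/work
(both paths are assumptions; adjust to your layout):

```shell
# Remount the client with verbose logging; DEBUG is the chattiest
# level short of TRACE.  Volfile path and mount point are assumptions.
umount /mnt/work
glusterfs --log-level=DEBUG --log-file=/var/log/glusterfs/work.log \
          -f /etc/glusterfs/work-tcp.vol /mnt/work
```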

## file auto generated by /usr/bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name work pvfs0:/pvfs/glusterfs pvfs1:/pvfs/glusterfs

volume posix1
    type storage/posix
    option directory /pvfs/glusterfs
end-volume

volume locks1
    type features/locks
    subvolumes posix1
    option mandatory-locks on
end-volume

volume brick1
    type performance/io-threads
    option thread-count 8
    subvolumes locks1
end-volume

volume server-tcp
    type protocol/server
    option transport-type tcp
    option auth.addr.brick1.allow *
    option transport.socket.listen-port 6996
    option transport.socket.nodelay on
    subvolumes brick1
end-volume

## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /usr/bin/glusterfs-volgen --name work pvfs0:/pvfs/glusterfs pvfs1:/pvfs/glusterfs

# TRANSPORT-TYPE tcp
volume pvfs0-1
    type protocol/client
    option transport-type tcp
    option remote-host pvfs0
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume pvfs1-1
    type protocol/client
    option transport-type tcp
    option remote-host pvfs1
    option transport.socket.nodelay on
    option transport.remote-port 6996
    option remote-subvolume brick1
end-volume

volume distribute
    type cluster/distribute
    subvolumes pvfs0-1 pvfs1-1
end-volume

volume writebehind
    type performance/write-behind
    option cache-size 4MB
    subvolumes distribute
end-volume

volume readahead
    type performance/read-ahead
    option page-count 4
    subvolumes writebehind
end-volume

volume iocache
    type performance/io-cache
    option cache-size 1GB
    option cache-timeout 1
    subvolumes readahead
end-volume

volume quickread
    type performance/quick-read
    option cache-timeout 1
    option max-file-size 64kB
    subvolumes iocache
end-volume

volume statprefetch
    type performance/stat-prefetch
    subvolumes quickread
end-volume
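
As an aside on the original error: gfortran's OPEN with status='new'
refuses to open a file that already exists, which is exactly the
message reported.  The same failure mode can be sketched at the shell
level with bash's noclobber option (the file name CHGCAR is taken from
the error message; exact OPEN semantics depend on the compiler):

```shell
# noclobber makes ">" refuse to truncate an existing file, roughly
# analogous to an exclusive create / Fortran status='new'.
cd "$(mktemp -d)"          # work in a scratch directory
touch CHGCAR               # file already present, as after a restarted job
if ( set -C; : > CHGCAR ) 2>/dev/null; then
    echo "create succeeded"
else
    echo "create refused: CHGCAR already exists"
fi
```

If stale cached metadata makes the file look present (or absent) to
the exclusive create, a run that works on a local filesystem can fail
intermittently, which would match the scheduler-only symptom.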



-- 
Brian Smith
Senior Systems Administrator
IT Research Computing, University of South Florida
4202 E. Fowler Ave. ENB204
Office Phone: +1 813 974-1467
Organization URL: http://rc.usf.edu


On Fri, 2010-02-12 at 14:47 +0530, Harshavardhana wrote:
> Hi Brian,
> 
>     Can you share your volume files and log files? Are you using the
> NUFA translator? We have seen certain issues running "vasp" application
> codes on NUFA-based configurations.
> 
> --
> Harshavardhana
> 
> On Thu, Feb 11, 2010 at 11:29 PM, Brian Smith <brs at usf.edu> wrote:
>         Hi all,
>         
>         I'm running Gluster 3.0.0 on top of XFS and while running a
>         FORTRAN code
>         that works perfectly well on any other file system, I get
>         runtime errors
>         when trying to open files -- along the lines of:
>         
>         At line 386 of file main.f (unit = 18, file = '')
>         Fortran runtime error: File 'CHGCAR' already exists
>         
>         Are there known issues with FORTRAN I/O and Gluster?  Is this
>         some sort
>         of caching artifact?  It's not a consistent problem, as it only
>         seems to
>         happen when running jobs within my scheduling environment (I
>         use SGE).
>         
>         Let me know if you need more info.
>         
>         Thanks in advance,
>         -Brian
>         
>         
>         On Thu, 2010-02-11 at 15:13 +0100, Eros Candelaresi wrote:
>         > Hi,
>         >
>         > for my small webhosting (3 servers, more to come hopefully)
>         I am
>         > investigating cluster filesystems. I have seen a few now and
>         I love the
>         > flexibility that GlusterFS brings. Still I cannot see a way
>         to adapt it
>         > to suit my needs. I have the following hardware:
>         > - Server #1 with 160GB S-ATA
>         > - Server #2 with 2x 400GB S-ATA
>         > - Server #3 with 2x 1.5TB S-ATA
>         >
>         > I am hoping to find a filesystem that fulfills the following
>         requirements:
>         > 1. POSIX compliant (Apache, Postfix, etc. will use it) -
>         GlusterFS has it
>         > 2. combine the hard disks of all servers into one single
>         filesystem -
>         > DHT/unify seem to do the job
>         > 3. redundancy: have a copy of each single file on at least 2
>         machines
>         > such that a single host may fail without people noticing -
>         looks like
>         > this may be achieved by having AFR below DHT/Unify
>         > 4. after a server failure redundancy should automatically be
>         recreated
>         > (i.e. create new copies of all files that only exist once
>         after the crash)
>         > 5. just throw in new hardware, connect it with the cluster
>         and let the
>         > filesystem take care of filling it with data
>         >
>         > Hadoop seems strong on points 2.-5. but fails in 1. and is
>         unsuited for
>         > small files. For GlusterFS however, I cannot see how to
>         achieve 4.-5.
>         > There always seems to be manual reconfiguration and data
>         movement
>         > involved, is this correct? Since most of the Wiki is still
>         based on 2.0
>         > and there is 3.0 out now, I may be missing something.
>         >
>         > Hoping for your comments.
>         >
>         > Thanks and regards,
>         > Eros
>         >
>         >
>         > _______________________________________________
>         > Gluster-users mailing list
>         > Gluster-users at gluster.org
>         > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>         
> 



