[Gluster-devel] glusterfs 1.3.1 tla472 crash

Amar S. Tumballi amar at zresearch.com
Thu Sep 6 17:05:52 UTC 2007


Hi Andrey,
I am looking into the issue and will get back to you soon. By the way, is
it consistently reproducible (i.e., you mount glusterfs, create a file, and
it segfaults every time)?
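
Looking at the trace below, signal 8 (SIGFPE) in get_stats_free_disk()
usually points to an integer division by zero while converting a statfs
reply into a free-disk figure. A minimal sketch of that failure mode; the
struct and field names here are illustrative guesses, not necessarily the
exact 1.3.1 code:

------------
#include <stdint.h>

/* Illustrative stand-in for the stats carried in the protocol reply;
 * the field names are assumptions, not the exact 1.3.1 definitions. */
struct xlator_stats {
        uint64_t free_disk;        /* free space reported by statfs  */
        uint64_t total_disk_size;  /* total space reported by statfs */
};

/* Without a zero check, a stats reply carrying total_disk_size == 0
 * (e.g. a failed or empty statfs on the server) makes the integer
 * division below raise SIGFPE ("Arithmetic exception"), which matches
 * frame #0 of the trace. */
static uint64_t
get_stats_free_disk (struct xlator_stats *stats)
{
        if (stats->total_disk_size == 0)
                return 0;  /* guard that would prevent the crash */

        return (stats->free_disk * 100) / stats->total_disk_size;
}
------------

If that is indeed the cause, switching the unify scheduler to round-robin
("option scheduler rr" in place of "option scheduler alu" in client.vol)
should sidestep the ALU stats path as a temporary workaround while we look
at it.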

-Amar

On 9/6/07, NovA <av.nova at gmail.com> wrote:
>
> Hello everybody!
>
> Today I tried to switch from tla patch 184 to the 1.3.1 release.
> But the client crashes and dumps core when writing anything to the
> mounted unify volume. Here are the backtrace and my configs:
>
> ------------
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `[glusterfs]'.
> Program terminated with signal 8, Arithmetic exception.
> #0  0x00002aaaab569af2 in get_stats_free_disk (this=0x7fffa19e8420)
>     at alu.c:71
>         in alu.c
> #0  0x00002aaaab569af2 in get_stats_free_disk (this=0x7fffa19e8420)
>     at alu.c:71
> #1  0x00002aaaab56a21d in update_stat_array_cbk (frame=0x65d2c0,
>     cookie=0x6147e0, xl=<value optimized out>, op_ret=0,
>     op_errno=<value optimized out>, trav_stats=0x7fffa19e8420) at alu.c:483
> #2  0x00002aaaaaaaf9d5 in client_stats_cbk (frame=0x6560e0,
>     args=<value optimized out>) at client-protocol.c:3974
> #3  0x00002aaaaaab2c33 in notify (this=0x614800,
>     event=<value optimized out>, data=0x62cd80) at client-protocol.c:4409
> #4  0x00002aac092eeb22 in sys_epoll_iteration (ctx=<value optimized out>)
>     at epoll.c:53
> #5  0x000000000040348b in main (argc=6, argv=0x7fffa19e8728)
>     at glusterfs.c:388
> ------------
>
>
> --------- server-head.vol ------
> # Namespace
> volume brick-ns
>   type storage/posix                  # POSIX FS translator
>   option directory /mnt/glusterfs-ns  # Export this directory
> end-volume
>
> # Data
> volume disk
>   type storage/posix              # POSIX FS translator
>   option directory /mnt/hd        # Export this directory
> end-volume
>
> volume locks
>   type features/posix-locks
>   option mandatory on
>   subvolumes disk
> end-volume
>
> volume brick    # iothreads can give performance a boost
>   type performance/io-threads
>   option thread-count 8
>   subvolumes locks
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
> # option bind-address 192.168.1.10     # Default is to listen on all interfaces
> # option listen-port 6996              # Default is 6996
> # option client-volume-filename /etc/glusterfs/client.vol
>   subvolumes brick brick-ns
>   option auth.ip.brick.allow 10.1.0.*     # Allow access to "brick" volume
>   option auth.ip.brick-ns.allow 10.1.0.*  # Allow access to "brick-ns" volume
> end-volume
> ---------------------------------------
>
> -------- server-node.vol -----------
> volume disk
>   type storage/posix
>   option directory /mnt/hd
> end-volume
>
> volume locks
>   type features/posix-locks
>   option mandatory on
>   subvolumes disk
> end-volume
>
> volume brick
>   type performance/io-threads
>   subvolumes locks
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server     # For TCP/IP transport
> # option bind-address 192.168.1.10     # Default is to listen on all interfaces
> # option listen-port 6996              # Default is 6996
>   option client-volume-filename /etc/glusterfs/client.vol
>   subvolumes brick
>   option auth.ip.brick.allow 10.1.0.*  # Allow access to "brick" volume
> end-volume
> -----------------------------------
>
> ------------ client.vol -----------
> ### Remote subvolumes
> volume c0
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.1.0.1          # IP address of the remote brick
> # option remote-port 6996              # default server port is 6996
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume c-ns  # namespace
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.1.0.1
>   option remote-subvolume brick-ns
> end-volume
>
> volume c1
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.1.0.2          # IP address of the remote brick
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> [ ...skipped... ]
>
> volume c10
>   type protocol/client
>   option transport-type tcp/client     # for TCP/IP transport
>   option remote-host 10.1.0.11         # IP address of the remote brick
>   option remote-subvolume brick        # name of the remote volume
> end-volume
>
> volume bricks
>   type cluster/unify
>   option namespace c-ns    # this will not be a storage child of unify
>   subvolumes c0 c1 c2 c3 c4 c5 c6 c7 c8 c9 c10
>
>   option scheduler alu
>     option alu.limits.min-free-disk  5% #%
>     option alu.limits.max-open-files 10000
>     option alu.order disk-usage:read-usage:write-usage:open-files-usage:disk-speed-usage
>     option alu.disk-usage.entry-threshold 2GB
>     option alu.disk-usage.exit-threshold  128MB
>     option alu.open-files-usage.entry-threshold 1024
>     option alu.open-files-usage.exit-threshold 32
>     option alu.read-usage.entry-threshold 20 #%
>     option alu.read-usage.exit-threshold 4 #%
>     option alu.write-usage.entry-threshold 20 #%
>     option alu.write-usage.exit-threshold 4 #%
>     option alu.disk-speed-usage.entry-threshold 0
>     option alu.disk-speed-usage.exit-threshold 0
>     option alu.stat-refresh.interval 10sec
>     option alu.stat-refresh.num-file-create 10
> end-volume
>
> #volume debug
> #    type debug/trace
> #    subvolumes bricks
> #end-volume
>
> volume threads
>   type performance/io-threads
>   option thread-count 11
>   subvolumes bricks
> end-volume
>
> volume wb
>   type performance/write-behind
>   option aggregate-size 1MB
>   subvolumes threads
> end-volume
> ---------------------------------
>
> With best regards,
>   Andrey
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>



-- 
Amar Tumballi
Engineer - Gluster Core Team
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Supercomputing and Superstorage!


