[Gluster-users] Quota translator troubles

Anand Avati avati at zresearch.com
Fri Jan 30 20:34:06 UTC 2009


Patrick,
 Can you try the latest codebase? Some fixes in quota have gone in
since your first mail. Also, quota is best used on the server side for
now; we are still working on making it work well on the client side.

Avati

On Fri, Jan 30, 2009 at 9:57 PM, Patrick Ruckstuhl <patrick at tario.org> wrote:
> Hi Ananth,
>
>
> here's the Config with the Quota on top:
>
> Server (two servers with different ip addresses have this config)
>
> ### Log brick
> volume log-posix
>  type storage/posix                   # POSIX FS translator
>  option directory /data/glusterfs/log        # Export this directory
> end-volume
>
> ### Add lock support
> volume log-locks
>  type features/locks
>  subvolumes log-posix
> end-volume
>
> ### Add performance translator
> volume log-brick
>  type performance/io-threads
>  option thread-count 8
>  subvolumes log-locks
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>  type protocol/server
>  option transport-type tcp
>  option transport.socket.bind-address 192.168.0.4
>  subvolumes log-brick
>  option auth.addr.log-brick.allow 192.168.0.2,192.168.0.3,192.168.0.4  # Allow access to "brick" volume
> end-volume
>
>
> Client
>
> ### Add client feature and attach to remote subvolume
> volume log-remote-hip
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.0.3         # IP address of the remote brick
>  option remote-subvolume log-brick        # name of the remote volume
> end-volume
>
> ### Add client feature and attach to remote subvolume
> volume log-remote-hop
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.0.4         # IP address of the remote brick
>  option remote-subvolume log-brick        # name of the remote volume
> end-volume
>
> ### This is a distributed volume
> volume log-distribute
>  type cluster/distribute
>  subvolumes log-remote-hip log-remote-hop
> end-volume
>
> ### Add writeback feature
> volume log-writeback
>  type performance/write-behind
>  option block-size 512KB
>  option cache-size 100MB
>  option flush-behind off
>  subvolumes log-distribute
> end-volume
>
> ### Add quota
> volume log
>  type features/quota
>  option disk-usage-limit 100GB
>  subvolumes log-writeback
> end-volume
>
>
> This config results in the crash. (if you can't reproduce the crash with
> this config I can see if I'll be able to run it with gdb)
>
>
> The config that works is
>
>
> Server (two servers with different ip addresses have this config)
>
> ### Log brick
> volume log-posix
>  type storage/posix                   # POSIX FS translator
>  option directory /data/glusterfs/log        # Export this directory
> end-volume
>
> ### Add quota support
> volume log-quota
>  type features/quota
>  option disk-usage-limit 100GB
>  subvolumes log-posix
> end-volume
>
> ### Add lock support
> volume log-locks
>  type features/locks
>  subvolumes log-quota
> end-volume
>
> ### Add performance translator
> volume log-brick
>  type performance/io-threads
>  option thread-count 8
>  subvolumes log-locks
> end-volume
>
> ### Add network serving capability to above brick.
> volume server
>  type protocol/server
>  option transport-type tcp
>  option transport.socket.bind-address 192.168.0.4
>  subvolumes log-brick
>  option auth.addr.log-brick.allow 192.168.0.2,192.168.0.3,192.168.0.4  # Allow access to "brick" volume
> end-volume
>
>
> Client
>
> ### Add client feature and attach to remote subvolume
> volume log-remote-hip
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.0.3         # IP address of the remote brick
>  option remote-subvolume log-brick        # name of the remote volume
> end-volume
>
> ### Add client feature and attach to remote subvolume
> volume log-remote-hop
>  type protocol/client
>  option transport-type tcp
>  option remote-host 192.168.0.4         # IP address of the remote brick
>  option remote-subvolume log-brick        # name of the remote volume
> end-volume
>
> ### This is a distributed volume
> volume log-distribute
>  type cluster/distribute
>  subvolumes log-remote-hip log-remote-hop
> end-volume
>
> ### Add writeback feature
> volume log-writeback
>  type performance/write-behind
>  option block-size 512KB
>  option cache-size 100MB
>  option flush-behind off
>  subvolumes log-distribute
> end-volume
>
>
> So the only difference is that the quota translator moved from the top of
> the client stack to the bottom of each server stack. This works, but df
> returns the wrong amount (it reports the total available space, not the
> space available under the given quota).
>
>
> Regards,
> Patrick
>
>> Hi Patrick,
>> It would be great if you could provide us with a bit more information.
>> Could you mail us the specfiles used (both with and without quota) and also
>> the gdb backtrace? Also, if there are any special steps we need to follow to
>> reproduce the issue, please do let us know.
>> Regards,
>> Ananth
>>
>> -----Original Message-----
>> *From*: Patrick Ruckstuhl <patrick at tario.org>
>> *To*: gluster-users at gluster.org
>> *Subject*: [Gluster-users] Quota translator troubles
>> *Date*: Sun, 25 Jan 2009 17:21:46 +0100
>>
>> Hi,
>>
>> I tried to add the quota translator on top of everything else (basically
>> I'd like to restrict the size of a distributed volume).
>>
>> This did not seem to work: I got the following error as soon as I
>> added the quota translator, and removing it fixed the problem.
>>
>>
>>
>> 2009-01-25 16:17:37 E [dht-common.c:1346:dht_getxattr] log-distribute:
>> invalid argument: loc->inode
>> 2009-01-25 16:17:39 W [dht-layout.c:456:dht_layout_normalize]
>> log-distribute: directory / looked up first time
>> 2009-01-25 16:17:39 W [dht-common.c:137:dht_lookup_dir_cbk]
>> log-distribute: fixing assignment on /
>> pending frames:
>> frame : type(1) op(UNLINK)
>>
>> patchset: glusterfs--mainline--3.0--patch-844
>> signal received: 11
>> configuration details:argp 1
>> backtrace 1
>> bdb->cursor->get 1
>> db.h 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 2.0.0rc1
>> /lib/libc.so.6[0x7fdfa2391f60]
>> /lib/libpthread.so.0(pthread_spin_lock+0x0)[0x7fdfa26be630]
>> /lib/libglusterfs.so.0[0x7fdfa2adc6ac]
>> /lib/libglusterfs.so.0(dict_get_ptr+0x33)[0x7fdfa2add4e3]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/cluster/distribute.so(dht_layout_get+0x21)[0x7fdfa1f2fac1]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/cluster/distribute.so(dht_subvol_get_hashed+0x4c)[0x7fdfa1f301ac]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/cluster/distribute.so(dht_unlink+0x69)[0x7fdfa1f386e9]
>> /lib/libglusterfs.so.0(default_unlink+0xa7)[0x7fdfa2ae6cf7]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/features/quota.so(quota_unlink_stat_cbk+0xc1)[0x7fdfa1b1f721]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/performance/write-behind.so(wb_stat_cbk+0x5c)[0x7fdfa1d2693c]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/cluster/distribute.so(dht_attr_cbk+0xb0)[0x7fdfa1f395c0]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/protocol/client.so(client_stat_cbk+0x14c)[0x7fdfa215965c]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/protocol/client.so(protocol_client_pollin+0xc6)[0x7fdfa21498b6]
>>
>> /lib/glusterfs/2.0.0rc1/xlator/protocol/client.so(notify+0x128)[0x7fdfa214ff28]
>> /lib/glusterfs/2.0.0rc1/transport/socket.so[0x7fdfa12e00fc]
>> /lib/libglusterfs.so.0[0x7fdfa2af778f]
>> /sbin/glusterfs(main+0x888)[0x403638]
>> /lib/libc.so.6(__libc_start_main+0xe6)[0x7fdfa237e1a6]
>> /sbin/glusterfs[0x402379]
>>
>>
>>
>>
>> Adding the translator directly above the posix volume seems to work
>> somewhat (df still shows the wrong available size, but I'm not able to
>> write more than the specified amount).
>>
>> Regards,
>> Patrick
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>
>
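
[Editor's note] The translator stacking order is the crux of this thread: the
crash appears when quota sits at the top of the client graph, and the df
discrepancy appears when it sits at the bottom of the server graph. A minimal
illustrative parser (not part of GlusterFS; names and the sample volfile are
hypothetical) can make a volfile's volume graph explicit for inspection:

```python
# Illustrative helper (not part of GlusterFS): parse a volfile into a
# volume graph so the translator stacking order is easy to inspect.

SAMPLE = """
volume log-posix
  type storage/posix                     # POSIX FS translator
  option directory /data/glusterfs/log
end-volume

volume log-quota
  type features/quota
  option disk-usage-limit 100GB
  subvolumes log-posix
end-volume
"""

def parse_volfile(text):
    """Return {volume_name: {"type": ..., "subvolumes": [...]}}."""
    volumes = {}
    current = None
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # strip trailing comments
        if not line:
            continue
        parts = line.split()
        if parts[0] == "volume":
            current = parts[1]
            volumes[current] = {"type": None, "subvolumes": []}
        elif parts[0] == "end-volume":
            current = None
        elif current is not None:
            if parts[0] == "type":
                volumes[current]["type"] = parts[1]
            elif parts[0] == "subvolumes":
                volumes[current]["subvolumes"] = parts[1:]
    return volumes

if __name__ == "__main__":
    graph = parse_volfile(SAMPLE)
    # quota stacked directly above posix, as in the working server config
    print(graph["log-quota"]["subvolumes"])  # ['log-posix']
```

Running this against either config quoted above shows at a glance which
translator each volume wraps, which is useful when deciding where to place
features/quota in the chain.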



