[Gluster-users] On sharded tiered volume, only first shard of new file goes on hot tier.

Hari Gowtham hgowtham at redhat.com
Wed Feb 28 06:34:30 UTC 2018


Hi Jeff,

Tier and shard are not supported together.
There are likely to be more bugs in this area, since not much
effort was put into it, and I don't see this support being
added in the near future.
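
If the combination has to be avoided, one option is to detach
the tier and run the sharded volume on the cold bricks alone.
A sketch (detach drains the hot tier and must then be
committed):

  # gluster volume tier tiered-sharded-vol detach start
  # gluster volume tier tiered-sharded-vol detach status
  # gluster volume tier tiered-sharded-vol detach commit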


On Tue, Feb 27, 2018 at 11:45 PM, Jeff Byers <jbyers.sfly at gmail.com> wrote:
> Does anyone have any ideas about how to fix, or work around,
> the following issue?
> Thanks!
>
> Bug 1549714 - On sharded tiered volume, only first shard of new file
> goes on hot tier.
> https://bugzilla.redhat.com/show_bug.cgi?id=1549714
>
> On a sharded tiered volume, only the first shard of a new file
> goes on the hot tier; the rest are written to the cold tier.
>
> This is unfortunate for archival applications where the hot
> tier is fast but the cold tier is very slow. After the tier-
> promote-frequency interval (default 120 seconds), all of the
> shards do migrate to the hot tier, but for archival
> applications this migration is not helpful: the file is
> unlikely to be accessed again soon after it is written, and it
> will later just migrate back to the cold tier.
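>
> For reference, the migration intervals are tunable, though that
> only changes when the shuffle happens, not the initial
> placement of the shards. A sketch (the values are
> illustrative):
>
> # gluster volume set tiered-sharded-vol cluster.tier-promote-frequency 120
> # gluster volume set tiered-sharded-vol cluster.tier-demote-frequency 3600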
>
> Sharding should be, and needs to be, used with very large
> archive files because of bug:
>
>     Bug 1277112 - Data Tiering:File create and new writes to
>     existing file fails when the hot tier is full instead of
>     redirecting/flushing the data to cold tier
>
> which sharding with tiering helps mitigate.
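>
> Sharding itself was enabled in the usual way; a sketch matching
> the volume options below (64MB shard block size):
>
> # gluster volume set tiered-sharded-vol features.shard on
> # gluster volume set tiered-sharded-vol features.shard-block-size 64MB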
>
> This problem occurs in GlusterFS 3.7.18 and 3.12.3, at least.
> I/O size doesn't make any difference; I tried multiple sizes.
> I didn't find any volume configuration options that helped
> get all of the new shards to go directly to the hot tier.
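>
> To see what was available to try, the tier- and shard-related
> options can be listed with 'volume get' (a sketch):
>
> # gluster volume get tiered-sharded-vol all | grep -E 'tier|shard'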
>
> # dd if=/dev/root bs=64M of=/samba/tiered-sharded-vol/file-1 count=6
> 402653184 bytes (403 MB) copied, 1.31154 s, 307 MB/s
> # ls -lrtdh /samba/tiered-sharded-vol/* \
>     /exports/brick-*/tiered-sharded-vol/* \
>     /exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
> ---------T    0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
> -rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> # sleep 120
> # ls -lrtdh /samba/tiered-sharded-vol/* \
>     /exports/brick-*/tiered-sharded-vol/* \
>     /exports/brick-*/tiered-sharded-vol/.shard/* 2>/dev/null
> ---------T    0 Feb 27 08:58 /exports/brick-cold/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58 /exports/brick-hot/tiered-sharded-vol/file-1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
> -rw-r--r--  64M Feb 27 08:58
> /exports/brick-hot/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> -rw-r--r-- 384M Feb 27 08:58 /samba/tiered-sharded-vol/file-1
> ---------T    0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.5
> ---------T    0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.1
> ---------T    0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.2
> ---------T    0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.3
> ---------T    0 Feb 27 09:00
> /exports/brick-cold/tiered-sharded-vol/.shard/85c2cb2e-65ec-4d82-9500-ec6ba361cbbf.4
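>
> The zero-byte mode-'T' entries above are link files left on the
> cold tier by the migration; their xattrs can be dumped directly
> on the brick to confirm (a sketch, run as root on the brick
> host):
>
> # getfattr -d -m . -e hex \
>     /exports/brick-cold/tiered-sharded-vol/file-1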
>
> Volume Name: tiered-sharded-vol
> Type: Tier
> Volume ID: 8c09077a-371e-4d30-9faa-c9c76d7b1b57
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Hot Tier :
> Hot Tier Type : Distribute
> Number of Bricks: 1
> Brick1: 10.10.60.169:/exports/brick-hot/tiered-sharded-vol
> Cold Tier:
> Cold Tier Type : Distribute
> Number of Bricks: 1
> Brick2: 10.10.60.169:/exports/brick-cold/tiered-sharded-vol
> Options Reconfigured:
> cluster.tier-mode: cache
> features.ctr-enabled: on
> features.shard: on
> features.shard-block-size: 64MB
> server.allow-insecure: on
> performance.quick-read: off
> performance.stat-prefetch: off
> nfs.disable: on
> nfs.addr-namelookup: off
> performance.readdir-ahead: on
> snap-activate-on-create: enable
> cluster.enable-shared-storage: disable
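>
> For anyone reproducing this: the volume was a single-brick
> distribute volume with a single-brick hot tier attached; a
> sketch (syntax as in these releases; 'force' may be needed
> depending on the brick path):
>
> # gluster volume create tiered-sharded-vol 10.10.60.169:/exports/brick-cold/tiered-sharded-vol force
> # gluster volume start tiered-sharded-vol
> # gluster volume tier tiered-sharded-vol attach 10.10.60.169:/exports/brick-hot/tiered-sharded-vol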
>
>
> ~ Jeff Byers ~
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.

