[Gluster-devel] Open source SPC-1 Workload IO Pattern

Justin Clift justin at gluster.org
Wed Nov 19 03:20:29 UTC 2014


Nifty. :)

(Yeah, catching up on old unread email, as the wifi in this hotel is so
bad I can barely do anything else.  8-10 second ping times to
www.gluster.org. :/)

As a thought, would there be useful analysis/visualisation capabilities
if you stored the data in a time series database (e.g. InfluxDB) and
then used Grafana (http://grafana.org) on it?
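
For instance (totally untested, and with everything here made up: a
local InfluxDB on port 8086, a database called "spc1", and an
"io_pattern" measurement name), a little Go exporter along these
lines could stream the samples in, and Grafana would chart them from
there.  The exact write API varies by InfluxDB version (this uses
the line protocol over HTTP; earlier releases used JSON), so adjust
to taste:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // writePoint sends one IO sample to a local InfluxDB instance
    // using the line protocol:  measurement,tags fields timestamp
    // Database, measurement, and field names are all invented.
    func writePoint(asu int, offset int64, length int, isRead bool) error {
        line := fmt.Sprintf("io_pattern,asu=%d,read=%t offset=%di,length=%di %d",
            asu, isRead, offset, length, time.Now().UnixNano())
        resp, err := http.Post("http://localhost:8086/write?db=spc1",
            "text/plain", bytes.NewBufferString(line))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusNoContent {
            return fmt.Errorf("unexpected status from InfluxDB: %s", resp.Status)
        }
        return nil
    }

    func main() {
        // Single fake sample; a real exporter would batch many
        // lines per POST rather than one request per IO.
        if err := writePoint(1, 4096, 4096, true); err != nil {
            log.Fatal(err)
        }
    }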

+ Justin


On Fri, 07 Nov 2014 12:01:56 +0100
Luis Pabón <lpabon at redhat.com> wrote:

> Hi guys,
> I created a simple test program to visualize the I/O pattern of
> NetApp’s open source SPC-1 workload generator. SPC-1 is an enterprise
> OLTP-type workload created by the Storage Performance Council
> (http://www.storageperformance.org/results).  Some of the results are
> published and available here:
> http://www.storageperformance.org/results/benchmark_results_spc1_active
> 
> NetApp created an open source version of this workload and described
> it in their publication "A portable, open-source implementation of
> the SPC-1 workload"
> (http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf).
> 
> The code is available on GitHub: https://github.com/lpabon/spc1 .  All
> it does at the moment is capture the pattern; no real IO is
> generated. I will be working on a command line program to enable
> usage on real block storage systems.  I may either extend fio or
> create a tool specifically tailored to the requirements of this
> workload.
> 
> On GitHub, I have an example IO pattern for a simulation running 50
> million IOs using HRRW_V2. The simulation ran with an ASU1 (Data
> Store) size of 45GB, an ASU2 (User Store) size of 45GB, and an ASU3
> (Log) size of 10GB.
> 
> - Luis
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
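
P.S. On the "tool to run this on real block storage" idea: purely as
a sketch (all of the names below are invented for illustration, this
is not the actual spc1 API), the replay side could be as simple as
walking the captured records and turning each one into a positioned
read or write against the file or device backing its ASU:

    package main

    import (
        "log"
        "os"
    )

    // IORecord is one entry of a captured SPC-1 pattern. The field
    // layout is invented for this sketch.
    type IORecord struct {
        ASU    int   // 1 = Data Store, 2 = User Store, 3 = Log
        Offset int64 // byte offset within the ASU
        Length int   // transfer size in bytes
        Read   bool  // true for a read, false for a write
    }

    // replay issues each record as a real read or write against the
    // file (or block device) backing its ASU.
    func replay(records []IORecord, asuPaths map[int]string) error {
        files := make(map[int]*os.File)
        defer func() {
            for _, f := range files {
                f.Close()
            }
        }()
        buf := make([]byte, 1<<20) // scratch buffer; assumes Length <= 1 MiB
        for _, r := range records {
            f, ok := files[r.ASU]
            if !ok {
                var err error
                f, err = os.OpenFile(asuPaths[r.ASU], os.O_RDWR|os.O_CREATE, 0644)
                if err != nil {
                    return err
                }
                files[r.ASU] = f
            }
            var err error
            if r.Read {
                _, err = f.ReadAt(buf[:r.Length], r.Offset)
            } else {
                _, err = f.WriteAt(buf[:r.Length], r.Offset)
            }
            if err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        // Toy run: one 4KiB write to a scratch file standing in for ASU1.
        records := []IORecord{{ASU: 1, Offset: 0, Length: 4096, Read: false}}
        if err := replay(records, map[int]string{1: "/tmp/asu1.img"}); err != nil {
            log.Fatal(err)
        }
    }

Real runs would of course also need O_DIRECT, queue depth /
concurrency, and SPC-1's timing model, which is probably a big part
of why extending fio would be the less painful route.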



-- 
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

