
Parsec benchmark












  1. #PARSEC BENCHMARK SOFTWARE#
  2. #PARSEC BENCHMARK CODE#

This benchmark suite (SHOC, described below) is focused on systems that contain Graphics Processing Units (GPUs), multi-core processors, and new vector-based coprocessors like the Xeon Phi.

#PARSEC BENCHMARK SOFTWARE#

The Scalable HeterOgeneous Computing benchmark suite (SHOC) is a collection of benchmark programs that test the performance and stability of systems using computing devices with nontraditional architectures for general-purpose computing, as well as the software used to program them. The benchmark problems have outputs that can be easily tested for correctness and possibly quality, are simple enough that reasonably efficient solutions can be implemented in 500 lines of code (but are not trivial microbenchmarks), and are representative of a reasonably wide variety of real-world tasks. We encourage people to implement the benchmarks using any algorithm, with any programming language, with any form of parallelism (or sequentially), and for any machine.
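As a rough, hypothetical illustration of these criteria (the code below is not taken from SHOC, PBBS, or PARSEC), consider a problem defined only by its output: produce a sorted permutation of the input. Any algorithm or form of parallelism may be used, and a checker can test the output for correctness without knowing how it was produced.

  // Hypothetical problem-based benchmark sketch: "comparison sort".
  // The problem is defined by its output, not by the algorithm used.
  #include <algorithm>
  #include <cstdint>
  #include <iostream>
  #include <random>
  #include <thread>
  #include <vector>

  // One possible implementation: sort two halves in parallel, then merge.
  void problem_sort(std::vector<uint32_t>& a) {
      auto mid = a.begin() + a.size() / 2;
      std::thread left([&] { std::sort(a.begin(), mid); });   // first half
      std::sort(mid, a.end());                                 // second half
      left.join();
      std::inplace_merge(a.begin(), mid, a.end());
  }

  // The checker only inspects input and output, so it works for any implementation.
  bool is_sorted_permutation(std::vector<uint32_t> input, const std::vector<uint32_t>& output) {
      std::sort(input.begin(), input.end());
      return input == output;
  }

  int main() {
      std::mt19937 gen(42);
      std::vector<uint32_t> input(1 << 20);
      for (auto& x : input) x = gen();

      std::vector<uint32_t> output = input;
      problem_sort(output);
      std::cout << (is_sorted_permutation(input, output) ? "PASS" : "FAIL") << "\n";
  }

Built with any C++11 compiler (linking the platform thread library, e.g. -pthread), this sketch stays well under the 500-line budget mentioned above while leaving room for more sophisticated parallel implementations.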

#PARSEC BENCHMARK CODE#

The Problem Based Benchmark Suite (PBBS) is designed to be an open-source repository for comparing different parallel programming methodologies in terms of performance and code quality. Its benchmarks define problems in terms of the function they implement, not the particular algorithm or code they use.

The Princeton Application Repository for Shared-Memory Computers (PARSEC) is a benchmark suite composed of multithreaded programs. The suite focuses on emerging workloads and was designed to be representative of next-generation shared-memory programs for chip-multiprocessors (a minimal thread-scaling sketch appears after the workload descriptions below).

Nectere is a benchmark comprising image processing and financial workloads, which use CPUs/GPUs for processing as well as Ethernet/InfiniBand for communication; it emulates streaming codes for multimedia (high throughput) or finance (low latency), for use in GPU and non-GPU cluster settings. Internally, Nectere is composed of different components that can be combined to test diverse and varying application usage patterns. For example, we use the InfiniBand/Ethernet communication components and the image pipelining component to find the improvement in performance when using IB for such applications. The goal of Nectere is to understand how systems that have heterogeneity in processing as well as networking capabilities can perform on such future datacenter workloads.

GTStream ("A Scalable Big Data Streaming Benchmark for Evaluating Real-Time Cloud"; Project Hoover: Auto-Scaling Streaming Map-Reduce Applications) is a real-time multi-tier big data platform consisting of a Flume log-processing tier, an HBase key-value store, an HDFS file system tier, and an optional MapReduce tier. The GTStream platform is used to support a variety of big data streaming applications with different types of workloads; its default application analyzes distributed web server logs in order to understand the online behavior of web customers. It is completely distributed and scalable, with routine deployment on over 200 virtual machines (we have run up to 1000 VMs). GTStream is useful for setting up realistically large-scale, online, multi-tier big data applications and for experimenting with performance, reliability, and availability insights by varying GTStream configurations and workload characteristics.
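To make the shared-memory, multithreaded character of these workloads more concrete, the sketch below (a hypothetical toy, not an actual PARSEC or PBBS program) shows the usual evaluation pattern: run the same kernel at increasing thread counts and time the parallel region with a wall-clock timer.

  // Hypothetical thread-scaling harness for a shared-memory kernel.
  #include <chrono>
  #include <cstddef>
  #include <iostream>
  #include <numeric>
  #include <thread>
  #include <vector>

  // Toy kernel: each thread sums a contiguous slice of the input.
  double parallel_sum(const std::vector<double>& data, unsigned nthreads) {
      std::vector<double> partial(nthreads, 0.0);
      std::vector<std::thread> workers;
      const std::size_t chunk = data.size() / nthreads;
      for (unsigned t = 0; t < nthreads; ++t) {
          const std::size_t lo = t * chunk;
          const std::size_t hi = (t + 1 == nthreads) ? data.size() : lo + chunk;
          workers.emplace_back([&data, &partial, lo, hi, t] {
              partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
          });
      }
      for (auto& w : workers) w.join();
      return std::accumulate(partial.begin(), partial.end(), 0.0);
  }

  int main() {
      std::vector<double> data(1 << 24, 1.0);          // fixed input set
      for (unsigned nthreads : {1u, 2u, 4u, 8u}) {     // vary the thread count
          const auto start = std::chrono::steady_clock::now();
          const double sum = parallel_sum(data, nthreads);
          const std::chrono::duration<double> elapsed =
              std::chrono::steady_clock::now() - start;
          std::cout << nthreads << " thread(s): sum = " << sum
                    << ", time = " << elapsed.count() << " s\n";
      }
  }

A real suite swaps the toy kernel for a full application and reports results per input set and platform, but the measurement loop has essentially this shape.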













