TPC takes the measure of big data systems

Comparing commercial Hadoop-based big data analysis systems could get a little easier, thanks to a new benchmark from the Transaction Processing Performance Council (TPC).

The TPCx-HS benchmark, posted Monday, offers a performance assessment of Hadoop-based systems.

“There has been a lot of push from our customers for a standard to objectively measure performance and price-performance of big data systems,” said Raghunath Nambiar, chairman of the TPCx-HS committee and a distinguished engineer at Cisco.

The worldwide IT market for big data analysis should swell to more than US$240 billion by 2016, according to IDC. Companies such as IBM and Hewlett-Packard already offer prepackaged systems running Hadoop, currently the most popular big data platform being tested and deployed in the enterprise.

Vendors today may publish performance metrics for their Hadoop systems, but each company uses its own benchmark, making it difficult for customers to compare systems.

TPC hopes that Hadoop system vendors will run its benchmark against their own systems, allowing potential customers to directly compare the price performance across different offerings.

TPCx-HS “defines a level playing field. The number you get from vendor X can be fairly compared to the number from vendor Y,” Nambiar said.

A benchmark kit, which can be downloaded from the TPC site, tests overall performance of a Hadoop system. It includes the specification and user documentation, as well as scripts to run the benchmark code and a Java program to execute the benchmark load.

The benchmark itself measures how quickly an Apache Hadoop system sorts data using the widely used TeraSort algorithm. Vendors can tune their systems either by optimizing the software or by running it on the fastest available hardware.

Using the benchmark, a tester can choose among machine-generated data sets of various sizes, currently ranging from 1 terabyte to 10,000 terabytes.

The benchmark produces a score for overall performance, as well as a price-performance score specifying how much performance the system delivers per unit of cost. A third, optional test measures the system's energy efficiency.

The test must be run twice, according to TPC rules, and the slower of the two runs is the official benchmark result. Published TPC results can be challenged by other parties within 60 days.
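The scoring rules described above can be sketched in a few lines of Python. This is only an illustration, not the official TPCx-HS kit: the function names and the exact metric formula (terabytes sorted per hour, and system price divided by that throughput) are assumptions made for the sake of the example.

```python
# Illustrative sketch of TPCx-HS-style scoring; names and formulas
# here are assumptions, not the official kit or specification.

def throughput(scale_factor_tb: float, elapsed_seconds: float) -> float:
    """Terabytes sorted per hour for a single benchmark run."""
    return scale_factor_tb / (elapsed_seconds / 3600.0)

def official_result(run_times: list[float], scale_factor_tb: float,
                    system_price_usd: float) -> tuple[float, float]:
    """Apply the TPC rules: run twice, and the slower run is official."""
    assert len(run_times) == 2, "the benchmark must be run twice"
    official_time = max(run_times)          # slower of the two runs counts
    perf = throughput(scale_factor_tb, official_time)
    price_perf = system_price_usd / perf    # dollars per TB-sorted-per-hour
    return perf, price_perf

# Example: 1 TB data set, runs of 1,700 s and 1,800 s, a $500,000 system.
perf, price_perf = official_result([1700.0, 1800.0], 1.0, 500_000.0)
print(perf, price_perf)  # prints 2.0 250000.0
```

The slower run is taken as official so that a vendor cannot report a single lucky result; the price-performance figure then lets buyers compare a cheap, slower cluster against an expensive, faster one on equal terms.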

As with all of its benchmarks, the TPC requires that official results be verified by a third party. With the big data benchmark, this can be done by an independent auditor or, more informally, through a peer audit, which would probably be less costly.

Founded in 1988, the TPC is a nonprofit corporation that provides vendor-neutral benchmarks for testing the performance of transaction processing and database systems.

Although the organization started out producing benchmarks for transactional database systems, it has expanded in recent years to cover other kinds of computational systems as well. In 2012, it published a benchmark for virtualization software.

Companies such as Dell, Cisco, IBM, Hewlett-Packard, Oracle, Unisys, Intel and Microsoft are members of the TPC, as well as Hadoop software vendors Cloudera, Pivotal and Red Hat.
