EEMBC

EEMBC, the Embedded Microprocessor Benchmark Consortium, is a non-profit organization formed in 1997 with the aim of developing meaningful performance benchmarks for the hardware and software used in embedded systems. The goal of its members is to make EEMBC benchmarks an industry standard for evaluating the capabilities of embedded microprocessors, compilers, and the associated embedded system implementations, according to objective, clearly defined, application-based criteria.

EEMBC benchmarks, which are available to its members, aim to reflect real-world applications and the demands that embedded systems encounter in these environments. The consortium also licenses its benchmark suites to non-members, although with different usage terms.

The president of EEMBC is Markus Levy, who is also president of the Multicore Association. The director of software engineering is Shay Gal-On.

Score Certification Program

Only EEMBC members are entitled to publish their benchmark test results (except for CoreMark and GrinderBench), but they must submit these scores and their entire benchmark platform to the EEMBC Technology Center (ETC) for official (and free) certification before making the scores public. During the certification process, the ETC rebuilds the benchmark code and verifies accuracy and repeatability.

Benchmark Chronology

Up until 2004, the EEMBC benchmarks targeted embedded processors and were built exclusively from source code compatible with the C standard library. These benchmark suites included AutoBench 1.1 (for automotive, industrial, and general-purpose applications), ConsumerBench 1.1 (for digital imaging tasks), Networking 1.1, OABench 1.1 (targeting printer-related applications), and TeleBench 1.1 (for digital signal processors).

In 2004, the consortium released GrinderBench, a benchmark based on the Java language. GrinderBench represented EEMBC's foray into Java benchmarking, allowing users to approximate the performance of Java 2 Micro Edition (J2ME) applications in products such as mobile phones and Blu-ray Disc players.

In 2005, to address increasing processor performance and growing cache memory sizes, EEMBC released DENBench and Networking 2.0, supersets of ConsumerBench 1.1 and Networking 1.1, respectively. Both suites use significantly larger datasets that heavily tax the processor's memory subsystem.

Energy and Power Become Leading Indicators

While energy consumption has always been an important factor in system design, the embedded industry began paying much closer attention to it around 2006. About that time, EEMBC released EnergyBench to provide data on the amount of energy a processor consumes while running EEMBC's performance benchmarks. Each processor vendor typically has its own power-measurement methods, making accurate comparisons among competing vendors nearly impossible; the "typical" power specifications offered on product datasheets are difficult to compare with one another. The problem of interpreting these values is exacerbated when designers attempt to compare processor cores for system-on-chip implementations. EnergyBench defines a standardized measurement methodology, currently implemented using National Instruments' LabVIEW graphical development environment and data acquisition hardware.

The Infusion of Multicore Technology

Around 2006, processor vendors began to introduce multicore devices, and system developers expected performance to scale linearly with the number of cores. While there are many software programming issues to address in order to achieve optimal parallelism, there are also performance limitations inherent to the multicore devices themselves (such as memory bus speed and shared resources). In 2008, EEMBC released MultiBench to analyze architectures, memory bottlenecks, OS scheduling support, synchronization efficiency, and other related system functions. It measures the impact of parallelization and scalability across both data-processing and computationally intensive tasks.
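
As a rough illustration of the kind of measurement such multicore suites automate, the sketch below splits a memory-streaming workload across POSIX threads and reports throughput, which is where parallel speedup and shared-memory-bandwidth limits become visible. It is not MultiBench code; the workload, buffer size, and thread count are arbitrary assumptions chosen for illustration (compile with -pthread).

    /* Illustrative sketch only (not MultiBench code): split a simple
     * memory-streaming workload across POSIX threads and time it, so the
     * effect of core count and shared memory bandwidth on throughput can
     * be observed. Buffer size, thread count, and the workload itself are
     * arbitrary assumptions made for illustration. */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUF_WORDS (1u << 24)   /* 16M 32-bit words, about 64 MiB total */
    #define NTHREADS  4            /* arbitrary; vary to observe scaling */

    struct chunk { const uint32_t *data; size_t len; uint64_t sum; };

    static void *work(void *arg)
    {
        struct chunk *c = arg;
        uint64_t s = 0;
        for (size_t i = 0; i < c->len; i++)   /* stream through this thread's slice */
            s += c->data[i];
        c->sum = s;
        return NULL;
    }

    int main(void)
    {
        uint32_t *buf = malloc(BUF_WORDS * sizeof *buf);
        if (buf == NULL)
            return 1;
        for (size_t i = 0; i < BUF_WORDS; i++)
            buf[i] = (uint32_t)i;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        pthread_t tid[NTHREADS];
        struct chunk ch[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            ch[t].data = buf + (size_t)t * (BUF_WORDS / NTHREADS);
            ch[t].len  = BUF_WORDS / NTHREADS;
            pthread_create(&tid[t], NULL, work, &ch[t]);
        }

        uint64_t total = 0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += ch[t].sum;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("checksum %llu, %.1f MB/s using %d threads\n",
               (unsigned long long)total,
               (double)BUF_WORDS * sizeof *buf / sec / 1e6, NTHREADS);
        free(buf);
        return 0;
    }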

CoreMark

CoreMark is a non-free benchmark that targets the CPU core. It was developed by Shay Gal-On and released by EEMBC in 2009 as an industry standard intended to replace the Dhrystone benchmark. CoreMark's primary goals are simplicity and providing a method for testing only a processor's core features. Each iteration of CoreMark performs the following algorithms: list processing (find and sort), matrix manipulation (common matrix operations), state machine (determining whether an input stream contains valid numbers), and CRC. Running CoreMark produces a single-number score, allowing users to make quick comparisons between processors.
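
As a hedged illustration of the style of workload listed above, the sketch below computes a bitwise CRC-16 over a small buffer. It is not the actual CoreMark source; the polynomial (0xA001) and function names are assumptions made for illustration. In CoreMark itself, the CRC also serves as a self-check that the other kernels produced correct results.

    /* Illustrative sketch only (not the actual CoreMark source): a bitwise
     * CRC-16 over a short buffer, similar in spirit to the CRC kernel that
     * CoreMark runs each iteration. The polynomial (0xA001) and function
     * names here are assumptions made for illustration. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint16_t crc16_update(uint8_t byte, uint16_t crc)
    {
        crc ^= byte;
        for (int bit = 0; bit < 8; bit++) {
            /* Shift right; fold in the polynomial whenever the low bit is set. */
            crc = (crc & 1u) ? (crc >> 1) ^ 0xA001u : (crc >> 1);
        }
        return crc;
    }

    static uint16_t crc16_buffer(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++)
            crc = crc16_update(data[i], crc);
        return crc;
    }

    int main(void)
    {
        /* In CoreMark the CRC doubles as a self-check: results of the other
         * kernels are folded into a CRC and compared against known-good values. */
        const uint8_t msg[] = "123456789";
        printf("crc16 = 0x%04X\n", crc16_buffer(msg, sizeof msg - 1));
        return 0;
    }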

TCP/IP and Firewall Appliance Benchmarking

As processors evolved into systems-on-chip (SoCs), EEMBC began to evolve its benchmark suites to target more advanced embedded systems such as networking equipment and smartphones. ETCPBench measures TCP/IP performance and conformance and validates the Ethernet functionality of embedded systems. The standardized methodology of this benchmark suite uses tools and scripts from Ixia. For conformance, ETCPBench measures the ability of a system to adhere to the applicable Request for Comments (RFC) standards; for performance, it uses various networking scenarios to measure a system's bandwidth and latency. The suite is applicable to a wide range of network systems, from low-end home gateways to high-end network switches and routers, as well as a wide variety of semiconductors, ranging from Ethernet-enabled microcontrollers to high-end processors for data center equipment.

EEMBC is currently working on DPIBench, a separate benchmark suite that targets network firewall appliances performing deep packet inspection (DPI). DPIBench will evaluate throughput and latency to highlight the strengths and weaknesses of DPI systems, processors, and middleware. The testing approach considers the various threat vectors used in attempting to transfer an infected payload into a network. Without a common standard by which to compare performance across all these variables, consumers of DPI technologies lack an objective means of selecting among the many vendor offerings available today and are often at a severe disadvantage when choosing a solution to protect their information systems. In practice, measured DPI system throughput has been known to fall as much as 90% short of the figures claimed on datasheets and marketing collateral.

Measuring the Web Browsing Experience
