Transactional memory

In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.

Motivation

The motivation for transactional memory lies in the programming interface of parallel programs. The goal of a transactional memory system is to transparently support the definition of regions of code that are considered a transaction, that is, that have atomicity, consistency and isolation requirements. Transactional memory allows writing code like this example:

def transfer_money(from_account, to_account, amount):
    with transaction():
        from_account -= amount
        to_account += amount

In this code, the block defined by "transaction" has atomicity, consistency and isolation guarantees, and the underlying transactional memory implementation must provide those guarantees transparently.

Hardware vs. software

Hardware transactional memory systems may comprise modifications to processors, caches, and bus protocols to support transactions.[1][2][3][4][5] Load-link/store-conditional (LL/SC), offered by many RISC processors, can be viewed as the most basic transactional memory support; however, LL/SC usually operates on data the size of a native machine word, so only single-word transactions are supported.
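
As an illustration of the single-word limitation, the sketch below shows the usual retry loop built on a compare-and-swap primitive, which is the kind of guarantee LL/SC provides. Python exposes no hardware CAS, so the compare_and_swap helper here is hypothetical and merely emulates with a lock what the hardware performs atomically; the point is that the read-compute-publish-retry cycle is confined to a single word.

import threading

_cas_lock = threading.Lock()  # stands in for the hardware's atomicity guarantee

def compare_and_swap(cell, expected, new):
    # Hypothetical single-word CAS, emulated with a lock; LL/SC or a
    # CMPXCHG-style instruction performs this atomically in hardware.
    with _cas_lock:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

def atomic_increment(cell):
    # A "single-word transaction": read, compute, try to publish, retry on conflict.
    while True:
        old = cell[0]
        if compare_and_swap(cell, old, old + 1):
            return

counter = [0]
atomic_increment(counter)  # counter[0] is now 1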

Software transactional memory provides transactional memory semantics in a software runtime library or the programming language,[6] and requires minimal hardware support (typically an atomic compare-and-swap operation, or equivalent). As a downside, software implementations usually come with a performance penalty compared to hardware solutions.
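
To make the software approach concrete, the following is a minimal sketch of a word-based software transactional memory in the style of the money-transfer example above: each transactional variable carries a version number, a running transaction buffers its writes and records the versions it has read, and at commit time it validates its read set and publishes its write set. The names (TVar, Transaction, atomically) are illustrative rather than any particular library's API, the single commit lock stands in for the atomic compare-and-swap a real implementation would use, and real systems additionally guarantee running transactions a consistent snapshot, which this toy omits.

import threading

class TVar:
    # A transactional variable: a value plus a version number.
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # stands in for the atomic compare-and-swap at commit

class Transaction:
    def __init__(self):
        self.read_set = {}    # TVar -> version observed when first read
        self.write_set = {}   # TVar -> buffered new value

    def read(self, tvar):
        if tvar in self.write_set:            # read-your-own-writes
            return self.write_set[tvar]
        self.read_set.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.write_set[tvar] = value          # buffered until commit

    def commit(self):
        with _commit_lock:
            # Validate: if anything we read has changed since, abort.
            for tvar, seen_version in self.read_set.items():
                if tvar.version != seen_version:
                    return False
            # Publish: make the buffered writes visible and bump versions.
            for tvar, value in self.write_set.items():
                tvar.value = value
                tvar.version += 1
            return True

def atomically(body):
    # Run body(tx) as a transaction, retrying until it commits.
    while True:
        tx = Transaction()
        result = body(tx)
        if tx.commit():
            return result

# Usage: the transfer from the Motivation section, expressed against this sketch.
def transfer_money(from_account, to_account, amount):
    def body(tx):
        tx.write(from_account, tx.read(from_account) - amount)
        tx.write(to_account, tx.read(to_account) + amount)
    atomically(body)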

Owing to the more limited nature of hardware transactional memory (in current implementations), software using it may require fairly extensive tuning to benefit fully from it. For example, the dynamic memory allocator may have a significant influence on performance, and structure padding can likewise matter (owing to cache-line alignment and false-sharing issues); in the context of a virtual machine, various background threads may cause unexpected transaction aborts.[7]

History

One of the earliest implementations of transactional memory was the gated store buffer used in Transmeta's Crusoe and Efficeon processors. However, it was used only to facilitate speculative optimizations for binary translation, rather than for any form of speculative multithreading, and was not exposed directly to programmers. Azul Systems also implemented hardware transactional memory to accelerate its Java appliances, but this was similarly hidden from outsiders.[8]

Sun Microsystems implemented hardware transactional memory and a limited form of speculative multithreading in its high-end Rock processor. This implementation proved that it could be used for lock elision and more complex hybrid transactional memory systems, where transactions are handled with a combination of hardware and software. The Rock processor was canceled in 2009, just before the acquisition by Oracle; while the actual products were never released, a number of prototype systems were available to researchers.[8]

In 2009, AMD proposed the Advanced Synchronization Facility (ASF), a set of x86 extensions that provide a very limited form of hardware transactional memory support. The goal was to provide hardware primitives that could be used for higher-level synchronization, such as software transactional memory or lock-free algorithms. However, AMD has not announced whether ASF will be used in products, and if so, in what timeframe.[8]

More recently, IBM announced in 2011 that Blue Gene/Q had hardware support for both transactional memory and speculative multithreading. The transactional memory can be configured in two modes: the first is an unordered, single-version mode, where a write from one transaction causes a conflict with any transaction reading the same memory address. The second mode is for speculative multithreading, providing an ordered, multi-versioned transactional memory. Speculative threads can have different versions of the same memory address, and the hardware implementation keeps track of the age of each thread. Younger threads can access data from older threads (but not the other way around), and writes to the same address are resolved according to thread order. In some cases, dependencies between threads can cause the younger versions to abort.[8]

Intel's Transactional Synchronization Extensions (TSX) are available in some Skylake processors. TSX was implemented earlier in Haswell and Broadwell processors as well, but both implementations turned out to be defective and support for TSX was disabled. The TSX specification describes the transactional memory API for use by software developers, but withholds details of the technical implementation.[8]

References

  1. Herlihy, Maurice; Moss, J. Eliot B. (1993). "Transactional memory: Architectural support for lock-free data structures" (PDF). Proceedings of the 20th International Symposium on Computer Architecture (ISCA). pp. 289–300.
  2. "Multiple Reservations and the Oklahoma Update". doi:10.1109/88.260295.
  3. Hammond, L.; Wong, V.; Chen, M.; Carlstrom, B.D.; Davis, J.D.; Hertzberg, B.; Prabhu, M.K.; Honggo Wijaya; Kozyrakis, C.; Olukotun, K. (2004). "Transactional memory coherence and consistency". Proceedings of the 31st annual International Symposium on Computer Architecture (ISCA). pp. 102–113.
  4. "Unbounded transactional memory".
  5. "LogTM: Log-based transactional memory" (PDF). WISC.
  6. "The ATOMOΣ Transactional Programming Language" (PDF). Stanford.
  7. Odaira, R.; Castanos, J. G.; Nakaike, T. (2013). "Do C and Java programs scale differently on Hardware Transactional Memory?". 2013 IEEE International Symposium on Workload Characterization (IISWC). p. 34. doi:10.1109/IISWC.2013.6704668. ISBN 978-1-4799-0555-3.
  8. David Kanter (2012-08-21). "Analysis of Haswell's Transactional Memory". Real World Technologies. Retrieved 2013-11-19.
  9. "IBM plants transactional memory in CPU". EE Times.
  10. Brian Hall; Ryan Arnold; Peter Bergner; Wainer dos Santos Moschetta; Robert Enenkel; Pat Haugen; Michael R. Meissner; Alex Mericas; Philipp Oehler; Berni Schiefer; Brian F. Veale; Suresh Warrier; Daniel Zabawa; Adhemerval Zanella; IBM Redbooks (2014). Performance Optimization and Tuning Techniques for IBM Processors, including IBM POWER8 (PDF). IBM Redbooks. pp. 37–40. ISBN 978-0-7384-3972-3.
  11. Wei Li, IBM XL compiler hardware transactional memory built-in functions for IBM AIX on IBM POWER8 processor-based systems
  12. Java on a 1000 Cores – Tales of Hardware/Software CoDesign on YouTube
  13. "STMX Homepage".
  14. Wong, Michael. "Transactional Language Constructs for C++" (PDF). Retrieved 12 Jan 2011.
  15. "Brief Transactional Memory GCC tutorial".
  16. "C Dialect Options - Using the GNU Compiler Collection (GCC)".
  17. "TransactionalMemory - GCC Wiki".
  18. Rigo, Armin. "Using All These Cores: Transactional Memory in PyPy". europython.eu. Retrieved 7 April 2015.
