Core dump
A core dump is the recorded state of the working memory of a computer program at a specific time, generally when the program has terminated abnormally (crashed).[1] The name comes from the once-standard memory technology, core memory. Core dumps are often used to diagnose or debug errors in computer programs.
On many operating systems, a fatal error in a program automatically triggers a core dump, and by extension the phrase "to dump core" has come to mean, in many cases, any fatal error, regardless of whether a record of the program memory is created.
The term is used in jargon to indicate any circumstance where large amounts of unedited data are deposited for further examination.
Background
Before the advent of disk operating systems and the ability to record large data files, core dumps were paper printouts of the contents of memory, typically arranged in columns of octal or hexadecimal numbers (the latter sometimes called a "hex dump"), together with interpretations of various encodings such as machine language instructions, text strings, or decimal or floating-point numbers. In more recent operating systems, a "core dump" is a file containing the memory image of a particular process, or the memory images of parts of the address space of that process, along with other information such as the values of processor registers. With appropriate tools such as objdump, these files can be viewed in a readable text format similar to the older paper printouts.
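For illustration, the following C program is a minimal sketch, not taken from any particular tool, that prints a buffer in the classic hex-dump layout described above: an offset column, hexadecimal byte columns, and an ASCII interpretation.

    /* A minimal sketch (illustrative only): print a memory region in the
       classic hex-dump layout of offset, hex bytes, and ASCII text. */
    #include <stdio.h>
    #include <ctype.h>

    static void hex_dump(const unsigned char *buf, size_t len)
    {
        for (size_t i = 0; i < len; i += 16) {
            printf("%08zx  ", i);                  /* offset column */
            for (size_t j = 0; j < 16; j++) {
                if (i + j < len)
                    printf("%02x ", buf[i + j]);   /* hex byte column */
                else
                    printf("   ");                 /* pad the last row */
            }
            putchar(' ');
            for (size_t j = 0; j < 16 && i + j < len; j++)
                putchar(isprint(buf[i + j]) ? buf[i + j] : '.');  /* ASCII view */
            putchar('\n');
        }
    }

    int main(void)
    {
        const char msg[] = "Core dumps were once printed in octal or hex.";
        hex_dump((const unsigned char *)msg, sizeof msg);
        return 0;
    }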
Causes of core dumps
A core dump is often a useful tool for a programmer seeking to isolate and identify an error in a computer program. In high-level programming languages, the compiler usually generates correct underlying instructions, so errors more frequently arise from complex logical mistakes such as accesses to non-existent memory. In practice, these are often buffer overflows, where a programmer allocates too little memory for incoming or computed data, or accesses through null pointers, a common coding error in which an unassigned memory reference variable is dereferenced.
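As a concrete example, the following deliberate sketch dereferences a null pointer; on most Unix-like systems this raises SIGSEGV and, if core dumps are enabled (for example via the shell's "ulimit -c"), the kernel writes a core file that can then be examined.

    /* A deliberate sketch: writing through a null pointer. On most
       Unix-like systems this raises SIGSEGV and, if core dumps are
       enabled, leaves a core file behind for inspection. */
    int main(void)
    {
        int *p = 0;   /* an unassigned (null) memory reference */
        *p = 42;      /* invalid write: the program crashes here */
        return 0;
    }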
Uses of core dumps
Core dumps are a useful debugging aid in several situations. On early standalone or batch-processing systems, core dumps allowed a user to debug a program without monopolizing the (very expensive) computing facility; a printout was also more convenient than debugging with front-panel switches and lights. On shared computers, whether time-sharing, batch-processing, or server systems, core dumps allow off-line debugging of the operating system, so that the system can be back in operation immediately. Core dumps allow a user to save a crash for later or off-site analysis, or for comparison with other crashes. For embedded computers, it may be impractical to support debugging on the computer itself, so a dump can be taken for analysis on a different computer. Some operating systems (such as early versions of Unix) did not support attaching debuggers to running processes, so core dumps were necessary to run a debugger on a process's memory contents. Core dumps can be used to capture data freed during dynamic memory allocation, and may thus be used to retrieve information from a program that has exited or been closed. In the absence of an interactive debugger, an assiduous programmer can use the core dump to determine the error by direct examination.
A core dump represents the complete contents of the dumped regions of the address space of the dumped process. Depending on the operating system, the dump may contain few or no data structures to aid interpretation of the memory regions. In these systems, successful interpretation requires that the program or user trying to interpret the dump understand the structure of the program's memory use.
A debugger can use a symbol table, if one exists, to help the programmer interpret dumps, identifying variables symbolically and displaying source code; if the symbol table is not available, less interpretation of the dump is possible, but enough may still be possible to determine the cause of the problem. There are also special-purpose tools called dump analyzers; one popular tool, available on almost all operating systems, is the GNU Binutils objdump.
On modern Unix-like operating systems, core dump files can be read using the GNU Binutils Binary File Descriptor (BFD) library, and through the GNU Debugger (gdb) and objdump, which use this library. The library supplies the raw data for a given address in a memory region of a core dump; it knows nothing about variables or data structures in that region, so an application using the library to read a core dump must itself determine the addresses of variables and the layout of data structures, for example by using the symbol table of the program being debugged.
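The following C program is a hedged sketch of using the BFD library directly; exact details of the API vary between binutils releases, so it is an outline of the approach rather than definitive usage. It opens a core file and reports two facts BFD records about a dump: the command that produced it and the fatal signal.

    /* A hedged sketch of reading a core file with GNU Binutils' BFD
       library. Link with -lbfd; API details vary between releases. */
    #include <bfd.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        bfd_init();
        bfd *abfd = bfd_openr(argv[1], NULL);   /* open, guessing the target */
        if (abfd == NULL || !bfd_check_format(abfd, bfd_core)) {
            fprintf(stderr, "%s: not a core file\n", argv[1]);
            return 1;
        }
        /* BFD records the command that dumped core and the fatal signal. */
        printf("command: %s\n", bfd_core_file_failing_command(abfd));
        printf("signal:  %d\n", bfd_core_file_failing_signal(abfd));
        bfd_close(abfd);
        return 0;
    }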
Core dumps can also be used to save a process at a given state and return to it later. Highly available systems can be built by transferring core images between processors, sometimes via core dump files themselves.
Format of core dump files
In older and simpler operating systems, each process had a contiguous address space, so a core dump file was simply a binary file containing that sequence of bytes or words. In modern operating systems, a process's address space may contain gaps, and it may share pages with other processes or files, so more elaborate representations are used; they may also include other information about the state of the program at the time of the dump.
In Unix-like systems, core dumps generally use the standard executable image format: a.out in older versions of Unix, ELF in modern Linux, System V, Solaris, and BSD systems, and Mach-O in Mac OS X.
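As an illustration of the ELF case, the following C sketch (assuming glibc's <elf.h> and a 64-bit ELF file) reads a file's ELF header and checks whether its type field marks it as a core dump (ET_CORE):

    /* A sketch that checks whether a file is an ELF core dump by reading
       the ELF header; assumes glibc's <elf.h> and a 64-bit (Elf64) file. */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;
        FILE *f = fopen(argv[1], "rb");
        Elf64_Ehdr eh;
        if (f == NULL || fread(&eh, sizeof eh, 1, f) != 1)
            return 1;
        /* ELF files begin with the magic bytes 0x7f 'E' 'L' 'F'; core
           dumps carry the type ET_CORE in the header's e_type field. */
        if (memcmp(eh.e_ident, ELFMAG, SELFMAG) == 0 && eh.e_type == ET_CORE)
            printf("%s looks like an ELF core dump\n", argv[1]);
        else
            printf("%s is not an ELF core dump\n", argv[1]);
        fclose(f);
        return 0;
    }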
Uses in culture
The term is sometimes used on Usenet for a posting that describes what has been happening in the poster's life, especially if it involves emotional stress; the implication is that the material has not been edited or analyzed. See also brain dump.
Core dumping is also used to describe a method of test taking in which the test taker writes memorized equations, dates or other information on the back of a test when it is first given out, in order to make sure not to forget it during the test. This behavior often accompanies cramming.
References
See also
External links
- Article "Why does this not work!? How to find and fix faults in Linux applications" by Guido Socher
- Article "GDB, GNU Debugger Intro" by Frank Schoep
- Wikibook "Guide To Unix" for a reference to cshell's "limit coredumpsize $BLOCKS|unlimited" and bash's "ulimit -c $BLOCKS|unlimited",
- Wikibook "Reverse_Engineering/Other_Tools#GNU_Tools" for some more references to gnu tools.
- CoreDumper, a BSD-licensed library for making core dumps
- "Core Dumped Blues", a 1980 song by Greg Boyd lamenting segmentation violations and the resulting core dumps [1]
Descriptions for the file format: