Local Descriptor Table
The Local Descriptor Table (LDT) is a memory table used in the x86 architecture in protected mode. It contains memory segment descriptors, each specifying a segment's start address in linear memory, its size, its executability and writability, its access privilege, its actual presence in memory, and so on.
The LDT is the sibling of the Global Descriptor Table (GDT) and similarly defines up to 8191 memory segments accessible to programs.
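Both tables hold the same kind of entry. A minimal sketch, in C, of the 8-byte descriptor format (the field names here are illustrative; the exact bit layout is defined by the architecture manuals):

    #include <stdint.h>

    /* One 8-byte segment descriptor, as stored in the GDT or an LDT.  The
     * segment's base and limit are split across several fields; the access
     * byte holds the Present bit, the privilege level (DPL) and the type
     * (code or data, writable, executable, ...); the high nibble of the
     * next byte holds granularity and default operand size flags. */
    struct seg_descriptor {
        uint16_t limit_low;         /* limit bits 0..15                   */
        uint16_t base_low;          /* base bits 0..15                    */
        uint8_t  base_mid;          /* base bits 16..23                   */
        uint8_t  access;            /* P, DPL, S, type                    */
        uint8_t  limit_high_flags;  /* limit bits 16..19 + G, D/B, L, AVL */
        uint8_t  base_high;         /* base bits 24..31                   */
    } __attribute__((packed));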
Past glory
On x86 processors lacking paging features, such as the Intel 80286, the LDT is essential to implementing separate address spaces for multiple processes. There is generally one LDT per user process, describing privately held memory, while shared memory and kernel memory are described by the GDT. The operating system switches the current LDT when scheduling a new process, using the LLDT machine instruction. The GDT, in contrast, is generally not switched (although this may happen if a virtual machine monitor such as VMware is running on the computer).
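A minimal sketch, in C with GCC-style inline assembly, of how a scheduler might perform such a switch; the process structure and function names are illustrative only, and LLDT is a privileged instruction, so this can only run in ring 0:

    #include <stdint.h>

    /* Hypothetical per-process state: the GDT selector that refers to this
     * process's LDT descriptor (0 would mean "no LDT"). */
    struct process {
        uint16_t ldt_selector;
    };

    /* LLDT loads the LDT register from a selector that indexes the GDT. */
    static inline void load_ldt(uint16_t selector)
    {
        __asm__ volatile("lldt %0" : : "r"(selector));
    }

    /* Called when the scheduler switches to another process: the GDT stays
     * in place, only the LDT register is redirected. */
    void switch_to(const struct process *next)
    {
        load_ldt(next->ldt_selector);
    }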
The lack of symmetry between the two tables is underlined by the fact that the current LDT can be switched automatically on certain events, notably when TSS-based multitasking is used, while this is not possible for the GDT. The LDT also cannot store certain privileged types of memory segments (e.g. TSSes). Finally, the LDT is itself defined by a descriptor inside the GDT, while the GDT is defined directly by a linear address.
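The last point can be sketched as follows (illustrative C, not any particular kernel's code): LGDT reads a 6-byte pseudo-descriptor holding a limit and a linear base address, whereas the LDT is installed with LLDT from a GDT selector, as shown above.

    #include <stdint.h>

    /* The GDT is defined directly by a linear address: LGDT reads a 6-byte
     * pseudo-descriptor holding the table's limit and linear base.  The LDT,
     * by contrast, gets its base and limit from a descriptor stored in the
     * GDT, so installing it only needs that descriptor's selector (LLDT). */
    struct gdt_pseudo_descriptor {
        uint16_t limit;   /* size of the GDT in bytes, minus one */
        uint32_t base;    /* linear address of the GDT           */
    } __attribute__((packed));

    static inline void load_gdt(const struct gdt_pseudo_descriptor *p)
    {
        __asm__ volatile("lgdt %0" : : "m"(*p));
    }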
Creating shared memory through the GDT has some drawbacks. Notably, such memory is visible to every process, and with equal rights. In order to restrict visibility and to differentiate the protection of shared memory, for example to allow only read-only access for some processes, one can use separate LDT entries, pointed at the same physical memory areas and created only in the LDTs of the processes which have requested access to a given shared memory area.
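As an illustration (a hedged sketch reusing the descriptor layout shown earlier; the slot number and helper name are invented), the same memory area can be described as writable data in one process's LDT and as read-only data in another's:

    #include <stdint.h>

    struct seg_descriptor {                 /* same 8-byte layout as above */
        uint16_t limit_low, base_low;
        uint8_t  base_mid, access, limit_high_flags, base_high;
    } __attribute__((packed));

    /* Access bytes: 0xF2 = present, DPL 3, writable data segment;
     *               0xF0 = present, DPL 3, read-only data segment. */
    #define DATA_RW 0xF2
    #define DATA_RO 0xF0

    static void set_ldt_entry(struct seg_descriptor *ldt, int index,
                              uint32_t base, uint32_t limit, uint8_t access)
    {
        struct seg_descriptor *d = &ldt[index];
        d->limit_low        = limit & 0xFFFF;
        d->limit_high_flags = (limit >> 16) & 0x0F;   /* byte granularity */
        d->base_low         = base & 0xFFFF;
        d->base_mid         = (base >> 16) & 0xFF;
        d->base_high        = (base >> 24) & 0xFF;
        d->access           = access;
    }

    /* Alias the same shared area into two processes' LDTs with different
     * rights: read/write for process A, read-only for process B. */
    void share_area(struct seg_descriptor *ldt_a, struct seg_descriptor *ldt_b,
                    uint32_t base, uint32_t limit)
    {
        set_ldt_entry(ldt_a, 4, base, limit, DATA_RW);
        set_ldt_entry(ldt_b, 4, base, limit, DATA_RO);
    }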
LDT (and GDT) entries which point to identical memory areas are called aliases. Aliases are also typically created in order to get write access to code segments: an executable selector cannot be used for writing. (Protected mode programs constructed in the so-called tiny memory model, where everything is located in the same memory segment, must use separate selectors for code and for data/stack, and both selectors are technically "aliases" too.) In the case of the GDT, aliases are also created in order to get access to system segments such as the TSSes.
Segments have a "Present" flag in their descriptors, so they can be removed from memory if the need arises. For example, code segments or unmodified data segments can be thrown away, and modified data segments can be swapped out to disk. This makes it possible to implement virtual memory. However, because entire segments have to be removed at once, it is better to limit their size. On the other hand, using smaller, more easily swappable segments means that segment registers have to be reloaded more frequently, which is a relatively time-consuming operation.
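In terms of the descriptor format sketched earlier, the swapping decision amounts to toggling the Present bit in the access byte (byte 5 of the descriptor); this is a hedged illustration of the mechanism, not any specific operating system's code:

    #include <stdint.h>

    #define SEG_PRESENT 0x80u   /* P bit in the descriptor's access byte */

    /* After writing the segment's contents to disk, clear P; the next time a
     * program loads a selector for this segment, the CPU raises a
     * segment-not-present fault (#NP) and the OS can swap the data back in. */
    static void mark_swapped_out(uint8_t descriptor[8])
    {
        descriptor[5] &= (uint8_t)~SEG_PRESENT;
    }

    static void mark_swapped_in(uint8_t descriptor[8])
    {
        descriptor[5] |= SEG_PRESENT;
    }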
Fading out
Starting with the Intel 80386 microprocessor, separation between processes can also be achieved by giving them separate physical memory pages at the same virtual addresses. It is also more efficient to implement virtual memory using page-based swapping rather than full segment swapping (such swapping is performed below the segmentation layer, so segments can remain "Present" all the time). Therefore, modern 32-bit x86 operating systems make very little use of the LDT unless they have to run 16-bit code.
Conversely, running 16-bit code in a 32-bit environment and sharing memory with it (this happens, for example, when running OS/2 1.x programs on OS/2 2.0 and later) generally means that the LDT has to be filled completely, typically with 64 KiB segments, in such a way that every flat address also has a selector in the LDT. This technique is sometimes called LDT tiling. The limited size of the LDT means the virtual flat address space has to be limited to 512 megabytes (8191 times 64 KiB); this is what happens on OS/2, although this limitation was lifted in version 4.5. It is also necessary to make sure that objects allocated in the 32-bit environment do not cross 64 KiB boundaries, which wastes some address space.
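The arithmetic behind the tiling can be sketched as follows (a hedged example, assuming the common convention where 64 KiB tile N is described by LDT entry N, so a tiled selector carries TI = 1 and RPL = 3 in its low three bits):

    #include <assert.h>
    #include <stdint.h>

    /* Convert a flat 32-bit address to a tiled 16:16 selector:offset pair
     * and back.  Tile number = flat >> 16; the selector is tile * 8 with
     * TI = 1 (LDT) and RPL = 3 in the low bits, hence the "| 7". */
    static uint32_t flat_to_tiled(uint32_t flat)
    {
        uint16_t selector = (uint16_t)(((flat >> 16) << 3) | 7);
        uint16_t offset   = (uint16_t)(flat & 0xFFFF);
        return ((uint32_t)selector << 16) | offset;
    }

    static uint32_t tiled_to_flat(uint32_t far_ptr)
    {
        uint16_t selector = (uint16_t)(far_ptr >> 16);
        uint16_t offset   = (uint16_t)(far_ptr & 0xFFFF);
        return ((uint32_t)(selector >> 3) << 16) | offset;
    }

    int main(void)
    {
        /* 8191 tiles of 64 KiB cap the tiled space just under 512 MB,
         * matching the OS/2 limit described above. */
        uint32_t flat = 0x00123456u;
        assert(tiled_to_flat(flat_to_tiled(flat)) == flat);
        return 0;
    }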
If 32-bit code does not have to pass arbitrary memory objects to 16-bit code (as is presumably the case in the OS/2 1.x emulation present in Windows NT, or in the Windows 3.1 emulation layer), it is not necessary to artificially limit the size of the 32-bit address space.