Interconnect bottleneck
The interconnect bottleneck, the point at which on-chip wiring delays rather than transistor switching limit microchip performance, was expected to be reached around 2010.[1]
Improved performance of computer systems has been achieved, in large part, by downscaling the integrated circuit (IC) minimum feature size. This allows the basic IC building block, the transistor, to operate at a higher frequency and perform more computations per second. However, downscaling of the minimum feature size also results in tighter packing of the wires on a chip, which increases parasitic capacitance and signal propagation delay. Consequently, the delay due to communication between parts of a chip becomes comparable to the computation delay itself. This phenomenon, known as an “interconnect bottleneck”, is becoming a major problem in high-performance computer systems.[2]
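As a rough back-of-the-envelope illustration (the numerical values below are typical order-of-magnitude assumptions, not figures taken from the cited sources), the delay of an unrepeated on-chip wire can be estimated with the distributed RC (Elmore) model:

\[
t_{\text{wire}} \approx 0.38\, r\, c\, L^{2}, \qquad r = \frac{\rho}{W H},
\]

where $r$ and $c$ are the resistance and capacitance per unit length, $L$ is the wire length, $\rho$ is the metal resistivity, and $W$ and $H$ are the wire width and height. Taking, for example, copper ($\rho \approx 1.7\times10^{-8}\,\Omega\,\text{m}$) with $W = H = 100\,\text{nm}$ gives $r \approx 1.7\times10^{6}\,\Omega/\text{m}$; with an assumed $c \approx 2\times10^{-10}\,\text{F/m}$ and $L = 1\,\text{mm}$,

\[
t_{\text{wire}} \approx 0.38 \times (1.7\times10^{6}) \times (2\times10^{-10}) \times (10^{-3})^{2} \approx 1.3\times10^{-10}\,\text{s} \approx 130\,\text{ps},
\]

which is already comparable to, or longer than, the switching delay of the gates driving the wire. Because $r$ grows as the wire cross-section shrinks while $c$ per unit length stays roughly constant, the delay of a fixed-length wire increases with each scaling generation even as transistor delay decreases.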
One proposed solution to the interconnect bottleneck is to replace the long metallic interconnects with optical interconnects. Such hybrid optical/electronic interconnects promise better performance even for larger designs. Although optics is widely used in long-distance communications, it has not yet been widely adopted for chip-to-chip or on-chip interconnections (at the centimeter and micrometer scales), because the required technologies are costly and not yet mature enough for volume manufacturing. As optical interconnections move from computer-network applications to chip-level interconnections, new requirements for high connection density and alignment reliability become critical to the effective use of these links. Many materials, fabrication, and packaging challenges remain in integrating optical and electronic technologies.
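To illustrate why optics is attractive for long links (again using assumed, order-of-magnitude values rather than figures from the article), an optical signal in a waveguide propagates at roughly the speed of light divided by the group index, so its delay grows only linearly with length:

\[
t_{\text{opt}} = \frac{n_g L}{c_0} \approx \frac{4 \times 10^{-2}\,\text{m}}{3\times10^{8}\,\text{m/s}} \approx 130\,\text{ps} \quad \text{for } L = 1\,\text{cm},\; n_g \approx 4,
\]

whereas the unrepeated electrical wire of the previous example, whose delay grows quadratically with length, would take roughly $0.38\, r\, c\, L^{2} \approx 13\,\text{ns}$ over the same 1 cm. In practice electrical links insert repeaters to make delay closer to linear in length, and optical links must add electro-optic and opto-electronic conversion time, so the comparison is less one-sided than these raw numbers suggest; nevertheless, the quadratic-versus-linear scaling is the basic reason optical interconnects become attractive as link length grows.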