Thursday, September 24, 2015

IC

An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits on one small plate ("chip") of semiconductor material, normally silicon. This can be made much smaller than a discrete circuit made from independent electronic components. ICs can be made very compact, having up to several billion transistors and other electronic components in an area the size of a fingernail. The width of each conducting line in a circuit can be made smaller and smaller as the technology advances; in 2008 it dropped below 100 nanometers,[1] and has now been reduced to tens of nanometers.

ICs were made possible by experimental discoveries showing that semiconductor devices could perform the functions of vacuum tubes and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability and building-block approach to circuit design ensured the rapid adoption of standardized integrated circuits in place of designs using discrete transistors.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume little power (compared to their discrete counterparts) as a result of the small size and close proximity of the components. As of 2012, typical chip areas range from a few square millimeters to around 450 mm2, with up to 9 million transistors per mm2.
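As a rough sanity check on those figures (a minimal sketch in Python; the constants below simply restate the 2012 numbers quoted above), the largest chips would carry on the order of four billion transistors:

    # Back-of-the-envelope bound implied by the 2012 figures above.
    MAX_CHIP_AREA_MM2 = 450          # largest typical chip area, in mm^2
    MAX_DENSITY_PER_MM2 = 9_000_000  # up to 9 million transistors per mm^2

    max_transistors = MAX_CHIP_AREA_MM2 * MAX_DENSITY_PER_MM2
    print(f"{max_transistors:,}")    # 4,050,000,000 -- about four billion

This agrees with the "up to several billion transistors" figure cited earlier.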

Integrated circuits are used in virtually all electronic equipment today and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the low cost of integrated circuits.

Contents:
1 Terminology
2 Invention
3 Generations
3.1 SSI, MSI and LSI
3.2 VLSI
3.3 ULSI, WSI, SOC and 3D-IC
4 Advances in integrated circuits
5 Computer assisted design
6 Classification
7 Manufacturing
7.1 Fabrication
7.2 Packaging
7.3 Chip labeling and manufacture date
8 Intellectual property
9 Other developments
10 Silicon labelling and graffiti
11 ICs and IC families
12 See also
13 References
14 Further reading
15 External links

Terminology:

An integrated circuit is defined as:

A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce.

Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technology, and hybrid integrated circuits. In general usage, however, "integrated circuit" has come to refer to the single-piece construction originally known as a monolithic integrated circuit.

Invention:
Main article: Invention of the integrated circuit
Early development of the integrated circuit goes back to 1949, when German engineer Werner Jacobi (Siemens AG)[6] filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a 3-stage amplifier arrangement. Jacobi cited small and cheap hearing aids as typical industrial applications of his patent. No immediate commercial use of his patent has been reported.

The idea of the integrated circuit was conceived by Geoffrey W.A. Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952.[8] He promoted the idea at many subsequent public symposia, and in 1956 he unsuccessfully attempted to build such a circuit.

A precursor idea to the IC was to create small ceramic squares (wafers), each containing a single miniaturized component. Components could then be integrated and wired into a compact two- or three-dimensional grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[9] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.


Jack Kilby's original integrated circuit
Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958.[10] In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated." The first customer for the new invention was the US Air Force.

Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[14] His work was named an IEEE Milestone in 2009.

Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit, one that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, achieved by the action of a biased p–n junction (the diode), as a key concept behind the IC.

Fairchild Semiconductor was also home to the first silicon-gate IC technology with self-aligned gates, the basis of all modern CMOS computer chips. The technology was developed in 1968 by Italian physicist Federico Faggin, who later joined Intel to develop the first single-chip central processing unit (CPU), the Intel 4004, for which he received the National Medal of Technology and Innovation in 2010.

Generations:
In the early days of simple integrated circuits, the technology's large scale limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As the technology progressed, millions, then billions[17] of transistors could be placed on one chip, and good designs required thorough planning, giving rise to new design methods.

Name   Meaning                         Year   Transistors            Logic gates
SSI    small-scale integration         1964   1 to 10                1 to 12
MSI    medium-scale integration        1968   10 to 500              13 to 99
LSI    large-scale integration         1971   500 to 20,000          100 to 9,999
VLSI   very large-scale integration    1980   20,000 to 1,000,000    10,000 to 99,999
ULSI   ultra-large-scale integration   1984   1,000,000 and more     100,000 and more
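
These boundaries are conventional rather than exact, but they translate directly into a simple lookup. The Python sketch below (our own illustration; the thresholds come straight from the table above) classifies a chip by its transistor count:

    # Classify an IC by transistor count, using the approximate
    # thresholds from the table above. Boundaries are conventional.
    THRESHOLDS = [
        (10, "SSI"),
        (500, "MSI"),
        (20_000, "LSI"),
        (1_000_000, "VLSI"),
    ]

    def integration_scale(transistors):
        for upper_bound, name in THRESHOLDS:
            if transistors <= upper_bound:
                return name
        return "ULSI"  # 1,000,000 transistors and more

    print(integration_scale(120))     # MSI  (Wanlass's 1964 shift register)
    print(integration_scale(10_000))  # LSI  (mid-1970s chips)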
SSI, MSI and LSI:
The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large-scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The earliest integrated circuits were SSI.

SSI circuits were crucial to early aerospace projects, and aerospace projects in turn helped drive the technology's development. Both the Minuteman missile and the Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo Guidance Computer led and motivated integrated-circuit technology,[20] while the Minuteman missile forced it into mass production. The Minuteman missile program and various other Navy programs accounted for the entire $4 million integrated-circuit market in 1962, and by 1968 U.S. Government space and defense spending still accounted for 37% of the $312 million total production. This government demand supported the nascent integrated-circuit market until costs fell enough to let firms penetrate the industrial and eventually the consumer markets. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968.[21] Integrated circuits began to appear in consumer products by the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.

The first MOS chips were small-scale integrated chips for NASA satellites.

The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI).

In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register of his own design, with a then-remarkable 120 transistors.
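
For readers unfamiliar with the device: a shift register stores a row of bits and, on each clock pulse, moves every bit one position along while accepting a new input bit. The Python sketch below is a purely behavioural illustration, not Wanlass's circuit:

    # Behavioural model of a 16-bit serial-in, serial-out shift register.
    # Illustrative only; the real device is built from flip-flops.
    class ShiftRegister16:
        def __init__(self):
            self.bits = [0] * 16  # index 0 = input end, index 15 = output end

        def clock(self, bit_in):
            """Shift every bit one place; return the bit pushed out the end."""
            bit_out = self.bits[-1]
            self.bits = [bit_in & 1] + self.bits[:-1]
            return bit_out

    sr = ShiftRegister16()
    for b in [1, 0, 1, 1]:
        sr.clock(b)
    print(sr.bits[:4])  # [1, 1, 0, 1] -- most recent input first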

MSI devices were economically attractive because, while they cost little more to produce than SSI devices, they allowed more complex systems to be built with smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.

Further development, driven by the same economic factors, led to "large-scale integration" (LSI) in the mid-1970s, with tens of thousands of transistors per chip.

SSI and MSI devices were often manufactured using masks created by hand-cutting sheets of Rubylith film; an engineer would inspect and verify the completeness of each mask. LSI devices contain so many transistors, interconnecting wires, and other features that it is considered impossible for a human to check the masks, or even to do the original design, entirely by hand; the engineer depends on computer programs and other hardware aids to do most of this work.

Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had fewer than 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
