Explainers
August 24, 2023

25 Things You Need to Know to Hold a Conversation with Chip-Industry Folks

Let's say that you are at a networking lunch - and you meet some new people. You mention that you used to be an engineer but are now a writer. They mention that they work for a semiconductor company. “Neither conducts nor insulates,” you say, pretty happy that you still remember some high school physics.

They work for a chip design company, they clarify. You rack your brain for anything you might know about chips and blurt out: “Ah, Moore’s law.”

They nod, “Yeah, Moore’s law.” The conversation shifts to how the Apple M1 Pro resurrected Moore’s law.

Meanwhile, you excuse yourself to go get some “more” (buttermilk).

Later that day, you wonder: what is Moore’s law? And who was Moore? And why is the internet saying it may not hold for much longer? After an hour of research, you get to bigger questions: what is a chip? And why can’t people have just one? Oh wait, that’s a different chip.

Hold on, buddy - we’ve got your back.

We’ve made a list of 25 things you need to know - in short paragraphs, with important terms marked in bold. Let’s dive in.

Moore’s Law

Gordon Moore co-founded Intel in 1968.

A few years before that, in 1965, he was working as the director of research and development (R&D) at Fairchild Semiconductor. In an article for Electronics magazine (April 1965), Moore predicted that the number of components on a single chip would double every year - a forecast he later revised to a doubling every two years.

👉🏽 Moore’s law: Transistors per silicon chip will double every two years.

Now, one might imagine that chips are like really slow amoebas, splitting in two once every two years. But what Moore actually meant was that engineers were getting steadily better at printing ever-smaller transistors (and other components), so a silicon wafer of the same size could fit more and more of them.

By the way, Moore was one of the original traitorous eight.

Traitorous Eight

Back in the 1940s, a bunch of scientists at Bell Labs invented the transistor. And in 1956, three of them got the Nobel Prize in Physics for the invention. One of these - William Shockley - set up a company to manufacture silicon transistors, and hired a bunch of really smart people to work for him.

William Shockley celebrates the news of his 1956 Nobel Prize in Physics with his employees, including Jay Last, Gordon Moore, Robert Noyce and Sheldon Roberts | Image source

Now Shockley was a genius but a terrible boss. He was described as “autocratic, domineering, erratic, hard-to-please, and paranoid.”

The very next year, in 1957, eight of Shockley’s employees left and formed Fairchild Semiconductor. This “act of treason” involved some truly dramatic moments, including the symbolic signing of a $1 bill during a clandestine meeting in a hotel.

A symbolic contract signed by the Fairchild founders and bankers on September 19, 1957 | Image source

Fairchild Semiconductor had open communication, flat organisational structures, and autonomous research groups. Employees got generous stock options. Quite the opposite of Shockley’s setup.

There's a lesson here for terrible bosses - but that's for another article.

Fairchild alumni, called Fairchildren, went on to form the giants that made up Silicon Valley - Intel (Noyce and Moore were among the traitorous eight), AMD, Xilinx, Altera, LSI Logic, and National Semiconductor. Noyce also went on to mentor a young Steve Jobs.

So all in all, the traitorous eight is famous for the right reasons.

Transistor

Very, very simply put, a transistor is a device with three terminals. A small signal applied across one pair of terminals can control a much larger signal across another pair. One way to use this is to treat the small signal as a 0/1 control that switches the larger signal off or on. In other words, a transistor is an electrically controlled switch.

The first point-contact transistor was invented in 1947 by John Bardeen, Walter Brattain and William Shockley at Bell Labs. They won the Nobel for the invention in 1956.

Today, most transistors we talk about are MOSFETs - Metal Oxide Semiconductor Field Effect Transistors - which are a massive improvement over the original transistor (the bipolar junction transistor, or BJT). You will often hear the term CMOS, which pairs the two types of MOSFETs (NMOS and PMOS) so that logic gates draw almost no current when they are not switching.
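
If it helps to see the “switch” idea in code, here is a minimal sketch in Python (the function names are made up for illustration) that models MOSFETs as voltage-controlled switches and wires an NMOS/PMOS pair into a CMOS inverter, the simplest CMOS logic gate:

```python
# A toy model of MOSFETs as voltage-controlled switches. Illustrative only --
# real transistors are analogue devices with much richer behaviour.

def nmos_conducts(gate: int) -> bool:
    """An NMOS switch closes (conducts) when its gate is driven high (1)."""
    return gate == 1

def pmos_conducts(gate: int) -> bool:
    """A PMOS switch closes (conducts) when its gate is driven low (0)."""
    return gate == 0

def cmos_inverter(a: int) -> int:
    """PMOS pulls the output up to 1, NMOS pulls it down to 0.
    Exactly one of the pair conducts for any input, so the output is NOT a."""
    if pmos_conducts(a):   # input 0 -> PMOS on -> output tied to the supply (1)
        return 1
    else:                  # input 1 -> NMOS on -> output tied to ground (0)
        return 0

for a in (0, 1):
    print(f"in={a} out={cmos_inverter(a)}")   # prints: in=0 out=1, in=1 out=0
```

Because exactly one transistor of the pair conducts for any steady input, a CMOS gate draws essentially no current while idle - a big part of why CMOS came to dominate chip design.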

Integrated Circuit

At some point, scientists figured out that many transistors could be etched onto a single flat piece of semiconductor. Jack Kilby of Texas Instruments built the first working IC (a “hybrid IC”) in 1958 - on a sliver of germanium. The monolithic IC was conceived at Fairchild by Noyce in 1959, and the company’s first such chip (1960) had four transistors on a single piece of silicon.

Fairchild’s first integrated circuit (invented by Robert Noyce) had four transistors, 1960. | Image source and further reading

Just for reference, the 2022 Apple M1 Ultra chip has 114 billion transistors.

Jack Kilby won the Nobel Prize in Physics in 2000 for his invention. He would have shared it with Noyce but unfortunately, Noyce died in 1990, and the Nobel isn’t awarded posthumously.

If you are wondering why the Nobel came almost half a century after the invention - therein lies a story. The IC was an engineering feat rather than a scientific discovery, and the physics prize rarely goes to engineering. Perhaps the committee could simply no longer ignore how few 20th-century inventions have offered a greater “benefit to humankind” than the monolithic IC.

In his Nobel speech, Kilby started off by reminding the audience of the story of the beaver and the rabbit. “No, I didn’t build it myself,” says the beaver. “But it’s based on an idea of mine.”

Isn’t it incredible to imagine how far we’ve come from Kilby’s first IC?

Jack Kilby’s original IC - 5 cm x 1.8 cm x 2.5 cm | Image source | Further reading: The Chip That Jack Built.

Chip

If you’re ever in a room where the topic of discussion is the difference between ICs and chips, then my recommendation is to leave the room. Or to change the topic to the difference between biscuits and cookies.

However, if you can’t leave the room because you sat on glue, then here is some critical information: the terms "chip" and "IC" (integrated circuit) are often used interchangeably.

Both refer to a miniaturised electronic circuit that has been manufactured onto a piece of semiconductor material (typically silicon). The term "chip" refers to the physical piece of semiconductor material, while "IC" refers to the circuit itself. So, an IC is a circuit that has been printed on a silicon chip.

Silicon Wafer

A silicon wafer is a smooth flat piece of single-crystal high-purity silicon. It is a semiconductor and can be altered in very specific ways in very specific areas to be a conductor or an insulator.

The process of converting a silicon wafer into chips involves repeating a few basic steps - depositing material, patterning it with photolithography, etching away the unwanted parts, and implanting dopant ions.

Photolithography

Photolithography is the process of exposing the silicon wafer (coated with a light-sensitive chemical called photoresist) to light through a mask (and through special lenses). It is because of advances in photolithography that semiconductor companies were able to keep increasing transistors per unit area and keep pace with Moore’s law.

Without going into more detail, the basic idea is that an IC can be “printed” onto a silicon wafer through an amazing (and complicated) engineering process.

The place where such printing is done is called a Fab or a Foundry.

Package

The package is a casing with terminals that allow the chip to connect to the real world. The earliest microprocessors - the 4004, 8008, etc. - were housed in 16- or 18-pin packages; later, the 6800, 6502, 8088, and 8086 came in 40-pin packages.

The famous Intel 8088 processor - the brain of IBM’s first-ever PC (1981). Incidentally, IBM’s 1981 PC also featured Microsoft’s new MS-DOS operating system.

Much has changed since then and today packages look like shiny squares.

AMD Ryzen 5 2500 processor (2017) | Image source

Board

A board, or Printed Circuit Board (PCB), is a flat board on which microelectronic components are mounted and wired to each other. An example is a computer’s “motherboard.”

A board is not made of silicon. If you say “silicon board” at a meeting, then you’ll be found out as a newbie. However, some people do use IC for a board (instead of just the chip).

A board on which a circuit is printed, with a TI chip placed in the centre. Technically, the chip is inside the black “package” | By © Raimond Spekking / CC BY-SA 4.0 (via Wikimedia Commons), CC BY-SA 4.0

Fab/Foundry

A "fab" is short for "fabrication facility." It is a specialised factory where integrated circuits (ICs), microprocessors, and other microelectronic components, are manufactured. Once the silicon chip is made, it has to be put into a package - and this process may be in the same facility, or in another (nearby) factory. The shiny thing we see inside a Mobile phone or computer with the brand name etched is the package (with the chip inside).

Fabs are incredibly clean - if dust settles on a chip during manufacturing, it is ruined. Further, the machinery used in making chips is extremely sophisticated. Here is a description of one of these machines from an NYT article: “The machine, made by the Dutch company ASML Holding, took decades to develop and was introduced in 2017. It costs $150 million or more. Shipping it to a fabrication facility requires 40 shipping containers, 20 trucks, and three Boeing 747s.”

As you can imagine, fabs are expensive to set up and make sense only if there is a large demand for chips. There are many fabs across the world, but only a few have the latest “7nm” and “5nm” technology that cutting-edge chips like the Apple M1 use.

Fabs are also called foundries, though sometimes the term foundry is reserved for companies that manufacture chips for others without designing any of their own - like TSMC, the Taiwan Semiconductor Manufacturing Company.

In contrast, Intel and Samsung design and fabricate chips in their own fabs. These companies are called IDMs - Integrated Device Manufacturers.

5nm Technology Node

The semiconductor industry uses technology nodes to denote how densely transistors have been packed on a chip. The 2022 Apple M1 Ultra chip, for example, is a “5nm chip.” It packs 114 billion transistors onto an 840 mm² die.

👉🏽 In 1965, an IC printed on a silicon chip had about 64 components. Applying Moore’s law - doubling every year from 1965 to 1975 (10 doublings) and every two years from 1975 to 2019 (22 doublings) - gives 64*(2^10)*(2^22) ≈ 275 billion transistors. The 2022 Apple M1 Ultra has 114 billion, so reality lags Moore’s curve by only five or six years - which still makes him a better seer than Nostradamus, in my opinion.
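
If you’d like to check that arithmetic yourself, here is a quick back-of-the-envelope script (Python); the 1965 starting count and the doubling cadence are the figures quoted above:

```python
import math

# Back-of-the-envelope check of the Moore's-law arithmetic quoted above:
# ~64 components in 1965, doubling yearly until 1975, then every two years.
start_1965 = 64
after_1975 = start_1965 * 2 ** (1975 - 1965)              # 10 yearly doublings -> 65,536

projected_2019 = after_1975 * 2 ** ((2019 - 1975) / 2)    # 22 more doublings
print(f"Projected for 2019: {projected_2019 / 1e9:.0f} billion transistors")    # ~275 billion

# When would this projection have reached the M1 Ultra's 114 billion (shipped in 2022)?
m1_ultra = 114e9
year_projected = 1975 + 2 * math.log2(m1_ultra / after_1975)
print(f"Projection reaches 114 billion around {year_projected:.0f}")             # ~2016
print(f"So reality lags the curve by about {2022 - year_projected:.0f} years")   # ~6 years
```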

It is tempting to think that 5nm means that the thickness of something is 5 nanometres. But actually “5nm technology node” is only a marketing term. The Samsung 5nm chip has a “gate pitch of 27 nanometers and gate length of 8-10nm.”

5nm is more “advanced” than 7nm, 10nm, 14nm, 22nm, 28nm, and so on. The lower the technology node, the faster and more power-efficient the chip. Smaller nodes do, however, bring their own challenges - more leakage, more variability, and far higher design and manufacturing costs.

It is important to note that not all applications need 5nm chips. As of 2023, only two companies - Samsung and TSMC - can actually produce 5nm chips.

SCL India - India’s only fab - operates at 180nm.

Mindgrove SoCs are designed to be manufactured at 28nm.

TSMC and Taiwan

Taiwan Semiconductor Manufacturing Company (TSMC) is the global leader in chip manufacturing, with roughly $60B in annual revenue. In Q3 2021, TSMC had a 57% share of global semiconductor fab revenue. The next three were Samsung with 15%, UMC with 8%, and GlobalFoundries with 6%. Wow, that fell fast.

The company was founded by Morris Chang, who spent 25 years at Texas Instruments (TI). At his peak there, he was responsible for TI’s worldwide semiconductor business. Chang founded TSMC in 1987, soon after he returned to Taiwan.

TSMC is a pure-play foundry, i.e. it doesn’t design chips. TSMC, together with electronics contract manufacturers (which are not fabs) such as Foxconn ($175B), Pegatron, Quanta, and Wistron, has put Taiwan at the centre of the modern microelectronics supply chain.

The global semiconductor supply chain is extremely distributed | Image source
Taiwan is at the centre of the global semiconductor industry thanks to its manufacturing prowess | Image source

DRAM and Japan

DRAM, or Dynamic Random-Access Memory, is a memory device built from cells, each a transistor-capacitor pair that stores one bit (a 0 or a 1). A “64K 8-bit” DRAM stores 64 kilobytes - 64K addressable locations, each 8 bits wide.

It may sound obvious, but worth repeating: modern computing is possible due to multiple miniaturised semiconductor devices - processors, memory, controllers, i/o devices, and interfaces.

Of these devices, memory is generic: a 64Mx32 DRAM is the same whether it goes into a computer, a calculator, a spaceship, or a washing machine. Those systems, however, use different microprocessors and i/o units, which can be highly application-specific.
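
If the “depth x width” naming above feels abstract, here is a tiny Python helper (purely illustrative) that turns it into a capacity in bytes:

```python
# The "<depth> x <width>" naming for memory parts: <depth> addressable
# locations, each <width> bits wide.

def dram_capacity(depth: int, width_bits: int) -> str:
    """Return the total capacity of a depth x width memory part."""
    total_bits = depth * width_bits
    return f"{depth:,} x {width_bits} -> {total_bits:,} bits = {total_bits // 8:,} bytes"

K, M = 1024, 1024 * 1024
print(dram_capacity(64 * K, 8))    # the 64K x 8 part:  524,288 bits = 65,536 bytes (64 KB)
print(dram_capacity(64 * M, 32))   # the 64M x 32 part: 2,147,483,648 bits = 268,435,456 bytes (256 MB)
```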

It is a part of Silicon Valley history that as companies like Intel developed newer and newer chips, the manufacturing of these chips - especially the generic ones - got outsourced to countries where labour was cheaper. Japan, and later South Korea, took the lead in DRAM manufacturing in the 1980s, and together they still account for around 65% of the world’s memory manufacturing.

Fabless

Companies that only design chips - but do not manufacture them - are called fabless companies. The top fabless companies in the world are Qualcomm, Broadcom, Nvidia, MediaTek (Taiwanese), AMD, Apple, Xilinx, etc. For reference, 2022 annual revenues were roughly: Qualcomm $44B, Broadcom $33B, Nvidia $26B, AMD $23B, and MediaTek $16B.

Mindgrove Technologies is a fabless company.

Microprocessor, Microcontroller, CPU, GPU

A microprocessor is a type of IC (or chip) that can process logical and arithmetic instructions. In addition to ALUs (Arithmetic Logic Units), microprocessors also have control units and registers. A microprocessor sits on a board (”PCB”) with the other microelectronic systems (memory, i/o, etc.) external to it. Some really famous microprocessors are the Intel Pentium, the AMD Athlon, and the various ARM-based processors.

Microprocessors shot into pop culture with the popular “Intel Inside” campaign that ran in the 1990s and 2000s | Image source and further reading

A microcontroller is a microprocessor + memory + I/O (input/output) units on one chip - usually used in an embedded system. Over time, these definitions have widened and industry folks use the terms microprocessor and microcontroller loosely.

A Central Processing Unit (CPU) is the logic circuitry part of a computer or embedded system. You could say that a microprocessor is a CPU on a single chip. A Graphics Processing Unit (GPU) is a special microprocessor specifically designed to handle graphical processing. It turns out that GPUs are also great for handling AI jobs.

In summary, a microprocessor is a type of CPU that is used in computers and other digital devices, while a microcontroller is a type of microprocessor that is designed for use in embedded systems and control applications. A GPU is a specialised type of microprocessor that is designed for handling graphical processing and AI tasks.

👉🏽 Microprocessors were called “micro” processors because, at the time, transistor widths were measured in micrometres.
This image of "The painting of the Girl with a Pearl Earring, but with microprocessors in the background" was created using DALL-E, an AI image-generation tool from OpenAI (the makers of GPT-3). AI tools like these would have been impossible without the development of cheap GPUs.

System on Chip - SoC

A System on Chip (SoC) is an IC that has multiple components on the same silicon substrate - CPU, memory interfaces, i/o interfaces, storage, etc. The Apple M1 is an SoC. So is the Qualcomm Snapdragon - which is used in the Samsung Galaxy range of phones.

Qualcomm Snapdragon processor on a board for the HTC Desire mobile phone | CC BY-SA 4.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=83664167

The use of SoCs in a system is in contrast to the use of a motherboard. In the former, the main units are within the SoC. In the latter, different units are placed separately on the motherboard and connected.

In an SoC like the 28nm Mindgrove Silicon Secure IoT (which uses the Shakti core), adjacent wires are spaced a few hundred nanometres (or less) apart. In comparison, adjacent traces on boards (PCBs) are laid out 0.1-0.15 mm apart - roughly a thousand times farther. This is because advanced IC printing technology allows us to print things far closer together on silicon - if you haven’t understood this, then you need to restart the article from the top!
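
Here is a rough order-of-magnitude check of that comparison (Python); the ~100 nm on-chip wire spacing is an assumed, typical figure for a 28nm-class process, not a published Mindgrove number:

```python
# Order-of-magnitude comparison of wiring density: on-chip vs on a PCB.

on_chip_pitch_nm = 100             # assumption: ~0.1 micrometre between adjacent on-chip wires
pcb_pitch_mm = 0.1                 # the PCB figure quoted above (0.1-0.15 mm)
pcb_pitch_nm = pcb_pitch_mm * 1e6  # 1 mm = 1,000,000 nm

ratio = pcb_pitch_nm / on_chip_pitch_nm
print(f"PCB traces are roughly {ratio:,.0f}x farther apart than on-chip wires")   # ~1,000x
```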

A block diagram for the Mindgrove Secure IoT SoC.

Shakti

Shakti is India’s first industrial-grade microprocessor. It was built by the RISE group (Reconfigurable Intelligent Systems Engineering) at the Indian Institute of Technology, Madras (IIT-M). It is an open-source initiative, with the source code released under a modified BSD license. The development of Shakti was funded by the Ministry of Electronics and Information Technology (MeitY), Government of India.

Shakti started as a class project at IIT-M in 2012. The processors were taped out at SCL Mohali at 180nm and at Intel’s Oregon fab at 22nm.

Shakti is based on RISC-V architecture.

Tape Out

Tape-out is the final output of the chip design process - specifically, the point at which the finished layout (the data used to make the photomasks) is sent to the fab. More broadly, it can also refer to the first small-volume printing of a new chip design (100 to 1,000 units).

ISA, x86, ARM, RISC-V

ISA stands for "Instruction Set Architecture". It refers to the set of standardised instructions that a computer's processor can execute.

x86 is a well-known instruction set architecture used in many personal computers and servers. It began as the ISA of the Intel 8086 processor. The 8086 itself is long obsolete, but the ISA (much extended since) lives on in processors made by Intel and AMD.

ARM (Acorn RISC Machine) is another popular instruction set architecture used in many mobile devices, as well as other embedded systems and Internet of Things (IoT) devices. Most mobile phones use SoCs with ARM-based microprocessors.

Note: It’s easy to confuse ARM (which is an ISA) and AMD (Advanced Micro Devices is a fabless chip company). Just like it’s easy to confuse AA and AAA batteries.

RISC-V (Reduced Instruction Set Computing) is a new, open-source instruction set architecture that is becoming popular for use in a variety of applications, including edge devices, high-performance computing, and data centres. Unlike x86 and ARM, RISC-V is not tied to a specific company or vendor, allowing for greater flexibility and innovation.
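
To make “a set of standardised instructions” concrete, here is a toy three-instruction machine in Python. The instructions (LOAD, ADD, PRINT) and the machine itself are entirely made up for illustration - this is not x86, ARM, or RISC-V:

```python
# A toy "ISA": the fixed menu of operations a processor promises to execute.

def run(program):
    regs = {}                         # the machine's registers
    for op, *args in program:
        if op == "LOAD":              # LOAD reg, constant
            reg, value = args
            regs[reg] = value
        elif op == "ADD":             # ADD dest, src1, src2
            dest, a, b = args
            regs[dest] = regs[a] + regs[b]
        elif op == "PRINT":           # PRINT reg
            print(regs[args[0]])
        else:
            raise ValueError(f"Illegal instruction: {op}")

# A program written against this ISA. It doesn't care how run() is built,
# just as software compiled for x86, ARM, or RISC-V doesn't care whose chip
# executes it, as long as the chip implements that ISA.
run([("LOAD", "r1", 2), ("LOAD", "r2", 3), ("ADD", "r3", "r1", "r2"), ("PRINT", "r3")])   # prints 5
```

The point is that the program depends only on the instruction set, not on how run() is implemented internally - which is exactly the relationship between software and the processors that execute it.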

VLSI

Very-large-scale integration (VLSI) is the broad term for the process of embedding thousands, millions, and now billions of transistors on a single silicon chip. VLSI is the successor to large-scale integration (LSI), medium-scale integration (MSI), and small-scale integration (SSI) technologies.

In engineering curricula, the specialisation that deals with the semiconductor industry - drawing on computer science, electronics, and physics - is simply called VLSI.

EDA

VLSI engineers use Electronic Design Automation (EDA) tools - such as Cadence Virtuoso and the Synopsys design suite - to design and simulate chips.

FPGA

Field Programmable Gate Arrays (FPGAs) are chips built around a matrix of configurable logic blocks (CLBs) connected via programmable interconnects. The cool thing about FPGAs is that they can be programmed and reprogrammed for a desired application or functionality.

So a chipmaker can program the logic of a new chip into an FPGA - thus making a prototype that can be used for testing. Needless to say, the prototype won’t behave exactly like the final silicon chip - it will be slower, and some functionality may differ.
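
Under the hood, those configurable logic blocks are built around look-up tables (LUTs). Here is a simplified Python sketch of a 2-input LUT - “programming” an FPGA largely amounts to filling in thousands of little truth tables like this, plus the routing between them:

```python
# A 2-input look-up table (LUT), the basic building block of an FPGA's
# configurable logic. Simplified illustration.

def make_lut(truth_table):
    """truth_table[(a << 1) | b] gives the output for inputs (a, b)."""
    def lut(a: int, b: int) -> int:
        return truth_table[(a << 1) | b]
    return lut

and_gate = make_lut([0, 0, 0, 1])   # the LUT configured as an AND gate
xor_gate = make_lut([0, 1, 1, 0])   # the same hardware, reconfigured as XOR

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={and_gate(a, b)}  XOR={xor_gate(a, b)}")
```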

👉🏽 The design process of a chip is EDA → FPGA → Tape-out → Production.

Mindgrove Technologies

Mindgrove Technologies is a deep-tech company that designs SoCs under the brand “Mindgrove Silicon.” Its chips are powered by the RISC-V based Shakti core developed at IIT Madras.

It is based out of the IIT Madras Research Park and is incubated at the Pravartak Innovation Hub and the IIT Madras Incubation Cell. IIT Madras has a rich legacy of VLSI research - making it the best place in India for a new fabless company.

The first three Mindgrove Silicon SoCs are Secure IoT, Vision SoC, and Edge Compute. The tape-out for the first of these - Secure IoT - is due in mid-2023, and production is expected to commence in late 2023. Secure IoT is half the size of comparable microprocessor chips on the market, leading to better battery life and lower cost. Plus, it has hardware-accelerated encryption embedded in the chip.

Summary

In the last 75 years, the semiconductor industry has transformed the world. Most of the inventions and innovations have come from a small set of people based in a few universities and companies.

It might feel like one needs a degree in computer science to engage in chip-talk. But it really isn’t so. And now that you have read our introduction to semicon-lingo, you can ace that networking lunch.

Microprocessors are just like humans, only much smaller, and way faster at math. [image via Dall-E]

Oh, and in case the conversation veers towards “chip-wars” and you need to prepare again, here are two excellent resources: Chris Miller’s excellent book Chip War: The Fight for the World's Most Critical Technology and Johnny Harris’ video USA vs China, The War You Can't See.