In its Jan/Feb 2023 issue, MIT Technology Review published its “10 Breakthrough Technologies 2023” list. This annual list features technologies that MIT Tech Review believes will soon impact, and transform, our world.
The first item on the 2023 list informs us that CRISPR gene editing-based solutions for cholesterol control are already in clinical trials. A few others on the list are: AI that makes images, mass-market military drones, genetically engineered pigs that grow organs for humans, and “the inevitable EV.”
Among this august company is “a chip design that changes everything.” The introduction reads:
The chip industry is undergoing a profound shift. Manufacturers have long licensed chip designs from a few big firms. Now, a popular open standard called RISC-V is upending those power dynamics by making it easier for anyone to create a chip. Many startups are exploring the possibilities.
Why do the editors at MIT Tech Review believe that RISC-V is likely to change the chip industry? Before we get there, we need to start with some theory.
RISC stands for Reduced Instruction Set Computing. RISC and CISC (Complex Instruction Set Computing) are two types of Instruction Set Architectures (ISAs). RISC-V (pronounced “risk five”) is a specific RISC ISA.
The fact that RISC-V is a RISC ISA and that there exist other RISC ISAs can be mildly confusing. To a newcomer, a sentence like “ARM is RISC, and so is RISC-V, but ARM is not RISC-V” can be confounding. But people in the chip-verse (the semiconductor industry) are generally polite, and no one wants to embarrass newbies by bringing this up.
RISC-V is an open standard - i.e. companies don’t have to pay anyone to develop RISC-V-based products. Further, they may choose to keep their products open (to share them with the community) or closed (to commercialise them).
Computer hardware (like a processor) needs "instructions" that it can understand and process. An example of an instruction could be "take a value from memory and copy it to a register" or "take two values and perform an arithmetic operation on them."
A few hundred years ago, a machine performing basic arithmetic was a really big deal. Our expectations have evolved over time - “Convince 1 in 3 humans that you are human;” “Beat Kasparov at Chess;” “Launch this rocket, get it back to earth, and land it on its bum;” or even “Write a Twitter post for me.” But even these human-like tasks boil down to thousands of simple instructions at the processor level.
Coming back, a set of instructions for a processor to follow is an "instruction set."
Expanding this concept, an "Instruction Set Architecture" (ISA) articulates what a computer can do - hence it is like an abstract computer model. Simply put, an ISA is like a large PDF that explains how a computer will take commands. And an actual CPU is a realisation of an ISA.
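To make the spec-versus-silicon distinction concrete, here is a minimal sketch in Python. The three-instruction “ISA” and every name in it are invented for illustration: the comment block plays the role of the “large PDF” (the specification), and the interpreter is one possible realisation of it, just as a physical CPU is another.

```python
# A toy "ISA" - the specification is simply the agreed meaning of three instructions:
#   LOAD  reg, addr   -> copy a value from memory into a register
#   ADD   reg1, reg2  -> reg1 = reg1 + reg2
#   STORE addr, reg   -> copy a register's value back into memory

def run(program, memory):
    """One 'realisation' of the toy ISA: a software CPU that executes it."""
    regs = {}
    for op, dst, src in program:
        if op == "LOAD":
            regs[dst] = memory[src]
        elif op == "ADD":
            regs[dst] = regs[dst] + regs[src]
        elif op == "STORE":
            memory[dst] = regs[src]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return memory

# Add the values at addresses 0 and 1, store the sum at address 2.
memory = {0: 4, 1: 5, 2: 0}
program = [
    ("LOAD", "A", 0),
    ("LOAD", "B", 1),
    ("ADD", "A", "B"),
    ("STORE", 2, "A"),
]
print(run(program, memory)[2])  # -> 9
```

Any other implementation that gives these three instructions the same meaning - hardware or software - is an equally valid realisation of the same ISA.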
The most popular ISAs are Intel x86 and ARM (owned by Arm). Both are proprietary - teams developing hardware or software that use them have to pay hefty license/royalty fees - limiting innovation to those who can afford them.
To allow more teams to develop hardware, a set of well-meaning academicians at UC Berkeley created an open-standard ISA called RISC-V. Using RISC-V attracts no license/royalty fee.
As mentioned earlier, RISC-V is a specific RISC-type ISA, and so is ARM. x86, on the other hand, is a CISC-type ISA.
RISC stands for Reduced Instruction Set Computing. CISC stands for Complex Instruction Set Computing. The technical differences between RISC and CISC are as follows (summarised from this article):
Consider a common task: multiplying two numbers stored in memory and writing the result back. A CISC machine can do this with a single instruction that loads both values, multiplies them, and stores the product back to memory. A RISC machine breaks the same task into separate steps: load each value into a register (say A and B), multiply the registers, and store the result.

The first method is the CISC approach, and the second is the RISC approach (illustrative, not actual RISC code). The CISC approach has fewer instructions, but the hardware does more work - more cycles - per instruction. The instruction is “complex.” RISC does the opposite - it reduces the cycles per instruction.
In RISC, the number of instructions per program goes up. This might seem inconvenient, but there are other advantages. For example, in the RISC approach above, the registers A and B remain as is and can be used later without re-loading. Thus the complete code may be more efficient.
Without going into more technical details, we note that the C in CISC is complex since the instructions are “complex” - they refer to multiple things a processor must do. In RISC, they are “reduced” - each instruction is one thing (and needs one cycle).
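As a concrete sketch, multiplying two values held at memory locations 2:3 and 5:2 might look like this under each approach. The mnemonics below are illustrative pseudo-assembly, not instructions from any real ISA:

```
; CISC: one "complex" instruction does everything, over several cycles
MULT 2:3, 5:2      ; load both values, multiply, store the result back at 2:3

; RISC: four "reduced" instructions, each taking roughly one cycle
LOAD  A, 2:3       ; copy the value at 2:3 into register A
LOAD  B, 5:2       ; copy the value at 5:2 into register B
PROD  A, B         ; multiply A and B
STORE 2:3, A       ; write the result back to memory
```

Note that in the RISC version, values loaded into registers can be reused by later instructions without another trip to memory.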
The most popular implementation of CISC is the x86 ISA. Historically, the Intel 8086 was the first processor to use it - that’s why it’s called x86. The 8086 was extremely popular, and this thrust x86 into the centre of the computing world. The highly influential 1981 IBM PC used the Intel 8088 chip - a variant of the 8086.
The success of x86 made CISC the de facto ISA for the computing world for a long time. Today, most PCs and servers use processors based on the x86 CISC ISA.
The most popular RISC ISA is ARM, which dominates the mobile market - 95% of all mobiles across the world use an ARM-based chip, most likely a Qualcomm Snapdragon, a MediaTek SoC, or an Apple A-series chip.
RISC was developed initially in the 1970s to improve the way a processor uses memory - memory was slow and expensive back then - this is no longer so. Further, most CISC implementations don’t have “complex” instructions as they did in the 1970s - they have evolved with the times.
So the schisms in the chip-verse today are less RISC vs CISC and more x86 vs ARM. And soon, it’s going to be x86 vs ARM vs RISC-V.
In the late 1970s, a team of researchers at IBM led by John Cocke started exploring ways to make computer processors more efficient by simplifying the ISA. Cocke and his team believed an uncomplicated, reduced instruction set architecture would be faster and more efficient.
The team developed a RISC processor, the IBM 801, released in 1980. Instead of a complex set of instructions, it used a simplified instruction set and a pipelined architecture, allowing faster execution. There is some debate over whether the 801 was a true RISC processor, but John Cocke is nonetheless considered the “father of RISC.” Back in 1980, Cocke was already an IBM Fellow. He later won the ACM Turing Award, the National Medal of Science, and the National Medal of Technology.
The 801 was developed to run a telephone switch. Many years later, IBM developed the RISC-based PowerPC processor, which was used in Apple's Macintosh computers from 1994 until 2005.
Also, in the early 1980s, a research team at the University of California, Berkeley, led by David Patterson and Carlo H. Séquin, started developing a RISC processor. A parallel RISC project at Stanford was led by John L. Hennessy, who later became the President of Stanford University (2000-2016) and the Chairman of Alphabet Inc (2018-present).
The team’s first RISC processor was developed in 1982. It was a 32-bit processor with a small set of simple instructions that could be executed quickly. The team’s objective was to demonstrate the feasibility and advantages of a reduced ISA. The original UC Berkeley ISA is said to have inspired the development of the ARM ISA.
The team at UC Berkeley further refined and improved their work - producing RISC-II, RISC-III (SOAR) and RISC-IV (SPUR). These processors were used for research and teaching purposes and influenced the development of commercial RISC processors.
RISC-I was developed at UC Berkeley, and yet the dominant RISC ISAs are closed systems - the ARM system, IBM’s Power ISA, MIPS, etc. Closed ecosystems create three significant problems: prohibitive licensing fees, lack of customisation, and the need for complex software tools. (The x86 ecosystem is also closed.)
Neither academic institutions like the RISE lab at IIT Madras (which developed the Shakti processor) nor startups like Mindgrove Technologies (which makes high-performance, low-power SoCs) could justify paying ARM’s upfront license fee. Even if they were to cough up the money, they would have limited rights to the IP (each agreement is negotiated separately). In fact, the story goes that, back in 201X, the RISE lab at IIT Madras wasn’t able to get reasonably priced access to an existing ISA IP with rights to modify it. They had to make do with the royalty-free Power ISA, but it had too many instructions (compared to RISC-V). So the RISE team, led by Prof Y, started working on its own RISC-V processor. This is the origin story of the Shakti processor.
Lastly, there is no standardisation or compatibility between different RISC processors - making it difficult for both software developers and hardware engineers to make code and products that work across them.
If only there were an open standard ISA freely available to everyone … so thought a group of researchers led by Krste Asanović at UC Berkeley. In 2010, Prof. Asanović and graduate students Yunsup Lee and Andrew Waterman set out to create an open-standard RISC architecture… It would be modular and extensible, with a simple and streamlined instruction set that would be easy to implement and optimise.
The team named their project RISC-V and released the first instruction manual in May 2011.
Currently, the standard and the growth of the ecosystem around it are steered by the RISC-V International Association based in Switzerland, founded in 2020 (earlier the RISC-V Foundation, founded in 2015). Their website has a detailed history page - link here. Most semiconductor and hardware companies, big and small, are members of RISC-V International - Samsung, Qualcomm, Intel, Apple, etc.
The fact that RISC-V is an open standard, with open-source hardware implementations and related open-source software available, has led to its rapid rise.
This is no coincidence. Back in 2014, Asanović and Patterson authored this paper exploring the linkages between innovation and a free, open ISA. Among other things, they discussed how “proprietary ISAs are not guaranteed to last.” The authors’ core thesis was that radical technological evolution could only happen in a free and open environment.
RISC-V is not alone in its “openness.” There is a long list of key technologies that are open standards - Bluetooth, PDF, etc. - and open source - Linux, Apache, MySQL, WordPress, etc. (What is the difference between open standard and open source? Read here.)
Specifically for the chip-verse, this open approach has come at a great time. In the last decade, the threat of geopolitical tensions (including trade conflicts, sanctions and war) and large-scale supply chain disruptions (including due to COVID) has meant that companies (and countries) are very aware that the ARM-x86 duopoly is risky to them and the industry at large.
Further, certain recent developments in the chip-verse have meant that companies are on edge with respect to their existing relationships with x86 and ARM.
You just read part 1 of a 2-part series. Read the second part "The Explosive Rise" here.