Consider a world where phones, smartwatches, and other wearable electronics don't have to be shelved or discarded in favor of a newer model. Instead, they could be upgraded with the latest sensors and processors that snap onto a device's internal chip, much as LEGO bricks can be added to an existing structure. This kind of customizable chipware could keep devices up to date while reducing our electronic waste.
With a LEGO-like design for a stackable, customizable artificial intelligence chip, MIT engineers have taken a step toward that modular ideal.
The chip comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the layers to communicate optically. Other modular chip designs rely on conventional wiring to relay signals between layers.
The MIT approach instead transmits data through the device using light rather than physical wires. As a result, the chip can be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT researcher Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because its expandability is limitless depending on the combination of layers.”
The researchers are eager to apply the design to edge computing devices — self-contained sensors and other electronics that operate independently of any central or distributed resources such as supercomputers or cloud computing. “As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, an associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The team's findings were reported in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, as well as contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, and Hun S. Kum.
Creating a path
The team's design is currently configured to carry out basic image-recognition tasks. It does so by layering image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on the chip, without the need for external software or an Internet connection.
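To get an intuition for how a memristor array can act as a physical neural network, consider the standard crossbar picture: input voltages drive the rows, each cell's conductance plays the role of a trained synaptic weight, and the current summed on each output column is effectively a dot product. The sketch below is a minimal simulation of that idea; the array size, conductance values, and input pattern are all illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical 4-input, 3-class crossbar. Conductances (in siemens)
# stand in for trained synaptic weights; values are illustrative.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # one column per output class

def crossbar_currents(voltages, conductances):
    """Ohm's law per cell plus Kirchhoff's current law per column:
    each output line sums the I = V * G contributions of its rows,
    so the whole array computes a matrix-vector product in analog."""
    return voltages @ conductances

v = np.array([0.2, 0.0, 0.2, 0.1])   # input pattern encoded as row voltages
i_out = crossbar_currents(v, G)      # one summed current per output line
predicted = int(np.argmax(i_out))    # largest current wins the classification
```

The key point is that the multiply-accumulate happens in the physics of the array itself, which is why no external software is needed for inference.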
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize a certain letter (in this example, M, I, and T). Rather than relaying a sensor's signals to a processor through physical wires, the team fabricated an optical interface between each sensor and artificial synapse array, allowing the layers to communicate without a physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign,” explains MIT postdoc Hyunseok Kim. “If you wanted to add any new function, you'd have to construct a new chip. Instead of using physical wires, we use an optical communication technology, which allows us to stack and add chips in any way we desire.”
The team's optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors together form an image sensor for receiving data, while the LEDs transmit that data to the next layer. When a signal (such as the image of a letter) reaches the image sensor, the image's light pattern encodes a particular configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array that classifies the signal based on the pattern and strength of the incoming LED light.
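The layer-to-layer hop described above can be abstracted as: pixel brightness drives LED intensity, and the next layer's photodetectors read that intensity back, attenuated by some coupling efficiency. The sketch below models one such hop; the `coupling` and `dark_noise` parameters are assumed, illustrative quantities rather than measured values from the device.

```python
import numpy as np

def optical_hop(pixel_values, coupling=0.8, dark_noise=0.0):
    """One optical layer-to-layer hop: each pixel's brightness drives
    an LED, and the photodetector beneath it reads the light back,
    scaled by an assumed coupling efficiency plus a noise floor."""
    led_out = np.clip(pixel_values, 0.0, 1.0)    # LED drive per pixel
    detected = coupling * led_out + dark_noise   # photodetector response
    return detected

image = np.array([[1.0, 0.0],
                  [0.0, 1.0]])                   # toy 2x2 light pattern
received = optical_hop(image)                    # what the next layer "sees"
```

Because each hop is just light crossing a gap, a layer can be removed or replaced without re-routing any wiring — the pixel-to-pixel alignment is the only contract between layers.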
Putting it all together
The researchers fabricated a single chip with a computing core about 4 square millimeters in size, or roughly the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, an optical communication layer, and an artificial synapse array trained to recognize one of three letters: M, I, or T. They then projected a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the greater the chance that the image is the letter that the particular array was trained to recognize.)
The researchers found that while the chip correctly recognized clear images of each letter, it had trouble distinguishing between blurry images, for instance between I and T. The researchers were able to quickly swap out the chip's processing layer for a better “denoising” processor, after which the chip identified the images correctly.
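The swap the researchers describe amounts to inserting a new stage into the processing stack without touching the rest of it. The sketch below mimics that modularity in software, under loudly stated assumptions: the "denoising" stage is a simple threshold binarization, the "classifier" is a nearest-template match, and the 3x3 letter patterns are invented for illustration — none of this is the paper's actual processing.

```python
import numpy as np

def denoise(img, threshold=0.5):
    """Illustrative 'denoising' layer: binarize pixels to suppress haze."""
    return (img > threshold).astype(float)

def classify(img, templates):
    """Pick the template closest to the image (smallest squared error)."""
    scores = {name: -float(np.sum((img - t) ** 2))
              for name, t in templates.items()}
    return max(scores, key=scores.get)

# Invented 3x3 patterns for the easily confused letters I and T.
templates = {
    "I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "T": np.array([[1, 1, 1], [0, 1, 0], [0, 1, 0]], float),
}

hazy_I = np.array([[0.3, 0.9, 0.3],
                   [0.2, 0.8, 0.3],
                   [0.3, 0.9, 0.2]])   # a noisy letter "I"

# "Swapping in" a layer is just changing the stack, not the other stages.
stack = [denoise, classify]
img = hazy_I
for stage in stack[:-1]:
    img = stage(img)
result = stack[-1](img, templates)
```

In the hardware version the same idea holds physically: the denoising processor is a new layer dropped into the stack, and the optical interfaces mean no other layer has to change.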
MIT postdoc Min-Kyu Song explains, “We demonstrated stackability, replaceability, and the ability to incorporate a new function into the device.”
The researchers plan to expand the chip's sensing and processing capabilities, and they envision a wide range of applications. “We can add layers to a cellphone's camera so it can recognize more complex images,” says Choi, who previously developed a “smart” skin for monitoring vital signs alongside Kim.
Another idea, he adds, is for modular chips built into electronics that consumers can customize with a choice of sensing and processing “bricks.” “We can make a universal chip platform and sell each layer separately, like a video game,” Jeehwan Kim explains. “We could make different types of neural networks, such as for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”
The Ministry of Trade, Industry, and Energy (MOTIE) of South Korea, the Korea Institute of Science and Technology (KIST), and the Samsung Global Research Outreach Program all contributed to this study.