Research Foundation News

April 18, 2019

A magnetic personality, maybe not. But magnets can help AI get closer to the efficiency of the human brain.

New brain-like networks could help robots approach human-like efficiency at object recognition tasks

 

Image caption: Purdue University researchers have developed a process to use magnetics with brain-like networks to program and teach devices to better generalize about different objects. (Image provided)

WEST LAFAYETTE, Ind. – Computers and artificial intelligence continue to usher in major changes in the way people shop. It is relatively easy to train a robot’s brain to create a shopping list, but what about ensuring that the robotic shopper can easily tell the difference between the thousands of products in the store?

Purdue University researchers and experts in brain-inspired computing think part of the answer may be found in magnets. The researchers have developed a process to use magnetics with brain-like networks to program and teach devices such as personal robots, self-driving cars and drones to better generalize about different objects.

“Our stochastic neural networks try to mimic certain activities of the human brain and compute through a connection of neurons and synapses,” said Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering. “This allows the computer brain to not only store information but also to generalize well about objects and then make inferences to perform better at distinguishing between objects.”

Image caption: Magnetic tunnel junction devices show switching behavior that is stochastic in nature. (Image provided)

Roy presented the technology at the annual German Physical Sciences Conference in Germany earlier this month. The work also appeared in Frontiers in Neuroscience.

The switching dynamics of a nano-magnet are similar to the electrical dynamics of neurons: magnetic tunnel junction devices exhibit switching behavior that is stochastic in nature.

This stochastic switching mimics the sigmoid response of a neuron, and the same magnetic tunnel junctions can also be used to store synaptic weights.
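For intuition, here is a minimal Python sketch of such a stochastic neuron; it is not the authors' device model. The switching probability follows a sigmoid of the input drive, and each evaluation is a single Bernoulli draw. The steepness parameter `beta` is a hypothetical stand-in for device quantities such as write-current amplitude and pulse duration.

```python
import numpy as np

_rng = np.random.default_rng()

def mtj_neuron_spike(drive, beta=1.0):
    """Emit a binary spike with sigmoid probability, mimicking the
    stochastic switching of a magnetic tunnel junction. `beta` is an
    illustrative steepness parameter, not a measured device constant."""
    p_switch = 1.0 / (1.0 + np.exp(-beta * drive))  # sigmoid switching probability
    return _rng.random() < p_switch                 # True = the magnet switched (a spike)
```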

The Purdue group proposed a new stochastic training algorithm for synapses, termed Stochastic-STDP, based on spike-timing-dependent plasticity (STDP), a learning rule that has been experimentally observed in the rat hippocampus. The algorithm harnesses the magnet's inherent stochasticity to switch magnetization states probabilistically while learning different object representations.
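A rough sketch of how such a rule might look in software, assuming (as the description above suggests) that each synapse is a binary magnetization state whose switching probability decays with the gap between pre- and post-synaptic spike times; the constants `p_max` and `tau` are illustrative, not values from the paper.

```python
import numpy as np

_rng = np.random.default_rng()

def stochastic_stdp_update(weight, t_pre, t_post, p_max=0.8, tau=20.0):
    """Probabilistically flip a binary synaptic weight based on spike timing.

    Causal order (pre before post) potentiates, acausal order depresses;
    the flip probability decays exponentially with the timing gap |dt|.
    A sketch of the Stochastic-STDP idea, not the published algorithm."""
    dt = t_post - t_pre
    p_flip = p_max * np.exp(-abs(dt) / tau)  # switching probability falls off with |dt|
    if _rng.random() < p_flip:
        weight = 1 if dt >= 0 else 0         # potentiate on causal order, else depress
    return weight
```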

The trained synaptic weights, encoded deterministically in the magnetization state of the nano-magnets, are then used during inference. Advantageously, the use of high-energy-barrier magnets (30-40 kT, where k is the Boltzmann constant and T is the operating temperature) not only allows compact stochastic primitives but also enables the same device to serve as a stable memory element that meets data-retention requirements. The barrier height of the nano-magnets used to perform the sigmoid-like neuronal computations, however, can be lowered to 20 kT for higher energy efficiency.
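To see why the 30-40 kT figure matters, one can plug it into the standard Neel-Arrhenius relaxation model, tau = tau0 * exp(E/kT). The attempt time tau0 of about 1 ns used below is a common textbook assumption, not a figure from the article:

```python
import math

def retention_time_s(barrier_kT, attempt_time_ns=1.0):
    """Mean retention time from the Neel-Arrhenius relation
    tau = tau0 * exp(E/kT); tau0 ~ 1 ns is a textbook assumption."""
    return attempt_time_ns * 1e-9 * math.exp(barrier_kT)

for barrier in (20, 30, 40):
    print(f"{barrier} kT -> ~{retention_time_s(barrier):.2g} s")
# 20 kT -> ~0.49 s     (fine for transient neuronal computation)
# 30 kT -> ~1.1e+04 s  (about three hours)
# 40 kT -> ~2.4e+08 s  (about 7.5 years, i.e. a stable memory element)
```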

“The big advantage with the magnet technology we have developed is that it is very energy-efficient,” said Roy, who leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC). “We have created a simpler network that represents the neurons and synapses while compressing the amount of memory and energy needed to perform functions similar to brain computations.”

Roy said the brain-like networks can also help solve difficult problems, including combinatorial optimization problems such as the traveling salesman problem and graph coloring. The proposed stochastic devices can act as a “natural annealer,” helping algorithms escape local minima.
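As a toy illustration of the "natural annealer" idea, stochastic binary neurons can run Glauber-dynamics annealing, where noisy flips let the state escape local minima. Max-cut stands in here for the combinatorial problems named above; this is not the authors' implementation.

```python
import numpy as np

def stochastic_anneal_maxcut(adj, steps=5000, beta0=0.1, beta1=5.0, seed=0):
    """Glauber-dynamics annealing on a max-cut instance driven by
    stochastic binary 'neurons'. A toy stand-in for the combinatorial
    problems named above, not the authors' implementation.

    adj: symmetric edge-weight matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    s = rng.choice([-1, 1], size=n)                 # random initial spin configuration
    for t in range(steps):
        beta = beta0 + (beta1 - beta0) * t / steps  # ramp inverse temperature (anneal)
        i = rng.integers(n)
        dE = -2 * s[i] * (adj[i] @ s)               # energy change if spin i flips
        if rng.random() < 1.0 / (1.0 + np.exp(beta * dE)):  # sigmoid flip probability
            s[i] = -s[i]                            # noisy flips escape local minima
    cut_weight = ((1 - np.outer(s, s)) * adj).sum() / 4  # weight of edges crossing the cut
    return s, cut_weight
```

In hardware, the sigmoid flip rule is exactly the noisy behavior a low-barrier magnetic tunnel junction would supply for free, which is the sense in which the devices are "natural" annealers.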

The work aligns with Purdue’s Giant Leaps celebration, which acknowledges the university’s global advancements in artificial intelligence as part of Purdue’s 150th anniversary. Artificial intelligence is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

Roy has worked with the Purdue Research Foundation Office of Technology Commercialization on patented technologies that provide the basis for some of the research at C-BRIC. The researchers are looking for partners to license the technology.

About Purdue Research Foundation Office of Technology Commercialization

The Office of Technology Commercialization operates one of the most comprehensive technology transfer programs among leading research universities in the U.S. Services provided by this office support the economic development initiatives of Purdue University and benefit the university's academic activities. The office is managed by the Purdue Research Foundation, which received the 2016 Innovation and Economic Prosperity Universities Award for Innovation from the Association of Public and Land-grant Universities. For more information about funding and investment opportunities in startups based on a Purdue innovation, contact the Purdue Foundry at foundry@prf.org. For more information on licensing a Purdue innovation, contact the Office of Technology Commercialization at otcip@prf.org. The Purdue Research Foundation is a private, nonprofit foundation created to advance the mission of Purdue University.           

Writer: Chris Adam, 765-588-3341, cladam@prf.org 

Source:
Kaushik Roy, kaushik@purdue.edu

 


Abstract

ReStoCNet: Residual Stochastic Binary Convolutional Spiking Neural Network for Memory-Efficient Neuromorphic Computing

Gopalakrishnan Srinivasan and Kaushik Roy

In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike Timing Dependent Plasticity (STDP)-based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), incorporating Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while yielding sufficiently high classification accuracy on the chosen pattern recognition tasks.
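A minimal sketch of the Hebbian/anti-Hebbian flavor of HB-STDP on one binary kernel, under assumptions the abstract leaves open: the flip probabilities `p_pot`/`p_dep` and the exact update conditions are illustrative, not values from the paper.

```python
import numpy as np

def hb_stdp_update(kernel, pre_spikes, post_fired, p_pot=0.6, p_dep=0.4, seed=None):
    """Layer-wise unsupervised update of a binary (+1/-1) kernel, sketching
    the Hebbian/anti-Hebbian character of HB-STDP described in the abstract.

    kernel:     array of +1/-1 binary weights
    pre_spikes: binary array (same shape) of recent presynaptic activity
    post_fired: bool, whether the kernel's output neuron spiked"""
    rng = np.random.default_rng(seed)
    u = rng.random(kernel.shape)  # one uniform draw per weight
    if post_fired:
        # Hebbian: active inputs drift toward +1, silent inputs toward -1
        kernel = np.where((pre_spikes == 1) & (u < p_pot), 1, kernel)
        kernel = np.where((pre_spikes == 0) & (u < p_dep), -1, kernel)
    else:
        # anti-Hebbian: weaken weights whose inputs fired without an output spike
        kernel = np.where((pre_spikes == 1) & (u < p_dep), -1, kernel)
    return kernel
```

Because each weight is a single bit rather than a 32-bit float, storing the kernels this way is the source of the memory compression the abstract reports.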

