I just read a feed telling me that Intel has developed what it calls a self-learning, “neuromorphic” chip. Exactly what does that mean? Well, let me back up a minute. Most artificial intelligence (AI) systems today are resource-intensive; that is, they are not “single chip.” They are systems consisting of processors, memory, storage and support hardware. And such systems live either in the cloud or in some dedicated server farm, and the only way to access them is via the Internet. This has been a bottleneck for applications such as edge computing, small cells, smart “X” and various other entities distant from the core networks.
But if Intel has, indeed, come up with a way to put deep learning on a single chip, it is a revolutionary step for autonomous devices and networks. Now, elements such as wireless edge networks can use deep learning locally to optimize functions such as resource management. They can adapt, in real time and, more importantly, locally, to conditions relative to time and usage, based upon feedback from the environment, to control any number of devices.
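To make that concrete, here is a minimal sketch, in plain Python rather than anything Intel ships, of what such a local feedback loop might look like: a tiny Q-learning agent picks a bandwidth tier for an edge cell and improves its policy from environmental feedback. The load levels, tiers and reward model are all illustrative assumptions.

```python
import random

# Illustrative sketch of local, feedback-driven resource management.
# Nothing here is Intel's API; the states, actions and reward model
# are invented for demonstration.

LOAD_LEVELS = ["low", "medium", "high"]   # observed state of the cell
BANDWIDTH_TIERS = [10, 50, 100]           # actions: Mbps to allocate

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

# Q-table: expected long-run reward of choosing a tier under a given load.
q = {(s, a): 0.0 for s in LOAD_LEVELS for a in BANDWIDTH_TIERS}

def choose_tier(load):
    """Epsilon-greedy: usually exploit the best-known tier, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(BANDWIDTH_TIERS)
    return max(BANDWIDTH_TIERS, key=lambda a: q[(load, a)])

def reward(load, tier):
    """Hypothetical feedback signal: penalize starving users and wasting capacity."""
    demand = {"low": 10, "medium": 50, "high": 100}[load]
    if tier < demand:
        return -2.0                        # congestion: users were starved
    return 1.0 - 0.01 * (tier - demand)    # small penalty for over-provisioning

def step(load, next_load):
    """One control cycle: act, observe feedback, update the table locally."""
    tier = choose_tier(load)
    r = reward(load, tier)
    best_next = max(q[(next_load, a)] for a in BANDWIDTH_TIERS)
    q[(load, tier)] += ALPHA * (r + GAMMA * best_next - q[(load, tier)])

# Simulated day of shifting load; in a real deployment the load would come
# from live measurements at the edge node itself.
loads = [random.choice(LOAD_LEVELS) for _ in range(1000)]
for now, nxt in zip(loads, loads[1:]):
    step(now, nxt)

# The learned policy: best tier per load level.
print({s: max(BANDWIDTH_TIERS, key=lambda a: q[(s, a)]) for s in LOAD_LEVELS})
```

The point is the shape of the loop, observe, act, get feedback, update, running entirely on the device rather than in a distant data center.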
On top of that, the chip is said to be extremely energy efficient, capable of making logical inferences and improving its intelligence over time, much like the human brain; hence the term neuromorphic AI.
A quintessential application would be cybersecurity. Neuromorphic systems could be applied here to “learn” the patterns of malware from experience and then be used to flag new code as suspicious without actually knowing what the code contains.
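As a rough illustration of that idea, and not a claim about how Intel’s chip does it, here is a toy anomaly detector in Python. It incrementally learns a byte-frequency profile of known-good code and flags anything too far from that profile, with no signature for the new sample required. The feature choice and the threshold are illustrative assumptions.

```python
import math

def features(code: bytes):
    """Crude feature vector: relative frequency of each byte value."""
    counts = [0] * 256
    for b in code:
        counts[b] += 1
    total = max(len(code), 1)
    return [c / total for c in counts]

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

class Detector:
    """Learns a running profile of 'normal' code and flags outliers."""

    def __init__(self):
        self.centroid = None
        self.n = 0

    def learn(self, sample: bytes):
        """Incrementally fold a known-good sample into the profile."""
        f = features(sample)
        if self.centroid is None:
            self.centroid = f
        else:
            self.centroid = [(c * self.n + x) / (self.n + 1)
                             for c, x in zip(self.centroid, f)]
        self.n += 1

    def suspicious(self, sample: bytes, threshold=0.6):
        """Flag samples far from everything seen so far."""
        if self.centroid is None:
            return True  # nothing learned yet; treat everything as unknown
        return distance(features(sample), self.centroid) > threshold

det = Detector()
for benign in [b"print('hello')", b"x = 1 + 2", b"for i in range(3): pass"]:
    det.learn(benign)

print(det.suspicious(b"y = x * 2"))                      # False: near the profile
print(det.suspicious(bytes([0x90] * 64) + b"\xcc\xcc"))  # True: NOP-sled-like blob
```

A production system would use far richer features than byte frequencies, but the principle is the same: the detector never needs to know what the new code contains, only that it does not look like anything it has learned.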
Other applications could be in frequency management, where peak wireless usage patterns can be learned and the data applied going forward. Because such patterns vary over time, these chips can adapt to and predict such variations, ensuring that the right amount of bandwidth is available, dynamically and in real time. This has significant implications for upcoming wireless platforms such as high-speed Wi-Fi, Licensed Assisted Access (LAA) and similar unlicensed platforms, as well as for the millimeter-wave spectrum targeted at enhanced mobile broadband (eMBB).
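Here is a minimal sketch of that kind of learned provisioning, again in plain Python with an invented demand model and headroom factor (this is not an LAA or 3GPP mechanism): an exponentially weighted profile of demand per hour of day, updated online and used to pre-allocate capacity before the peak arrives.

```python
import math
import random

ALPHA = 0.2              # how quickly the profile adapts to shifting patterns
profile = [0.0] * 24     # learned average demand (Mbps) per hour of day

def observe(hour, demand_mbps):
    """Online update: blend the newest measurement into the hourly profile."""
    profile[hour] += ALPHA * (demand_mbps - profile[hour])

def allocate(hour):
    """Provision the predicted demand plus 20% headroom for the coming hour."""
    return 1.2 * profile[hour]

# Simulate two weeks of traffic with an evening peak around 8pm; a real
# deployment would feed in live measurements instead.
for day in range(14):
    for hour in range(24):
        demand = 40 + 60 * math.exp(-((hour - 20) ** 2) / 8) + random.uniform(-5, 5)
        observe(hour, demand)

print(f"Predicted allocation at 8pm: {allocate(20):.0f} Mbps")
print(f"Predicted allocation at 4am: {allocate(4):.0f} Mbps")
```

Because the profile keeps updating, a gradual shift in peak hours would be tracked automatically, which is exactly the adapt-and-predict behavior described above.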
A third application could be autonomous vehicles, where these chips can learn driving habits and patterns, along with hazard and environmental condition variables.
This is just the tip of the iceberg, and a very exciting vector. Should self-learning chips prove commercially feasible, many of the practical limits on implementing AI will disappear. Applications are nearly limitless, making everything and everyone a self-contained, intelligent unit, to the joy of sci-fi fans everywhere.
EDITORS’ NOTE — In case you haven’t heard of “Skynet,” it was the “neural net-based, conscious group mind” featured in the Terminator movies.
Ernest Worthman is the Executive Editor of Applied Wireless Technology magazine. A Life Member of the IEEE, his 20-plus years of editorial experience includes being the Editorial Director of Wireless Design and Development and Fiber Optic Technology, the Editor of RF Design, the Technical Editor of Communications Magazine, Cellular Business, Global Communications and a Contributing Technical Editor to Mobile Radio Technology, Satellite Communications, as well as computer-related periodicals such as Windows NT.