August 18, 2017 —
I am using that well-known adage to start this missive as a lead-in to talk about getting one’s arms around the development of artificial intelligence. Most of the noise around AI is that it will be the great enabler of everything from 5G to the Internet of Everything (IoX). But, like all evolving technologies, this one has bumps.
Hype aside, AI is a two-headed monster. If there is anything history has taught us, it is that however powerful a tool may be, that power can be subverted to do evil.
One of the most visible vectors where AI holds a lot of promise is cybersecurity. But this vector also has some of the highest potential for fallout. This is because the same approach that is applied to cybersecurity is used by the dark side to circumvent that security. In fact, it is sometimes easier for a hacker to break security using AI than it is for AI to set up a good security screen.
Why? Well, because, truthfully, the code is imperfect. It is usually much more complex than it needs to be, riddled with a variety of flaws, and usually a concoction of multiple programming methodologies and programmers. These elements make code a wide and shallow target for compromise. This concern becomes vital with 5G and IoX devices, many of which will operate autonomously. Others will be in the hands of consumers, who have little comprehension of the metrics of cybersecurity.
Another dark-side capability of AI is that it can be made to mimic human behaviors. An unsuspecting user might be tricked into revealing a password, or into granting the mimicking AI access to a mobile device, for example. It might also direct a user to download malware files, make a shady transaction, or relay confidential information. It could be capable of doing the same with autonomous devices.
These are only two of many potential cracks in the AI ecosystem. As AI evolves, there certainly will be more, some of which will surprise us.
Now, moving to the light side, one of AI’s strongest assets in cybersecurity is analyzing code. With modern high-power hardware, AI can run through lines of code much faster and more effectively than either humans or present analysis software. It can root out flaws in the alpha and beta stages, before the application or program goes public. That closes a significant window of opportunity for hackers, since the code now leaves the “factory” with a much higher percentage of un-exploitable programming. AI will also be capable of monitoring deployed devices, analyzing code that is already out there, and patching that code as well, closing yet another opening for hackers.
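To make the idea concrete, here is a purely illustrative sketch of automated code scanning. Real AI-driven analysis learns flaw patterns from large code corpora; this simple rule-based pass (the function name, the sample source, and the short list of “risky” calls are all hypothetical) only shows the kind of mechanical sweep such a tool automates:

```python
import ast

# Hypothetical minimal static scanner. Real AI-assisted tools learn flaw
# patterns from huge corpora; this rule-based sketch merely flags a few
# classic risky calls to illustrate automated code review.
RISKY_CALLS = {"eval", "exec", "system", "loads"}

def scan_source(source: str) -> list:
    """Return (line, call name) for each risky call found in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            name = getattr(func, "id", getattr(func, "attr", None))
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

sample = "import os\nos.system('rm tmp')\nresult = eval(user_input)\n"
print(scan_source(sample))  # [(2, 'system'), (3, 'eval')]
```

An AI-based scanner would go far beyond a fixed call list, but the workflow is the same: parse, sweep, flag, and (eventually) patch, at machine speed.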
Another rising star in the AI wheelhouse is unstructured data. Present-day hardware is very good at processing structured data. Unfortunately, the universe is not made of that. Real-life data is largely unstructured and cannot be effectively analyzed without a lot of massaging first.
This vector has a lot of potential. AI can be used to analyze unstructured data and learn from the non-linearities it contains. It is the content that is the most valuable. For example, AI can be used to understand how the human brain makes non-logical assessments, something today’s computers find difficult to comprehend.
This can be extremely valuable in the mobile world: autonomous vehicles, smart “X,” consumer retail, and enterprise apps, just to mention a few. All of these ecosystems have unique anomalies that make their data unstructured. Couple that with Big Data, and a fuzzy approach to analysis provides much more reliable and specific results.
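A toy example of handling unstructured data: free-form text has no schema, yet even a crude bag-of-words comparison can group related items. This is only a sketch (the sample log lines are invented); production systems would use learned embeddings rather than raw word counts:

```python
import math
from collections import Counter

# Illustrative only: real unstructured-data pipelines use learned embeddings,
# but simple bag-of-words cosine similarity already shows how free text can
# be compared without forcing it into a rigid schema first.
def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical device logs -- classic unstructured, real-life data.
log_a = "sensor timeout on gateway retrying connection"
log_b = "gateway connection timeout sensor offline"
log_c = "user updated billing address"

# The two network-fault logs score as far more similar than the unrelated one.
print(cosine(vectorize(log_a), vectorize(log_b)) >
      cosine(vectorize(log_a), vectorize(log_c)))  # True
```

The point is the fuzziness: nothing here required the logs to share a format, only overlapping content, which is exactly the kind of tolerance structured pipelines lack.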
AI is still just emerging. There are some novel examples out there; the most visible is in human-centric robots. While that is an attention-getting vector, the real AI will be much more of a behind-the-scenes platform.
Special Editorial Report – July 11, 2017
5G is being touted as the end-all and be-all of wireless networks. It matters little with whom one chats; everybody has a vision for 5G. Autonomous vehicles; remotely piloted vehicles (RPVs, commonly called drones); smart cities, homes and infrastructure; eHealth; artificial intelligence; remote control; virtual everything (VX); personal communications; enterprise mobility…the list goes on and on. And it will all be enabled by 5G, “they” say.
That is a lofty vision for two platforms, 5G and the Internet of Everything/Everyone (IoX), that are nowhere near being fully defined, let alone deployed. But that is the beauty of it. At this nebulous stage, just about any claim about what they are capable of can be presented.
For example, the other day I got a feed that claims that the IoX is the killer app and 5G will enable that. How? By providing a pervasive, ubiquitous, interwoven fabric of interconnect for everything and everyone. It is supposed to have ultra-low latency, unlimited bandwidth, be super-fast and cover every square inch of the planet.
The year 2020 is supposed to be a pivotal point for both 5G and the IoX. We are two and a half years away from that. Early claims that at least 50 billion devices would be on the IoX by 2020 have shrunk to more like 10 or 20 billion. The IoX will exist, in its true form, only when everything and everyone is on it. That is not going to happen by 2020. Truthfully, I don’t have a clue when it will happen, or when the moment will arrive that lets us say we now have the IoX.
I recently spoke with a contact I have at California PATH (Institute of Transportation Studies, UC Berkeley). These guys are at the top of the autonomous vehicle food chain. We were talking about true autonomous vehicles and an infrastructure to support them, i.e., vehicles with no user controls or intervention capabilities. His take is that this won’t likely occur until the next century. That is over 80 years away. Even if he misses it by half, it is still 40+ years out. Yet, if you listen to some of the rhetoric, driverless cars are just around the corner.
I get some of that same feedback from my contemporaries in the wireless space. Just because 3GPP moved up the completion of the non-standalone (NSA) implementation of 5G New Radio (NR) from 2018 to sometime in 2017 doesn’t mean everything else is automatically going to scale proportionately. And that is just the non-standalone variant; where is the standalone standard? Now AT&T is saying that it will lay the groundwork for 5G next year.
There is no doubt that some “5G” or 5G-like services will start to roll out in the next couple of years, especially the fixed P2P and P2MP stuff. But, in reality, the “killer” apps and visions of 5G and the IoX are at least five to 10 years away, maybe more. A world of lightning-fast speed, low single-digit network latency and “unlimited” bandwidth that connects everything and everyone exists only in theory. There are some bits and pieces out there, in bounded applications, but that is about it. Some things will never need the bleeding edge (do I really care how fast my smart socks tell my smart washer what setting to use?). Some things that will be under the 5G and IoX umbrella are already here and working today. Others will function fine with less than 5G metrics, as part of both the IoX and 5G, for years to come (industrial M2M, for example).
Let’s take a quick detour to other segments. Heck, I know people who are still recording video to a VCR (not many, I have to admit)! And the shiny, new 2017 semi-smart car I just bought still has a CD player right next to the flash drive port. And, they still make tube amplifiers!
So when someone says to me that the IoX, or 5G or autonomous vehicles are the next killer apps, I have to just shake my head. I have been in technology since the Z80 and public safety radio was primarily 150 to 450 MHz. I have seen a lot come and go.
As an engineer, I understand technology development. Yes, Moore’s law has been a staple for 50+ years, but today it isn’t about doubling much of anything anymore. Some even say it is dead, and I now find myself tending to agree.
To wit, one of the places it has face-planted is in semiconductors. There are segments where semis are bumping up against the laws of physics. Take PC microprocessors, for example. They have plateaued, or nearly so, at today’s speeds and performance levels. It has taken years to go from 2 GHz to around 3.5 GHz. Today, the directions of improvement are mostly tangential: parallel processing, chip stacking, multiprocessors and the like. As well, with some exceptions, the software industry is still hung up on single-threaded implementation.
It is imperative that the 5G and IoX ecosystems not get similarly hung up on single specs, or on a focus on homogeneity wherever that path leads. Going forward, at least until we lasso quantum physics or discover ways to push past the limits of physics as we know it, increased speed will depend on efficiency, implementation, agility, smart allocation, optimization and virtualization. Running flat out with a focus on Moore’s law is no longer a long-term strategy.
One thing often forgotten, and just as often responsible for progress, is that there is more than just technology involved. An example of that is the latest generation of smartphones. They are just chock full of the latest technology, yet sales are flat, or even slipping.
So just because one chip of the NR set is coming out a bit sooner doesn’t suddenly shift the whole metric. NR is only one, albeit important, element of 5G. As well, aside from hardware and software, there are politics, varying priorities, governmental regulations, spectrum management issues, even human fickleness. There is never going to be a definable “aha” moment where yesterday we had 4G and now we have 5G. The same goes for the IoX.
It is much more reasonable to expect 5G and the IoX to come in stages, in fits and starts, and to consider much of what is being said about their being “here” as, for the time being, just hype. We will simply become aware, bit by bit, that they exist, and someday, when most of the criteria are met, they will have…arrived!
July 6, 2017 —
T-Mobile, which has recently laid claim to the 600 MHz band for 5G, has set its sights on including 3.5 GHz in the 5G spectrum ecosystem. The carrier has petitioned the FCC to look at modifying the rules governing 3550-3700 MHz, known as the Citizens Broadband Radio Service (CBRS), to better facilitate 5G technologies.
T-Mobile’s Petition for Rulemaking asks the FCC to auction all 150 megahertz of spectrum in the 3.5 GHz band to Priority Access License (PAL) licensees while maintaining opportunities for licensed-by-rule users of the spectrum access system.
In a blog post this week, T-Mobile CTO Neville Ray made the case that the FCC needs to look to more bands other than the millimeter band to achieve the promise of 5G, and he said 3.5 GHz fits that bill.
“3.5 GHz is great mid-band spectrum for 5G. As the current FCC has recognized, a balanced spectrum portfolio, including mid-band spectrum – between 1 GHz and 6 GHz – is essential to ensure the United States has complete 5G networks,” Ray wrote. “It has better coverage characteristics than high-band spectrum, meaning that it can help deliver the promise of 5G to rural areas.”
The Petition notes that while 5G technologies are expected to use 40-50 megahertz channels, the FCC’s CBRS rules limit PALs to 70 megahertz per market. That licensing structure would limit the number of carriers to one per market, which would strip OEMs of the impetus to make handsets for the band, T-Mobile said.
“In order to optimize the 3.5 GHz band for 5G, there must be an opportunity for multiple carriers to aggregate larger bandwidths,” T-Mobile writes.
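The petition’s arithmetic is easy to check. A quick sketch, using only the figures cited in the article (the 70 megahertz PAL cap, the 40-50 megahertz 5G channel widths, and the 150 megahertz band):

```python
# Back-of-the-envelope check of the petition's arithmetic, using figures
# reported in the article.
pal_cap_mhz = 70         # current CBRS limit on PAL spectrum per market
wide_channel_mhz = 50    # upper end of the 5G channel widths cited
full_band_mhz = 150      # the whole 3550-3700 MHz band

# Under the cap, only one wide 5G channel fits per market.
print(pal_cap_mhz // wide_channel_mhz)    # 1

# If the full band were auctioned, three such carriers could fit.
print(full_band_mhz // wide_channel_mhz)  # 3
```

That single-carrier-per-market result is exactly the handset-ecosystem problem T-Mobile is raising.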
Additionally, the 3.5 GHz band is adjacent to spectrum that has been proposed for 5G in Sen. Thune’s MOBILE NOW legislation (3100-3550 MHz and 3700-4200 MHz). Ray notes that CBRS coupled with the MOBILE NOW spectrum would total 1100 megahertz, which he refers to as a “great start.”
July 5, 2017 –
The IEEE Vehicular Technology Society board of directors has awarded the 2017 Neal Shepherd Propagation Prize to the NYU Wireless-led paper, “Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications.”
The May 2016 paper, led by NYU Wireless graduate student Shu Sun, studied the sensitivity and accuracy of large scale path loss models that predict levels of signal and interference for 5G millimeter wave systems. This paper was a collaborative work conducted by NYU Wireless along with NYU Wireless industrial affiliate companies Nokia and Qualcomm, as well as Aalborg University in Europe.
“The prize paper has become a must-read study for everyone in the 5G research community and standards bodies, as it used a massive set of empirical millimeter wave and UHF propagation data from across the world to prove the superiority of the optional ‘close in’ one-meter reference distance path loss model that has recently been adopted by ITU and 3GPP standards bodies in the design of future indoor and urban cellular systems,” said Prof. Theodore (Ted) S. Rappaport, founding director, NYU Wireless.
The Neal Shepherd Propagation Prize is an annual award given to the best propagation paper in IEEE Transactions on Vehicular Technology. The award will be presented to Sun and the paper’s other coauthors at the Fall 2017 IEEE Vehicular Technology Conference in Montreal in September.
July 3, 2017 —
SK Telecom and Samsung Electronics have completed an end-to-end trial at 3.5 GHz using Samsung’s 5G virtualized core, virtualized RAN, Distributed Unit (baseband unit and radio unit) and test device that are based on the latest 3GPP 5G NR standards elements.
The results: speeds over 1 Gbps and latency of 1.2 milliseconds, achieved by reducing the transmission time interval (TTI) to 0.25 millisecond, about one quarter of 4G LTE’s transmission time. In addition to the latency improvements, carrier aggregation allowed a channel bandwidth of 80 megahertz (20 megahertz is the maximum channel bandwidth for LTE), which made consistent gigabit performance possible.
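The reported figures line up cleanly against LTE’s baseline. A quick check, using the numbers from the article (LTE’s 1 ms TTI is inferred from the “one quarter” comparison):

```python
# Sanity check of the trial figures as reported.
lte_tti_ms = 1.0          # 4G LTE transmission time interval (inferred baseline)
trial_tti_ms = 0.25       # reduced TTI in the SK Telecom/Samsung trial
lte_channel_mhz = 20      # LTE's maximum channel bandwidth
trial_channel_mhz = 80    # aggregated bandwidth achieved in the trial

# TTI cut to one quarter of LTE's, matching the claimed latency gain.
print(trial_tti_ms / lte_tti_ms)            # 0.25

# Aggregated bandwidth equals four LTE-width channels.
print(trial_channel_mhz / lte_channel_mhz)  # 4.0
```

Shorter TTIs shrink the per-transmission wait, while the 4x channel width supplies the raw capacity for consistent gigabit throughput.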
Samsung said virtualization played a significant role in the trial’s success. New applications and functions for services can be deployed with Mobile Edge Computing (MEC), according to the OEM.
SK Telecom and Samsung have been exploring 5G communications in the 28 GHz band, which enables fast transmission of large data volumes across wide bandwidths. The 3.5 GHz band, on the other hand, offers broader, more stable network coverage.