
Autonomous Vehicles Need to Predict the Future

By Ernest Worthman, Executive Editor, AWT magazine, Senior Member, IEEE

Tech Talk


The autonomous vehicle (AV) scene has been, well… blah, lately. Why? Because much of what can be done to advance it is hitting the technology wall. We are stuck between levels three and four. We have maxed out level three technology (except for incremental upgrades), and the leap from level three to level four is much wider than the leaps between previous levels.

There are several reasons we cannot go sans-drivers yet. But that is not what I want to talk about in this column. What I do want to discuss is how we are moving to level four.

Some will argue that the technology exists to have level four. I would argue that we have some success in some of the segments that make level four possible. However, there are some critical components still missing or too rudimentary to make full level four a reality – primarily today’s artificial intelligence.

The current focus is on sensors, which is where most of the money is at present. Cameras, microphones, shock and vibration, environmental (temperature, moisture, air) and air interface (RF) sensors have all reached highly functional levels. So, I would be willing to say that this part of the goal has reached level four. What has not is real-time preemption (comprehension, if you will), and that is the tipping point.

There are plenty of examples of driverless vehicle successes in some segments. However, they are all in some sort of controlled environment, which shows only that such vehicles are capable of operating without human intervention. Controlled applications do not require situational comprehension or intuition. Nor do they require a fat database of possible scenarios – or AI, really. So, none of this is sufficiently sophisticated to be unleashed in uncontrolled, real-life environments.

There is, however, quite a bit of quiet activity going on in this space. Most of it has to do with tweaking existing technologies and adding components toward the goal of level four, as well as figuring out ways to make vehicles recognize a situation and react based upon the highest probability of being correct.

That may sound simple, but it is not – especially the real-time relationship between situation and reaction. The reason is simple enough, though: the multiplicity and complexity of possible scenarios – i.e., the need to predict the future. Thus, we are stalled here.

Of course, no one, not even AI, can predict the future. However, the use case for AI in autonomous vehicles requires predicting the most probable outcome in any given scenario with a high success rate. That requires some awareness of future outcomes. So far, AI, for all its capabilities, cannot do that well.

The solution is, of course, AI combined with complex algorithms. And, speaking of complex algorithms, it just so happens that the TÜV (the Technischer Überwachungsverein, or Technical Inspection Association, of Germany and Austria – the vehicular inspection and product certification organization) is working on exactly that.

One of the issues I have had with full level four and five autonomous vehicles is that it will take forever to amass enough scenarios to make the vehicle sufficiently intelligent to have five-nines, or better, accuracy.

Even with AI, machine learning, machine intelligence, fuzzy logic, deep learning, and everything else, current AI-empowered vehicles cannot manage all of the scenarios. Yet for the autonomous vehicle to thrive, it must be able to predict with (ideally) perfect accuracy. There is also the argument that even mediocre AI-controlled AVs are as capable as, if not more so than, many drivers.

That may or may not be true, but let us play with some numbers. Consider this: there are, today, roughly 1.42 billion cars in operation worldwide, including 1.06 billion passenger cars and 363 million commercial vehicles. Depending upon where you live, the number varies, but the global average is 18 car-related deaths per 100,000 people – all with differing accident conditions.

Doing a bit of math, applying that rate puts the global death count around 260,000, give or take, for the 1.42 billion cars. Percentage-wise, that is roughly 0.018 percent. And that covers only fatal incidents. Add non-fatal and unreported incidents, and the true number is likely two or three times higher. Thus, for AVs to break even, that is the incident rate they need to match – and, to justify themselves, beat.
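For those who want to check that arithmetic, here is the back-of-the-envelope version in a few lines of Python (the figures are the ones above; the two-to-three-times multiplier for non-fatal and unreported incidents is the rough assumption from the preceding paragraph):

```python
# Back-of-the-envelope check of the break-even math above.
cars = 1.42e9            # vehicles in operation worldwide
deaths_per_100k = 18     # global average car-related deaths per 100,000

fatal = cars * deaths_per_100k / 100_000
print(f"Fatal incidents per year: ~{fatal:,.0f}")   # ~255,600
print(f"Fatality rate: {fatal / cars:.4%}")         # ~0.0180%

# Rough assumption from the text: non-fatal and unreported
# incidents multiply the count by two to three.
for k in (2, 3):
    print(f"x{k}: ~{fatal * k:,.0f} incidents ({fatal * k / cars:.4%})")
```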

Now, this is simplistic math, and it assumes every vehicle is autonomous. But my point is that, for AI in AVs to work at this scale, it will have to be very intelligent indeed.

However, even with the most basic understanding of connected and autonomous vehicles, it just makes sense that simply operating according to known scenarios, obstacles and potential causes of accidents will not work. AVs must be able, as humans are, to react to uncommon scenarios and to predict unknown ones.

Common knowledge is that AI and its cohorts are the only way this can be accomplished (short of using the Vulcan mind-meld on the vehicle computers).

At the base level, the answer to this, as to just about every other computer-managed device, is the algorithm.

It just so happens that the TÜV has taken on pondering the worst thing that could happen at any and every given moment, and figuring out how to get out of it without endangering or obstructing traffic.

They have developed a new self-driving car algorithm, dubbed the Continuous Learning Machine, an AI tool that automatically labels and mines training data to enable connected autonomous vehicles to react to unpredicted events, such as bicycles swerving onto the road amid traffic or kids running into the street. In essence, it is all about predicting doom – and there is a lot of it. However, the solution is not to create as many scenarios as can occur. As I mentioned earlier, that is a rather daunting challenge. A better solution is to use AI to create patterns.

The fundamentals of this are not particularly complicated. What is complicated is having the AI create patterns – pattern recognition – by learning from large quantities of data, then using that learning to “guess” what is most likely to occur. The result is much less static data and computation fast enough to approximate real time.
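To make the pattern idea concrete, here is a loose sketch – emphatically not TÜV's or TUM's actual system; the features, data and labels are invented for illustration. The point is that a learned model generalizes to situations it has never stored, where a scenario lookup table cannot:

```python
# Sketch: pattern learning instead of a scenario lookup table.
# Features and labels are hypothetical, for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Each row: [obstacle_distance_m, closing_speed_mps, occlusion_0_to_1]
situations = [
    [2.0, 8.0, 0.9],    # close, fast, badly occluded
    [30.0, 2.0, 0.1],   # far, slow, clear view
    [5.0, 6.0, 0.7],
    [25.0, 3.0, 0.2],
]
actions = ["brake_hard", "maintain", "brake_hard", "maintain"]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(situations, actions)

# A situation the model has never seen: it predicts from learned
# patterns rather than requiring an exact match in a database.
novel = [[4.0, 7.5, 0.8]]
print(model.predict(novel))        # -> ['brake_hard']
print(model.predict_proba(novel))  # class probabilities, the "guess"
```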

Just as no human is born with the knowledge to drive a car safely, neither is a computer. And, thus far, computers are less effective than humans simply because a virtually unlimited sink of data must be stored to consider every possible scenario, whereas humans can deduce from far less data, all things being equal.

So, it is about the ability to use rational thinking – to deduce something from a collection of experiences – something only the human mind is truly capable of. Sure, we can approximate that with huge volumes of data, complicated neural networks, and the like. But the problem remains that a huge amount of data is needed for computers to even come close to deductive reasoning.

The traditional approach is to drive and drive and drive. While that is possible, the number of miles and the time it takes to build even a reasonable database are prohibitive. A better approach is to collect data from multiple sources – in this case, millions of them, at least.

Neither of these approaches is practical at the moment, unfortunately. Even doing this virtually has its challenges, because a nearly unlimited range of scenarios would have to be programmed.

This is why the TÜV is going in this direction. There is a need for AI algorithms – one of which has been developed by Germany’s Technical University of Munich (TUM) – to be flexible. The purpose of such algorithms is to constantly predict the worst possible situation. However, that is extremely computationally intensive (quantum computing, anyone?).

The trick is to constantly improve the algorithm by giving it enough data on uncommon events, along with actions to execute if such events occur. This would allow these algorithms to improve by increasing their accuracy and the number of cases they can predict. It is an excellent use case for big data.
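As a schematic of what “constantly predict the worst possible situation” can mean in practice – a simplified illustration, not TUM's published algorithm, and with all the physical bounds invented – consider checking, at every planning step, whether the vehicle could still stop safely if the actor ahead did the most adverse thing it physically can:

```python
# Schematic worst-case check -- an illustration of the idea, not
# TUM's published algorithm. All bounds here are invented.
from dataclasses import dataclass

@dataclass
class Actor:
    gap_m: float        # current distance ahead of our vehicle
    speed_mps: float
    max_decel: float    # hardest braking the actor could physically do

def worst_case_margin(actor: Actor, ego_speed: float,
                      ego_max_decel: float, reaction_s: float = 0.5) -> float:
    """Remaining distance if the actor brakes as hard as possible and
    we brake at our own limit after a reaction delay."""
    actor_stop = actor.speed_mps ** 2 / (2 * actor.max_decel)
    ego_stop = ego_speed * reaction_s + ego_speed ** 2 / (2 * ego_max_decel)
    return actor.gap_m + actor_stop - ego_stop

lead_car = Actor(gap_m=30.0, speed_mps=14.0, max_decel=8.0)
margin = worst_case_margin(lead_car, ego_speed=14.0, ego_max_decel=7.0)
# A negative margin means the plan fails the worst case: switch to a
# pre-computed fail-safe trajectory instead.
print("safe" if margin > 0 else "execute fail-safe", round(margin, 1))
```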

As I mentioned, it is relatively straightforward to develop a vehicle that can operate in a known environment. But that will not work in edge cases. Thus, levels four and five autonomous driving in real-life environments are still a long way off.

Finally, there are non-technical issues yet to be resolved: legal issues, ownership, responsible parties, liability, and more. These issues are constantly being debated and will likely remain so for some time before solutions are reached.

Yes, we are still a long way off from levels four and five AVs.


Level 4 requires high automation.

Level 5 requires full autonomy.

With AI, Telecom Field Services Hit New Levels of Efficiency

By J Sharpe Smith, Senior Editor

Artificial Intelligence, one of the hottest leading-edge technologies, can teach a camera to spot a cheetah, help a doctor make a diagnosis or allow a car to drive autonomously. One new platform can now help companies with field service operations, such as telecom services, become more efficient, according to David Simmons, director of innovation and technology for telecommunications at Black & Veatch.

“We are using intelligent automation to be able to learn as we are performing scopes of work for our clients, whether it involves self-performing or subcontracting that work out,” Simmons told AGL eDigest. “Learning how to best connect that scope of work with the best resource depends on automatically accessing a number of factors, such as location, performance, skills and safety.”

The Intelligent Service Automation and Control (ISAC) platform provided by Zinier takes those overall variables into account in real time as work orders pass through it. This aligns the resources with the right work at the right time. By deepening real-time visibility into the field, ISAC anticipates service disruptions through AI-driven recommendations, allowing improved operational efficiencies by automating manual front-office, back-office and field-office tasks.

“AI analyzes the data as a human would, but without the emotion or biases of a human,” Simmons said. “We look at it as an opportunity for our subcontract partners to get consistent work with near-real-time payment, because we close out our work orders so effectively and efficiently. We want to leverage the technology to be their preferred partner. We want to make it easy for the subcontractors to work with us.”

For example, if a crew deployed to a site is missing a part, it can report that back in real time to the Zinier platform, which automatically checks inventory. The component is either dispatched to the site, or the crew is diverted to work at a site nearby while the part is back-ordered.
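The decision flow Simmons describes reduces to something like the following – a hypothetical sketch, not Zinier's actual ISAC API; the function and field names are invented:

```python
# Hypothetical sketch of the missing-part flow described above.
# Not Zinier's ISAC API; all names here are invented.
def backorder(part_id: str) -> None:
    print(f"back-ordering {part_id}")

def handle_missing_part(part_id: str, site: str,
                        inventory: dict, nearby_sites: list) -> str:
    if inventory.get(part_id, 0) > 0:
        inventory[part_id] -= 1          # reserve and ship the part
        return f"dispatch {part_id} to {site}; crew stays put"
    backorder(part_id)                   # part unavailable anywhere
    if nearby_sites:                     # keep the crew productive
        return f"divert crew to {nearby_sites[0]} while {part_id} ships"
    return "escalate to scheduler"

stock = {"fiber-splice-tray": 0}
print(handle_missing_part("fiber-splice-tray", "site-A", stock, ["site-B"]))
# -> back-ordering fiber-splice-tray
# -> divert crew to site-B while fiber-splice-tray ships
```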

“The whole idea is to keep the subcontractor out from behind the wheel of the truck and working at the site,” Simmons said. “That’s what we all want. We need to be efficient, so the crews are not sitting around waiting. They could spend a week working on the two sites, instead of waiting an extended period for the part at the first site and not getting paid promptly for either.”

The services firm is able to keep historical diagnostic data for all telco equipment, ensuring the appropriately skilled technician shows up for each maintenance job. The ISAC platform performs predictive analytics to send technicians to perform maintenance before problems occur.

“We want to make sure the crews have all the components they need to successfully perform their jobs, including the engineering artifacts (drawings, structural analysis), and that they have all the permits in place, the right materials, as well as the necessary documentation to validate the work performed. It should all be available in one spot,” Simmons said. “Then you throw in location-based services to be able to evaluate their proximity to the location of the work that we have scheduled at their disposal.”

Figuring out the optimal next site for a crew relies on a set of data elements used within the platform.
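One plausible shape for that calculation – the factors (proximity, skills, past performance, safety) are the ones Simmons lists; the weights and formula are invented for illustration – is a weighted score over each candidate site:

```python
# Illustrative site-scoring sketch; weights and formula are invented.
def score_site(distance_km: float, skill_match: float,
               performance: float, safety: float) -> float:
    proximity = 1.0 / (1.0 + distance_km)   # closer is better
    # skill_match, performance and safety are normalized 0..1.
    return (0.4 * proximity + 0.3 * skill_match
            + 0.2 * performance + 0.1 * safety)

candidates = {
    "site-A": score_site(distance_km=5.0, skill_match=1.0,
                         performance=0.9, safety=0.95),
    "site-B": score_site(distance_km=1.0, skill_match=0.6,
                         performance=0.8, safety=0.90),
}
print(max(candidates, key=candidates.get))   # best next site for the crew
```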

“This is going to prevent folks from having to search for the information, calling back and forth, to do their jobs,” Simmons said. “Information on how to get the work done is at everyone’s fingertips.”

AI: the Right Tech, the Right Time

With carriers pressured to deploy higher data speeds over faster, cheaper networks, it is the telecom services companies’ jobs to facilitate the transition to next-generation 5G wireless communications technology.

“We feel like our partners in the field [tower/fiber crews] are at such a disadvantage compared to the people in the office,” Simmons said. “We have to close that gap. They are the critical linchpin of 5G and the next generation of telecom. In that context, there is so much opportunity from a work perspective that we could do 50 percent more work, which means we could become more efficient with the current workforce and hire additional workers.”

AI is necessary for building out small cells, where the profit margin per site is slim. In the near term, the industry is no longer installing tens of thousands of macro sites annually; instead, it is looking at installing hundreds of thousands of small cell sites.

“The paradigm in which the work is done for small cells has to change,” Simmons said. “Technology has to be at the forefront of that change. We can’t do that efficiently and effectively if it doesn’t scale.”

Black & Veatch launched its first round of deployments using the AI tool last month with a team that is performing fiber splicing. In 2020, the firm intends to partner with its subcontractors in macrocells, small cells and fiber to optimize collaboration on the Zinier platform.

“We have to make the transformation,” Simmons said. “With our partnership with Zinier and with the technology, we are confident we will make a significant, positive improvement throughout the supply chain.”

Next Generation AI is Coming Your Way, Will Security Take a Detour?

By Ernest Worthman, Executive Editor, AWT Magazine; Sr. Member, IEEE

We are all aware that AI has been pervasively deployed in the current generation of assistive technology from Amazon, Google and others. Until now, these devices have been relatively low-tech and simple (including their lack of security).

However, that is about to change. In anticipation of the upcoming holiday season, the major players, Amazon, Facebook, and Google are all upping the game. One might say that AI 2.0 is about to be released.

These next-generation devices go from listen-and-reply to smart displays, adding video to the mix.

Amazon unveiled the Echo Show, and Google is releasing the Home Hub, Pixel 3, Pixel Stand and Pixel Slate. Facebook rolled out the Portal and Portal+ devices for Facebook Messenger video chat and Alexa, with tablet-sized, rotating screens. Portal also connects to Newsy.

Google’s Home Hub connects to a number of apps that help with everything from cooking to smart home management to ride sharing. It, too, comes with a smart screen.

Amazon’s Echo Show offers new video visuals and the ability to act as a hands-free video calling center. It also integrates with smart homes.

However, what all of these devices still have in common are security issues. Adjacent to all of these evolutionary devices is the specter of compromise. Recall that Facebook recently exposed 50 million accounts, with 30 million of them having data stolen. In a similar scenario, Google+ was pulled one day before its debut because a security hole was discovered in the software.

Do not think Amazon escapes the security scrutiny. The Echo has been criticized for some time now for the way it captures data and uses it for any number of purposes. And, tangentially, one of Amazon’s more underhanded actions came to light with the recent discovery that an algorithm in its hiring and recruitment processes had, for years, penalized applications containing the word “women.” Not a security issue, but certainly an unconscionable course.

However, back to privacy issues. While awareness of this is growing, it is not as significant as it should be. A recent PricewaterhouseCoopers survey noted that only 10 percent of non-users avoid smart speakers due to privacy concerns. In other words, 90 percent of non-users either have no clue about the potential security issues or do not care. That is a disturbing metric. To underscore it, adoption of such assistants has grown steadily, and analysts do not see that abating.

These device manufacturers, as well as the app developers linked to them, do not seem to show much of a penchant for upping security or protecting private data. Most of what they do is damage control. All Facebook did was limit the initial use cases for Portal, keeping out much of its knowledge of one’s social life. That is why Portal did not debut with facial recognition software, as had initially been expected.

The big challenge for these segments is trust. I will grant that it is difficult for them to be all that they can be while maintaining security and privacy. Security is the easier of the two. Privacy is more challenging because the users want private and personal data to be available to varying degrees, depending upon personal preferences. In addition, the majority of users cannot be expected to understand how to manage their privacy until it becomes a function that they can understand in very simple terms.

This is a complex wheelhouse that requires a great deal of understanding, by both the user and the provider, regardless of whether it is an app or a device. Add to that the impending Internet of Everything/Everyone (IoX) and it gets even murkier.

In the end, part of it will fall on the user, part on the provider. In any event, personal and private data needs to be, fundamentally, protected and unavailable unless the user, specifically, allows access to it. Storing it anywhere but with the user is not cool. That is the pivotal issue that the vendors need to focus on.

Time for a Reality Check for AI

By Ernest Worthman, AWT Exec. Editor, IEEE Sr. Member

Artificial intelligence platforms, applications, programs, tools, functions, systems, whatever one wants to call them, have been the buzzword of technology for some time now. In fact, AI is considered one of the great enablers for the upcoming 5G ecosystem.

However, knowing what I know about this technology, I have taken a rather conservative opinion, in my writings, of just how much faith has been put in AI to solve the world’s problems.

Apple, Google, Amazon, and others have put AI into our everyday lives with AI-enabled assistants such as Siri and Echo, but peeling back the layers, such implementations are still on the basic scale, even though their creators would have you believe they are the future. Not quite true, but it follows much of the AI hype we have been hearing for the last couple of years.

In that vein, I recently received a report from an organization called Riot Research. It claims the AI bubble is about to burst and bring forth a new era of AI development and implementation. Digging down a bit, I found no shortage of opinions supporting this.

One of its points is that the AI hype has generated unrealistic expectations of what AI is, and will be, capable of – at least for the next five years. Interesting point. Let us look at some of the data that supports the Riot observation.

First of all, it is true AI cannot happen without deep learning and neural networks. The integration is often referred to as machine intelligence. (See my recent PowerPoint presentation on these technologies.) This, again, plays to the 5G ecosystem, where much of the intelligence will be distributed and a high level of intelligence will be required at places like the edge. To be effective (due to the overwhelming integration of platforms, technologies, applications, and the like), AI will have to become self-aware to some degree. So far, that is not the case, for either 5G or other platforms.

A classic case is the oft-used example of AI (deep learning) recognizing an object – a cat. A few years ago, AI (in this case, a neural network) was able to recognize the face of a cat in video streams. That was heralded as a breakthrough. But, cutting to the chase, AI may be able to identify a cat, as a cat, from its database or learning algorithms. It is still quite incapable of knowing whether the cat is real or just a picture (because it has no awareness of what a cat is) without assistance (human or other) – back to the real issue, self-awareness. The concepts are solid, but the technology lags.
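For the curious, the cat-recognition feat is now a few lines of code with an off-the-shelf pretrained network – a minimal sketch assuming the torchvision library and a local cat.jpg. Note what it does not do: it has no way of knowing whether it is looking at a live cat or a photograph of one, which is exactly the awareness gap described above:

```python
# Minimal sketch: a pretrained network labeling a cat image.
# Assumes torchvision is installed and cat.jpg exists locally.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = Image.open("cat.jpg").convert("RGB")
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))

label = weights.meta["categories"][logits.argmax().item()]
print(label)  # e.g. "tabby" -- a statistical pattern match, not comprehension
```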

Now, before I get a flood of responses saying we have self-aware systems, I want to clarify something. I am not talking about the kind of self-awareness one can find in a thermostat that regulates the very temperature it measures. While that is technically correct, it is not the self-awareness I am discussing. I am talking about real self-awareness – ultimately, the concept that machines realize the human race is a threat and would not hesitate to eliminate all forms of life on the planet to protect their autonomy (as depicted in so many sci-fi scenarios).

Of course, that scenario is far out (if ever possible) on the radar screen but it defines the ultimate in machine intelligence. However, initial implementations of this path are visible and will be the core of the AI of tomorrow. How the rest turns out is anybody’s guess.

Do not get me wrong. We have a really good start on AI and its capabilities. However, its current capabilities have been oversold, and this has led to the current bubble. Yes, there is quite a bit of low-hanging fruit available, and that is what has VCs and other investors throwing money at the platforms. Nevertheless, eventually we are going to have to separate the wheat from the chaff, and that will be the reality check coming down the line.


Ernest Worthman
Executive Editor/Applied Wireless Technology
His 20-plus years of editorial experience include serving as the Editorial Director of Wireless Design and Development and Fiber Optic Technology, the Editor of RF Design, the Technical Editor of Communications Magazine, Cellular Business and Global Communications, and a Contributing Technical Editor to Mobile Radio Technology and Satellite Communications, as well as computer-related periodicals such as Windows NT. His technical writing practice client list includes RF Industries, GLOBALFOUNDRIES, Agilent Technologies, Advanced Linear Devices, Ceitec SA, Lucent Technologies, Qwest, the City and County of Denver, Sandia National Labs, Goldman Sachs, and others. Before becoming exclusive to publishing, he was a computer consultant and regularly taught courses and seminars in applications software, hardware technology, operating systems, and electronics. His credentials include a BS in Electronic Engineering Technology and an A.A.S. in Electronic Digital Technology. He has held a Colorado Post-Secondary/Adult teaching credential, been a member of IBM’s Software Developers Assistance Program and Independent Vendor League, and been a Microsoft Solutions Provider Partner. He is a senior/life member of the IEEE, the Press Liaison for the IEEE Vehicular Technology Society, and a member of the IEEE Communications Society, the IEEE MTT Society, the IEEE Vehicular Technology Society and the IEEE 5G Community. He was also a first-class FCC technician in the early days of radio. Ernest Worthman may be contacted at eworthman@aglmediagroup.com or ernest_worthman@ieee.org.

Global Smart Cities to Top $2 Trillion by 2025: Frost & Sullivan

Less than seven years from now, smart cities will create business opportunities to the tune of $2 trillion, driven by artificial intelligence, personalized healthcare, robotics and distributed energy generation, according to the analyst firm Frost & Sullivan.

The Asia-Pacific region is anticipated to be the fastest-growing region in the smart energy space by 2025. In Asia, more than 50 percent of smart cities will be in China, and smart city projects will generate $320 billion for China’s economy by 2025.

North America has been quickly catching up, with many Tier II cities, such as Denver and Portland, committed to building out their smart city portfolios. The total North American smart buildings market, comprising the total value of smart sensors, systems, hardware, controls, and software sold, is projected to reach $5.74 billion in 2020.

Europe will have the largest number of smart city project investments globally, given the engagement that the European Commission has shown in developing these initiatives. The European e-hailing market, central to cities developing smart mobility solutions, currently generates revenues of $50 billion and is estimated to reach $120 billion by 2025.

In Latin America, cities actively developing smart city initiatives include: Mexico City, Guadalajara, Bogotá, Santiago, Buenos Aires and Rio de Janeiro. In Brazil, smart city projects will drive almost 20 percent of the overall $3.2 billion IoT revenue by 2021.

AI plays a key role in smart cities in the areas of smart parking, smart mobility, the smart grid, adaptive signal control and waste management. Major corporations, such as Google, IBM, and Microsoft, remain key tech innovators and the primary drivers of AI adoption.

“AI has been the most funded technology innovation space in the past two years, with large investments coming from independent and corporate venture capital companies,” explained Jillian Walker, visionary innovation principal consultant at Frost & Sullivan.

Along with AI, personalized healthcare, robotics, advanced driver assistance systems (ADAS) and distributed energy generation are believed to be the technological cornerstones of smart cities of the future.

“Currently most smart city models provide solutions in silos and are not interconnected. The future is moving toward integrated solutions that connect all verticals within a single platform. IoT is already paving the way to allow for such solutions,” added Vijay Narayanan, visionary innovation senior research analyst at Frost & Sullivan.

To buy the report, go to http://ww2.frost.com