David Ignatius, left, and Kathleen Featheringham, Marc Rotenberg, Rama Chellappa, and Peter Levin discuss the future of AI at a Great Talk Maryland panel discussion. Credit: Aliza Worthington

Is artificial intelligence out of control? Is it our best future? A lively ninety-minute panel discussion this week with experts from industry, academia, and journalism helped make sense of the crucial topic.

David Ignatius, prize-winning columnist and associate editor for The Washington Post, moderated the panel at the Johns Hopkins University Homewood Campus, part of an event presented by GREAT TALK Inc. in partnership with the Alexander Grass Humanities Institute at Johns Hopkins University. Disagreements arose on specifics, but there was consensus that humans can, will, and should maintain control of the use of AI to chart the future.

While the topic of AI, for many, seems only recently to have crashed into public consciousness, Kathleen Featheringham of Booz Allen Hamilton assures us we’ve been using AI in our daily lives for much longer than we realize. “Is it our future? One hundred percent,” said Featheringham, director of the artificial intelligence practice for Booz Allen. “But how and what that looks like is really up to us. And we should really take a very proactive role in what that looks like.”

Concerned about the misinformation surrounding every trending AI story, like the recent publicity around ChatGPT and its role in everything from search engine responses to the creation of news articles, she spends much of her time demystifying what AI is and is not.

Ignatius asked her, “What’s the brief version of that?”

“The simple version is that it’s math,” Featheringham said. “It really is. It’s mathematical equations that put together a set of commands to do certain things.”

Marc Rotenberg’s domain is the law, and making sure AI is not applied in ways that deprive people of their rights. President and founder of the Center for AI and Digital Policy, he was intrigued by a decision on identifying security risks among travelers from the European Union’s highest court, the Court of Justice.

Rotenberg found it interesting that courts are beginning to draw a line between the outputs we can prescribe as humans vs. outputs the computer determines based on its own “self-learning.”

“The Court of Justice looked at a system to evaluate travelers for the risk to public safety,” Rotenberg began. The system was designed with pre-existing criteria to determine whether a traveler might commit a terrorist act, and it was because the system applied these human-made criteria that the Court of Justice ruled it legal.

“But if the system was based on self-learning, because it taught itself people from these countries, or people purchased these tickets in cash, or other factors that contributed to a higher statistical likelihood of finding a suspected terrorist, the Court of Justice would have said, ‘No. That, we won’t allow.’”

Ignatius pulled the thread on the topic of human consciousness, asking Rama Chellappa to describe the “consciousness” of the most advanced AI systems, if such a thing existed.

Chellappa, chief scientist at the Johns Hopkins Institute for Assured Autonomy, believes we are not there yet. He said that context is very important, and algorithms lack common sense and decision-making capabilities. “Humans have that…AI doesn’t have that.” He used the example of AI in cars. “A Tesla that’s driving well in Indiana won’t do well in Mumbai.” Computers lack reason.

Peter Levin, while agreeing that computers do not see or think, said they do analyze. Co-founder and CEO of Amida Technology Solutions, Levin believes “the problem is that you can contaminate the data stream.”

Manipulation of data, he argued, “can trick the algorithms into thinking there’s a weapon where there is no weapon, or concealing armaments that are actually there.” While he conceded that the phenomenon is known as “adversarial AI,” he argued it is really “adversarial data.”

This ties in with the overarching theme that emerged regarding the language surrounding AI. All panelists agreed that care should be taken to accurately reflect machine capabilities and to manage both expectations and understandable apprehension among the public. There was broad agreement that the term “learning” should not be applied to computers; learning is a uniquely human and animal capability.

Featheringham warned against humanizing machines. “Humans and machines are not the same things…. humans by nature are good at critical thinking. You have in your head one of the best, most powerful supercomputers that there is…. machines have to be taught.” Levin agreed, insisting that computers are not creative.

Rotenberg pointed to the rapidly accelerating rate of change as a legitimate source of fear among scientists and laypeople alike, while Chellappa is deeply concerned about the growing educational disparity between the coasts and middle America.

Rotenberg’s center envisions two futures: one in which we manage AI in an open, pluralistic, innovative society, and another in which AI largely manages us in a closed, authoritarian society. Scientists believe each is possible, but to achieve the former, “[t]here is a real belief around the world that we need human-centric, trustworthy AI…to protect fundamental human rights.”

Editor’s note: This article has been updated to clarify the presenters of the panel conversation.

1 Comment

  1. It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
