Is artificial intelligence out of control? Is it our best future? A lively ninety-minute panel discussion this week with experts from industry, academia, and journalism helped make sense of the crucial topic.
David Ignatius, prize-winning columnist and associate editor for The Washington Post, moderated the panel at the Johns Hopkins University Homewood Campus, which was part of an event presented by GREAT TALK Inc. in partnership with the Alexander Grass Humanities Institute at Johns Hopkins University. Disagreements arose on specifics, but there was consensus that humans can, will, and should maintain control of the use of AI to chart the future.
While the topic of AI, for many, seems only recently to be crashing into public consciousness, Kathleen Featheringham of Booz Allen Hamilton assures us we've been using AI in our daily lives for much longer than we realize. "Is it our future? One hundred percent," said Featheringham, director of the artificial intelligence practice for Booz Allen. "But how and what that looks like is really up to us. And we should really take a very proactive role in what that looks like."
Concerned about the misinformation surrounding every trending AI story, like the recent publicity around ChatGPT and its role in everything from search engine responses to the writing of news articles, she spends much of her time demystifying what AI is and is not.
Ignatius asked her, "What's the brief version of that?"
"The simple version is that it's math," Featheringham said. "It really is. It's mathematical equations that put together a set of commands to do certain things."
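Featheringham's description can be made concrete with a minimal sketch (the function, weights, and inputs below are invented for illustration, not drawn from the panel): even a single "neuron" in a neural network reduces to a weighted sum passed through a simple decision rule, in other words, math.

```python
# Illustrative only: one "neuron" is just arithmetic, a weighted
# sum of inputs followed by a simple yes/no decision.

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, reduced to a 0/1 decision."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hypothetical example: two input features, hand-picked weights.
# 0.5 * 1.0 + 0.2 * -2.0 = 0.1, which is positive, so the output is 1.
print(neuron([0.5, 0.2], [1.0, -2.0], 0.0))
```

Stacking many such units does not change the nature of the computation; it remains equations executing commands.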
Marc Rotenberg's domain is the law, and ensuring AI is not applied in ways that deprive people of their rights. President and founder of the Center for AI and Digital Policy, he was intrigued by a decision of the European Union's highest court, the Court of Justice, on identifying security risks among travelers.
Rotenberg found it interesting that courts are beginning to draw a line between outputs we can prescribe as humans and outputs the computer determines through its own "self-learning."
"The Court of Justice looked at a system to evaluate travelers for the risk to public safety," Rotenberg began. The system was designed with pre-existing criteria to determine whether a traveler might commit a terrorist act, and it was because the system applied these human-made criteria that the Court of Justice found it legal.
"But if the system was based on self-learning, because it taught itself that people from these countries, or people who purchased these tickets in cash, or other factors contributed to a higher statistical likelihood of finding a suspected terrorist, the Court of Justice would have said, 'No. That, we won't allow.'"
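The line the court drew can be sketched in code (the criteria, weights, and threshold below are hypothetical, invented purely for illustration and not taken from the actual system): a human-authored rule is explicit and reviewable, while a "self-learned" rule is a statistical score whose basis is opaque.

```python
# Illustrative contrast between a human-prescribed criterion and a
# statistically "self-learned" one. All rules here are made up.

def preset_rule(traveler):
    """Human-authored criterion: fixed, explicit, reviewable."""
    return traveler.get("on_watchlist", False)

def learned_rule(traveler, learned_weights):
    """'Self-learned' criterion: a score built from statistical weights,
    so the basis for the decision is harder to inspect or contest."""
    score = sum(learned_weights.get(k, 0.0) for k, v in traveler.items() if v)
    return score > 1.0
```

Under the court's reasoning as Rotenberg described it, a system like `preset_rule` was permissible; one like `learned_rule` would not have been.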
Ignatius pulled the thread on the topic of human consciousness, asking Rama Chellappa to describe the โconsciousnessโ of the most advanced AI systems, if such a thing existed.
Chellappa, chief scientist at the Johns Hopkins Institute for Assured Autonomy, believes we are not there yet. He said that context is very important, and that algorithms lack common sense and decision-making capabilities. "Humans have that … AI doesn't have that." He used the example of AI in cars: "A Tesla that's driving well in Indiana won't do well in Mumbai." Computers lack reason.
Peter Levin, while agreeing that computers do not see or think, said they do analyze. Co-founder and CEO of Amida Technology Solutions, Levin believes "the problem is that you can contaminate the data stream."
Manipulation of data, he argued, "can trick the algorithms into thinking there's a weapon where there is no weapon, or concealing armaments that are actually there." While he conceded the phenomenon is known as "adversarial AI," he argued it's actually "adversarial data."
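A minimal sketch of what Levin describes (the detector, weights, threshold, and feature values are all invented for illustration): a small manipulation of the input data pushes a detector's score across its decision threshold, so the algorithm itself is unchanged but its output flips.

```python
# Toy "adversarial data" example: a tiny linear detector whose
# decision can be flipped by a small change to the input data.

weights = [2.0, -1.0, 0.5]   # hypothetical detector weights
threshold = 1.0

def detects_weapon(features):
    score = sum(w * f for w, f in zip(weights, features))
    return score > threshold

clean = [0.6, 0.1, 0.2]       # score = 1.2, above threshold: detected
tampered = [0.45, 0.1, 0.2]   # first value nudged down; score = 0.9: missed

print(detects_weapon(clean), detects_weapon(tampered))
```

The model is the same in both calls; only the data differs, which is Levin's point that the vulnerability lies in the data stream rather than in the algorithm.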
This ties into the overarching theme that emerged regarding the language surrounding AI. All panelists agreed care should be taken to accurately reflect machine capabilities and to manage both expectations and the public's understandable apprehension. There was broad agreement that the term "learning" should not be applied to computers; learning is a uniquely human and animal capability.
Featheringham warned against humanizing machines. "Humans and machines are not the same things. … Humans by nature are good at critical thinking. You have in your head one of the best, most powerful supercomputers that there is. … Machines have to be taught." Levin agreed, insisting that computers are not creative.
Rotenberg pointed to the rapidly accelerating rate of change as a legitimate source of fear among scientists and laypeople alike, while Chellappa is deeply concerned about the growing educational disparity between the coasts and middle America.
Rotenberg's center envisions two futures: one in which we manage AI in an open, pluralistic, innovative society; in the other, AI largely manages us in a closed, authoritarian society. Scientists believe each is possible, but to achieve the former, "[t]here is a real belief around the world that we need human-centric, trustworthy AI … to protect fundamental human rights."
Editor’s note: This article has been updated to clarify the presenters of the panel conversation.
