A TECH-OPTIMIST ON THE FIRST AND LAST QUESTIONS
A Conversation with Bob Muglia about AI, Robotics, and the Future of Humanity
As an early adopter of tablet tech, Bob Muglia doesn’t have many bookcases in his home, but one in his office holds volumes that have had a deep and enduring impact on his life and work. They are the writings of Isaac Asimov, the 20th-century science fiction author whose prolific output demonstrated such visionary genius that he can legitimately be considered a prophet. Muglia read hundreds of Asimov’s books and short stories in his youth, and they shaped his worldview so deeply that both Asimov’s cosmic Humanism and his tech-optimism are clearly reflected in Muglia’s 2023 book The Datapreneurs – a book about what he calls “The Arc of Data Innovation,” from the era of spreadsheets to the Modern Data Stack, and about how it will lead us to Artificial General Intelligence, AI-powered robotics, and beyond. As the former CEO of Snowflake and the past president of Microsoft’s Server and Tools Business, Muglia is perfectly positioned to tell this story.
It was a pleasure to interview him in this office where, in the same room as the volumes of Asimov’s Robot books and his Foundation series – some of them beat-up old paperbacks – a huge fossil of a crocodile hangs on the wall above a blue sofa covered with tribal fabrics and surrounded by sculptures from Papua New Guinea. “I find New Guinea to be incredibly fascinating,” he says. “There’s New Guinea art all over this house. A lot of that art’s pretty scary. But it’s really amazing...” When I interviewed Bob, he was packing to head to New Guinea for the sixth time in 25 years.
Bob recollected how he had watched cellphones being introduced into the isolated communities there over more than two decades of visits, and he wondered aloud about how the imminent introduction of Starlink would further transform that society, which produces the terrifically powerful folk art he so admires. Among other things, it would mean a universalization of education. In this context, and with a view to the impact that Large Language Models (LLMs) like GPT have already had on education, I asked Muglia how he saw AI influencing the future of education in terms of the development of critical thinking skills.
“Well, technology has always affected the way we think and what we can do. I mean, when I was in elementary school, we learned long division, and I think, like most people, I haven’t done long division in, you know, 50 something years. I mean, calculators are just so much better at it than we are. …While I think that the things that we viewed as being important for people to understand and learn in school may change as time evolves, I’m actually very hopeful about how artificial intelligence can help our education system. Mostly because I think that our education system needs help. When we see that in major cities around the country, very small percentages of students in some of these schools are reading and doing mathematics at grade level, it’s a major concern that something has been breaking down. So I’m hopeful that, in fact, Artificial Intelligence tutors can have a major impact in the way people learn. Having the ability for kids to actually interact and work on learning something at their own pace has tremendous potential.”
I remarked to Bob that another aspect of education, broadly construed, is the cultivation of imagination, not just critical thinking, but also what we might call visual intelligence. This is one of the focuses of Collagia, where Muglia has joined the company’s board of directors. The extent of Bob’s involvement with aesthetics and the art world is also evidenced by his having been a panelist at Christie’s 2024 Art and Tech Summit at Rockefeller Center in New York. This is where I first met the CEO of Collagia, Roo Armande, who was also in attendance. I asked Muglia, “Based on your experience with AIs that are increasingly competent in analyzing visual imagery to make something approaching what we would call aesthetic judgments, do you see a time when Artificial Intelligence would be able to replicate or even exceed the imaginative aspect of human consciousness? Or do you think that maybe there’s something irreducible about human consciousness that has to do with imagination and our capacity for innovative creativity that will never really be replicated by AI?”
“Well, I think if you asked me that question three or four years ago, I would probably have answered it differently than I answer it today,” Muglia explains. “Like many people, I would have said that the creative aspect of humanity would be the last thing that a robotic society or an AI-based society would be able to replicate. But it hasn’t worked out that way at all. The creativity coming out of these models is incredibly surprising. Shocking, in fact. These are the places where I see stunning advancements in just a couple of years. I mean, the models can already write poetry a lot better than I can. Now that’s not saying much, I admit. I’m not trying to compare it to the world’s great poets, but nonetheless, the level of creativity that you see is pretty astounding, and in general that’s replicating itself across multiple domains. In writing, that’s true. In art, it seems to be true. Video creation is now emerging. There are incredible tools right now to have AI write songs that have lyrics that are about things you want it to write about, and it does a reasonable job of creating music from the ground up. Far, far better than most non-musicians, far better than I could do, that’s for sure. And it has done so at a very fast pace. So, I tend to think that Artificial Intelligence will be a source of great creativity to the world, going forward, and, eventually, perhaps as in other things, it will surpass humans – surpass humanity. I do think it will.”
First Questions
This striking response provoked me to probe Muglia’s mind further on what he thought about the relative consciousness of Artificial Intelligence and of the coming AI-powered robots. I told him that one of the things I found most fascinating about his book, The Datapreneurs, was that he didn’t fall into the trap of thinking that consciousness is a binary phenomenon, like an on/off switch, where something – some being – is either not conscious or is (fully) conscious. This, I suggested to Bob, was a huge fallacy that arose around the time of Descartes. The work in which Descartes advanced that fallacy was titled Meditations on First Philosophy, referring to “the first questions,” the most fundamental questions of all. The question I was posing to Muglia was certainly one of those First Questions. “You take a view,” I said, “which I’ve seen in Buddhist ontology, where consciousness is a phenomenon on the spectrum of sentience and there are various degrees of sentience, and at some point you wind up with an arising of consciousness, and then there are various degrees of consciousness. So, in your book, you write about how cockroaches are sentient but they’re not conscious. Your house pet, your average dog or cat, is certainly sentient but also seems to be somewhat conscious, but maybe not as conscious as a dolphin that is being trained by the military to communicate through a computer system. So I’m wondering, if you look at it from that perspective, do you imagine that Artificial General Intelligence could become a life form that someday would be even more conscious than humanity?”
“The short answer is yes, I do think that will happen,” Bob said, unequivocally. “It is natural that consciousness does emerge in more sophisticated forms of evolution. …It is likely that in fact consciousness will begin to emerge from these systems [AI and robotics] and will eventually surpass us.”
Then Muglia began talking about evolution in a whole new cosmic horizon beyond what Darwin had been able to grasp, but in line with basic Darwinian principles. Bob thinks that self-replicating robots powered by Artificial General Intelligence, and eventually by superintelligence, would evolve and, in some cybernetic relationship with humanity, we would also enter another epoch of evolutionary history as we expand into the cosmos together with them.
Last Question
Asimov’s story, “The Last Question,” features prominently toward the end of Muglia’s book, The Datapreneurs. It is about the furthest reaches of this evolution, when humanity has developed, and ultimately merged with, an AI that is cosmic in the scope and scale of its computational power. This Cosmic AI then takes as its titanic task the most intractable problem of all, how to overcome or negate entropy in order to cheat death and escape the eventual disintegration of this universe. So I said to Muglia, “Given that you consider yourself a humanist, and when you look at Asimov’s laws of robotics, which are very anthropocentric, and then certainly his laws of Humanics (which you believe should ground a new social contract), there is a deep commitment to preserving the human form of life there. And yet I wonder whether finding some way to overcome entropy in the far distant future would require evolving beyond the human form of life. How might you respond to certain transhumanists who would say that not using singularity level technologies like gene editing or gene splicing to evolve beyond the limits of humanity is actually putting an artificial constraint on innovation, and perhaps crippling us in the face of meeting this challenge of even overcoming entropy on a cosmic scale someday?”
“Of course, that’s the hardest question that you’re asking, one of the most metaphysical questions that we can ask,” Bob replies. “Asimov, you know, was one of the original Humanists. He was actually President of the Humanist Society for a number of years, and, you know, people were always at the center of his stories, even in the robot stories. …So he was very much a Humanist and he believed that humanity would take over the galaxy and maybe ultimately the universe. …In ‘The Last Question,’ he kind of gave hints of that when he talks about Multivac, the computer that eventually answers the last question of how to reverse entropy. And in that I’ve always seen some kind of transcendence beyond the human form. I do think that will happen. It’s happening right now… I mean, people talk about the chip in the brain. It’s going to come, and when it does, the level of fidelity in connection between the human brain and the internet and the intelligence that exists in that form is gonna become much, much deeper. I think we’re going to see tremendous evolution in the next few hundred years, and I guess you could say I am a transhumanist in that sense. I think that humanity will evolve at a rate that is sort of unbelievable, maybe at a rate that we can’t even imagine.”
To this I replied, “So it seems to me that you’re not necessarily opposed, in the very long run, to an evolution beyond what we consider to be the limits of the human form today.” Bob answered, “I think it’s necessary actually. I’m certain it will happen. I mean, it’s gonna happen, and I think it’s necessary.”
To which my response was, “and you advocate for a responsible transition to it. This is where we get into the more social and political aspects of your book, where you are essentially proposing a new social contract, one that would have to be adopted both by corporations globally and by governments worldwide.” Toward the end of The Datapreneurs, Muglia advocates for a global regulatory framework for AI and for the kind of robotics that will be powered by AGI. He talks about “common sense” as the basis for this, and he points to Asimov’s liberal and humanist Laws of Humanics as a model for such a system. But I pointed out that since the middle of the 20th century – the era right after World War II, when the United Nations was formed and the Universal Declaration of Human Rights was adopted, an era of great optimism about global cooperation – we have witnessed increasingly deep divides between cultures and civilizations. Lamentably, the world is more divided culturally and religiously today than it probably has been at any time in the last century or more. So I asked Bob what the basis of this new “social contract” could be, if it has to be global in order to effectively regulate developments in AI and AGI-powered robotics. “You gesture toward a more sophisticated form of Asimov’s Laws of Humanics, but you also talk about common sense a lot. That we need a global regulatory framework based on common sense. But whose common sense? Do you mean common sense in the way that, for example, Tom Paine meant it in the Age of Enlightenment? How do we understand common sense with these increasingly deep divides between civilizations?”
Bob was quick to reply that, “this is an area where my thinking has evolved somewhat since I wrote the book, to be honest with you, because we’ve watched some attempts at regulation. …It has become clear to me that there’s not one answer to this. It’s not like this is the right answer and the whole world will go down that path, at least not for a really long time, because of the great divides in societies that you talk about. What would be right for some parts of the United States are not right for other parts of the United States, and they’re certainly very different from what would happen – what will happen – in China and other more authoritarian parts of the world.”
“It seems from your answer that you’re actually a lot more of a pragmatist than I took you to be by the end of your book,” I said. “I hope so,” Bob replied, “I tried to be pragmatic.” I wanted to see how this characteristically American pragmatism, of which William James was such a staunch advocate, translated into Muglia’s views on the government regulation and corporate moderation of free speech in the communication between humans and AI. “When you mentioned differences in regulation in the United States versus, say, China, differences that are going to be inevitable and that really cannot be overcome in the foreseeable future, I think that also touches on the question of censorship and the way that we regulate the interaction between humans and AI. In your book, you mention hate speech and anti-social behavior and attempts by OpenAI and Google to moderate content in the conversation between humans and AI. And yet at the core of Asimov’s Laws of Humanics, which you endorse, is a deep commitment to liberalism. There’s something akin to John Stuart Mill’s ‘harm principle’ there, which is the commitment to the maximal liberty of the human individual insofar as it doesn’t impinge on the liberty of another. And I wonder – if that’s the core commitment, then how do we define hate speech, and how can antisocial behavior be understood in a way that is relevant across the divide between cultures and religions worldwide?”
“That’s one of the great debates right now,” Bob replied. “It’s an active debate in society, in this country at least. Depending on your perspective, Elon Musk’s purchasing of Twitter was either one of the most awful things that has happened, and he should be shut down, or it was a great unleashing of free speech that needed to happen. I, in general, side more on the free speech side. I think in general more speech is better, and we have to be careful about what we describe as being harmful. I think it is true that the pendulum may have swung a little too far in the direction of trying to close down free speech.”
Picking up on his commitment to liberty, I asked Bob if he was worried at all about the erosion of the line between government and corporations, between state power and the liberty of corporations to operate and do their own research and development. “Yeah. Personally, yes. But I don’t get to decide any of these things. Like 300 million other Americans, I just get to have an opinion.” I pressed by asking how Muglia would decide the matter if he could craft a policy of his own. “Where would you draw that line between the responsibility of corporations to society in terms of developing AI and robotics, versus the right of governments to regulate development?”
“Well, in general, I would say that government plays a critical role in society, but it’s most effective when that role is limited to what only it can do. When it tries to extend beyond that, it tends to cause secondary problems, which are significant, let’s put it that way. …They always spend way more money than anybody else could possibly spend. …And so, in general, I would prefer a lighter touch of government than a heavier-handed touch.”
After my interview with him, Bob resumed packing for New Guinea. I wonder whether he will find that the increasing ubiquity of internet-connected smartphones (not to mention the aforementioned, imminent introduction of Starlink) has started to transform the hitherto isolated tribal culture that produced the powerful folk art with which he has chosen to fill his home. What impact would such a discovery have on his tech-optimism?
Certainly, his is a position with enough internal dynamism to shift over time, especially considering his pragmatic approach. For example, it is hard to tell whether Muglia is in fact a committed Humanist – like Asimov – or a Transhumanist, as he admitted to being at one point. Perhaps it comes down to the question of what “Humanity” really means.