Sunday, 2 April 2023

Fear of an A.I. Pundit

Opinion

Ross Douthat, Op-Ed columnist for The New York Times

Nick Bostrom’s 2014 book, “Superintelligence,” a crucial text for the community of worriers about the risks of artificial intelligence, begins with a fable: A tribe of sparrows, weary of a marginal existence, becomes convinced that everything would be better if they could only have an owl to help them out — to build nests, to assist with care and feeding, to keep an eye out for other predators. Delighted by this idea, the sparrows decide to go hunting for an owl egg or owl chick that they might bring up as their own. Only Scronkfinkle, “a one-eyed sparrow with a fretful temperament,” points out that maybe they should consider the dangers of living with a full-grown owl and put some thought into owl taming and owl domestication first. But he’s overruled on the grounds that merely getting an owl will be hard enough and there will be time to worry about taming it once it’s been acquired and reared.

So while the others fly off to search for eggs and chicks, Scronkfinkle and a few other sparrows try to put their minds to the taming problem — a difficult challenge, lacking an owl to work with, and one shadowed by the fear that at any moment their nest mates might come back with an owlet and put their sketched-out theories to a brutal test.

It’s a neat fable about what A.I. alarmists think is happening right now. The accelerating power of artificial intelligence, manifest publicly so far in chatbots and image generators, is a growing owlet in our nest, and our alarmists are still unprepared to tame it. And it’s in the spirit of Scronkfinkle that a collection of Silicon Valley notables, including Elon Musk, just signed an open letter urging at least a six-month pause in large-scale A.I. experiments to allow our safety protocols to catch up.

But there’s a crucial difference between the fable and our own situation, which helps explain why the human pause urgers have a harder task even than Scronkfinkle.
Note that the sparrows, for all their guilelessness, at least know generally what an owl looks like, what it is and what it does. So it shouldn’t be hard for them, and it isn’t hard for the reader, to imagine the powers that an untamed owl would bring to bear — familiar powers of speed and sight and strength, which could tear and gouge and devour the luckless sparrow clan.

With a notional-for-now superintelligence, however, the whole point is that there isn’t an analogue in existence right now for us to observe, understand and learn to fear. The alarmists don’t have a simple scenario of risk, a clear description of the claws and beak; they have a lot of highly uncertain scenarios based on even more uncertain speculation about what an intelligence somehow greater than ours might be capable of doing.

That doesn’t make their arguments wrong. Indeed, you could argue that the very uncertainty makes superintelligent A.I. that much more worth fearing. But generally, when human beings turn against a technology or move to restrain it, we have a good idea of what we’re afraid of happening, what kind of apocalypse we’re trying to forestall. The nuclear test ban treaties came after Hiroshima and Nagasaki, not before.

Or a less existential example: The current debate about limiting kids’ exposure to social media is potent because we’ve lived with the internet and the iPhone for some time; we know a lot about what the downsides of online culture seem to be. Whereas it’s hard to imagine persuading someone to pre-emptively regulate TikTok in the year 1993.

I write this as someone who struggles to understand the specific dooms that might befall us if the A.I. alarmists are correct or even precisely what we mean when we say “superintelligence.” Some of my uncertainty attaches to the debates about machine consciousness and whether A.I. would need to acquire a sense of self-awareness to become genuinely dangerous.
But it’s also possible to distill the uncertainty to narrower questions that don’t require taking a position on the nature of the self or soul. So let’s walk through one of them: Will supercharged machine intelligence find it significantly easier to predict the future?

I like this question because it’s connected to my own vocation — or at least what other people think my vocation is supposed to be: No matter how many times you disclaim prophetic knowledge, there is no more reliable dinner-party question for a newspaper columnist than, “What’s going to happen in Ukraine?” Or “Who’s going to win the next primary?”

I don’t think my own intelligence is especially suited to this kind of forecasting. When I look back on my own writing, I do OK at describing large-scale trends that turn out to have a shaping influence on events — like the transformation of the Republican Party into a downscale, working-class coalition, say. But where the big trends distill into specific events, I’m just doing guesswork like everybody else: Despite my understanding of the forces that gave rise to Donald Trump, I still consistently predicted that he wouldn’t be the Republican nominee in 2016.

There are forms of intelligence, however, that do better than mine at concrete prediction. If you read the work of Philip Tetlock, who studies superforecasters, it’s clear that certain habits of mind yield better predictions than others, at least when their futurology is expressed in percentages averaged over a wide range of predictions.

Thus (to use an example from Tetlock’s book, “Superforecasting,” written with Dan Gardner) the average pundit, early in the Syrian civil war, might have put the likelihood of President Bashar al-Assad losing power within six months at around 40 percent. But the superforecasters, with a slightly deeper focus on the situation, put the odds at less than 25 percent.
Assad’s subsequent survival alone doesn’t prove that the superforecasters had it exactly right — maybe the dictator just beat the odds — but it helps their overall batting average, which across a range of similar predictive scenarios is higher than the pundit baseline. But not so much higher that a statesman can just rely on their aggregates to go on some kind of geopolitical winning streak.

So one imaginable goal for a far superior intelligence would be to radically improve on this kind of merely human prognostication. We know that artificial intelligence already has powers of pattern recognition that exceed and sometimes mystify its human makers. For instance, A.I. can predict a person’s sex at above-average rates based on a retina photograph alone, for reasons that remain unclear. And there’s growing evidence that artificial intelligence will be able to do remarkable diagnostic work in medicine.

So imagine some grander scale of pattern recognition being applied to global politics, predicting not just some vague likelihood of a dictator’s fall, but this kind of plot, in this specific month, with these particular conspirators. Or this particular military outcome in this particular province with these events rapidly following.

Superintelligence in this scenario would be functioning as a version of the “psychohistory” imagined by Isaac Asimov in his “Foundation” novels, which enables its architect to guide future generations through the fall of a galactic empire. And a prophetic gift of this sort would have obvious applications beyond politics — to stock market forecasting, for instance, or to the kind of “precrime” prediction engine envisioned by Philip K. Dick and then, in adaptation, Steven Spielberg. It would also fit neatly into some of the speculation from A.I. pessimists.
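For readers curious how a forecaster’s “batting average” is actually tallied: Tetlock’s research scores probabilistic predictions with the Brier score, the mean squared difference between the stated probability and what happened, where lower is better. A minimal sketch of that scoring, applied to the column’s Assad example (the numbers here are illustrative, not Tetlock’s actual data):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and outcomes.

    forecasts: probabilities in [0, 1] that each event occurs.
    outcomes: 1 if the event occurred, 0 if it did not.
    Lower scores indicate better-calibrated forecasting.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# One question: "Will Assad lose power within six months?"
# He did not, so the outcome is 0.
pundit_score = brier_score([0.40], [0])           # (0.40 - 0)^2 = 0.16
superforecaster_score = brier_score([0.25], [0])  # (0.25 - 0)^2 = 0.0625

# The superforecaster's lower score reflects the better-calibrated call.
print(pundit_score, superforecaster_score)
```

A single question proves little either way, which is the column’s point about Assad “beating the odds”; in practice the score is averaged over hundreds of questions before one forecaster is credited as better than another.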
When the Silicon Valley-adjacent writer Scott Alexander set out to write a vision of a malevolent A.I.’s progress, for instance, he imagined it attaching itself initially to Kim Jong-un and taking over his country through a kind of superforecasting prowess: “Its advice is always excellent — its political stratagems always work out, its military planning is impeccable and its product ideas turn North Korea into an unexpected economic powerhouse.”

But is any intelligence, supercharged or otherwise, capable of such foresight? Or is the world so irreducibly complex that even if you pile pattern recognition upon pattern recognition and let A.I. run endless simulations, you will still end up with probabilities that aren’t all that much more accurate than what can be achieved with human judgment and intelligence?

My assumption is that it’s the latter, that there are diminishing returns to any kind of intelligence as a tool of prophecy, that the world is not fashioned to be predicted in such detailed ways.

*The New York Times*



from Asharq AL-awsat https://english.aawsat.com/home/article/4250036/ross-douthat/fear-ai-pundit
