We need to examine the beliefs of today's tech luminaries

The writer is a science commentator

People who are very rich, very smart or both sometimes believe strange things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — espoused by many of AI’s wealthiest and most prominent proponents.

The label, coined by a former Google ethicist and a philosopher, has begun to circulate online and usefully explains why some tech figures want public attention trained on vague future problems, such as existential risk, rather than on current liabilities, such as algorithmic prejudice. A fraternity ultimately committed to breeding AI for a post-human future might care little about the social injustices perpetrated by its errant infant today.

As well as transhumanism, which advocates the technological and biological enhancement of humans, Tescreal embraces extropianism, the belief that science and technology will bring about an indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outward into the universe; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism that argues we have moral obligations to those who are yet to exist, even at the expense of those who exist now.

The acronym can be traced back to an unpublished paper by Timnit Gebru, former co-lead of Google’s ethical AI team, and Émile Torres, a PhD candidate in philosophy at Leibniz University. An early draft of the paper, yet to be submitted to a journal, argues that the unchecked rush to AGI (artificial general intelligence) has “created systems that harm marginalized groups and centralize power, while using the language of social justice and ‘benefiting humanity’, like the eugenicists of the 20th century”. The authors add that an all-purpose, undefined AGI cannot be properly safety-tested and therefore should not be built.

Gebru and Torres go on to explore the intellectual motivations of the pro-AGI crowd. “At the heart of this [Tescreal] bundle,” Torres elaborates to me in an email, “is a techno-utopian vision of the future in which we become radically ‘advanced’, immortal ‘posthumans’, colonizing the cosmos, re-engineering entire galaxies [and] creating virtual-reality worlds in which trillions of ‘digital people’ exist”.

Tech luminaries certainly overlap in their interests. Elon Musk, who wants to colonize Mars, has expressed sympathy for longtermism and owns Neuralink, an essentially transhumanist company. PayPal co-founder Peter Thiel has backed anti-aging technology and bankrolled a Neuralink rival. Both Musk and Thiel invested in OpenAI, creator of ChatGPT. Like Thiel, Ray Kurzweil, the messiah of singularitarianism now employed by Google, wants to be cryogenically frozen and revived in a scientifically advanced future.

Another influential figure is the philosopher Nick Bostrom, a longtermist. He directs Oxford University’s Future of Humanity Institute, whose funders include Musk. (Bostrom recently apologized for a historic racist email.) The institute works closely with the Oxford-based charity the Centre for Effective Altruism. Some effective altruists have identified a career in AI safety as a smart gambit. After all, there is no more effective way to do good than to save our species from the robopocalypse.

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many would be tempted to dismiss her views – she was fired from Google after raising concerns over the energy use and social harms associated with large language models – as sour grapes, or as ideological outrage. But that glosses over the motivations of those running the AI show, a glittering corporate spectacle with plot lines that few are able to fully follow, let alone regulate.

The frequent talk of a possible techno-apocalypse not only sets up these tech glitterati as the defenders of humanity, it also suggests an inevitability about the path we are taking. And it distracts from the real harms identified today by academics such as Ruha Benjamin and Safiya Noble. Algorithms making decisions using biased data are disqualifying black patients from certain medical procedures, while generative AI steals human labor, propagates misinformation and puts jobs at risk.

Maybe it’s a plot twist we weren’t meant to notice.