‘Basic Income and AI-Induced Unemployment’
- In this article, the Adam Smith Institute present the case for UBI: ‘We are pretty confident that future AGI systems will have superhuman labour abilities, generality, and agency.’ They quote heavily from OpenAI’s research. Brock’s first example of a Wishful Worry is: ‘When robots and AI do all of the work of people, what will be the right universal basic income?’
‘Desirability without Desire: Life Extension, Boredom and Spiritual Experience’
- A philosophy paper discussing radically extended human lifespans. Brock’s second example of a Wishful Worry is: ‘As biotechnology affords dramatically longer human lifespans, how will we fight boredom?’
‘The Boredom Objection to Life Extension’
- Arin Vahanian discusses the boredom objection to life extension in a post on the Transhumanist Party’s website.
‘The Ethical Challenges of Connecting Our Brains to Computers’
- Brock’s third example of a Wishful Worry is: ‘with neurotechnology-augmentation rendering some of us essentially superheroes, what ethical dilemmas will we face?’
‘Hacked Sex Robots Could Murder People, Security Expert Warns’
- The ethics of sex robots: a fun topic that distracts us from the ‘actual agonies’ of the present!
Examples of Genuine Harms, Caused or Exacerbated by Technology
‘The climate change-denying TikTok post that won't go away’
- Social platforms have consistently enabled the spread of climate misinformation and denial. TikTok vowed to clamp down on climate change denial in early 2023, yet the BBC reported that searching for ‘climate change’ still surfaces a hugely popular video featuring Dan Peña claiming that climate change is a fraud.
‘Why Won’t Twitter Treat White Supremacy Like ISIS? Because It Would Mean Banning Some Republican Politicians Too’
- X (née Twitter) has failed to clamp down on the rise of white supremacist content in recent years, even though it has shown it can remove ISIS content, as this VICE article exposed in 2019.
Ghost Work
- We’re living in an age of growing wealth inequality, and tech companies have been reported to rely on a shadow workforce with reduced rights and low pay. The book Ghost Work explores the people behind tasks like content moderation, annotating training data for machine learning models, and providing human support to supposedly “automated” services. These workers are typically overworked and underpaid.
Generative AI: Wishful Worries and Genuine Harms
Statement on AI Risk
- A public statement coordinated by the Center for AI Safety, arguing that the risk of extinction from AI should be treated as a global priority alongside pandemics and nuclear war.
‘Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic’
- OpenAI’s own ‘ghost work’: Kenyan workers paid less than $2 an hour to label toxic and traumatic content so it could be filtered out of ChatGPT.
‘Original research: More watermarks found in Stable Diffusion’s images’
- Visual evidence that Stability AI trained its models on images from Getty Images: Getty watermarks appear in Stable Diffusion’s output. In February 2023, Getty Images filed a lawsuit against Stability AI for copyright infringement.
Artists and Illustrators Are Suing Three A.I. Art Generators for Scraping and ‘Collaging’ Their Work Without Consent
- It’s not just corporations like Getty Images that are suing AI companies; artists and illustrators also claim that scraping their work to build training datasets was unlawful.
‘AI is already taking video game illustrators’ jobs in China’
- A story about how AI is already taking video game illustrators’ jobs in China. Although this shows there are genuine harms happening around automation, I think some of the wilder claims are still wishful worries.
‘Lyft co-founder says autonomous vehicles won’t replace drivers for at least a decade’
- This 2022 article about Lyft is a good example of how predictions of automation can play out: the same co-founder had previously predicted that the majority of Lyft rides would be in autonomous vehicles by 2021.