Magnus Vinding
Talk
In Conversation With… Magnus Vinding
Magnus Vinding has written books such as Effective Altruism: How Can We Best Help Others?, Suffering-Focused Ethics: Defense and Implications, and Reasoned Politics. Magnus has also written a range of essays, including The Speciesism of Leaving Nature Alone and the Theoretical Case for “Wildlife Antinatalism”. Magnus is also the co-founder of the Centre for Reducing Suffering, which was set up in 2020 to develop ethical views that prioritise suffering and to research the best ways to reduce suffering.
Please find other links for Magnus Vinding below:
Avoiding the Worst: How to Prevent a Moral Catastrophe (Tobias Baumann)
Downside-focused views prioritize s-risk reduction over utopia creation — a section from an essay by Lukas Gloor that makes a case for prioritising the prevention of worst-case outcomes
The point about overemphasizing salient ideas and focus areas is explored here: https://magnusvinding.com/2022/12/08/distrusting-salience-keeping-unseen-urgencies-in-mind/
The post I mentioned about biases against prioritising wild-animal suffering:
https://magnusvinding.com/2020/07/02/ten-biases-against-prioritizing-wild-animal-suffering/
The post I mentioned about laypeople's views of population ethics, and how most people seem to agree that the avoidance of worst-case outcomes is more important than the creation of a large utopia:
https://centerforreducingsuffering.org/popular-views-of-population-ethics-imply-a-priority-on-preventing-worst-case-outcomes/
Magnus was also kind enough to send us his responses to some additional questions we could not get to during the speaker session:
What balance do you think we must strike between shaping the values of humanity and directly shaping technology like AI?
I think these are less distinct than is often assumed. There is sometimes an overemphasis on directly shaping AI that overlooks how AI is likely to be shaped by surrounding factors, including humanity's values and institutions more broadly (not just "AI governance", i.e. governance pertaining to AI directly).
I have written about this issue here: https://magnusvinding.com/2022/09/06/what-does-a-future-dominated-by-ai-imply/
One of the reasons I think surrounding factors are likely to be so critical is that I am highly skeptical of so-called FOOM scenarios, or "hard takeoff" scenarios, meaning that I strongly doubt AI will suddenly take off and take over. There are many reasons for this skepticism, which I've tried to outline in my book Reflections on Intelligence and in my post Two contrasting models of “intelligence” and future growth.
A longer reading list with 'FOOM skeptical' essays can be found here: https://magnusvinding.com/2017/12/16/a-contra-ai-foom-reading-list/
How do you envision the integration of Suffering-Focused Ethics into real-world decision-making, policies, or technologies, and what challenges do you foresee in applying this ethical framework on a broader societal scale?
As for the first part of the question, I guess there are two broad avenues of influence and integration.
First, there is the attempt to advance suffering-focused values as a key foundation — or at least a key component — in our political values, both locally and globally. The organization OPIS is doing some work on this, and it's also what Part III of my book Reasoned Politics is essentially about (especially Chapter 7, which cites additional resources).
Second, there is the attempt to advance specific policies that are helpful for reducing suffering, ranging from policies aimed at reducing wild-animal suffering to policies that secure the right to voluntary euthanasia. This is what Part IV of Reasoned Politics is about, attempting to at least take a first step toward identifying helpful policies.
What challenges do I foresee? Well, to be cynically honest, we tend to be deeply coalitional and alliance-driven, and we tend to have hidden motives aimed at seeing our own groups win and at elevating our status within our groups. We tend not to care primarily about outcomes in the larger world, sadly, but rather about labels and group identities, and a concern for actually reducing suffering often takes a backseat in this psychological and coalitional game. If someone identifies with our label, "yay". But if someone is just talking about something as mundane as, say, reducing suffering and creating better outcomes, well, that's going to be rather... boring, not interesting. Where's the coalitional loyalty and juiciness in that? It feels useless.
And lest we kid ourselves, it is worth stressing that this is also true — sometimes even especially true — of aspiring do-gooders. We, too, often care more about labels and group identities than actually reducing suffering, and we are largely self-deceived about this. We may think we are special and above the mere "normal humans", but we are not.
(I say more about challenges in "Appendix B: Hidden Challenges to the Two-Step Ideal" in Reasoned Politics.)