Joscha Bach and Connor Leahy [HQ VERSION]

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

Sorry about the quality on the live one, guys; this should be a big improvement!
Transcript and longer summary: https://docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5tlTK_BGZ140QqqTudF8UI/edit?usp=sharing
Pod: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/Joscha-Bach-and-Connor-Leahy-on-AI-risk-e25ukc5

Dr. Joscha Bach argued that general intelligence emerges from civilization, not from individuals: given our biological constraints, no single human can achieve a high level of general intelligence alone. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI coexist harmoniously is possible if we develop a shared purpose and an incentive to align. However, Bach is uncertain about how AI progress will unfold and which scenarios are most likely.

Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting but much work is still needed to ensure a positive outcome.

Connor Leahy believes we have more control over the future than the default trajectory might suggest. With sufficient time and effort, humanity could develop the technology and coordination needed to build a beneficial AGI; without that active work, however, the default outcome is likely an undesirable one. Leahy thinks identifying values and priorities that most humans endorse could help align AI, even if individuals disagree on some values.

Leahy argued that a future where humans and AGI coexist harmoniously is ideal but will require substantial work to achieve. While regulation faces challenges, it remains worth exploring. Leahy believes there are limits to progress in AI, but that we are unlikely to reach them before humanity is at risk. He worries that even modestly superhuman intelligence could disrupt the status quo if it is misaligned with human values and priorities.

Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address the risks and challenges proactively. They agreed that substantial uncertainty remains about how AI will progress and which scenarios are most plausible. Developing a shared purpose between humans and AI, improving coordination and control, and identifying human values that can guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and a willingness to consider multiple perspectives, continued discussions like this one could help ensure that the future of AI is one that benefits and inspires humanity.

TOC:
00:00:00 - Introduction and Background
00:02:54 - Different Perspectives on AGI
00:13:59 - The Importance of AGI
00:23:24 - Existential Risks and the Future of Humanity
00:36:21 - Coherence and Coordination in Society
00:40:53 - Possibilities and Future of AGI
00:44:08 - Coherence and Alignment
01:08:32 - The Role of Values in AI Alignment
01:18:33 - The Future of AGI and Merging with AI
01:22:14 - The Limits of AI Alignment
01:23:06 - The Scalability of Intelligence
01:26:15 - Closing Statements and Future Prospects