Concerning AI | Existential Risk From Artificial Intelligence

concerning.ai
Exploring Safe and Useful Intelligent Machines
0066: The AI we have is not the AI we want
May 3
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3
0065: AGI Fire Alarm
Apr 19
There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
0064: AI Go Foom
Apr 5
We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
0063: Ted’s Talk
Mar 26
Ted gave a live talk a few weeks ago.
0062: There’s No Room at the Top
Mar 16
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
0061: Collapse Will Save Us
Mar 2
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?
0060: Peter Scott’s Timeline For Artificial Intelligence Risks
Feb 13
Timeline For Artificial Intelligence Risks Peter’s Superintelligence Year predictions (5% chance, 50%, 95%): 2032/2044/2059 You can get in touch with Peter at HumanCusp.com and Peter@HumanCusp.com For reference (not discussed in this episode): Crisis of…
0059: Unboxing the Spectre of a Meltdown
Jan 30
SpectreAttack.com http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3
0058: Why Disregard the Risks?
Jan 16
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be. Wikipedia's list of cognitive biases. AlphaZero. Virtual reality. Recorded January 7, 2017, originally posted to Concerning.AI…
0057: Waymo is Everybody?
Jan 2
If the Universe Is Teeming With Aliens, Where is Everybody? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3
0056: Julia Hu of Lark, an AI Health Coach
Dec 19, 2017
Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and clearly making a positive difference in lots of people’s lives right now. Longer term, she doesn’t see much to worry about.
0055: Sean Lane
Dec 5, 2017
Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.
0054: Predictions of When
Nov 21, 2017
We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a 3-number system for talking about our own…
0053: Listener Feedback
Nov 7, 2017
Great voice memos from listeners led to interesting conversations.
0052: Paths to AGI #4: Robots Revisited
Oct 24, 2017
We continue our mini series about paths to AGI. Sam Harris’s podcast about the nature of consciousness Robot or Not podcast See also: 0050: Paths to AGI #3: Personal Assistants 0047: Paths to AGI #2: Robots 0046: Paths to AGI #1: Tools…
0051: Rodney Brooks Says Not To Worry
Oct 10, 2017
Rodney Brooks article: The Seven Deadly Sins of Predicting the Future of AI
0050: Paths to AGI #3: Personal Assistants
Sep 25, 2017
3rd in a series about the future of current narrow AIs.
0049: After On by Rob Reid
Sep 11, 2017
Read After On by Rob Reid, before you listen or because you listen.
0047: Paths to AGI #2: Robots
Sep 5, 2017
This is our 2nd episode thinking about possible paths to superintelligence, focusing on one kind of narrow AI each show. This episode is about embodiment and robots. It’s possible we never really agreed about what we were talking about and need to come…
0048: AI XPrize and Thrival Festival (special mini-episode)
Aug 29, 2017
For show notes, please see https://concerning.ai/2017/08/29/0048-ai-xprize-and-thrival-festival-special-mini-episode/
0046: Paths to AGI #1: Tools
Aug 22, 2017
How might we get from today’s narrow AIs to AGI? This episode’s focus is tools.
0045: We Enjoy Our Stories
Aug 8, 2017
Is all AI-involved science fiction the same?
0044: Nexus Trilogy
Jun 21, 2017
We talked about the Nexus Trilogy of novels as a way to further our thinking about the wizard hat idea Tim Urban wrote about in his article about Elon Musk’s Neuralink.
0043: Not a Propeller Hat Episode
Jun 5, 2017
Are we living our lives as if AI were an existential threat?
0042: Listener Feedback
May 22, 2017
Listener Feedback this episode
0041: Can Neuralink Save Us?
May 5, 2017
Tim Urban’s article at Wait But Why: Elon Musk’s Neuralink and the Brain’s Magical Future
0040: If it were superintelligent, it would be hard to argue with
Apr 14, 2017
Mostly a listener feedback episode. Lots of great stuff here!
0039: We Need More Sparrow Fables
Mar 31, 2017
We need better language to talk about these difficult technical topics. See https://concerning.ai/2017/03/31/0039-we-need-more-sparrow-fables/ for notes.
0038: We Don’t Want to Die
Mar 17, 2017
See https://concerning.ai/2017/03/17/0038-we-dont-want-to-die/
0037: Listeners Gone Wild
Mar 4, 2017
Listener voicemail and comments: Eric’s voicemail; Evan’s comment (our interviews with Evan: ep 0011: Evan Prodromou, AI practitioner (part 1) and ep 0012: Evan Prodromou, AI practitioner (part 2)); John’s comment; Ted got the author’s name wrong; Predictably…
0036: Baby You Can Drive My Car
Feb 21, 2017
Main topic of this show: Unexpected Consequences of Self Driving Cars by Rodney Brooks
0035: New Water Story
Feb 15, 2017
What should our values be? Could “Life is Precious” replace the Consumption Story?
Concerning AI: Episode XXXIV – A New Hope
Feb 8, 2017
Do we need to do philosophy on a deadline? Can AI help make us better humans?
0033: Mind Game and the Curse of Dimensionality
Jan 31, 2017
Wind up your propeller hats! This one is a doozy. Hopefully someone can explain it to me (Ted).
0032: Westworld
Jan 25, 2017
In which we talk about Westworld, among other things.
0031: Listener Feedback
Jan 16, 2017
Too time constrained for show notes this time. If you want to send us notes to be added here, please do it! The best place to reach us is the Concerning AI group on Facebook. All of the listener feedback in this episode comes from that group. Thank you all!…
0030: Season 2, Episode 1
Jan 11, 2017
It’s been a while since we recorded. What have we been up to?
0029: I Disagree, Therefore I am
Nov 21, 2016
We recorded this episode on Nov 6, 2016, two days before the US election. Sorry it’s taken so long to get out. Also, no show notes due to need to simply get it published and avoid further delay. Enjoy!…
0028: Food for Thought
Oct 18, 2016
Nick Bostrom’s Superintelligence. Fiction from Liu Cixin: The Three-Body Problem, The Dark Forest, Death’s End. We’re a lot more beautiful if we try. (5:41) The Upward Spiral (9:45) Are we getting any wiser? (12:43) What are we trying for? To continue an…
0027: Listener Feedback and the Locality Effect
Sep 29, 2016
Korey’s comment: … one question you asked on ‘The Locality Principle’, was what other people are doing to avert a possible AIpocalypse; I’m starting a company! An entertainment venture with one driving purpose: to create a fully realized virtual world,…
0026: The Locality Principle
Sep 22, 2016
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0026-2016-09-18.mp3 These notes won’t be much use without listening to the episode, but if you do listen, one or two of you might find one or two of them helpful. Lyle Cantor‘s comment (excerpt) in…
0025: The Concerning AI Summer is Over
Sep 12, 2016
Some things we talked about: companies developing narrow AI without giving one thought to AI safety, because just getting the thing to work at all is really hard; self-driving cars and how fast they’re progressing; the difference between OpenAI and MIRI in…
0024: Simulation Revisited
Jul 21, 2016
No notes this time, just a speculative conversation about some possible implications of the idea that we could be living in a simulation. Subscribe in Overcast, iTunes, or through our feed. To get in touch with us, visit the Concerning AI Facebook group.…
0023: That Would Just Be Absurd
Jun 10, 2016
Are people better than robots?
0022: A Few Useful Things to Know about Machine Learning
May 29, 2016
This episode, we talk about this paper: A Few Useful Things to Know about Machine Learning
0021: Some of ’em wear dapper suits
May 9, 2016
We want robot surgeons, bus and taxi drivers and investment advisors. Do you?
0020: AI in Fiction
Apr 26, 2016
Fiction is fun. And, we can’t rely on it to help us figure out what’s going to happen.
0019: Do Ethics Give Cyborgs or Robots an Advantage?
Apr 11, 2016
Human augmentation may be a way for humans to advance on par with non-biological beings (AIs), but do ethical guidelines make that less likely to happen?
0018: The One True Doomsday
Apr 5, 2016
Throughout history there have been doomsayers, yet we’re still here. What makes today’s doomsday scenarios different?
0017: Give them the moon
Apr 1, 2016
What would it mean to entangle with technology rather than leave the biosphere behind? Could we just send the AI to the moon?
0016: More on AlphaGo
Mar 25, 2016
Brandon is back, and we talk at length about how AlphaGo works and what the implications are, insofar as we can see them with our feeble human minds.
0015: AlphaGo goes up 3-1 on Lee Sedol
Mar 14, 2016
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0015-2016-03-13.mp3 Ted talks with special guest Eric Saumur about AlphaGo, compassion, desires and more.
0014: What if AI aren’t “Other”? – A kinder gentler podcast
Mar 2, 2016
From now on, we seek to come from a place of empathy on this show, even when it seems not to make sense. There’s nothing to win here.
0013: Listener Feedback
Feb 25, 2016
Ben’s frustrated with us. Let’s see if we can figure out why.
0012: Evan Prodromou, AI practitioner (part 2)
Feb 18, 2016
This episode is the 2nd (and final) part of our conversation with Evan Prodromou, software developer, open source advocate and AI practitioner extraordinaire. Hope you enjoy it as much as we did!
0011: Evan Prodromou, AI practitioner (part 1)
Feb 16, 2016
Evan’s awesome. Great to talk with a bona fide AI practitioner. Just getting things started.
0010: Don’t Worry, Be Happy
Feb 11, 2016
We haven’t found strong arguments on the “Don’t be worried. Here’s why …” side of things. We know the arguments must exist, but we can’t find them (send them to us!). So, what to do? Make some arguments up, that’s what!
0009: Deep Learning
Feb 4, 2016
What the heck is deep learning?
0008: Exponentials
Dec 28, 2015
Exponentials are powerful and very difficult to understand (because we think linearly).
0007: Cosmists, Terrans, and Cyborgists
Dec 1, 2015
Are we cosmists or terrans?
0006: Compassion
Nov 17, 2015
Are we missing “compassion” when thinking about our AI descendants?
0005: Paths to Superintelligence
Nov 9, 2015
How might we get to superintelligence? This episode explores some possible paths, or maybe simply directions.
0004: Prioritization of Risks
Oct 19, 2015
Existential risk: one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
0003: Skeptics
Oct 2, 2015
The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers”, meanwhile, insist that…
0002: Consciousness
Sep 22, 2015
Following up on last episode’s question, “What is intelligence?” this episode we attempt to unpack “consciousness.”
0001: What is Intelligence?
Sep 15, 2015
Our goal on this episode is to establish a “good enough” definition of intelligence that we’ll be able to refer back to in future episodes.
0000: Reboot as Concerning AI
Sep 9, 2015
This is a reboot. After recording 4 episodes of what we thought would be the Friendly AI podcast, here is Episode 0000 of Concerning AI.
-1: Embodiment
Aug 15, 2015
Does an AI need embodiment?
-2: Shock Levels
Jul 1, 2015
Shock levels
-3: Running Toward or Running Away
Jul 1, 2015
Is it better to run toward something (compelling) or away from something (scary)?
-4: And So It Begins
Jul 1, 2015
And So It Begins! We didn’t use to be concerned about artificial intelligence, but now we are.