A profile of Max Tegmark, the physicist pushing to halt AGI development, who was subpoenaed by OpenAI over the Future of Life Institute's past ties to Elon Musk
Max Tegmark wants to halt development of artificial superintelligence—and has Steve Bannon, Meghan Markle and will.i.am as supporters
Michel Devoret, a Google Quantum AI chief scientist, John Martinis, who left Google in 2020, and John Clarke win the Nobel Prize in Physics for quantum computing work
Q&A with Future of Life Institute co-founder Max Tegmark on AGI, how Elon Musk could constructively engage with the Trump administration on AI safety, and more
Harry McCracken / Fast Company:
Over 700 people, including AI experts and executives, sign an open letter calling for more regulation of deepfakes, such as by criminalizing deepfake child porn
Reuters: ‘AI godfather’, others urge more deepfake regulation in open letter, especially as we head into a major election. https://www.reuters.com/...
World leaders at the 2024 World Economic Forum fret over AI-powered misinformation and job displacement, following excitement about ChatGPT at the 2023 WEF
https://youtube.com/... X: Pat Gelsinger / @pgelsinger: A great conversation on the interrelationship between chips and AI, the importance of leveraging technology responsibly, and how @Intel is crea...
A profile of, and interview with, Max Tegmark, who co-founded the Elon Musk-backed Future of Life Institute and says AI may be an existential threat to humanity
Emily Bobrow / Wall Street Journal:
Studies show that LK-99 is not a superconductor and that impurities, notably copper sulfide, were responsible for the material's superconducting-like behaviors
How science sleuths solved the mystery: replications pieced together the puzzle of why the material displayed superconducting-like behaviours. Nature: https://www.nature.com/...
OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying “mitigating the risk of extinction from AI should be a global priority”
Brian Fung / CNN: AI industry and researchers sign statement warning of ‘extinction’ risk. Alka Jain / Livemint: Industry leaders warn ‘AI poses risk...