On May 24, 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda. The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn. The discussion covered a broad range of topics, from the future of bioengineering and personal genetics to autonomous weapons, AI ethics, and the Singularity.

On January 2-5, 2015, FLI organized "The Future of AI: Opportunities and Challenges" conference in Puerto Rico, which brought together the world's leading AI builders from academia and industry to engage with each other and with experts in economics, law, and ethics. The goal was to identify promising research directions that could help maximize the future benefits of AI. At the conference, the Institute circulated an open letter on AI safety, which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts.

On January 5-8, 2017, FLI organized the Beneficial AI conference in Asilomar, California, a private gathering of what The New York Times called "heavy hitters of A.I." The institute released a set of principles for responsible AI development that grew out of the discussions at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.

On January 4-7, 2019, FLI organized the Beneficial AGI conference in Puerto Rico. This meeting focused on long-term questions of ensuring that artificial general intelligence is beneficial to humanity.
Global research program
On January 15, 2015, the Future of Life Institute announced that Elon Musk had donated $10 million to fund a global AI research endeavor. On January 22, 2015, the FLI released a request for proposals from researchers in academic and other non-profit institutions. Unlike typical AI research, the program focuses on making AI safer and more beneficial to society, rather than simply more powerful. On July 1, 2015, a total of $7 million was awarded to 37 research projects.
In the media
"United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" in The New York Times
"Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.
"But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.
"Transcending Complacency on Superintelligent Machines", an op-ed in the Huffington Post by Max Tegmark, Stephen Hawking, Frank Wilczek and Stuart J. Russell on the movie Transcendence.
"Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea" in Diana Crow Science.
"An Open Letter to Everyone Tricked into Fearing Artificial Intelligence", which includes "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter" by the FLI