Last week, Yann LeCun, chief scientist at Meta and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterously ridiculous.” Aidan Gomez, CEO of the AI firm Cohere, said it was “an absurd use of our time.” It’s true that these views split the field.

“There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the policy implications of artificial intelligence. “Ghost stories are contagious — it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”

An old fear

Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over.

In a paper posted online last week, Stuart Russell and Andrew Critch, AI researchers at the University of California, Berkeley (who also both signed the CAIS statement), give a taxonomy of existential risks. These range from a viral advice-giving chatbot telling millions of people to drop out of college to autonomous industries that pursue their own harmful economic ends to nation-states building AI-powered superweapons.