A survey of scientists and researchers working in artificial intelligence (AI) has found that around a third of them believe it could cause a catastrophe on par with all-out nuclear war.

The survey was given to researchers who had co-authored at least two computational linguistics publications between 2019 and 2022. It aimed to gauge the field's views on controversial topics surrounding AI and artificial general intelligence (AGI) – the ability of an AI to think like a human – plus the impact that people in the field believe AI will have on society at large. The results are published in a preprint paper that has not yet undergone peer review.

AGI, as the paper notes, is a controversial topic in the field. There are large differences in opinion on whether we are advancing towards it, whether it is something we should be aiming for at all, and what would happen when humanity gets there.

"The community in aggregate knows that it's a controversial issue, and now (courtesy of this survey) we can know that we know that it's controversial," the team wrote in their research. Among the (pretty split) findings was that 58 percent of respondents agreed that AGI should be an important concern for natural language processing (NLP), while 57 percent agreed that recent research had driven us towards AGI.

Where it gets interesting is how AI researchers believe that AGI will affect the world at large.

"73 percent of respondents agree that labor automation from AI could plausibly lead to revolutionary societal change in this century, on at least the scale of the Industrial Revolution," the researchers wrote of their survey.

Meanwhile, a non-trivial 36 percent of respondents agreed that it is plausible that AI could produce catastrophic outcomes in this century, "on the level of all-out nuclear war".

It's not the most reassuring thing when a substantial proportion of a field believes its subject could lead to humanity's destruction. However, in the feedback section, some respondents objected to the wording of "all-out nuclear war", writing that they "would agree with less extreme wording of the question".

"This suggests that our result of 36% is an underestimate of respondents who are seriously concerned about negative impacts of AI systems," the team wrote.

Though (perhaps with good reason) wary about the potential catastrophic consequences of AGI, researchers overwhelmingly agreed that natural language processing has "a positive overall impact on the world, both up to the present day (89 percent) and into the future (87 percent)."

"While the opinions are anticorrelated, a substantial minority of 23 percent of respondents agreed with both Q6-2 [that AGI could be catastrophic on par with an all-out nuclear war] and Q3-4 [that NLP has an overall positive impact on the world]," the researchers wrote, "suggesting that they may believe NLP's potential for positive impact is so great that it even outweighs plausible threats to civilization."

Among the other findings were that 74 percent of AI researchers believe that the private sector has too much influence on the field, and that 60 percent believe the carbon footprint of training large models should be a major concern for NLP researchers.

The paper is published on the preprint server arXiv.