Recently, Cyber Talk spotlighted an article about the what-ifs and potential consequences of tech innovation. As futurist and author Alvin Toffler once said, “Our technological powers increase, but the side effects and potential hazards also escalate.”
Now, a report by AI researchers and policymakers from the US and Britain indicates that as AI technologies become more prevalent and affordable, opportunities for cyber risk rise as well, and that more needs to be done to address this evolving concern.
Cade Metz of The New York Times puts this development in context: “AI experts and pundits have discussed the threats created by the technology for years, but this is among the first efforts to tackle the issue head-on.”
Behind the concern is a newly announced drone that can track people through a smartphone app. To build it, its inventors used basic, widely available technologies: cameras, open-source software, and inexpensive computer chips. Ordinary components like these can already be assembled into powerful systems, and doing so will only become easier and cheaper over time. That means easier access for hackers and cybercriminals, which heightens the worry that AI will be misused or abused.
Because AI systems are not always predictable in how they learn, they can become vulnerable to manipulation. Metz reports that, according to Paul Scharre, one of the report’s authors, researchers are creating AI systems to “find and exploit security holes” in a variety of environments, which can be used for good or evil.
Read the full story at The New York Times.