February 19, 2018

Purdue prof: Future AI advances pose problems for police, courts

WEST LAFAYETTE, Ind. — Artificial intelligence (AI) is a two-way street offering as many avenues for criminals as tools for law enforcement.

Marcus Rogers, head of Purdue Polytechnic Institute’s Department of Computer and Information Technology, said that while research continues to produce new advances in AI, the consequences of that progress aren’t always taken into consideration.

“What people don’t realize is technology is a two-way street,” he said. “So, yes, while AI and machine learning can help us, on the negative side, those same tools can be used against us very quickly.”

Rogers said the current tendency is to develop new technology quickly, without thought as to how criminals might use it. At the current rate, scientists estimate the first strong, self-learning AI system could be functioning by 2030.

That step in the advancement of AI raises a host of questions for which, at this point, no answers are readily available.

“AI can be used by criminals to leapfrog us,” Rogers said. “But what do we do with deviant behavior from the AI when it becomes aware and sentient? There’s a price to pay.”

It’s the issue of deviant behavior by an AI system that could confront the courts and law enforcement very soon. Rogers said law schools talk about two types of crime: acts that are inherently immoral, such as murder, and acts that are crimes only because statutes and the criminal code define them as such.

AI already is being used for a variety of crimes. Rogers said such systems can autonomously scan millions of internet addresses, identify the types of systems at those addresses and their vulnerabilities, and then launch programs to take advantage of those weaknesses.

There is also self-mutating, polymorphic malware that can detect when it is running in a virtual machine environment and change how it appears there compared with when it is actually interacting with a physical computer.

“Who do you arrest when an AI system breaks the law? How far down the stream is the liability?” Rogers said. “Even with our more natural basic laws like killing, what does that mean to a computer system? What society sees as deviant behavior, it might see in its own views as acting completely within what it thinks is lawful behavior.”

“We forget that at some point we will lose the ability to control the context and things won’t be relative to us,” he added.

Rogers said he expects civil law to catch up with these questions much more quickly than criminal law, adding that once it is understood who is liable, prosecutors can determine whom to go after on the criminal side.

“It’s going to be really interesting to see what happens in the next 20 years; we will actually be dealing with this problem for real,” he said. “It sounds like a bad science fiction novel, but the reality is the technology is here right now.”

Writer: Brian L. Huchel, 765-494-2084, bhuchel@purdue.edu 

Source: Marcus Rogers, 765-494-4545, rogersmk@purdue.edu
