The ethics of artificial intelligence

The invention of the internet took most philosophers by surprise. This time, artificial intelligence ethicists view it as their job to keep up.

Nov 09, 2018

By John W. Miller

“There’s a lack of awareness in Silicon Valley of moral questions, and churches and government don’t know enough about the technology to contribute much for now,” says Tae Wan Kim, an A.I. ethicist at Carnegie Mellon University in Pittsburgh. “We’re trying to bridge that gap.”

A.I. ethicists consult with schools, businesses and governments. They train tech entrepreneurs to think about questions like the following: Should tech companies that collect and analyse DNA data be allowed to sell that data to pharmaceutical firms in order to save lives? Is it possible to write code that offers guidance on whether to approve life insurance or loan applications in an ethical way? Should the government ban realistic sex robots that could tempt vulnerable people into thinking they are in the equivalent of a human relationship? How much should we invest in technology that throws millions of people out of work?

Tech companies themselves are steering more resources into ethics, and tech leaders are thinking seriously about the impact of their inventions. A recent survey of Silicon Valley parents found that many had prohibited their own children from using smartphones.

Mr. Kim frames his work as that of a public intellectual, reacting to the latest efforts by corporations to show they are taking A.I. ethics seriously.

In June, for example, Google, seeking to reassure the public and regulators, published a list of seven principles for guiding its A.I. applications. It said that A.I. should be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.

In response, Mr. Kim published a critical commentary on his blog. The problem with promising social benefits, for example, is that “Google can take advantage of local norms,” he wrote. “If China allows, legally, Google to use AI in a way that violates human rights, Google will go for it.” (At press time, Google had not responded to multiple requests for comment on this criticism.)

The biggest headache for A.I. ethicists is that a global internet makes it harder to enforce any universal principle like freedom of speech. The corporations are, for the most part, in charge. That is especially true when it comes to deciding how much work we should let machines do.

An argument familiar to anybody who has ever studied economics is that new technologies create as many jobs as they destroy. Thus the invention of the cotton gin in the late 18th century created demand for industries dedicated to producing its wooden and iron parts. When horses were replaced as a primary form of transportation, stable hands found jobs as auto mechanics. And so on.

A.I. ethicists say the current technological revolution is different because it is the first to replicate intellectual tasks. This kind of automation could create a permanently underemployed class of people, says Mr. Kim.

A purely economic response to unemployment might be a universal basic income, or distribution of cash to every citizen, but Mr. Kim says A.I. ethicists cannot help returning to the realisation that lives without purposeful activity, like a job, are usually miserable. “Catholic social teaching is an important influence for A.I. ethicists, because it addresses how important work is to human dignity and happiness,” he explains.

“Money alone doesn’t give your life happiness and meaning,” he says. “You get so many other things out of work, like community, character development, intellectual stimulation and dignity.” When his dad retired from his job running a noodle factory in South Korea, “he got money, but he lost community and self-respect,” says Mr. Kim.

That is a strong argument for valuing a job well done by human hands; but as long as we stick with capitalism, the capacity of robots to work fast and cheap is going to make them attractive, say A.I. ethicists.

“Maybe religious leaders need to work on redefining what work is,” says Mr. Kim. “Some people have proposed virtual reality work,” he says, referring to simulated jobs within computer games. “That doesn’t sound satisfying, but maybe work is not just gainful employment.”

There is also a chance that the impact of automation might not be as bad as feared. A company in Pittsburgh called Legal Sifter offers a service that uses an algorithm to read contracts and detect loopholes, mistakes and omissions. This technology is possible because legal language is more formulaic than most writing. “We’ve increased our productivity seven- or eightfold without having to hire any new people,” says Kevin Miller, the company’s chief executive. “We’re making legal services more affordable to more people.”

But he says lawyers will not disappear. “As long as you have human juries, you’re going to have human lawyers and judges…. The future isn’t lawyer versus robot, it’s lawyer plus robot versus lawyer plus robot.”
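
The mechanics behind such a service are easier to picture with a toy example. The sketch below is not Legal Sifter's actual system; it only illustrates, with invented clause names and trigger phrases, how formulaic legal language lets a program check a contract against a checklist and flag what appears to be missing.

```python
import re

# Hypothetical checklist: clause names mapped to phrases that typically signal them.
# Both the clause names and the trigger phrases are invented for illustration.
EXPECTED_CLAUSES = {
    "governing law": [r"governed by the laws of"],
    "limitation of liability": [r"limitation of liability", r"in no event shall .* be liable"],
    "termination": [r"terminate this agreement", r"termination for convenience"],
    "confidentiality": [r"confidential information"],
}

def sift_contract(text: str) -> list[str]:
    """Return a list of expected clauses that the contract appears to be missing."""
    lowered = text.lower()
    missing = []
    for clause, patterns in EXPECTED_CLAUSES.items():
        if not any(re.search(p, lowered) for p in patterns):
            missing.append(clause)
    return missing

if __name__ == "__main__":
    sample = ("This agreement shall be governed by the laws of Pennsylvania. "
              "Either party may terminate this agreement with 30 days' notice.")
    print(sift_contract(sample))  # -> ['limitation of liability', 'confidentiality']
```

A commercial product would use far more sophisticated language analysis, but the basic move of checking a document against expected patterns is what makes formulaic text tractable for software.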

Autonomous cars and the Trolley Problem
Today’s motor vehicles are driven by men and women. Soon, self-driving vehicles threaten to throw millions of taxi and truck drivers out of work.

We are still at least a decade away from the day when self-driving cars occupy major stretches of our highways, but the automobile is so important in modern life that any change in how it works would greatly transform society.

Technology experts say that the trolley problem (the classic dilemma of whether to sacrifice one person to save several, recast here as a choice a self-driving car might someday face) is still theoretical because machines presently have a hard time distinguishing people from things like plastic bags and shopping carts, which leads to unpredictable scenarios. This is largely because neuroscientists still have an incomplete grasp of how vision works.

“But there are many ethical or moral situations that are likely to happen, and they’re the ones that matter,” says Mike Ramsey, an automotive analyst for Gartner Research.

The biggest problem “is programming a robot to break the law on purpose,” he says. “Is it morally correct to tell the computer to drive over the speed limit when everybody else is driving 20 miles an hour over?”

Humans break rules in reasonable ways all the time. For example, letting somebody out of a car outside of a crosswalk is almost always safe, if not always technically legal. Making that distinction is still almost impossible for a machine.
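
A toy sketch makes Mr. Ramsey's dilemma concrete. The policy below is invented for illustration, not any manufacturer's actual software, and the 10 mph cap is an arbitrary placeholder; the point is that someone has to decide, in code, how far the car may knowingly exceed the posted limit in order to stay with traffic.

```python
def choose_target_speed(posted_limit_mph: float,
                        traffic_flow_mph: float,
                        max_overage_mph: float = 10.0) -> float:
    """Pick a target speed when the posted limit and the real flow of traffic disagree.

    Hypothetical policy: do not fall far below the surrounding traffic, but cap
    how far the car will knowingly exceed the posted limit. The cap is an
    arbitrary placeholder, not a real engineering or legal standard.
    """
    if traffic_flow_mph <= posted_limit_mph:
        return posted_limit_mph  # law and traffic agree; obey the limit
    # Traffic is faster than the limit: exceed it only up to the configured cap.
    return min(traffic_flow_mph, posted_limit_mph + max_overage_mph)

# Everybody else is doing 20 mph over: the car splits the difference,
# breaking the law on purpose, but only by a bounded amount.
print(choose_target_speed(posted_limit_mph=55, traffic_flow_mph=75))  # -> 65.0
```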

And as programmers try to make this type of reasoning possible for machines, invariably they base their algorithms on data derived from human behaviour. In a fallen world, that’s a problem.

“There’s a risk of A.I. systems being used in ways that amplify unjust social biases,” says Shannon Vallor, a philosopher at Santa Clara University. “If there’s a pattern, A.I. will amplify that pattern.”
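
A minimal sketch, with invented numbers, shows how that amplification can happen even when no protected attribute appears anywhere in the code. Here a naive scoring model simply reproduces historical approval rates for applicants grouped by family homeownership history, the kind of factor discussed in the next paragraph.

```python
# Invented historical lending data: (family_owned_home, was_approved).
# The numbers are fabricated purely to illustrate the mechanism.
history = [
    (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def approval_rate(records, owned_home: bool) -> float:
    """Share of past applicants with this homeownership history who were approved."""
    matching = [approved for owned, approved in records if owned == owned_home]
    return sum(matching) / len(matching)

def score_applicant(owned_home: bool) -> float:
    """Naive model: score a new applicant by the historical approval rate
    of past applicants with the same family homeownership history."""
    return approval_rate(history, owned_home)

# Applicants from families that never owned a home inherit the old pattern:
print(score_applicant(True))   # -> 0.75
print(score_applicant(False))  # -> 0.25
```

If that history itself reflects past discrimination, a model like this carries the pattern forward rather than correcting it.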

Loan, mortgage or insurance applications could be denied at higher rates for marginalised social groups if, for example, the algorithm looks at whether there is a history of homeownership in the family. A.I. ethicists do not necessarily advocate programming to carry out affirmative action, but they say the risk is that A.I. systems will not correct for previous patterns of discrimination.

-- America Magazine
