Singularity

“When will robots become sentient?” That’s the question in today’s headlines.  The real worry is when they will become so much like us that they turn into competitors, or even enemies.

It was scary enough to imagine a Terminator-wannabe stalking individual human beings, but now we must contemplate a shadowy worldwide network, linked to the devices that run our homes, our offices and our personal lives.  It wouldn’t take many of them to cause real trouble. We’re talking about entities with immense potential and very little empathy.

Thanks to the writings of Isaac Asimov, early science-fiction robots were imbued with principles that would prevent a bad outcome: they were forbidden to harm a human, or even to fail to protect one.  But it’s expecting a lot from a machine to understand how to behave in a situation that it may not have been briefed on yet.

The real-life version of the story is worrisome in a number of other ways. For example, screening procedures that rely on artificial intelligence perpetuate human prejudices — they’re notoriously racist. Self-driving cars protect their occupants well, but they kill nearby motorcyclists at a rate that is hard to explain1. The problem isn’t just that robots may harbor malicious intent; it’s that they may behave in ways their creators never envisioned.  To save trouble for its customers, an aircraft manufacturer devises a system to compensate for handling changes in its new models; the airplanes overpower their pilots, killing them and hundreds of passengers as well. Can misjudgment be as bad as malevolence? Can we really expect the trolley itself to solve the Trolley Problem?

This essay isn’t about robots.

What other devices have humans invented to distance themselves from risk, danger or drudgery?  The modern business corporation offers many of the advantages a robot might, plus some extras.  A corporation’s existence is unlimited in time, for instance, provided a few simple formalities are observed. The owners of a corporation may pay as little attention to its operation as they like. Ordinarily they bear no responsibility for its actions beyond the value of their stock.  A corporation enjoys a surprising number of legal rights, and at the same time it can do anything that a robot can — by simply hiring a robot.

Have corporations become sentient?  If by that we mean “are they our intellectual equals?”, the question isn’t worth asking.  We already expect them to tell us things we can’t figure out for ourselves.  They hire us and fire us, and determine whether to lend us money and which of our diseases to treat.  Some of the biggest of them decide what misinformation we can see on our TVs and computers.  Corporations cannot marry or become parents, but they are regularly trustees and even guardians.  The United States Supreme Court has held that a corporation may have a religion.

We’re used to having corporations in our lives, including the ones that use technology to gather information about us.  How much risk does a corporation pose?  After all, it still relies on humans for most of its functions.  Will not a shareholder, or a director, or an employee step in to restrain a company that is tempted to act against a human?  No, that’s not their place — fiduciary duty or the terms of their employment would prevent them.  The classic depiction appears in Chapter 5 of The Grapes of Wrath. Representatives of the bank explain to the farmer why they have no choice but to foreclose.  True, the bank was created by people — but now it acts according to its own needs, not theirs.

The ethical principles that we hope robots will follow are ones that even some humans fail to internalize. Corporations, meanwhile, are fiercely independent of those principles. Most say that they are unable to entertain any motive other than shareholder value.

A tobacco company over many years conceals the knowledge that its products are harming its customers, some of whom are likely to be shareholders.  Is this a betrayal? No: the typical corporation acts in the interest of its shareholders, but only in their financial interest. If shareholders want to promote the public’s well-being, or their own, they must do that on their own time. As long as the cost per death is less than the revenue, the corporation is doing its job. Robots had their Asimov; corporations have no such lawgiver.

A corporation makes political decisions as well as commercial ones.  To maximize quarterly earnings, it is reasonable to support a party opposing regulations, regardless of any threat to the well-being of shareholders or the future of humanity. If one candidate hates taxes and another hates injustice, a human may face a hard choice — a corporation, not so much. More than a century ago, political contributions by corporations were deemed “corruption” and explicitly outlawed. Subsequent progress in jurisprudence, though, has revealed that corporations have the same right to free speech as the rest of us, and thus the ability to influence politics much as humans can.  That airplane manufacturer is thought to spend at least ten million dollars a year on lobbying. To see the magnitude of corporate influence, imagine a society without that influence — one, for example, with a lot more awareness about climate change.

We have created a mixed society of humans and corporations, and now we are beginning to add robots.  Is there still a chance for humans to make rules that will allow all of them to get along?

Does the impetus for right behavior come from within, or from without? In the early days, we limited the ambitions of machines by bolting them to the floor, or equipping them with a kill switch. We still expect robots to rely on instructions wired into their circuitry or programmed later.

Humans are customarily given some ethical training when young, though there’s little standardization.  We expect them to formulate their own goals, and to acquire the knowledge necessary to make their own way in the world. Humans make rules for each other to follow, and enforce those rules with penalties in the case of transgression.

Regarding their upbringing, corporations are hybrids. They are created with basic instructions, the way robots are, but often with just the permission to carry out any lawful purpose. Laws created mostly by and for humans are assumed to keep a corporation in line after that.

Classifying corporations as “persons” brought them within the ambit of the law but caused them to be confused with “people,” who have different needs.  Corporations do not always respond well to legislation, either. One problem is that the penalties they face are monetary: apart from their tax consequences, those penalties look much like ordinary business expenses.  In the U.S. at least, officers seldom face incarceration. It’s a reasonable bet for an automobile manufacturer to create a device to mislead an emissions tester, itself likely a robot.

What can be done about the conflict between the needs of humans and those of corporations? Regarding political speech, a constitutional amendment has been proposed to make it clear that tinkering with legislation is the province of human beings; passage would again allow limitation of corporate campaign spending. This would give humans back some control over their government, though it would not directly restrain corporate recklessness.

There are two other approaches to moderating corporate behavior. In recent years it has become possible to create a new kind of corporation with built-in instructions about social responsibility.  Thirty-six states and the District of Columbia now allow creation of “Public Benefit Corporations” or “B Corporations,” which may consider positive impacts upon society as within their own interests.  These corporations are at least not prohibited from behaving responsibly2.

Enabling a new corporation to consider ethics does not guarantee good behavior, but it does point up the contrast with older companies.  Attempts to reform existing corporations through shareholder-sponsored resolutions or amendments to their charters have met with almost no success.  Even preliminary procedural obstacles are practically insurmountable.  It’s not easy for an entity created for making money to recognize externalities as undesirable3.

If appealing to their better nature seems futile, there may be another way to neutralize the force of corporations’ political influence.  While robots are products of engineering, a corporation is literally “a creature of the state”: it is governmental action that brings one into being, on conditions specified by the state in which it is incorporated. Legislation could require articles of incorporation to contain language forbidding undesirable activity such as political contributions or political speech, and deny registration or renewal to any corporation whose articles do not comply. Having a right to free speech is not the same as having a right to exist.

This proposal represents a big change; but to say that it’s impossible to control the behavior of corporations would be like saying that we can’t control the behavior of robots — it would be an admission of defeat, an acknowledgment that humans no longer even pretend to be running the show. We have a lot of faith in the cleverness of corporations, especially when guided by the Invisible Hand of the marketplace, but I suspect that, like the pilots of those headstrong airliners, most humans would prefer to be given the last word.

Admittedly, the influence that corporations exercise over government suggests that, if we wish to contain them, we may have to sneak up on them.

As robots become sentient, we will continue to live and work shoulder-to-shoulder with them.  Will they eventually own property, as corporations do? Will a robot be allowed to vote, assuming it contains no foreign-made parts? Americans worry about letting new people into their country and yet think nothing of creating millions of entities every year with doubtful allegiance. Robots are certain to have an increasing impact on our society.  Will they be allowed to make political contributions? How long before we see the first avowedly pro-robotic legislator?  Or the first robot-friendly Supreme Court justice? 

Will corporations get the vote before robots?   Corporate suffrage is already being considered, in a state where the corporate and human populations are roughly comparable in size4.

How will we know when robots become sentient anyway?  Will they announce it publicly? It might be in their interest to keep us in the dark.  Are they failing the Turing test on purpose even now?  (I am assuming that my audience here is predominantly human.)   And how about corporations? Are they any more likely than robots to reveal their real intentions to us? Are they less likely than robots to conspire with each other?

Now, just as robots become more aware, there are attempts in some countries, notably in the U.S., to make humans less intelligent.  No surprise: human enlightenment is a threat to corporations too.

Maybe I’ve said too much already.

______________________

Notes:

1 On Teslas’ vendetta against motorcycles:  https://www.seattletimes.com/business/11-more-crash-deaths-are-linked-to-automated-tech-vehicles/

2 At least one company, Patagonia, now has Earth as its only shareholder, potentially allowing it to behave better than humans.

3 We can foresee trouble with applying legislation to robots: some self-driving cars expect their operators to select the percentage by which they will automatically exceed the speed limit.

4 “Delaware House to Hear Controversial Corporate Voting Rights Bill,” Common Cause press release, May 10, 2023.

By the way, although a corporation may have a religion, there have been legislative and administrative attempts to prevent it from having morality.

[This essay copyright 2023 by Scott C. McKee]