The Robotic False Self

In the context of Simple Reality we have used robots as metaphors for the false self. Now it seems the two are beginning to meld into one another. Could it be true that in the future we may not be able to tell “Hal” from Harold, “Her” from Harriet?

A psychopathological robot like Hal made for good entertainment in a Hollywood sci-fi film, but surely robots are no threat to us because we make them, we program them—right? What a relief! We can all relax. But wait! Who is the “we” that programs them? Uh-oh! It is the false self. Hello, Houston—we have a problem.

Nicholas Carr, former executive editor of the Harvard Business Review and author of several books on technology, pointed out in his book The Shallows that the Web has had a detrimental effect on our ability to read, think and reflect. In his new book The Glass Cage: Automation and Us, Carr explains “how certain aspects of automation technology can separate us from, well, Reality.” Here is our first clue of how robots and the false self are becoming indistinguishable. The false self’s survival strategy behaviors are precisely what prevents us from experiencing Simple Reality. We created the false self expressly to distract us from living in the present moment. Let’s continue this fascinating analysis.

What we know about most of us is that we seek activities that distract us from our pain and suffering and, conversely, avoid behavior that makes us aware of what’s really happening in our lives—especially anything associated with simplicity, solitude and silence. We will avoid introspection at all costs. “In 11 experiments involving more than 700 people, the majority of participants reported that they found it unpleasant to be alone in a room with their thoughts for just 6 to 15 minutes.”

All humanity’s problems can be traced to our inability to sit alone in a room for 15 minutes.

There is nothing funny about our inability to begin the process of waking up, but comedian Louis C.K. has managed to create a riff, watched more than eight million times on YouTube, that zeroes in on what it feels like to not be comfortable in our own skin—our false-self skin, that is. “‘Sometimes when things clear away and you’re not watching anything and you’re in your car and you start going, oh no, here it comes, that I’m alone, and it starts to visit on you, just this sadness,’ he said. ‘And that’s why we text and drive. People are willing to risk taking a life and ruining their own because they don’t want to be alone for a second because it’s so hard.’”

Psychologist Stephanie Brown, author of Speed: Facing Our Addiction to Fast and Faster and Overcoming Our Fear of Slowing Down, says, “There’s this widespread belief that thinking and feeling will only slow you down and get in your way, but it’s the opposite.” Actually, as we have learned from our sages, it is thinking and feeling (as in emotions) that block our entering the present moment, where we could actually put our mind at rest. How ironic!

Aided and abetted by our devices we seem to have stepped onto a treadmill that continually accelerates, or as some would say we have “stepped onto a slippery slope.” Working to quiet our “mind chatter,” to medicate the symptoms showing up in our bodies and to distract ourselves from our afflictive emotions only serves to push us to continually work harder to suppress those reactions and give them more power over our behavior.

Even former technology enthusiasts like Sherry Turkle are forced to acknowledge the evidence that continues to pile up that makes us ask the question, who is really in control of human behavior? “She holds an endowed chair at M.I.T. and is on close collegial terms with the roboticists and affective-computing engineers who work there.”

In her latest book, Reclaiming Conversation: The Power of Talk in a Digital Age, she gains insights through interviews on how social media are affecting the beliefs, attitudes and values of Americans enamored of their devices. Reviewing her book, Jonathan Franzen reveals one of her conclusions: “Our rapturous submission to digital technology has led to an atrophying of human capacities like empathy and self-reflection … The people she interviews have adopted new technologies in pursuit of greater control, only to feel controlled by them.” Again we have to remind our readers that the problem is not our beloved and much-coveted devices, which give us the illusion of control, but the false-self need to be in control of that which cannot be controlled (and doesn’t need to be controlled); that need is the source of our shrinking ability to access our compassion for one another. “A recent study shows a steep decline in empathy, as measured by standard psychological tests, among college students of the smartphone generation.”

What are we beginning to realize about our much-vaunted intellect and its ability to mesmerize us with its marvelous technology? “How, for all its miraculous-seeming benefits, automation also can and often does impair our mental and physical skills, cause dreadful mistakes and accidents, particularly in medicine and aviation, and threaten to turn the algorithms we create as servants into our mindless masters—what sci-fi movies have been warning us about for at least two or three decades now. (As Carr puts it near the end of The Glass Cage, when we become dependent on our technological slaves … we turn into slaves ourselves.)”

How does technology adversely affect our physical health? In 2005, the RAND Corporation predicted that electronic medical records could save more than $81 billion annually, but it turns out that the computer screen interposes itself between the patient and the doctor with a detrimental effect. “Studies have proved that checking records, possible diagnoses and drug interactions on a computer during a medical examination can interfere with what should be not only a fact-based investigation but a deeply human, partly intuitive and empathetic process.” When our technology begins to reduce the ability of our doctors to respond to their patients in the present moment with compassion, that should give us pause: are we headed in the right direction?

Now to the implications for aviation. Pilots are becoming increasingly dependent on automated flying. “A heavy reliance on computer automation can erode a pilot’s expertise … leading to what Jan Noyes, a human-factors expert at Britain’s University of Bristol, calls ‘a deskilling of the crew.’” Since a key principle of human awakening is self-reliance, dependence on robots is clearly self-limiting and ultimately self-destructive.

Are we being too pessimistic or even paranoid about Artificial Intelligence? Tesla’s chief executive Elon Musk suggests proceeding carefully with the development of A.I., “just to make sure that we don’t do something very foolish.”

Bill Gates said he didn’t “understand why some people are not concerned [about] super intelligence.”  Stephen Hawking sounds the most piercing alarm by scientists who are concerned about the possibility of a dark side to developing human-like robots. He warned that “the development of full artificial intelligence could spell the end of the human race.”  Is Stephen Hawking overreacting? Probably not, as we shall see.

John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots, seems to share Hawking’s concern. “The same technologies that extend the intellectual power of humans can displace them as well.”

Even something so apparently harmless as a child’s doll can morph into a threatening A.I. toy in the hands of a false self intent on making a profit. The doll officially named Barbara Millicent Roberts, or “Barbie,” made her appearance at the New York Toy Fair in 1959. With over a billion dollars in sales and a “fandom” of millions of little girls, what could be the problem? Well, for starters: “Protestors at the 1972 Toy Fair complained that Barbie and other dolls encouraged girls ‘to see themselves solely as mannequins, sex objects or housekeepers,’ according to an account in The New York Times.”

“A 2006 study in the journal Developmental Psychology bluntly concluded that ‘girls exposed to Barbie reported lower body esteem and greater desire for a thinner body shape.’” If that’s not enough cause for alarm, wait until we see what Mattel is up to now. What if the toymaker could create the illusion that Barbie was a sentient being? Introducing “Hello Barbie” in her latest new and improved incarnation. But first, a little background on the interface between toys, dolls and A.I.

“In the 1960s, a computer scientist, Joseph Weizenbaum, created a computer program called Eliza, which could pretend to be a psychotherapist via a simple text interface. As Weizenbaum later wrote, ‘I was startled to see how quickly and how very deeply people … became emotionally involved with the computer and how unequivocally they anthropomorphized it.’” We have just introduced the problem facing all of humanity in our attempts to create a sustainable human community: what’s real and what isn’t, or in this example, what’s human and what isn’t?

M.I.T. roboticists Cynthia Breazeal and Brian Scassellati, along with psychologist Sherry Turkle, introduced children to the robots Cog and Kismet in 2001. The robots, unlike Hello Barbie, couldn’t speak; they engaged the children through eye contact, gestures and facial expressions. Like pulling back the curtain on the Wizard of Oz, the researchers showed the children how the deceptively real robots worked and let them control the robots themselves. Nevertheless, “Most children said they believed that Kismet and Cog could listen, feel, care about them and make friends.”

Imagine how lifelike Hello Barbie will seem to a 3-to-8-year-old with her 8,000 lines of realistically responsive dialogue. In the late 1990s, Noel Sharkey, a professor at the University of Sheffield in England, was studying the interface of robotics and ethics. His daughter ended up being strongly attached to her Tamagotchi. “We had to break it away from my daughter in the end, because she was obsessed with it. It was like, ‘Oh, my God, my Tamagotchi is going to die.’”

Mattel is not trying to cause our children psychological problems with Hello Barbie, but in its understandable need to make a profit the toy manufacturer may be fostering unintended consequences. To sell more dolls, we can trust that Mattel will use A.I. technologies to make the doll more likeable. “The first thing you’re going to do is to try and create stronger and stronger emotional bonds.” The danger is that a synthetic friendship may take the place of the real kind. Sharkey continues, “If you’ve got someone who you can talk to all the time, why bother making friends?”

University of Washington professor of psychology Peter Kahn studies human-robot interaction and defines the “domination model” relationship, in which the child makes all the demands and receives all the rewards but feels no responsibility to the robot. In other words, the robot becomes a kind of A.I. other. “This, he says, is unhealthy for moral and emotional development. At worst, the human can begin to abuse his power. In a study conducted at a Japanese shopping mall a couple of years ago, for instance, researchers videotaped numerous children who kicked and punched a humanoid robot when it got in their way.”

How do we teach children to distinguish between illusion and reality when we adults have yet to grasp the difference? Remember that a person’s identity is determined by their worldview and that our worldview is composed of our BELIEFS, attitudes and values. Psychologist Sherry Turkle completes our circle. “It’s not that we have really invented machines that love us or care about us in any way, shape or form, but that we are ready to BELIEVE [emphasis added] that they do. We are ready to play their game.”

Speaking of games, IBM’s A.I. system called “Watson” defeated human champions in the quiz show game “Jeopardy” in 2011. Like Mattel, IBM wants to find ways to commercialize this A.I. technology. “On Thursday [September 24, 2015], IBM announced new capabilities in Watson services like speech [Hello Watson?], language understanding, image recognition and sentiment analysis. These humanlike abilities such as seeing, listening and reasoning are those associated with artificial intelligence in computing. IBM calls its approach to A.I. ‘cognitive computing.’”

It is obviously easier to program a computer to emulate the human intellect (cognitive computing) than human affective behavior. We should all be concerned that we don’t create a form of A.I. that lacks our best human qualities, such as morality and compassion. That raises the question: just how much morality and compassion are being expressed around the global village today? Not much, regrettably. Perhaps we could program a robot to better express our True self than we ourselves have been able to do so far in our history of human interaction. What would that entail?

“Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong. Scheutz [Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University] defines ‘morality’ broadly, as a factor that can come into play when choosing between contradictory paths.”

Simple Reality empowers humans to choose between the contradictory “paths” of reaction and response in order to express morality and compassion, in order to create a sustainable community. Perhaps robots could be programmed to distinguish between self-destructive behavior and life-enhancing behavior. If so, A.I. may be able to support humanity in making the paradigm shift that seems to be our only hope for survival.

Just kidding! If you took that last sentence seriously you are not paying attention to the content found in the Simple Reality Project. You are grasping at straws. In P-B there are two so-called “paths,” neither of which leads anywhere, and the “leaders” beckoning us to follow them are, in conventional religious terms, Satan (Christianity) or Maya (Buddhism), and technology (e.g., A.I.). Neither religion nor the human intellect holds out any hope because they are both creations of our P-B identity.

Anthropomorphism is the great seducer in both cases described above. As robots become more human-like, as they obviously will, we will be tempted to surrender more of our agency to them and shrink from both our responsibility and our opportunity to become self-reliant and to engage in the process of authentic transformation, which is the only true power that we have and the only power that we need. If we listen to our false self, our robots will become more anthropomorphic as we become more like robots.

On the other hand, if we continue to listen to pseudo-religious leaders invoking the divine illusion of an anthropomorphic God, we will also continue to stagger forward, sleepwalking toward chaos and the collapse of civilization. Our false self will continue to project our anxiety, guilt, shame and regret on the “Sky God” or the blameless other living next door. The Implicate Order (Creative Intelligence) can be trusted to guide us in expressing our True self if we reject reaction and embrace response. In doing so we embrace the perfection of Creation and we ourselves become creators of both Truth and Beauty.


References and notes are available for this essay.
Find a much more in-depth discussion in the Simple Reality books:
Where Am I?  Story – The First Great Question
Who Am I?  Identity – The Second Great Question
Why Am I Here?  Behavior – The Third Great Question
Science & Philosophy: The Failure of Reason in the Human Community
