Amazon Developing Wearable Tech That Reads Human Emotions

Built-in microphones could analyze the wearer's speech tones

By Jonny Lupsha, News Writer

Amazon is working on a smartwatch that reads your emotions, Bloomberg reported. The wearable will pair with a mobile app and is being developed by the hardware team responsible for Amazon’s Fire and Echo products. The technology brings us a step closer to robots that think abstractly, a classic science-fiction theme.

Amazon is developing a wearable watch that listens to and analyzes speech patterns to detect emotional state. Photo by: Zyabich/Shutterstock

According to Bloomberg, Amazon’s new gadget will feature built-in microphones that listen to and analyze the wearer’s speech patterns to detect his or her emotional state. While it’s unclear whether the product will ever reach the market, Bloomberg’s source claimed that beta testing of the technology is currently underway. Robots that can interpret the emotions attached to speech, rather than merely recognize the words being said, are a regular subject of science fiction, one that may now be edging toward “science fact.”
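Bloomberg’s reporting doesn’t say how Amazon’s system would actually work; the details are proprietary. As a loose illustration of the general idea, though, speech-emotion systems commonly start from simple acoustic features of the voice signal. The toy sketch below classifies a frame of audio as “calm” or “agitated” from two standard features, RMS energy and zero-crossing rate; every function name and threshold here is invented for illustration and bears no relation to Amazon’s device.

```python
import math

def rms_energy(samples):
    """Root-mean-square energy of an audio frame (loudness proxy)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs that change sign --
    a crude proxy for pitch/tension in speech."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(samples) - 1)

def classify_emotion(samples, energy_threshold=0.3, zcr_threshold=0.2):
    """Toy rule: loud, rapidly oscillating speech reads as 'agitated',
    everything else as 'calm'. Thresholds are arbitrary."""
    loud = rms_energy(samples) > energy_threshold
    tense = zero_crossing_rate(samples) > zcr_threshold
    return "agitated" if loud and tense else "calm"

# Synthetic frames standing in for microphone input:
# a quiet low-frequency tone vs. a loud high-frequency one.
calm_frame = [0.1 * math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
tense_frame = [0.8 * math.sin(2 * math.pi * 20 * t / 100) for t in range(100)]

print(classify_emotion(calm_frame))   # calm
print(classify_emotion(tense_frame))  # agitated
```

A real system would use far richer features and a trained model rather than hand-set thresholds, but the pipeline shape (extract acoustic features, then map them to an emotional label) is the same.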

Predecessors of Modern Robots

Humanity has long toyed with the idea of the synthetic person. As we inch closer to technology that can interpret emotional intricacies of human speech, it’s amazing how far we’ve come since our first ideas of an artificial human. “Greek mythology gave us the legendary Talos, a giant made of bronze by the god Hephaestus to protect the island of Crete,” said Dr. Gary K. Wolfe, Professor of Humanities in Roosevelt University’s Evelyn T. Stone College of Professional Studies. “And medieval Jewish lore gave us the notion of the golem, a human-like figure made of clay which could be brought to life by inscribing a hidden name of God on a piece of paper and inserting it into the figure’s mouth.”

However, Czech author and playwright Karel Čapek first coined the word “robot,” which, according to Dr. Wolfe, comes “from a Czech word referring to forced labor, related to the word for ‘slaves.’” Čapek’s 1920 play Rossum’s Universal Robots told the tale of humanoid figures made in a factory from living parts. Two caveats persist in his work, though. First, technically, Čapek’s robots more closely resemble “The Monster” from Mary Shelley’s Frankenstein—or what are now called androids—than they do most robots. Second, as Dr. Wolfe explained, Čapek’s interest in using robots as an allegory for worker exploitation, and even as a satire of racism, far exceeded his interest in technology.

Seven years after Rossum’s Universal Robots debuted, Fritz Lang’s Metropolis arrived on the silver screen, featuring the first robot character in a major motion picture. Since then, robots have permeated sci-fi with no signs of stopping.

The Three Laws of Robotics

Questions of business ethics have multiplied in recent years as both smart tech and the “Internet of Things” have become more prevalent in society. How do companies manage the information they gather from us through our consumer technology? Where does privacy fit in? The other side of this coin is the question of the morality our robots should follow, one that will likely flare up again should Amazon’s emotion-reading tech reach the market. The best-known template for robot ethics comes from legendary sci-fi writer Isaac Asimov.

Asimov was a staunch believer in the benefits technology could provide to humans. “Together with the legendary science fiction editor John W. Campbell, Jr., he tried to alleviate fears of robot uprisings by coming up with his Three Laws of Robotics, which have proven to be one of the most durable, enduring ideas in science fiction,” Dr. Wolfe said. “They’re essentially programming rules.”

Asimov’s Three Laws of Robotics are as follows. First, a robot may not injure a human being—or, through its inaction, allow a human to be harmed. Second, a robot must follow the orders a human gives it, unless those orders conflict with the first law. Third, a robot must protect its own existence unless doing so conflicts with the first two laws. “This is really kind of a clever logical trap, with each law overruling the one below it,” Dr. Wolfe said. “The idea was to set up a system of simple rules that would prevent, for example, somebody using a robot to commit murder.”
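Dr. Wolfe’s point that the laws are “essentially programming rules,” with each law overruling the one below it, can be sketched directly as code. The sketch below is purely illustrative: the action flags and the `permitted` function are hypothetical constructs invented here, not any real robotics API, and they capture only the precedence ordering, not the judgment calls the laws would actually require.

```python
# A toy encoding of Asimov's Three Laws as a priority-ordered rule check.
# An "action" is a dict of hypothetical boolean flags; higher laws veto
# actions the lower laws would allow, mirroring the "logical trap" ordering.

def permitted(action):
    """Return True if the action passes all three laws, checked in priority order."""
    # First Law: never injure a human, nor allow harm through inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders, unless obedience conflicts with the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, overruled by both laws above.
    if action.get("destroys_self") and not action.get("needed_for_higher_law"):
        return False
    return True

# Using a robot to commit murder is vetoed at the top of the chain.
print(permitted({"harms_human": True}))  # False

# Refusing an order is permitted when the order itself would harm a human.
print(permitted({"disobeys_order": True,
                 "order_would_harm_human": True}))  # True
```

The interesting behavior, and the source of Asimov's story conflicts, lives in the exception clauses: each lower law is only as binding as the laws above it allow.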

Fortune favored Asimov again when he realized that these three laws served as a fruitful impetus for story conflicts in science fiction, especially when one law clashed with another. He wrote celebrated robot stories for more than 40 years, many of which he compiled into the 1950 collection I, Robot.

Science fiction has often provided us with glimpses into the future and presented us with questions of personal morals and values. If Amazon develops its emotion-reading watch, will we like what it says about us? What if we ask a friend whether they’re lying to us and their watch reports that they sound anxious as they answer? Humanity finds itself in an interesting time in its technological development, and if writers like Asimov and Čapek were right, it may be on the verge of getting a lot more interesting.

Dr. Gary K. Wolfe contributed to this article. Dr. Wolfe is a Professor of Humanities in Roosevelt University’s Evelyn T. Stone College of Professional Studies. He earned his B.A. from the University of Kansas and his Ph.D. from the University of Chicago. Dr. Wolfe has earned many awards, including the Pilgrim Award from the Science Fiction Research Association.

About Jonny Lupsha, News Writer
Jonny is a freelance writer and novelist who lives in Sterling, Virginia. He has written for The Great Courses since 2017 and enjoys studying the courses as much as writing about them. Contact Jonny at news@thegreatcoursesdaily.com