The human brain is commonly described in terms of computing. One might assume computers outperform humans because of the speed and ease with which they handle large quantities of data. However, examples such as Shakuntala Devi and Garry Kasparov illustrate that even highly optimized artificial intelligence can still be outperformed by the human brain (Hassan & Rizvi, 2019). In 1977, Devi calculated the twenty-third root of a number more than one hundred digits long in fifty seconds, twelve seconds faster than the world’s fastest computer at the time (Hassan & Rizvi, 2019). Kasparov, meanwhile, is the chess player known for his 1996 defeat of Deep Blue, an artificial intelligence (AI) designed by IBM (Hassan & Rizvi, 2019). Despite that initial win, Deep Blue went on to defeat Kasparov a year later, and the world’s top-rated chess player today is a computer engine, the 64-bit Houdini 3 (Hassan & Rizvi, 2019). Yet these are only two examples. There are many ways to compare humans and computers, and plenty of debate over whether and how the human brain could be simulated.

In 2011, it was reported that computers had surpassed the human brain, with Fujitsu’s K supercomputer “[computing] four times faster and [holding] ten times as much data” (Fischetti, 2011). The K, though computationally powerful, draws 9.9 million watts, compared with the roughly twenty watts reported to run the human brain (Fischetti, 2011). In 2015, however, the human brain was estimated to outperform IBM’s Sequoia supercomputer, which held the record for most traversed edges per second (Hsu, 2015). One critical difference between humans and computers is how they are designed to problem-solve. Most AI uses a “brute force” method, solving as many computations as possible, while the human brain is “made for general purposes, not specifically just for computational jobs” (Hassan & Rizvi, 2019). A computer may out-solve a human brain at trainable or optimizable tasks, but it cannot manage the kinds of simultaneous processes a human brain performs (Hassan & Rizvi, 2019).
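The “brute force” style of computation mentioned above can be sketched with a toy game solver. The game and code below are illustrative only (they are not how Deep Blue worked): in this simple Nim variant, players alternately take one to three stones and whoever takes the last stone wins. The game tree is small enough to search exhaustively, which is exactly the point — the machine wins by evaluating every continuation, not by insight.

```python
# A minimal sketch of brute-force game search: exhaustively evaluate
# every possible continuation of a toy Nim game (take 1-3 stones per
# turn; taking the last stone wins).

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """Return True if the player to move can force a win."""
    # Try every legal move; if any move leaves the opponent in a
    # losing position, this position is winning.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, else None."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None

print(best_move(5))  # taking 1 leaves the opponent a losing position
```

Even this tiny solver examines every branch of the game tree — the strength and the limitation of the brute-force approach the article describes.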

This paradox frames the debate over whether and how the human brain could be digitally simulated. While AI excels at simulating connections and handling data at scale, it is difficult to build machines that can process figurative language, analyze sentiment, perform situation-based processing, or apply common sense. Experts “[argue] that human thinking is not only highly metaphorical but that metaphors mediate human behavior and reasoning” (Neuman et al., 2013). For example, the phrase “my lawyer is a shark” leads an English-speaking human to conclude that the lawyer is cunning and fierce; a computer, by contrast, would interpret the phrase to mean the lawyer literally is a shark (Neuman et al., 2013).

Strategies for identifying metaphors include Word Sense Disambiguation (WSD), clustering, and the use of words’ categorization (Neuman et al., 2013). WSD “is the field of research in which algorithms are developed to disambiguate the sense of a word in context” (Neuman et al., 2013). Other strategies include teaching AI new uses of words and optimizing AI to predict whether a text is literal or figurative (Neuman et al., 2013). Sentiment is difficult for AI to analyze because it relies on subjective information combined with audio and visual modalities as well as personal and cultural factors (Poria, Cambria, Howard, Huang, & Hussain, 2016).
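One intuition behind using words’ categorization to flag figurative language can be sketched in a few lines. This is a toy illustration, not the method of Neuman et al. (2013), and the miniature “lexicon” below is invented for the example: when “X is a Y” pairs words from incompatible semantic categories (a PERSON with an ANIMAL), a literal reading is implausible, so the phrase is likely figurative.

```python
# Toy metaphor flagger: "my lawyer is a shark" pairs a person with an
# animal predicate, so a literal reading is implausible. The category
# lexicon here is invented for illustration.

CATEGORY = {
    "lawyer": "person",
    "broker": "person",
    "shark": "animal",
    "wolf": "animal",
    "hammerhead": "animal",
}

def likely_figurative(subject: str, predicate_noun: str) -> bool:
    """Flag 'X is a Y' as figurative when X and Y belong to
    incompatible semantic categories."""
    subj_cat = CATEGORY.get(subject)
    pred_cat = CATEGORY.get(predicate_noun)
    if subj_cat is None or pred_cat is None:
        return False  # unknown word: make no claim
    return subj_cat != pred_cat

print(likely_figurative("lawyer", "shark"))      # my lawyer is a shark -> True
print(likely_figurative("hammerhead", "shark"))  # literally a shark -> False
```

Real systems replace this hand-built lexicon with large corpora and learned sense inventories, but the category-mismatch signal is the same.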

In the case of common sense and situation-based processing, AI struggles to distinguish between the data it is given and the context that data appears in. For example, an AI was trained to differentiate between photographs of dogs and wolves (Ribeiro, Singh, & Guestrin, 2016). Because the AI analyzed every component of each image, not just the animal in it, images of dogs taken outdoors were often miscategorized as wolves, since the wolf photographs were always outdoors (Ribeiro, Singh, & Guestrin, 2016). A human can easily tell an outdoor dog from a wolf; for AI, it is far more difficult.
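The dog-versus-wolf failure can be reconstructed in miniature. The features and data below are invented for illustration (the original experiment used real photographs and a neural network): a deliberately simple learner is allowed to pick whichever single feature best predicts the label, and because every training wolf has a snowy background, the background wins over the animal itself.

```python
# Toy reconstruction of the dog-vs-wolf spurious-feature failure:
# a "1-rule" learner picks the single binary feature that best
# predicts the label on the training set.

def train_one_rule(examples):
    """Return the feature whose value best predicts label == 'wolf'."""
    features = examples[0][0].keys()
    best = None
    for f in features:
        correct = sum((x[f] == 1) == (y == "wolf") for x, y in examples)
        if best is None or correct > best[1]:
            best = (f, correct)
    return best[0]

# Training set: every wolf photo has a snowy background; no dog photo does.
train = [
    ({"pointed_ears": 1, "snow_background": 1}, "wolf"),
    ({"pointed_ears": 1, "snow_background": 1}, "wolf"),
    ({"pointed_ears": 0, "snow_background": 0}, "dog"),
    ({"pointed_ears": 1, "snow_background": 0}, "dog"),
]

rule = train_one_rule(train)
print(rule)  # snow_background: the spurious cue separates labels perfectly

# A dog photographed in the snow is now misclassified as a wolf.
outdoor_dog = {"pointed_ears": 0, "snow_background": 1}
print("wolf" if outdoor_dog[rule] == 1 else "dog")
```

The learner is not “wrong” by its own lights — the background genuinely predicts the training labels — which is precisely why such shortcuts are hard to detect without explanation tools like the one Ribeiro et al. built.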

Though these tasks are difficult for computers, HBSF Fellow Erik Cambria founded SenticNet to “[help] machines learn, leverage, [and] love” (Cambria, 2020). SenticNet’s approach is both top-down and bottom-up: top-down because it leverages symbolic models such as semantic networks and conceptual dependency representations to encode meaning, and bottom-up because it uses sub-symbolic methods such as deep neural networks and multiple kernel learning to infer syntactic patterns from data. SenticNet was conceived in 2009 at the MIT Media Laboratory within an industrial Cooperative Awards in Science and Engineering program, and it designs emotion-aware intelligent applications (Cambria, 2020). Among its many ongoing projects, highlights include improving human-computer interaction, capturing and analyzing market sentiment, and AI for social good, e.g., suicidal ideation detection.

Another project aiming to improve AI understanding of metaphors involved HBSF founder Newton Howard and HBSF board member Sergey Kanareykin. The Autonomous Dynamic Analysis of Metaphor and Analogy (ADAMA) project brought together researchers from Illinois Tech, the Massachusetts Institute of Technology, Georgetown University, Ben-Gurion University of the Negev, Bar-Ilan University, and the Center for Advanced Defense Studies in Washington, DC (Illinois Tech, 2012). Relying on databases and text corpora, the ADAMA project created a way for AI to identify conceptual metaphors with limited human input (Gandy et al., 2013). Rather than simply labeling a text as literal or figurative, this AI analyzes how a metaphor is constructed in terms of its target and source domains.
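What “analyzing a metaphor in terms of target and source domains” means can be illustrated with a small sketch. This is not the ADAMA system — the property lists below are invented for the example: the literal topic (the target, e.g., LAWYER) is understood through salient properties borrowed from another domain (the source, e.g., SHARK and predation).

```python
# Toy illustration of conceptual-metaphor structure: map salient
# source-domain properties onto a target concept. Property lists
# are invented for illustration.

SOURCE_PROPERTIES = {
    "shark": ["aggressive", "relentless", "dangerous to cross"],
    "snail": ["slow", "deliberate"],
}

def interpret_metaphor(target: str, source: str) -> dict:
    """Return a structured reading of 'TARGET is a SOURCE'."""
    props = SOURCE_PROPERTIES.get(source, [])
    return {"target": target, "source": source, "transferred": props}

reading = interpret_metaphor("lawyer", "shark")
print(reading["transferred"])  # the traits a hearer carries over to the lawyer
```

A system like ADAMA must discover these domain mappings from corpora rather than from a hand-written table, which is what makes the problem hard.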

Through projects such as SenticNet and ADAMA, researchers are closing the gap between artificial and human intelligence. This not only allows for better human-computer communication but also offers insight into human brain dysfunction. You can support HBSF’s research on the brain and neurodegenerative disease by donating here.

Written by Senia Hardwick

References

Cambria, E. (2020). SenticNet. https://sentic.net/

Fischetti, M. (2011, November 1). Computers versus Brains. Scientific American. https://www.scientificamerican.com/article/computers-vs-brains/

Gandy, L., Allan, N., Atallah, M., Frieder, O., Howard, N., Kanareykin, S., Koppel, M., Last, M., Neuman, Y., & Argamon, S. (2013). Automatic identification of conceptual metaphors with limited knowledge. Proceedings of the 27th AAAI Conference on Artificial Intelligence, AAAI 2013, 328–334.

Hassan, M. A., & Rizvi, Q. M. (2019). Computer vs human brain: An analytical approach and overview. International Research Journal of Engineering and Technology, 6(10), 580–583.

Hsu, J. (2015). Estimate: Human Brain 30 Times Faster than Best Supercomputers. IEEE Spectrum: Technology, Engineering, and Science News. https://spectrum.ieee.org/tech-talk/computing/networks/estimate-human-brain-30-times-faster-than-best-supercomputers

Illinois Tech. (2012, April 26). Argamon team receives IARPA grant. https://www.iit.edu/news/argamon-team-receives-iarpa-grant

Neuman, Y., Assaf, D., Cohen, Y., Last, M., Argamon, S., Howard, N., & Frieder, O. (2013). Metaphor identification in large texts corpora. PLoS ONE, 8(4). https://doi.org/10.1371/journal.pone.0062343

Poria, S., Cambria, E., Howard, N., Huang, G.-B., & Hussain, A. (2016). Fusing audio, visual and textual clues for sentiment analysis from multimodal content. Neurocomputing, 174, 50–59. https://doi.org/10.1016/j.neucom.2015.01.095

Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. arXiv:1602.04938 [cs, stat]. http://arxiv.org/abs/1602.04938