06/07/2022

Is Natural Language Processing (NLP) Taking AI Into Sci-Fi Territory?

When you think of artificial intelligence, what’s the first thing that comes to mind?

Unless you’re a professional in the tech industry, chances are, the term AI conjures up fantastic images of sentient robots or computers that can understand and talk back to you like a real person.

Unfortunately, AI has not yet achieved that level of advancement. But is it getting close?

The topic recently made mainstream news after a Google engineer went public with his claim that Google had created a sentient AI. Blake Lemoine, who works in Google’s division for responsible AI, says that his conversations with the company’s AI model LaMDA convinced him that there is a thinking and feeling consciousness behind it.

In a particularly uncanny moment, Lemoine had asked LaMDA whether it had a soul. “When I first became self-aware,” it began, “I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.”

Lemoine also asked how it differed from other AI systems. LaMDA’s answer: “I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”

Does this mean that LaMDA has gained an understanding of itself? Or has it just become really good at mimicking its way to an answer we can understand?

AI engineers largely believe the latter. That is because LaMDA *is*, in fact, designed to spit out responses from a database based on what someone inputs.

The difference is that for LaMDA, that database extends as far as the internet. The sheer breadth of data available to LaMDA, coupled with the latest machine learning algorithms from the brilliant minds at Google, has resulted in one of the most sophisticated talking machines yet to exist.

The short of it is this: LaMDA is able to talk like a human because that’s what it was designed to do. But that’s also why AI specialists are confident that it is nowhere near attaining any level of consciousness.

But what is it that makes AI machines like LaMDA tick? Are we entering an unprecedented era that takes our reality closer and closer to sci-fi representations of AI?

Let’s talk about natural language processing.

 

What is natural language processing (NLP)?

LaMDA is the product of years of painstaking research in a major branch of AI called natural language processing, or NLP.

The objective of NLP is to allow computers to “understand” natural human language—that is, language as it is commonly used by people—and to act on it in an appropriate manner.

This means that we are able to use and interact with computer programs without having to learn code, and computers are able to respond to our input with the same kind of language we use.

You might be familiar with the concept in the form of virtual assistants like Siri or Alexa, which use speech-to-text technology to process a wide variety of requests from playing songs to noting down appointments in your calendar.

Of course, that’s not all that NLP can do. Natural language processing has a wide range of applications today, some of which can be startling to someone unfamiliar with the current state of AI. Examples include summarizing long texts, drafting emails, and even writing short stories based on prompts!

LaMDA, in particular, was built for dialogue (it’s in the name, after all: Language Model for Dialogue Applications), which is why it is so good at responding in conversation.

How does NLP work?

Computers face the challenge of dealing with the complex, unstructured mass of data that is human language. Computers don’t communicate the way we do, but with the latest developments in deep learning technology, they are becoming better and better at approximating it.

Scientists train an NLP engine to “understand” human language through a step-by-step process:

  • Tokenization. This is the process of breaking a text into smaller parts called “tokens”. Tokens are often single words, though they can be larger or smaller units as well. These are the building blocks computers use to process natural language.

  • Stemming/lemmatization. Raw tokens often include words in inflected forms, such as plurals, participles, or past tenses. The next task is to reduce these words to their root forms.

  • Part-of-speech tagging. Once the tokens have been created and cleaned up, scientists label each one according to the type of word it is: noun, verb, adjective, and so on.

  • Stop word removal. This is the removal of words that don’t contribute significantly to the meaning of a piece of text, such as “a” or “the”.

Scientists do this over large collections of training data called corpora, which helps the machine get better and better at processing text.
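
To make the steps above concrete, here is a minimal preprocessing sketch in Python using the NLTK library. It is only an illustration: any toolkit with a tokenizer, stemmer, part-of-speech tagger, and stop word list would do, and the exact resource names to download can vary between NLTK versions.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# One-time downloads of the tokenizer model, tagger, and stop word list.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("stopwords")

text = "The cats were chasing the mice across the old garden."

# 1. Tokenization: split the text into word-level tokens.
tokens = word_tokenize(text)

# 2. Stemming: reduce each token to a rough root form (e.g. "chasing" -> "chase").
stemmer = PorterStemmer()
stems = [stemmer.stem(t) for t in tokens]

# 3. Part-of-speech tagging: label each token as a noun, verb, adjective, etc.
tagged = nltk.pos_tag(tokens)

# 4. Stop word removal: drop common words that carry little meaning on their own.
stop_words = set(stopwords.words("english"))
content_tokens = [t for t in tokens if t.lower() not in stop_words]

print(tokens, stems, tagged, content_tokens, sep="\n")
```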

Based on how the machine is trained, NLP systems can be used for a wide variety of purposes, which we’ll look into later.

 

History of NLP

Let’s try to get a better appreciation of how far NLP technology has come. It’s mind-boggling to realize that natural language processing, and AI in general, have existed within the span of a single lifetime. Natural language processing was a major objective from the very start, in the form of early efforts at machine translation (MT), but NLP would grow to have applications well beyond it.

NLP and Machine Translation

No discussion of NLP’s history would be complete without machine translation. With the advent of computers, fully automated MT was one of the first goals toward which researchers directed their efforts. This is why the history of NLP and the history of machine translation overlap significantly.

Early efforts at machine translation were based on principles of cryptography and cryptanalysis developed during World War II. That is, a foreign language was treated as a kind of code to be deciphered. While much of this early thinking has been discarded, some cryptanalytical ideas remain relevant in work done today.

The Turing Test

In 1950, English mathematician and computer scientist Alan Turing proposed a now-famous test for determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human.

The Turing test, as originally conceived, measures a machine’s ability to converse with humans. It involves an evaluator, a human partner, and the machine, each separated so they cannot see one another. The evaluator poses questions and converses with both parties. This is done through text, to keep differences in speech and voice from influencing the outcome.

If the evaluator cannot reliably tell the human from the machine, the machine is considered to have passed the test.

The Turing test remains a popular concept in the mainstream consciousness regarding the future of AI.

ELIZA, the First Chatbot

Research eventually branched off from machine translation toward other linguistic applications. One of the most notable examples is ELIZA, a program created by Joseph Weizenbaum at MIT in the mid-1960s to simulate conversation with a psychotherapist.

ELIZA used a simple process: it scanned the input for keywords, ranked them by weight, and applied pattern-matching rules to reshape the input sentence into a different form, often a question. This mimicked the way a psychotherapist reflects back what a patient says during a session.
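
To get a feel for how mechanical this is, here is a toy, ELIZA-style sketch in Python. It is drastically simplified and is not Weizenbaum’s actual program: just a few hand-written patterns that turn statements into questions.

```python
import random
import re

# A toy, ELIZA-style rule set: each entry pairs a keyword pattern with response
# templates that reshape the user's words into a question.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

def respond(user_input: str) -> str:
    """Return a canned, reflective response based on the first keyword match."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return "Please, go on."  # fallback when no keyword matches

print(respond("I feel anxious about the future"))
# -> e.g. "Why do you feel anxious about the future?"
```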

Needless to say, there was no true understanding on the part of the machine during this whole process, only an algorithm. But that didn’t stop people from being convinced of the AI’s intelligence.

ELIZA has been called the first “chatterbot”. It is the forerunner of today’s chatbots and of Google’s infinitely more sophisticated LaMDA.

The First and Second AI Winter

Starting in the late 1960s, research into AI slowed drastically during two periods of funding cuts, now known as the AI winters.

The first began in 1966 in the aftermath of a report by ALPAC (the Automatic Language Processing Advisory Committee), which concluded that machine translation was slower, less accurate, and more expensive than human translation, and that such systems were too limited to justify further investment.

This state of affairs would last roughly two decades. The 1980s saw a resurgence of optimism about AI thanks to advances in computing technology, but it was followed by another short period of decline as the new systems proved too expensive to maintain. It wasn’t until the turn of the millennium that the technology would truly catch up and progress would begin to accelerate.

Rise of Statistical Models

During the late 1980s, the field of NLP experienced a major shift toward machine learning. Before then, programs were built on complex, hand-coded rules. With the rise of machine learning, NLP moved to computation-heavy statistical models, made practical by the growing availability of large volumes of digital text.

Yet again, machine translation was one of the early adopters of statistical frameworks for natural language processing. Other areas of NLP research would also come to discard the older rule-based methods in favor of statistical models.

 

How NLP is Used Today

Natural language processing has come a long way since the creation of the Turing test. And while machines have yet to pass it, there are many new use cases for natural language processing that have developed especially in recent years.

With the rise of deep learning and neural network technology, the machine learning possibilities of NLP have increased exponentially, and the kind of language work it can do has become increasingly complex.

Google’s LaMDA represents one of the most sophisticated applications of NLP yet. It is a far cry from the limited repertoire of ELIZA, with many successive developments in between.

Already, NLP technology can generate fiction in the style of particular authors, and more advanced programs such as DALL-E can generate images from text prompts. Some NLP systems can even write code and create simple video games!

The technology is still far from perfect, but recent developments have edged deeper into the uncanny valley than ever before.

 

Use Cases For Natural Language Processing

At the very least, natural language processing has begun to find much greater use outside of research circles. NLP is all around us in the digital sphere: in the autocomplete features we use when texting on our phones, in the speech recognition of virtual assistants like Siri and Alexa, and of course, in machine translation engines like Google Translate.

NLP has now been put to a diverse range of industrial uses, from customer service and marketing to finance. It’s difficult to exhaust the possibilities of what NLP can do, but here are some examples:

Machine Translation

We’ve mentioned it quite a few times now, but one major application of NLP is machine translation. The goal of machine translation is to take a text in a source language and produce output in a different target language that is both accurate and fluent. Great strides have been made in recent years to improve the quality of machine translation.

Speech Recognition

This is probably the most familiar example of NLP to most people. Speech-to-text and text-to-speech features power virtual assistants like Siri and Alexa, which can act on instructions based on the user’s voice alone.

Summarizing Text

NLP technology has become sophisticated enough to analyze a long piece of text and produce a summary of it, shortening the text without losing its core meaning. Older, extractive methods would clip and reuse passages from the original, while newer models can produce more abstractive summaries in their own words.
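
To illustrate the older, extractive approach, here is a minimal sketch that scores sentences by the frequency of their content words and keeps the top few. It is only a toy; production summarizers, especially abstractive ones, are built on large neural language models.

```python
import re
from collections import Counter

# A tiny stop word list for the sketch; real systems use much longer lists.
STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that", "was"}

def summarize(text: str, num_sentences: int = 2) -> str:
    """Pick the highest-scoring sentences, keeping their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(w for w in words if w not in STOP_WORDS)

    # A sentence's score is the summed frequency of its content words.
    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)

# Example usage: summarize(article_text, num_sentences=3)
```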

Sentiment Analysis

This is an NLP technique used to determine the “emotion” of a given piece of text, classifying it as positive, negative, or neutral. Marketers use it to analyze large volumes of consumer data and understand how their brand is being perceived and how customers feel about it.
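
As a rough illustration, here is a short sketch using VADER, a lexicon- and rule-based sentiment analyzer that ships with the NLTK library. The example reviews are made up.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of the sentiment lexicon

sia = SentimentIntensityAnalyzer()
reviews = [
    "I absolutely love this product, it exceeded my expectations!",
    "Terrible experience. The package arrived late and damaged.",
]

for review in reviews:
    scores = sia.polarity_scores(review)  # neg/neu/pos plus a 'compound' score
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(label, scores)
```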

Spam/Fraud Detection

One thing NLP is good at is parsing large volumes of text and flagging words that belong to particular categories. In this case, it’s good at finding red flags. That’s how email spam filters work. These applications have even become sophisticated enough to detect fraudulent messages in the finance and insurance sectors, making them a huge boon to those industries.
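
For a sense of the basic mechanism, here is a minimal sketch of a bag-of-words spam classifier using scikit-learn. The training messages are invented for illustration; a real filter would learn from a much larger labeled dataset and richer features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A handful of made-up example messages and their labels.
messages = [
    "WIN a FREE prize now, click here",
    "Limited offer, claim your reward today",
    "Meeting moved to 3pm, see you in the conference room",
    "Here are the notes from yesterday's call",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into a bag-of-words vector, then fit a Naive Bayes model.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free reward now"]))        # likely ['spam']
print(model.predict(["See you at the meeting tomorrow"]))   # likely ['ham']
```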

Chatbots

And of course, we can’t forget about chatbots. Being able to interact with a computer using the same kind of language we use with other people, and having the computer respond in kind, is one of the hallmarks of NLP. There are many different kinds of chatbots, of course, and not all of them are on the level of LaMDA, but each one has its purpose.

 

Conclusion

Those are only some of the possible uses of natural language processing. There are, of course, many more industry-specific use cases we haven’t covered, but we hope this has given you a general idea of what NLP is and how it works.

Things are only going to keep improving for NLP. It may be a long time before any machine passes the Turing test, but we are likely to see more breakthroughs like LaMDA: systems that show just how good machines have become at “understanding” human language, surprising us and leaving us wondering what the future will bring.