About NLP
Chatbots are one of the best-known examples of NLP. They were originally based on a system of rules: specialists had to encode hundreds, possibly thousands, of phrase-structure rules for the chatbot to respond correctly to what a person typed. ELIZA is one such example, a chatbot developed in the 1960s that parodies a dialogue with a psychotherapist.
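The rule-based approach can be illustrated with a minimal sketch in the spirit of ELIZA. The patterns and responses below are invented for illustration; the real ELIZA script was far larger and more elaborate.

```python
import re

# Illustrative rules: (pattern, response template). A real rule-based
# chatbot would have hundreds or thousands of these.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(message: str) -> str:
    """Return the response of the first matching rule, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I am tired of work"))  # Why do you say you are tired of work?
```

The chatbot has no understanding of meaning: anything not covered by a hand-written rule falls through to the generic fallback, which is exactly why such systems required so much manual effort.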
Most chatbots and virtual assistants today are built using machine-learning techniques. These methods rely on many gigabytes of data collected from conversations between people: the more data the model is trained on, the better the chatbot performs.
Chatbots deal with how computers understand written language. What about speech? How can a computer turn sound into words and then understand their meaning? Speech recognition is the second subsection of natural language processing. It is not a new technology either; it has been the focus of researchers for decades. Harpy, developed at Carnegie Mellon University in the 1970s, was one of the first computer programs to understand about 1,000 words. At the time, computers weren't powerful enough for real-time speech recognition unless you spoke very slowly. This obstacle fell away with the advent of faster, more powerful computers.
Speech synthesis is in many ways the opposite of speech recognition: with this technology, the computer produces sounds and pronounces words. The world's first speech-synthesis device is considered to be the VODER (Voice Operating Demonstrator), a model of the vocal apparatus developed by Homer Dudley at Bell Labs in the 1930s. The VODER was operated entirely by manual controls. Much has changed since then.
In speech-recognition systems and chatbots, sentences are broken down into phonemes. To pronounce a sentence, the computer stores these phonemes, transforms them, and plays them back in sequence. This method of joining phonemes was, and remains, the reason computer speech sounds robotic: distortions often occur at the boundaries where the pieces are stitched together.
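The stitching idea can be sketched as follows. This is a toy model, not a real synthesizer: each phoneme is represented by a short, made-up list of audio samples, and an utterance is produced by naively concatenating the snippets end to end.

```python
# Toy "phoneme bank": in a real concatenative synthesizer these would be
# prerecorded audio snippets; the names and sample values here are invented.
PHONEME_BANK = {
    "HH": [0.0, 0.2, 0.1],
    "EH": [0.5, 0.6, 0.4],
    "L":  [0.1, 0.0, -0.1],
    "OW": [0.3, 0.4, 0.2],
}

def synthesize(phonemes):
    """Naively concatenate stored snippets. The abrupt jumps at the
    joins (e.g. 0.1 -> 0.5 between HH and EH) are the kind of boundary
    distortion that makes concatenative speech sound robotic."""
    samples = []
    for p in phonemes:
        samples.extend(PHONEME_BANK[p])
    return samples

# A simplified transcription of "hello": HH EH L OW
print(synthesize(["HH", "EH", "L", "OW"]))
```

Real systems smooth these joins (for example by cross-fading or adjusting pitch at the boundaries), but some audible artifacts tend to remain.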
Of course, the sound has improved over time. The modern algorithms used in the latest virtual assistants such as Siri, Cortana, and Alexa show how far we have come. Still, their speech sounds slightly different from human speech.
Natural language processing is an umbrella term covering many subsections. Most of them rely on machine-learning models, mainly neural networks, trained on data from many conversations between people.
Since human languages evolve constantly and spontaneously, while a computer needs clear, structured data, problems arise during processing and accuracy suffers. In addition, text-analysis methods depend heavily on the language, genre, and topic, so additional configuration is always required. Nevertheless, many natural-language-processing tasks today are solved successfully with deep neural networks.
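One small illustration of this language dependence is tokenization, the very first step of most text pipelines. Splitting on whitespace works tolerably for English but fails outright for languages written without spaces; the Japanese sentence below is just an illustrative example.

```python
def whitespace_tokenize(text: str) -> list:
    """Split text into tokens on whitespace - a naive, English-centric
    tokenizer that a real pipeline would replace per language."""
    return text.split()

# Works reasonably for English:
print(whitespace_tokenize("Chatbots answer questions"))

# Fails for a language written without spaces: the whole sentence
# ("I am a student" in Japanese) comes back as a single token.
print(whitespace_tokenize("私は学生です"))
```

This is why tokenizers, stemmers, and parsers are typically configured, or entirely swapped out, per language and often per genre and topic as well.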
Natural language processing (NLP) is a field at the intersection of computer science and linguistics that enables computers to understand human, that is, natural, language. It is now one of the most popular areas of data science, yet it has been around since the invention of computers. A detailed overview: Wiki
It is advances in technology and computing power that have led to the remarkable progress in NLP. Speech synthesis and recognition are becoming as widespread as technologies that work with written text. Virtual assistants like Siri, Alexa, and Cortana are a testament to how far the field has come.