

Descriptions of concepts related to Artificial Intelligence, Machine Learning and Deep Learning.


Algorithm

A set of rules that a machine can follow to learn how to do a task.

Artificial intelligence

This refers to the general concept of machines acting in a way that simulates or mimics human intelligence.


Autonomous

A machine is described as autonomous if it can perform its task or tasks without needing human intervention.

Backward chaining

A method where the model starts with the desired output and works in reverse to find data that might support it.
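
A toy illustration of the idea in Python (the rules and facts here are entirely hypothetical, and real inference engines are far more elaborate):

```python
# Backward chaining: start from a goal and work backwards,
# recursively checking whether known facts or rules can prove it.
RULES = {
    # conclusion: premises that together imply it (hypothetical example)
    "mammal": ["has_fur", "gives_milk"],
    "gives_milk": ["is_female", "has_offspring"],
}
FACTS = {"has_fur", "is_female", "has_offspring"}

def prove(goal):
    """Return True if the goal is a known fact or derivable from the rules."""
    if goal in FACTS:
        return True
    premises = RULES.get(goal)
    if premises is None:
        return False
    return all(prove(p) for p in premises)

print(prove("mammal"))  # True: both premises can themselves be proved
```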


Bias

Assumptions made by a model that simplify the process of learning to do its assigned task. Most supervised machine learning models perform better with low bias, as these assumptions can negatively affect results.

Big data

Datasets that are too large or complex to be used by traditional data processing applications.

Bounding box

Commonly used in image or video tagging, this is an imaginary box drawn on visual information. The contents of the box are labeled to help a model recognize it as a distinct type of object.
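
In code, a bounding box is often stored as corner coordinates plus a label, and overlap between two boxes is measured with intersection-over-union (IoU). A small sketch, assuming the common (x_min, y_min, x_max, y_max) convention:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (may be empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A labeled bounding box might be stored as a simple record:
annotation = {"label": "cat", "box": (10, 10, 50, 50)}
print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ≈ 0.143
```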


Chatbot

A chatbot is a program that is designed to communicate with people through text or voice commands in a way that mimics human-to-human conversation.

Cognitive computing

This is effectively another way to say artificial intelligence. It's used by marketing teams at some companies to avoid the science fiction aura that sometimes surrounds AI.

Computational learning theory

A field within artificial intelligence that is primarily concerned with creating and analyzing machine learning algorithms.


Corpus

A large dataset of written or spoken material that can be used to train a machine to perform linguistic tasks.

Data mining

The process of analyzing datasets in order to discover new patterns and relationships, which can then inform decisions or improve models.

Data science

Drawing from statistics, computer science and information science, this interdisciplinary field aims to use a variety of scientific methods, processes and systems to solve problems involving data.


Dataset

A collection of related data points, usually with a uniform order and tags.

Deep learning

A subset of machine learning that uses many-layered neural networks, loosely inspired by the human brain, to learn patterns directly from data rather than relying on rules programmed for one specific task.

Entity annotation

The process of labeling unstructured sentences with information so that a machine can read them. This could involve labeling all people, organizations and locations in a document, for example.

Entity extraction

An umbrella term referring to the process of adding structure to data so that a machine can read it. Entity extraction may be done by humans or by a machine learning model.

Forward chaining

A method in which a machine must work from a problem to find a potential solution. By analyzing a range of hypotheses, the AI must determine those that are relevant to the problem.
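
A toy illustration in Python, the mirror image of backward chaining: known facts are expanded forwards until nothing new can be derived (the rules here are hypothetical):

```python
# Forward chaining: repeatedly apply rules to known facts
# until no new conclusions can be derived.
RULES = [
    # (premises, conclusion) — hypothetical example rules
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "icy_ground"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"rain", "freezing"}))  # derives wet_ground, then icy_ground
```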

General AI

AI that could successfully do any intellectual task that can be done by any human being. This is sometimes referred to as strong AI, although they aren't entirely equivalent terms.


Hyperparameter

Occasionally used interchangeably with parameter, although the terms have some subtle differences. Hyperparameters are values that affect the way your model learns. They are usually set manually outside the model.
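
The distinction is easiest to see in code. In this toy one-variable linear fit, the learning rate is a hyperparameter chosen by hand before training, while the weight w is a parameter learned from the data:

```python
# Fit y ≈ w * x by gradient descent on a tiny dataset where y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def train(learning_rate, steps=200):
    w = 0.0  # parameter: updated automatically during training
    for _ in range(steps):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad  # learning_rate: hyperparameter, set by hand
    return w

print(round(train(learning_rate=0.05), 3))  # converges to 2.0
```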


Intent

A user's intention behind a given utterance or action.


Label

A piece of information that is associated with an object or data point. Labels can be used to describe the data or to train a machine learning model.

Linguistic annotation

The process of adding linguistic information to text, such as parts of speech tags and named entities.

Machine intelligence

An umbrella term, largely synonymous with artificial intelligence, for the field of computer science that deals with the creation of intelligent machines, particularly systems whose algorithms can learn from data.

Machine learning

A field of artificial intelligence that gives computers the ability to learn without being explicitly programmed. Machine learning algorithms are trained on data to identify patterns and make predictions.

Machine translation

A type of natural language processing (NLP) that involves translating text from one language to another. Machine translation systems are trained on massive amounts of data in order to learn the patterns that are used to translate text.


Model

A representation of a real-world system or process. In machine learning, a model is used to learn from data and make predictions. The model is typically a mathematical equation or algorithm.

Neural network

A type of artificial intelligence that is inspired by the structure and function of the human brain. Neural networks are composed of interconnected nodes, or neurons, that transmit signals to each other. Neural networks can be trained to perform a variety of tasks, including image recognition, natural language processing, and machine translation.

Natural language generation

The process of creating human-like text. Natural language generation systems are used in a variety of applications, such as chatbots, email marketing, and customer service.

Natural language processing

A field of artificial intelligence that deals with the interaction between computers and human language. NLP techniques are used to process and analyze text and speech data. NLP can be used for a variety of tasks, such as machine translation, sentiment analysis, and text summarization.

Natural language understanding

The process of understanding the meaning of text or speech. Natural language understanding systems are used to extract information from text and speech data. NLU can be used for a variety of tasks, such as question answering, machine translation, and sentiment analysis.


Overfitting

A problem in machine learning that occurs when a model learns the training data too well and is unable to generalize to new data. Overfitting can occur when a model has too many parameters or when the training data is not representative of the real world.
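
An extreme illustration: a "model" that simply memorizes its training examples scores perfectly on the data it has seen but cannot generalize at all (the toy data below is invented for the example):

```python
# The ultimate overfit: pure memorization with no learned rule.
train_set = {1: "odd", 2: "even", 3: "odd"}  # toy labeled examples

def memorizing_model(x):
    # Lookup only — there is no underlying rule to apply to new inputs.
    return train_set.get(x, "unknown")

train_accuracy = sum(memorizing_model(x) == y for x, y in train_set.items()) / len(train_set)
print(train_accuracy)       # 1.0: perfect on the training data
print(memorizing_model(4))  # "unknown": useless on unseen data
```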


Parameter

A value that is used to configure a machine learning model. Parameters can be adjusted to improve the model's performance.

Pattern recognition

The process of identifying patterns in data. Pattern recognition algorithms are used in a variety of applications, such as image recognition, fraud detection, and customer segmentation.

Predictive analytics

The use of data to make predictions about future events. Predictive analytics models are used in a variety of applications, such as sales forecasting, risk assessment, and customer churn prediction.


Python

A high-level programming language that is widely used for machine learning and data science. Python is typically interpreted rather than compiled ahead of time, and its simple, readable syntax makes it easy to learn and use, even for beginners.

Reinforcement learning

A type of machine learning that learns by interacting with its environment. In reinforcement learning, the agent takes actions in the environment and receives rewards or penalties based on the outcome of those actions. The agent learns to take actions that maximize its expected reward.
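
A minimal sketch of one classic reinforcement learning algorithm, tabular Q-learning, on a toy corridor environment (states 0 to 4, reward at the right end; the environment and hyperparameter values are illustrative):

```python
import random

# The agent learns action values Q[state][action] from trial and error.
random.seed(0)
N, ACTIONS = 5, (-1, +1)               # 5 states; move left or right
Q = {s: {a: 0.0 for a in ACTIONS} for s in range(N)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                   # episodes
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon else max(Q[s], key=Q[s].get)
        s2 = min(max(s + a, 0), N - 1)
        reward = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value.
        Q[s][a] += alpha * (reward + gamma * max(Q[s2].values()) - Q[s][a])
        s = s2

print(all(Q[s][+1] > Q[s][-1] for s in range(N - 1)))  # the agent learned to move right
```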

Semantic annotation

Tagging data, such as search queries or products, with information about its meaning, often with the goal of improving the relevance of a search engine.

Sentiment analysis

The process of identifying the sentiment of text, such as whether it is positive, negative, or neutral. Sentiment analysis is often used in marketing and customer service applications.
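
A deliberately simple lexicon-based sketch; production systems use trained models, but the input and output have the same shape (the word lists are illustrative):

```python
# Score text by counting words from small positive/negative lexicons.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "terrible", "hate", "awful"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("terrible service"))           # negative
```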

Strong AI

Artificial intelligence that could successfully do any intellectual task that can be done by any human being. This is sometimes referred to as general AI, although they aren't entirely equivalent terms.

Supervised learning

A type of machine learning in which the model is trained on labeled data. Labeled data is data that has been tagged with the correct answer. Supervised learning is often used for tasks such as classification and regression.
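
A miniature example: a one-nearest-neighbour classifier, where every training point is paired with its correct label and new points take the label of their closest neighbour (the data is invented for the example):

```python
# Labeled training data: (point, correct answer) pairs.
train_points = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
                ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

def predict(point):
    def dist(p, q):
        # Squared Euclidean distance is enough for comparison.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    # Answer with the label of the closest training example.
    return min(train_points, key=lambda ex: dist(ex[0], point))[1]

print(predict((1.1, 0.9)))  # "A": closest to the A cluster
print(predict((5.1, 4.9)))  # "B"
```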

Test data

Data that is used to evaluate the performance of a machine learning model. Test data is typically held out from the training data and is not used to train the model. The model's performance on the test data is an indication of how well it will perform on new data.

Training data

Data that is used to train a machine learning model, teaching it how to perform a specific task. In general, more (and more representative) training data tends to improve a model's performance on new data, although quality matters as much as quantity.

Transfer learning

A technique for using a machine learning model that has been trained on one task to solve a different task. Transfer learning can be used to improve the performance of a model on a new task, especially if the new task is related to the task that the model was originally trained on.

Turing test

A test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The Turing test was introduced by Alan Turing in his 1950 paper, "Computing Machinery and Intelligence." The test consists of a human interrogator who interacts with a machine and a human through a text-based interface. If the interrogator cannot reliably distinguish between the machine and the human, then the machine is said to have passed the Turing test.

Unsupervised learning

A type of machine learning in which the model is trained on unlabeled data. Unlabeled data is data that does not have any tags or labels. Unsupervised learning is often used for tasks such as clustering and dimensionality reduction.
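
A miniature example: one-dimensional k-means clustering with k = 2. The numbers carry no labels; the algorithm discovers the two groups on its own (data and starting centroids are invented for the example):

```python
# Unlabeled data forming two obvious groups, around 1 and around 8.
data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]
centroids = [0.0, 10.0]  # initial guesses

for _ in range(10):  # a few refinement iterations suffice here
    # Assign each point to its nearest centroid...
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # ...then move each centroid to the mean of its assigned points.
    centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]

print(centroids)  # the two group centres, near 1.0 and 8.0
```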

Validation data

Data that is used to tune the hyperparameters of a machine learning model. Hyperparameters are parameters that affect the way a model learns. Validation data is typically used to find the best values for the hyperparameters.
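
The three datasets fit together like this (the split proportions below are illustrative, not a rule):

```python
import random

# Shuffle, then carve the examples into three disjoint sets:
# train to fit parameters, validation to tune hyperparameters,
# test for the final, untouched evaluation.
random.seed(42)
examples = list(range(100))  # stand-ins for real data records
random.shuffle(examples)

train = examples[:70]
validation = examples[70:85]
test = examples[85:]

print(len(train), len(validation), len(test))  # 70 15 15
```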


Variance

The degree to which a model's predictions change when the training data is changed. A high-variance model is sensitive to noise in the training data and tends to overfit, while a low-variance model produces more stable predictions. Low variance is generally desirable, although reducing variance too aggressively can increase bias; balancing the two is known as the bias-variance trade-off.


Variation

The difference between a model's predictions and the actual values, used as a measure of how well a model performs. The smaller the variation, the more accurate the model.

Weak AI

A term used to describe AI that is not capable of human-level intelligence or functionality.