Algorithm: A set of rules or instructions given to an AI, machine learning model, or computer program to
help it perform a task.
Artificial Intelligence (AI): A field of computer science focused on creating systems capable of
performing tasks that typically require human intelligence. These tasks include problem-solving,
decision-making, and understanding natural language.
Autonomous Systems: Systems capable of operating independently of human control, often using AI to
make decisions in real-time based on environmental data.
Backpropagation: A method used in artificial neural networks to calculate the error contribution of each
neuron after a batch of data is processed. It’s crucial for the learning process, allowing the network to
update its weights to improve performance.
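As a sketch of that update step, backpropagation can be traced by hand on a single sigmoid neuron; the chain rule yields each parameter's error contribution, and gradient descent shrinks the loss (all numbers below are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training example: input x with target y.
x, y = 1.5, 0.0
w, b = 0.8, 0.1   # initial weight and bias
lr = 0.5          # learning rate

for _ in range(100):
    # Forward pass.
    z = w * x + b
    a = sigmoid(z)
    loss = (a - y) ** 2

    # Backward pass: chain rule gives each parameter's error contribution.
    dloss_da = 2 * (a - y)
    da_dz = a * (1 - a)          # derivative of the sigmoid
    grad_w = dloss_da * da_dz * x
    grad_b = dloss_da * da_dz

    # Update the weights in the direction that reduces the loss.
    w -= lr * grad_w
    b -= lr * grad_b
```

After the loop, `loss` is far smaller than its starting value, which is exactly the improvement in performance the definition describes.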
Bias: In AI, bias occurs when an algorithm produces results that are systematically prejudiced due to
erroneous assumptions in the machine learning process.
Chatbot: A software application used to conduct an online chat conversation via text or text-to-speech,
in lieu of providing direct contact with a live human agent.
Classification: A machine learning model’s process of predicting the category or class of a given input
data point.
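As an illustration (not tied to any particular library), a one-nearest-neighbour classifier predicts the class of an input point from its closest labelled example:

```python
import math

def classify(point, labeled_examples):
    """1-nearest-neighbour: predict the class of the closest known point."""
    nearest = min(labeled_examples, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

# Hypothetical labelled data: two points with known classes.
examples = [((1.0, 1.0), "cat"), ((8.0, 8.0), "dog")]
label = classify((2.0, 1.5), examples)
```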
Clustering: An unsupervised learning technique that involves grouping sets of data points such that
items in the same group (or cluster) are more similar to each other than to those in other groups.
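A minimal sketch of one clustering algorithm, k-means, on made-up 2-D points: it alternates between assigning each point to its nearest centroid and moving each centroid to its cluster's mean.

```python
import math

def kmeans(points, k, iters=20):
    """Naive k-means clustering of 2-D points into k groups."""
    centroids = points[:k]  # seed with the first k points
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[j].append(p)
        # Update step: move each centroid to its cluster's mean.
        for j, c in enumerate(clusters):
            if c:
                centroids[j] = (sum(p[0] for p in c) / len(c),
                                sum(p[1] for p in c) / len(c))
    return centroids, clusters

# Two visually obvious groups of points.
points = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 8)]
centroids, clusters = kmeans(points, k=2)
```

No labels are supplied anywhere, which is what makes this unsupervised: the grouping emerges from the distances alone.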
Convolutional Neural Network (CNN): A deep learning algorithm that can take in an input image, assign
importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one
from the other.
Data Mining: The process of examining large datasets to find patterns, correlations, and anomalies to
predict outcomes.
Decision Tree: A model that uses a tree-like graph of decisions and their possible consequences,
including chance event outcomes, resource costs, and utility. It’s a decision support tool that uses a
graphical and analytical decision-making process.
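A tiny hand-built sketch of such a tree (the features, thresholds, and outcomes below are invented purely for illustration): each internal node tests one attribute, and each leaf is a decision.

```python
def loan_decision(income, credit_score):
    """A hand-written decision tree: each branch tests one feature."""
    if credit_score >= 700:
        return "approve"          # high credit score: approve outright
    if income >= 50_000:
        return "review"           # borderline case: send to manual review
    return "deny"                 # low score and low income: deny

decision = loan_decision(income=60_000, credit_score=650)
```

Learned decision trees have the same shape; training algorithms choose the split features and thresholds automatically from data.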
Deep Learning: A subset of machine learning that uses neural networks with many layers (deep neural
networks) to analyze various factors of data in a process that mimics the human brain’s operation.
Ethics in AI: Refers to the moral principles and techniques intended to guide the development and
responsible use of AI technologies, ensuring they contribute positively to society.
Feature Extraction: The process of transforming raw data into a reduced set of informative features, cutting the resources required to describe a large dataset accurately. It addresses a major problem in the analysis of complex data: the sheer number of variables involved.
Generative AI (GenAI): A subset of AI technologies and models that can generate new content, including
text, images, and videos, based on the data they have been trained on.
Generative Pre-trained Transformer (GPT): An example of a generative AI model, primarily used for
natural language processing tasks. It generates human-like text by predicting the sequence of words.
Heuristics: Techniques designed for solving a problem more quickly when classic methods are too slow,
or for finding an approximate solution when classic methods fail to find any exact solution.
Inference: The process of using a trained neural network to make predictions.
Latent Variables: Variables that are not directly observed but are rather inferred from other variables
that are observed (directly measured).
Loss Function: A method of evaluating how well a specific algorithm models the given data. If predictions
deviate from actual results, loss functions provide a measure of the error.
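One common example is mean squared error, sketched here in plain Python: perfect predictions give zero loss, and deviations increase the loss quadratically.

```python
def mse(predictions, targets):
    """Mean squared error: average squared deviation from the targets."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

perfect = mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # no error
off = mse([1.5, 2.0, 2.0], [1.0, 2.0, 3.0])        # two predictions deviate
```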
Machine Learning (ML): A method of AI that enables systems to learn and improve from experience
without being explicitly programmed. It involves the development of algorithms that can learn and make
predictions or decisions based on data.
Model Fine-tuning: A process in machine learning where a pre-trained model is further trained, typically on
a smaller task-specific dataset, to adapt it to a similar but slightly different task.
Natural Language Processing (NLP): A branch of AI that gives computers the ability to understand,
interpret, and generate human language in a way that is useful.
Neural Network: A computer system modeled on the human brain’s network of neurons. These systems
learn to perform tasks by considering examples, generally without being programmed with task-specific
rules.
Overfitting: A modeling error in machine learning that occurs when a function is too closely fit to a
limited set of data points, resulting in poor predictive performance on new data.
Recall: In the context of machine learning and information retrieval, recall is the fraction of all relevant
instances that were actually retrieved.
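Computed directly from that definition (the item IDs below are made up): of four relevant documents, three were retrieved, so recall is 0.75.

```python
def recall(predicted, relevant):
    """Fraction of relevant items that were actually retrieved."""
    retrieved_relevant = set(predicted) & set(relevant)
    return len(retrieved_relevant) / len(relevant)

# Retrieved items 1, 2, 3, 7; the truly relevant items are 1, 2, 3, 4.
r = recall(predicted=[1, 2, 3, 7], relevant=[1, 2, 3, 4])
```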
Regularization: A technique used to prevent overfitting in machine learning models by adding a penalty
on the larger weights.
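A sketch of the idea using an L2 penalty (the lambda value and weights below are illustrative): the penalty term makes large weights cost extra loss, so training is pushed toward smaller, simpler weight settings.

```python
def l2_penalized_loss(base_loss, weights, lam=0.1):
    """Adds an L2 penalty: larger weights now incur extra loss."""
    penalty = lam * sum(w ** 2 for w in weights)
    return base_loss + penalty

# Same base loss, but the bigger weights are penalized much harder.
small = l2_penalized_loss(0.5, [0.1, -0.2])
large = l2_penalized_loss(0.5, [3.0, -4.0])
```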
Reinforcement Learning: An area of machine learning concerned with how software agents ought to
take actions in an environment to maximize some notion of cumulative reward.
Supervised Learning: A machine learning technique that teaches a model to make predictions or
decisions based on input-output pairs. It involves training the model on a labeled dataset.
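A minimal sketch of supervised learning on such input-output pairs: fitting a line y = a*x + b by least squares (the training data below is made up and roughly follows y = 2x + 1).

```python
# Labeled training data: inputs xs paired with target outputs ys.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

# Closed-form least-squares fit for slope a and intercept b.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

# The learned model can now predict outputs for unseen inputs.
prediction = a * 5.0 + b
```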
Transfer Learning: A research problem in machine learning that focuses on storing knowledge gained
while solving one problem and applying it to a different but related problem.
Unsupervised Learning: A type of machine learning that looks for previously undetected patterns in a
dataset without pre-existing labels.