Probably approximately correct learning
Framework for mathematical analysis of machine learning
In computational learning theory, probably approximately correct (PAC) learning is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.[1]
In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error (the "approximately correct" part). The learner must be able to learn the concept given any arbitrary approximation ratio, probability of success, or distribution of the samples.
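The "probably" and "approximately correct" requirements have a standard symbolic formulation (not spelled out in the text above): for every target concept $c$ in the class, every distribution $D$ over examples, and all $\varepsilon, \delta \in (0,1)$, the learner must output a hypothesis $h$ satisfying

```latex
\Pr\bigl[\operatorname{err}_D(h) \le \varepsilon\bigr] \ge 1 - \delta,
\qquad \text{where} \quad
\operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \neq c(x)\bigr].
```

Here $\varepsilon$ is the approximation parameter ("approximately correct") and $\delta$ is the failure probability ("probably"), with the probability taken over the random draw of the training samples.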
The model was later extended to treat noise (misclassified samples).
An important innovation of the PAC framework is the introduction of computational complexity theory concepts to machine learning. In particular, the learner is expected to find efficient functions (with time and space requirements bounded by a polynomial in the example size), and the learner itself must implement an efficient procedure (requiring a number of examples bounded by a polynomial in the concept size and in the approximation and confidence parameters, 1/ε and 1/δ).
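The polynomial sample requirement can be made concrete with a standard bound that is not stated above but follows from the framework: in the realizable case, a finite hypothesis class H is PAC-learnable from m ≥ (1/ε)(ln|H| + ln(1/δ)) examples by any algorithm that outputs a hypothesis consistent with the training set. A minimal sketch of this calculation:

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Return a sufficient number of samples for PAC learning a finite
    hypothesis class in the realizable setting, using the standard bound
    m >= (1/epsilon) * (ln|H| + ln(1/delta))."""
    if not (0 < epsilon < 1 and 0 < delta < 1):
        raise ValueError("epsilon and delta must lie in (0, 1)")
    m = (math.log(hypothesis_count) + math.log(1 / delta)) / epsilon
    return math.ceil(m)

# Example: 2**20 Boolean hypotheses, 5% error tolerance, 1% failure probability.
print(pac_sample_bound(2**20, epsilon=0.05, delta=0.01))  # → 370
```

Note that the bound grows only logarithmically in the size of the hypothesis class but linearly in 1/ε and logarithmically in 1/δ, which is exactly the polynomial dependence the framework demands.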
^ L. Valiant. "A Theory of the Learnable". Communications of the ACM, 27, 1984.