Simplified example of training a neural network for object detection: The network is trained on multiple images known to depict starfish and sea urchins, which are correlated with "nodes" that represent visual features. The starfish match a ringed texture and a star outline, whereas most sea urchins match a striped texture and an oval shape. However, one instance of a ring-textured sea urchin creates a weakly weighted association between the two.
Subsequent run of the network on an input image (left):[1] The network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of the two intermediate nodes. In addition, a shell that was not included in the training gives a weak signal for the oval shape, which also results in a weak signal for the sea urchin output. These weak signals may produce a false-positive result for sea urchin. In reality, textures and outlines would not be represented by single nodes, but rather by the associated weight patterns of many nodes.
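The weak-signal effect described in the caption can be sketched numerically. The feature names and weight values below are hypothetical, chosen only to mirror the caption's scenario of strong and weakly weighted associations:

```python
import numpy as np

# Hypothetical intermediate-node activations for the input image:
# [ringed texture, star outline, striped texture, oval shape]
# The unseen shell contributes a weak 0.3 signal to "oval shape".
features = np.array([1.0, 1.0, 0.0, 0.3])

# Hypothetical learned weights from each feature to [starfish, sea urchin].
# The 0.2 entry is the weakly weighted association from the caption.
W = np.array([
    [0.9, 0.2],   # ringed texture: strong -> starfish, weak -> sea urchin
    [0.9, 0.0],   # star outline:   strong -> starfish
    [0.0, 0.9],   # striped texture: strong -> sea urchin
    [0.0, 0.9],   # oval shape:      strong -> sea urchin
])

scores = features @ W
print(scores)  # starfish score dominates, but sea urchin gets a weak nonzero signal
```

The starfish output wins clearly, yet the weak associations accumulate into a nonzero sea urchin score, which is exactly how a false positive could arise if a decision threshold were set too low.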
A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers.[2] Its flow is uni-directional, meaning that information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes (if any), to the output nodes, without any cycles or loops,[2] in contrast to recurrent neural networks,[3] which have a bi-directional flow. Modern feedforward networks are trained using the backpropagation method[4][5][6][7][8] and are colloquially referred to as "vanilla" neural networks.[9]
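The uni-directional flow described above can be sketched as a minimal forward pass. The layer sizes, random weights, and ReLU activation here are illustrative assumptions, not taken from any particular network:

```python
import numpy as np

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate an input vector layer by layer in one direction only."""
    a = x
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)  # each layer feeds the next; no cycles or loops
    return a

rng = np.random.default_rng(0)
# A toy network: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs.
sizes = [4, 5, 3, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

y = forward(np.array([1.0, 0.5, -0.2, 0.3]), weights, biases)
print(y.shape)  # (2,)
```

Because there are no recurrent connections, a single left-to-right pass through the loop computes the output; training by backpropagation would then push error gradients through the same layers in the reverse direction.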
^ Ferrie, C.; Kaiser, S. (2019). Neural Networks for Babies. Sourcebooks. ISBN 1492671207.
^ a b Zell, Andreas (1994). Simulation Neuronaler Netze [Simulation of Neural Networks] (in German) (1st ed.). Addison-Wesley. p. 73. ISBN 3-89319-554-8.
^ Schmidhuber, Jürgen (2015-01-01). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. ISSN 0893-6080. PMID 25462637. S2CID 11715509.
^ Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors (Master's thesis) (in Finnish). University of Helsinki.
^ Kelley, Henry J. (1960). "Gradient theory of optimal flight paths". ARS Journal. 30 (10): 947–954.
^ Rosenblatt, Frank (1961). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books.
^ Werbos, Paul (1982). "Applications of advances in nonlinear sensitivity analysis". System Modeling and Optimization. Springer.
^ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, R. J. (1986). "Learning Internal Representations by Error Propagation". In Rumelhart, David E.; McClelland, James L.; the PDP Research Group (eds.). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press.
^ Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer.