DeepL Lab

Generate your next Deep Learning model using AI

Academy

Learning representations by back-propagating errors (1986)

This paper popularized backpropagation as an efficient way to train multilayer neural networks, propagating error signals backward through the layers to compute the weight updates.
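To make the mechanism concrete, here is a minimal NumPy sketch of backpropagation for a tiny two-layer network with a sigmoid hidden layer and mean-squared-error loss; the layer sizes, data, and learning rate are illustrative, not the setup used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))               # 8 samples, 3 input features
y = rng.normal(size=(8, 1))               # regression targets

W1 = rng.normal(scale=0.1, size=(3, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(4, 1))   # hidden -> output weights
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass
    h = sigmoid(X @ W1)                    # hidden activations
    y_hat = h @ W2                         # linear output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: propagate the error signal from output back to input
    d_out = 2.0 * (y_hat - y) / len(X)     # dLoss/dy_hat
    dW2 = h.T @ d_out                      # gradient for output weights
    d_h = (d_out @ W2.T) * h * (1.0 - h)   # error pushed back through the sigmoid
    dW1 = X.T @ d_h                        # gradient for hidden weights

    # Gradient-descent weight update
    W1 -= lr * dW1
    W2 -= lr * dW2
```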

ImageNet Classification with Deep Convolutional Neural Networks (2012, AlexNet)

This paper showed that deep convolutional networks trained on GPUs and large-scale data can massively outperform traditional vision methods on ImageNet, igniting the deep learning boom.
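For intuition, here is a minimal PyTorch sketch of a convolutional classifier in the same spirit: stacked convolution, ReLU, and pooling stages feeding a linear classifier. The layer sizes and input resolution are illustrative and far smaller than the original AlexNet.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learn local image filters
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # downsample by 2
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                              # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(torch.flatten(x, 1))

# One forward pass on a batch of four 32x32 RGB images (CIFAR-sized, not ImageNet-sized).
model = TinyConvNet()
logits = model(torch.randn(4, 3, 32, 32))                 # -> shape (4, 10)
```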

Attention Is All You Need (2017, Transformer)

This paper proposed the Transformer architecture based solely on self-attention, enabling highly parallel sequence modeling and laying the foundation for today’s large language models.
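At the core of the architecture is scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. Below is a minimal single-head, unmasked NumPy sketch of that operation; the full Transformer adds multi-head projections, positional encodings, residual connections, and feed-forward layers.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projection matrices."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])             # scaled pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # each position attends to every position

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 16, 8, 5
x = rng.normal(size=(seq_len, d_model))                 # one toy input sequence
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)                     # -> shape (5, 8)
```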