# Deep Learning & Machine Learning Research Paper Collection

## Overview
This repository is a curated collection of influential research papers in Deep Learning (DL), Machine Learning (ML), Artificial Intelligence (AI), Generative AI (GenAI), CUDA/Triton, and related fields. The goal is to provide a structured path to understanding the evolution, core concepts, and practical implementations of these areas.
## Disclaimer
This is a personal learning project. The implementations and notes may contain errors or simplifications. Use with caution and always refer to the original papers.
## Project Goals
- Implement the research papers in this collection from scratch.
- Use these from-scratch implementations as aids for learning and understanding.
- Build a structured, categorized, and well-maintained repository for reference.
## Inspiration
Inspired by @saurabhaloneai and expanded with additional research papers and implementations.
## Repository Structure
```
├── Foundational Deep Neural Networks
├── Optimization & Regularization
├── Sequence Modeling
├── Language Modeling
├── Open Source LLMs & Implementation
├── Architecture Innovations
├── Training Methodologies
├── Image Generative Modeling
├── Deep Reinforcement Learning
├── General Machine Learning Papers
├── CUDA & Triton Optimization Papers
├── Generative AI (GenAI)
├── Scaling & Model Optimization
├── Reasoning & Capabilities
├── Inference & Efficiency Techniques
├── Fine-tuning & Adaptation
├── Graph Neural Networks
└── Self-Supervised and Few-Shot Learning
```
## Research Papers Collection
### 1. Foundational Deep Neural Networks
### 2. Optimization & Regularization Techniques
### 3. Sequence Modeling
### 4. Language Modeling
### 5. Image Generative Modeling
### 6. Deep Reinforcement Learning
### 7. General Machine Learning Papers
### 8. CUDA & Triton Optimization Papers
### 9. Generative AI (GenAI)
### 10. Scaling & Model Optimization
### 11. Reasoning & Capabilities
### 12. Inference & Efficiency Techniques
### 13. Fine-tuning & Adaptation
### 14. Graph Neural Networks
### 15. Self-Supervised and Few-Shot Learning
## Implementation Guidelines
- Start Simple: Begin by implementing models with easy-to-use libraries such as scikit-learn or TensorFlow.
- Replicate Results: Try to reproduce the results from the paper before adding your own tweaks.
- Analyze: Evaluate your implementation with appropriate metrics and compare against the paper's reported results.
- Visualize: Use plots to understand model performance and feature importance (see the minimal sketch after this list).
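As an illustration of this workflow, here is a minimal sketch using scikit-learn and matplotlib. The dataset (scikit-learn's bundled breast-cancer set), the logistic-regression baseline, and the chosen metrics are stand-in assumptions for demonstration, not part of any specific paper in this collection.

```python
# Minimal baseline workflow: train, evaluate, and visualize.
# Dataset, model, and metric choices are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, ConfusionMatrixDisplay
import matplotlib.pyplot as plt

# Load a small stand-in dataset (replace with the paper's benchmark data).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Start simple: a linear baseline before any from-scratch re-implementation.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Analyze: report the metrics you will later compare against the paper.
y_pred = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(f"f1 score: {f1_score(y_test, y_pred):.3f}")

# Visualize: a confusion matrix gives a quick first look at model behaviour.
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test)
plt.show()
```

Once the baseline behaves as expected, swap in your from-scratch implementation and compare the same metrics side by side before adding your own tweaks.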
## How to Use
- Navigate to the relevant category in the repository.
- Read the research papers and access the implementations.
- Check the boxes as you complete each implementation.
- Experiment with implementations to gain deeper insights into each concept.
## References & Credits
Inspired by @saurabhaloneai and various research papers from arXiv, NeurIPS, CVPR, and ICML.
## Maintained by
Maintained by Abinash Pradhan - @abinashpradhan01.