📌 1. Mixed Precision Training
- Mixed Precision Training
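A minimal sketch of enabling mixed precision in tf.keras; the model architecture, layer sizes, and input shape below are illustrative placeholders, not taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16 where safe, keep variables in float32.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),          # input shape is a placeholder
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    # Keep the final softmax in float32 for numerical stability.
    layers.Dense(10, activation="softmax", dtype="float32"),
])

# Under this policy Keras wraps the optimizer with dynamic loss scaling,
# the key trick from the paper for avoiding float16 gradient underflow.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```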
📌 2. EfficientNet (Lightweight Models)
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
- EfficientNetV2: Smaller Models and Faster Training
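A minimal transfer-learning sketch using the pretrained EfficientNetB0 from tf.keras.applications; the input shape, class count, and dropout rate are illustrative assumptions.

```python
import tensorflow as tf

# Pretrained EfficientNetB0 backbone without its ImageNet classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for the first stage of fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes as a placeholder
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```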
📌 3. Dropout & Regularization
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting
- Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift
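A minimal Keras sketch combining Dropout with L2 regularization; the layer sizes, L2 strength, and dropout rate are illustrative, and placing Dropout after the BatchNormalization layers is one way to sidestep the variance-shift issue analyzed in the second paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Layer sizes, L2 strength, and dropout rate are illustrative.
model = tf.keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight decay
    layers.BatchNormalization(),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    # Dropout placed after the BatchNorm layers, reducing the
    # train/inference variance shift discussed in the paper above.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
```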
📌 4. Data Augmentation Techniques
- A survey on Image Data Augmentation for Deep Learning
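A minimal sketch of common image augmentations using Keras preprocessing layers; the chosen transforms and their strengths are illustrative examples, not a summary of the survey.

```python
import tensorflow as tf
from tensorflow.keras import layers

# These layers only transform inputs when called with training=True.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomContrast(0.1),
])

# Example usage inside a tf.data input pipeline:
# ds = ds.map(lambda x, y: (augment(x, training=True), y),
#             num_parallel_calls=tf.data.AUTOTUNE)
```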
📌 5. Model Compression & Quantization
- Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
- Post-training Quantization for TensorFlow Lite
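A minimal sketch of post-training full-integer quantization with the TFLite converter; "saved_model_dir" and the random representative dataset are placeholders that would come from a real model and real input samples.

```python
import tensorflow as tf

# "saved_model_dir" is a placeholder path to an exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Yield a few real input batches so the converter can calibrate int8 ranges;
    # random tensors are used here only as a stand-in.
    for _ in range(100):
        yield [tf.random.normal([1, 224, 224, 3])]

converter.representative_dataset = representative_dataset
# Force integer-only inference, matching the paper's setting.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```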
📌 6. TensorFlow Dataset Performance
- Performance guide for tf.data
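A minimal sketch of the usual tf.data performance patterns (parallel map, cache, shuffle, batch, prefetch); the preprocessing function and buffer/batch sizes are illustrative.

```python
import tensorflow as tf

def preprocess(image, label):
    # Placeholder preprocessing: resize and scale to [0, 1].
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, label

def build_pipeline(ds: tf.data.Dataset) -> tf.data.Dataset:
    return (ds
            .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
            .cache()                       # cache the deterministic preprocessing result
            .shuffle(10_000)               # buffer size is illustrative
            .batch(64)
            .prefetch(tf.data.AUTOTUNE))   # overlap the input pipeline with training
```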
📌 7. General Deep Learning Optimization
- Bag of Tricks for Image Classification with Convolutional Neural Networks
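A minimal sketch of two training refinements discussed in the paper, cosine learning-rate decay and label smoothing, expressed with Keras APIs; the hyperparameter values are illustrative.

```python
import tensorflow as tf

# Cosine learning-rate decay; the initial rate and step count are placeholders.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.1, decay_steps=10_000)

# Label smoothing; 0.1 is a commonly used smoothing factor.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)
# model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```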
✅ Summary of Recommended Papers/Resources
| Topic | Reference |
|---|---|
| Mixed Precision | NVIDIA paper |
| EfficientNet | EfficientNet paper |
| Dropout | Original Dropout paper |
| Data Augmentation | Survey paper |
| Quantization | Quantization paper |
| Dataset optimization | TensorFlow guide |