1. Mixed Precision Training
- Mixed Precision Training (official NVIDIA paper): https://arxiv.org/abs/1710.03740
- Related official guide: https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html
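Below is a minimal sketch of enabling mixed precision in TensorFlow/Keras (assuming TF 2.4+): the global policy runs compute in float16 while keeping variables in float32, and the final layer stays in float32 for numerical stability, as the NVIDIA guide recommends. The model architecture here is illustrative only.

```python
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16, variables stay in float32.
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    # Keep the final softmax in float32 to avoid float16 under/overflow in the loss.
    layers.Dense(10, activation="softmax", dtype="float32"),
])

# Under the mixed_float16 policy, compile() applies dynamic loss scaling automatically.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```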
2. EfficientNet (Lightweight Models)
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
- EfficientNetV2: Smaller Models and Faster Training
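As a minimal usage sketch (assuming the pretrained weights shipped with tf.keras.applications), EfficientNetB0 can serve as a frozen backbone for transfer learning; the 10-class head is an illustrative assumption.

```python
import tensorflow as tf

# EfficientNetB0 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone for transfer learning

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)  # keep BatchNorm layers in inference mode
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```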
3. Dropout & Regularization
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting
- Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift
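A minimal Keras sketch of Dropout as a regularizer: following the variance-shift paper above, Dropout is placed only near the classifier head, after the BatchNormalization blocks. The architecture and the 0.5 rate are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    # Dropout only after the BN blocks, near the head, to limit the
    # train/test variance shift described in the paper above.
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
```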
4. Data Augmentation Techniques
- A survey on Image Data Augmentation for Deep Learning
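A minimal sketch of in-model augmentation with Keras preprocessing layers (available as tf.keras.layers in TF 2.6+); the chosen transforms and their ranges are illustrative assumptions, and the layers are active only during training.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Random transforms applied only in training mode; identity at inference.
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),  # rotate by up to ±10% of a full turn
    layers.RandomZoom(0.1),
])

inputs = tf.keras.Input(shape=(224, 224, 3))
x = data_augmentation(inputs)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```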
5. Model Compression & Quantization
- Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
- Post-training Quantization for TensorFlow Lite
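A minimal sketch of TensorFlow Lite post-training (dynamic-range) quantization; the tiny model below is only a stand-in for a real trained model to be compressed.

```python
import tensorflow as tf

# Stand-in for an already trained Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```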
6. TensorFlow Dataset Performance
- Performance guide for tf.data
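A minimal sketch of the main tf.data performance recommendations (parallel map, cache, prefetch with AUTOTUNE); the in-memory range source and the parse function are illustrative stand-ins for a real file-based pipeline.

```python
import tensorflow as tf

AUTOTUNE = tf.data.AUTOTUNE

def parse(example):
    # Stand-in for real decoding/preprocessing (e.g. image decode + resize).
    return example

dataset = (
    tf.data.Dataset.range(10_000)              # stand-in source; usually TFRecords/files
    .map(parse, num_parallel_calls=AUTOTUNE)   # parallelize preprocessing
    .cache()                                   # cache after the expensive transforms
    .shuffle(buffer_size=1_000)
    .batch(32)
    .prefetch(AUTOTUNE)                        # overlap the input pipeline with training
)
```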
7. General Deep Learning Optimization
- Bag of Tricks for Image Classification with Convolutional Neural Networks
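Two of the tricks from the paper above, label smoothing and a cosine learning-rate schedule, map directly onto standard Keras APIs. This is a minimal sketch; the step count and initial rate are illustrative assumptions, while 0.1 is the smoothing value used in the paper.

```python
import tensorflow as tf

# Cosine learning-rate decay over an assumed number of training steps.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.1, decay_steps=10_000)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)

# Label smoothing as used in the paper's experiments.
loss = tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.1)

# model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
```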
Summary of Recommended Papers and Resources

Topic | Link
---|---
Mixed Precision | NVIDIA paper
EfficientNet | EfficientNet paper
Dropout | Original Dropout paper
Data Augmentation | Survey paper
Quantization | Quantization paper
Dataset optimization | TensorFlow guide