Studying Deep Learning for the minicon project. (The schedule is tight, so notes will be organized as time permits.)
- (03-20) Lecture 2 : Linear Classification
- (03-21) Lecture 3 : Loss Function, Optimization, Stochastic Gradient Descent
- (03-22) Lecture 4 : Introduction to Neural Network, BackPropagation
- (03-23) Lecture 6 : Training Neural Networks I
- (03-25) Lecture 10 : RNN and LSTM
- BiDAF (Bi-Directional Attention Flow)
- Attention is All You Need ( a.k.a. Transformers )
- BERT : Pre-Training of Deep Bidirectional Transformers for Language Understanding
- WordPiece Model (???)
- How should KorQuAD be preprocessed? With WPM??
- How should fine-tuning be done?
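Regarding the WPM question above: WordPiece splits each word into the longest subwords found in a fixed vocabulary, marking non-initial pieces with `##`. A minimal sketch of this greedy longest-match-first lookup, using a toy English vocabulary (an assumption for illustration — BERT's real vocabulary is learned from data, and KorQuAD preprocessing would need a Korean vocabulary):

```python
# Minimal WordPiece-style tokenizer sketch (greedy longest-match-first).
# The vocabulary here is a toy example, not BERT's actual vocab.
def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        match = None
        # Try the longest remaining substring first, shrinking until a hit.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # non-initial pieces get the "##" prefix
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return [unk]  # no subword covers this position: whole word is unknown
        tokens.append(match)
        start = end
    return tokens

vocab = {"un", "##aff", "##able", "play", "##ing"}
print(wordpiece_tokenize("unaffable", vocab))  # ['un', '##aff', '##able']
print(wordpiece_tokenize("playing", vocab))    # ['play', '##ing']
```

The same greedy lookup applies to Korean text once a suitable subword vocabulary is built, which is presumably the preprocessing step the question is asking about.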
- (Official Website) http://cs231n.stanford.edu/
- (Lecture Video) https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv
- (Official Website) http://web.stanford.edu/class/cs224n/
- (Lecture Video) https://www.youtube.com/watch?v=5vcj8kSwBCY&list=PLoROMvodv4rOhcuXMZkNm7j3fVwBBY42z&index=13 (on Attention)
- (Referenced GitHub) https://jalammar.github.io/illustrated-bert/ (BERT, ELMo, etc.)
- (Referenced GitHub) https://jalammar.github.io/illustrated-transformer/ (Transformers)
- (Textbook) Deep Learning from Scratch (밑바닥부터 시작하는 딥러닝)