- Advanced Self-supervised Pre-training Models
  September 18, 2021 · 8 min read · nlp
  Overview of GPT-2, GPT-3, and ALBERT covering zero/few-shot learning, model scaling, and efficient pre-training approaches.

- Self-supervised Pre-training Models
  September 18, 2021 · 7 min read · nlp
  Overview of self-supervised pre-training models including GPT-1 and BERT, covering masked language modeling and transfer learning.

- Naver Boostcamp AI Tech 2nd - Week 7 Report
  September 17, 2021 · 3 min read · naver-boostcamp
  Week 7 retrospective of Naver Boostcamp AI Tech covering Transformer and BERT studies, with mentoring Q&A on attention mechanisms and positional encoding.

- Introduction to the Transformer
  September 13, 2021 · 6 min read · nlp
  Introduction to the Transformer architecture: self-attention with Query/Key/Value, scaled dot-product attention, and how it overcomes RNN limitations.

- Boostcamp AI Tech 2nd - Week 6 Study Summary
  September 10, 2021 · 1 min read · naver-boostcamp
  Week 6 study summary of Naver Boostcamp AI Tech covering NLP lectures, bucketing for data batching, and peer session discussions.

- BLEU
  September 10, 2021 · 2 min read · nlp
  BLEU score for machine translation evaluation: precision, recall, F1, n-gram overlap, and brevity penalty.

2021