
[Day 59] Stanford CS224N (NLP with DL): Backprop and Dependency Parsing

Hello :) Today is Day 59! A quick summary of today: from the Stanford CS224N course, I covered:

- Lecture 3: Backprop and Neural Networks
- Lecture 4: Dependency Parsing

And below are my notes for both.

Lecture 3: Backprop

After Andrej Karpathy's course, backprop and I have a friendly relationship, so this felt like a nice overview of backprop in NNs (a tiny worked example is at the end of this post).

Lecture 4: Dependency Parsing

This one felt like a linguistics lesson: learning how people interpret language, and how that logic was transferred to computers (there is a small transition-system sketch at the end of this post too). Tomorrow is RNNs' turn. Really exciting!

That is all for today! See you tomorrow :)
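As a small postscript on Lecture 3: here is a minimal backprop sketch in NumPy. It is my own toy example rather than anything from the lecture notes: a one-hidden-layer network with a sigmoid, where the gradients are computed by hand with the chain rule and then sanity-checked against a finite difference.

```python
import numpy as np

# Toy setup: one hidden layer, sigmoid activation, squared-error loss.
# Shapes: x (3,), W1 (4, 3), b1 (4,), w2 (4,), scalar target y.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass, keeping intermediates for the backward pass.
z = W1 @ x + b1              # pre-activation, shape (4,)
h = sigmoid(z)               # hidden activation, shape (4,)
s = w2 @ h                   # scalar score
loss = 0.5 * (s - y) ** 2

# Backward pass: chain rule from the loss down to each parameter.
ds = s - y                   # dL/ds
dw2 = ds * h                 # dL/dw2
dh = ds * w2                 # dL/dh
dz = dh * h * (1 - h)        # sigmoid'(z) = h * (1 - h)
dW1 = np.outer(dz, x)        # dL/dW1
db1 = dz                     # dL/db1

# Sanity-check one entry of dW1 with a forward finite difference.
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
num = (0.5 * (w2 @ sigmoid(W1p @ x + b1) - y) ** 2 - loss) / eps
print(dW1[0, 0], num)        # the two values should agree closely
```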
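And one for Lecture 4: a bare-bones sketch of the arc-standard transition system (SHIFT / LEFT-ARC / RIGHT-ARC) that the lecture's neural dependency parser is built on. The sentence and the hand-written transition sequence are my own toy example; a real parser would predict each transition with a classifier instead.

```python
# Arc-standard parsing of a toy sentence with a fixed transition sequence.
sentence = ["I", "ate", "fish"]
stack, buf, arcs = ["ROOT"], list(sentence), []

def shift():
    """Move the next buffer word onto the stack."""
    stack.append(buf.pop(0))

def left_arc():
    """Top of stack becomes the head of the word just below it."""
    dep = stack.pop(-2)
    arcs.append((stack[-1], dep))

def right_arc():
    """Word below the top becomes the head of the top."""
    dep = stack.pop()
    arcs.append((stack[-1], dep))

# SHIFT I, SHIFT ate, LEFT-ARC (ate -> I), SHIFT fish,
# RIGHT-ARC (ate -> fish), RIGHT-ARC (ROOT -> ate)
for action in [shift, shift, left_arc, shift, right_arc, right_arc]:
    action()

print(arcs)  # [('ate', 'I'), ('ate', 'fish'), ('ROOT', 'ate')]
```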

[Day 58] Stanford CS224N (NLP with DL): Lecture 2 - Neural classifiers (diving deeper into word embeddings)

Hello :) Today is Day 58! A quick summary of today:

- covered Lecture 2 of Stanford's NLP with DL
- did Assignment 1 on Google Colab, which covered some exercises on count-based and prediction-based methods (a tiny count-based sketch is at the end of this post)
- read 6 papers about word embeddings and wrote down some basic summaries:
  - GloVe: Global Vectors for Word Representation
  - Improving Distributional Similarity with Lessons Learned from Word Embeddings
  - Evaluation methods for unsupervised word embeddings
  - A Latent Variable Model Approach to PMI-based Word Embeddings
  - Linear Algebraic Structure of Word Senses, with Applications to Polysemy
  - On the Dimensionality of Word Embedding

First, I will share my notes from the lecture. Next are the short summaries of each paper. These were definitely interesting papers; I feel like I am learning the history of something big, and after a few lectures I will be in the present haha. They preceded the famous Transformer and laid the groundwork for modern embeddings.
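To close off, here is a small sketch of the count-based idea from the assignment: build a word co-occurrence matrix from a corpus and reduce it with a truncated SVD to get low-dimensional embeddings. The corpus, window size, and dimensionality below are my own toy choices, not the assignment's.

```python
import numpy as np

# Tiny corpus; the assignment uses a much larger one.
corpus = [
    "all that glitters is not gold".split(),
    "all is well that ends well".split(),
]
window = 2  # symmetric context window size

# Build the vocabulary and a word -> index mapping.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within the window.
M = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if j != i:
                M[idx[w], idx[sent[j]]] += 1

# Full SVD, then keep the top-k singular directions (truncated SVD):
# each row of U[:, :k] * S[:k] is a k-dimensional word embedding.
U, S, Vt = np.linalg.svd(M)
k = 2
embeddings = U[:, :k] * S[:k]
for w in vocab:
    print(f"{w:10s} {embeddings[idx[w]]}")
```

Words that appear in similar contexts (here, via shared window co-occurrences) end up with similar rows, which is the distributional intuition behind the prediction-based methods too.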