[Day 116] Reading some research + transferring posts to the new blog

Hello :) Today is Day 116! A quick summary of today:

- read 4 research papers about graphs and LLMs
- started transferring blog posts to the new blog

Do Transformers Really Perform Bad for Graph Representation? (link)

Transformers dominate fields like NLP and computer vision, but so far they have not achieved competitive performance in graph representation learning. The paper presents 'Graphormer', whose architecture introduces several novel ideas to better encode graph structural information. The three main ones are:

1. Centrality encoding - In the original Transformer, the attention distribution is computed from the semantic correlation between nodes, which ignores node importance (for example, a highly followed celebrity node is very important and can easily affect a social network). The centrality encoding assigns each node two learnable real-valued embedding vectors according to its in-degree and out-degree, so the model can capture both semantic correlation and node importance in its attention mechanism (see the sketch after this list).

2. Spatial encoding - assigns each node pair a learnable bias term indexed by the shortest-path distance between them; the bias is added to the attention scores so the model can take the graph's structure into account.
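
To make the centrality encoding concrete, here is a minimal PyTorch sketch; this is not the paper's implementation, and the class name, hidden size, and degree cap max_degree are all hypothetical. Each node's feature vector is summed with two learnable embeddings looked up by its in-degree and out-degree.

```python
import torch
import torch.nn as nn

class CentralityEncoding(nn.Module):
    """Add learnable in-degree and out-degree embeddings to node features."""

    def __init__(self, hidden_dim: int, max_degree: int = 512):
        super().__init__()
        # one embedding table per degree type; max_degree caps the lookup index
        self.in_degree_emb = nn.Embedding(max_degree, hidden_dim)
        self.out_degree_emb = nn.Embedding(max_degree, hidden_dim)

    def forward(self, x, in_degree, out_degree):
        # x: (num_nodes, hidden_dim) node features
        # in_degree / out_degree: (num_nodes,) integer degree of each node
        cap = self.in_degree_emb.num_embeddings - 1
        in_degree = in_degree.clamp(max=cap)
        out_degree = out_degree.clamp(max=cap)
        return x + self.in_degree_emb(in_degree) + self.out_degree_emb(out_degree)

# toy usage: 5 nodes with 64-dim features
x = torch.randn(5, 64)
in_deg = torch.tensor([3, 1, 0, 2, 4])
out_deg = torch.tensor([1, 2, 3, 0, 4])
h = CentralityEncoding(hidden_dim=64)(x, in_deg, out_deg)  # shape stays (5, 64)
```

The key design point is that the degree embeddings are added to the input features before attention, so node importance influences the queries, keys, and values alike rather than being bolted onto the attention scores.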