Attention Probing

What the Model Sees

For each Sumerian word, we extracted attention weights from all 4 layers of our GPT. "Attends to" captures what the word looks at in its context; "attended by" captures how much later words look back at it.
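To make the two directions concrete, here is a minimal NumPy sketch of a single causal attention head. It is illustrative only, not the lab's actual probing code: the shapes, the random Q/K matrices, and the token index are all assumptions. The point is that a word's row in the attention matrix is what it attends to, while its column (read from later rows) is what attends back to it.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical setup: T tokens, d-dimensional queries/keys from one head.
T, d = 6, 16
rng = np.random.default_rng(0)
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))

# Causal attention: token i may only look at tokens j <= i.
scores = Q @ K.T / np.sqrt(d)
scores[np.triu_indices(T, k=1)] = -np.inf  # mask future positions
A = softmax(scores, axis=-1)               # A[i, j] = weight of token i on token j

i = 3
attends_to = A[i, : i + 1]    # row i: what token i looks at in its context
attended_by = A[i + 1 :, i]   # column i, later rows: how later tokens look back at it
```

Aggregating such rows and columns across the model's heads and layers (the article does not say exactly how) would yield the per-word "attends to" and "attended by" profiles described above.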

Browse by category:

  • 🔥 Key Findings
  • ⚙️ ME & Powers
  • 📐 NAM- Abstractions
  • ✨ Divine
  • 🌍 Places & Cosmos
  • ⚔️ Actors & Actions

Mar 11, 2026

What a Neural Network Sees in Sumerian

We trained a 6.8M-parameter GPT on 66K Sumerian sentences and probed its attention weights. The results confirm some philological claims, challenge others, and reveal semantic associations invisible to traditional methods.

Read Article