Deep Implicit Attention: A Mean-Field Theory Perspective on Attention Mechanisms


“The main takeaway so far has been that you can think of softmax attention as implementing a single, big gradient step of some energy function and that training transformers is akin to meta-learning how to best tune a stack of attention and feed-forward modules to perform well on some auxiliary (meta-)task(s). But what can an energy-based perspective actually provide beyond quaint and hand-wavy statements like ‘implicit energy landscapes are sculpted every time you train a transformer’?

In this post, we approach attention in terms of the collective response of a statistical-mechanical system. Attention is interpreted as an inner-loop fixed-point optimization step which returns the approximate response of a system being probed by data. This response is a differentiable compromise between the system’s internal dynamics and the data it’s being exposed to. To better respond to incoming data, outer-loop optimization steps can nudge the interactions and the self-organizing behaviour of the system…”
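The inner-loop picture described above — a system iterated to a self-consistent response under the influence of data — can be sketched with a toy mean-field fixed-point iteration. This is an illustrative example, not code from the post: the function name, the `tanh` self-consistency equation, and the coupling matrix `J` are all assumptions standing in for the system's learned internal dynamics, with `h` playing the role of the probing data.

```python
import numpy as np

def mean_field_response(J, h, n_iter=100, tol=1e-8):
    """Inner loop: iterate the mean-field self-consistency equation
    x = tanh(J @ x + h) to a fixed point.

    J is a hypothetical stand-in for the system's internal couplings
    (what outer-loop training would nudge); h is the incoming data
    probing the system. The returned fixed point is the system's
    collective response: a compromise between internal dynamics (J)
    and external drive (h)."""
    x = np.zeros_like(h)
    for _ in range(n_iter):
        x_new = np.tanh(J @ x + h)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
d = 8
J = 0.1 * rng.standard_normal((d, d))  # weak couplings keep the map contractive
h = rng.standard_normal(d)             # "data" probing the system
x_star = mean_field_response(J, h)

# At convergence the self-consistency residual is (numerically) zero:
residual = np.max(np.abs(x_star - np.tanh(J @ x_star + h)))
```

Because the fixed point is defined implicitly, gradients with respect to `J` and `h` can be obtained via the implicit function theorem rather than by backpropagating through the iterations — which is what makes the "differentiable compromise" framing more than a metaphor.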

Source: mcbal.github.io/post/deep-implicit-attention-a-mean-field-theory-perspective-on-attention-mechanisms/

May 9, 2021
