Shrinking massive neural networks used to model language

Deep learning neural networks can be massive, demanding major computing power. In a test of the “lottery ticket hypothesis,” MIT researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
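The "lottery ticket hypothesis" holds that a trained network contains a small subnetwork (a "winning ticket") that can match the full model's accuracy on its own. Such subnetworks are typically found by masking out the smallest-magnitude weights. The sketch below illustrates that core pruning step; it is a minimal illustration, not the MIT team's actual code, and `magnitude_prune_mask` is a hypothetical helper name.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Return a 0/1 mask that keeps only the largest-magnitude weights.

    sparsity is the fraction of weights to remove (e.g. 0.5 prunes half).
    """
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = magnitude_prune_mask(w, 0.5)  # prune half of the 16 weights
print(int(mask.sum()))               # 8 weights survive
```

In lottery-ticket experiments this masking is applied iteratively: train, prune the smallest weights, rewind the survivors to their early values, and retrain, leaving a much leaner subnetwork.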

Source: news.mit.edu/2020/neural-model-language-1201