Shrinking massive neural networks used to model language
Deep learning models can be massive, demanding major computing power. In a test of the “lottery ticket hypothesis,” MIT researchers have found leaner, more efficient subnetworks hidden within BERT models. The discovery could make natural language processing more accessible.
December 10, 2020
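The lottery ticket hypothesis holds that a large trained network contains a small subnetwork that, when reset to its original initialization, can be trained alone to match the full model's accuracy. A minimal sketch of the core operations, iterative magnitude pruning and weight rewinding, is below; the weights and `magnitude_prune` helper are illustrative, not the researchers' actual code or BERT parameters.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned weights and a binary mask (1 = kept, 0 = pruned).
    """
    n_prune = int(len(weights) * sparsity)
    # Indices of the weights with the smallest absolute values
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    mask = [1] * len(weights)
    for i in order[:n_prune]:
        mask[i] = 0
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask

# Toy example: prune half the weights based on their trained magnitudes,
# then "rewind" by applying the same mask to the initial weights, so the
# surviving subnetwork (the "winning ticket") restarts from its original
# initialization.
initial = [0.5, -0.02, 1.3, 0.01, -0.8, 0.03]
trained = [0.9, -0.05, 1.1, 0.00, -0.7, 0.10]
_, mask = magnitude_prune(trained, sparsity=0.5)
winning_ticket = [w * m for w, m in zip(initial, mask)]
print(mask)  # which weights survive pruning
```

In practice this is done per layer on matrices of millions of parameters, and pruning is applied iteratively over several train-prune-rewind rounds rather than in one shot.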