Implicit Neural Representations with Periodic Activation Functions


The following results compare SIREN to a variety of network architectures. TanH, ReLU, Softplus, etc. denote MLPs of equal size with the respective nonlinearity. We also compare to the recently proposed positional encoding combined with a ReLU nonlinearity, denoted ReLU P.E. SIREN outperforms all baselines by a significant margin, converges significantly faster, and is the only architecture that accurately represents the gradients of the signal, enabling its use to solve boundary value problems.

Source: vsitzmann.github.io/siren/
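
For readers who want to experiment, here is a minimal PyTorch sketch of the SIREN architecture: an MLP whose layers compute sin(ω₀ · (Wx + b)), using the layer-wise initialization described in the paper with ω₀ = 30. The layer sizes and the 2D-coordinate demo at the end are illustrative assumptions, not the authors' exact training setup.

```python
# Minimal SIREN sketch (assumed hyperparameters; see the paper for details).
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: weights uniform in [-1/n, 1/n].
                bound = 1.0 / in_features
            else:
                # Hidden layers: uniform in [-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0],
                # keeping pre-activations in a well-behaved range.
                bound = math.sqrt(6.0 / in_features) / self.omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    def __init__(self, in_features=2, hidden_features=256,
                 hidden_layers=3, out_features=1):
        super().__init__()
        layers = [SineLayer(in_features, hidden_features, is_first=True)]
        for _ in range(hidden_layers):
            layers.append(SineLayer(hidden_features, hidden_features))
        # Linear output layer: signal values are not constrained to [-1, 1].
        layers.append(nn.Linear(hidden_features, out_features))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

# Example (illustrative): map 2D pixel coordinates to grayscale intensity.
model = Siren(in_features=2, out_features=1)
coords = torch.rand(1024, 2) * 2 - 1  # coordinates normalized to [-1, 1]
values = model(coords)                # predicted signal at those coordinates
```

Because the sine is smooth, the gradient of a SIREN (obtained via autograd) is itself a well-behaved continuous function of the input coordinates; this is the property that lets the network accurately fit the gradients of a signal and serve as a solution ansatz for boundary value problems.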

July 23, 2020