Implicit Neural Representations with Periodic Activation Functions
The following results compare SIREN to a variety of network architectures. "TanH", "ReLU", "Softplus", etc. denote MLPs of equal size with the respective nonlinearity. We also compare to the recently proposed positional encoding combined with a ReLU nonlinearity, denoted ReLU P.E. SIREN outperforms all baselines by a significant margin, converges significantly faster, and is the only architecture that accurately represents the gradients of the signal, enabling its use in solving boundary value problems.
Source: vsitzmann.github.io/siren/
July 23, 2020
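To make the comparison concrete, here is a minimal NumPy sketch of a SIREN: an MLP whose hidden layers compute sin(omega_0 * (Wx + b)) and whose weights follow the initialization scheme from the paper (first layer uniform in [-1/fan_in, 1/fan_in], later layers uniform in [-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0], with omega_0 = 30). The function names, layer sizes, and the coordinate-to-grayscale example are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def siren_init(fan_in, fan_out, is_first, omega_0=30.0, rng=None):
    """Initialize one SIREN layer following Sitzmann et al. (2020):
    the first layer is drawn from U(-1/fan_in, 1/fan_in); later layers
    from U(-sqrt(6/fan_in)/omega_0, sqrt(6/fan_in)/omega_0), so that
    pre-activations stay in a range where sin() behaves well."""
    if rng is None:
        rng = np.random.default_rng(0)
    bound = 1.0 / fan_in if is_first else np.sqrt(6.0 / fan_in) / omega_0
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = rng.uniform(-bound, bound, size=fan_out)
    return W, b

def siren_forward(x, layers, omega_0=30.0):
    """Forward pass: sine activations on all hidden layers, linear output."""
    *hidden, (W_out, b_out) = layers
    for W, b in hidden:
        x = np.sin(omega_0 * (x @ W + b))
    return x @ W_out + b_out

# Example: a small SIREN mapping 2-D pixel coordinates to grayscale values.
rng = np.random.default_rng(42)
dims = [2, 256, 256, 1]  # hypothetical sizes, not the paper's exact config
layers = [siren_init(dims[i], dims[i + 1], is_first=(i == 0), rng=rng)
          for i in range(len(dims) - 1)]
coords = rng.uniform(-1, 1, size=(1024, 2))  # coordinates normalized to [-1, 1]
out = siren_forward(coords, layers)          # shape (1024, 1)
```

Dividing the hidden-layer bound by omega_0 keeps pre-activation magnitudes roughly constant across depth; per the paper, this principled initialization is what makes training with sine activations stable and fast. Because sin is smooth, the network's gradients are themselves SIREN-like and well-defined everywhere, which is what allows the architecture to supervise derivatives directly when solving boundary value problems.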