Zheng Zhijie's Personal Blog
Welcome
Improving Deep Learning by Inverse Square Root Linear Units (ISRLUs)
Improving deep learning with Inverse Square Root Linear Units (ISRLU).
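As a quick illustration, the ISRLU keeps the identity for non-negative inputs and smoothly saturates negative inputs; a minimal PyTorch sketch (the default alpha = 1.0 is only an example value):

```python
import torch

def isrlu(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """ISRLU: identity for x >= 0, x / sqrt(1 + alpha * x^2) for x < 0."""
    return torch.where(x >= 0, x, x * torch.rsqrt(1 + alpha * x * x))
```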
ZerO Initialization: Initializing Neural Networks with only Zeros and Ones
ZerO initialization: initializing neural networks with only zeros and ones.
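A rough sketch of the core idea in a simplified setting: weight matrices start as a (partial) identity and everything else as zeros. The paper's full scheme also uses Hadamard transforms when input and output dimensions differ; the helper name `zero_init_linear` below is hypothetical.

```python
import torch
import torch.nn as nn

def zero_init_linear(layer: nn.Linear) -> None:
    """Simplified ZerO-style init: partial identity weights, zero bias."""
    with torch.no_grad():
        layer.weight.zero_()
        n = min(layer.weight.shape)
        layer.weight[:n, :n] += torch.eye(n)  # identity on the overlapping block
        if layer.bias is not None:
            layer.bias.zero_()
```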
Dynamic ReLU
DY-ReLU: Dynamic Rectified Linear Unit.
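The idea is that the slopes and intercepts of a piecewise-linear activation are produced per input by a small hyper-network. Below is a simplified per-channel (DY-ReLU-B-style) sketch in PyTorch; the reduction ratio, coefficient scaling, and initialization are illustrative simplifications rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class DyReLUB(nn.Module):
    """Simplified DY-ReLU sketch: per-channel coefficients (a_k, b_k) are
    predicted from the input, and the output is max_k (a_k * x + b_k)."""
    def __init__(self, channels: int, k: int = 2, reduction: int = 4):
        super().__init__()
        self.k = k
        self.hyper = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * k * channels),
        )
        init = torch.zeros(2 * k)
        init[0] = 1.0  # start close to a plain linear unit
        self.register_buffer("init_coef", init)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        n, c = x.shape[0], x.shape[1]
        theta = self.hyper(x.mean(dim=(2, 3)))      # global context -> (N, 2KC)
        theta = 2 * torch.sigmoid(theta) - 1        # squash to [-1, 1]
        theta = theta.view(n, c, 2 * self.k) + self.init_coef
        a = theta[..., : self.k].reshape(n, c, self.k, 1, 1)
        b = theta[..., self.k :].reshape(n, c, self.k, 1, 1)
        out = a * x.unsqueeze(2) + b                # (N, C, K, H, W)
        return out.max(dim=2).values
```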
Learning Activation Functions to Improve Deep Neural Networks
APL: Adaptive Piecewise Linear units.
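The APL unit adds a learnable sum of hinge functions on top of a ReLU; a minimal sketch, with the hinge parameters shared across all neurons for brevity (the paper learns them per neuron):

```python
import torch
import torch.nn as nn

class APL(nn.Module):
    """Adaptive Piecewise Linear unit (sketch):
    f(x) = max(0, x) + sum_s a_s * max(0, -x + b_s), with learnable a_s, b_s."""
    def __init__(self, num_hinges: int = 3):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(num_hinges))
        self.b = nn.Parameter(torch.zeros(num_hinges))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(x)
        for a_s, b_s in zip(self.a, self.b):
            out = out + a_s * torch.relu(-x + b_s)
        return out
```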
Orthogonal-Padé Activation Functions: Trainable Activation functions for smooth and faster convergence in deep networks
OPAU: trainable activation functions based on orthogonal Padé approximation.
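OPAU replaces the monomial basis of the Padé activation (see the PAU entry below) with an orthogonal polynomial basis. A rough sketch using Chebyshev polynomials as one possible basis; the degrees, initialization, and "safe" absolute-value denominator here are illustrative choices rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

def chebyshev_basis(x: torch.Tensor, degree: int) -> list:
    """Chebyshev polynomials of the first kind T_0(x) .. T_degree(x)."""
    basis = [torch.ones_like(x), x]
    for _ in range(2, degree + 1):
        basis.append(2 * x * basis[-1] - basis[-2])
    return basis[: degree + 1]

class OPAU(nn.Module):
    """Sketch of an orthogonal-Padé activation: a ratio of learnable linear
    combinations of an orthogonal polynomial basis, with an absolute value in
    the denominator to keep it away from zero."""
    def __init__(self, num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        self.c = nn.Parameter(torch.zeros(num_degree + 1))
        self.d = nn.Parameter(torch.zeros(den_degree))
        with torch.no_grad():
            self.c[1] = 1.0  # start near the identity function

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        num_basis = chebyshev_basis(x, self.c.numel() - 1)
        den_basis = chebyshev_basis(x, self.d.numel())[1:]  # skip T_0
        num = sum(c_i * f_i for c_i, f_i in zip(self.c, num_basis))
        den = 1 + torch.abs(sum(d_j * f_j for d_j, f_j in zip(self.d, den_basis)))
        return num / den
```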
Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks
PAU: learnable activation functions based on Padé approximation.
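A minimal sketch of a "safe" Padé activation unit of order (m, n): the numerator is a degree-m polynomial, the denominator is 1 plus the absolute value of a degree-n polynomial without a constant term, and all coefficients are learned end to end. The identity-like initialization below is an illustrative choice.

```python
import torch
import torch.nn as nn

class PAU(nn.Module):
    """Safe Padé Activation Unit (sketch):
    F(x) = (a_0 + a_1 x + ... + a_m x^m) / (1 + |b_1 x + ... + b_n x^n|)."""
    def __init__(self, m: int = 5, n: int = 4):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(m + 1))
        self.b = nn.Parameter(torch.zeros(n))
        with torch.no_grad():
            self.a[1] = 1.0  # initialize close to the identity function

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        powers = [x.new_ones(x.shape)]              # x^0
        for _ in range(max(self.a.numel(), self.b.numel() + 1) - 1):
            powers.append(powers[-1] * x)           # x^1, x^2, ...
        num = sum(a_j * p for a_j, p in zip(self.a, powers))
        den = 1 + torch.abs(sum(b_k * p for b_k, p in zip(self.b, powers[1:])))
        return num / den
```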