xenonpy.model.nn package
Submodules
xenonpy.model.nn.layer module
- class xenonpy.model.nn.layer.Layer1d(n_in, n_out, *, drop_out=0.0, layer_func=functools.partial(<class 'torch.nn.modules.linear.Linear'>, bias=True), act_func=ReLU(), batch_nor=functools.partial(<class 'torch.nn.modules.batchnorm.BatchNorm1d'>, eps=1e-05, momentum=0.1, affine=True))[source]
Bases: Module
Base NN layer. This is a wrapper around PyTorch. See the PyTorch documentation for details: http://pytorch.org/docs/master/nn.html#
- Parameters:
  - n_in (int) – size of each input sample.
  - n_out (int) – size of each output sample.
  - drop_out (float) – probability of an element being zeroed by dropout. Default: 0.0 (no dropout).
  - layer_func – callable that builds the linear transformation. Default: functools.partial(torch.nn.Linear, bias=True).
  - act_func – activation function module. Default: ReLU().
  - batch_nor – callable that builds the batch normalization. Default: functools.partial(torch.nn.BatchNorm1d, eps=1e-05, momentum=0.1, affine=True).
- forward(*x)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
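A minimal usage sketch (the shapes and the drop_out value are illustrative, not from the original docs):

    import torch
    from xenonpy.model.nn.layer import Layer1d

    # a fully connected layer: 290 inputs -> 100 outputs, built from the
    # default stack (Linear + ReLU + BatchNorm1d), with 10% dropout
    layer = Layer1d(n_in=290, n_out=100, drop_out=0.1)

    x = torch.randn(16, 290)  # a batch of 16 samples
    y = layer(x)              # call the Module instance, not forward(), so hooks run
    print(y.shape)            # torch.Size([16, 100])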
xenonpy.model.nn.wrap module
- class xenonpy.model.nn.wrap.L1[source]
Bases: object
- static batch_norm(*args, **kwargs)[source]
Wrapper class for torch.nn.BatchNorm1d. http://pytorch.org/docs/0.3.0/nn.html#torch.nn.BatchNorm1d
- static conv(*args, **kwargs)[source]
Wrapper class for torch.nn.Conv1d. http://pytorch.org/docs/0.3.0/nn.html#torch.nn.Conv1d
- static instance_norm(*args, **kwargs)[source]
Wrapper class for torch.nn.InstanceNorm1d. http://pytorch.org/docs/0.3.0/nn.html#torch.nn.InstanceNorm1d
- static linear(*args, **kwargs)[source]
Wrapper class for torch.nn.Linear. http://pytorch.org/docs/0.3.0/nn.html#torch.nn.Linear
- class xenonpy.model.nn.wrap.LrScheduler[source]
Bases: object
- static exponential_lr(*args, **kwargs)[source]
Wrapper class for torch.optim.lr_scheduler.ExponentialLR. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.lr_scheduler.ExponentialLR
- static lambda_lr(*args, **kwargs)[source]
Wrapper class for torch.optim.lr_scheduler.LambdaLR. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.lr_scheduler.LambdaLR
- static multi_step_lr(*args, **kwargs)[source]
Wrapper class for torch.optim.lr_scheduler.MultiStepLR. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.lr_scheduler.MultiStepLR
- static reduce_lr_on_plateau(*args, **kwargs)[source]
Wrapper class for torch.optim.lr_scheduler.ReduceLROnPlateau. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
- static step_lr(*args, **kwargs)[source]
Wrapper class for torch.optim.lr_scheduler.StepLR. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.lr_scheduler.StepLR
- class xenonpy.model.nn.wrap.Optim[source]
Bases: object
- static ada_delta(*args, **kwargs)[source]
Wrapper class for torch.optim.Adadelta. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.Adadelta
- static ada_grad(*args, **kwargs)[source]
Wrapper class for torch.optim.Adagrad. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.Adagrad
- static ada_max(*args, **kwargs)[source]
Wrapper class for torch.optim.Adamax. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.Adamax
- static adam(*args, **kwargs)[source]
Wrapper class for torch.optim.Adam. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.Adam
- static asgd(*args, **kwargs)[source]
Wrapper class for torch.optim.ASGD. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.ASGD
- static lbfgs(*args, **kwargs)[source]
Wrapper class for torch.optim.LBFGS. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.LBFGS
- static r_prop(*args, **kwargs)[source]
Wrapper class for torch.optim.Rprop. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.Rprop
- static rms_prop(*args, **kwargs)[source]
Wrapper class for torch.optim.RMSprop. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.RMSprop
- static sgd(*args, **kwargs)[source]
Wrapper class for torch.optim.SGD. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.SGD
- static sparse_adam(*args, **kwargs)[source]
Wrapper class for torch.optim.SparseAdam. http://pytorch.org/docs/0.3.0/optim.html#torch.optim.SparseAdam