:orphan:

:py:mod:`Decoder-checkpoint`
============================

.. py:module:: Decoder-checkpoint


Module Contents
---------------

Classes
~~~~~~~

.. autoapisummary::

   Decoder-checkpoint.Decoder


.. py:class:: Decoder(input_size, z_dim, hidden_dims_dec_common, modality_independent_decoder, output_types_input, modality_independent_decoder_side=None, output_types_side=None, linearize=False)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a tree
   structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F

       class Model(nn.Module):
           def __init__(self):
               super(Model, self).__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and their parameters will
   also be converted when you call :meth:`to`, etc.

   :ivar training: Boolean representing whether this module is in training or
       evaluation mode.
   :vartype training: bool

   .. py:method:: forward(z)
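The internals of ``Decoder`` are not shown in this stub; only its constructor signature and ``forward(z)`` are documented. As a minimal sketch of the subclassing pattern the docstring describes, the hypothetical ``TinyDecoder`` below (its name, layer sizes, and MLP structure are illustrative assumptions, not the actual implementation) maps a latent sample ``z`` back to data space:

```python
import torch
import torch.nn as nn


class TinyDecoder(nn.Module):
    """Illustrative decoder: a small MLP from latent space to data space.

    This is NOT the Decoder documented above; it only demonstrates the
    nn.Module subclass pattern with a forward(z) method.
    """

    def __init__(self, z_dim, hidden_dim, output_size):
        super().__init__()
        # Submodules assigned as attributes are registered automatically.
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, output_size),
        )

    def forward(self, z):
        return self.net(z)


dec = TinyDecoder(z_dim=16, hidden_dim=64, output_size=128)
z = torch.randn(8, 16)   # batch of 8 latent samples
x_hat = dec(z)           # calling the module invokes forward(z)
print(tuple(x_hat.shape))
```

Because the submodules were assigned as attributes, ``dec.to(device)`` would move all of their parameters as well, exactly as the docstring notes.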