Variational autoencoders (VAEs) and their variations.

class VAEOutput[source]

VAEOutput(pred:Tensor, kl_loss:Tensor) :: tuple

A named tuple holding the reconstruction pred and the KL-divergence term kl_loss.

class VAE[source]

VAE(encoder:Module, decoder:Module, beta:float=1.0) :: Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Variables: training (bool): whether this module is in training or evaluation mode.
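The docstring above is inherited from nn.Module and says nothing about the VAE itself. As a hedged sketch of what VAE(encoder, decoder, beta) plausibly computes (assuming the encoder outputs a mean and a log-variance), the forward pass would reparameterize, decode, and return a beta-weighted KL term. The class name TinyVAE and the encoder contract are assumptions for illustration:

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    """Minimal VAE sketch: the encoder returns (mu, logvar); sample z with
    the reparameterization trick, decode, and return a beta-weighted KL."""
    def __init__(self, encoder, decoder, beta=1.0):
        super().__init__()
        self.encoder, self.decoder, self.beta = encoder, decoder, beta

    def forward(self, x):
        mu, logvar = self.encoder(x)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Closed-form KL(q(z|x) || N(0, I)), averaged over the batch
        kl = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        return self.decoder(z), self.beta * kl
```

The beta factor recovers the beta-VAE weighting of the KL term; beta=1.0 gives the standard VAE objective.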

Vector Quantized VAE

class VQPseudoGrad[source]

VQPseudoGrad(*args, **kwargs) :: Function

Records operation history and defines formulas for differentiating ops.

See the Note on extending the autograd engine for more details on how to use this class: https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd

Every operation performed on Tensors creates a new function object that performs the computation and records that it happened. The history is retained in the form of a DAG of functions, with edges denoting data dependencies (input <- output). Then, when backward is called, the graph is processed in topological order by calling the backward methods of each Function object and passing the returned gradients on to the next Functions.

Normally, the only way users interact with functions is by creating subclasses and defining new operations. This is a recommended way of extending torch.autograd.

Examples::

>>> class Exp(Function):
>>>
>>>     @staticmethod
>>>     def forward(ctx, i):
>>>         result = i.exp()
>>>         ctx.save_for_backward(result)
>>>         return result
>>>
>>>     @staticmethod
>>>     def backward(ctx, grad_output):
>>>         result, = ctx.saved_tensors
>>>         return grad_output * result
>>>
>>> # Use it by calling the apply method:
>>> output = Exp.apply(input)
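VQPseudoGrad is not documented here beyond the generic Function docstring. In VQ-VAE training, a custom Function at this spot typically implements the straight-through estimator: the forward pass returns the quantized tensor, while the backward pass copies the decoder's gradient past the non-differentiable codebook lookup onto the encoder output. A sketch under that assumption (the class name StraightThrough is illustrative, not the library's):

```python
import torch
from torch.autograd import Function

class StraightThrough(Function):
    """Forward returns the quantized tensor zq; backward passes the incoming
    gradient straight through to the encoder output ze, skipping the lookup."""
    @staticmethod
    def forward(ctx, ze, zq):
        return zq

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient flows to ze unchanged; zq (the codebook side) gets none here.
        return grad_output, None
```

The same effect is often written inline as `zq = ze + (zq - ze).detach()`.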

class VectorQuantizer[source]

VectorQuantizer(k:int, d:int, commitment_cost:float=0.25) :: Module

Quantizes each encoder output vector to the nearest of k learned d-dimensional codewords. The forward pass returns the quantized tensor, the VQ loss (the codebook term plus the commitment term weighted by commitment_cost), and the integer code indices.
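A self-contained sketch of what VectorQuantizer(k, d, commitment_cost) plausibly computes, following the standard VQ-VAE formulation (van den Oord et al., 2017); the class name TinyVQ and the flat (batch, d) input layout are assumptions for illustration:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyVQ(nn.Module):
    """k codewords of dimension d; forward returns (zq, loss, codes)."""
    def __init__(self, k, d, commitment_cost=0.25):
        super().__init__()
        self.embedding = nn.Parameter(torch.randn(k, d))
        self.commitment_cost = commitment_cost

    def forward(self, ze):                      # ze: (batch, d)
        dist = torch.cdist(ze, self.embedding)  # (batch, k) pairwise distances
        codes = dist.argmin(dim=1)              # nearest-codeword indices
        zq = self.embedding[codes]
        # Codebook loss pulls codewords toward the encoder outputs; the
        # commitment loss keeps the encoder close to its chosen codeword.
        loss = (F.mse_loss(zq, ze.detach())
                + self.commitment_cost * F.mse_loss(ze, zq.detach()))
        # Straight-through estimator: decoder gradients flow into ze.
        zq = ze + (zq - ze).detach()
        return zq, loss, codes
```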

from typing import Tuple

import torch
from torch import nn

class EMA(nn.Module):
    "Bias-corrected exponential moving average, as in Adam."

    def __init__(self, size:Tuple[int, ...], gamma:float):
        super().__init__()
        self.register_buffer("avg", torch.zeros(*size))
        self.gamma = gamma
        self.cor = 1.  # gamma**t, used for bias correction

    def update(self, val):
        self.cor *= self.gamma
        self.avg += (val - self.avg) * (1 - self.gamma)

    @property
    def value(self):
        # Debias the running average: avg / (1 - gamma**t)
        return self.avg / (1. - self.cor)

    def updated_value(self, val):
        self.update(val)
        return self.value
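EMA keeps a biased running average and debiases it by dividing by 1 - gamma**t (tracked in cor), the same correction Adam applies to its moment estimates. A plain-Python trace of the same arithmetic, showing that a constant signal debiases back to its true value from the very first step:

```python
gamma = 0.9
avg, cor = 0.0, 1.0
for v in [1.0, 1.0, 1.0]:        # a constant signal of 1.0
    cor *= gamma                 # cor = gamma**t
    avg += (v - avg) * (1 - gamma)
    debiased = avg / (1 - cor)   # bias-corrected estimate

# avg alone is still 0.271 after three steps, but debiased is 1.0
```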

class VectorQuantizerEMA[source]

VectorQuantizerEMA(k:int, d:int, commitment_cost:float=0.25, gamma=0.99, epsilon=1e-05) :: Module

Variant of VectorQuantizer that updates the codebook with exponential moving averages of the assigned encoder outputs (decay gamma) instead of a codebook loss term, using epsilon for Laplace smoothing of the cluster counts.
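VectorQuantizerEMA presumably maintains its codebook with the EMA update rule from Appendix A.1 of the VQ-VAE paper rather than a codebook loss. A hedged sketch of one such update step; the function name and the buffer layout (running cluster counts and coordinate sums) are assumptions, not this library's internals:

```python
import torch

def ema_codebook_update(embedding, cluster_size, embed_sum, ze, codes,
                        gamma=0.99, epsilon=1e-5):
    """One EMA codebook step: decay the running cluster counts and coordinate
    sums toward the current assignments, then re-normalize the codewords."""
    k = embedding.size(0)
    onehot = torch.nn.functional.one_hot(codes, k).type_as(ze)     # (batch, k)
    cluster_size.mul_(gamma).add_(onehot.sum(0), alpha=1 - gamma)  # counts
    embed_sum.mul_(gamma).add_(onehot.t() @ ze, alpha=1 - gamma)   # sums
    # Laplace-smooth the counts so empty codewords do not divide by zero.
    n = cluster_size.sum()
    smoothed = (cluster_size + epsilon) / (n + k * epsilon) * n
    embedding.copy_(embed_sum / smoothed.unsqueeze(1))
    return embedding
```

Because the codebook moves via these averages, only the commitment term needs gradients, which is why the EMA variant typically trains more stably at the same commitment_cost.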

class VQVAE(nn.Module):
    
    def __init__(self, encoder, decoder, k:int, d:int, commitment_cost:float=0.25, use_ema:bool=False):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.quantize = (VectorQuantizerEMA(k, d, commitment_cost) if use_ema else
                         VectorQuantizer(k, d, commitment_cost))

    def forward(self, x):
        ze = self.encoder(x)
        zq, vq_loss, code = self.quantize(ze)
        x_hat = self.decoder(zq)
        return x_hat, vq_loss, code

    @torch.no_grad()
    def encode(self, x):
        ze = self.encoder(x)
        _, _, code = self.quantize(ze)
        return code

    @torch.no_grad()
    def decode(self, code):
        zq = F.embedding(code, self.quantize.embedding)
        if zq.dim() == 4:
            zq = zq.permute(0,3,1,2).contiguous()
        return self.decoder(zq)
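The permute in decode() is there because F.embedding appends the embedding dimension last, producing (batch, H, W, d) from a grid of codes, while a convolutional decoder expects channels-first (batch, d, H, W). A minimal illustration with made-up shapes:

```python
import torch
import torch.nn.functional as F

codebook = torch.randn(8, 16)             # k=8 codewords of dimension d=16
codes = torch.randint(0, 8, (2, 4, 4))    # (batch, H, W) grid of code indices
zq = F.embedding(codes, codebook)         # (2, 4, 4, 16), channels last
zq = zq.permute(0, 3, 1, 2).contiguous()  # (2, 16, 4, 4) for a conv decoder
```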