Autoencoder Architecture

Should I design my AE’s decoder to be symmetric to its encoder?

An AE’s encoder and decoder do not need to be structurally symmetric. However, symmetric architectures work well in practice, so they are usually preferred over asymmetric designs.

So the answer is: yes, no, maybe. It depends on how well your model performs, which ultimately comes down to experimentation.
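For instance, nothing stops you from pairing a deep encoder with a much shallower decoder. Here is a hypothetical PyTorch sketch (layer sizes are illustrative, not a recommendation):

```python
import torch.nn as nn

# A perfectly valid asymmetric design: the encoder is deeper and
# wider than the decoder. Whether it outperforms a mirrored design
# is an empirical question.
encoder = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 32),
)
decoder = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 784),
)
```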

How do you look at an AE and say: this part is the encoder, and this part is the decoder?

Usually, the layer in an AE with the smallest dimension is its latent. The encoder encodes inputs into latents, and the decoder decodes latents into outputs.

It helps to think in terms of information flowing through the network. Look at typical autoencoders: they consist of layers that scale the input down (the encoder) followed by layers that scale it back up (the decoder). Suppose no information is lost (the decoder reconstructs the input perfectly); then the intermediate vector with the lowest dimension, produced during the pass through the model, is the most densely packed with information, and therefore the natural choice of latent.
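As a minimal sketch (a hypothetical PyTorch model with illustrative layer sizes), the encoder/decoder split falls naturally at the smallest layer:

```python
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: everything that scales the input down to the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),  # smallest dimension = the latent
        )
        # Decoder: everything that scales the latent back up.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)     # layers before the bottleneck
        return self.decoder(z)  # layers after it
```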

Can we share a latent between different decoders (maybe of different architectures)?

Yes and no. It is feasible, but hardly anyone does it.

If you really want to do it, you can simply train a new decoder that maps the latent to an image in a supervised manner. But wait! The original decoder was already trained in exactly the same way. So why bother training a new model like that?
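That said, if you do want a second decoder, a minimal sketch might look like the following. Names here are hypothetical: `encoder` stands in for the pretrained encoder from the sketch above, and `dataloader` for a loader yielding batches of flattened 784-dim images.

```python
import torch
import torch.nn as nn

# Freeze the pretrained encoder so only the new decoder learns.
encoder.requires_grad_(False)

# A new decoder with a different architecture, mapping the same
# 32-dim latent back to a 784-dim image.
new_decoder = nn.Sequential(
    nn.Linear(32, 512), nn.ReLU(),
    nn.Linear(512, 784), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(new_decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for x in dataloader:
    with torch.no_grad():
        z = encoder(x)          # the shared latent
    x_hat = new_decoder(z)      # supervised target: the input itself
    loss = loss_fn(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```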