[Re] Variational Sparse Coding

Published in ReScience C, 2019

The reproduced paper proposes an improvement over the Variational Autoencoder (VAE) architecture by explicitly modelling sparsity in the latent space with a Spike-and-Slab prior distribution, drawing on ideas from sparse coding theory. The main motivation behind the original work is the ability to infer truly sparse representations from generally intractable non-linear probabilistic models, while also addressing the lack of interpretability of latent features. Moreover, the proposed model improves classification accuracy using the low-dimensional representations obtained, and adds significant robustness to variations in the dimensionality of the latent space.
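The sparsity mechanism can be illustrated with a minimal sketch of Spike-and-Slab sampling for a VAE latent vector: each dimension is the product of a Bernoulli "spike" (which produces exact zeros) and a Gaussian "slab" drawn with the usual reparameterisation. This is an illustrative NumPy sketch, not the paper's actual implementation; the function name and shapes are assumptions.

```python
import numpy as np

def sample_spike_and_slab(mu, log_var, spike_prob, rng):
    """Draw one latent sample z from a Spike-and-Slab posterior (sketch).

    Each latent dimension multiplies a Gaussian 'slab' sample,
    drawn via the standard VAE reparameterisation, by a Bernoulli
    'spike' indicator, so inactive dimensions are exactly zero.
    Hypothetical helper for illustration only.
    """
    slab = mu + np.exp(0.5 * log_var) * rng.standard_normal(np.shape(mu))
    spike = (rng.random(np.shape(mu)) < spike_prob).astype(slab.dtype)
    return spike * slab

# Example: with a low spike probability, most dimensions are exactly zero.
rng = np.random.default_rng(0)
z = sample_spike_and_slab(np.zeros(8), np.zeros(8), spike_prob=0.2, rng=rng)
```

In a full model, `spike_prob` would itself be inferred by the encoder network per dimension; here it is a fixed scalar to keep the sketch self-contained.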

Download here

Feature selection algorithm recommendation for gene expression data through gradient boosting and neural network metamodels

Published in 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2018

Feature selection is an important step in gene expression data analysis. However, many feature selection methods exist, and costly experimentation is usually needed to determine the most suitable one for a given problem. This paper presents the application of gradient boosting and neural network techniques to the construction of metamodels that can recommend rankings of {feature selection - classification} algorithm pairs for new gene expression classification problems. Results on a corpus of 60 public data sets show that these techniques produce more useful rankings than classical metamodels.
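The recommendation idea can be sketched as follows: fit one gradient-boosting regressor per {feature selection - classification} pair that predicts the pair's performance from dataset meta-features, then sort the predictions for a new dataset to obtain a recommended ranking. This is a hypothetical sketch using scikit-learn, not the paper's actual pipeline; all names, shapes, and meta-features are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def recommend_ranking(meta_train, perf_train, meta_new):
    """Recommend a ranking of algorithm pairs for a new dataset (sketch).

    meta_train: (n_datasets, n_metafeatures) meta-features of known datasets
    perf_train: (n_datasets, n_pairs) observed performance of each pair
    meta_new:   (n_metafeatures,) meta-features of the new dataset
    Returns pair indices ordered from best to worst predicted performance.
    """
    scores = []
    for j in range(perf_train.shape[1]):
        # One metamodel per algorithm pair, trained across datasets.
        model = GradientBoostingRegressor(random_state=0)
        model.fit(meta_train, perf_train[:, j])
        scores.append(model.predict(meta_new.reshape(1, -1))[0])
    return np.argsort(scores)[::-1]

# Toy usage: 30 synthetic "datasets", 3 meta-features, 3 algorithm pairs
# whose performances differ by a constant offset (pair 0 is always best).
rng = np.random.default_rng(1)
meta = rng.random((30, 3))
perf = np.stack([meta[:, 0] + 1.0, meta[:, 0], meta[:, 0] - 1.0], axis=1)
ranking = recommend_ranking(meta, perf, rng.random(3))
```

In practice the rankings would be evaluated against the true performance ordering on held-out datasets, e.g. with a rank correlation measure.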

Download here