2013 Academic Year Seminars
Speaker(s): Magnus Rattray
Learning sparse structure is useful in many applications. For example, gene regulatory networks are sparsely connected, since each gene is typically regulated by only a small number of other genes. Our group has been applying Bayesian factor analysis models with sparse loading matrices to uncover regulatory networks from gene expression data. The Bayesian approach to sparse inference uses sparsity-inducing priors on the parameters of a probabilistic model. In this talk I will discuss some results from a statistical mechanics theory used to examine the performance of sparsity priors, such as mixture priors (also known as two-group or spike-and-slab priors) and L1 priors, by calculating learning curves for Bayesian PCA in the limit of large data dimension. This allows me to address a number of questions, e.g. how well can we estimate the degree of sparsity using the marginal likelihood when the sparsity prior is not well matched to the data-generating process? I will show that this kind of model mismatch can lead to under-estimation of the degree of sparsity and to sub-optimal predictions. Current work is focussed on fixing these broken models.
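The key distinction between the two prior families mentioned in the abstract can be illustrated with a small sketch (the parameter values below are purely illustrative, not from the talk): a spike-and-slab prior places positive probability mass on exactly zero loadings, while a Laplace prior (the Bayesian analogue of an L1 penalty) only shrinks loadings toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000   # number of entries in a loading vector (illustrative)
pi = 0.1   # slab probability, i.e. expected fraction of nonzero loadings (illustrative)

# Spike-and-slab (two-group) prior: each entry is exactly zero with
# probability 1 - pi ("spike"), otherwise drawn from a Gaussian "slab".
slab = rng.normal(0.0, 1.0, size=d)
mask = rng.random(d) < pi
spike_slab = np.where(mask, slab, 0.0)

# Laplace prior: entries are shrunk toward zero, but a draw from this
# continuous density is nonzero with probability one.
laplace = rng.laplace(0.0, 1.0, size=d)

print("spike-and-slab exact zeros:", int(np.sum(spike_slab == 0.0)))
print("Laplace exact zeros:       ", int(np.sum(laplace == 0.0)))
```

A spike-and-slab draw contains many exact zeros (roughly a fraction 1 - pi of the entries), whereas the Laplace draw contains none; this is why mixture priors induce genuinely sparse loading matrices while L1-type priors only encourage small values.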