Automating experiments with neural information estimators
The idea that good experiments maximize information gain has a long history (e.g. Lindley, 1956). A key obstacle to using this insight to automate experiment design has been the difficulty of estimating information gain. In recent years, new methods have become available that cast information estimation as an optimization problem. Building on these ideas, we describe variational bounds on expected information gain. Deep neural networks and gradient-based optimization then allow these bounds to be optimized efficiently, even for complex data models. We apply these ideas to experiment design, adaptive surveys, and computerized adaptive testing.
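To make the idea concrete, here is a minimal sketch (not the talk's actual implementation) of one such variational bound: the prior contrastive estimation (PCE) lower bound on expected information gain, evaluated by Monte Carlo for a toy linear-Gaussian model where the exact EIG is known in closed form. The model, sample sizes, and design values are illustrative assumptions; a grid over designs stands in for the gradient-based optimization described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def analytic_eig(d, sigma=1.0):
    # Closed-form EIG for theta ~ N(0, 1), y = d * theta + N(0, sigma^2):
    # EIG(d) = 0.5 * log(1 + d^2 / sigma^2)
    return 0.5 * np.log(1.0 + d**2 / sigma**2)

def pce_lower_bound(d, sigma=1.0, n=2000, L=50):
    # PCE lower bound on EIG:
    #   E[ log p(y | theta_0, d) - log (1/(L+1)) sum_{l=0}^{L} p(y | theta_l, d) ]
    # where theta_0 generated y and theta_1..theta_L are fresh prior draws.
    theta0 = rng.normal(size=n)                      # "true" parameters
    y = d * theta0 + sigma * rng.normal(size=n)      # simulated outcomes
    contrast = rng.normal(size=(n, L))               # contrastive prior draws
    thetas = np.concatenate([theta0[:, None], contrast], axis=1)  # (n, L+1)
    # Unnormalized Gaussian log-likelihoods; the normalizing constant
    # cancels between numerator and denominator.
    logp = -0.5 * (y[:, None] - d * thetas) ** 2 / sigma**2
    log_num = logp[:, 0]
    # log-mean-exp over the L+1 terms, computed stably
    m = logp.max(axis=1, keepdims=True)
    log_denom = m[:, 0] + np.log(np.mean(np.exp(logp - m), axis=1))
    return np.mean(log_num - log_denom)

# Larger |d| yields a more informative experiment in this toy model,
# and the PCE estimate tracks the analytic EIG from below.
for d in [0.5, 1.0, 2.0]:
    print(f"d={d}: analytic={analytic_eig(d):.3f}, pce={pce_lower_bound(d):.3f}")
```

In a realistic setting the likelihood ratio inside the bound would be replaced or augmented by a learned critic network, and the design `d` would be updated by gradient ascent on the bound rather than enumerated.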
Hi Noah, nice talk. Two questions: 1) Do you have any results or ideas on whether the performance of the bounds generalizes to different problems for which you have tractable optimal benchmarks? 2) Could various bounds be pooled to create a better estimate? I guess this depends on whether they bracket the true optimum, which will depen...