I went to Portland, Oregon for work. My first thought: when someone sends you somewhere for work, you’re going to do a lot of work. This was probably the hardest I worked all summer. That sounds great at first, but the topic of study wasn’t exactly aligned with my initial interests. I suppose I pursued this opportunity because I wanted to become more employable. I think it worked, and I am more employable now, but I’m more interested in abstract mathematics. Once you start applying math to other things, it not only becomes less interesting to me, but I think it becomes more difficult.
So the topic of this work trip was automated Bayesian inference and probabilistic programming languages. Recently there has been a wave of interest in neural networks and machine learning and whatnot. Bayesian inference and machine learning are quite intimately related, but one can interpret the two topics in a way that highlights an important difference: machine learning tries to approximate a function based on its behavior on a training set of data, while inference tries to recover the population distribution from which that training data came. Given a particular statistical problem, perhaps classifying objects, one can use either method to solve it. But I feel the probabilistic programming approach using Bayesian inference isn’t as popular, and that shouldn’t be the case.
In a basic sense, if you can think inferentially about a problem, you can solve it using languages like Anglican or WebPPL (I prefer Anglican). Both are probabilistic programming languages, and they automate many of the statistical computations, making inference problems easier to solve. If you can reshape your problem so that it varies over smoothly varying parameters (vectors, perhaps), then you can obtain a solution conditioned on your initial observations. Of course, that is super simplified.
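To make the inference idea concrete without installing Anglican or WebPPL, here is a minimal sketch of the same workflow in plain Python: sample a parameter from a prior, simulate data, and condition on the observation. This rejection-sampling loop is what probabilistic languages automate (with far better algorithms); the coin-flip problem and all names here are illustrative, not taken from either language.

```python
import random

def infer_coin_bias(observed_heads, n_flips, n_samples=100_000):
    """Rejection sampling: draw a bias from a uniform prior, simulate
    n_flips tosses, and keep the bias only when the simulated number
    of heads matches the observation. The surviving samples are draws
    from the posterior over the bias."""
    accepted = []
    for _ in range(n_samples):
        bias = random.random()  # uniform prior on [0, 1]
        heads = sum(random.random() < bias for _ in range(n_flips))
        if heads == observed_heads:  # condition on the observed data
            accepted.append(bias)
    # Posterior mean estimate from the accepted samples.
    return sum(accepted) / len(accepted)

# Observing 8 heads in 10 flips should pull the estimate well above 0.5.
print(infer_coin_bias(8, 10))
```

In Anglican or WebPPL you would state the same model declaratively (a prior, a likelihood, an `observe`) and the runtime would pick a sampler for you, which is exactly the “automated” part of automated Bayesian inference.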
Check out Anglican: http://www.robots.ox.ac.uk/~fwood/anglican/
Or WebPPL: http://webppl.org/