Monday, May 5, 2008

An introduction to graphical models

This paper introduces the basic ideas and recent progress in graphical models. Graphical models have gained more and more attention across the computer science community in recent years, partly because they provide an intuitive way to encode our assumptions about an algorithm's behaviour. Another important advantage of graphical models is that a model that succeeds on one problem can often be transplanted to many other problems with only a few problem-specific modifications. This greatly saves researchers' effort.


Typically, graphical models consist of nodes, which represent the random variables in a system, and edges, which stand for probabilistic relationships or causality between nodes. Depending on the nature of the problem, the graph structure may contain loops, which leads to fundamental differences in how the model is operated on. Inference and learning are the two most common operations on graphical models: inference solves problems by reasoning through the model, while learning optimizes the model given observations. Exact inference on loopy graphs is NP-hard, so approximate inference methods are essential in many applications.
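To make the node/edge picture concrete, here is a minimal sketch (not from the original text) of exact inference on a tiny loop-free directed model, Cloudy -> Rain -> WetGrass; the variable names and probability tables are made-up assumptions for illustration only.

# A tiny directed graphical model: Cloudy -> Rain -> WetGrass.
# All conditional probability tables below are illustrative assumptions.
# Inference here is exact enumeration of the joint, feasible only because
# the graph is small and contains no loops.

from itertools import product

P_cloudy = {True: 0.5, False: 0.5}
P_rain_given_cloudy = {True: {True: 0.8, False: 0.2},    # P(Rain | Cloudy)
                       False: {True: 0.1, False: 0.9}}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},       # P(WetGrass | Rain)
                    False: {True: 0.2, False: 0.8}}

def joint(cloudy, rain, wet):
    # The joint distribution factorizes along the edges of the graph.
    return (P_cloudy[cloudy]
            * P_rain_given_cloudy[cloudy][rain]
            * P_wet_given_rain[rain][wet])

# Inference: P(Rain = True | WetGrass = True), summing out the hidden Cloudy.
numerator = sum(joint(c, True, True) for c in (True, False))
evidence = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
print("P(Rain | WetGrass) =", numerator / evidence)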


Famous algorithms and model instances related to graphical models include MRFs, HMMs, EM, MCMC, etc. In fact, a brief survey of many fields in computer science would show that most of them are flooded with graphical models. For example, recent progress in computer vision is heavily based on the success of Bayesian inference and MRFs. I am optimistic that graphical models will become even more widespread in the future.
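As an illustration of one of the instances named above, the following sketch implements the standard HMM forward recursion on a hypothetical two-state model; the transition and emission numbers are invented purely for the example.

# Forward algorithm for a two-state HMM; all probabilities are made up.
import numpy as np

start = np.array([0.6, 0.4])        # P(initial hidden state)
trans = np.array([[0.7, 0.3],       # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],        # P(observation symbol | hidden state)
                 [0.2, 0.8]])

def forward(observations):
    # Returns P(observations) by summing over all hidden state paths.
    alpha = start * emit[:, observations[0]]     # absorb the first symbol
    for obs in observations[1:]:
        # propagate the message along the chain, then absorb new evidence
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

print("Likelihood of the sequence [0, 1, 0]:", forward([0, 1, 0]))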
