

Backtracking proposals

As noted in [Angelopoulos & Cussens, 2001], a query over an SLP defines a stochastic SLD-tree. Each leaf of the tree corresponds to a model, and the MCMC can then be viewed as a walk between leaves of the tree. The method by which a proposed model is reached from the current model defines a proposal distribution. A wide variety of proposals are supported, provided that some weak conditions are met. In the MCMC used here we have experimented with a number of proposal distributions. Because all proposals over the SLD-tree are concerned with finding a backtrack point on the path of the current model, and then sampling forward from that point, we call these backtracking proposals. To make the process clearer, consider the general scenario in Fig. 21.

Figure 21: Reaching \ensuremath{{M_*}} from \ensuremath{{M_i}} in the SLD-tree.
\includegraphics[width=0.4\textwidth, clip]{jump.eps}
Given that the process is at \ensuremath{{M_i}}, a proposal \ensuremath{{M_*}} needs to be reached. This is done by choosing one of the backtrack points ($B_j$) on the path from the root of the tree to \ensuremath{{M_i}}. Once such a point ($B_j$) has been chosen, \ensuremath{{M_*}} is reached by sampling forward according to the prior. In this section we detail the different backtracking proposals available in MCMCMS.
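To make the mechanism concrete, the following is a minimal, self-contained Prolog sketch under toy assumptions: models are paths of binary choices in trees of fixed depth 5, sample_forward/2 stands in for sampling from the prior, and all predicate names are illustrative rather than part of MCMCMS. The sketch also omits the exclusion, on the first forward choice, of the branch leading back to \ensuremath{{M_i}}.

:- use_module(library(random)).

% sample_forward(+Prefix, -Path): extend a partial path with fair coin
% flips up to the toy depth of 5, standing in for prior sampling.
sample_forward(Prefix, Path) :-
    length(Prefix, K),
    Missing is 5 - K,
    length(Suffix, Missing),
    maplist(random_between(0, 1), Suffix),
    append(Prefix, Suffix, Path).

% propose(+J, +Path, -NewPath): backtrack J levels from the current leaf,
% then resample forward from the chosen backtrack point B_J.
propose(J, Path, NewPath) :-
    length(Path, Len),
    K is max(0, Len - J),              % keep the first Len-J choices
    length(Prefix, K),
    append(Prefix, _, Path),
    sample_forward(Prefix, NewPath).

For example, the query ?- sample_forward([], M0), propose(2, M0, M1). draws an initial model and then a proposal that shares its first three choices.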

In [Angelopoulos & Cussens, 2001] a cyclic probability (CP) backtracking strategy was followed. The algorithm is parametrised by the expected depth $Depth$ and is formulated as follows:

1. Let $j := 1$.
2. Let $p_b = 1 - 2^{-j}$.
3. Backtrack one step.
4. If at the top of the tree then go to step 5; otherwise, with probability $p_b$, backtrack one more point and repeat step 4, or, with probability $1 - p_b$, go to step 5.
5. Sample a new leaf $M_*$ by selecting branches with probability proportional to clause labels. On the first choice, do not descend to the branch leading to $M_i$ (branch $C_i$ in Fig. 21).
6. Let $k := j + 1$. If $k > Depth$ then $j := 1$ else $j := k$.
7. Unless the iterations limit is reached, go to step 2.

Note that, since each further backtracking step is taken with probability $p_b = 1 - 2^{-j}$, the number of steps taken is geometrically distributed with mean $1/(1 - p_b) = 2^j$ (ignoring truncation at the top of the tree).
A simpler version of CP uses a constant, non-cyclic $p_b$; in the software this strategy is identified by $bp(Bp)$.
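As a concrete illustration, the stochastic backtracking step of CP can be sketched in Prolog as follows. Here cp_steps/3 computes how many levels to backtrack given the cycle position $j$ and the number Len of choice points above the current leaf; the predicate names are illustrative and not MCMCMS's API.

:- use_module(library(random)).

% cp_steps(+J, +Len, -N): one backtracking step is always taken; further
% steps are taken with probability p_b = 1 - 2^(-J), stopping early when
% the top of the tree is reached.
cp_steps(J, Len, N) :-
    Pb is 1 - 2.0 ** (-J),
    cp_steps_(1, Pb, Len, N).

cp_steps_(N, _Pb, Len, N) :- N >= Len, !.         % reached the top of the tree
cp_steps_(N0, Pb, Len, N) :-
    random(R),                                    % R uniform in [0.0, 1.0)
    (   R < Pb
    ->  N1 is N0 + 1, cp_steps_(N1, Pb, Len, N)   % backtrack one more point
    ;   N = N0                                    % stop and sample forward
    ).

% next_j(+J0, +Depth, -J): advance the cycle position, wrapping at Depth.
next_j(J0, Depth, J) :- ( J0 >= Depth -> J = 1 ; J is J0 + 1 ).

The constant-probability variant $bp(Bp)$ corresponds to calling cp_steps_/4 directly with Pb bound to $Bp$, skipping the cycling of $j$.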

One of the shortcomings of CP is that it is dependent on \ensuremath{{M_i}}. For deterministic strategies, that is, strategies that do not depend on \ensuremath{{M_i}}, the calculation of the acceptance probability $\alpha$ can be simplified. We have therefore also implemented a cyclic deterministic (CD) strategy. It is again parametrised by a single number, the expected distance $Distance$. The algorithm is much simpler: $j$ cycles from $1$ to $Distance$, increasing by one at each MCMC iteration, and each proposal backtracks to $B_j$, the choice point $j$ levels up from the leaf \ensuremath{{M_i}}.
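For reference, and assuming the standard Metropolis-Hastings form of the acceptance probability,

\begin{displaymath}
\alpha(\ensuremath{{M_i}} \rightarrow \ensuremath{{M_*}}) = \min\left(1,\; \frac{\pi(\ensuremath{{M_*}})\, q(\ensuremath{{M_i}} \mid \ensuremath{{M_*}})}{\pi(\ensuremath{{M_i}})\, q(\ensuremath{{M_*}} \mid \ensuremath{{M_i}})}\right)
\end{displaymath}

where $\pi$ is the target distribution and $q$ the proposal distribution. Roughly speaking, with a deterministic strategy the choice of backtrack point contributes a factor of one to $q$, leaving only prior-driven forward-sampling probabilities in the ratio.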

A third scheme (UC, for uniform choice) picks uniformly at random among all the choice points between \ensuremath{{M_i}} and the root.
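A one-line sketch, with uc_steps/2 an illustrative name and Len the number of choice points above the current leaf:

:- use_module(library(random)).

% uc_steps(+Len, -N): choose a backtrack point uniformly from B_1..B_Len.
uc_steps(Len, N) :- random_between(1, Len, N).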

Each of the backtracking proposals can be chosen by selecting the appropriate term for the second argument of run_data/11 (see Appendix B), which controls the corresponding experiment: $bp(Bp)$ is the constant-probability backtracking proposal with backtracking probability equal to $Bp$; $cp(D)$ is the cyclic probabilistic backtracking with $Depth = D$; $cd(D)$ is the cyclic deterministic strategy with $Distance = D$; and $uc$ is the uniform choice strategy. We have tried to make the backtracking proposals as modular as possible; if you wish to implement a different proposal, see the file src/bp.pl.
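Purely as an illustration of how these option terms map onto the strategies sketched above (the actual dispatch lives in src/bp.pl and may well differ), one could write:

% steps_for(+Proposal, +J, +Len, -N): number of levels to backtrack for
% each option term. Reuses cp_steps/3 and cp_steps_/4 from the CP sketch
% and random_between/3 from library(random); illustrative only.
steps_for(bp(Pb), _J, Len, N)    :- cp_steps_(1, Pb, Len, N).   % constant p_b
steps_for(cp(_Depth), J, Len, N) :- cp_steps(J, Len, N).        % cyclic p_b
steps_for(cd(_Distance), J, _Len, J).                           % deterministic
steps_for(uc, _J, Len, N)        :- random_between(1, Len, N).  % uniform choice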

