David N. Jansen. Understanding Fox and Glynn's "Computing Poisson probabilities". Technical Report ICIS-R11001, Radboud University Nijmegen, February 2011.
This article gives a consistent presentation of "Computing Poisson Probabilities" by Fox and Glynn and its proofs (with a few additions of my own). It makes the Finder algorithm, which is only sketched in that publication, explicit.
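The core idea behind Fox and Glynn's method can be illustrated with a minimal sketch (my own simplification, not the report's Finder/Weighter pseudocode): anchor an unnormalized weight at the mode of the Poisson distribution, extend the window left and right by the pmf recurrences until the weights fall below a tolerance, and normalize at the end. Working with unnormalized weights avoids the underflow of e^(-lambda) for large rates.

```python
def poisson_window(lam, tol=1e-12):
    """Fox-Glynn-style sketch: Poisson(lam) probabilities on a finite
    window [left, right] around the mode, computed via the pmf
    recurrences and normalized only at the end."""
    m = int(lam)              # mode of the Poisson distribution
    w = {m: 1.0}              # unnormalized weight anchored at the mode
    # extend leftward using w[k-1] = w[k] * k / lam
    k = m
    while k > 0 and w[k] * k / lam > tol:
        w[k - 1] = w[k] * k / lam
        k -= 1
    left = k
    # extend rightward using w[k+1] = w[k] * lam / (k + 1)
    k = m
    while w[k] * lam / (k + 1) > tol:
        w[k + 1] = w[k] * lam / (k + 1)
        k += 1
    right = k
    total = sum(w.values())   # normalizing constant
    probs = {k: w[k] / total for k in range(left, right + 1)}
    return left, right, probs
```

For example, `poisson_window(2.0)` returns a window starting at 0 whose normalized weight at k=2 agrees with the exact pmf value 2e^(-2) up to the truncation tolerance.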
J. Kwisthout. The Computational Complexity of Probabilistic Inference. Technical Report ICIS-R11003, Radboud University Nijmegen, April 2011.
The Inference problem in probabilistic or Bayesian networks (given a probabilistic network and a joint value assignment to a subset of its variables, compute the posterior probability of that assignment) is arguably the canonical computational problem related to such networks. Since Cooper proved it NP-hard in 1990, a number of complexity results have been obtained for the inference problem. For example, the question whether the posterior probability is non-zero is NP-complete, and whether it is larger than a threshold q is PP-complete. Roth established #P-completeness for the functional variant (i.e., where the actual probability is computed). These results are based on different reductions and often lack details (e.g., full membership proofs) due to space constraints. In this paper we discuss a unifying construction that reduces each of these problems from the corresponding Satisfiability variant and give membership proofs in full detail, allowing the reader to get a good understanding of these results and of some important aspects of Probabilistic Turing Machines, which are the building blocks of these results. In addition, we show that the threshold variant of probabilistic inference, while fixed-parameter tractable for bounded treewidth, remains para-PP-complete even when the threshold is arbitrarily close to 1.
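To make the problem variants concrete, here is a toy sketch (the network and its conditional probability tables are my own hypothetical example, not taken from the report) of the functional variant, computed by brute-force enumeration, and of the threshold decision variant on top of it:

```python
# Hypothetical two-node Bayesian network A -> B over binary variables:
# P(A=1) = 0.3, P(B=1 | A=1) = 0.9, P(B=1 | A=0) = 0.2.
p_a = {1: 0.3, 0: 0.7}
p_b_given_a = {(1, 1): 0.9, (0, 1): 0.1,   # keys are (b, a)
               (1, 0): 0.2, (0, 0): 0.8}

def joint(a, b):
    """Joint probability P(A=a, B=b) via the chain rule."""
    return p_a[a] * p_b_given_a[(b, a)]

def posterior(b_val, a_evidence):
    """Functional variant (#P-complete in general): compute
    P(B=b | A=a) by enumerating the joint distribution."""
    num = joint(a_evidence, b_val)
    den = sum(joint(a_evidence, b) for b in (0, 1))
    return num / den

def exceeds_threshold(b_val, a_evidence, q):
    """Threshold decision variant (PP-complete in general):
    is the posterior probability larger than q?"""
    return posterior(b_val, a_evidence) > q
```

Enumeration is exponential in the number of unobserved variables, which is exactly why the complexity-theoretic results above matter; for bounded treewidth the threshold variant becomes fixed-parameter tractable, as the abstract notes.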
J. Kwisthout. A Formal Theory of Relevancy in Problem Solving. Technical Report ICIS-R11005, Radboud University Nijmegen, December 2011.
When computer scientists discuss the computational complexity of, e.g., finding the shortest path from building A to building B in some town or city, their starting point is typically a formal description of the problem at hand, e.g., a graph with a weight on every edge, where buildings correspond to vertices, routes between buildings to edges, and route distances to edge weights. Given such a formal description, either tractability or intractability of the problem is established, by proving that the problem enjoys a polynomial-time algorithm or is NP-hard, respectively. However, this problem description is in fact an abstraction of the actual problem of being in A and desiring to go to B: it focuses on the relevant aspects of the problem (e.g., distances between landmarks and crossings) and leaves out a lot of irrelevant details. This abstraction step is often overlooked, but may well contribute to the overall complexity of solving the problem at hand. For example, "going from A to B" appears to be rather easy to abstract: it is fairly clear that the distance between A and the next crossing is relevant, and that the color of the roof of B typically is not. However, when the problem to be solved is "make X love me", where the current state is (assumed to be) "X doesn't love me", it is hard to agree on all the relevant aspects of this problem. In this paper a framework is presented to capture the notion of relevance in finding a suitable problem representation. It is shown that finding a minimal relevant subset of all features that might or might not be relevant to the problem is itself intractable in general. The paper aims to contribute a formal notion of "relevancy" in problem solving, in order to separate "easy to abstract" from "hard to abstract" problems and to discuss individual differences in the abstraction task, e.g., when experts in a particular domain are compared with novice problem solvers.
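The intractability claim can be illustrated with a naive sketch (my own illustration, not the report's formalism): a brute-force search for a smallest feature subset that still suffices to represent the problem. The sufficiency oracle `is_sufficient` is a hypothetical stand-in for whatever formal criterion the framework supplies; the point is that the search space has 2^n candidate subsets.

```python
from itertools import combinations

def minimal_relevant_subset(features, is_sufficient):
    """Return a smallest subset of `features` that still suffices to
    solve the abstracted problem, by exhaustive search in order of
    increasing size. The nested loops range over all 2^n subsets,
    which is why the task is intractable in general."""
    for size in range(len(features) + 1):
        for subset in combinations(features, size):
            if is_sufficient(set(subset)):
                return set(subset)
    return None  # no subset suffices

# Usage with the shortest-path example from the abstract: distance and
# crossing information are relevant, the roof color is not.
features = ["distance", "crossing", "roof_color"]
relevant = minimal_relevant_subset(
    features, lambda s: {"distance", "crossing"} <= s)
```

Here the search correctly discards `roof_color`; with an adversarial sufficiency criterion, however, no subexponential shortcut exists in general, matching the paper's hardness result.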