IRIS Research Publications


Journal

P. van Bommel, and Th.P. van der Weide. Measuring the incremental information value of documents. In: Information Sciences, Nr: 2, Vol: 176, Pages: 91-119, 2006, Nijmegen Institute for Information and Computing Sciences, University of Nijmegen Technical Report NIII-R0422.

The incremental searcher satisfaction model for Information Retrieval has been introduced to capture the incremental information value of documents. In this paper, from various cognitive perspectives, searcher requirements are derived in terms of the increment function. Different approaches for the construction of increment functions are identified, such as the individual and the collective approach. Translating the requirements to similarity functions leads to the so-called base similarity features and the monotonicity similarity features. We show that most concrete similarity functions in IR, such as the Inclusion, Jaccard's, Dice's, and Cosine coefficients, and some other approaches to similarity functions, possess the base similarity features. The Inclusion coefficient also satisfies the monotonicity features.
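
For readers unfamiliar with the coefficients named above, a minimal sketch of set-based variants in Python (the paper's exact definitions may differ; in particular, several variants of the Inclusion coefficient circulate in the IR literature, so the one below is only one common reading):

    def jaccard(a, b):
        """Jaccard coefficient: |A n B| / |A u B|."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def dice(a, b):
        """Dice coefficient: 2|A n B| / (|A| + |B|)."""
        return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

    def cosine(a, b):
        """Cosine coefficient for sets: |A n B| / sqrt(|A| * |B|)."""
        return len(a & b) / (len(a) * len(b)) ** 0.5 if a and b else 0.0

    def inclusion(a, b):
        """One common Inclusion variant: |A n B| / |A|, the degree to which A is contained in B."""
        return len(a & b) / len(a) if a else 0.0

    # Hypothetical example: index-term sets of a query and a document.
    q = {"information", "retrieval", "value"}
    d = {"information", "retrieval", "value", "document", "model"}
    print(jaccard(q, d), dice(q, d), cosine(q, d), inclusion(q, d))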

[ PDF ] [ Bibtex ]

P.J.M. Frederiks, and Th.P. van der Weide. Information Modeling: the process and the required competencies of its participants. In: Data & Knowledge Engineering, Nr: 1, Vol: 58, Pages: 4-20, July, 2006, Best paper award in NLDB 2004 conference.

In recent literature it is commonly agreed that the first phase of the software development process is still an area of concern. Furthermore, while software technology has changed and improved rapidly, the way of working and of managing this process has lagged behind. In this paper the focus is on the process of information modeling, its quality, and the required competencies of its participants (domain experts and system analysts). The competencies are discussed and motivated assuming natural language is the main communication vehicle between domain expert and system analyst. As a result, these competencies provide the key to the effectiveness of the associated process of information modeling.

[ PDF ] [ Bibtex ] [ External URL ]

F.A. Grootjen, and Th.P. van der Weide. Conceptual Query Expansion. In: Data and Knowledge Engineering, Nr: 2, Vol: 56, Pages: 174-193, February, 2006, Institute for Information and Computing Sciences, Radboud University Nijmegen.

[ PDF ] [ Bibtex ]

A. ten Teije, M. Marcos, M. Balser, J. van Croonenborg, C. Duellic, F. van Harmelen, P.J.F. Lucas, S. Miksch, W. Reif, K. Rosenbrand, and A. Seyfang. Improving medical protocols by formal methods. In: Artificial Intelligence in Medicine, Nr: 3, Vol: 63, Pages: 193-209, 2006.

Objectives: During the last decade, evidence-based medicine has given rise to an increasing number of medical practice guidelines and protocols. However, the work done on developing and distributing protocols outweighs the effort on guaranteeing their quality. Indeed, anomalies like ambiguity and incompleteness are frequent in medical protocols. Recent efforts have tried to address the problem of protocol improvement, but they are not sufficient since they rely on informal processes and notations. Our objective is to improve the quality of medical protocols. Approach: The solution we suggest to the problem of quality improvement of protocols consists in the utilisation of formal methods. It requires the definition of an adequate protocol representation language, the development of techniques for the formal analysis of protocols described in that language and, more importantly, the evaluation of the feasibility of the approach based on the formalisation and verification of real-life protocols.

[ Missing PDF ] [ Bibtex ]

R.H. Bisseling, and Ildiko Flesch. Mondriaan sparse matrix partitioning for attacking cryptosystems - a case study. In: Journal of Parallel Computing, Nr: 7, Vol: 32, Pages: 551-567, 2006.

A case study is presented demonstrating the application of the Mondriaan package for sparse matrix partitioning to the field of cryptology. An important step in an integer factorisation attack on the RSA public-key cryptosystem is the solution of a large sparse linear system with 0/1 coefficients, which can be done by the block Lanczos algorithm proposed by Montgomery. We parallelise this algorithm using Mondriaan partitioning and discuss the high-level components needed. A speedup of 8 is obtained on 16 processors of a Silicon Graphics Origin 3800 for the factorisation of an integer with 82 decimal digits, and a speedup of 7 for 98 decimal digits.

[ PDF ] [ Bibtex ]

A.J.J. van Breemen, and J.J. Sarbo. Surviving in the Bermuda Triangle of Semiosis. In: IJCAS, Vol: 18, Pages: 337-346, 2006.

What we think is part of reality and is, at the same time, at least partly determined by reality. The advent of knowledge engineering asks for a shift from lifeless representational and blind reductionist models towards a relational and teleological interpretation of cognition in order to embed the cognitive events in processes of meaning production or semeiosis. Such embedding is determined by the properties of perception (the senses) and the types of distinctions that can be made by semeiosis. The selection of elements in such processes that are formalizable asks for a model in which the phases that make up the process, the decision moments, and their degrees of freedom are clearly indicated. In this paper we will outline such a model for two levels: the level of sign recognition and the level of response to a sign. The decision moments will only be indicated. The practical importance of this structure lies in its potential to be interpreted as a methodology for (formal) specification.

[ Missing PDF ] [ Bibtex ]

T. Heskes. Convexity Arguments for Efficient Minimization of the Bethe and Kikuchi Free Energies. In: Journal of Artificial Intelligence Research, Vol: 26, Pages: 153-190, 2006.

Loopy and generalized belief propagation are popular algorithms for approximate inference in Markov random fields and Bayesian networks. Fixed points of these algorithms have been shown to correspond to extrema of the Bethe and Kikuchi free energy, both of which are approximations of the exact Helmholtz free energy. However, belief propagation does not always converge, which motivates approaches that explicitly minimize the Kikuchi/Bethe free energy, such as CCCP and UPS. Here we describe a class of algorithms that solves this typically non-convex constrained minimization problem through a sequence of convex constrained minimizations of upper bounds on the Kikuchi free energy. Intuitively one would expect tighter bounds to lead to faster algorithms, which is indeed convincingly demonstrated in our simulations. Several ideas are applied to obtain tight convex bounds that yield dramatic speed-ups over CCCP.
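
As background for readers, a minimal sketch of standard loopy belief propagation on a pairwise Markov random field with synchronous, normalised message updates. This is the baseline fixed-point iteration whose convergence problems motivate the paper; it is not the convex-bound minimization the paper proposes, and the potentials and numbers below are illustrative assumptions:

    import numpy as np

    def loopy_bp(node_pot, edge_pot, edges, n_iter=50):
        """Synchronous loopy belief propagation on a pairwise MRF.
        node_pot: dict node -> 1-D numpy array of node potentials
        edge_pot: dict (i, j) -> 2-D array indexed [x_i, x_j]
        edges:    list of undirected edges (i, j)
        Returns approximate single-node marginals (beliefs)."""
        nbrs = {v: [] for v in node_pot}
        msgs = {}
        for (i, j) in edges:
            nbrs[i].append(j)
            nbrs[j].append(i)
            msgs[(i, j)] = np.ones(len(node_pot[j]))
            msgs[(j, i)] = np.ones(len(node_pot[i]))
        for _ in range(n_iter):
            new = {}
            for (i, j) in msgs:
                pot = edge_pot[(i, j)] if (i, j) in edge_pot else edge_pot[(j, i)].T
                incoming = node_pot[i].copy()
                for k in nbrs[i]:
                    if k != j:
                        incoming *= msgs[(k, i)]
                m = pot.T @ incoming            # sum over x_i
                new[(i, j)] = m / m.sum()       # normalise for numerical stability
            msgs = new
        beliefs = {}
        for v in node_pot:
            b = node_pot[v].copy()
            for k in nbrs[v]:
                b *= msgs[(k, v)]
            beliefs[v] = b / b.sum()
        return beliefs

    # Hypothetical 3-cycle with binary variables and attractive couplings.
    phi = np.array([[1.2, 0.8], [0.8, 1.2]])
    node_pot = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5]), 2: np.array([0.5, 0.5])}
    edge_pot = {(0, 1): phi, (1, 2): phi, (0, 2): phi}
    print(loopy_bp(node_pot, edge_pot, [(0, 1), (1, 2), (0, 2)]))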

[ PDF ] [ Bibtex ] [ External URL ]

O. Zoeter, and T. Heskes. Deterministic approximate inference techniques for conditionally Gaussian state space models. In: Statistics and Computing, Nr: 3, Vol: 16, Pages: 279-292, 2006.

We describe a novel deterministic approximate inference technique for conditionally Gaussian state space models, i.e. state space models where the latent state consists of both multinomial and Gaussian distributed variables. The method can be interpreted as a smoothing pass and iteration scheme symmetric to an assumed density filter. It improves upon previously proposed smoothing passes by not making more approximations than implied by the projection on the chosen parametric form, the assumed density. Experimental results show that the novel scheme outperforms these alternative smoothing passes. Comparisons with sampling methods suggest that the performance does not degrade with longer sequences.

[ PDF ] [ Bibtex ] [ External URL ]

Chapter

S.J. Overbeek, D.B.B. Rijsenbrij, and H.A. (Erik) Proper. Sophia: Towards a Personal Digital Workspace for Knowledge Workers. In: Knowledge Workers: Issues and Perspectives, Chapter 15, Pages: 246-265, December, 2006, ISBN 813140479X.

Present day knowledge workers interact with a digital world which is full of digital services intended to support these workers in their knowledge intensive tasks. Digital services include the use of applications in general, tools that support knowledge generation, or knowledge transfer, but may also support the proliferation of knowledge in order to improve organizational decision making and value addition. However, contemporary digital services are often not user-friendly, and are impersonal and ambiguous in use. Therefore, the notion of Sophia is presented: a reference model of and a development framework for a personal digital workspace for knowledge workers, aiming to integrate and personalize all digital services, digital information items, and digital knowledge items, so that an individual knowledge worker can carry out his work-related activities pleasantly, effectively, and efficiently in every context. We further argue that digital architecture plays an important role when realizing a development methodology.

[ PDF ] [ Bibtex ]

J.J. Sarbo, J.I. Farkas, and A.J.J. van Breemen. Natural Grammar. Edited by: R. Gudwin, and J. Queiroz. Chapter 4, Pages: 152-175, October, 2006, ISBN 978-1599040646.

By taking as a starting point for our research the function of language to generate meaning, we endeavor in this chapter to derive a grammar of natural language from the more general Peircean theory of cognition. After a short analysis of cognitive activity, we introduce a model for sign (re)cognition and analyze it from a logical and semiotic perspective. Next, the model is instantiated for language signs from a syntactical point of view. The proposed representation is called natural insofar as it respects the steps the brain/mind is going through when it is engaged in cognitive processing. A promise of this approach lies in its potential for generating information by the computer, which the human user may directly recognize as knowledge in a natural, hence economic way.

[ Missing PDF ] [ Bibtex ]

Conference

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. Exploring Modelling Strategies in a Meta-modelling Context. Edited by: R. Meersman, Z. Tari, and P. Herrero. Pages: 1128-1137, October/November, Springer, 2006.

We are concerned with a core aspect of the processes of obtaining conceptual models. We view such processes as information gathering dialogues, in which strategies may be followed (possibly, imposed) in order to achieve certain modelling goals. Many goals and strategies for modelling can be distinguished, but the current discussion concerns meta-model driven strategies, aiming to fulfil modelling goals or obligations that are the direct result of meta-model choices (i.e. the chosen modelling language). We provide a rule-based conceptual framework for capturing strategies for modelling, and give examples based on a simplified version of the Object Role Modelling (ORM) meta-model. We discuss strategy rules directly related to the meta-model, and additional procedural rules. We indicate how the strategies may be used to dynamically set a modelling agenda. Finally, we describe a generic conceptual structure for a strategy catalog.

[ PDF ] [ Bibtex ]

H.A. (Erik) Proper, and Th.P. van der Weide. Modelling as Selection of Interpretation. In: Modellierung 2006, Innsbruck, Austria, EU, Lecture Notes in Informatics, Vol: P82, Pages: 223-232, March, 2006, ISBN 3885791765.

This paper reports on a research effort to better understand the act of modelling. In this paper we describe a formal framework by which the process of modelling can be regarded as involving the selection of more and more refined interpretations in terms of the underlying meta-model of the modelling language used. The resulting framework will be used to create a laboratory setup in which we can more closely study (and support) modelling processes.

[ PDF ] [ Bibtex ]

B. van Gils, H.A. (Erik) Proper, P. van Bommel, and Th.P. van der Weide. Quality Makes the Information Market. Edited by: R. Meersman, and Z. Tari. Pages: 345-359, October/November, Springer, 2006.

In this paper we consider information exchange via the Web to be an information market. The notion of quality plays an important role on this information market. We present a model of quality and discuss how this model can be operationalized.

[ PDF ] [ Bibtex ]

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. On the use of Object-Role Modelling to Model Active Domains. In: Proceedings of the Workshop on Exploring Modeling Methods for Systems Analysis and Design (EMMSAD'06), held in conjunction with the 18th Conference on Advanced Information Systems Engineering (CAiSE 2006), Pages: 473-484, June, 2006, ISBN 9782870375259.

Conceptual modelling methods such as Object-Role Modelling (ORM) have traditionally been developed with the aim of providing conceptual models of database structures. More recently, however, such modelling languages have shown their use for modelling (the ontology) of domains in general. In these latter cases, the modelling effort results in a (formally based) conceptual reasoning system using a domain calculus on top of a domain grammar. ORM is a member of a family of modelling methods with a well-defined and explicit way of working based on natural language analysis. Their natural language grounding aids in model validation, while their explicit way of working contributes to the repeatability of modelling processes. As the title suggests, this paper is primarily concerned with the application of ORM 'rigour

[ PDF ] [ Bibtex ]

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. Giving Meaning to Enterprise Architectures - Architecture Principles with ORM and ORC. Edited by: R. Meersman, Z. Tari, and P. Herrero. October/November, Springer, 2006.

[ PDF ] [ Bibtex ]

Ildiko Flesch, and P.J.F. Lucas. Graphical reasoning with Bayesian networks. In: Proceedings of AI-2006, Research and Developments in Intelligent Systems, Vol: XXIII, Pages: 71-84, Springer, 2006.

Nowadays, Bayesian networks are seen by many researchers as standard tools for reasoning with uncertainty. Despite the fact that Bayesian networks are graphical representations, representing dependence and independence information, normally the emphasis of the visualisation of the reasoning process is on showing changes in the associated marginal probability distributions due to entering observations, rather than on changes in the associated graph structure. In this paper, we argue that it is possible and relevant to look at Bayesian network reasoning as reasoning with a graph structure, depicting changes in the dependence and independence information. We propose a new method that is able to modify the graphical part of a Bayesian network to bring it in accordance with available observations. In this way, Bayesian network reasoning is seen as reasoning about changing dependences and independences as reflected by changes in the graph structure.

[ Missing PDF ] [ Bibtex ]

Ildiko Flesch, and Peter Lucas. Conflict-based Diagnosis: Adding Uncertainty to Consistency-based Diagnosis. In: Proceedings of ECAI2006 Workshop: Model-based Systems, Pages: 10-15, 2006.

Consistency-based diagnosis, a type of model-based diagnosis, concerns using a model of the structure and behaviour of a system in order to establish whether or not this system is malfunctioning. So far, it has been difficult to incorporate uncertainty into consistency-based diagnosis, and the various proposals found in literature are unsatisfactory. In the field of Bayesian networks, particular measures, such as the conflict measure, have been introduced in order to find conflicts between the content of a joint probability distribution and the related data. In this paper, we show that the conflict measure can replace the logical consistency relation traditionally used in consistency-based diagnosis; it also offers a way to favour particular diagnoses above others. By doing so, uncertainty reasoning is added to consistency-based diagnosis. The resulting new type of model-based diagnosis is called conflict-based diagnosis.
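
For orientation, a minimal sketch of the conflict measure referred to above, in its usual form as the log-ratio of the product of the individual observation probabilities to their joint probability (a positive value flags observations the model considers unlikely to occur together). In practice these probabilities come from Bayesian network inference; here they are simply passed in, and the numbers are made up:

    import math

    def conflict(marginals, joint):
        """Conflict measure conf(e1,...,en) = log( prod_i P(ei) / P(e1,...,en) ).
        marginals: the individual probabilities P(ei) of the observations
        joint:     the joint probability P(e1,...,en)
        Positive values indicate a potential conflict between the observations
        and the model; negative values indicate mutually supporting observations."""
        prod = 1.0
        for p in marginals:
            prod *= p
        return math.log(prod / joint)

    # Hypothetical example: two findings that rarely co-occur under the model.
    print(conflict([0.3, 0.4], joint=0.02))   # log(0.12 / 0.02) > 0, i.e. conflict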

[ PDF ] [ Bibtex ]

M.A.J. van Gerven, F.J. Díez, B.G. Taal, and P.J.F. Lucas. Prognosis of High-Grade Carcinoid Tumors using Dynamic Limited Memory Influence Diagrams. In: IDAMAP 2006, 2006.

Dynamic limited-memory influence diagrams (DLIMIDs) have been developed as a framework for decision-making under uncertainty over time. We show that DLIMIDs constructed from two-stage temporal LIMIDs can represent infinite-horizon decision processes. Given a treatment strategy supplied by the physician, DLIMIDs may be used as prognostic models. The theory is applied to determine the prognosis of patients that suffer from an aggressive type of neuroendocrine tumor.

[ Missing PDF ] [ Bibtex ]

M.A.J. van Gerven, and F.J. Díez. Selecting Strategies for Infinite-Horizon Dynamic LIMIDs. In: PGM 2006, 2006.

In previous work we have introduced dynamic limited-memory influence diagrams (DLIMIDs) as an extension of LIMIDs aimed at representing infinite-horizon decision processes. If a DLIMID respects the first-order Markov assumption then it can be represented by a 2TLIMID. Given that the treatment selection algorithm for LIMIDs, called single policy updating (SPU), can be infeasible even for small finite-horizon models, we propose two alternative algorithms for treatment selection with 2TLIMIDs. First, single rule updating (SRU) is a hill-climbing method inspired by SPU which need not iterate exhaustively over all possible policies at each decision node. Second, a simulated annealing algorithm can be used to avoid the local-maximum policies found by SPU and SRU.

[ PDF ] [ Bibtex ]

C. Hesse, D. Holtackers, and T. Heskes. On the use of mixtures of Gaussians and mixtures of generalized exponentials for modelling and classifying biomedical signals. In: Proceedings of the 1st Annual Symposium IEEE EMBS Benelux Chapter, 2006.

Mixture density models, particularly those based on the Gaussian distribution, are widely used in machine learning tools for data modeling and classification. Gaussian mixture models have also been used in biomedical signal processing applications involving electrophysiological signals such as the electromyogram (EMG) and electroencephalogram (EEG). In this paper, we consider a generalization of the Gaussian mixture model, which is based on the generalized exponential distribution. We describe a means of fitting such a model to data and explore its utility for bio-signal analysis.
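
For orientation, a minimal sketch of the generalized exponential (generalized Gaussian) density on which such mixtures are built, with the Gaussian shape recovered at beta = 2 and the Laplacian at beta = 1; the parameterisation and fitting procedure used in the paper may differ, and the example mixture below is made up:

    import math

    def gen_exp_pdf(x, mu=0.0, alpha=1.0, beta=2.0):
        """Generalized exponential density:
        p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x - mu| / alpha)**beta).
        beta = 2 gives a Gaussian shape, beta = 1 a Laplacian; beta < 2 yields
        the heavier tails often seen in electrophysiological amplitude data."""
        norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
        return norm * math.exp(-(abs(x - mu) / alpha) ** beta)

    def mixture_pdf(x, weights, components):
        """Density of a mixture: sum_k w_k * p_k(x)."""
        return sum(w * gen_exp_pdf(x, **c) for w, c in zip(weights, components))

    # Hypothetical two-component mixture.
    comps = [dict(mu=0.0, alpha=1.0, beta=1.0), dict(mu=3.0, alpha=0.8, beta=2.0)]
    print(mixture_pdf(0.5, [0.6, 0.4], comps))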

[ PDF ] [ Bibtex ]

A.J. Hommersom, Perry Groot, Peter Lucas, M. Balser, and J. Schmitt. Verification of Medical Guidelines using Task Execution with Background Knowledge. In: 4th Prestigious Applications of Intelligent Systems (in ECAI 2006), Pages: 835-836, IOS Press, Amsterdam, The Netherlands, EU, 2006.

The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many of such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines we propose a formalization of criteria for good practice medicine a guideline should comply to. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2 using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines. Using our quality requirements and medical background knowledge, we interactively verify a guideline dealing with the management of diabetes mellitus type 2. More specifically, we model the guideline as a 'network of tasks' using the language Asbru and, additionally, verify meta-level properties for this model in KIV, an interactive theorem prover. To the best of our knowledge, verification of a fully formalized guideline, as a network of tasks, using medical background knowledge has not been done before.

[ PDF ] [ Bibtex ]

A.J. Hommersom, P. Groot, P.J.F. Lucas, M. Balser, and J. Schmitt. Combining Task Execution and Background Knowledge for the Verification of Medical Guidelines. In: Research and Development in Intelligent Systems, Vol: XXIII, Pages: 3-16, December, Springer, 2006.

The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many of such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines we propose a formalization of criteria for good practice medicine a guideline should comply to. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2 using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines.

[ PDF ] [ Bibtex ]

H.A. (Erik) Proper, P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, and Th.P. van der Weide. A Fundamental View on the Act of Modeling. Edited by: J. Kizza, J. Aisbett, A. Vince, and T. Wanyama. August, Fountain Publishers, Kampala, Uganda, 2006.

[ PDF ] [ Bibtex ]

S.J.B.A. (Stijn) Hoppenbrouwers, L. (Leonie) Lindeman, and H.A. (Erik) Proper. Capturing Modeling Processes - Towards the MoDial Modeling Laboratory. Edited by: R. Meersman, Z. Tari, and P. Herrero. Pages: 1242-1252, October/November, Springer, 2006.

This paper is part of an ongoing research effort to better understand the process of conceptual modeling. As part of this effort, we are currently developing a modeling laboratory named MoDial (Modeling Dialogues). The main contribution of this paper is a conceptual meta-model of that part of MoDial which aims to capture the elicitation aspects of the modeling process used in creating a model, rather than the model as such. The current meta-model is the result of a two-stage research process. The first stage involves theoretical input from literature and earlier results. The second stage is concerned with (modest) empirical validation in terms of interviews with modeling experts.

[ PDF ] [ Bibtex ]

I. Flesch, P.J.F. Lucas, and S. Visscher. On the Modularisation of Independence in Dynamic Bayesian Networks. In: 18th Belgium-Netherlands Conference on Artificial Intelligence (BNAIC'06), Pages: 133-140, 2006.

Dynamic Bayesian networks are a special type of Bayesian networks, which explicitly deal with the dimension of time. They are distinguished into repetitive and non-repetitive networks. Repetitive networks have the same set of random (statistical) variables and independence relations at each time step, whereas in non-repetitive networks the set of random variables and the independence relations between these random variables may vary in time. Due to their structural symmetry, repetitive networks are easier to use and are, therefore, often taken as a standard. However, repetitiveness is a very strong assumption, which normally does not hold, since particular dependences and independences may only hold at certain time steps. In this paper, we propose a new framework for the modularisation of non-repetitive dynamic Bayesian networks, which offers a practical approach to coping with the computational and structural difficulties associated with dynamic Bayesian networks. This framework is based on separating temporal and atemporal independence relations. We investigate properties of the modularisation and show the separation to be compositive.

[ Missing PDF ] [ Bibtex ]

R. Jurgelenaite, and T. Heskes. EM Algorithm for Symmetric Causal Independence Models. In: Proceedings of the Nineteenth European Conference on Machine Learning, Pages: 234-245, Radboud University Nijmegen, Springer, 2006.

Causal independence modelling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. In this paper, we present the EM algorithm to learn the parameters in causal independence models based on the symmetric Boolean function. The developed algorithm enables us to assess the practical usefulness of the symmetric causal independence models, which has not been done previously. We evaluate the classification performance of the symmetric causal independence models learned with the presented EM algorithm. The results show the competitive performance of these models in comparison to noisy OR and noisy AND models as well as other state-of-the-art classifiers.
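
For reference, a minimal sketch of the noisy OR and noisy AND models that serve as comparison baselines above (standard textbook definitions; the symmetric causal independence models and the EM procedure of the paper are not reproduced here). Each present cause is assumed to succeed independently with its own probability:

    def noisy_or(present_probs):
        """P(effect = true) when at least one present cause needs to succeed."""
        prod = 1.0
        for p in present_probs:
            prod *= (1.0 - p)
        return 1.0 - prod

    def noisy_and(present_probs):
        """P(effect = true) when all present causes need to succeed."""
        prod = 1.0
        for p in present_probs:
            prod *= p
        return prod

    # Hypothetical example with two present causes.
    print(noisy_or([0.8, 0.4]), noisy_and([0.8, 0.4]))   # 0.88 and 0.32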

[ PDF ] [ Bibtex ]

R. Jurgelenaite, and T. Heskes. Symmetric Causal Independence Models for Classification. In: Proceedings of the Third European Workshop on Probabilistic Graphical Models, Pages: 163-170, Radboud University Nijmegen, 2006.

Causal independence modelling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. In this paper, we propose an application of an extended class of causal independence models, causal independence models based on the symmetric Boolean function, for classification. We present an EM algorithm to learn the parameters of these models, and study convergence of the algorithm. Experimental results on the Reuters data collection show the competitive classification performance of causal independence models based on the symmetric Boolean function in comparison to the noisy OR model and, consequently, to other state-of-the-art classifiers.

[ PDF ] [ Bibtex ] [ External URL ]

Betsy Pepels, Rinus Plasmeijer, and H.A. (Erik) Proper. Fact-Oriented Modeling from a Programming Language Designer's Perspective. Edited by: R. Meersman, Z. Tari, and P. Herrero. Pages: 1170-1180, October/November, Springer, 2006.

We investigate how achievements of programming languages research can be used for designing and extending fact oriented modeling languages. Our core contribution is that we show how extending fact oriented modeling languages with the single concept of algebraic data types leads to a natural and straightforward modeling of complex information structures like unnamed collection types and higher order types.

[ PDF ] [ Bibtex ]

Th. Charitos, S. Visscher, L.C. van der Gaag, P.J.F. Lucas, and K. Schurink. A Dynamic Model for Therapy Selection in ICU Patients with VAP. In: Proceedings of IDAMAP-2006, Pages: 71-71, 2006.

[ Missing PDF ] [ Bibtex ]

A.J. Hommersom, P. Groot, P.J.F. Lucas, M. Marcos, and B. Martinez-Salvador. A Constraint-based Approach to Medical Guidelines and Protocols. In: Proceedings of ECAI2006 workshop: AI techniques in healthcare: evidence-based guidelines and protocols, Pages: 25-30, August, 2006.

Medical guidelines and protocols are documents aimed at improving the quality of medical care by offering support in medical decision making in the form of management recommendations based on scientific evidence. Whereas medical guidelines are intended for nation-wide use, and thus omit medical management details that may differ among hospitals, medical protocols are aimed at local use within hospitals and, therefore, include detailed information. Although a medical guideline and protocol concerning the management of a particular disorder are related to each other, one question is to what extent they are different. Formal methods are applied to shed light on this issue. A Dutch medical guideline regarding the treatment of breast cancer, and a Dutch protocol based on it, are taken as an example.

[ PDF ] [ Bibtex ]

A.J. Hommersom, P. Groot, and P.J.F. Lucas. Checking Guideline Conformance of Medical Protocols using Modular Model Checking. In: 18th Belgium-Netherlands Conference on Artificial Intelligence (BNAIC'06), October, 2006.

Medical guidelines and protocols are documents aimed at improving the quality of medical care by offering support in medical decision making in the form of management recommendations based on scientific evidence. Whereas medical guidelines are intended for nation-wide use, and thus omit medical management details that may differ among hospitals, medical protocols are aimed at local use within hospitals and, therefore, include detailed information. Although protocols are often constructed on the basis of medical guidelines, the question is to which extent a protocol conforms to the guideline. Formal methods are applied to shed light on this issue. A Dutch medical guideline regarding the treatment of breast cancer, and a Dutch protocol based on it, are taken as an example.

[ PDF ] [ Bibtex ]

O. Zoeter, A. Ypma, and T. Heskes. Deterministic and stochastic Gaussian particle filtering. 2006.

In this article we study inference problems in non-linear dynamical systems. In particular we are concerned with assumed density approaches to filtering and smoothing. In models with uncorrelated (but dependent) state and observation, the extended Kalman filter and the unscented Kalman filter break down. We show that the Gaussian particle filter and the one-step unscented Kalman filter make less assumptions and potentially form useful filters for this class of models. We construct a symmetric smoothing pass for both filters that does not require the dynamics to be invertible. We investigate the characteristics of the methods in an interesting problem from mathematical finance. Among others we find that smoothing helps, in particular for the deterministic one-step unscented Kalman filter.
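
As background, a minimal sketch of the standard unscented transform underlying the unscented Kalman filters mentioned above (textbook sigma-point formulas only; the one-step filter and the symmetric smoothing pass studied in the article are not reproduced, and the example numbers are made up):

    import numpy as np

    def unscented_transform(f, mean, cov, alpha=1.0, beta=2.0, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through a nonlinear function f
        using the standard 2n+1 sigma points; return the transformed mean
        and covariance."""
        n = mean.shape[0]
        lam = alpha ** 2 * (n + kappa) - n
        sqrt_cov = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean,
                           mean + sqrt_cov.T,
                           mean - sqrt_cov.T])          # shape (2n+1, n)
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
        y = np.array([f(s) for s in sigma])
        y_mean = wm @ y
        diff = y - y_mean
        y_cov = (wc[:, None] * diff).T @ diff
        return y_mean, y_cov

    # Hypothetical example: a scalar nonlinearity applied to a 2-D Gaussian.
    m, P = np.array([1.0, 0.5]), np.eye(2) * 0.1
    print(unscented_transform(lambda x: np.array([np.sin(x[0]) * x[1]]), m, P))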

[ PDF ] [ Bibtex ]

Reports

S. Djajasaputra, T. Heskes, and P. Ouwehand. State Space Models for Seasonal Aggregation in Sales Forecasting. Technical report: ICIS-R06015, March, Radboud University Nijmegen, 2006.

This paper describes a way to improve forecasts by simultaneously forecasting a group of products that exhibit a similar seasonal pattern. There have already been several previous publications that demonstrated forecast improvements using seasonal aggregation. However, these papers focused on various ad hoc methods to combine seasonal indices from aggregated time series. Instead, we develop state space models in which aggregation is naturally incorporated. Our primary contribution is the seasonal aggregation extension of the Harvey

[ PDF ] [ Bibtex ]

B. van Gils, H.A. (Erik) Proper, P. van Bommel, and Th.P. van der Weide. Aptness based search on the Web. Technical report: ICIS-R06005, November, Radboud University Nijmegen, 2006.

The Web has, in a relatively short period of time, evolved from a medium for information exchange between scholars to one of the most important media in modern times. This has had a major impact on the infrastructure supporting the Web. Retrieval systems that select relevant resources from the ever increasing volume of resources that are available to us are becoming more and more important. In our opinion, the traditional view on these systems (where `topical relevance` seems to be the key notion) is too limited. The main contribution of this paper is an integral view on a more advanced scheme for search on the web called aptness based retrieval.

[ PDF ] [ Bibtex ]

E.D. Schabell, H.A. (Erik) Proper, and Th.P. van der Weide. IRIS Publication Management System - the first steps towards realization. Technical report: ICIS-R06008, December, Radboud University Nijmegen, 2006.

The IRIS Publication Management System (PMS) has been a long time in coming. It has been a wish of the IRIS department to have a single entry point for dealing with the publications created by its members. The complexity of not only accepting new submissions, but also processing these submissions through the existing institutional publication infrastructure, is not a hurdle easily taken. The submission of both internal and external publications generates not only a collection of documents, but also a very valuable and potentially useful repository of publication data. If this data were to be collected, protected from inconsistencies and properly organized, then one would only be limited by one's imagination as to the uses to which it could be put. To start with, one can begin to provide a very interesting playground for departmental retrieval experiments, provide various forms of exportable formats (think of HTML, BibTeX, text, etc.) and generate any type of organizational reporting as deemed necessary (such as yearly overviews of departmental publications). This document discusses the definition and design of the IRIS PMS. This includes the motivation (why), the (functional) requirements (what), the key design principles as well as the actual design (how).

[ PDF ] [ Bibtex ]

J. Nabukenya, P. van Bommel, and H.A. (Erik) Proper. Collaborative policy-making processes. Technical report: ICIS-R06036, December, Radboud University Nijmegen, 2006.

This paper is concerned with the application of collaboration engineering to improve the quality of policy-making processes. Policies are needed to guide complex decision-making. The creation of such policies is a collaborative process. The quality of this collaboration has a profound impact on the quality of the resulting policies and their acceptance by stakeholders. We therefore focus on the use of techniques and methods from the field of collaboration engineering to improve this quality. We present the results of two case studies conducted on the use of collaboration engineering in the context of policy-making processes. This result also involves a generic design of a policy-making process in terms of elementary constructs from collaboration engineering, which has been arrived at using the action research approach. Before presenting these case studies, however, some theoretical background on policy-making processes and collaboration engineering is provided.

[ PDF ] [ Bibtex ]

Ildiko Flesch, and P.J.F. Lucas. Graphical Reasoning with Bayesian Networks. Technical report: ICIS-R06024, September, Radboud University Nijmegen, 2006.

Nowadays, Bayesian networks are seen by many researchers as standard tools for reasoning with uncertainty. Despite the fact that Bayesian networks are graphical representations, representing dependence and independence information, normally the emphasis of the visualisation of the reasoning process is on showing changes in the associated marginal probability distributions due to entering observations, rather than on changes in the associated graph structure. In this paper, we argue that it is possible and relevant to look at Bayesian network reasoning as reasoning with a graph structure, depicting changes in the dependence and independence information. We propose a new method that is able to modify the graphical part of a Bayesian network to bring it in accordance with available observations. In this way, Bayesian network reasoning is seen as reasoning about changing dependences and independences as reflected by changes in the graph structure.

[ PDF ] [ Bibtex ]

M.A.J. van Gerven. Efficient Bayesian Inference by Factorizing Conditional Probability Distributions. Technical report: ICIS-R06032, November, Radboud University Nijmegen, 2006.

Bayesian inference becomes more efficient when one makes use of the structure that is contained in the potentials that together constitute a joint probability distribution. Such structure can be represented in the form of probability trees or Boolean polynomials. However, in order to make use of such representations in Bayesian inference, one needs to be able to represent it compactly within the inference engine. Recently, it was shown that potentials that exhibit functional dependence can be factorized efficiently by means of the introduction of hidden variables. Here we demonstrate how these techniques can be applied to represent arbitrary potentials compactly thus improving the efficiency of Bayesian inference for models with arbitrary potentials.

[ PDF ] [ Bibtex ]

M.A.J. van Gerven, and B.G. Taal. Structure and Parameters of a Bayesian Network for Carcinoid Prognosis. Technical report: ICIS-R06033, November, Radboud University Nijmegen, 2006.

In this report, we describe the structure and parameters of a Bayesian network for prognosis of patients that present with carcinoid tumors. The report acts as a reference guide to the carcinoid model, which has been developed in collaboration with an expert physician at the Netherlands Cancer Institute.

[ PDF ] [ Bibtex ]

B. van Gils, and D. Vojevodina. The effects of exceptions on enterprise architecture. Technical report: ICIS-R06011, February, Radboud University Nijmegen, 2006.

Exception handling by enterprises is an increasingly important topic, since exceptions may seriously disrupt day-to-day operations, which may in turn impact the profits of the enterprise. In this paper we explore two views on the enterprise (top down: architecture, and bottom up: workflow) with respect to exception handling. More specifically, we focus on using both views on the enterprise to handle exceptions in a structured and efficient manner.

[ PDF ] [ Bibtex ]

B. van Gils, and H.A. (Erik) Proper. Fundamentals of Quality on the Web. Technical report: ICIS-R06029, June, 2006.

We use information from the Web for performing our daily tasks more and more often. Locating the right resources that help us in doing so is a daunting task, especially with the present rate of growth of the Web as well as the many different kinds of resources available. The task of search engines is to assist us in finding those resources that are apt for our given tasks; search engines assess the quality of resources for players. In this paper we present a formal model for the notion of quality on the Web. We base our model on a thorough literature study of how the quality notion is used in different fields. Moreover, we show how the quality of resources is affected by software manipulations (transformations).

[ PDF ] [ Bibtex ]

Ildiko Flesch, P.J.F. Lucas, and Stefan Visscher. On the modularisation of independence in dynamic Bayesian networks. Technical report: ICIS-R06025, September, Radboud University Nijmegen, 2006.

Dynamic Bayesian networks are Bayesian networks which explicitly incorporate the dimension of time. They are distinguished into repetitive and non-repetitive networks. Repetitive networks have the same set of random (statistical) variables and independence relations at each time step, whereas in non-repetitive networks the set of random variables and the independence relations between these random variables may vary in time. Due to their structural symmetry, repetitive networks are easier to use and are, therefore, often considered the standard dynamic Bayesian networks. However, repetitiveness is a very strong assumption, which usually does not hold, because dependences and independences that only hold at certain time steps may be lost. In this paper, we propose a new framework for the modularisation of non-repetitive dynamic Bayesian networks, which offers a practical approach to coping with the computational and structural difficulties associated with unrestricted dynamic Bayesian networks. This framework is based on separating temporal and atemporal independence relations in the model. We investigate properties of the modularisation and show it to be compositive.

[ PDF ] [ Bibtex ]

R. Jurgelenaite, T. Heskes, and P.J.F. Lucas. Noisy Threshold Functions for Modelling Causal Independence in Bayesian Networks. Technical report: ICIS-R06014, February, Radboud University Nijmegen, 2006.

Causal independence modelling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. Many Bayesian network models incorporate causal independence assumptions; however, only the noisy OR and noisy AND, two examples of causal independence models, are used in practice. Their underlying assumption that either at least one cause, or all causes together, give rise to an effect, however, seems unnecessarily restrictive. In the present paper a new, more flexible, causal independence model is proposed, based on the Boolean threshold function. A connection is established between conditional probability distributions based on the noisy threshold model and Poisson binomial distributions, and the basic properties of this probability distribution are studied in some depth. We present and analyse recursive methods as well as approximation and bounding techniques to assess the conditional probabilities in the noisy threshold models.
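
As an illustration of the connection mentioned above, a minimal sketch that evaluates a noisy threshold model by computing the Poisson binomial distribution of the number of succeeding causes with a simple dynamic programme. This is only one straightforward reading of the model (the effect is present when at least k of the present causes independently succeed); the recursive, approximation and bounding techniques of the report are not reproduced:

    def poisson_binomial(ps):
        """Distribution of the number of successes among independent Bernoulli
        trials with success probabilities ps; returns q with q[k] = P(k successes)."""
        q = [1.0]
        for p in ps:
            new = [0.0] * (len(q) + 1)
            for k, qk in enumerate(q):
                new[k] += (1.0 - p) * qk     # this trial fails
                new[k + 1] += p * qk         # this trial succeeds
            q = new
        return q

    def noisy_threshold(present_probs, k):
        """P(effect = true) when at least k of the present causes succeed,
        cause i succeeding independently with probability present_probs[i]."""
        dist = poisson_binomial(present_probs)
        return sum(dist[k:])

    # Hypothetical example: three present causes, effect needs at least two successes.
    print(noisy_threshold([0.9, 0.5, 0.3], k=2))   # 0.6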

[ PDF ] [ Bibtex ]

L. Klein Holte, M. Jansen, J. Mutter, and E.D. Schabell. It is about time for the AbTLinux dependency engine. Technical report: ICIS-R06013, February, Radboud University Nijmegen, 2006, A result of RE class of 2005/2006 student project.

This document describes the requirements for the ABout Time Linux dependency engine, a component of the AbTLinux package manager. These requirements were obtained by a student project group that participated in the Radboud University Nijmegen course called Requirements Engineering. This document represents the best results obtained from the six participating groups. The basis of the project flows from the main AbTLinux project goals: clearly documented design, clear development goals leading to each release, and just getting them done. Most members of the AbTLinux project have worked on other Linux distribution projects and have grown tired of working on badly documented designs. This requirements document's goal is to provide a structured and clear way of gathering information for the dependency engine project.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Foundations and Applications of Intelligent Knowledge Exchange. Technical report: ICIS-R06026, October, Radboud University Nijmegen, 2006.

Exchange of knowledge is becoming increasingly important to modern organizations. This chapter explains what this elementary knowledge exchange consists of and how a virtual workplace can support knowledge exchange between workers. A scenario from the medical domain illustrates how physicians can improve their knowledge exchange by utilizing the virtual workplace models introduced. Better adaptation to the rapidly changing nature of providing health care is a desirable effect of improved knowledge exchange between physicians. Explicit models concerning possible physical, social and digital contexts of knowledge exchange are discussed, as well as models which depict how knowledge relatedness enables intelligent knowledge exchange. Researchers studying virtual workplace models for industry and academic purposes belong to the intended audience of this chapter. Administrators of public sector or other non-profit agencies who wish to incorporate virtual workplace models and methods into their daily operations can also benefit from the contents discussed.

[ PDF ] [ Bibtex ]

S.J. Overbeek. A Research Methodology for Supporting the Development of a Personal Digital Workspace for Knowledge Workers. Technical report: ICIS-R06020, April, Radboud University Nijmegen, 2006.

Present day knowledge workers interact with a digital world which is full of digital services intended to support these workers in their knowledge intensive tasks. Digital services include the use of applications in general, tools that support knowledge generation, or knowledge transfer, but may also support the proliferation of knowledge in order to improve organizational decision making and value addition. However, contemporary digital services are often not user-friendly, and are impersonal and ambiguous in use. Therefore, my Ph.D. research focuses on the notion of ‘Sophia’: a reference model and a development framework for a personal digital workspace for knowledge workers. A personal digital workspace for knowledge workers aims to integrate and personalize all digital services, digital information items, and digital knowledge items, so that an individual knowledge worker can carry out his work-related activities pleasantly, effectively, and efficiently in every physical, social and digital context.

[ PDF ] [ Bibtex ]

E.D. Schabell. ABout Time Linux - the requirements. Technical report: ICIS-R06021, May, Radboud University Nijmegen, 2006.

This document will detail a project that has arisen from the desire to create a generic framework for managing the software on a Linux system. It is based on my experiences while working as a developer on a source-based Linux distribution spanning more than three years at the time of this writing. I hope to take my experiences as a developer and package maintainer to create this new package manager. I follow in the footsteps of a few pretty good existing projects such as (micropkg, 2004), (Easinstaller, 2004) and (SMGL, 2004), though I have found these to be lacking in some way or another.

[ PDF ] [ Bibtex ]

E.D. Schabell. Touring the ICIS Publication Management System (PMS v1.2). Technical report: ICIS-R06031, November, Radboud University Nijmegen, 2006, Sources can also be found in CodeYard project repo.

The Publication Management System (PMS) was initially developed and deployed for usage by the IRIS department within the Radboud University Nijmegen. It was born from a wish to provide extensive services in managing and reporting our publications. This paper takes the reader through a tour of the current version of PMS, from the basic services available to any user, on to specific functionality for our institute's members, through the API, and finally leaves the reader with some examples of how to use the more advanced features PMS provides.

[ PDF ] [ Bibtex ]

Th.P. van der Weide, and P. van Bommel. GAM: A Generic Model for Adaptive Personalisation. Technical report: ICIS-R06022, June, Radboud University Nijmegen, 2006.

In this paper we formally define the Generic Adaptivity Model (GAM). This model provides a strong theoretical foundation for adaptive personalisation. Staying true to existing approaches in user modelling, the GAM can be used descriptively as well as prescriptively. The GAM consists of a number of pillars bound together by a common foundation. In order to allow for extensibility the GAM is domain independent and has few restrictions on applicability. The GAM is embedded in a method for the design of adaptation models for new as well as legacy systems.

[ PDF ] [ Bibtex ]

J. Nabukenya, G.-J. de Vreede, and H.A. (Erik) Proper. Research Methods for Collaboration Engineering: An Assessment of Applicability Using Collaborative Policy-Making Example. Technical report: ICIS-R07010, November, Radboud University Nijmegen, 2006.

Collaboration Engineering (CE) is a new field of research and practice which involves the design of recurring collaboration processes that are meant to bring predictable success to organizations’ recurring mission-critical collaborative tasks. To measure the effectiveness of CE research efforts, we would need to use a research methodology. This article therefore provides an overview of selected research methods, and an assessment of their applicability to CE research using collaborative organizational policy-making processes as the primary example. This article also presents examples of research questions that can be answered in CE research using the respective research methods.

[ PDF ] [ Bibtex ]

Professional

S.J. Overbeek. Boekrecensie: Mobiele Communicatie op de Werkvloer. In: Tijdschrift voor Informatie en Management, Nr: 12, Vol: 3, Pages: 44, March, 2006, In Dutch.

The book 'Mobiele Communicatie op de Werkvloer' focuses, as the title indicates, on business applications of mobile communication. The authors offer both a technical and a functional treatment and have even devoted a chapter to architecture. Consultants and (IT) managers who 'have something to do with mobile communication' belong to the book's intended readership. What stands out is the large number of concrete practical cases covered: mobile communication at companies such as NS RailConnect, Falck, KLM Catering Services and Schiphol, as well as organisations in the healthcare sector and the public order and safety sector. The theory precedes these practical cases.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Grondslagen en Toepassingen voor Intelligente Kennisuitwisseling. In: IT Monitor, Nr: 10, Vol: 9, Pages: 8-11, November, 2006, In Dutch.

The importance of knowledge, and of knowledge exchange in particular, is steadily increasing for organizations. In the medical domain, medical specialists today experience continuous changes in the nature of health care. Practice changes daily, documented by thousands of scientific medical journals. Treatments take place in more varied work environments and patients spend less time in hospitals. In this working climate, a medical specialist needs more knowledge than ever before to meet the needs of the patient. More and more professionals move beyond the boundaries of traditional physical workplaces, and parts of their daily work are carried out entirely online. This article focuses on improving knowledge exchange between professionals by means of virtual work environments. Before such an environment can be developed, an understanding is needed of the support a virtual work environment should offer, and the necessity of that support must be clear.

[ PDF ] [ Bibtex ]

J.J. Sarbo. Peircean proto-signs. In: AIP Conference Proceedings, Vol: 839, Pages: 474-479, 2006, Best Paper Award.

Human knowledge is intentional, as opposed to 'knowledge' represented by the computer, which is syntactic. The premise of this paper is that nevertheless a process model of cognition can be defined which is isomorphic and analogous to Peirce's 9-adic classification of signs. An advantage of the relation with the Peircean concepts lies in the model's potential for the definition of a 'natural' representation of knowledge, a representation which can be more easily interpreted than the traditional formal ones.

[ Missing PDF ] [ Bibtex ]