S.J.B.A. (Stijn) Hoppenbrouwers, A.I. Bleeker, and H.A. (Erik) Proper. * Facing the Conceptual Complexities in Business Domain Modeling. *In: Computing Letters, Nr: 2, Vol: 1, Pages: 59-68, 2005.

The paper focuses on business domain modeling as part of requirements engineering in software development projects. Domain modeling concerns obtaining and modeling the language (concepts, terminologies, ontologies) used by stakeholders to talk about a domain. Achieving conceptual clarity and consensus among stakeholders is an important yet often neglected part of requirements engineering, and domain modeling can play a key role in supporting it. This does, however, require a nuanced approach to the language aspects of domain modeling, as well as ambition management concerning its goals and the procedure followed. We provide an analysis of the linguistic complexities involved, as well as of various levels of ambition concerning the domain modeling process. On top of the ''classic'' approach to modeling singular, stable domains, we distinguish aspects like incremental modeling, modeling of multiple terminologies within a domain, and domain evolution; we elaborate on the first two aspects.

P.J.F. Lucas. * Expert Systems. *In: UNESCO Encyclopaedia of Life Support Systems, EOLSS, 2005.

Expert systems, also called knowledge-based systems or knowledge systems, are computer systems characterized by the fact that an explicit distinction is made between a part in which knowledge of a problem domain is represented, and a part which manipulates that knowledge to reason about or solve an actual problem using problem data. Both the type of knowledge used in solving the problem and the nature of the problem-solving methods used determine which problems can be solved. The knowledge represented in a knowledge base is formal in nature, and is the result of modeling essential features of the domain for the problem at hand. The incorporated knowledge may be acquired from domain experts, literature or datasets. Designing an expert system usually involves using methodologies for knowledge acquisition, modeling and evaluation.

Perry Groot, A. ten Teije, and Frank van Harmelen. * A quantitative analysis of the robustness of knowledge-based systems through degradation studies. *In: Knowledge and Information Systems, Vol: 7, Pages: 224-245, 2005.

The overall aim of this paper is to provide a general setting for quantitative quality measures of knowledge-based system behaviour that is widely applicable to many knowledge-based systems. We propose a general approach that we call degradation studies: an analysis of how system output changes as a function of degrading system input, such as incomplete or incorrect data or knowledge. To show the feasibility of our approach, we have applied it in a case study. We have taken a large and realistic vegetation-classification system, and have analysed its behaviour under various kinds of incomplete and incorrect input. This case study shows that degradation studies can reveal interesting and surprising properties of the system under study.
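
The core loop of a degradation study can be sketched generically: hold the system fixed, degrade its input at increasing severity levels, and record an output-quality measure at each level. The sketch below is a minimal illustration with a hypothetical toy classifier and a noise-based degradation function, not the vegetation-classification system studied in the paper.

```python
import random

def degradation_study(classify, inputs, labels, degrade, levels, seed=0):
    """Measure classification accuracy as input is progressively degraded.

    classify: the system under study (a function from input to label)
    degrade:  corrupts an input with the given severity level
    Returns a list of (level, accuracy) pairs.
    """
    rng = random.Random(seed)
    results = []
    for level in levels:
        correct = sum(
            classify(degrade(x, level, rng)) == y
            for x, y in zip(inputs, labels)
        )
        results.append((level, correct / len(inputs)))
    return results

# Hypothetical system: classify a number as "high" if >= 5;
# degradation adds Gaussian noise with standard deviation `level`.
classify = lambda x: "high" if x >= 5 else "low"
degrade = lambda x, level, rng: x + rng.gauss(0, level)
inputs = list(range(10))
labels = [classify(x) for x in inputs]

curve = degradation_study(classify, inputs, labels, degrade, [0.0, 1.0, 5.0])
print(curve[0])  # → (0.0, 1.0): with zero degradation, accuracy is perfect
```

Plotting such a curve shows how gracefully (or abruptly) the system's output quality falls off as its input is corrupted.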

T. Heskes, M. Opper, W. Wiegerinck, O. Winther, and O. Zoeter. * Approximate inference techniques with expectation constraints. *In: Journal of Statistical Mechanics: Theory and Experiment, Nr: 11, Vol: 0, Pages: 11-15, 2005, keywords - approximate inference, Bayesian networks, machine learning, expectation propagation, expectation consistency.

This paper discusses inference problems in probabilistic graphical models that often occur in a machine learning setting. In particular it presents a unified view of several recently proposed approximation schemes. Expectation consistent approximations and expectation propagation are both shown to be related to Bethe free energies with weak consistency constraints, i.e. free energies where local approximations are only required to agree on certain statistics instead of full marginals.


S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and V.E. van Reijswoud. * Navigating the Methodology Jungle - The communicative role of modelling techniques in information system development. *In: Computing Letters, Nr: 3, Vol: 1, 2005.

In this position paper, we claim that more attention should be paid to the communicative role of modelling techniques in information system development. The communicative role of a modelling technique refers to it providing a language for communication between the different actors involved in system development, about particular aspects of the system being developed.

R. Jurgelenaite, and P.J.F. Lucas. * Exploiting Causal Independence in Large Bayesian Networks. *In: Knowledge-Based Systems, Vol: 18, Pages: 153-162, 2005.

The assessment of a probability distribution associated with a Bayesian network is a challenging task, even if its topology is sparse. Special probability distributions based on the notion of causal independence have therefore been proposed, as these allow defining a probability distribution in terms of Boolean combinations of local distributions. However, for very large networks even this approach becomes infeasible: in Bayesian networks which need to model a large number of interactions among causal mechanisms, such as in fields like genetics or immunology, it is necessary to further reduce the number of parameters that need to be assessed. In this paper, we propose using equivalence classes of binomial distributions as a means to define very large Bayesian networks. We analyse the behaviours obtained by using different symmetric Boolean functions with these probability distributions as a means to model joint interactions. Some surprisingly complicated behaviours are obtained in this fashion, and their intuitive basis is examined.
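
The idea of defining a probability distribution in terms of Boolean combinations of local distributions can be illustrated with the most familiar causal independence model, the noisy OR: each present cause independently activates the effect with its own probability, and the activations are combined by Boolean disjunction. A minimal sketch with illustrative parameter values (not taken from the paper):

```python
from itertools import product

def noisy_or(cause_states, activation_probs):
    """P(effect = true | causes) under the noisy-OR model: each present
    cause i independently activates the effect with probability p_i, and
    the effect occurs iff at least one activation fires (Boolean OR)."""
    p_inactive = 1.0
    for present, p in zip(cause_states, activation_probs):
        if present:
            p_inactive *= (1.0 - p)
    return 1.0 - p_inactive

# Three causes with illustrative activation probabilities.
probs = [0.9, 0.7, 0.5]

# The full conditional probability table follows from just 3 parameters,
# instead of requiring 2**3 separately assessed entries.
cpt = {states: noisy_or(states, probs)
       for states in product([False, True], repeat=3)}

print(cpt[(True, False, False)])  # ≈ 0.9
print(cpt[(True, True, False)])   # 1 - 0.1*0.3 ≈ 0.97
```

This parameter reduction, from exponential to linear in the number of causes, is exactly what becomes insufficient for very large networks, motivating the paper's further reduction via equivalence classes of binomial distributions.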


P.J.F. Lucas. * Bayesian network modelling through qualitative patterns. *In: Artificial Intelligence, Vol: 163, Pages: 233-263, 2005.

In designing a Bayesian network for an actual problem, developers need to bridge the gap between the mathematical abstractions offered by the Bayesian-network formalism and the features of the problem to be modelled. Qualitative probabilistic networks (QPNs) have been put forward as qualitative analogues to Bayesian networks, and allow modelling interactions in terms of qualitative signs. They thus have the advantage that developers can abstract from the numerical detail, and therefore the gap may not be as wide as for their quantitative counterparts. A notion that has been suggested in the literature to facilitate Bayesian-network development is causal independence. It allows exploiting compact representations of probabilistic interactions among variables in a network. In the paper, we deploy both causal independence and QPNs in developing and analysing a collection of qualitative, causal interaction patterns, called QC patterns. These are endowed with a fixed qualitative semantics, and are intended to offer developers a high-level starting point when developing Bayesian networks.
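
The qualitative signs that QPNs use in place of numbers combine according to a small sign algebra; the sketch below implements the standard QPN sign operators (chain combination and parallel combination, following Wellman's formulation), not the paper's specific QC patterns:

```python
# Sign algebra of qualitative probabilistic networks: influences carry
# signs from {'+', '-', '0', '?'} rather than numerical parameters.

def sign_mult(a, b):
    """Combine signs along a chain of influences (A -> B -> C)."""
    if a == '0' or b == '0':
        return '0'          # a vanishing influence breaks the chain
    if a == '?' or b == '?':
        return '?'          # unknown sign propagates
    return '+' if a == b else '-'

def sign_add(a, b):
    """Combine signs of parallel influences on the same variable."""
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a
    return '?'              # opposing or unknown influences are ambiguous

# A positive influence followed by a negative one yields a negative net
# effect; parallel opposing influences yield an ambiguous '?'.
print(sign_mult('+', '-'))  # → -
print(sign_add('+', '-'))   # → ?
```

The `'?'` outcome for opposing parallel influences is the characteristic weakness of purely qualitative reasoning that patterns with fixed semantics aim to mitigate.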

Paul de Vrieze, P. van Bommel, Jakob Klok, and Th.P. van der Weide. * Adaptation in Multimedia Systems. *In: Multimedia Tools and Applications, Nr: 3, Vol: 25, Pages: 333-343, June, 2005.

Multimedia systems can profit considerably from personalization. Such personalization, especially when performed automatically, is essential to give users the feeling that the system is easily accessible. How this adaptive personalization works depends strongly on the adaptation model that is chosen. We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. In order to illustrate our framework we evaluate push- and pull-based user modeling in user modeling systems.

A. Ypma, and T. Heskes. * Novel approximations for inference in nonlinear dynamical systems using expectation propagation. *In: Neurocomputing, Vol: 69, Pages: 85-99, 2005, keywords - approximate inference, Bayesian networks, machine learning, expectation propagation, time series analysis.

We formulate the problem of inference in nonlinear dynamical systems in the framework of expectation propagation, and propose two novel algorithms. The first algorithm is based on Laplace approximation and allows for iterated forward and backward passes. The second is based on repeated application of the unscented transform. It leads to an unscented Kalman smoother for which the dynamics need not be inverted explicitly. In experiments with a one-dimensional nonlinear dynamical system we show that for relatively low observation noise levels, the Laplace algorithm allows for the best estimates of the state means. The unscented algorithm, however, is more robust to high observation noise and always outperforms the conventional inference methods against which it was benchmarked.
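
The unscented transform at the heart of the second algorithm can be sketched in one dimension: propagate a Gaussian through a nonlinearity by evaluating it at deterministically chosen sigma points and recombining with fixed weights. This is a generic textbook sketch (with an assumed scaling parameter kappa), not the smoother developed in the paper:

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian N(mean, var) through a nonlinearity f:
    evaluate f at 2n+1 sigma points (here n=1) and recombine the results
    with weights that match the first two moments."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa),
               1.0 / (2 * (n + kappa)),
               1.0 / (2 * (n + kappa))]
    ys = [f(x) for x in sigma_points]
    new_mean = sum(w * y for w, y in zip(weights, ys))
    new_var = sum(w * (y - new_mean) ** 2 for w, y in zip(weights, ys))
    return new_mean, new_var

# Sanity check: for a linear map f(x) = 2x + 1 the transform is exact,
# so the mean becomes 2*mean + 1 and the variance becomes 4*var.
m, v = unscented_transform_1d(1.0, 0.5, lambda x: 2 * x + 1)
# m ≈ 3.0, v ≈ 2.0
```

Because only function evaluations of `f` are needed, the dynamics never have to be linearised or inverted explicitly, which is the property the abstract highlights.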


O. Zoeter, and T. Heskes. * Change point problems in linear dynamical systems. *In: Journal of Machine Learning Research, Vol: 6, Pages: 1999-2026, December, 2005.

We study the problem of learning two regimes (we have a normal and a prefault regime in mind) based on a training set of non-Markovian observation sequences. Key to the model is that we assume that once the system switches from the normal to the prefault regime it cannot recover and will eventually result in a fault. We refer to the particular setting as semi-supervised since we assume the only information given to the learner is whether a particular sequence ended with a stop (implying that the sequence was generated by the normal regime) or with a fault (implying that there was a switch from the normal to the fault regime). In the latter case the particular time point at which a switch occurred is not known. The underlying model used is a switching linear dynamical system (SLDS). The constraints in the regime transition probabilities result in an exact inference procedure that scales quadratically with the length of a sequence. Maximum a posteriori (MAP) parameter estimates can be found using an expectation maximization (EM) algorithm with this inference algorithm in the E-step. For long sequences this will not be practically feasible, and an approximate inference and an approximate EM procedure are called for. We describe a flexible class of approximations corresponding to different choices of clusters in a Kikuchi free energy with weak consistency constraints.


M.M. Lankhorst, and others. * Enterprise Architecture at Work: Modelling, Communication and Analysis. * Springer, 2005, ISBN 3540243712.

An enterprise architecture tries to describe and control an organisation's structure, processes, applications, systems and techniques in an integrated way. The unambiguous specification and description of components and their relationships in such an architecture requires a coherent architecture modelling language. Lankhorst and his co-authors present such an enterprise modelling language that captures the complexity of architectural domains and their relations and allows the construction of integrated enterprise architecture models. They provide architects with concrete instruments that improve their architectural practice. Since this alone is not enough, they additionally present techniques and heuristics for communicating with all relevant stakeholders about these architectures. As an architecture model is useful not only for providing insight into the current or future situation but can also be used to evaluate the transition from 'as-is' to 'to-be', the authors also describe analysis methods for assessing both the qualitative impact of changes to an architecture and the quantitative aspects of architectures, such as performance and cost issues. The modelling language and the other techniques presented have been proven in practice in many real-life case studies. This makes the book an ideal companion for enterprise IT or business architects in industry as well as for computer or management science students studying the field of enterprise architecture.

M.M. Lankhorst, L. van der Torre, H.A. (Erik) Proper, F. Arbab, F.S. de Boer, and M. Bonsangue. * Foundations (of ArchiMate). *In: Enterprise Architecture at Work: Modelling, Communication and Analysis, Berlin, Germany, EU, Edited by: M.M. Lankhorst. Pages: 47-66, Springer, 2005, ISBN 3540243712.


M.M. Lankhorst, L. van der Torre, H.A. (Erik) Proper, F. Arbab, and M.W.A. Steen. * Viewpoints and Visualisation. *In: Enterprise Architecture at Work: Modelling, Communication and Analysis, Berlin, Germany, EU, Edited by: M.M. Lankhorst. Pages: 147-190, Springer, 2005, ISBN 3540243712.


H.A. (Erik) Proper, S.J.B.A. (Stijn) Hoppenbrouwers, and G.E. Veldhuijzen van Zanten. * Communication of Enterprise Architectures. *In: Enterprise Architecture at Work: Modelling, Communication and Analysis, Berlin, Germany, EU, Edited by: M.M. Lankhorst. Pages: 67-82, Springer, 2005, ISBN 3540243712.


P. van Bommel, B. van Gils, H.A. (Erik) Proper, M. van Vliet, and Th.P. van der Weide. * The Information Market: Its Basic Concepts and Its Challenges. *In: Web Information Systems Engineering - WISE 2005, New York, New York, USA, Edited by: A.H.H. Ngu, M. Kitsuregawa, E.J. Neuhold, J.-Y. Chung, and Q.Z. Sheng. Lecture Notes in Computer Science, Vol: 3806, Pages: 577-583, November, Springer-Verlag, 2005, ISBN 3540300171.

This paper discusses the concept of information market. The authors of this paper have been involved in several aspects of information retrieval research. In continuing this research tradition we now take a wider perspective on this field and re-position it as a market where demand for information meets supply for information. The paper starts by exploring the notion of a market in general and is followed by a specialization of these considerations in the information market, where we will also position some of the existing work.


S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. * A Fundamental View on the Process of Conceptual Modeling. *In: Conceptual Modeling - ER 2005 - 24th International Conference on Conceptual Modeling, Lecture Notes in Computer Science, Vol: 3716, Pages: 128-143, June, 2005, ISBN 3540293892.

In an ongoing effort to better understand the process of creating conceptual models (in particular formal ones), we present a fundamental view of the process of modeling. We base this view on the idea that participants in such a process are involved in a deliberate and goal-driven effort to share and reconcile representations of their personal conceptions of (parts of) the world. This effort takes the shape of a modeling dialogue, involving the use of controlled language. We thus take a fundamental approach to subjective aspects of modeling, as opposed to traditional approaches which essentially consider models as objective entities. We position and present our initial theory of modeling, and briefly discuss how we intend to validate and further develop it.


S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. * Understanding the Requirements on Modelling Techniques. *In: 17th International Conference on Advanced Information Systems Engineering, CAiSE 2005, Porto, Portugal, EU, Edited by: O. Pastor, and J. Falcao e Cunha. Lecture Notes in Computer Science, Vol: 3520, Pages: 262-276, June, Springer-Verlag, 2005, ISBN 3540260951.

The focus of this paper is not on the requirements of an information system to be developed, but rather on the requirements that apply to the modelling techniques used during information system development. We claim that in past and present, many information systems modelling techniques have been developed without a proper understanding of the requirements that follow from the development processes in which these techniques are to be used. This paper provides a progress report on our research efforts to obtain a fundamental understanding of the requirements mentioned. We discuss the underlying research issues, the research approach we use, the way of thinking (weltanschauung) that will be employed in finding the answers, and some first results.


H.A. (Erik) Proper, A.A. Verrijn-Stuart, and S.J.B.A. (Stijn) Hoppenbrouwers. * Towards Utility-based Selection of Architecture-Modelling Concepts. *In: Proceedings of the Second Asia-Pacific Conference on Conceptual Modelling (APCCM2005), Newcastle, New South Wales, Australia, Edited by: S. Hartmann, and M. Stumptner. Conferences in Research and Practice in Information Technology Series, Vol: 42, Pages: 25-36, January, Australian Computer Society, Sydney, New South Wales, Australia, 2005, ISBN 1920682252.

In this paper we are concerned with the principles underlying the utility of modelling concepts, in particular in the context of architecture-modelling. Firstly, some basic concepts are discussed, in particular the relation between information, language, and modelling. Our primary area of application is the modelling of enterprise architectures and information system architectures, where the selection of concepts used to model different aspects very much depends on the specific concerns that need to be addressed. The approach is illustrated by a brief review of the relevant aspects of two existing frameworks for modelling of (software intensive) information systems and their architectures.

B. van Gils, H.A. (Erik) Proper, P. van Bommel, and Paul de Vrieze. * Transformation selection for aptness-based web retrieval. *In: Proceedings of the Sixteenth Australasian Database Conference (ADC2005), Newcastle, New South Wales, Australia, Edited by: H.E. Williams, and G. Dobbie. Conferences in Research and Practice in Information Technology Series, Vol: 39, Pages: 115-124, January, Australian Computer Society, Sydney, New South Wales, Australia, 2005, ISBN 192068221X.

A myriad of resources can be found on the Web today, and finding (topically) relevant resources for a given information need is a daunting task. Even if relevant resources can be found, they may not be apt for the searcher in a given context: some properties of the resource may be ``wrong'' for his current context. Such issues can often be resolved by means of transformations. In this paper we discuss an algorithm for selecting candidate transformations for a given situation and present our first experiences with this algorithm.

P. van Bommel, B. van Gils, H.A. (Erik) Proper, E.D. Schabell, M. van Vliet, and Th.P. van der Weide. * Towards an Information Market Paradigm. *In: Forum proceedings of the 17th Conference on Advanced Information Systems 2005 (CAiSE 2005), Edited by: O. Belo, J. Eder, O. Pastor, and J. Falcao e Cunha. Pages: 27-32, June, FEUP, Porto, Portugal, EU, 2005, ISBN 9727520782.

This paper discusses the concept of information market. The authors of this paper have been involved in several aspects of information retrieval research. In continuing this research tradition we now take a wider perspective on this field, and position it as a market where demand for information meets supply for information.

A.J.J. van Breemen, and J.J. Sarbo. * Surviving in the Bermuda Triangle of semeiosis. *In: Computing Anticipatory Systems (CASYS'2005), Pages: 5, 2005, extended abstract; to appear in the International Journal of Computing Anticipatory Systems, 2006.

What we think is part of reality, and at the same time at least partly determined by reality. The advent of knowledge engineering asks for a shift from lifeless representational and blind reductionist models towards a relational and teleological interpretation of cognition, in order to embed cognitive events in processes of meaning production or semeiosis. Such embedding is determined by the properties of perception (the senses) and the types of distinctions that can be made by semeiosis. The selection of elements in such processes that are formalizable asks for a model in which the phases that make up the process, the decision moments, and their degrees of freedom are clearly indicated. In this paper we will outline such a model for two levels: the level of sign recognition and the level of response to a sign. The decision moments will only be indicated. The practical importance of this structure lies in its potential to be interpreted as a methodology for (formal) specification.

M.A.J. van Gerven, P.J.F. Lucas, and Th.P. van der Weide. * A Qualitative Characterisation of Causal Independence Models using Boolean Polynomials. *In: Symbolic and Qualitative Approaches to Reasoning with Uncertainty (Proc of the 8th ECSQARU 2005), Vol: LNAI 3571, Pages: 244-256, 2005.

Causal independence models offer a high level starting point for the design of Bayesian networks but are not maximally exploited as their behaviour is often unclear. One approach is to employ qualitative probabilistic network theory in order to derive a qualitative characterisation of causal independence models. In this paper we exploit polynomial forms of Boolean functions to systematically analyse causal independence models, giving rise to the notion of a polynomial causal independence model. The advantage of the approach is that it allows understanding qualitative probabilistic behaviour in terms of algebraic structure.

Perry Groot, Heiner Stuckenschmidt, and Holger Wache. * Approximating Description Logic Classification for Semantic Web Reasoning. *In: The Semantic Web: Research and Applications: Second European Semantic Web Conference, Vol: 3532, Pages: 318-332, Springer Verlag, 2005.

In many application scenarios, the use of the Web ontology language OWL is hampered by the complexity of the underlying logic that makes reasoning in OWL intractable in the worst case. In this paper, we address the question whether approximation techniques known from the knowledge representation literature can help to simplify OWL reasoning. In particular, we carry out experiments with approximate deduction techniques on the problem of classifying new concept expressions into an existing OWL ontology, using existing ontologies on the Web. Our experiments show that a direct application of approximate deduction techniques as proposed in the literature in most cases does not lead to an improvement, and that these methods also suffer from some fundamental problems.

T. Heskes, and B. de Vries. * Incremental utility elicitation for adaptive personalization. *In: BNAIC 2005, Proceedings of the Seventeenth Belgium-Netherlands Conference on Artificial Intelligence, Edited by: K. Verbeeck, K. Tuyls, A. Nowé, B. Manderick, and B. Kuijpers. Pages: 127-134, Koninklijke Vlaamse Academie van België voor Wetenschappen en Kunsten, Brussels, 2005, keywords - utility elicitation, Bayesian networks, machine learning.

Medical devices often contain many tunable parameters. The optimal setting of these parameters depends on the patient's utility function, which is often unknown. This raises two questions. First, how should we optimize the parameters given partial information about the patient's utility? And secondly, what questions do we ask to efficiently elicit this utility information? In this paper, we present a coherent probabilistic decision-theoretic framework to answer these questions. We illustrate the potential of this framework on a toy problem and discuss directions for future research.

A.J. Hommersom, P.J.F. Lucas, and P. van Bommel. * Argumentation Systems for History-Based Construction of Medical Guidelines. *In: Proceedings of the Seventeenth Belgium-Netherlands Conference on Artificial Intelligence (BNAIC-05), 2005.

Medical guidelines are hard to formalise because of their inherent fragmentary nature. Because of this, a formalism where details of guideline text are omitted is justified. In this paper, we describe medical histories and expectations and illustrate them with a number of examples from the Dutch breast cancer guideline. We applied our approach to an assumption-based argumentation system for its reasoning facilities to support guideline construction. Examples concerning treatment selection in the breast cancer guideline are discussed.

A.J. Hommersom, and P.J.F. Lucas. * Automated Theorem Proving for Quality-checking Medical Guidelines. *, 2005.

Requirements about the quality of medical guidelines can be represented using schemata borrowed from the theory of abductive diagnosis, using temporal logic to model the time-oriented aspects expressed in a guideline. Previously we have shown that these requirements can be verified using interactive theorem proving techniques. In this paper, we investigate how this approach can be mapped to the facilities of a resolution-based theorem prover, Otter, and a complementary program that searches for finite models of first-order statements, MACE. It is shown that the reasoning that is required for checking the quality of a guideline can be mapped to such fully automated theorem-proving facilities. The medical quality of an actual guideline concerning diabetes mellitus 2 is investigated in this way.

A.J. Hommersom, P.J.F. Lucas, P. van Bommel, and Th.P. van der Weide. * A History-Based Algebra for Quality-Checking Medical Guidelines. *In: Artificial Intelligence in Medicine, 10th Conference on Artificial Intelligence in Medicine, AIME 2005, LNCS, Vol: 3581, Springer-Verlag, 2005.

In this paper, we propose a formal theory to describe the development of medical guideline text in detail, but at a sufficiently high level of abstraction, in such a way that essential elements of the guidelines are highlighted. We argue that because of the fragmentary nature of medical guidelines, an approach where details in guideline text are omitted is justified. The different aspects of a guideline are illustrated and discussed by a number of examples from the Dutch breast cancer guideline. Furthermore, we discuss how the theory can be used to detect flaws in the guideline text at an early stage in the guideline development process and consequently can be used to improve the quality of medical guidelines.

A.J. Hommersom, J.-J.Ch. Meyer, and E.P. de Vink. * Toward Reasoning about Security Protocols: A Semantic Approach. *In: Logic and Communication in Multi-Agent Systems, LCMAS'04, Edited by: W. van der Hoek, A. Lomuscio, E.P. de Vink, and M. Wooldridge. ENTCS, Vol: 126, Pages: 53-75, Elsevier, 2005.

We present a model-theoretic approach for reasoning about security protocols, applying recent insights from dynamic epistemic logics. This enables us to describe exactly the subsequent epistemic states of the agents participating in the protocol, using Kripke models and transitions between these based on updates of the agents' beliefs associated with steps in the protocol. As a case study we will consider the SRA Three Pass protocol.

S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. * Fact Calculus: Using ORM and Lisa-D to Reason About Domains. *In: On the Move to Meaningful Internet Systems 2005: OTM Workshops - OTM Confederated International Workshops and Posters, AWeSOMe, CAMS, GADA, MIOS+INTEROP, ORM, PhDS, SeBGIS, SWWS, and WOSE 2005, Agia Napa, Cyprus, EU, Edited by: R. Meersman, Z. Tari, and P. Herrero. Lecture Notes in Computer Science, Vol: 3762, Pages: 720-729, October/November, Springer-Verlag, 2005, ISBN 3540297391.

We propose to use ORM and Lisa-D as means to formally reason about domains. Conceptual rule languages such as Lisa-D, RIDL and ConQuer allow for the specification of rules in a semi-natural language format that can more easily be understood by domain experts than languages such as predicate calculus, Z or OCL. If one would indeed be able to reason about properties of domains in terms of Lisa-D expressions, then this reasoning would be likely to be better accessible to people without a background in formal mathematics, such as "the average" domain expert. A potential application domain for such reasoning would be the field of business rules. If we can reason about business rules formulated in a semi-natural language format, the formal equivalence of (sets of) business rules (i.e. various paraphrasings) can be discussed with domain experts in a language and a fashion that is familiar to them.


S.J.B.A. (Stijn) Hoppenbrouwers, and H.A. (Erik) Proper. * Formal Modelling as a Grounded Conversation. *In: Proceedings of the 10th International Working Conf, Pages: 139-155, June, 2005.

As part of an ongoing, broader theoretical study concerning a communication/conversation perspective on information system development, we focus in this paper on a specific sort of conversation in IS modelling: conversations for formal modelling, which are to bridge the gap between informal (NL-based) and formal (mathematics-based) representations and interpretations. We provide a communication-based analysis of the formal modelling process, and discuss why it is crucial that the (formal) structures in the various kinds of models are somehow grounded in the structures of agreement/commitment that underlie the development conversations. We explain how looking at modelling as communicative behaviour may help achieve grounded models, thereby improving their validity in context.

S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. * Towards explicit strategies for modeling. *In: Proceedings of the Workshop on Evaluating Modeling Methods for Systems Analysis and Design (EMMSAD`05), held in conjunction with the 17th Conference on Advanced Information Systems 2005 (CAiSE 2005), Edited by: T.A. Halpin, K. Siau, and J. Krogstie. Pages: 485-492, FEUP, Porto, Portugal, EU, 2005, ISBN 9727520774.

We present an initial framework resulting from our ongoing research concerning modelling strategies. Our approach is rooted in a subjectivist, communication-based view on modelling. Under this approach, models are viewed as the result of modelling dialogues, which are a specialized sub-type of the diverse conversations that constitute a system development conversation at large. By focussing on the process of modelling instead of properties of models or modelling languages, we expect, eventually, to be able to better understand and deal with some currently problematic aspects of modelling, in particular model validation in context. We sketch plans for an environment for studying modelling conversations and strategies.

R. Jurgelenaite, P.J.F. Lucas, and T. Heskes. * Exploring the Noisy Threshold Function in Designing Bayesian Networks. *In: AI-2005 the Twenty-fifth SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Edited by: M. Bramer, F. Coenen, and T. Allen. Pages: 133-146, Springer-Verlag, 2005.

Causal independence modeling is a well-known method both for reducing the size of probability tables and for explaining the underlying mechanisms in Bayesian networks. Many Bayesian network models incorporate causal independence assumptions; however, only the noisy OR and noisy AND, two examples of causal independence models, are used in practice. Their underlying assumption that either at least one cause, or all causes together, give rise to an effect, however, seems unnecessarily restrictive. In the present paper a new, more flexible, causal independence model is proposed, based on the Boolean threshold function. A connection is established between conditional probability distributions based on the noisy threshold model and Poisson binomial distributions, and the basic properties of this probability distribution are studied in some depth. The successful application of the noisy threshold model in the refinement of a Bayesian network for the diagnosis and treatment of ventilator-associated pneumonia demonstrates the practical value of the presented theory.
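
The connection the abstract states, that a noisy threshold CPT is governed by a Poisson binomial distribution, can be sketched directly: each present cause activates independently with its own probability, and the effect occurs when at least k activations fire. The following is an illustrative sketch with made-up parameter values, not the paper's pneumonia network:

```python
def poisson_binomial_pmf(probs):
    """PMF of the number of successes among independent Bernoulli trials
    with (possibly different) success probabilities, built by dynamic
    programming over the trials."""
    pmf = [1.0]
    for p in probs:
        new = [0.0] * (len(pmf) + 1)
        for k, mass in enumerate(pmf):
            new[k] += mass * (1 - p)      # this trial fails
            new[k + 1] += mass * p        # this trial succeeds
        pmf = new
    return pmf

def noisy_threshold(cause_states, activation_probs, k):
    """P(effect | causes) under the noisy threshold model: the effect
    occurs iff at least k of the present causes' activations fire."""
    active = [p for present, p in zip(cause_states, activation_probs) if present]
    pmf = poisson_binomial_pmf(active)
    return sum(pmf[k:])

# Illustrative activation probabilities for three causes.
probs = [0.9, 0.7, 0.5]
p_effect = noisy_threshold((True, True, True), probs, 2)
# k=1 recovers the noisy OR, k=len(causes) recovers the noisy AND.
```

Varying k between 1 and the number of causes spans the spectrum between the noisy OR and noisy AND that the abstract describes as unnecessarily restrictive endpoints.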

J. Nabukenya. * Collaboration Engineering for Policy Making: A Theory of Good Policy in a Collaborative Action. *In: Proceedings of the 12th Doctoral Consortium, held in conjunction with the 17th Conference on Advanced Information Systems Engineering (CAiSE’05), Pages: 54-61, June, 2005.

This paper is concerned with the potential application of Collaboration Engineering (CE) to the field of policy-making. We claim that CE will lead to improved policy-making processes (PMPs), i.e., to a higher quality of the policies being decided on. Policy-making involves several actors with divergent interests, yet a policy can only be realized on the basis of collaboration in which the actors involved contribute the resources needed. However, the analysis required to realize a "good policy" in a collaborative PMP poses interesting challenges: what does it mean for a policy to be good in a collaborative effort? The aim of our research is therefore to develop a theory to improve the quality of policies and of the collaborative processes that produce them.

H.A. (Erik) Proper, S.J.B.A. (Stijn) Hoppenbrouwers, and Th.P. van der Weide. * A Fact-Oriented Approach to Activity Modeling. *In: On the Move to Meaningful Internet Systems 2005: OTM Workshops - OTM Confederated International Workshops and Posters, AWeSOMe, CAMS, GADA, MIOS+INTEROP, ORM, PhDS, SeBGIS, SWWS, and WOSE 2005, Agia Napa, Cyprus, EU, Edited by: R. Meersman, Z. Tari, and P. Herrero. Lecture Notes in Computer Science, Vol: 3762, Pages: 666-675, October/November, Springer-Verlag, 2005, ISBN 3540297391.

In this paper we investigate the idea of using an ORM model as a starting point for deriving an activity model, essentially providing an activity view on the original ORM model. When producing an ORM model of an inherently active domain, the resulting ORM model can provide an appropriate base to start out from. We illustrate this basic idea by means of a running example. Much work remains to be done, but the results so far look promising.


H.A. (Erik) Proper, and Th.P. van der Weide. * Schema Equivalence as a Counting Problem. *In: On the Move to Meaningful Internet Systems 2005: OTM Workshops - OTM Confederated International Workshops and Posters, AWeSOMe, CAMS, GADA, MIOS+INTEROP, ORM, PhDS, SeBGIS, SWWS, and WOSE 2005, Agia Napa, Cyprus, EU, Edited by: R. Meersman, Z. Tari, and P. Herrero. Lecture Notes in Computer Science, Vol: 3762, Pages: 730-739, October/November, Springer-Verlag, 2005, ISBN 3540297391.

In this paper we introduce some terminology for comparing the expressiveness of conceptual data modeling techniques, such as ER, NIAM, PSM and ORM, that are finitely bounded by their underlying domains. Next we consider schema equivalence and discuss the effects of the sizes of the underlying domains. This leads to the introduction of the concept of finite equivalence, which may serve as a means to a better understanding of the fundamentals of modeling concepts. We give some examples of finite equivalence and inequivalence in the context of ORM.


J.J. Sarbo. * Peircean proto-signs. *, Edited by: D.M. Dubois. Pages: 4, 2005, invited paper; extended abstract; full paper to appear in AIP Conference Proceedings.

Human knowledge is intentional, as opposed to 'knowledge' represented by the computer, which is syntactic. The premise of this paper is that, nevertheless, a process model of cognition can be defined which is isomorphic and analogous to Peirce's 9-adic classification of signs. The result of the cognition of a percept is a judgment, which is a full meaning, as opposed to the computations of the model of cognitive processing, which are meaningful in the mathematical sense only. The assumption of this paper is that such computations can be interpreted as unfinished signs that are in a process of becoming signs. Such signs are called in this paper 'proto-signs'. An advantage of relating the computational model of cognition to the Peircean concepts lies in the model's potential for the definition of a 'natural' representation of knowledge, a representation which can be more easily interpreted by the human user than the traditional formal ones.

S. Visscher, P.J.F. Lucas, K. Schurink, and M.J.M. Bonten. * Improving the Therapeutic Performance of a Medical Bayesian Network using Noisy Threshold Functions. *In: 6th International symposium on Biological and Medical Data Analysis (ISBMDA), LNCS 3337, Pages: 161-172, 2005.

Treatment management in critically ill patients needs to be efficient, as delay in treatment may give rise to deterioration in the patient's condition.

O. Zoeter, and T. Heskes. * Gaussian quadrature based expectation propagation. *, Edited by: Z. Ghahramani, and R. Cowell. Society for Artificial Intelligence and Statistics, 2005.

We present a general approximation method for Bayesian inference problems. The method is based on Expectation Propagation (EP). Projection steps in the EP iteration that cannot be done analytically are done using Gaussian quadrature. By identifying a general form in the projections, the only quadrature rules that are required are for exponential family weight functions. The corresponding cumulant and moment generating functions can then be used to automatically derive the necessary quadrature rules. In this article the approach is restricted to approximating families that factorize to a product of one-dimensional families. The final algorithm has interesting similarities with particle filtering algorithms. We discuss these, and also discuss the relationship with variational Bayes and Laplace propagation. Experimental results are given for an interesting model from mathematical finance.
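The quadrature-based projection step described above can be illustrated with a one-dimensional moment-matching computation. The sketch below is my own illustration, not the paper's code: it projects a "tilted" distribution p(x) ∝ N(x; 0, 1) · sigmoid(x), whose moments have no closed form, onto the Gaussian with the same moments, computing the integrals by Gauss-Hermite quadrature.

```python
# Illustrative sketch (not the paper's code): one EP-style projection step
# computed with Gauss-Hermite quadrature. We moment-match the tilted
# distribution p(x) proportional to N(x; 0, 1) * sigmoid(x).
import math
import numpy as np

def project_to_gaussian(factor, n_points=40):
    """Moment-match p(x) prop. to N(x;0,1)*factor(x); return (Z, mean, variance)."""
    # hermgauss gives nodes/weights for the weight function exp(-x^2);
    # the substitution x -> sqrt(2)*x turns the sums into expectations
    # under the standard normal N(0, 1).
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    x = math.sqrt(2.0) * nodes
    w = weights / math.sqrt(math.pi)
    f = factor(x)
    Z = np.sum(w * f)                     # zeroth moment (normalizer)
    mean = np.sum(w * x * f) / Z          # first moment
    var = np.sum(w * x**2 * f) / Z - mean**2
    return Z, mean, var

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
Z, m, v = project_to_gaussian(sigmoid)
print(Z)  # 0.5 by symmetry, since sigmoid(x) + sigmoid(-x) = 1
```

As the abstract notes, once the factor belongs to (or is weighted by) an exponential family, the same change-of-variables device yields the required quadrature rules automatically from the family's moment generating function.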

S. Visscher, P.J.F. Lucas, K. Schurink, and M.J.M. Bonten. * Using a Bayesian-network Model for the Analysis of Clinical Time-series Data. *In: Artificial Intelligence in Medicine (Proceedings of AIME 2005), LNAI 3581, Pages: 48-52, 2005.

Time is an essential element in the clinical management of patients as disease processes develop in time. Not surprisingly, the amount of temporal clinical information that is being collected to understand what is happening in the patient is therefore continuously increasing. A typical example of a disease process where time is considered important is the development of ventilator-associated pneumonia (VAP) during the stay in the ICU. A Bayesian network was developed previously to support clinicians in the diagnosis and treatment of VAP in the ICU. In the research described in this paper, we have investigated whether this Bayesian network can also be used to analyse the temporal data collected in the ICU for patterns indicating development of VAP. In addition, it was studied whether the Bayesian network was able to suggest appropriate antimicrobial treatment. A temporal database with over 17700 patient days was used for this purpose.

J. Nabukenya. * Collaboration Engineering for Policy Making: A Theory of Good Policy in a Collaborative Action. *In: Proceedings of the 15th European Conference on Information Systems, Pages: 54-61, May, Radboud University Nijmegen, 2005.

This paper is concerned with the potential application of Collaboration Engineering (CE) to the field of policy-making. We claim that CE will lead to improved policy-making processes (PMPs), i.e., to a higher quality of the policies being decided on. Policy-making involves several actors with divergent interests, yet a policy can only be realized on the basis of collaboration in which the actors involved contribute the resources needed. However, the analysis required to realize a "good policy" in a collaborative PMP poses interesting challenges: what does it mean for a policy to be good in a collaborative effort? The aim of our research is therefore to develop a theory to improve the quality of policies and of the collaborative processes that produce them.

J.J. Sarbo, and J.I. Farkas. * Cognition and Representation. *2005, Lecture Notes.

S.J.B.A. (Stijn) Hoppenbrouwers, Th.P. van der Weide, and H.A. (Erik) Proper. * Dealing with Uncertainty in Information Modelling. *Technical report: ICIS-R05013, Institute for Information and Computing Sciences, Radboud University, Nijmegen, The Netherlands, EU, 2005.

We present ongoing research concerning a communication-based approach to information modelling. The general goal of our research is to understand and support (contextualized) modelling dialogues rather than the models that result from these dialogues or the modelling languages in which the models are expressed. We take the point of view that information modelling dialogues are subject to the same kinds of uncertainty that occur in any communication between human agents. This uncertainty is for a large part due to the contextualized nature of information models. By focusing on dialogues and guiding them through strategies for dealing with uncertainty, we hope to achieve better, properly contextualized, information models. We present an analysis of uncertainty in information modelling, and give an example of a viable approach to one particular type of uncertainty reduction in information modelling. We work towards a functional design for an interactive modelling environment for testing our theories.

J.J. Sarbo, J.I. Farkas, and A.J.J. van Breemen. * Natural Grammar. *Technical report: ICIS-R05032, Radboud University, Nijmegen, The Netherlands, EU, 2005.

What is 'natural' in natural language? Can that property be captured formally in a type of grammar? The premise of this paper is that a natural grammar may indeed exist. Our research has additionally revealed that such a grammar can be given a naive logical as well as a semiotic interpretation, which are isomorphic, indicating that naive logic and natural language, as different types of knowledge, can have a uniform, semiotically based representation. Natural grammar bears similarity to dependency-based formalisms like Link Grammar; its rule types define an induced triadic classification of language concepts analogous to X-bar theory.

E.D. Schabell, and B. van Gils. * Implementing Vimes - the broker component. *Technical report: ICIS-R05028, Radboud University Nijmegen, 2005.

This document discusses the broker component of the Vimes retrieval architecture, part of the research project Profile Based Retrieval Of Networked Information Resources (PRONIR). It provides an overview of the development process, from requirements investigation with use cases to the actual design and implementation.

V.E. van Reijswoud, R. van Alteren, and H.A. (Erik) Proper. * ICT in ontwikkelingslanden. *In: Informatie, Nr: 4, Vol: 47, Pages: 10-16, 2005, In Dutch.


S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and A.I. Bleeker. * Modelleertalen als communicatiemiddel in architectuurprocessen. *In: Informatie & Architectuur, Nr: 1, Vol: 1, Pages: 4-6, 2005, In Dutch.

Within architecture processes, various modelling languages are used to record and communicate architecture descriptions. Obvious examples of such modelling languages are UML, Testbed, and the ArchiMate language. However, in practice natural language, PowerPoint pictures with explanations, etc., are used at least as often for such purposes. We therefore explicitly regard these latter forms as modelling languages as well. The difference is that in languages such as UML the syntactic freedom is strongly restricted, whereas in, for example, natural language or freely sketched illustrations this freedom is considerably greater.

S.J. Overbeek, and S. van Middendorp. * Architecture and the Network Organization: Inextricably Bound Up. *December, 2005.

Contemporary organizations experience difficulties in converting their strategies into concrete actions, due to tensions between old and new paradigms. A paradigm is a compilation of values, starting points, methods, and knowledge that are part of a way of acting. In practice there is always a small group trying to explore a new paradigm. When a larger group starts to consider this paradigm important, the actions needed to make a well-prepared shift to the new paradigm can then be devised.


S.J. Overbeek, and S. van Middendorp. * Digitale Architectuur en de Netwerkorganisatie: onlosmakelijk verbonden. *In: Informatie & Architectuur, Nr: 4, Vol: 1, Pages: 8-12, December, 2005, In Dutch.

Many organizations struggle to convert their fine strategies into concrete actions. This is partly due to the tension between old and new paradigms. A paradigm is a collection of values, starting points, methods, and knowledge that belong to a way of acting. In practice there is always a small group exploring a new paradigm. When a larger group starts to consider this paradigm important, the actions needed to make a well-prepared shift to the new paradigm can then be devised.

S.J. Overbeek, S. van Middendorp, and D.B.B. Rijsenbrij. * De Digitale Werkruimte, een nieuw architectuur artefact. *In: Informatie & Architectuur, Nr: 2, Vol: 1, Pages: 28-31, 2005, In Dutch.

In the digital age, also referred to as the information age, social and technological developments unfold at high speed. These developments have a great impact on organizational structures and on the work processes of employees. To respond quickly and adequately to this rapidly changing environment, the employee's workspace will also have to be digitized. A well-designed digital workspace becomes a requirement. To let the development process of a digital workspace proceed in an orderly and manageable fashion, an architectural approach is unavoidable.

S.J. Overbeek, S. van Middendorp, and D.B.B. Rijsenbrij. * De Digitale Werkruimte, noodzaak voor een moderne manager. *In: Proceedings of the 7th National Architecture Congress, Nieuwegein, The Netherlands, EU, November, 2005, In Dutch.

In the digital age, also known as the information age, social and technological developments unfold at breakneck speed. These developments greatly influence the functioning of enterprises, in particular the manager's set of tasks. To respond quickly and adequately to a hardly predictable future, the manager's workspace will also have to be digitized. A well-designed digital workspace becomes an absolute "must" for the manager who wants to stay at the helm.

S.J. Overbeek, S. van Middendorp, and D.B.B. Rijsenbrij. * The Digital Workspace, in the financial sector. *In: IT Management Select, Nr: 4, Vol: 11, Pages: 38-51, December, 2005, ISBN 9012108209.

In the digital age, also known as the information age, social and technological developments take place at high speed. These developments greatly affect the functioning of businesses, in particular the manager's job responsibilities. In order to enable a fast and adequate response to a hardly predictable future, the workspace of the manager will also have to be digitized. A Binck case study.