ICIS Research Publications



Journal

B. van Gils, H.A. (Erik) Proper, P. van Bommel, and Th.P. van der Weide. Typing and transformational effects in complex information supply. In: International Journal of Cooperative Information Systems, Nr: 2, Vol: 16, Pages: 229-270, June, 2007.

Information plays an increasingly important role in our lives. Often we retrieve this information by querying the Web: data resources found on the Web may provide the information that we are looking for. This implies that the Web may be seen as an information market: authors supply information and searchers may find it. In this article we present a formal framework for the syntactic aspects of this information market. We explore the information landscape using a modeling approach. An important part of this model is a (syntactic) framework for transformations, which allows us to deal with the heterogeneity of the data resources found on the Web. Last but not least, we outline how our framework, which in essence focuses on the data resources on the Web, may lead to a better understanding of how information is supplied via the Web. For this we use an example from the field of information retrieval.

[ PDF ] [ Bibtex ]

F. Arbab, F.S. de Boer, M. Bonsangue, and M.M. Lankhorst. Integrating Architectural Models. In: Enterprise Modelling and Information Systems Architectures, Nr: 1, Vol: 2, Pages: 40-57, January, 2007.

The diversity of architectural models in enterprise architecture is a problem for their integration. In this paper we distinguish three kinds of models from each other and their visualization, and we illustrate how the distinctions can be used for model integration within the architectural approach. Symbolic models express properties of architectures of systems, semantic models interpret the symbols of symbolic models, and subjective models are purposely abstracted conceptions of a domain. Building on results obtained in the ArchiMate project, we illustrate how symbolic models can be integrated using an architectural language, how integrated models can be updated using the distinction between symbolic models and their visualization, and how semantic models can be integrated using a new kind of enterprise analysis called semantic analysis. We also suggest that subjective models can be integrated using techniques from natural language analysis.

[ PDF ] [ Bibtex ]

A.J. Hommersom, P. Groot, P.J.F. Lucas, M. Balser, and J. Schmitt. Combining Task Execution and Background Knowledge for the Verification of Medical Guidelines. In: Knowledge-Based Systems, Vol: 20, July, 2007.

The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines we propose a formalization of criteria for good-practice medicine that a guideline should comply with. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2 using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines.

[ PDF ] [ Bibtex ]

S.J. Overbeek. Improving Efficiency and Effectiveness of Knowledge Exchange between Knowledge Workers. In: ICFAI Journal of Knowledge Management, Nr: 1, Vol: 5, Pages: 24-38, January, 2007.

Information technology increasingly influences the way we work and live. Contemporary businesses demonstrate significant concern about how increasing amounts of available information can be converted into knowledge. The increasing need for new knowledge, for instance in developing the new services an organization offers its customers in order to remain competitive, is but one example of how important the dissemination of knowledge within organizations is. The growing proportion of people working in the knowledge economy underscores these developments. The research discussed in this paper focuses on improving the efficiency and effectiveness of knowledge exchange between knowledge workers by means of automated support, so that the dissemination of knowledge within organizations improves.

[ PDF ] [ Bibtex ]

J. Ackermann. Using a Specification Data Model for Specification of Black-Box Software Components. In: Enterprise Modelling and Information Systems Architectures, Nr: 1, Vol: 2, Pages: 3-13, 2007.

[ Missing PDF ] [ Bibtex ]

C.A. Albers, T. Heskes, and H.J. Kappen. Haplotype inference in general pedigrees using the cluster variation method. In: Genetics, Vol: 177, Pages: 1101-1118, 2007.

We present CVMHAPLO, a probabilistic method for haplotyping in general pedigrees with many markers. CVMHAPLO reconstructs the haplotypes by assigning in every iteration a fixed number of the ordered genotypes with the highest marginal probability, conditioned on the marker data and ordered genotypes assigned in previous iterations. CVMHAPLO makes use of the cluster variation method (CVM) to efficiently estimate the marginal probabilities. We focused on single-nucleotide polymorphism (SNP) markers in the evaluation of our approach. In simulated data sets where exact computation was feasible, we found that the accuracy of CVMHAPLO was high and similar to that of maximum-likelihood methods. In simulated data sets where exact computation of the maximum-likelihood haplotype configuration was not feasible, the accuracy of CVMHAPLO was similar to that of state-of-the-art Markov chain Monte Carlo (MCMC) maximum-likelihood approximations when all ordered genotypes were assigned, and higher when only a subset of the ordered genotypes was assigned. CVMHAPLO was faster than the MCMC approach and provided more detailed information about the uncertainty in the inferred haplotypes. We conclude that CVMHAPLO is a practical tool for the inference of haplotypes in large complex pedigrees.

[ PDF ] [ Bibtex ] [ External URL ]

B. Bakker, and T. Heskes. Learning and approximate inference in dynamic hierarchical models. In: Computational Statistics & Data Analysis, Vol: 52, Pages: 821-839, 2007.

A new variant of the dynamic hierarchical model (DHM) that describes a large number of parallel time series is presented. The separate series, which may be interdependent, are modeled through dynamic linear models (DLMs). This interdependence is included in the model through the definition of a 'top-level' or 'average' DLM. The model features explicit dependences between the latent states of the parallel DLMs and the states of the average model, and thus links the many parallel time series to each other. However, the combination of dependences within each time series and dependences between the different DLMs makes the computation time required for exact inference cubic in the number of parallel time series, which is unacceptable for practical tasks involving large numbers of parallel time series. Therefore, two methods for fast, approximate inference are proposed: a variational approximation and a factorial approach. Under these approximations, inference can be performed in linear time, and it still features exact means. Learning is implemented through maximum-likelihood (ML) estimation of the model parameters, realized through an expectation-maximization (EM) algorithm with approximate inference in the E-step. Examples of learning and forecasting on two data sets show that the addition of direct dependences has a 'smoothing' effect on the evolution of the states of the individual time series and leads to better prediction results. The use of approximate instead of exact inference is further shown not to lead to inferior results on either data set.

[ PDF ] [ Bibtex ] [ External URL ]
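As background for the entry above: each of the parallel series is a dynamic linear model, for which filtering is the basic inference step. Below is a minimal sketch of a Kalman filter for the simplest DLM (the local-level model); the paper's actual contribution, coupling many such DLMs through an 'average' DLM with linear-time approximate inference, is not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def local_level_filter(y, q, r, m0=0.0, p0=1e6):
    """Kalman filtering for the local-level DLM:
       state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
       observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    Returns the filtered means and variances of x_t."""
    m, p = m0, p0
    means, variances = [], []
    for yt in y:
        p_pred = p + q                # predict: state variance grows by q
        k = p_pred / (p_pred + r)     # Kalman gain
        m = m + k * (yt - m)          # correct the predicted mean
        p = (1 - k) * p_pred         # filtered variance
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=50))        # latent random walk
y = truth + rng.normal(scale=0.5, size=50)    # noisy observations
means, _ = local_level_filter(y, q=1.0, r=0.25)
print(np.abs(means - truth).mean())           # small tracking error
```

Exact inference in the coupled model would have to filter over the joint state of all series at once, which is where the cubic cost in the number of series arises.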

P. van Bommel, B. van Gils, H.A. (Erik) Proper, M. van Vliet, and Th.P. van der Weide. Value and the information market. In: Data and Knowledge Engineering, Nr: 1, Vol: 61, Pages: 153-175, April, 2007.

In this paper we explore how (micro)economic theory can be used to analyze and model the exchange of information on the Web. More specifically, we consider searchers for information who engage in transactions on the Web. Searchers will engage in web transactions only if they gain something in such a transaction. To this end we develop a formal model for markets, based on the notions of value and transaction. This model enables us to examine transactions on an information market. In this market we have a dual view on transactions, creating a dichotomy of transactors and transactands.

[ PDF ] [ Bibtex ] [ External URL ]

M.A.J. van Gerven, R. Jurgelenaite, B.G. Taal, T. Heskes, and P.J.F. Lucas. Predicting carcinoid heart disease with the noisy-threshold classifier. In: Artificial Intelligence in Medicine, Vol: 40, Pages: 45-55, May, 2007.

Objective: To predict the development of carcinoid heart disease (CHD), which is a life-threatening complication of certain neuroendocrine tumors. To this end, a novel type of Bayesian classifier, known as the noisy-threshold classifier, is applied. Materials and methods: Fifty-four cases of patients who suffered from a low-grade midgut carcinoid tumor, of whom 22 developed CHD, were obtained from the Netherlands Cancer Institute (NKI). Eleven attributes that are known at admission have been used to classify whether the patient develops CHD. Classification accuracy and area under the receiver operating characteristics (ROC) curve of the noisy-threshold classifier are compared with those of the naive-Bayes classifier, logistic regression, the decision-tree learning algorithm C4.5, and a decision rule formulated by an expert physician. Results: The noisy-threshold classifier showed the best classification accuracy of 72% correctly classified cases, although differences were significant only for logistic regression and C4.5. An area under the ROC curve of 0.66 was attained for the noisy-threshold classifier, which equaled that of the physician's decision rule. Conclusions: The noisy-threshold classifier performed favorably compared with other state-of-the-art classification algorithms, and as well as a decision rule formulated by the physician. Furthermore, the semantics of the noisy-threshold classifier make it a useful machine-learning technique in domains where multiple causes influence a common effect.

[ PDF ] [ Bibtex ] [ External URL ]
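As background for the entry above: the noisy-threshold model generalizes the noisy-OR gate by letting the effect occur once at least k of the present causes independently 'fire'. A minimal sketch of that decision rule, assuming the activation probabilities have already been learned; the function names are ours, and this illustrates the model family rather than the authors' implementation.

```python
import numpy as np

def prob_at_least_k(p, k):
    """P(at least k of the independent Bernoulli causes fire),
    via the standard Poisson-binomial dynamic program."""
    dist = np.zeros(len(p) + 1)
    dist[0] = 1.0
    for pi in p:
        # Convolve the running count distribution with one Bernoulli(pi).
        dist[1:] = dist[1:] * (1 - pi) + dist[:-1] * pi
        dist[0] *= 1 - pi
    return dist[k:].sum()

def noisy_threshold_predict(x, activation, k):
    """P(effect | binary attributes x) under a noisy-threshold model:
    the effect occurs iff at least k present causes fire.
    `activation[i]` is the (assumed, pre-learned) firing probability."""
    present = [activation[i] for i, xi in enumerate(x) if xi == 1]
    return prob_at_least_k(present, k)

# Toy usage: four observed risk factors, threshold 2.
print(noisy_threshold_predict([1, 1, 0, 1], [0.7, 0.4, 0.9, 0.5], k=2))
```

Comparing the resulting probability with 0.5 yields a classifier; in the study the parameters would be estimated from the patient data.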

M.A.J. van Gerven, F.J. Diez, B.G. Taal, and P.J.F. Lucas. Selecting treatment strategies with dynamic limited-memory influence diagrams. In: Artificial Intelligence in Medicine, Nr: 3, Vol: 40, Pages: 171-186, 2007.

[ Missing PDF ] [ Bibtex ]

P. Groot, C. Gilissen, and M. Egmont-Petersen. Error Probabilities for Local Extrema in Gene Expression Data. In: Pattern Recognition Letters, Nr: 16, Vol: 28, Pages: 2133-2142, 2007.

Current approaches for the prediction of functional relations from gene expression data often do not have a clear methodology for extracting features and are not accompanied by a clear characterisation of their performance in terms of the inherent noise present in such data sets. Without such a characterisation it is unclear how to focus on the most probable functional relations present. In this article, we start from the fundamental theory of scale-space for obtaining features (i.e., local extrema) from gene expression profiles. We show that, under the assumption of Gaussian distributed noise, a repeatedly measured local extremum behaves like a bivariate Gaussian distribution. Furthermore, the error of not re-observing a local extremum is phrased in terms of the integral over the tails of this bivariate Gaussian distribution. Using integration techniques developed in the 1950s, we demonstrate how to compute these error probabilities exactly.

[ Missing PDF ] [ Bibtex ]

F.A. Grootjen, and Th.P. van der Weide. Information Parallax. In: International Journal of Intelligent Information Technologies, Nr: 3, Vol: 1, Pages: 34-55, March, 2007.

To effectively use and exchange information among AI systems, a formal specification of the representation of their shared domain of discourse - called an ontology - is indispensable. In this paper we introduce a special kind of knowledge representation based on a dual view on the universe of discourse and show how it can be used in human activities such as searching, in-depth exploration and browsing. After a formal definition of dualistic ontologies we exemplify this definition with three different (well-known) kinds of ontologies, based on the vector model, on formal concept analysis and on fuzzy logic respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model and the fuzzy set model lead to Formal Concept Analysis, in which the fuzzy set model is equipped with a parameter that controls the fine-graining of the resulting concepts. We discuss the relation between the resulting systems of concepts. Finally, we demonstrate the use of this theory by introducing the dual search engine. We show how this search engine can be employed to support the human activities addressed above.

[ PDF ] [ Bibtex ]

A.J. Hommersom, P. Groot, P.J.F. Lucas, M. Balser, and J. Schmitt. Verification of Medical Guidelines using Background Knowledge in Task Networks. In: IEEE Transactions on Knowledge and Data Engineering, June, 2007.

The application of a medical guideline to the treatment of a patient's disease can be seen as the execution of tasks, sequentially or in parallel, in the face of patient data. It has been shown that many such guidelines can be represented as a 'network of tasks', i.e., as a sequence of steps that have a specific function or goal. In this paper a novel methodology for verifying the quality of such guidelines is introduced. To investigate their quality we propose to add medical background knowledge to task networks and to formalise criteria for good-practice medicine that a guideline should comply with. This framework was successfully applied to a guideline dealing with the management of diabetes mellitus type 2 using the interactive theorem prover KIV.

[ PDF ] [ Bibtex ]

Book

M.A.J. van Gerven. Bayesian Networks for Clinical Decision Support. September, Radboud University, Nijmegen, The Netherlands, EU, 2007.

[ External URL ] [ Bibtex ]

Chapter

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. On the Use of Object-Role Modeling For Modeling Active Domains. Advances in Database Research, 2007.

This chapter is about how the Object Role Modeling (ORM) language and approach can be used for integration, at a deep and formal level, of various domain modeling representations and viewpoints, with a focus on the modeling of active domains. The authors argue that ORM is particularly suited for enabling such integration because of its generic conceptual nature, its useful, existing connection with natural language and controlled languages, and its formal rigor. They propose the Logbook Paradigm as an effective perspective in active domains modeling and for the derivation of domain grammars. They show how standard ORM can be extended to an Object Role Calculus (ORC), including temporal concepts and constraints that enable the modeling of active domains. A suggestion for graphical representation is also provided. The authors hope to contribute to integration of domain models and viewpoints in an academic and educational context rather than proposing ORM and ORC as new modeling tools in an industrial setting.

[ Missing PDF ] [ Bibtex ]

Ildiko Flesch, and Peter Lucas. Markov Equivalence in Bayesian Networks. Pages: 3-38, Springer, 2007.

Probabilistic graphical models, such as Bayesian networks, allow representing conditional independence information of random variables. These relations are graphically represented by the presence and absence of arcs and edges between vertices. Probabilistic graphical models are non-unique representations of the independence information of a joint probability distribution. However, the concept of Markov equivalence of probabilistic graphical models is able to offer unique representations, called essential graphs. In this survey paper the theory underlying these concepts is reviewed.

[ PDF ] [ Bibtex ]
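The characterisation behind the essential graphs surveyed above is the classical Verma-Pearl result: two DAGs are Markov equivalent exactly when they share the same skeleton and the same v-structures. A minimal sketch of that test, with DAGs encoded as parent dictionaries and helper names of our choosing:

```python
def skeleton(dag):
    """Undirected edge set of a DAG given as {node: set_of_parents}."""
    return {frozenset((u, v)) for v, ps in dag.items() for u in ps}

def v_structures(dag):
    """All a -> c <- b with a and b non-adjacent (the 'immoralities')."""
    skel = skeleton(dag)
    return {(a, c, b)
            for c, ps in dag.items()
            for a in ps for b in ps
            if a < b and frozenset((a, b)) not in skel}

def markov_equivalent(d1, d2):
    """Verma-Pearl: same skeleton and same v-structures."""
    return skeleton(d1) == skeleton(d2) and v_structures(d1) == v_structures(d2)

# X -> Z <- Y is an immorality, so redirecting its arcs breaks equivalence:
g1 = {"X": set(), "Y": set(), "Z": {"X", "Y"}}
g2 = {"X": set(), "Y": {"Z"}, "Z": {"X"}}
print(markov_equivalent(g1, g2))  # False
# X -> Y -> Z and X <- Y <- Z share a skeleton and have no v-structures:
g3 = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
g4 = {"X": {"Y"}, "Y": {"Z"}, "Z": set()}
print(markov_equivalent(g3, g4))  # True
```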

Research Issues in Systems Analysis and Design, Databases and Software Development. Advances in Database Research, 2007.

[ Missing PDF ] [ Bibtex ]

Advances in Probabilistic Graphical Models. Springer, 2007.

[ Missing PDF ] [ Bibtex ]

Conference

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Knowledge Discovery and Exchange: Towards a Web-based Application for Discovery and Exchange of Revealed Knowledge. In: Proceedings of the Third International Conference on Web Information Systems and Technologies, Vol: WIA, Pages: 26-34, March, 2007.

Web technologies enable the discovery and exchange of knowledge from many different locations using many different channels. This implies that one is able to discover and exchange knowledge while using a PDA when traveling by train, for instance. A provisional Web-based application referred to as 'DEXAR: Discovery and eXchange of Revealed knowledge' is therefore introduced to illustrate the possibilities of the Web in the process of knowledge discovery and exchange. This is illustrated by an example from the medical domain. Before focussing on this Web application, however, a better understanding of knowledge discovery and exchange is needed to determine what kind of Web-based support is desired and feasible. Thus, a knowledge market paradigm and a knowledge discovery paradigm are discussed in detail.

[ PDF ] [ Bibtex ] [ External URL ]

Adriana Birlutiu, and T. Heskes. Expectation Propagation for Rating Players in Sports Competitions. In: 11th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD), 2007.

Rating players in sports competitions based on game results is one example of paired comparison data analysis. Since an exact Bayesian treatment is intractable, several techniques for approximate inference have been proposed in the literature. In this paper we compare several variants of expectation propagation (EP). EP generalizes assumed density filtering (ADF) by iteratively improving the approximations that are made in the filtering step of ADF. Furthermore, we distinguish between two variants of EP: EP-Correlated, which takes into account the correlations between the strengths of the players, and EP-Independent, which ignores those correlations. We evaluate the different approaches on a large tennis dataset and find that EP does significantly better than ADF (iterative improvement indeed helps) and EP-Correlated does significantly better than EP-Independent (correlations do matter).

[ PDF ] [ Bibtex ]
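To make the ADF filtering step concrete: under a Gaussian skill model with a probit win likelihood, processing one game amounts to a moment-matching update of the two players' strength distributions. The sketch below uses the well-known TrueSkill-style formulas; the noise parameter beta and all names are illustrative assumptions, not taken from the paper.

```python
import math
from scipy.stats import norm

def adf_update(mu1, var1, mu2, var2, beta=1.0):
    """One assumed-density-filtering update after player 1 beats player 2.

    Skills are Gaussian N(mu, var); the win probability is the probit
    P(s1 - s2 + performance noise > 0). The exact posterior is projected
    back onto a Gaussian by matching its first two moments."""
    c2 = var1 + var2 + 2 * beta ** 2
    c = math.sqrt(c2)
    t = (mu1 - mu2) / c
    v = norm.pdf(t) / norm.cdf(t)   # mean correction factor
    w = v * (v + t)                 # variance correction factor
    mu1_new = mu1 + (var1 / c) * v
    mu2_new = mu2 - (var2 / c) * v
    var1_new = var1 * (1 - (var1 / c2) * w)
    var2_new = var2 * (1 - (var2 / c2) * w)
    return mu1_new, var1_new, mu2_new, var2_new

# A single filtering step; equal priors, player 1 wins.
print(adf_update(25.0, 8.0, 25.0, 8.0))
```

EP improves on this single pass by revisiting each game, dividing out the previously incorporated approximate factor before redoing the moment match; EP-Correlated additionally maintains the correlations between players' strengths.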

P. van Bommel, P. Buitenbuis, S.J.B.A. (Stijn) Hoppenbrouwers, and H.A. (Erik) Proper. Architecture Principles -- A Regulative Perspective on Enterprise Architecture. In: Enterprise Modelling and Information Systems Architectures (EMISA2007), Lecture Notes in Informatics, Vol: 119, Pages: 47-60, Gesellschaft fur Informatik, Bonn, Germany, EU, 2007.

Increasingly, organizations make use of enterprise architectures to direct the development of the enterprise as a whole and its IT portfolio in particular. In this paper we investigate the regulative nature of enterprise architecture. We aim to develop a fundamental understanding of the regulative needs that underlie an enterprise architecture, and then take these needs as a starting point to arrive at requirements on the language (architecture principles) used to denote enterprise architectures. We furthermore discuss the process of formulating principles as well as their semantics.

[ PDF ] [ Bibtex ]

A.J.J. van Breemen, J.J. Sarbo, and Th.P. van der Weide. Toward a theory of natural conceptualization. In: Complexity in Organisational and Technological Systems, 2007.

The focus of this paper is on the early phases of modeling, consisting of the primary conceptualization of the underlying application domain. To this end we introduce a process model for the generation of meaningful concepts for a domain description. By virtue of its close relation with cognitive activity, this process model enables the modeler as well as the user to comprehend the concepts of the resulting domain in a natural way. Beyond this goal, natural conceptualization opens the possibility of introducing a uniform representation enabling the efficient combination of knowledge obtained from different stakeholders during a modeling process.

[ Missing PDF ] [ Bibtex ]

A.J.J. van Breemen. Sign Processes and the Sheets of Semeiosis. In: Sign Processes and the Sheets of Semeiosis, Pages: 89-98, 2007.

After positioning our work in the field of information systems science, we will introduce the basic Peircean semiotic terms pertinent to sign analysis (sign aspects) and those pertinent to interpretation processes (interpretant aspects). Next we will match the sign aspects with the interpretant aspects in order to be able to derive our semiotic process model of cognitive activity. In order to derive the process model we will introduce the concept of a semiotic sheet. This paper is of interest to those engaged in semiotically oriented approaches to information systems and those interested in the Peircean theory of interpretants alike.

[ Missing PDF ] [ Bibtex ]

Proceedings of the Third International Conference on Web Information Systems and Technologies. Edited by: J. Filipe, J.A. Moinhos Cordeiro, B. Encarnacao, and V. Pedrosa. Vol: WIA, March, INSTICC Press, Setubal, Portugal, EU, 2007, ISBN 9789728865788.

[ Missing PDF ] [ Bibtex ]

Ildiko Flesch, Peter Lucas, and Th.P. van der Weide. Conflict-based Diagnosis: Adding Uncertainty to Model-based Diagnosis. In: International Joint Conference on Artificial Intelligence 2007, Vol: I., Pages: 380-385, January, 2007.

[ PDF ] [ Bibtex ]

Ildiko Flesch, Antonio Fernandez, and Antonio Salmeron. Incremental Supervised Classification for the MTE Distribution: a Preliminary Study. In: Congreso Espanol de Informatica (CEDI-2007), Pages: 217-224, 2007.

In this paper we propose an incremental method for building classifiers in domains with very large amounts of data or for data streams. The method is based on the use of mixtures of truncated exponentials, so that continuous and discrete variables can be handled simultaneously.

[ PDF ] [ Bibtex ]

Ildiko Flesch, and Peter Lucas. Independence Decomposition in Dynamic Bayesian Networks. In: 9th European Conference of Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU-2007), Pages: 560-571, Springer, 2007.

Dynamic Bayesian networks are a special type of Bayesian network that explicitly incorporates the dimension of time. They can be divided into repetitive and non-repetitive networks. Repetitiveness implies that the set of random variables of the network and their independence relations are the same at each time step. Due to their structural symmetry, repetitive networks are easier to use and are, therefore, often taken as the standard. However, repetitiveness is a very strong assumption, which normally does not hold, as particular dependences and independences may only hold at certain time steps. In this paper, we propose a new framework for independence modularisation in dynamic Bayesian networks. Our theory provides a method for separating atemporal and temporal independence relations, and offers a practical approach to building dynamic Bayesian networks that are possibly non-repetitive. A composition operator for temporal and atemporal independence relations is proposed and its properties are studied. Experimental results obtained by learning dynamic Bayesian networks from real data show that this framework offers a more accurate way for knowledge representation in dynamic Bayesian networks.

[ PDF ] [ Bibtex ]

Ildiko Flesch, Peter Lucas, and Th.P. van der Weide. Probabilistic properties of model-based diagnostic reasoning in Bayesian networks. In: 19th Belgian-Dutch Conference on Artificial Intelligence (BNAIC-2007), Pages: 119-126, 2007.

Much research has been carried out into general properties of Bayesian networks, whereas limited attention has been given to special types of Bayesian networks for specific applications. An example of such an application is model-based diagnosis, i.e. the diagnosis of malfunctioning of devices or systems based on an explicit model of their structure and behaviour. Basically, two types of model-based diagnosis are distinguished: (i) consistency-based diagnosis, and (ii) abductive diagnosis. In this paper, we investigate the relationship between consistency-based and abductive reasoning in Bayesian networks. It turns out that abductive diagnoses can be determined using special properties from consistency-based diagnosis, yielding related, computationally simplified forms for both probabilistic consistency-based and abductive diagnosis using Bayesian networks. Furthermore, the conceptual relationships between probabilistic diagnostic reasoning and logical diagnostic reasoning are studied.

[ PDF ] [ Bibtex ]

P. Groot, F. Bruijsten, and M. Oostdijk. Patient Data Confidentiality Issues of the Dutch Electronic Health Care Record. In: 19th Belgian-Dutch Conference on Artificial Intelligence (BNAIC-2007), Pages: 151-157, 2007.

Health care is currently in a phase of transition. One of the recent developments that seems to be inescapable is the introduction of an Electronic Health care Record and a central service system that makes all medical data electronically accessible. Many issues, including patient data confidentiality, however, are still not solved satisfactorily. One possible approach to patient data confidentiality is a control and warning system that monitors all access to patient information and flags those requests that do not abide by the law health care providers are expected to follow. In this paper, we investigate the feasibility of a control and warning system at a Dutch hospital, and we analyse whether such an approach is practical and a viable option for providing patient data confidentiality. We provide a conceptual schema of the underlying domain and give empirical results by querying the hospital information system. Our empirical results show that the policy of the hospital for providing patient data confidentiality is unlikely to succeed when scaled up to a national level in its current form. Although experimental results show that the policy can be strengthened considerably by using additional supporting facts from the care process of a patient, this is still insufficient, as the number of flagged requests for patient data is too high. Some of these results are, however, caused by developments in the Dutch health care system that are not yet fully reflected in the hospital information system analysed. Incorporating these developments may lead to better results with respect to patient data confidentiality.

[ Missing PDF ] [ Bibtex ]

P. Groot. The Role of Model Checking in Critiquing based on Clinical Guidelines. In: 19th Belgian-Dutch Conference on Artificial Intelligence (BNAIC-2007), Pages: 353-354, 2007.

[ PDF ] [ Bibtex ]

P. Groot. Experiences in Quality Checking Medical Guidelines using Formal Methods. In: Proceedings Verification and Validation of Software Systems (VVSS 2007), Pages: 164-178, 2007.

In health care, the trend of evidence-based medicine has led medical specialists to develop medical guidelines: large, nontrivial documents suggesting the detailed steps that should be taken by health-care professionals in managing the disease in a patient. In the Protocure project the objective has been to assess the improvement of medical guidelines using formal methods. This paper reports on some of our findings and experiences in quality checking medical guidelines, in particular the formalisation of meta-level quality criteria for good-practice medicine, which is used in conjunction with medical background knowledge to verify the quality of a guideline dealing with the management of diabetes mellitus type 2 using the interactive theorem prover KIV. For comparison, analogous investigations have been performed with other techniques, including automatic theorem proving and model checking.

[ PDF ] [ Bibtex ]

C. Hesse, R. Oostenveld, T. Heskes, and O. Jensen. On the development of a brain-computer interface system using high-density magnetoencephalogram signals for real-time control of a robot arm. In: Annual Symposium of the IEEE-EMBS Benelux Chapter, 2007.

This work describes a brain-computer interface (BCI) system using multi-channel magnetoencephalogram (MEG) signals for real-time control of a computer game and a robot arm in a motor imagery paradigm. Computationally efficient spatial filtering and time-frequency decomposition facilitate the extraction and classification of neurophysiologically meaningful and task-relevant signal components from all of the 275 channels comprising the high-density sensor array. To our knowledge, this is the first report of an MEG-based BCI system capable of real-time signal processing and control using the whole sensor array. The robust and reliable performance of this system was demonstrated several times in front of a large public audience at an open day celebrating the 5th anniversary of the F.C. Donders Centre for Cognitive Neuroimaging at Radboud University Nijmegen.

[ PDF ] [ Bibtex ]

Koen V. Hindriks, S.J.B.A. (Stijn) Hoppenbrouwers, Catholijn M. Jonker, and Dmytro Tykhonov. Automatic Issue Extraction from a Focused Dialogue. In: Proceedings of the 12th International Conference on Applications of Natural Language to Information Systems (NLDB'07), Lecture Notes in Computer Science, June, 2007.

Various methodologies for structuring the process of domain modeling have been proposed, but there are few software tools that provide automatic support for the process of constructing a domain model. The problem is that it is hard to extract the relevant concepts from natural language texts, since these typically include many irrelevant details that are hard to discern from relevant concepts. In this paper, we propose an alternative approach to extract domain models from natural language input. The idea is that more effective, automatic extraction is possible from a natural language text that is produced in a focused dialogue game. We present an application of this idea in the area of pre-negotiation, in combination with sophisticated parsing and transduction techniques for natural language and fairly simple pattern matching rules. Furthermore, a prototype is presented of a conversation-oriented experimentation environment for cooperative conceptualization. Several experiments have been performed to evaluate the approach and environment, and a technique for measuring the quality of extraction has been defined. The experiments indicate that even with a simple implementation of the proposed approach, reasonably acceptable results can be obtained.

[ PDF ] [ Bibtex ]

The Role of Model Checking in Critiquing based on Clinical Guidelines. Pages: 411-420, 2007.

Medical critiquing systems criticise clinical actions performed by a physician. In order to provide useful feedback, an important task is to find differences between the actual actions and a set of 'ideal' actions as described by a clinical guideline. In case differences exist, the critiquing system provides insight into the extent to which they are compatible. We propose a methodology for such critiquing, where the ideal actions are given by a formal model of a clinical guideline, and where the actual actions are derived from real-world patient data. We employ model checking to investigate whether a part of the actual treatment is consistent with the guideline. Furthermore, it is shown how critiquing can be cast in terms of temporal logic, and what can be achieved by using model checking. The methodology has been applied to a clinical guideline for breast cancer in conjunction with breast cancer patient data.

[ PDF ] [ Bibtex ]

R. Jurgelenaite, T. Heskes, and T. Dijkstra. Using symmetric causal independence models to predict gene expression from sequence data. In: ECML-PKDD Workshop “Data Mining in Functional Genomics and Proteomics: Current Trends and Future Directions”, Radboud University Nijmegen, 2007.

We present an approach for inferring transcriptional regulatory modules from genome sequence and gene expression data. Our method, which is based on symmetric causal independence models, is both able to model the logic behind transcriptional regulation and to incorporate uncertainty about the functionality of putative transcription factor binding sites. Applying our approach to the deadliest species of human malaria parasite, Plasmodium falciparum, we obtain several striking results that deserve further (biological) investigation.

[ PDF ] [ Bibtex ] [ External URL ]

M. Kamal, J. Davis, J. Nabukenya, T.V. Schoonover, L.R. Pietron, and G.-J. de Vreede. Collaboration Engineering for Incident Response Planning: Process Development and Validation. In: Proceedings of the 40th Hawaii International Conference on System Science (HICSS), Vol: 0, Pages: 1-10, May, 2007.

Many organizations have plans for incident response strategies. Despite Incident Response Planning (IRP) being an essential ingredient in conjuring security planning procedures in organizations, extensive literature reviews have revealed that there are no collaborative processes in place for such a crucial activity. This study proposes a design for a facilitated incident response planning process using technology such as GSS. Three sessions were conducted and an analysis of the sessions revealed that the facilitated IRP process design held up strongly in terms of efficiency, goal attainment, and session participant satisfaction. Future research implications entail devising an all-encompassing integrative general approach that would be applicable to any form of corporate security development planning process.

[ PDF ] [ Bibtex ]

G.L. Kolfschoten, and E.L. Santanen. Reconceptualizing Generate thinkLets: the Role of the Modifier. In: Proceedings of the 40th Hawaii International Conference on System Science (HICSS), Vol: 0, Pages: 16-25, 2007.

For the past fifty years, the idea generation literature has been plagued by inconsistent findings concerning how best to achieve certain group outcomes. It is proposed that these findings are the result of specific and minute influences on the behavior of the group members or the resulting outcomes that have previously been overlooked. This paper reconceptualizes the idea generation process, breaking it down into a specific set of Generate thinkLets and associated Modifiers that can impact both group processes and their resulting outcomes. It is anticipated that this new reconceptualization will serve as a research springboard to help researchers exercise greater precision and potentially even resolve many of the conflicting findings that exist in the field of idea generation.

[ PDF ] [ Bibtex ]

On the Move to Meaningful Internet Systems 2007: OTM 2007 Workshops - OTM Confederated International Conferences and Posters, AWeSOMe, CAMS, COMINF, OTM Academy Doctoral Consortium, MONET, NDKM, OnToContent, ORM, PerSys, PiPE, PPN, RDDS, SSWS, SWWS 2007, Vilamoura, Portugal, November 25 - 30, 2007, Proceedings, Part I. Lecture Notes in Computer Science, Vol: 4805, November, Springer, Berlin, Germany, EU, 2007.

[ Missing PDF ] [ Bibtex ]

On the Move to Meaningful Internet Systems 2007: DOA, CoopIS, ODBASE, GADA, and IS, Vilamoura, Portugal, November 25 - 30, 2007, Proceedings, Part I. Edited by: R. Meersman. Lecture Notes in Computer Science, Vol: 4803, November, Springer, Berlin, Germany, EU, 2007.

[ Missing PDF ] [ Bibtex ]

J. Nabukenya, P. van Bommel, and H.A. (Erik) Proper. Collaborative IT Policy-making as a means of achieving Business-IT Alignment. In: Proceedings of the Workshop on Business/IT Alignment and Interoperability (BUSITAL’07), held in conjunction with the 19th Conference on Advanced Information Systems Engineering (CAiSE’07), Vol: 1, Pages: 461-468, June, 2007, ISBN 978-82-519-2245-6.

This paper is concerned with the application of collaboration engineering to improve the quality of policy-making processes as they occur in a business-IT alignment context. Policies are needed to guide complex decision-making. The creation of such policies is a collaborative process. The quality of this collaboration has a profound impact on the quality of the resulting policies and the acceptance by its stakeholders. We therefore focus on the use of techniques and methods from the field of collaboration engineering to improve the quality of Business-IT alignment related policy-making processes.

[ PDF ] [ Bibtex ]

M. Op 't Land, and H.A. (Erik) Proper. Impact of Principles on Enterprise Engineering. In: Proceedings of the 15th European Conference on Information Systems, Pages: 1965-1976, June, 2007.

Increasingly, organizations make use of enterprise architectures to direct the development of the enterprise as a whole and the development of their IT portfolio in particular. This steering and directing is done by means of principles, which are essentially regarded as constraints on the design space for enterprise engineers, thus guiding them in their design efforts. In this paper we study the potential constraining effect of principles on the design of enterprises as well as the guidance designers may receive from these principles. We start by providing a brief discussion on the concepts of enterprise architecture and enterprise engineering. We continue by discussing a strategy to make principles specific and measurable enough to indeed allow them to constrain design space. This is followed by a discussion of a number of examples, taken from real-life practice, illustrating the steering effect of principles. Finally, we also briefly pay attention to the process that may be followed in formulating and formalizing principles.

[ PDF ] [ Bibtex ]

Proceedings of the 15th European Conference on Information Systems. Edited by: H. Österle, J. Schelp, and R. Winter. June, University of St. Gallen, St. Gallen, Switzerland, 2007.

[ Missing PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Visualizing Formalisms with ORM Models. In: On the Move to Meaningful Internet Systems 2007: OTM 2007 Workshops - OTM Confederated International Conferences and Posters, AWeSOMe, CAMS, COMINF, OTM Academy Doctoral Consortium, MONET, NDKM, OnToContent, ORM, PerSys, PiPE, PPN, RDDS, SSWS, SWWS 2007, Vilamoura, Portugal, November 25 - 30, 2007, Proceedings, Part I, Lecture Notes in Computer Science, Vol: 4805, Pages: 709-718, November, 2007, ISBN 9783540768876.

During the development of theoretical frameworks researchers often graphically represent formal textual notations as part of a developed theory. This may lead to enrichments and new insights regarding a theory. A possibility for graphical representation of formalisms is the utilization of modeling languages such as ORM. This paper deals with the technique of visualizing formalisms by using ORM models and shows the advantages of graphically representing a formal theoretical framework. An application of the approach that has already been successfully practised is elaborated. This application concerns a theoretical framework consisting of knowledge intensive task properties and shows how the approach to visualize formalisms with ORM can be materialized.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Characterizing Knowledge Intensive Tasks indicating Cognitive Requirements - Scenarios in Methods for Specific Tasks. In: Proceedings of the IFIP TC8 / WG8.1 Working Conference on Situational Method Engineering: Fundamentals and Experiences (ME07), Springer IFIP Series, Vol: 244, Pages: 100-114, September, 2007, ISBN 9780387739465.

Methods for specific tasks can among others be identified in conceptual modelling of information systems and requirements engineering in software development. Such methods dictate a specific way of working by describing necessary knowledge intensive tasks to fulfil while applying the method. An actor may experience difficulties when trying to fulfil tasks as part of a method application, related to the cognitive abilities required to fulfil a certain task versus the specific cognitive abilities possessed by the actor. This paper specifically focusses on the cognitive abilities required to fulfil a knowledge intensive task while applying a method for specific tasks. This is based on a categorization and characterization of knowledge intensive tasks and on scenarios in conceptual modelling of information systems and requirements engineering.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Matching Cognitive Characteristics of Actors and Tasks. In: On the Move to Meaningful Internet Systems 2007: DOA, CoopIS, ODBASE, GADA, and IS, Vilamoura, Portugal, November 25 - 30, 2007, Proceedings, Part I, Edited by: R. Meersman. Lecture Notes in Computer Science, Vol: 4803, Pages: 371-380, November, 2007, ISBN 9783540768463.

Acquisition, application and testing of knowledge by actors trying to fulfill knowledge intensive tasks is becoming increasingly important for organizations due to trends such as globalization, the emergence of virtual organizations and growing product complexity. An actor's management of basic cognitive functions, however, is at stake because of this increase in the need to acquire, apply and test knowledge during daily work. This paper specifically focusses on matchmaking between the cognitive characteristics supplied by an actor and the cognitive characteristics required to fulfill a certain knowledge intensive task. This is based on a categorization and characterization of actors and knowledge intensive tasks. A framework for a cognitive matchmaker system is introduced to compute actual match values and to be able to reason about the suitability of a specific actor to fulfill a task of a certain type.

[ PDF ] [ Bibtex ]

M. Samulski, N. Karssemijer, Peter Lucas, and P. Groot. Classification of mammographic masses using support vector machines and Bayesian networks. In: Proceedings of SPIE, Medical Imaging, Vol: 6514, 2007.

In this paper, we compare two state-of-the-art classification techniques characterizing masses as either benign or malignant, using a dataset consisting of 271 cases (131 benign and 140 malignant), each containing both an MLO and a CC view. For suspect regions in a digitized mammogram, 12 out of 81 calculated image features have been selected for investigating the classification accuracy of support vector machines (SVMs) and Bayesian networks (BNs). Additional techniques for improving their performance were included in the comparison: the Manly transformation for achieving a normal distribution of image features, and principal component analysis (PCA) for reducing our high-dimensional data. The performance of the classifiers was evaluated with Receiver Operating Characteristics (ROC) analysis. The classifiers were trained and tested using a k-fold cross-validation test method (k=10). It was found that the area under the ROC curve (A_z) of the BN increased significantly (p=0.0002) using the Manly transformation, from A_z = 0.767 to A_z = 0.795. The Manly transformation did not result in a significant change for SVMs. Also the difference between SVMs and BNs using the transformed dataset was not statistically significant (p=0.78). Applying PCA resulted in an improvement in classification accuracy of the naive Bayesian classifier, from A_z = 0.767 to A_z = 0.786. The difference in classification performance between BNs and SVMs after applying PCA was small and not statistically significant (p=0.11).

[ PDF ] [ Bibtex ]
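A sketch of this kind of comparison pipeline using scikit-learn; the NKI data are not available here, so synthetic features stand in for the 12 selected image features, and sklearn's Yeo-Johnson PowerTransformer stands in for the Manly transformation used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PowerTransformer
from sklearn.svm import SVC

# Synthetic stand-in for the 271 cases with 12 selected features.
X, y = make_classification(n_samples=271, n_features=12, n_informative=6,
                           random_state=0)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    # Yeo-Johnson is a stand-in for the Manly transformation.
    "naive Bayes + power transform": make_pipeline(PowerTransformer(),
                                                   GaussianNB()),
    "naive Bayes + PCA": make_pipeline(PCA(n_components=5), GaussianNB()),
    "SVM (RBF) + power transform": make_pipeline(PowerTransformer(), SVC()),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: A_z = {auc.mean():.3f} +/- {auc.std():.3f}")
```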

Proceedings of the Workshop on Business/IT Alignment and Interoperability (BUSITAL’07), held in conjunction with the 19th Conference on Advanced Information Systems Engineering (CAiSE’07). Vol: 1, June, 2007, ISBN 978-82-519-2245-6.

[ Missing PDF ] [ Bibtex ]

AIME. Springer-Verlag, 2007.

[ Missing PDF ] [ Bibtex ]

Annual Symposium of the IEEE-EMBS Benelux Chapter. 2007.

[ Missing PDF ] [ Bibtex ]

9th European Conference of Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU-2007). Springer, 2007.

[ Missing PDF ] [ Bibtex ]

19th Belgian-Dutch Conference on Artificial Intelligence (BNAIC-2007). 2007.

[ Missing PDF ] [ Bibtex ]

ECML-PKDD Workshop “Data Mining in Functional Genomics and Proteomics: Current Trends and Future Directions”. Radboud University Nijmegen, 2007.

[ External URL ] [ Bibtex ]

Congreso Espanol de Informatica (CEDI-2007). 2007.

[ Missing PDF ] [ Bibtex ]

11th European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD). Springer, Berlin, Germany, EU, 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings of the IFIP TC8 / WG8.1 Working Conference on Situational Method Engineering: Fundamentals and Experiences (ME07). Springer IFIP Series, Vol: 244, September, Springer, 2007.

[ Missing PDF ] [ Bibtex ]

International Joint Conference on Artificial Intelligence 2007. Vol: I., January, 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings of the 40th Hawaii International Conference on System Science (HICSS). Vol: 0, 2007.

[ Missing PDF ] [ Bibtex ]

Artificial Intelligence in Medicine (AIME) 2007. Lecture Notes in Computer Science, 2007.

[ Missing PDF ] [ Bibtex ]

Enterprise Modelling and Information Systems Architectures (EMISA2007). Lecture Notes in Informatics, Vol: 119, Gesellschaft fur Informatik, Bonn, Germany, EU, 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings of the 12th International Conference on Applications of Natural Language to Information Systems (NLDB'07). Lecture Notes in Computer Science, June, 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings of SPIE. Medical Imaging, Vol: 6514, 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings of the 2006 Nonlinear Statistical Signal Processing Workshop. 2007.

[ Missing PDF ] [ Bibtex ]

Sign Processes and the Sheets of Semeiosis. 2007.

[ Missing PDF ] [ Bibtex ]

Complexity in Organisational and Technological Systems. 2007.

[ Missing PDF ] [ Bibtex ]

Proceedings Verification and Validation of Software Systems (VVSS 2007). 2007.

[ Missing PDF ] [ Bibtex ]

Stefan Visscher, Peter Lucas, Ildiko Flesch, and K. Schurink. Using temporal context-dependent independence information in the Exploratory Analysis of Disease Processes. In: Artificial Intelligence in Medicine (AIME) 2007, Lecture Notes in Computer Science, Pages: 87-96, 2007.

Disease processes in patients are temporal in nature and involve uncertainty. It is necessary to gain insight into these processes when aiming at improving the diagnosis, treatment and prognosis of disease in patients. One way to achieve these aims is by explicitly modeling disease processes; several researchers have advocated the use of dynamic Bayesian networks for this purpose because of the versatility and expressiveness of this time-oriented probabilistic formalism. In the research described in this paper, we investigate the role of context-specific independence information in modeling the evolution of disease. The hypothesis tested was that within similar populations of patients differences in the learnt structure of a dynamic Bayesian network may result, depending on whether or not patients have a particular disease. This is an example of temporal context-specific independence information. We have tested and confirmed this hypothesis using a constraint-based Bayesian network structure learning algorithm which supports incorporating background knowledge into the learning process. Clinical data of mechanically-ventilated ICU patients, some of whom developed ventilator-associated pneumonia, were used for that purpose.

[ PDF ] [ Bibtex ]

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, H.A. (Erik) Proper, and Th.P. van der Weide. QoMo: A Modelling Process Quality Framework based on SEQUAL. In: Workshop proceedings of the 19th International Conference on Advanced Information Systems Engineering: EMMSAD, Vol: 1, Pages: 121-130, June, 2007.

This paper aims to contribute to the area of conceptual model quality assessment and improvement. We present a preliminary modelling process-oriented ‘Quality of Modelling’ framework (QoMo), mainly based on the established SEQUAL framework for quality of models. QoMo is based on knowledge state transitions, cost of the activities bringing such transitions about, and a goal structure for activities-for-modelling. Such goals are directly linked to concepts of SEQUAL. We discuss how goals for modelling can be linked to a rule-based way of describing processes for modelling. Such process descriptions hinge on strategy descriptions, which may be used descriptively (for studying/analysing real instances of processes) as well as prescriptively (for the guiding of modelling processes). Descriptive utility of the framework is critical for the quality/evaluation angle on processes-for-modelling, and reflects the main intended contribution of this paper.

[ PDF ] [ Bibtex ]

Reports

A.J.J. van Breemen, J.J. Sarbo, and Th.P. van der Weide. Toward a theory of natural conceptualization. Technical report: ICIS-R07002, January, Radboud University Nijmegen, 2007.

The focus of this paper is on the early phases of ER-modeling, consisting of the primary conceptualization of the underlying application domain. To this end we introduce a process model for the generation of meaningful concepts for a domain description. By virtue of its close relation with cognitive activity, this process model also enables the modeler as well as the user to comprehend the concepts of the resulting domain in a natural way. Beyond this goal, natural conceptualization opens the possibility of introducing a uniform representation enabling the efficient combination of knowledge obtained from different stakeholders during a modeling process.

[ PDF ] [ Bibtex ]

J. Berendsen, and F.W. Vaandrager. Compositional Abstraction in Real-Time Model Checking. Technical report: ICIS-R07027, November, Radboud University Nijmegen, 2007.

The idea to use simulations (or refinements) as a compositional abstraction device is well-known, both in untimed and timed settings, and has already been studied theoretically and practically in many papers during the last three decades. Nevertheless, existing approaches do not handle two fundamental modeling concepts which, for instance, are frequently used in the popular Uppaal model checker: (1) a parallel composition operator that supports communication via shared variables as well as synchronization of actions, and (2) committed locations. In this paper, we describe a framework for compositional abstraction based on simulation relations that does support both concepts, and that is suitable for Uppaal. Our approach is very general and the only essential restriction is that the guards of input transitions do not depend on external variables. We have applied our compositional framework to verify the Zeroconf protocol for an arbitrary number of hosts.

[ PDF ] [ Bibtex ]

J. Berendsen, B. Gebremichael, Miaomiao Zhang, and F.W. Vaandrager. Formal Specification and Analysis of Zeroconf using Uppaal. Technical report: ICIS-R07032, December, Radboud University Nijmegen, 2007.

We report on a case study in which the model checker Uppaal is used to formally model parts of Zeroconf, a protocol for dynamic configuration of IPv4 link-local addresses that has been defined in RFC 3927 of the IETF. Our goal has been to construct a model that (a) is easy to understand by engineers, (b) comes as close as possible to the informal text (for each transition in the model there should be a corresponding piece of text in the RFC), and (c) may serve as a basis for formal verification. Our modeling efforts revealed several errors (or at least ambiguities) in the RFC that no one else spotted before. We present two proofs of the mutual exclusion property for Zeroconf (for an arbitrary number of hosts and IP addresses): a manual, operational proof, and a proof that combines model checking with the application of a new abstraction relation that is compositional with respect to committed locations. The model checking problem has been solved using Uppaal, and the abstractions have been checked either by hand or by using Uppaal-Tiga.

[ PDF ] [ Bibtex ]

P. van Bommel, H.A. (Erik) Proper, and Th.P. van der Weide. Structured Modeling with uncertainty. Technical report: ICIS-R07022, September, Radboud University Nijmegen, 2007.

This paper starts with a description of the modeling process as a dialog, and describes the associated formal functions, including the feedback supporting the growing mutual understanding. The dialog has a procedural and an informational aspect. For the latter, a controlled grammar is used that has a user-friendly and a system-friendly side. These sides are related via an elementary syntactical transformation. Assuming some elementary requirements on the dialog participants, we prove the main theorem for information modeling effectiveness. We also propose a system of metrics to support the modeling process. In terms of these metrics, modeling heuristics can be described and evaluated. We demonstrate our ideas with a simple sample session.

[ PDF ] [ Bibtex ]

P. van Bommel, S.J.B.A. (Stijn) Hoppenbrouwers, and H.A. (Erik) Proper. QoMo: A Modelling Process Quality Framework based on SEQUAL. Technical report: ICIS-R07007, March, Radboud University Nijmegen, 2007.

This paper aims to contribute to the area of conceptual model quality assessment and improvement. We present a preliminary modelling process-oriented quality framework (‘QoMo’), mainly based on the established SEQUAL framework for quality of models. QoMo is based on knowledge state transitions, cost of the activities bringing such transitions about, and a goal structure for activities-for-modelling. Such goals are directly linked to concepts of SEQUAL. We discuss how goals for modelling can be linked to a rule-based way of describing processes for modelling. Such process descriptions hinge on strategy descriptions, which may be used descriptively (for studying/analysing real instances of processes) as well as prescriptively (for the guiding of modelling processes). Descriptive utility of the framework is primary for the quality/evaluation angle on processes-for-modelling, and reflects the main intended contribution of this paper.

[ PDF ] [ Bibtex ]

A.J.J. van Breemen, and J.J. Sarbo. Sign Processes and the Sheets of Semeiosis. Technical report: ICIS-R07003, January, Radboud University Nijmegen, 2007.

After positioning our work in the field of information systems science, we will introduce the basic Peircean semiotic terms pertinent to sign analysis (sign aspects) and those pertinent to interpretation processes (interpretant aspects). Next we will match the sign aspects with the interpretant aspects in order to be able to derive our semiotic process model of cognitive activity. In order to derive the process model we will introduce the concept of a semiotic sheet. This paper is of interest to those engaged in semiotically oriented approaches to information systems and those interested in the Peircean theory of interpretants alike.

[ PDF ] [ Bibtex ]

Ed Brinksma, and Jozef Hooman. Dependability for high-tech systems: an industry-as-laboratory approach. Technical report: ICIS-R07030, December, Radboud University Nijmegen, 2007.

The dependability of high-volume embedded systems, such as consumer electronics devices, is threatened by a combination of quickly increasing complexity, decreasing time-to-market, and strong cost constraints. This poses challenging research questions that are investigated in the Trader project, following the industry-as-lab approach. We present the main vision of this project, which is based on a model-based control paradigm, and the current status of the project results.

[ PDF ] [ Bibtex ]

G.J.N.M. (Guido) Chorus, Y.H.C. (Yves) Janse, S.J.B.A. (Stijn) Hoppenbrouwers, C.J.P. (Chris) Nellen, and H.A. (Erik) Proper. Formalizing Architecture Principles using Object-Role Modelling. Technical report: ICIS-R07006, February, Radboud University Nijmegen, 2007.

This technical report is the result of two experiments conducted as part of an ongoing research effort to formalize architecture principles. The experiments constitute a first, modest evaluation of the use of ORM and ORC as a means to formalize and ground architecture principles: we evaluate their use on the example principles provided by TOGAF (The Open Group Architecture Framework) and on principles taken from industrial practice.
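
To give a flavour of what such a formalization amounts to (our own illustrative rendering in first-order notation, not one of the report's actual ORM/ORC formalizations): a TOGAF-style principle such as "each data element has exactly one trustee accountable for its quality" can be grounded as a constraint over the fact type DataElement has Trustee,

\forall d \in \mathit{DataElement}.\ \exists!\, t \in \mathit{Trustee}.\ \mathit{has}(d, t)

In ORM terms this is a binary fact type with a mandatory role and a uniqueness constraint on the DataElement role; an ORC-style textual rule can state the same constraint over the model.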

[ PDF ] [ Bibtex ]

Olha Shkaravska, and M.C.J.D. van Eekelen. Static inference of polynomial size-aware types. Technical report: ICIS-R07028, November, Radboud University Nijmegen, 2007.

We propose a static size analysis procedure that combines term rewriting and type checking to automatically obtain output-on-input size dependencies for first-order functions. Attention is restricted to functions for which the size of the result is strictly polynomial, not necessarily monotone, in the sizes of the arguments. To infer a size dependency, the procedure generates hypotheses for increasing degrees of polynomials. For each degree, a hypothetical polynomial is determined by its values on a finite collection of points (subject to a geometrical condition). To compute the value of a size polynomial at such a point, we use a term-rewriting system generated by a standard size-annotation inference procedure. We have proven that if a function with a given input terminates at run-time on a meaningful stack and heap, then the static (‘compile-time’) term-rewriting system of the size inference also terminates on the integers representing the sizes of the corresponding inputs. The term-rewriting system may even terminate at compile-time when the underlying function definition does not terminate at run-time. This makes the theoretical applicability of the proposed approach larger than the previous state of the art, in which run-time testing was used to generate hypothetical polynomials. The practical applicability is improved as well, since the term-rewriting system at compile-time abstracts from many computations that are done at run-time.
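
The hypothesis-generation step can be illustrated with a small sketch for the single-size-variable case (our illustration, not the authors' implementation; the callback eval_size stands in for running the generated term-rewriting system on an input of a given size): a degree-d polynomial is fixed by its values on d+1 points, and the hypothesis is then checked on a fresh point.

# Minimal sketch of polynomial hypothesis generation for size inference
# (one argument, one size variable). The name eval_size is hypothetical.

def lagrange_fit(points):
    """Return a callable polynomial interpolating the given
    (size, result_size) points; its degree is len(points) - 1."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

def infer_size_poly(eval_size, max_degree=5):
    """Try polynomial hypotheses of increasing degree; eval_size(n)
    stands for evaluating the size term-rewriting system at size n."""
    for degree in range(max_degree + 1):
        sample = [(n, eval_size(n)) for n in range(degree + 1)]
        poly = lagrange_fit(sample)
        test = degree + 1          # check on a point not used for fitting
        if abs(poly(test) - eval_size(test)) < 1e-9:
            return poly, degree
    return None

# Example: the Cartesian product of two equal-length lists has size n*n.
poly, degree = infer_size_poly(lambda n: n * n)
print(degree, poly(7))             # 2 49.0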

[ PDF ] [ Bibtex ]

M.A.J. van Gerven. Approximate Inference in Graphical Models using Tensor Decompositions. Technical report: ICIS-R07024, November, Radboud University Nijmegen, 2007.

We demonstrate that tensor decompositions can be used to transform graphical models into structurally simpler graphical models that approximate the same joint probability distribution. In this way, standard inference algorithms, such as the junction tree algorithm, can be applied to the transformed graphical model for approximate inference. The usefulness of the technique is demonstrated by means of its application to thirty randomly generated small-world Markov networks.
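
The connection between a decomposition and a simpler model can be made concrete with a small sketch (an illustration of the general idea, not the paper's algorithm; all numbers are invented): a joint probability table is a tensor, and a rank-R CP-style decomposition of it is exactly a naive-Bayes-like model with an R-state hidden variable, in which marginals are cheap to compute.

import numpy as np

# P(A,B,C) over three binary variables is a 2x2x2 tensor. A rank-R
# CP decomposition  P(a,b,c) ~= sum_h w[h] * A[h,a] * B[h,b] * C[h,c]
# is itself a simpler graphical model: a naive-Bayes net with a hidden
# variable H having R states.
rng = np.random.default_rng(1)
R = 2
w = rng.dirichlet(np.ones(R))            # P(H)
A = rng.dirichlet(np.ones(2), size=R)    # P(A|H), one row per state h
B = rng.dirichlet(np.ones(2), size=R)    # P(B|H)
C = rng.dirichlet(np.ones(2), size=R)    # P(C|H)

# Reconstruct the full joint from the decomposition.
P = np.einsum('h,ha,hb,hc->abc', w, A, B, C)
assert np.isclose(P.sum(), 1.0)

# Inference in the simple structure: P(A) by summing out H -- O(R * states)
# work instead of touching the full joint table.
P_A_direct = np.einsum('h,ha->a', w, A)
P_A_joint = P.sum(axis=(1, 2))
print(np.allclose(P_A_direct, P_A_joint))   # True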

[ PDF ] [ Bibtex ]

M.A.J. van Gerven. Tensor Decompositions for Probabilistic Classification. Technical report: ICIS-R07011, May, Radboud University Nijmegen, 2007, Longer version of a short paper presented at IDAMAP 2007.

Tensor decompositions are introduced as a novel approach to probabilistic classification; they can be interpreted as a particular kind of mixture model. Since many problems in medicine and biology can be described as classification problems, the approach is a useful tool for biomedical data analysis. The approach is validated by means of a clinical database containing data on 1002 patients who suffer from hepatic disease. It is shown that the approach performs comparably to state-of-the-art results that have been obtained using a naive Bayes classifier.

[ PDF ] [ Bibtex ]

M. Gromov, and T.A.C. Willemse. Testing and Model-Checking Techniques for Diagnosis. Technical report: ICIS-R07016, June, Radboud University Nijmegen, 2007.

Black-box testing is a popular technique for assessing the quality of a system. In case of a test failure, however, little information is available to identify its root cause; additional diagnostic tests may then help. We present techniques and a methodology for efficiently conducting diagnostic tests based on explicit fault models, relying on Model-Based Testing techniques for Labelled Transition Systems. Our techniques exploit differences in outputs (respectively inputs) between fault models. We characterise the underlying concepts both in terms of mathematics and in terms of the modal mu-calculus, a powerful temporal logic. The latter characterisations permit the use of efficient, off-the-shelf model-checking techniques, leading to provably correct algorithms and pseudo-decision procedures for diagnostic testing.
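
To give a flavour of the temporal logic involved (a textbook reachability property, not one of the paper's actual characterisations): in the modal mu-calculus, "some sequence of transitions leads to a state where output o is offered" is the least fixed point

\mu X.\ \langle o \rangle \mathit{true} \ \vee\ \langle \mathit{Act} \rangle X

where \langle a \rangle \varphi holds in a state that has an a-transition into a state satisfying \varphi. Evaluating such fixed-point formulae on the fault models with an off-the-shelf model checker yields the verdicts needed for diagnosis.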

[ PDF ] [ Bibtex ]

C. Haack, E. Poll, J. Schafer, and A. Schubert. Immutable Objects for a Java-like Language. Technical report: ICIS-R07009, April, Radboud University Nijmegen, 2007.

We extend a Java-like language with immutability specifications and a static type system for verifying immutability. A class modifier ‘immutable’ specifies that all class instances are immutable objects. Ownership types specify the depth of object states and enforce encapsulation of representation objects. The type system guarantees that the state of immutable objects does not visibly mutate during a program run. Provided immutability-annotated classes and methods are final, this is true even if immutable classes are composed with untrusted classes that follow Java's type system, but not the immutability type system.

[ PDF ] [ Bibtex ]

Christian Haack, Marieke Huisman, Joe Kiniry, and Erik Poll. Proceedings of the Workshop on Verification and Analysis of Multi-threaded Java-like Programs. Technical report: ICIS-R07021, August, Radboud University Nijmegen, 2007.

This report contains the proceedings of the first Workshop on Verification and Analysis of Multi-threaded Java-like Programs (VAMP), held on September 3, 2007, as a satellite event to CONCUR in Lisbon, Portugal.

[ PDF ] [ Bibtex ]

R. Hamberg, and F.W. Vaandrager. Using Model Checkers in an Introductory Course on Operating Systems. Technical report: ICIS-R07031, December, Radboud University Nijmegen, 2007.

During the last three years, we have been experimenting with the use of the Uppaal model checker in an introductory course on operating systems for first-year Computer Science students at the Radboud University Nijmegen. The course uses model checkers as a tool to explain, visualize and solve concurrency problems. Our experience is that students enjoy playing with model checkers because these make concurrency issues tangible. Even though it is hard to measure objectively, we think that model checkers really help students obtain a deeper insight into concurrency. In this article, we report on our experiences in the classroom, explain how mutual exclusion algorithms, semaphores and monitors can conveniently be modeled in Uppaal, and present some results on properties of small concurrent patterns.
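
The flavour of the concurrency problems treated in the course can be illustrated outside Uppaal too. Below is a minimal Python sketch of Peterson's mutual exclusion algorithm for two threads (our illustration, not course material); the corresponding model-checking exercise is to verify that both processes can never be in the critical section at the same time.

import threading

# Peterson's mutual exclusion algorithm for two threads (ids 0 and 1).
# Correctness relies on sequentially consistent shared memory; in Uppaal
# one would model flag and turn as shared variables and verify the
# invariant that both processes are never in the critical section at once.
flag = [False, False]
turn = 0
counter = 0                       # the shared resource to protect

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(10_000):
        flag[me] = True           # announce intent to enter
        turn = other              # yield priority to the other thread
        while flag[other] and turn == other:
            pass                  # busy-wait until it is safe to enter
        counter += 1              # critical section
        flag[me] = False          # leave the critical section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                    # 20000 if mutual exclusion held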

[ PDF ] [ Bibtex ]

Maarten de Mol, M.C.J.D. van Eekelen, and Rinus Plasmeijer. Confluent Term-Graph Reduction for Computer-Aided Formal Reasoning. Technical report: ICIS-R07012, May, Radboud University Nijmegen, 2007.

Sparkle is the dedicated proof assistant for the lazy functional programming language Clean, which is based on term-graph rewriting. Because equational reasoning is an important proof technique, Sparkle needs to support formally proving e1 = e2 in as many situations as possible. The most basic proof of equality is by means of reduction: if e1 reduces to e2, then e1 = e2 as well. The underlying reduction system of Sparkle therefore needs to be geared towards flexibility, allowing reduction to take place as often as possible. In this paper, we define a flexible term-graph reduction system for a simplified lazy functional language. Our system leaves the choice of redex free and uses single-step reduction; it is therefore better suited for formal reasoning than the well-established standard reduction systems for lazy functional languages, which usually fix a single redex and/or realize multi-step reduction only. We show that our system is correct with respect to the standard systems, by proving that it is confluent and allows standard lazy functional evaluation as a possible reduction path. Our reduction system is used in the foundation of Sparkle. Because it is based on a simplified functional language, it can be applied to any other functional language based on term-graph rewriting as well.
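
The difference between fixing a redex and leaving its choice free can be made concrete with a toy rewriter (a sketch of the general idea on terms rather than term graphs, not Sparkle's actual system): a single step may contract any redex, so a formal proof can pick whichever occurrence makes the equational goal match.

# Toy single-step term rewriting with a free choice of redex.
# Terms are nested tuples; uppercase strings are pattern variables.

RULES = [
    (('add', ('zero',), 'Y'), 'Y'),                 # add 0 y -> y
    (('add', ('succ', 'X'), 'Y'),
     ('succ', ('add', 'X', 'Y'))),                  # add (S x) y -> S (add x y)
]

def match(pattern, term, env):
    if isinstance(pattern, str):                    # pattern variable
        env[pattern] = term
        return True
    return (isinstance(term, tuple) and len(pattern) == len(term)
            and pattern[0] == term[0]
            and all(match(p, t, env) for p, t in zip(pattern[1:], term[1:])))

def substitute(pattern, env):
    if isinstance(pattern, str):
        return env[pattern]
    return (pattern[0],) + tuple(substitute(p, env) for p in pattern[1:])

def redexes(term, pos=()):
    """All positions (paths of argument indices) where some rule applies."""
    found = [pos] if any(match(l, term, {}) for l, _ in RULES) else []
    for i, sub in enumerate(term[1:], start=1):
        found += redexes(sub, pos + (i,))
    return found

def step(term, pos):
    """One reduction step at the freely chosen position."""
    if pos:
        i, rest = pos[0], pos[1:]
        return term[:i] + (step(term[i], rest),) + term[i + 1:]
    for lhs, rhs in RULES:
        env = {}
        if match(lhs, term, env):
            return substitute(rhs, env)
    raise ValueError('no redex at this position')

two = ('succ', ('succ', ('zero',)))
t = ('add', two, ('add', ('zero',), two))           # add 2 (add 0 2)
print(redexes(t))                                   # [(), (2,)] -- pick either
print(step(t, (2,)))                                # contract the inner redex: add 2 2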

[ PDF ] [ Bibtex ]

Maarten de Mol, M.C.J.D. van Eekelen, and Rinus Plasmeijer. The Mathematical Foundation of the Proof Assistant Sparkle. Technical report: ICIS-R07025, November, Radboud University Nijmegen, 2007.

This report presents the mathematical foundation of the proof assistant SPARKLE, which is dedicated to the lazy functional language CLEAN. The mathematical foundation provides a formalization of the programming, logic and proof languages that are supported by SPARKLE. Furthermore, it formalizes the reduction of programs and the semantics of properties, and provides proofs for the soundness of the defined tactics.

[ PDF ] [ Bibtex ]

Wojciech Mostowski, and Erik Poll. Testing the Java Card Applet Firewall. Technical report: ICIS-R07029, December, Radboud University Nijmegen, 2007.

In this paper we discuss the methodology and results of testing the Java Card applet firewall mechanism. The main motivation for this work is the complexity of the firewall: given this complexity, non-compliance of cards with the official specification is not unlikely, and firewall implementation faults may lead to serious security issues. Although we did not discover any serious problems on our test cards, we report a few minor specification violations. Only one violation, on one card, could be considered unsafe, in that it might introduce a security risk for specific applications.

[ PDF ] [ Bibtex ]

J. Nabukenya, P. van Bommel, H.A. (Erik) Proper, and G.-J. de Vreede. An Evaluation Instrument for Collaborative Processes: Application to Organizational Policy-Making. Technical report: ICIS-R07017, July, Radboud University Nijmegen, 2007.

Decision-making in organizations is guided by policies. Organizational policy-making is a complex process in which several parties are involved, with multiple backgrounds, incompatible interests, and diverging areas of interest, yet they all have to be brought together to produce an acceptable policy result. Therefore, we propose to use techniques from collaboration engineering (CE) in this context. There is hardly any experience with CE in the field of organizational policy-making. In order to evaluate the effectiveness and efficiency of CE in organizational policy-making, it is important to have a systematic evaluation instrument. We distinguish between general and domain-specific indicators. Moreover, we consider measurement means and operationalization tools, such that organizational policy-making stakeholders can apply our instrument in their own organization.

[ PDF ] [ Bibtex ]

J. Nabukenya, P. van Bommel, and H.A. (Erik) Proper. Repeatable Collaboration Processes for Mature Organizational Policy Making. Technical report: ICIS-R07019, July, Radboud University Nijmegen, 2007.

Organizational policy-making processes are complex processes in which many people are involved, and very often the results are not what the different stakeholders intended. Since policies play a major role in key decisions concerning the future of organizations, our research aims at improving policies on the basis of collaboration. To achieve this goal, we apply the practice of collaboration engineering to the field of organizational policy making, using the thinkLet as a basic building block for facilitation interventions that create a repeatable pattern of collaboration among people working together towards a goal. Our case studies show that policy-making processes do indeed need collaboration support and that the resulting policies can be expected to improve.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, and H.A. (Erik) Proper. Matching Cognitive Characteristics of Actors and Tasks in Information Systems Engineering. Technical report: ICIS-R07026, November, Radboud University Nijmegen, 2007.

In daily practice, discrepancies may exist in the suitability match of actors and the tasks that have been allocated to them. A formal framework for cognitive matchmaking and a prototype implementation are introduced as a possible solution to improve the fit between actors and tasks. A case study has been conducted to clarify how the proposed cognitive matchmaking approach can be utilized in information systems engineering. The inductive-hypothetical research strategy has been applied as an overall research approach. A separate iteration of the strategy has been applied within the case study.

[ Missing PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, and H.A. (Erik) Proper. Information Systems Engineering supported by Cognitive Matchmaking. Technical report: ICIS-R07023, November, Radboud University Nijmegen, 2007.

In daily practice, discrepancies may exist in the suitability match of actors and the tasks that have been allocated to them. Formal theory and the prototype of a cognitive matchmaker system are introduced as a solution to improve the fit between actors and tasks. A case study has been conducted to clarify how the proposed cognitive matchmaker system can be utilized in information systems engineering. The inductive-hypothetical research strategy has been applied when performing the case study.

[ Missing PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Visualizing Formalisms with ORM Models. Technical report: ICIS-R07018, July, Radboud University Nijmegen, 2007.

During the development of theoretical frameworks, researchers often graphically represent formal textual notations as part of a developed theory. This may lead to enrichments of, and new insights into, a theory. One possibility for the graphical representation of formalisms is the use of modeling languages such as ORM. This paper deals with the technique of visualizing formalisms using ORM models and shows the advantages of graphically representing a formal theoretical framework. An application of the approach that has already been practised successfully is elaborated: it concerns a theoretical framework of knowledge-intensive task properties and shows how the approach to visualize formalisms with ORM can be materialized.

[ Missing PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Characterizing Knowledge Intensive Tasks indicating Cognitive Requirements - Scenarios in Methods for Specific Tasks. Technical report: ICIS-R07005, February, Radboud University Nijmegen, 2007.

Methods for specific tasks include conceptual modelling in the context of information systems and requirements engineering in software development. Such methods dictate a specific way of working by describing the knowledge-intensive tasks that must be fulfilled while applying the method. An actor applying a method for specific tasks wishes to reach the goals following from that way of working, but may experience difficulties when the cognitive abilities required to fulfil a certain task differ from the cognitive abilities the actor possesses. This paper focusses on the cognitive abilities required to fulfil a knowledge-intensive task while applying a method for specific tasks, based on a categorization and characterization of knowledge-intensive tasks and scenarios in conceptual modelling of information systems and requirements engineering.

[ PDF ] [ Bibtex ]

S.J. Overbeek, P. van Bommel, H.A. (Erik) Proper, and D.B.B. Rijsenbrij. Matching Cognitive Characteristics of Actors and Tasks. Technical report: ICIS-R07013, June, Radboud University Nijmegen, 2007.

Acquisition, application and testing of knowledge by actors trying to fulfill knowledge-intensive tasks is becoming increasingly important for organizations, due to trends such as globalization, the emergence of virtual organizations and growing product complexity. An actor's management of basic cognitive functions, however, is at stake because of this increase in the need to acquire, apply and test knowledge during daily work. This paper specifically focusses on matchmaking between the cognitive characteristics supplied by an actor and the cognitive characteristics required to fulfill a certain knowledge-intensive task, based on a categorization and characterization of actors and knowledge-intensive tasks. A framework for a cognitive matchmaker system is introduced to compute actual match values and to reason about the suitability of a specific actor for a task of a certain type.
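
A match value of the kind such a system computes can be illustrated with a small sketch (the scoring rule and the characteristic names below are our invention, not the paper's formal framework): compare the characteristics an actor supplies with those a task type requires.

# Invented illustration of a cognitive match value: the degree to which
# the characteristics required by a task are supplied by the actor.

def match_value(supplied, required):
    """supplied, required: dicts mapping characteristic -> level in (0, 1]."""
    if not required:
        return 1.0
    met = sum(min(supplied.get(c, 0.0) / lvl, 1.0)
              for c, lvl in required.items())
    return met / len(required)

actor = {'acquisition': 0.9, 'application': 0.6, 'testing': 0.4}
task = {'acquisition': 0.5, 'testing': 0.8}       # e.g. a review-style task

print(round(match_value(actor, task), 2))         # 0.75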

[ Missing PDF ] [ Bibtex ]

Wolter Pieters, and Robert van Haren. E-voting discourses in the UK and the Netherlands. Technical report: ICIS-R07020, August, Radboud University Nijmegen, 2007.

A qualitative case study of the e-voting discourses in the UK and the Netherlands was performed based on the theory of strategic niche management. In both countries, eight e-voting experts were interviewed on their expectations, risk estimations, cooperation and learning experiences. The results show that differences in these variables can partly explain the variations in the embedding of e-voting in the two countries, from a qualitative point of view.

[ PDF ] [ Bibtex ]

E.D. Schabell. Taking a peek inside the cmlFramework. Technical report: ICIS-R07008, March, Radboud University Nijmegen, 2007.

This paper walks you through a demonstration run of the Collaborative Modeling Lab (CML) Framework prototype. It starts with a conceptual overview of the component structure. The reader is then walked through a processing run in which a NIAM Normal Form (NNF) grammar is used to Filter a Logbook, produce a Contract, Distill this into an Essence, and finally use the Builder to generate a Model in ORM.
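
The processing run reads as a pipeline, which the following skeleton mirrors (only the stage names come from the abstract; all bodies are invented placeholders, not the cmlFramework's actual code):

# Pipeline skeleton mirroring the processing run described above.

def filter_logbook(logbook, grammar):
    """Filter: keep the logbook entries that parse under the grammar."""
    return [line for line in logbook if grammar(line)]       # -> Contract

def distill(contract):
    """Distill: reduce the contract to its essential sentences."""
    return sorted(set(contract))                             # -> Essence

def build_model(essence):
    """Builder: generate a model (here just a labelled dict)."""
    return {'kind': 'ORM model', 'facts': essence}           # -> Model

nnf_grammar = lambda line: line.endswith('.')                # stand-in for NNF
logbook = ['Each Person works for a Company.', 'hmm, note to self']
model = build_model(distill(filter_logbook(logbook, nnf_grammar)))
print(model)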

[ PDF ] [ Bibtex ]

O. Shkaravska, R. van Kesteren, and M.C.J.D. van Eekelen. Type Checking and Weak Type Inference for Polynomial Size Analysis of First-Order Functions. Technical report: ICIS-R07004, January, Radboud University Nijmegen, 2007.

We present a size-aware type system for first-order shapely functions. Here, a function is called shapely when the size of the result is determined exactly by a polynomial in the sizes of the arguments. Examples of shapely functions are matrix multiplication and the Cartesian product of two lists. The type checking problem for the type system is shown to be undecidable in general. We define a natural syntactic restriction such that the type checking becomes decidable, even though size polynomials are not necessarily linear. Furthermore, an algorithm for weak type inference for this system is given.
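
For instance (standard notation for size-aware types, not quoted from the report), writing L^{n}(\alpha) for the type of lists of length n, the Cartesian product mentioned above, with list append as a linear contrast, would receive types along these lines:

\mathit{append} : L^{n}(\alpha) \times L^{m}(\alpha) \to L^{n+m}(\alpha)
\qquad
\mathit{cartesian} : L^{n}(\alpha) \times L^{m}(\beta) \to L^{n \cdot m}(\alpha \times \beta)

The second type shows why linear size polynomials do not suffice: n \cdot m has degree two in the argument sizes.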

[ PDF ] [ Bibtex ]

M.C.J.D. van Eekelen, Stefan ten Hoedt, Rene Schreurs, and Yaroslav Usenko. Modeling and Verifying a Real-Life Industrial Session-Layer Protocol in mCRL2. Technical report: ICIS-R07014, June, Radboud University Nijmegen, 2007.

This report describes the analysis of an industrial implementation of the session layer of a load-balancing software system. The software comprises 7.5 thousand lines of C code and is used to distribute print jobs among several document processors (workers). A large part of this commercially used software system has been modeled closely and analyzed using process-algebraic techniques, and several critical issues were discovered. Since the model was close to the code, all problems found in the model could be traced back to the actual code, resulting in concrete suggestions for improvement. All in all, the analysis significantly improved the quality of this real-life system.

[ PDF ] [ Bibtex ]

Hendrik Tews. Proceedings of the C/C++ Verification workshop. Technical report: ICIS-R07015, June, Radboud University Nijmegen, 2007.

This report contains the proceedings of the first C/C++ Verification Workshop held in Oxford on July 2nd, 2007 as a side event of IFM 2007.

[ PDF ] [ Bibtex ]