IRIS Research Publications



Journal

P.J.M. Frederiks, A.H.M. ter Hofstede, and E. Lippe. A Unifying Framework for Conceptual Data Modelling Concepts. In: Information and Software Technology, Nr: 1, Vol: 39, Pages: 15-25, January, 1997.

For successful information systems development, conceptual data modelling is essential. Nowadays many techniques for conceptual data modelling exist; examples are NIAM, FORM, PSM, many (E)ER variants, IFO, and FDM. In-depth comparison of the concepts of these techniques is very difficult, as the mathematical formalisations of these techniques, if they exist at all, are very different. As such there is a need for a unifying formal framework providing a sufficiently high level of abstraction.

In this paper the use of category theory for this purpose is addressed. Well-known conceptual data modelling concepts are discussed from a category theoretic point of view. Advantages and disadvantages of the approach chosen will be outlined.

[ PDF ] [ Bibtex ]

A.H.M. ter Hofstede, and Th.P. van der Weide. Deriving Identity from Extensionality. In: International Journal of Software Engineering and Knowledge Engineering, Nr: 2, Vol: 8, Pages: 189-221, June, 1997.

In recent years, a number of proposals have been made to extend conventional conceptual data modeling techniques with concepts for modeling complex object structures. Among the most prominent of these is the concept of collection type. A collection type is an object type whose instances are sets of instances of another object type. A drawback of the introduction of such a new concept is that the formal definition of the technique involved becomes considerably more complex. This is a result of the fact that collection types are populatable types, and such types tend to complicate updates. In this paper it is shown how a new kind of constraint, the extensional uniqueness constraint, allows for an alternative treatment of collection types that avoids update problems. The formal definition of this constraint type is presented, other advantages of its introduction are discussed, and its consequences for, among other things, identification schemes are elaborated.
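The idea behind the extensional uniqueness constraint described above can be illustrated informally: if instances of a collection type are identified purely by their extension, then two collections with the same members are one and the same instance. A minimal Python sketch, with all names invented for illustration (not the paper's formal machinery):

```python
# Illustrative sketch only: a collection type whose instances are
# identified purely by their extension, i.e. the set of member
# instances -- the idea behind the extensional uniqueness constraint.

class Convoy:
    """Collection type over ship names; identity derives from extension."""
    _registry = {}  # frozen extension -> unique instance

    def __new__(cls, members):
        key = frozenset(members)
        if key not in cls._registry:
            obj = super().__new__(cls)
            obj.members = key
            cls._registry[key] = obj
        return cls._registry[key]

a = Convoy({"QE2", "Norway"})
b = Convoy({"Norway", "QE2"})
assert a is b  # equal extension implies identical instance
```

Interning instances by their frozen extension makes object identity coincide with extensional equality, which is what the constraint enforces at the model level.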

[ PDF ] [ Bibtex ]

J.W.G.M. Hubbers, and A.H.M. ter Hofstede. Formalization of Communication and Behaviour in Object-Oriented Analysis. In: Data & Knowledge Engineering, Nr: 2, Vol: 23, Pages: 147-184, August, 1997.

[ Missing PDF ] [ Bibtex ]

F.C. Berger, and P. van Bommel. Augmenting a Characterization Network with Semantical Information. In: Information Processing & Management, Nr: 4, Vol: 33, Pages: 453-479, 1997.

[ Missing PDF ] [ Bibtex ]

A.H.M. ter Hofstede, and T.F. Verhoef. On the Feasibility of Situational Method Engineering. In: Information Systems, Nr: 6/7, Vol: 22, Pages: 410-422, September, 1997.

[ Missing PDF ] [ Bibtex ]

A.H.M. ter Hofstede, E. Lippe, and Th.P. van der Weide. Applications of a Categorical Framework for Conceptual Data Modeling. In: Acta Informatica, Nr: 12, Vol: 34, Pages: 927-963, December, 1997.

For successful information systems development, conceptual data modeling is essential. Nowadays a plethora of techniques for conceptual data modeling exist. Many of these techniques lack a formal foundation, and a lot of theory, e.g. concerning updates or schema transformations, is highly data model specific. As such there is a need for a unifying formal framework providing a sufficiently high level of abstraction. In this paper the use of category theory for this purpose is addressed. Well-known conceptual data modeling concepts, such as relationship types, generalization, specialization, and collection types, are discussed from a categorical point of view. An important advantage of this framework is its "configurable semantics". Features such as null values, uncertainty, and temporal behavior can be added by selecting appropriate instance categories. The addition of these features usually requires a complete redesign of the formalization in traditional set-based approaches to semantics. Applications of the framework in the context of schema transformations and improved automated modeling support are discussed.

[ PDF ] [ Bibtex ]

A.H.M. ter Hofstede, H.A. (Erik) Proper, and Th.P. van der Weide. Exploiting Fact Verbalisation in Conceptual Information Modelling. In: Information Systems, Nr: 6/7, Vol: 22, Pages: 349-385, September, 1997.

An increasing number of approaches to conceptual information modelling use verbalisation techniques as an aid to derive a model for a given universe of discourse (the problem domain). The underlying assumption is that by elaborate verbalisation of samples of facts, taken from the universe of discourse, one can elicit a complete overview of the relevant concepts and their inter-relationships. These verbalisations also provide a means to validate the resulting model in terms of expressions familiar to users. This approach can be found in modern ER variations, Object-Role Modelling variations, as well as different Object-Oriented Modelling techniques.

After the modelling process has ended, the fact verbalisations are hardly put to any further use. As we believe this to be unfortunate, this article is concerned with the exploitation of fact verbalisations after the actual information system has been completed. The verbalisations are exploited in four directions. We consider their use for a conceptual query language, for the verbalisation of instances, for the description of the contents of a database, and for the verbalisation of queries in a computer supported query environment. To put everything in perspective, we also provide an example session with an envisioned tool for end-user query formulation that exploits the verbalisations.

[ PDF ] [ Bibtex ]

H.A. (Erik) Proper. Data Schema Design as a Schema Evolution Process. In: Data & Knowledge Engineering, Nr: 2, Vol: 22, Pages: 159-189, 1997.

In an information system, a key role is played by the underlying data schema. This article starts out from the view that the entire modelling process of an information system's data schema can be seen as a schema transformation process: a process that starts with an initial draft conceptual schema and ends with an internal database schema for some implementation platform. This allows us to describe the transformation process of a database design as the evolution of a schema through a universe of data schemas. Doing so allows for a better understanding of the actual design process, countering the problem of `software development under the lamppost'. Even when the information system design is finalised, the data schema can evolve further due to changes in the requirements on the system.

We present a universe of data schemas that allows us to describe the underlying data schemas at all stages of their development. This universe of data schemas is used as a case study on how to describe the complete evolution of a data schema with all its relevant aspects. The theory is general enough to cater for more modelling concepts, or different modelling approaches. To actually model the evolution of a data schema, we present a versioning mechanism that allows us to model the evolutions of the elements of data schemas and their interactions, leading to a better understanding of a schema design process as a whole. Finally, we also discuss the relationship between this simple versioning mechanism and general purpose version management systems.
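The versioning mechanism described above can be pictured, very roughly, as a version chain per schema element: each transformation step records a new version linked to its predecessors. A hedged Python sketch under invented names (the article's mechanism also models interactions between element evolutions, which is omitted here):

```python
# Toy sketch of schema-element versioning: each evolution step of a
# data schema element is appended to its version history, so the whole
# design trajectory from draft conceptual schema to internal schema
# remains inspectable. All names are illustrative inventions.

class SchemaElement:
    def __init__(self, name, definition):
        self.name = name
        self.versions = [definition]  # version history, oldest first

    def evolve(self, new_definition):
        """Record one schema transformation as a new version."""
        self.versions.append(new_definition)

    def current(self):
        return self.versions[-1]

person = SchemaElement("Person", {"name": "string"})
person.evolve({"name": "string", "birthdate": "date"})       # conceptual refinement
person.evolve({"name": "varchar(80)", "birthdate": "date"})  # step toward internal schema
assert person.current()["name"] == "varchar(80)"
assert len(person.versions) == 3
```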

[ PDF ] [ Bibtex ]

J.J. Sarbo. Building sub-knowledge bases using concept lattices. In: The Computer Journal, Nr: 10, Vol: 39, Pages: 868-875, 1997.

[ Missing PDF ] [ Bibtex ]

Book

P.J.M. Frederiks. Object-Oriented Modeling based on Information Grammars. University of Nijmegen, 1997.

[ Missing PDF ] [ Bibtex ]

Conference

B.C.M. Wondergem, P. van Bommel, T.W.C. Huibers, and Th.P. van der Weide. Towards an Agent-Based Retrieval Engine. In: Proceedings of the 19th BCS-IRSG Colloquium on IR Research, Edited by: J. Furner, and D.J. Harper. Pages: 126-144, April, 1997.

This article describes and analyses the retrieval component of the Profile Information Filtering Project of the University of Nijmegen. The overall structure of this project, serving as the context for the retrieval component, is stated. This component is called the Retrieval Engine and will be implemented as an intelligent retrieval agent, using sophisticated techniques from artificial intelligence. A synthesis between information retrieval and information filtering has to be found, coping with challenging problems stemming from the combination of the difficulties of both fields. The Retrieval Engine should be capable of giving an explanation of why a document was found relevant to the information need of the user. The techniques used will rely on sophisticated natural language processing. The techniques to establish relevance degrees for documents will consist of two parts: a symbolic and a numeric one. This allows for a mechanism that is both explainable and exact. Interesting approaches for obtaining this are stated.

[ PDF ] [ Bibtex ]

A.T. Arampatzis, T. Tsoris, and C.H.A. Koster. Irena: Information Retrieval Engine based on Natural language Analysis. In: Proceedings of the RIAO'97 Conference, Pages: 159-175, 1997.

[ Missing PDF ] [ Bibtex ]

A.T. Arampatzis, Th.P. van der Weide, P. van Bommel, and C.H.A. Koster. Linguistic Variation in Information Retrieval and Filtering. In: Informatiewetenschap 1997, Edited by: P.M.E. de Bra. Pages: 7-10, 1997.

In this paper, a natural language approach to Information Retrieval (IR) and Information Filtering (IF) is described. Rather than keywords, noun phrases are used both for document description and as a query language, resulting in a marked improvement of retrieval precision. Recall can then be enhanced by applying normalization to the noun phrases and some other constructions. This new approach is incorporated in the Information Filtering Project Profile. The overall structure of the Profile project is described, focusing especially on the Parsing Engine involved in the natural language processing. Effectiveness and efficiency issues concerning the Parsing Engine are elaborated. The major contributions of this research include properties of grammars and parsers specialized for IR/IF (properties such as coverage, robustness, efficiency, and ambiguity), normalization of noun phrases, and similarity measures for noun phrases.
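Two of the ingredients mentioned, noun-phrase normalization and noun-phrase similarity, can be approximated very crudely in Python. The real Parsing Engine performs full syntactic analysis, so the following is only a stand-in sketch with invented helpers:

```python
# Crude stand-ins for two ideas from the abstract: normalising a noun
# phrase (lowercasing and dropping function words, so word order and
# determiners no longer matter) and measuring similarity between two
# phrases (here a simple Jaccard overlap of normalised word sets).

STOP = {"the", "a", "an", "of"}

def normalise(phrase):
    words = [w.lower() for w in phrase.split() if w.lower() not in STOP]
    return frozenset(words)

def similarity(p, q):
    a, b = normalise(p), normalise(q)
    return len(a & b) / len(a | b)  # Jaccard overlap in [0, 1]

assert normalise("retrieval of information") == normalise("the Information Retrieval")
assert similarity("retrieval of information", "information filtering") == 1 / 3
```

Under this normalisation, "retrieval of information" and "the information retrieval" collapse to the same representation, which is the kind of linguistic variation the paper aims to neutralise.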

Keywords: Linguistic Variation, Information Retrieval, Information Filtering

[ PDF ] [ Bibtex ]

A.P. Barros, A.H.M. ter Hofstede, and H.A. (Erik) Proper. Towards Real-Scale Business Transaction Workflow Modelling. In: Proceedings of the Ninth International Conference CAiSE'97 on Advanced Information Systems Engineering, Barcelona, Spain, EU, Edited by: A. Olivé, and J.A. Pastor. Lecture Notes in Computer Science, Vol: 1250, Pages: 437-450, June, Springer, 1997, ISBN 3540631070.

While the specification languages of workflow management systems focus on process execution semantics, the successful development of workflows relies on a fuller conceptualisation of business processing, including process semantics. For this, a wellspring of modelling techniques, paradigms and informal-formal method extensions which address the analysis of organisational processing structures (enterprise modelling) and communication (based on speech-act theory), is available. However, the characterisations - indeed the cognition - of workflows still appears coarse.

In this paper, we provide the complementary, empirical insight of a real-scale business transaction workflow. The development of the workflow model follows a set of principles which we believe address workflow modelling suitability. Through the principles, advanced considerations including asynchronous as well as synchronous messaging, temporal constraints and a service-oriented perspective are motivated. By illustrating the suitability principles and with it the inherent complexity of business transaction domains, we offer timely insights into workflow specification extension, and workflow reuse and deployment.

[ PDF ] [ Bibtex ]

A.P. Barros, A.H.M. ter Hofstede, and H.A. (Erik) Proper. Essential Principles for Workflow Modelling Effectiveness. In: Proceedings of the Third Pacific Asia Conference on Information Systems (PACIS'97), Edited by: G.G. Gable, and R.A.G. Webber. Pages: 137-147, April, 1997.

By incorporating aspects of coordination and collaboration, workflow implementations of information systems require a sound conceptualisation of business processing semantics. Traditionally, the success of conceptual modelling techniques has depended largely on the adequacy of conceptualisation, expressive power, comprehensibility and formal foundation. An equally important requirement, particularly with the increased conceptualisation of business aspects, is business suitability.

In this paper, the focus is on the business suitability of workflow modelling for a commonly encountered class of (operational) business processing, e.g. those of insurance claims, bank loans and land conveyancing. A general assessment is first conducted on some integrated techniques characterising well-known paradigms - structured process modelling, object-oriented modelling, behavioural process modelling and business-oriented modelling. Through this, an insight into business suitability within the broader perspective of technique adequacy, is gained. A specific business suitability diagnosis then follows using a particular characterisation of business processing, i.e. one where the intuitive semantics and inter-relationship of business services and business processes are nuanced. As a result, five business suitability principles are elicited. These are proposed for a more detailed understanding and (synthetic) development of workflow modelling techniques. Accordingly, further insight into workflow specification languages and workflow globalisation in open distributed architectures may also be gained.

[ PDF ] [ Bibtex ]

P. van Bommel, and Th.P. van der Weide. Educational Flow in Computing Science Courses. In: 3rd International Conference on Applied Informatics (ICAI 97), 1997.

In this paper we describe the organization of a Student Research Lab (SRL) and a Student Teaching Lab (STL) in the context of a computing science curriculum. The SRL and STL are inspired by the following problems found in many academic computing science curricula today: (1) the preparation for working as an IT professional is not given sufficient attention, and (2) coherence within and between educational components is too weak. Our solution to these problems consists of the SRL and STL, where the flow of educational results is operationalized and formalized.

[ PDF ] [ Bibtex ]

P. van Bommel, and T. van Weert. Modern Universitair Informatica Onderwijs. In: Nationaal Informatica Onderwijs Congres (NIOC 97), March, 1997, In Dutch.

[ Missing PDF ] [ Bibtex ]

J.J. Sarbo, and J.I. Farkas. A data representation for abstract reasoning. In: Proceedings of the Seventh BENELEARN Conference, Edited by: W. Daelemans, P. Flach, and A. van den Bosch. Pages: 99-108, 1997.

[ Missing PDF ] [ Bibtex ]

B.C.M. Wondergem, P. van Bommel, T.W.C. Huibers, and Th.P. van der Weide. An Electronic Commerce Paradigm for Information Discovery. In: Proceedings of the Conferentie Informatiewetenschap (CIW'1997): Let your Browser do the Walking, Edited by: P.M.E. de Bra. Pages: 56-60, November, 1997.

This article investigates the connection between Electronic Commerce (EC) and Information Discovery (ID). ID is the synthesis of distributed Information Retrieval and Information Filtering, realised with intelligent agents and information brokers. Currently, no link exists between EC and ID. We argue that this link consists of a cost model for ID. We therefore propose several (types of) cost models, which enable the application of EC to the whole of ID. This is illustrated with examples.

[ PDF ] [ Bibtex ]

Reports

A.I. Bleeker, P.D. Bruza, and Th.P. van der Weide. A User-centred View on Hypermedia Design. Technical report: CSI-R9707, Computing Science Institute, University of Nijmegen, 1997.

Ever-increasing quantities of information, together with new developments in storage and retrieval methods, are confronting today's users with a huge information supply that they can barely oversee. Hypermedia information retrieval systems try to assist users in finding their way through this supply, but in reality this is where many systems fall short. The reason is that most of them do not really communicate with users or find out what they really want. Instead, a bottom-up approach that reasons mainly from an information-oriented viewpoint has been the major design focus. We argue that the design of hypermedia systems should be based on an integration of both a top-down (user-oriented) and a bottom-up (information-oriented) approach, to develop hypermedia systems that know and understand their users. In this article, we present initial results of a new user-oriented approach.

[ PDF ] [ Bibtex ]

A.T. Arampatzis. Preprocessing documents in Profile. Technical report: CSI-N9706, May, Computing Science Institute, University of Nijmegen, Nijmegen, The Netherlands, 1997.

[ Missing PDF ] [ Bibtex ]

A.T. Arampatzis, Th.P. van der Weide, P. van Bommel, and C.H.A. Koster. Syntactical Analysis for Text Filtering. Technical report: CSI-R9721, November, Computing Science Institute, University of Nijmegen, Nijmegen, The Netherlands, 1997.

[ Missing PDF ] [ Bibtex ]

A.H.M. ter Hofstede, and M.E. Orlowska. On the Complexity of Some Verification Problems in Process Control Specifications. Technical report, April, Faculty of Information Technology, Queensland University of Technology, Brisbane, Queensland, Australia, 1997.

[ Missing PDF ] [ Bibtex ]

J.W.G.M. Hubbers, and T.F. Verhoef. A Real-life Application of Software Component Modelling. Technical report: CSI-R9705, Computing Science Institute, University of Nijmegen, Nijmegen, The Netherlands, 1997.

[ Missing PDF ] [ Bibtex ]

J.W.G.M. Hubbers, and A.H.M. ter Hofstede. An Investigation of the Core Concepts of Object-Oriented Conceptual Data Modeling. Technical report: CSI-R9706, Computing Science Institute, University of Nijmegen, 1997.

[ Missing PDF ] [ Bibtex ]

B.C.M. Wondergem, P. van Bommel, T.W.C. Huibers, and Th.P. van der Weide. How is this document's relevancy derived? Personalizing Inferences in Preferential Models. Technical report, Computing Science Institute, University of Nijmegen, 1997.

In Information Retrieval, user preferences and domain knowledge play an important role. This article shows how to incorporate domain knowledge in a logical framework and provides a mechanism to exploit user preferences to personalize domain knowledge, based on the inferences made in the matching functions. The matching functions are essentially symbolic logical inferences. The logic used in this article is that of Preferential Models, which are augmented with domain knowledge by providing an enriched aboutness relation. However, the techniques described in this article are applicable to other logics as well. A way to personalize the domain knowledge is given, which also gives the user insight into the workings of the matching functions. In addition, sound inference rules, tailor-made for the domain knowledge, are provided.
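The interplay of domain knowledge and user preferences in the matching function can be illustrated with a toy aboutness check in Python. All names are invented and the article's Preferential Models machinery is far richer; this only shows how switching a knowledge link off personalizes the inference:

```python
# Toy illustration: a document is "about" a query term either directly,
# or indirectly via a domain-knowledge link; user preferences can reject
# individual links, personalizing which inferences the matcher may make.

DOMAIN = {"jaguar": {"car", "animal"}}  # invented toy knowledge base

def about(doc_terms, query_term, rejected_links=frozenset()):
    """Derive relevance; `rejected_links` are knowledge links the user rejects."""
    if query_term in doc_terms:
        return True  # direct aboutness
    related = DOMAIN.get(query_term, set()) - set(rejected_links)
    return bool(related & set(doc_terms))  # inferred via accepted links only

doc = {"car", "engine"}
assert about(doc, "jaguar")            # inferred via the "car" link
assert not about(doc, "jaguar", {"car"})  # user rejects that link
```

Because the rejected link is visible in the call, the user can also see *why* a document was (or was not) derived as relevant, which is the kind of insight into the matching function the abstract mentions.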

[ Missing PDF ] [ Bibtex ]

P.A. Jones, P. van Bommel, C.H.A. Koster, and Th.P. van der Weide. Stratified Recursive Backup for Best First Search. Technical report: CSI-R9720, November, Information Systems Group, Computing Science Institute, University of Nijmegen, The Netherlands, EU, 1997.

In this paper a new abstract machine model, the Stratified Recursive Backup machine model, is described. This machine model can be used to implement best first search algorithms efficiently. Two applications of best first search, a text layout system and a natural language parser, are analyzed to provide an in-depth understanding of the Stratified Recursive Backup machine model.
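For reference, the family of algorithms the machine model targets, best first search, can be sketched generically in Python with a priority-queue frontier. The Stratified Recursive Backup machine itself is not reproduced here; this is only the standard algorithm it is designed to execute:

```python
import heapq

# Generic best first search: repeatedly expand the cheapest frontier
# state until a goal state is popped. States, successors, cost function
# and goal test in the usage below are invented toy examples.

def best_first_search(start, successors, cost, is_goal):
    frontier = [(cost(start), start)]
    seen = set()
    while frontier:
        _, state = heapq.heappop(frontier)  # cheapest state first
        if is_goal(state):
            return state
        if state in seen:
            continue
        seen.add(state)
        for nxt in successors(state):
            heapq.heappush(frontier, (cost(nxt), nxt))
    return None

# Toy usage: find the smallest reachable multiple of 7 above 20,
# moving by "+3" or "*2" from 1 and preferring small numbers.
result = best_first_search(
    1,
    successors=lambda n: [n + 3, n * 2],
    cost=lambda n: n,
    is_goal=lambda n: n > 20 and n % 7 == 0,
)
assert result == 28
```

Because the frontier is ordered by cost, the first goal popped is the cheapest reachable one; efficient management of exactly this frontier and of abandoned partial paths is what an abstract machine for best first search has to provide.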

[ PDF ] [ Bibtex ]

Professional

J.W.G.M. Hubbers. De hoge hoed van UML (The top hat of UML). In: Software release magazine, Pages: 49-52, July, Array publications, Alphen aan den Rijn, The Netherlands, EU, 1997, In Dutch.

[ Missing PDF ] [ Bibtex ]