
J. Hughes, and M. Niqui. * Admissible Digit Sets and a Modified Stern--Brocot Representation. *Technical report: NIII-R0401, January, Radboud University Nijmegen, 2004.

We examine a special case of admissible representations of the closed interval, namely those which arise via sequences of a finite number of Möbius transformations. We regard certain sets of Möbius transformations as a generalized notion of digits and introduce sufficient conditions under which such a "digit set" yields an admissible representation of [0,+∞]. Furthermore, we establish the productivity and correctness of the homographic algorithm for such "admissible" digit sets. In the second part of the paper we discuss the representation of positive real numbers based on the Stern--Brocot tree. We show how the usual Stern--Brocot representation can be modified to yield a ternary admissible digit set.
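The modified ternary representation itself is not reproduced here, but the classic Stern--Brocot representation it starts from is easy to sketch. The following Python fragment is an illustration only (the function name and the binary L/R digit alphabet are ours, not the paper's admissible digit set): each emitted digit corresponds to a Möbius transformation applied to the remaining value.

```python
def stern_brocot(p, q):
    """Classic Stern--Brocot digit string of the positive rational p/q.

    Digit R consumes x -> x - 1 (value above 1); digit L consumes
    x -> x / (1 - x) (value below 1); the empty string denotes 1/1.
    """
    digits = []
    while p != q:              # the root of the tree is 1/1
        if p < q:              # value below 1: descend into the left subtree
            digits.append("L")
            q -= p
        else:                  # value above 1: descend into the right subtree
            digits.append("R")
            p -= q
    return "".join(digits)

print(stern_brocot(3, 7))   # LLRR
```

For 3/7, whose continued fraction is [0; 2, 3], this yields the expected path L L R R.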


M.C.J.D. van Eekelen, and M.J. de Mol. * Mixed Lazy/Strict Natural Semantics. *Technical report: NIII-R0402, January, Radboud University Nijmegen, 2004.

Explicitly enforcing strictness is often used by functional programmers as an important tool for making applications meet time and space efficiency requirements. Few functional programmers, however, are familiar with the consequences of explicitly enforcing strictness for formal reasoning about their programs. Some "folklore" knowledge is well known, but up to now no formal model for reasoning with enforced strictness has been available. This has greatly hampered formal reasoning about mixed lazy/strict programs. This paper presents a model for formal reasoning with enforced strictness based on John Launchbury's natural lazy graph semantics. Lazy graph semantics are widely considered to be essential for lazy functional programming languages. In this paper Launchbury's semantics are extended with an explicit strict let construct. We show that such a construct extends the semantics in a natural way. We discuss two variants of this construct and prove correctness and adequacy of our mixed semantics for both variants.
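The observable difference between a lazy binding and a strict let can be sketched outside the paper's Clean/Haskell setting. The Python fragment below is an illustrative stand-in (the names lazy_let, strict_let, and bottom are invented; the paper works with Launchbury-style graph semantics): a lazy binding delays evaluation of the bound expression, while a strict let forces it at binding time, which changes behaviour exactly when the bound expression is undefined.

```python
def lazy_let(thunk, body):
    """Lazy binding: pass the unevaluated thunk; body may never force it."""
    return body(thunk)

def strict_let(thunk, body):
    """Strict let: force the bound expression before evaluating the body."""
    value = thunk()
    return body(lambda: value)

bottom = lambda: 1 // 0   # stand-in for an undefined (diverging) expression

print(lazy_let(bottom, lambda x: 42))    # 42: the unused binding is never forced

try:
    strict_let(bottom, lambda x: 42)     # forces bottom at binding time
except ZeroDivisionError:
    print("strict let forced the undefined binding")
```

This is the "folklore" the abstract refers to: under a strict let, a program may fail or diverge where its lazy counterpart does not, so equational reasoning must account for it.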


P. Koopman, and M.J. Plasmeijer. * Testing Reactive Systems with GAST. *Technical report: NIII-R0403, January, Radboud University Nijmegen, 2004.

Gast is a fully automatic test system. Given a logical property, stated as a function, it is able to generate appropriate test values, to execute tests with these values, and to evaluate the results of these tests. Many reactive systems, like automata and protocols, however, are specified by a model rather than in logic. There exist tools that are able to test software described by such a model-based specification, but these tools have limited capabilities to generate test data involving data types. Moreover, in these tools it is hard or even impossible to state properties of these values in logic. In this paper we introduce some extensions of Gast to combine the best of logic-based and model-based testing. The integration of model-based testing and testing based on logical properties in a single automatic system is the contribution of this paper. The system consists of just a small library rather than a huge stand-alone system.
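The logic-based half of this style of testing is easy to sketch. The Python fragment below is a hand-rolled illustration only (Gast itself is a Clean library; check, prop_rev, and prop_bad are invented names): a property is simply a boolean function, and the test driver runs it over generated values, reporting the first counterexample found.

```python
def check(prop, values):
    """Run a boolean property over a stream of generated test values."""
    for v in values:
        if not prop(v):
            return ("Counterexample", v)
    return ("Passed", len(values))

prop_rev = lambda xs: list(reversed(list(reversed(xs)))) == xs   # holds
prop_bad = lambda xs: list(reversed(xs)) == xs                    # fails

test_values = [[], [1], [1, 2], [2, 1], [1, 2, 3]]
print(check(prop_rev, test_values))   # ('Passed', 5)
print(check(prop_bad, test_values))   # ('Counterexample', [1, 2])
```

A real system like Gast additionally derives the generator for test_values from the property's argument type, which is what makes it fully automatic.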


S.J.B.A. (Stijn) Hoppenbrouwers, A.I. Bleeker, and H.A. (Erik) Proper. * Modeling Linguistically Complex Business Domains. *Technical report: NIII-R0405, January, Radboud University Nijmegen, 2004.

The paper focuses on business domain modeling as part of requirements engineering in software development projects. Domain modeling concerns obtaining and modeling the language (concepts, terminologies, ontologies) used by stakeholders to talk about a domain. Achieving conceptual clarity and consensus among stakeholders is an important yet often neglected part of requirements engineering, and domain modeling can play a key role in supporting it. This does, however, require a nuanced approach to the language aspects of domain modeling, as well as ambition management concerning its goals and the procedure followed. We provide an analysis of the linguistic complexities involved, as well as of various levels of ambition concerning the domain modeling process: on top of the "classic" approach to modeling, the modeling of multiple terminologies within a domain, and domain evolution. We elaborate on the first two aspects.


M. Hendriks, G. Behrmann, K.G. Larsen, P. Niebert, and F.W. Vaandrager. * Adding Symmetry Reduction to Uppaal. *Technical report: NIII-R0407, February, Radboud University Nijmegen, 2004.

We describe a prototype extension of the Uppaal real-time model checking tool with symmetry reduction. The symmetric data type scalarset, which is also used in the Murphi model checker, was added to Uppaal's system description language to support the easy static detection of symmetries. Our prototype tool uses state swaps, described and proven sound earlier by Hendriks, to reduce the space and memory consumption of Uppaal. Moreover, under certain assumptions the reduction strategy is canonical, which means that the symmetries are used optimally. For all examples that we experimented with (both academic toy examples and industrial cases), we obtained a drastic reduction of both computation time and memory usage, exponential in the size of the scalar sets used.
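The core idea behind scalarset symmetry reduction can be sketched in a few lines. The Python fragment below is an illustration only (the state encoding and the function canonical are invented): states that differ merely by a permutation of fully symmetric process identities belong to the same orbit, so the explorer stores one canonical representative per orbit instead of every concrete state.

```python
def canonical(state):
    """Canonical representative of a state's symmetry orbit.

    state: tuple of local states of identical processes indexed by a
    scalarset; full symmetry makes sorting a sound canonicalization.
    """
    return tuple(sorted(state))

visited = set()
for state in [(1, 0, 2), (2, 1, 0), (0, 1, 2), (0, 0, 1), (1, 0, 0)]:
    visited.add(canonical(state))

print(len(visited))   # 2: five concrete states collapse to two orbits
```

Choosing a cheap canonicalization that always picks the same representative is what makes the reduction "canonical" in the sense used in the abstract.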

P. Achten, M. van Eekelen, and R. Plasmeijer. * Compositional Model-Views with Generic Graphical User Interfaces. *Technical report: NIII-R0408, February, Radboud University Nijmegen, 2004.

Creating GUI programs is hard, even for prototyping purposes. Using the model-view paradigm makes it somewhat simpler, since the paradigm dictates that the model contains no GUI programming; this is done by the views. Still, a lot of GUI programming is needed to implement the views. We present a new method for constructing GUI applications that fits well in the model-view paradigm. Novel in our approach is that the views contain no actual GUI programming either. Instead, views are constructed in a fully compositional way by defining a model of the view. We use a technique developed earlier to generate the GUI part. We show how the method supports flexibility, compositionality and incremental change by introducing abstract components in the view models.

D.R. Cok, and J.R. Kiniry. * ESC/Java2: Uniting ESC/Java and JML. *Technical report: NIII-R0413, May, Radboud University Nijmegen, 2004.

The ESC/Java tool was a lauded advance in effective static checking of realistic Java programs, but has become out-of-date with respect to Java and the Java Modeling Language (JML). The ESC/Java2 project, whose progress is described in this paper, builds on the final release of ESC/Java from DEC/SRC in several ways: it parses all of JML, and thus can be used with the growing body of JML-annotated Java code; it has additional static checking capabilities; and it has been designed, constructed, and documented in such a way as to improve the tool's usability for both users and researchers. It is intended that ESC/Java2 be used for further research in, and larger-scale case studies of, annotation and verification, and for studies in programmer productivity that may result from its integration with other tools that work with JML and Java.


M. Dowse, A. Butterfield, M. van Eekelen, M. de Mol, and R. Plasmeijer. * Towards Machine-Verified Proofs for I/O. *Technical report: NIII-R0415, April, Radboud University Nijmegen, 2004.

In functional languages, the shape of the external world affects both our understanding of I/O and how we would wish to have I/O expressed. This paper takes the first tentative steps at examining the consequences of using an explicit model of the external world-state when reasoning (using tool-support) about the behaviour of lazy functional programs. We construct a file-system model and develop a monadic language which lets us structure I/O. Two proofs are then performed regarding the observable effect of real-world functional I/O, and it is shown how properties of the file-system lend themselves naturally to the modelling of concurrent behaviour on a single, global state. All proofs in this paper were machine-verified using the Sparkle proof-assistant.

P. Achten, M. van Eekelen, R. Plasmeijer, and A. van Weelden. * Arrows for Generic Graphical Editor Components. *Technical report: NIII-R0416, April, Radboud University Nijmegen, 2004.

GUI programming is hard, even for prototyping purposes. In this paper we present the Graphical Editor Component toolkit, in which GUIs can be created in an abstract and compositional way. The basic building blocks are (Abstract) Graphical Editor Components ((A)GEC), with which the programmer can create GUIs by specification of the data models only. No low-level GUI programming is required. We show how these building blocks can be glued together conveniently using a combinator library based on the arrow combinators introduced by John Hughes. The proofs of the associated arrow laws can be done with standard reasoning techniques, without resorting to a dedicated semantic model.
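The arrow combinators of Hughes that the library builds on can be sketched for the simplest arrow, plain functions. The Python fragment below is illustrative only (names arr, compose, first, and fanout stand in for the usual arr, >>>, first, and &&&; real editor components carry state and GUI wiring that is omitted here):

```python
def arr(f):
    """Lift a pure function into the arrow (trivial for plain functions)."""
    return f

def compose(f, g):
    """>>> : feed f's output into g."""
    return lambda x: g(f(x))

def first(f):
    """Apply f to the first component of a pair, passing the second through."""
    return lambda p: (f(p[0]), p[1])

def fanout(f, g):
    """&&& : run f and g on the same input, pairing the results."""
    return lambda x: (f(x), g(x))

double = arr(lambda x: 2 * x)
inc    = arr(lambda x: x + 1)

print(compose(double, inc)(5))        # 11
print(first(double)((5, "label")))    # (10, 'label')
print(fanout(double, inc)(5))         # (10, 6)
```

Wiring editors with such combinators is what makes the arrow laws (e.g. associativity of composition) available for reasoning about the resulting GUI.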

B. Gebremichael, T. Krilavicius, and Y.S. Usenko. * A Formal Analysis of a Car Periphery Supervision System. *Technical report: NIII-R0418, June, Radboud University Nijmegen, 2004.

This paper presents a formal model of the real-time service allocation unit for the Car Periphery Supervision (CPS) system, a case study proposed by Robert Bosch GmbH in the context of the EU IST project AMETIST. The CPS system is a hybrid system, which we model in terms of timed automata. This is done by splitting the values of nonlinear continuous variables into a finite set of regions and over-approximating the constraints on continuous variables by clock constraints. Safety properties of the timed model have been verified using UPPAAL; this is a sufficient condition for the corresponding safety properties of the initial hybrid system to hold. The difference in time scale between the CPS components has also been taken care of by over-approximating the timed model using the convex-hull over-approximation feature available in UPPAAL.

J.R. Kiniry. * Kind Theory and Software Reuse. *Technical report: NIII-R0419, May, Radboud University Nijmegen, 2004.

Kind theory is a logic for describing and reasoning about structured knowledge in communities. It provides a formal framework for describing, finding, customizing, composing, and reasoning about structured domains, such as those of software and mathematics. One of the main domains studied using kind theory is that of software, particularly with regard to software as reusable knowledge about computation. This paper summarizes kind theory and several example applications of the theory to topics relating to software reuse.


J.R. Kiniry. * Semantic Properties for Lightweight Specification in Knowledgeable Development Environments. *Technical report: NIII-R0420, May, Radboud University Nijmegen, 2004.

Semantic properties are domain-specific specification constructs used to augment an existing language with richer semantics. These properties are taken advantage of in system analysis, design, implementation, testing, and maintenance through the use of documentation and source-code transformation tools. Semantic properties are themselves specified at two levels: loosely, with precise natural language, and formally, within the problem domain. The refinement relationships between these specification levels, as well as between a semantic property's use and its realization in program code via tools, are specified with a new formal method for reuse called kind theory.


S. Bosman, and Th.P. van der Weide. * Assistance for the domain modeling dialog. *Technical report: NIII-R0421, Computing Science Institute, University of Nijmegen, 2004.

This paper considers the domain modeling dialog between domain expert and system analyst. In this dialog, the system analyst interprets the domain knowledge provided by the expert, and creates a formal model that captures this knowledge. As the expert may not express knowledge in a very precise way, the system analyst has to find the correct interpretation out of many possible interpretations.

In order to improve the quality of the modeling dialog, we propose a mechanism that enables the system analyst to have a better idea of the intentions of the domain expert, especially where the expert expresses these intentions poorly.

P.T. de Vrieze, P. van Bommel, and Th.P. van der Weide. * A Generic Adaptive Model in Adaptive Hypermedia. *Technical report: NIII-R0424, May, Radboud University Nijmegen, 2004.

For adaptive hypermedia there is a strong model in the form of the AHAM model and the AHA! system. This model, based on the Dexter Model, is however limited to application in hypermedia systems. In this paper we propose a new Generic Adaptivity Model (GAM). This state-machine based model can be used as the basis for adaptation in all kinds of applications. Compared to AHAM, the Generic Adaptivity Model has the following features. It is more low-level than the AHAM model, as it does not provide hypermedia-specific concepts; instead, the GAM provides an additional Interface Model that describes the bindings of the adaptivity system with the application. Finally, the concepts of push and pull modelling have been incorporated in the Generic Adaptivity Model, allowing system designers to make better decisions on what to store in the user model. This should allow for more flexible adaptation systems.

A. Alimarine, and S. Smetsers. * Efficient Generic Functional Programming. *Technical report: NIII-R0425, June, Radboud University Nijmegen, 2004.

Generic functions are defined by induction on the structural representation of types. As a consequence, by defining just a single generic operation, one acquires this operation over any particular data type. An instance on a specific type is generated by interpretation of the type's structure. A direct translation leads to extremely inefficient code that involves many conversions between types and their structural representations. In this paper we present an optimization technique based on compile-time symbolic evaluation. We prove that the optimization removes the overhead of the generated code for a considerable class of generic functions. The proof uses typing to identify intermediate data structures that should be eliminated. In essence, the output after optimization is similar to hand-written code.
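The overhead being removed can be made concrete with a toy encoding. The Python fragment below is illustrative only (the structural view and the functions to_structure, generic_eq, and specialized_eq are invented): the naive generic instance converts every value to its structural representation before comparing, while the optimized instance is the direct code that symbolic evaluation should recover.

```python
def to_structure(pair):
    """Structural view of a pair type as a tagged product."""
    return ("PROD", pair[0], pair[1])

def generic_eq(x, y):
    """Generic equality: works via the structural view (conversion overhead)."""
    sx, sy = to_structure(x), to_structure(y)
    return sx[1] == sy[1] and sx[2] == sy[2]

def specialized_eq(x, y):
    """What the optimization should produce: no intermediate structure."""
    return x[0] == y[0] and x[1] == y[1]

print(generic_eq((1, 2), (1, 2)), specialized_eq((1, 2), (1, 2)))   # True True
print(generic_eq((1, 2), (1, 3)), specialized_eq((1, 2), (1, 3)))   # False False
```

The "PROD" tuples are exactly the intermediate data structures that the paper's typing-based argument identifies for elimination.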

P. Achten, M. van Eekelen, R. Plasmeijer, and A. van Weelden. * Automatic Generation of Editors for Higher-Order Data Structures. *Technical report: NIII-R0427, June, Radboud University Nijmegen, 2004.

GUI programming is tedious. With generic functional programming techniques, we have constructed a programming toolkit with which one can create GUIs in an abstract and compositional way using type directed Graphical Editor Components (GECs). In this toolkit, the programmer specifies a GUI by means of a data model instead of low-level GUI programming. Due to a fundamental restriction of generic programming techniques, this data model must have a first-order type. In this paper we show that the programming toolkit can be extended in such a way that the data model can contain higher-order types, by combining the earlier developed techniques of generic GECs, abstract editors, and the higher-order shell Esther that is based on dynamic types. In principle this solution extends our GUI programming toolkit with the full expressive power of functional programming languages.

A. van Weelden, L. Frantzen, M. Oostdijk, P. Koopman, and J. Tretmans. * On-the-Fly Formal Testing of a Smart Card Applet. *Technical report: NIII-R0428, June, Radboud University Nijmegen, 2004.

Smart cards are used in critical application areas, which means that their software should function correctly. Formal methods are indispensable in obtaining high-quality systems. This paper presents a case study on the use of formal methods in specification-based, black-box testing of a smart card applet. The system under test is a simple electronic purse application running on a Java Card platform. The specification of the applet is given as a Statechart model and transformed into a functional form to serve as the input for the test generation, execution, and analysis tool Gast. Several test runs were conducted, completely automatically, altogether consisting of several million test events. The tests were applied to an (assumed to be) correct applet implementation, as well as to some error-seeded implementations. They showed that automated, formal, specification-based testing of smart card applets is feasible, and that errors can be detected.


Foundations Group. * Dutch Proof Tools Day 2004. *Technical report: NIII-R0429, July, Radboud University Nijmegen, 2004.

These are the proceedings of the Dutch Proof Tools Day 2004, organized by the Foundations group of the NIII, University of Nijmegen. The papers contained herein give an account of the contents of the talks presented at the meeting. The Dutch Proof Tools Day is organized on an annual basis by the Protagonist group of proof tool users in the Netherlands (and Belgium). The aim of the day is to bring together researchers that use (or develop) proof tools to discuss the use, applicability and background of proof tools. More specifically, it provides a forum for young researchers to get in touch with other proof tool users and exchange experiences.

M. Hendriks, B. van den Nieuwelaar, and F.W. Vaandrager. * Model Checker Aided Design of a Controller for a Wafer Scanner. *Technical report: NIII-R0430, June, Radboud University Nijmegen, 2004.

For a case study of a wafer scanner from the semiconductor industry it is shown how model checking techniques can be used to compute (i) a simple yet optimal deadlock avoidance policy, and (ii) an infinite schedule that optimizes throughput. Deadlock avoidance is studied on the basis of a simple finite-state model using SMV, and for throughput analysis a more detailed timed automaton model has been constructed and analyzed using the UPPAAL tool. The SMV and UPPAAL models are formally related through the notion of a stuttering bisimulation. The results were obtained within two weeks, which confirms once more that model checking techniques may help to improve the design process of realistic, industrial systems. Methodologically, the case study is interesting because two models (and in fact also two model checkers) were used to obtain results that could not have been obtained using only a single model (tool).

L. Cruz-Filipe, and F. Wiedijk. * Equational Reasoning in Algebraic Structures: a Complete Tactic. *Technical report: NIII-R0431, July, Radboud University Nijmegen, 2004.

We present "rational", a Coq tactic for equational reasoning in abelian groups, commutative rings, and fields. We give a mathematical description of the method that this tactic uses, which abstracts from Coq specifics. We prove that the method "rational" uses is correct, and that it is complete for groups and rings. Completeness means that the method succeeds in proving an equality if and only if that equality is provable from the group/ring axioms. Finally, we characterize in what way our method is incomplete for fields.
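The completeness claim for groups rests on a normal-form argument that is easy to sketch. The Python fragment below is an illustration only (the word representation and the function normalize are invented, and it covers just the abelian-group case, not rings or fields): an equation holds in all abelian groups exactly when both sides normalize to the same map from generators to exponents, so a decision procedure can simply compare normal forms.

```python
from collections import Counter

def normalize(word):
    """Normal form of a group word given as a list of (generator, exponent).

    In an abelian group, factors commute and inverses cancel, so the word
    is determined by the net exponent of each generator.
    """
    nf = Counter()
    for g, e in word:
        nf[g] += e
    return {g: e for g, e in nf.items() if e != 0}

# (x * y) * x^-1  ==  y   holds in every abelian group:
lhs = [("x", 1), ("y", 1), ("x", -1)]
rhs = [("y", 1)]
print(normalize(lhs) == normalize(rhs))   # True
```

A tactic built this way proves an equality if and only if it follows from the abelian-group axioms, which is the sense of completeness the abstract describes.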

P. Achten, M. van Eekelen, R. Plasmeijer, and A. van Weelden. * GEC: a toolkit for Generic Rapid Prototyping of Type Safe Interactive Applications. *Technical report: NIII-R0432, July, Radboud University Nijmegen, 2004.

GUI programming is notoriously tedious. By using generic functions that create Graphical Editor Components (GECs), it becomes possible to define user interfaces without any knowledge of low-level I/O handling. A GEC editor can be derived automatically for values of any (user-defined) monomorphic (first-order) type. With such an editor the application user can create new values of the type the editor is created for. In this way one obtains an editor for free for almost any type. Such a free editor may not look fancy, but one can change the default look by defining specialized versions of the generic function for certain types, e.g. representing buttons, pull-down menus and the like. Furthermore, with GECs the programmer can create an abstraction level separating the view type and the domain type completely. As a result, the programmer can easily make a library of suitable graphical representations and freely choose which representation to use. Consequently, working with GECs a programmer can focus on the data type representing the user interaction instead of on the nasty graphical details. Editors can easily be combined such that a change in one editor affects the contents of others. One can combine editors by hand or use an arrow library of editor combinators. It is even possible to create editors for higher-order types, which enables the creation of user interfaces in which functions can be typed in by the user or read from disk at run-time. In the latter case, functions are actually compiled functions that are dynamically linked into the running application. GECs are suited for rapid prototyping of real-world applications, for teaching and for debugging. This paper focuses on the use of the GEC toolkit for functional programmers, only briefly explaining its inner workings and underlying principles.

A. Alimarine, and S. Smetsers. * Fusing Generic Functions. *Technical report: NIII-R0434, August, Radboud University Nijmegen, 2004.

Generic programming is accepted by the functional programming community as a valuable tool for program development. Several functional languages have adopted the generic scheme of type-indexed values. This scheme works by specialization of a generic function to a concrete type. However, the generated code is extremely inefficient compared to its hand-written counterpart. The performance penalty is so big that the practical usefulness of generic programming is compromised. In this paper we present an optimization algorithm that is able to completely eliminate the overhead introduced by the specialization scheme for a large class of generic functions. The presented technique is based on consumer--producer elimination as exploited by fusion, a standard general-purpose optimization method. We show that our algorithm is able to optimize many practical examples of generic functions.

Ildiko Flesch, and P.J.F. Lucas. * Markov Equivalence in Bayesian Networks. *Technical report: NIII-R0436, August, Institute for Computing and Information Science, University of Nijmegen, 2004.

Ling Cheung, Nancy Lynch, Roberto Segala, and F.W. Vaandrager. * Switched Probabilistic I/O Automata. *Technical report: NIII-R0437, September, Radboud University Nijmegen, 2004.

A switched probabilistic I/O automaton is a special kind of probabilistic I/O automaton (PIOA), enriched with an explicit mechanism to exchange control with its environment. Every closed system of switched automata satisfies the key property that, in any reachable state, at most one component automaton is active. We define a trace-based semantics for switched PIOAs and prove it is compositional. We also propose switch extensions of an arbitrary PIOA and use these extensions to define a new trace-based semantics for PIOAs.

E.M.G.M. Hubbers, and E. Poll. * Transactions and non-atomic API methods in Java Card: specification ambiguity and strange implementation behaviours. *Technical report: NIII-R0438, October, 2004.

This paper discusses an ambiguity in Sun's specification of the Java Card platform, which we noticed in the course of developing the precise formal description of the Java Card transaction mechanism presented in an earlier paper. The ambiguity concerns the Java Card transaction mechanism, in particular the interaction of the transaction mechanism with two Java Card API methods, arrayCopyNonAtomic and arrayFillNonAtomic in the class javacard.framework.Util. The paper also describes the experiments we performed with smartcards of different manufacturers (IBM and Schlumberger) to find out the behaviour actually implemented on these cards. Interestingly, these experiments revealed some unexpected (and unexplainable) behaviour of these two methods on some cards.

F.A. Grootjen, and Th.P. van der Weide. * Dualistic Ontologies. *Technical report: NIII-R0439, November, Radboud University Nijmegen, 2004.

To effectively use and share knowledge among AI systems, a formal specification of the representation of their shared domain of discourse, called an ontology, is indispensable. In this paper we demonstrate the role of dualistic ontologies in human activities such as searching, in-depth exploration and browsing. We start with the introduction of a formal definition of dualistic ontologies, based on a dual view on the universe of discourse. Then we relate this formal definition to three different (well-known) kinds of ontologies, based on the vector model, on formal concept analysis and on fuzzy logic, respectively. The vector model leads to concepts derived by latent semantic indexing using the singular value decomposition. Both the set model and the fuzzy set model lead to Formal Concept Analysis, in which the fuzzy set model is equipped with a parameter that controls the fine-grainedness of the resulting concepts. We discuss the relation between the resulting systems of concepts. Finally, we demonstrate the use of this theory by introducing the dual search engine, and show how this search engine can be employed to support the human activities addressed above.

N.A. Lynch, R. Segala, and F.W. Vaandrager. * Compositionality for Probabilistic Automata. *Technical report: NIII-R0444, Radboud University Nijmegen, 2004.

We establish that on the domain of probabilistic automata, the trace distribution preorder coincides with the simulation preorder.

F.W. Vaandrager, and A.L. de Groot. * Analysis of a Biphase Mark Protocol with Uppaal and PVS. *Technical report: NIII-R0445, November, Radboud University Nijmegen, 2004.

The biphase mark protocol is a convention for representing both a string of bits and clock edges in a square wave. The protocol is frequently used for communication at the physical level of the ISO/OSI hierarchy, and is implemented on microcontrollers such as the Intel 82530 Serial Communications Controller. An important property of the protocol is that bit strings of arbitrary length can be transmitted reliably, despite differences in the clock rates of sender and receiver, variations of the clock rates (jitter), and distortion of the signal after generation of an edge. In this article, we show how the protocol can be modelled naturally in terms of timed automata. We use the model checker Uppaal to derive the maximal tolerances on the clock rates for different instances of the protocol, and to support the general parametric verification that we formalized using the proof assistant PVS. Based on the derived parameter constraints we propose instances of the biphase mark protocol that are correct (at least in our model) but have a faster bit rate than the instances that are commonly implemented in hardware.
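The encoding convention the paper verifies can be sketched directly. The Python fragment below is an illustration of the biphase mark coding rule only (the function name and the half-cell signal representation are ours; the paper's contribution is the timing analysis, not the encoder): the signal makes a transition at every bit-cell boundary, and makes an additional mid-cell transition when the bit is a 1.

```python
def biphase_mark(bits, level=0):
    """Encode bits as a list of half-cell signal levels (biphase mark)."""
    signal = []
    for b in bits:
        level ^= 1              # a transition marks the start of every cell
        signal.append(level)    # level during the first half of the cell
        if b == 1:
            level ^= 1          # an extra mid-cell transition encodes a 1
        signal.append(level)    # level during the second half of the cell
    return signal

print(biphase_mark([1, 0, 0, 1]))   # [1, 0, 1, 1, 0, 0, 1, 0]
```

Because every cell begins with an edge regardless of the data, the receiver can recover the sender's clock from the signal itself, which is why the tolerances on clock rates and jitter are the critical parameters.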

Gert Veldhuijzen, and Th.P. van der Weide. * Query by Dialog: Maximizing Information Gain in Interactive Information Disclosure. *Technical report: NIII-R0447, November, Radboud University Nijmegen, 2004.

Formulating queries for search engines is a difficult task, which requires insight into the document collection that the search engine has access to. We describe an interactive approach to information disclosure, in which the search engine acts as a guide that helps the searcher find the information she needs. The guide actively seeks to acquire information from the searcher about her information need. The dialog between searcher and guide is treated as an information exchange: the searcher acquires information about the collection, and the guide acquires information about the information need of the searcher. The guide and searcher collaborate to find a focus (a subset of the document collection) that is relevant to the information need of the searcher. At each step in the dialog, the guide presents suggestions for query expansion based on the information gain that can be obtained when these suggestions are followed or rejected.
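The information-gain criterion for ranking suggestions can be sketched with the standard entropy formula. The Python fragment below is illustrative only (the function names and the focus/match counts are invented; the paper's actual model is richer): a suggested term that splits the current focus evenly is maximally informative, because either answer from the searcher halves the focus, while a term that nearly all documents match tells the guide almost nothing.

```python
import math

def entropy(p):
    """Binary entropy (in bits) of a yes/no answer with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def information_gain(focus_size, docs_matching):
    """Expected information from asking about one query-expansion term."""
    return entropy(docs_matching / focus_size)

print(information_gain(100, 50))         # 1.0: an even split is most informative
print(information_gain(100, 99) < 0.1)   # True: the answer is nearly predictable
```

Ranking candidate expansion terms by this quantity is one simple way for the guide to choose which suggestions to present next.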


Paul de Vrieze, P. van Bommel, and Th.P. van der Weide. * A Method for Incorporating User Modelling. *Technical report: NIII-R0448, 2004.

In this paper a method is presented for adding user modelling to existing software systems. The method consists of seven steps that lead from initial analysis to the definition and evaluation of the elements needed for the adaptive behaviour. Furthermore, the concept of an adaptation element is introduced; such an adaptation element can be used to determine the impact of personalisations.

E.M.G.M. Hubbers, B.P.F. Jacobs, and Wolter Pieters. * RIES - Internet Voting in Action. *Technical report: NIII-R0449, December, Radboud University Nijmegen, 2004.

RIES stands for Rijnland Internet Election System. It is an online voting system that was developed by one of the Dutch local authorities on water management. The system has been used twice in the fall of 2004, for a total of approximately two million potential voters. In this paper we describe how this system works. Furthermore, we not only describe how the outcome of the elections can be verified, but also how it has actually been verified by us. To conclude the paper we describe some possible points for improvement.

Gerd Behrmann, Ed Brinksma, Martijn Hendriks, and Angelika Mader. * Production Scheduling by Reachability Analysis -- A Case Study. *Technical report: NIII-R0455, December, Radboud University Nijmegen, 2004.

Schedule synthesis based on reachability analysis of timed automata has received attention in the last few years. The main strength of this approach is that the expressiveness of timed automata allows, unlike many classical approaches, the modelling of scheduling problems of very different kinds. Furthermore, the models are robust against changes in the parameter setting and against changes in the problem specification. This paper presents a case study that was provided by Axxom, an industrial partner of the Ametist project. It consists of a scheduling problem for lacquer production, and is treated with the timed automata approach. A number of problems have to be addressed for the modelling task: the information transfer from the industrial partner, the derivation of a timed automaton model for the case study, and the heuristics that have to be added in order to reduce the search space. We try to isolate the generic problems of modelling for model checking, and suggest solutions that are also applicable to other scheduling cases. Model checking experiments indicate that, for this problem, the timed automata approach is competitive with Axxom's planning tool.

Theo Schouten, and Egon L. van den Broek. * Fast Exact Euclidean Distance (FEED) Transformation. *Technical report: NIII-R0456, December, Radboud University Nijmegen, 2004.

Fast Exact Euclidean Distance (FEED) transformation is introduced, starting from the inverse of the distance transformation. The prohibitive computational cost of a naive implementation of the traditional Euclidean distance transformation is tackled by three operations: restriction of both the number of object pixels and the number of background pixels taken into consideration, and pre-computation of the Euclidean distance. Compared to the Shih and Liu 4-scan method, the FEED algorithm is often faster and less memory consuming.
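The naive baseline whose cost FEED attacks is easy to state in code. The Python fragment below is an illustration of that brute-force transformation only (grid and function name invented; FEED itself works from the inverse direction, with object pixels feeding distances outward and with the restrictions described above): every pixel is assigned the Euclidean distance to its nearest object pixel, at a cost proportional to (number of pixels) times (number of object pixels).

```python
import math

def naive_edt(grid):
    """Distance map: for every pixel, the ED to the nearest object (1) pixel."""
    objects = [(r, c) for r, row in enumerate(grid)
                      for c, v in enumerate(row) if v == 1]
    return [[min(math.hypot(r - r0, c - c0) for r0, c0 in objects)
             for c in range(len(grid[0]))]
            for r in range(len(grid))]

grid = [[1, 0, 0],
        [0, 0, 0],
        [0, 0, 1]]
dist = naive_edt(grid)
print(dist[0][0])   # 0.0  (an object pixel)
print(dist[0][2])   # 2.0  (two pixels from the object at (0, 0))
```

Restricting which object and background pixels participate, and pre-computing the distances, are exactly the three levers the abstract lists for taming this cost.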


Theo Schouten, Harco Kuppens, and Egon L. van den Broek. * Exact Euclidean Distance (tFEED) Maps. *Technical report: NIII-R0457, December, Radboud University Nijmegen, 2004.

In image and video analysis, distance maps are frequently used. They provide the (Euclidean) distance (ED) of background pixels to the nearest object pixel. In a naive implementation, each object pixel feeds its (exact) ED to each background pixel; the minimum of these values then denotes the ED to the closest object. Recently, the Fast Exact Euclidean Distance (FEED) transformation was launched, which was up to 2x faster than the fastest algorithms available. In this paper, first additional improvements to the original FEED algorithm are discussed. Next, a timed version of FEED (tFEED) is presented, which generates distance maps for video sequences by merging partial maps. For each object in a video, a partial map can be calculated for different frames, where the partial map for fixed objects is calculated only once. In a newly developed dynamic test environment for robot navigation purposes, tFEED proved to be up to 7x faster than using FEED on each frame separately. It is up to 4x faster than the fastest ED algorithm available for video sequences and even 40% faster than generating city-block or chamfer distance maps for frames. Hence, tFEED is the first real-time algorithm for generating exact ED maps of video sequences.
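The map-merging step at the heart of tFEED can be sketched in one line. The Python fragment below is an illustration only (the maps and the function merge_maps are invented): per-object partial distance maps combine by a pointwise minimum, so the expensive map for fixed objects is computed once, and only the maps of moving objects need recomputation per frame.

```python
def merge_maps(*maps):
    """Pointwise minimum of several equally sized distance maps."""
    return [[min(vals) for vals in zip(*rows)] for rows in zip(*maps)]

fixed_map  = [[0.0, 1.0, 2.0],
              [1.0, 1.4, 2.2]]   # map for the fixed objects, computed once
moving_map = [[2.0, 1.0, 0.0],
              [2.2, 1.4, 1.0]]   # map for this frame's moving object

print(merge_maps(fixed_map, moving_map))
# [[0.0, 1.0, 0.0], [1.0, 1.4, 1.0]]
```

The minimum is correct because the distance to the nearest of several objects is the minimum of the distances to each object separately.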


Eva M. van Rikxoort, Egon L. van den Broek, and Theo Schouten. * Mimicking human texture classification. *Technical report: NIII-R0458, December, Radboud University Nijmegen, 2004.

In an attempt to mimic human (colorful) texture classification by a clustering algorithm, three lines of research were pursued, using as test set 180 texture images (both in color and in their gray-scale equivalent) drawn from the OuTex and VisTex databases. First, a k-means algorithm was applied with three feature vectors, based on color/gray values, four texture features, and their combination. Second, 18 participants clustered the images using a newly developed card sorting program. The mutual agreement between the participants was 57% and 56%, and between the algorithm and the participants it was 47% and 45%, for color and gray-scale texture images respectively. Third, in a benchmark, 30 participants judged the algorithm's clusters with gray-scale textures as more homogeneous than those with colored textures. However, a high interpersonal variability was present for both the color and the gray-scale clusters. So, despite the promising results, it is questionable whether average human texture classification can be mimicked (if it exists at all).


B. Gebremichael, and F.W. Vaandrager. * Specifying Urgency in Timed I/O Automata. *Technical report: NIII-R0459, December, Radboud University Nijmegen, 2004.

Sifakis et al. advocate the use of deadline predicates for the specification of progress properties of Alur--Dill style timed automata. In this article, we extend these ideas to a more general setting, which may serve as a basis for deductive verification techniques. More specifically, we extend the TIOA framework of Lynch et al. with urgency predicates. We identify a suitable language to describe the resulting timed I/O automata with urgency, and show that for this language time reactivity holds by construction. We also establish that the class of timed I/O automata with urgency is closed under composition. The use of urgency predicates is compared with three alternative approaches to specifying progress properties that have been advocated in the literature: invariants, stopping conditions and deadline predicates. We argue that in practice the use of urgency predicates leads to shorter and more natural specifications than any of the other approaches. Some preliminary results on proving invariant properties of timed (I/O) automata with urgency are presented.