# Showing papers in "CTIT technical reports series in 1998"

•

TL;DR: In this paper, the authors describe the EM algorithm for finding the parameters of a mixture of Gaussian densities and a hidden Markov model (HMM) for both discrete and Gaussian mixture observation models.

Abstract: We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution. We first describe the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail, but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.

2,455 citations

•

TL;DR: The Mars Reference Mission as mentioned in this paper is a tool used by the exploration community to compare and evaluate approaches to mission and system concepts that could be used for human missions to Mars, and it is intended to identify and clarify system drivers, significant sources of cost, performance, risk, and schedule variation.

Abstract: This Addendum to the Mars Reference Mission was developed as a companion document to the NASA Special Publication 6107, "Human Exploration of Mars: The Reference Mission of the NASA Mars Exploration Study Team." It summarizes changes and updates to the Mars Reference Mission that were developed by the Exploration Office since the final draft of SP 6107 was printed in early 1999. The Reference Mission is a tool used by the exploration community to compare and evaluate approaches to mission and system concepts that could be used for human missions to Mars. It is intended to identify and clarify system drivers, significant sources of cost, performance, risk, and schedule variation. Several alternative scenarios, employing different technical approaches to solving mission and technology challenges, are discussed in this Addendum. Comparing alternative approaches provides the basis for continual improvement to the technology investment plan and a general understanding of future human missions to Mars. The Addendum represents a snapshot of work in progress in support of planning for future human exploration missions through May 1998.

135 citations

•

TL;DR: The difficult index selection problem [GHRU97] largely disappears and the UB-Tree offers the potential to integrate OLAP with OLTP in the same processing environment.

Abstract: We investigate the usability and performance of the UB-Tree (universal B-Tree) for multidimensional data, as they arise in all relational databases and in particular in data-warehousing and data-mining applications. The UB-Tree is balanced and has all the guaranteed performance characteristics of B-Trees, i.e., it requires linear space for storage and logarithmic time for the basic operations of insertion, retrieval and deletion. Therefore it can efficiently support OLTP. In addition the UB-Tree preserves clustering of objects with respect to Cartesian distance. Therefore, it shows its main strengths for multidimensional data. It has very high potential for parallel processing. A single UB-Tree can replace a large number of secondary indexes and join indexes including foreign column join indexes (FCJ). For updates this means that only one UB-Tree must be managed instead of several secondary indexes. This reduces runtime and storage requirements substantially. For retrieval the UB-Tree has multiplicative complexity with respect to the relative size of the ranges for range queries, resulting in a dramatic performance improvement over multiple secondary indexes which have additive range query complexity. Furthermore, using the Tetris-Algorithm the UB-Tree enables reading data in any arbitrary sort order without the necessity of external sorting. Thus data need to be read only once to perform most of the operations of the relational algebra, such as ordering, grouping, aggregation, projection and joining. Therefore, the UB-Tree can support OLAP very efficiently. It is useful for geometric databases, data-warehousing and data-mining applications, but even more for databases in general, where multiple secondary indexes on one relation or FCJ-indexes to join several relations are widespread, which can all be replaced by a single UB-Tree index. 
Therefore, the difficult index selection problem [GHRU97] largely disappears and the UB-Tree offers the potential to integrate OLAP with OLTP in the same processing environment.
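The UB-Tree's Cartesian clustering comes from ordering tuples along a space-filling curve; the usual choice is the Z-order (Morton) address obtained by bit-interleaving the coordinates, which an ordinary B-Tree can then index. A minimal sketch of that address computation (the function name and fixed bit width are illustrative, not the paper's API):

```python
def z_address(coords, bits=8):
    """Interleave the bits of d coordinate values into one Z-order key."""
    key = 0
    for b in range(bits - 1, -1, -1):      # from most significant bit down
        for c in coords:
            key = (key << 1) | ((c >> b) & 1)
    return key
```

Sorting tuples by `z_address` keeps points that are close in Cartesian space close in key order, which is what lets one clustered index serve multidimensional range queries.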

27 citations

•

TL;DR: This paper presents a three-layer architecture for implementing nested transaction support on a commercial multi-database environment directed at high portability and flexibility.

Abstract: Many applications dealing with complex processes require database support for nested transactions. Current commercial database systems lack this kind of support, offering flat, non-nested transactions only. This paper presents a three-layer architecture for implementing nested transaction support on a commercial multi-database environment. The architecture is directed at high portability and flexibility. The modular approach and the simple, event driven interfaces between the layers of the architecture enable the nested transaction support to be adapted to various applications, nested transaction models and database management systems. The architecture has been implemented to support a prototype of a commercial next-generation workflow management system.
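One common way to layer nested transactions on top of a flat store, as an architecture like this must, is to keep one undo log per open subtransaction: abort replays the log, and commit merges it into the parent's log so the parent can still undo everything. A minimal sketch under those assumptions (`NestedTx` and its methods are hypothetical names, not the paper's interfaces):

```python
class NestedTx:
    """Nested transactions emulated with savepoint-style undo logs
    over a flat key-value store (a plain dict)."""
    def __init__(self, store):
        self.store = store
        self.undo_stack = []            # one undo log per open (sub)transaction

    def begin(self):
        self.undo_stack.append({})

    def set(self, key, value):
        log = self.undo_stack[-1]
        if key not in log:              # remember the value before first write
            log[key] = self.store.get(key, None)
        self.store[key] = value

    def commit(self):
        log = self.undo_stack.pop()
        if self.undo_stack:             # merge undo info into the parent log
            parent = self.undo_stack[-1]
            for k, v in log.items():
                parent.setdefault(k, v)

    def abort(self):
        for k, v in self.undo_stack.pop().items():
            if v is None:
                self.store.pop(k, None)
            else:
                self.store[k] = v
```

Aborting a subtransaction restores exactly the state its parent saw, while committing it leaves the parent free to abort later.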

19 citations

••

ETH Zurich

TL;DR: It is shown how the mechanisms used in PSS can be used as the basis for implementing process migration in a distributed setting based on an implementation within the OPERA process support kernel.

Abstract: Availability in Process Support Systems (PSS) can be achieved by using standby mechanisms that allow a backup server to take over in case a primary server fails. These mechanisms, resembling the process pair approach used in operating systems, require the primary to send information about state changes to the backup on a regular basis. In PSS where all relevant state information is stored in a database, there are two principal strategies for synchronizing a primary–backup pair. One is to use the replication mechanisms provided by the DBMS. Another is to implement a message mechanism to exchange state information between servers above the database level. For both approaches, several variants exist that allow to trade run-time performance for failover time. This paper discusses the possible strategies and evaluates their performance based on an implementation within the OPERA process support kernel. Moreover, it is shown how the mechanisms can be used as the basis for implementing process migration in a distributed setting.
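The message-level strategy described here can be sketched as a primary that logs state changes and ships them to the backup either eagerly (per change: slower at run time, short failover) or lazily (batched: faster, but the backup lags). All names below are illustrative; the OPERA interfaces are not shown in the abstract:

```python
class ProcessServer:
    """Stand-in for a PSS server holding process state as a dict."""
    def __init__(self):
        self.state = {}
        self.pending = []               # state changes not yet shipped

    def update(self, key, value, backup, eager=True):
        self.state[key] = value
        self.pending.append((key, value))
        if eager:                       # eager shipping: short failover time
            self.flush(backup)

    def flush(self, backup):            # lazy shipping: batch, then send
        for key, value in self.pending:
            backup.state[key] = value
        self.pending.clear()

def failover(backup):
    """On primary failure the backup takes over with its last synced state."""
    return backup.state
```

The `eager` flag is the run-time-performance vs. failover-time trade-off the abstract mentions, reduced to its simplest form.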

12 citations

•

TL;DR: In this paper, a series of conjectures is given expressing combinatorial formulas for the first and the last terms of the plethysm {λ}⊗{μ} with respect to some natural order on partitions.

Abstract: "Plethysm" {λ}⊗{μ} is a sort of composition of Schur functions (or partitions of positive integers), which was first introduced by D. E. Littlewood [20]. From the beginning, it has been an important problem to decompose it into irreducible factors, and several efforts have been made concerning this problem. (See the references at the end of this paper.) At present, however, general decomposition formulas (or rules) are not yet known except in some special cases. In view of the table in our previous paper [5], it seems that there exist some hidden combinatorial properties of {λ}⊗{μ}, and the purpose of this paper is to give a series of conjectures expressing combinatorial formulas for the first and the last terms of {λ}⊗{μ} with respect to some natural order on partitions, which gives a partial "answer" to the above problem. Unfortunately, at present, we do not know the proof of these conjectures. However, these conjectures give the correct results.
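The "natural order on partitions" referred to here is commonly taken to be the dominance order, under which λ ≥ μ exactly when every prefix sum of λ is at least the corresponding prefix sum of μ; which order the paper intends is an assumption of this sketch:

```python
def dominates(lam, mu):
    """Dominance order on partitions of the same integer:
    lam >= mu iff every prefix sum of lam is >= that of mu."""
    assert sum(lam) == sum(mu)
    total_l = total_m = 0
    for i in range(max(len(lam), len(mu))):
        total_l += lam[i] if i < len(lam) else 0
        total_m += mu[i] if i < len(mu) else 0
        if total_l < total_m:
            return False
    return True
```

For example, (3,1) dominates (2,2), but the order is only partial: (3,1,1,1) and (2,2,2) are incomparable.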

12 citations

••

TL;DR: In this article, the use of the generic programming paradigm in the computational geometry algorithms library CGAL is described; the parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way for abolishing precision and robustness problems by exact but nevertheless efficient computation.

Abstract: We report on the use of the generic programming paradigm in the computational geometry algorithms library CGAL. The parameterization of the geometric algorithms in CGAL enhances flexibility and adaptability and opens an easy way for abolishing precision and robustness problems by exact but nevertheless efficient computation. Furthermore we discuss circulators, which are an extension of the iterator concept to circular structures. Such structures arise frequently in geometric computing.
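A circulator behaves like an iterator over a circular structure: there is no past-the-end position, and advancing past the last element wraps around to the first. A toy rendering of the idea (CGAL's actual circulators are C++ concepts with a richer interface; this class is only an illustration):

```python
class Circulator:
    """Iterator-like cursor on a circular sequence with no end position."""
    def __init__(self, ring, pos=0):
        self._ring = ring
        self._pos = pos

    def value(self):
        return self._ring[self._pos]

    def next(self):                     # advance and wrap around
        self._pos = (self._pos + 1) % len(self._ring)
        return self._ring[self._pos]

    def prev(self):                     # step back and wrap around
        self._pos = (self._pos - 1) % len(self._ring)
        return self._ring[self._pos]
```

Such cursors fit structures like the vertex ring around a face, where geometric code naturally walks in circles rather than from begin to end.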

4 citations

••

TL;DR: This paper describes some fundamental issues for robust implementations of progressively refined tetrahedralizations generated through sequences of edge collapses and addresses the definition of appropriate cost functions.

Abstract: This paper describes some fundamental issues for robust implementations of progressively refined tetrahedralizations generated through sequences of edge collapses. We address the definition of appropriate cost functions and explain the various tests that are necessary to preserve the consistency of the mesh when collapsing edges. Although it can be considered a special case of progressive simplicial complexes [4], our method produces results of high practical importance that can be used in many different applications, such as finite element meshing, scattered data interpolation or rendering of irregular volume data. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling – surfaces and object representations.
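An edge collapse in a simplicial mesh replaces one endpoint by the other and discards the simplices that become degenerate; the consistency tests the paper discusses decide when such a collapse is safe. A minimal sketch of the collapse step alone, without any cost function or consistency test:

```python
def collapse_edge(simplices, a, b):
    """Collapse edge (a, b): replace vertex b by a everywhere and
    drop simplices that become degenerate (repeated vertices)."""
    out = set()
    for s in simplices:
        t = tuple(sorted(a if v == b else v for v in s))
        if len(set(t)) == len(t):       # keep only non-degenerate simplices
            out.add(t)
    return out
```

In a real progressive scheme each collapse would first pass link-condition-style tests so the mesh stays a valid tetrahedralization; here only the combinatorial substitution is shown.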

3 citations

•

TL;DR: In this paper, a technical report bundles the five reports that were written for two SURFnet4 projects in 1997; each report discusses a different research aspect of the SURFnet4 ATM network, including the Architecture of an ATM PVC Management Tool, Reporting on the Use of the ATM Network, SVCs in a Production Environment, Quality of Service Measures, and Operation and Maintenance (OAM).

Abstract: This technical report bundles the five reports that were written for two SURFnet4 projects in 1997. Each report discusses a different research aspect of the SURFnet4 ATM network.
The topics are 1) the Architecture of an ATM PVC Management Tool, 2) Reporting on the Use of the ATM Network, 3) SVCs in a Production Environment, 4) Quality of Service Measures, and 5) Operation and Maintenance (OAM).

••

TL;DR: This paper presents a software package, FUNKE, based on the parametric representation, which allows one to compute best fits of surfaces, and shows that by exploiting the structure of the Jacobian matrix, even with the additional unknowns, the computational complexity is comparable to a fit using an implicit surface representation.

Abstract: The parametric representation of surfaces is often avoided in best-fit computations because of the large number of unknowns. By exploiting the matrix structure, D. Sourlier has developed an effective best-fit software, FUNKE, which is based on the parametric representation. FUNKE is perfectly applicable to sculptured surfaces. We report on the best fit of a turbine blade. In the second part we propose a new shape distance measure to compare two CAD-described surfaces. We give algorithms in Matlab to compute this shape distance and the corresponding transformation.

Introduction and notations. In this paper we represent a patch of a sculptured surface, i.e. a polynomial surface, as a tensor product surface

x(u,v) = (x(u,v), y(u,v), z(u,v))^T = (f(u)^T A g(v), f(u)^T B g(v), f(u)^T C g(v))^T,

where f(u) = (f_1(u), ..., f_n(u))^T and g(v) = (g_1(v), ..., g_m(v))^T are given basis functions of the surface parameters u and v. These basis functions could be NURBS, B-splines, Bezier polynomials or simply monomials, i.e. f_i(u) = u^{i-1}. The coefficients specifying the surface are given by the three matrices A, B, C ∈ R^{n×m}. It is sometimes convenient to write the surface as

x(u,v) = (Σ_{i,j} a_{ij} f_i(u) g_j(v), Σ_{i,j} b_{ij} f_i(u) g_j(v), Σ_{i,j} c_{ij} f_i(u) g_j(v))^T = (Σ_k α_k F_k(u,v), Σ_k β_k F_k(u,v), Σ_k γ_k F_k(u,v))^T,

where the coefficients α_k, β_k, γ_k and the basis functions F_k(u,v) are obtained by rearranging the data: α_{(i-1)m+j} = a_{ij}, β_{(i-1)m+j} = b_{ij}, γ_{(i-1)m+j} = c_{ij}, and F_{(i-1)m+j}(u,v) = f_i(u) g_j(v), with k running from 1 to mn.

Least-squares fit of sculptured surfaces. A problem in coordinate metrology is the best fit of a geometric element to measuring points. Best fit means that the sum of squares of the geometric distances of the measuring points to the geometric element, with optimally fitted parameters, is to be minimized. Although the parametric surface representation is a standard in all fields of CAD/CAM/CAQ (cf. B-splines, NURBS), as well as in related fields such as surface reconstruction, this representation is not yet widely used in commercial software for the practical needs of coordinate measuring techniques. The best-fit calculation there is still based on the implicit surface representation f(x) = 0. The main reason is the advantage of not having the two surface coordinates u and v involved as additional unknowns in the least-squares fit. The problem with the implicit surface representation, however, is that an explicit distance function is known only for standard surfaces such as the plane, sphere, cylinder and cone. For more complex surfaces, such as ellipsoids, with no explicit expression for the distance, an approximation of the geometric distance is often used. Of course, by using such an approximation the computed fit is not the exact best fit, and this may not be acceptable. With the parametric approach the number of unknowns is dramatically enlarged: each measuring point generates an additional unknown when fitting a 2D curve, or even two unknowns if we wish to fit a surface. However, Sourlier has developed a software package, FUNKE, based on the parametric representation, which allows one to compute best fits of surfaces. He shows that, by exploiting the structure of the Jacobian matrix, even with the additional unknowns the computational complexity is comparable to a fit using an implicit surface representation. Interesting features of FUNKE are: best fit of sculptured surfaces; complex features (e.g. involute, helical toroid); combined features (e.g. quadrant) fitted by position, orientation, length, width and height; best fit with frozen degrees of freedom (e.g. fixed-radius cylinder fit); best fit available for any newly defined surface; standardization of the best-fit implementation (a function-independent best-fit procedure obtained by separating geometry from the position/orientation description); improved possibilities for probe radius correction and deflection compensation. The software FUNKE computes the least-squares best fit to given measuring points for every parameterized surface function. FUNKE is based on a generic surface function x(u,v) which is replaced in each function call by the current surface that we wish to fit. There is a strict separation of geometric and position/orientation parameters. The surface is thus described in its simplest position; we call this the special parametric description, in which only the parameters p that describe the geometry of the surface occur. Some examples:

x_plane(u,v) = (u, v, 0)^T, x_sphere(u,v; p) = p (cos u cos v, sin u cos v, sin v)^T, x_cylinder(u,v; p) = (p cos u, p sin u, v)^T, x_screw(u,v; p) = (v cos u, v sin u, p u)^T.

The general description of a surface, which includes the position/orientation information t = (t_x, t_y, t_z) and the angles a = (a, b, c), is given by the transformation (R(a) is an orthogonal matrix)

x(u,v; p, a, t) = R(a) x(u,v; p) + t.

If we wish to compute the distance d(x̃) of a measuring point x̃ to the surface, we have to solve a minimization problem in order to find the foot-point coordinates:

d(x̃) = min_{u,v} || x̃ − x(u,v; p, a, t) ||.

To compute the best fit of a surface to n measuring points we need to minimize the sum of squares of the distances, and thus we have to introduce 2n more unknowns for the foot-point coordinates U = (u_1, v_1, ..., u_n, v_n).
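The effect of the extra foot-point unknowns can be seen in the simplest parametric fit, a circle x(u) = c + r (cos u, sin u): each measuring point contributes its own parameter u_i, and one cheap strategy is to alternate between updating the u_i and solving the then-linear problem for c and r. This alternating scheme is only an illustration of the parametric formulation, not FUNKE's Jacobian-based algorithm:

```python
import math

def fit_circle(points, n_iter=20):
    """Parametric best fit of a circle with per-point foot parameters u_i,
    by alternating foot-point updates and a linear solve for (c, r)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n      # initial center: centroid
    cy = sum(p[1] for p in points) / n
    r = 1.0
    for _ in range(n_iter):
        # Foot-point step: the best u_i gives the unit direction from c to x_i.
        d = []
        for px, py in points:
            h = math.hypot(px - cx, py - cy)
            d.append(((px - cx) / h, (py - cy) / h))
        # Linear step: with directions d_i fixed, the normal equations for
        # minimizing sum |x_i - c - r d_i|^2 give c = xbar - r dbar and
        # r = (sum d_i.x_i - n dbar.xbar) / (n - n |dbar|^2).
        dbx = sum(dx for dx, _ in d) / n
        dby = sum(dy for _, dy in d) / n
        xbx = sum(p[0] for p in points) / n
        xby = sum(p[1] for p in points) / n
        num = sum(dx * px + dy * py for (dx, dy), (px, py) in zip(d, points)) \
            - n * (dbx * xbx + dby * xby)
        den = n - n * (dbx * dbx + dby * dby)
        r = num / den
        cx = xbx - r * dbx
        cy = xby - r * dby
    return (cx, cy), r
```

For a surface the same idea needs two foot parameters (u_i, v_i) per point, which is exactly the growth in unknowns the abstract describes.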

•

TL;DR: In this paper, the authors present inconsistencies in the provisions for driving and rest periods, found in a previous analysis, and develop an abstract, yet precise and logically consistent interpretation of these provisions by constructing a mathematical model of the relevant parts of the social regulation.

Abstract: On public roads in the European Union, the transport of passengers and goods by vehicles is constrained by the Council regulation EEC 3820/85 on the harmonisation of certain social legislation relating to road transport. This social regulation aims to improve road safety and working conditions for drivers. This report presents some inconsistencies in the provisions for driving and rest periods, found in a previous analysis. The purpose was to develop an abstract, yet precise and logically consistent interpretation of these provisions by constructing a mathematical model of the relevant parts of the social regulation.
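A mathematical model of such provisions typically turns a driver's timeline into a sequence of driving and rest segments and checks each rule over it. The sketch below encodes one provision of EEC 3820/85 (a break of at least 45 minutes after at most 4.5 hours of driving) in a deliberately simplified reading; the regulation's full text has further cases (split breaks, daily rest) that this ignores:

```python
def violates_break_rule(segments, max_drive=4.5, min_break=0.75):
    """Simplified check of the 4.5 h driving / 45 min break provision.
    segments: list of ('drive' | 'rest', hours); returns True on violation."""
    driven = 0.0
    for kind, hours in segments:
        if kind == 'drive':
            driven += hours
            if driven > max_drive:
                return True
        elif hours >= min_break:        # a sufficient break resets the clock
            driven = 0.0
    return False
```

Making the reset condition this explicit is exactly where formalization exposes the kind of ambiguities and inconsistencies the report discusses, e.g. whether short rests accumulate.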

••

TL;DR: A fast linearizable counting scheme for processors that increment at arbitrary rates is presented; the distributed counter is efficient in both a message-passing and a shared-memory environment.

Abstract: A distributed counter is a concurrent object which provides a test-and-increment operation on a shared value. On the basis of a distributed counter one can implement various fundamental data structures such as queues or stacks. We present a fast linearizable counting scheme for processors that increment at arbitrary rates. Our counter is efficient in both a message-passing and a shared-memory environment; we describe the former in detail. We analyze the expected behaviour of our scheme using queueing theory. In our simulations we compare our scheme with Counting Networks and Diffracting Trees.
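The object's interface, test-and-increment, atomically returns the old value and bumps the counter. A single-lock shared-memory stand-in shows the interface and the linearizability property, though not the paper's distributed scheme (whose point is precisely to avoid such a central bottleneck):

```python
import threading

class CounterSketch:
    """Linearizable test-and-increment via one lock (centralized stand-in)."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def test_and_increment(self):
        with self._lock:                # the whole read-modify-write is atomic
            old = self._value
            self._value += 1
            return old
```

Linearizability means concurrent callers receive distinct, gap-free values, as the test below checks with several incrementing threads.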

•

TL;DR: The Prozess-Leit-System (PLS, process control system) as a working tool, as well as the two workplaces "Kesselhaus" (boiler house) and "FO-Messwarte" (FO control room), are examined in this paper.

Abstract: (1) the Prozess-Leit-System (PLS, process control system) as a working tool, (2) the two workplaces "Kesselhaus" and "FO-Messwarte", (3) the working environment in the "Kesselhaus" and in the "FO-Messwarte", and (4) the work organization in the Energiebetriebe (energy operations) division.

••

TL;DR: In this paper, the authors demonstrate the importance of multiresolution compression methods for visualization using two examples: the first, compression domain volume rendering, enables one to visualize volume data progressively and instantaneously from its compressed data format and has been designed for WWW and networked applications.

Abstract: Compression methods have become of fundamental importance in almost every subfield of scientific visualization. However, unlike image compression, advanced visualization applications impose manifold constraints on the design of appropriate algorithms, where progressiveness, multiresolution or topology preservation are some of the key issues. This paper demonstrates the importance of multiresolution compression methods for visualization using two examples: The first, compression domain volume rendering, enables one to visualize volume data progressively and instantaneously from its compressed data format and has been designed for WWW and networked applications. The second one is a multiresolution compression and reconstruction method that allows for progressive coding, transmission and geometric reconstruction of surfaces and volumes. Both of the presented methods are so-called transform coding schemes and use wavelets for data representation.
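Both presented methods are transform coders built on wavelets; the simplest example is the 1-D Haar transform, which splits a signal into averages (a coarse version, transmitted first for progressiveness) and details (transmitted later for refinement). A minimal sketch, not the paper's codec:

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: pairwise averages + details.
    The signal length must be even."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_1d_inverse(avg, det):
    """Exact reconstruction from averages and details."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out
```

Progressive coding follows by recursing on the averages and quantizing or dropping the smallest details; reconstruction from averages alone already yields a usable low-resolution version.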

•

TL;DR: This paper discusses the application of TINA concepts, architectures and related design paradigms in the MESH project, which adopted TINA as a means to facilitate the design and implementation of a flexible platform for developing and providing interactive multimedia services.

Abstract: This paper discusses the application of TINA concepts, architectures and related design paradigms in the MESH project. MESH adopted TINA as a means to facilitate the design and implementation of a flexible platform for developing and providing interactive multimedia services. This paper reports on the activity in which the TINA specifications have been studied and implementation options for the TINA components have been selected. This paper also discusses the role and position of protocols and objects in this context.

•

TL;DR: In this article, the authors consider the problem of deciding which of two populations has the larger location parameter and show that the one that chooses the population with the largest observed value of Xi minimizes the expected loss.

Abstract: In this paper we consider the problem of deciding which of two populations \pi_1 and \pi_2 has the larger location parameter. We base this decision -- which is a choice between "\pi_1", "\pi_2" and "\pi_1 or \pi_2" -- on summary statistics X1 and X2, obtained from independent samples from the two populations. Our loss function contains a penalty for the absence of a "good" population as well as for the presence of a "bad" one among those chosen. We show that, for our class of decision rules (see (1.2)), the one that chooses the population with the largest observed value of Xi minimizes the expected loss. It also, obviously, minimizes the expected number of chosen populations. We give conditions under which the expected loss has a unique maximum and, for several examples where these conditions are satisfied, we also show that the expected loss is, for each (\theta_1, \theta_2), strictly decreasing in the (common) sample size n. For the case of normal populations, Bechhofer (1954) proposed and studied this decision rule, choosing n to lower-bound the probability of a correct selection. Several new results on distributions having increasing failure rate, needed for our results, are of independent interest, as are new results on the peakedness of location estimators.

•

TL;DR: The results of the nephelometer experiment conducted aboard the Galileo mission to Jupiter are presented in this paper, where the authors describe the small number densities of particles, the moderate particle sizes, generally in the slightly submicron to few micron range, and the resulting small optical depths, mass densities due to particles, column particle number loading and column mass loading in the atmosphere encountered by the Galileo Probe during its descent.

Abstract: The results of the nephelometer experiment conducted aboard the Probe of the Galileo mission to Jupiter are presented. The tenuous clouds and sparse particulate matter in the relatively particle-free 5-micron "hot spot" region of the Probe's descent were documented from about 0.46 bars to about 12 bars. Three regions of apparent coherent structure were noted, in addition to many indications of extremely small particle concentrations along the descent path. From the first valid measurement at about 0.46 bars down to about 0.55 bars a feeble decaying lower portion of a cloud, corresponding with the predicted ammonia particle cloud, was encountered. A denser, but still very modest, particle structure was present in the pressure regime extending from about 0.76 to a distinctive base at 1.34 bars, and is compatible with the expected ammonium hydrosulfide cloud. No massive water cloud was encountered, although below the second structure, a small, vertically thin layer at about 1.65 bars may be detached from the cloud above, but may also be water condensation, compatible with reported measurements of water abundance from other Galileo Mission experiments. A third small signal region, extending from about 1.9 to 4.5 bars, exhibited quite weak but still distinctive structure, and, although the identification of the light scatterers in this region is uncertain, may also be a water cloud perhaps associated with lateral atmospheric motion and/or reduced to a small mass density by atmospheric subsidence or other explanations. Rough descriptions of the particle size distributions and cloud properties in these regions have been derived, although they may be imprecise because of the small signals and experimental difficulties. 
These descriptions document the small number densities of particles, the moderate particle sizes, generally in the slightly submicron to few micron range, and the resulting small optical depths, mass densities due to particles, column particle number loading and column mass loading in the atmosphere encountered by the Galileo Probe during its descent.