
Showing papers in "Scientific Programming in 2002"


Journal ArticleDOI
TL;DR: An agent-based resource management system, ARMS, is implemented for grid computing; it utilises the performance prediction techniques of the PACE toolkit to provide quantitative data regarding the performance of complex applications running on a local grid resource.
Abstract: Resource management is an important component of a grid computing infrastructure. The scalability and adaptability of such systems are two key challenges that must be addressed. In this work an agent-based resource management system, ARMS, is implemented for grid computing. ARMS utilises the performance prediction techniques of the PACE toolkit to provide quantitative data regarding the performance of complex applications running on a local grid resource. At the meta-level, a hierarchy of homogeneous agents is used to provide a scalable and adaptable abstraction of the system architecture. Each agent is able to cooperate with other agents and thereby provide service advertisement and discovery for the scheduling of applications that need to utilise grid resources. A case study with corresponding experimental results is included to demonstrate the efficiency of the resource management and scheduling system.

165 citations


Journal ArticleDOI
TL;DR: This paper describes the UNICORE architecture and the services it provides: based on the abstract job model, it offers services for security, translation of abstract jobs into real batch jobs for different target systems, and a public key infrastructure.
Abstract: UNICORE (Uniform Interface to Computer Resources) is a software infrastructure supporting seamless and secure access to distributed resources. UNICORE allows uniform access to different hardware and software platforms as well as different organizational environments. Based on the abstract job model it offers services for security, translation of abstract jobs into real batch jobs for different target systems, and a public key infrastructure. This paper describes the UNICORE architecture and the services provided.

96 citations


Journal ArticleDOI
TL;DR: A novel application of hidden Markov models in implicit learning is presented and this method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
Abstract: Markov models have been used extensively in psychology of learning. Applications of hidden Markov models are rare however. This is partially due to the fact that comprehensive statistics for model selection and model assessment are lacking in the psychological literature. We present model selection and model assessment statistics that are particularly useful in applying hidden Markov models in psychology. These statistics are presented and evaluated by simulation studies for a toy example. We compare AIC, BIC and related criteria and introduce a prediction error measure for assessing goodness-of-fit. In a simulation study, two methods of fitting equality constraints are compared. In two illustrative examples with experimental data we apply selection criteria, fit models with constraints and assess goodness-of-fit. First, data from a concept identification task is analyzed. Hidden Markov models provide a flexible approach to analyzing such data when compared to other modeling methods. Second, a novel application of hidden Markov models in implicit learning is presented. Hidden Markov models are used in this context to quantify knowledge that subjects express in an implicit learning task. This method of analyzing implicit learning data provides a comprehensive approach for addressing important theoretical issues in the field.
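
As a hedged illustration of the selection criteria discussed above, the following sketch fits Gaussian hidden Markov models of increasing size to a toy sequence and compares AIC and BIC. It assumes the hmmlearn package, which is not part of the paper, and the parameter count is for a 1-D Gaussian HMM with diagonal covariances.

```python
# Sketch: selecting the number of hidden states with AIC/BIC.
# Assumes the hmmlearn package; X is a toy 1-D sequence, not the paper's data.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)]).reshape(-1, 1)

for k in (1, 2, 3):
    model = GaussianHMM(n_components=k, n_iter=100, random_state=0).fit(X)
    logL = model.score(X)
    # Free parameters: initial probs (k-1) + transitions k(k-1) + k means + k variances
    p = (k - 1) + k * (k - 1) + 2 * k
    n = len(X)
    aic = 2 * p - 2 * logL
    bic = p * np.log(n) - 2 * logL
    print(f"states={k}  logL={logL:.1f}  AIC={aic:.1f}  BIC={bic:.1f}")
```

Under these criteria the two-state model should win on this toy data, since the penalty terms outweigh the small likelihood gain from a third state.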

83 citations


Journal ArticleDOI
TL;DR: This paper details the areas relating to Grid research that the authors feel still need to be addressed to fully leverage the advantages of the Grid.
Abstract: The design and implementation of a national computing system and data grid has become a reachable goal from both the computer science and computational science point of view. A distributed infrastructure capable of sophisticated computational functions can bring many benefits to scientific work, but poses many challenges, both technical and socio-political. Technical challenges include having basic software tools, higher-level services, functioning and pervasive security, and standards, while socio-political issues include building a user community, adding incentives for sites to be part of a user-centric environment, and educating funding sources about the needs of this community. This paper details the areas relating to Grid research that we feel still need to be addressed to fully leverage the advantages of the Grid.

78 citations


Journal ArticleDOI
TL;DR: The design and prototype implementation of the VLAM-G platform is described, including several recent technologies such as the Globus toolkit, enhanced federated database systems, and visualization and simulation techniques.
Abstract: The Grid-based Virtual Laboratory AMsterdam (VLAM-G), provides a science portal for distributed analysis in applied scientific research. It offers scientists remote experiment control, data management facilities and access to distributed resources by providing cross-institutional integration of information and resources in a familiar environment. The main goal is to provide a unique integration of existing standards and software packages. This paper describes the design and prototype implementation of the VLAM-G platform. In this testbed we applied several recent technologies such as the Globus toolkit, enhanced federated database systems, and visualization and simulation techniques. Several domain specific case studies are described in some detail. Information management will be discussed separately in a forthcoming paper.

73 citations


Journal ArticleDOI
TL;DR: The SKaMPI benchmark is described, which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability, and is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
Abstract: The main objective of the MPI communication library is to enable portable parallel programming with high performance within the message-passing paradigm. Since the MPI standard has no associated performance model, and makes no performance guarantees, comprehensive, detailed and accurate performance figures for different hardware platforms and MPI implementations are important for the application programmer, both for understanding and possibly improving the behavior of a given program on a given platform, as well as for assuring a degree of predictable behavior when switching to another hardware platform and/or MPI implementation. We term this latter goal "performance portability", and address the problem of attaining performance portability by benchmarking. We describe the SKaMPI benchmark, which covers a large fraction of MPI, and incorporates well-accepted mechanisms for ensuring accuracy and reliability. SKaMPI is distinguished among other MPI benchmarks by an effort to maintain a public performance database with performance data from different hardware platforms and MPI implementations.
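
A minimal sketch of the kind of point-to-point measurement such a benchmark performs, assuming mpi4py; SKaMPI itself uses far more careful repetition and outlier control than this simple ping-pong loop.

```python
# Sketch of a SKaMPI-style point-to-point timing kernel, assuming mpi4py.
# Run with: mpiexec -n 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 100

for nbytes in (1 << 10, 1 << 16, 1 << 20):
    buf = np.zeros(nbytes, dtype=np.uint8)
    comm.Barrier()                          # start both ranks together
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1); comm.Recv(buf, source=1)
        else:
            comm.Recv(buf, source=0); comm.Send(buf, dest=0)
    dt = (MPI.Wtime() - t0) / (2 * REPS)    # one-way latency estimate
    if rank == 0:
        print(f"{nbytes:>8} B : {dt * 1e6:8.1f} us/message")
```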

63 citations


Journal ArticleDOI
TL;DR: A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms that feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling.
Abstract: A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. These production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
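
The abstract does not name the specific space-filling curve used; the Morton (Z-order) curve below is one common choice and illustrates, under that assumption, how curve ordering places spatially nearby atoms close together in memory and on disk.

```python
# Sketch: ordering particles along a Z-order (Morton) space-filling curve,
# one common locality-preserving layout for compression and scalable I/O.
def interleave(x: int) -> int:
    """Spread the low 10 bits of x so they occupy every third bit."""
    x &= 0x3FF
    x = (x | x << 16) & 0x30000FF
    x = (x | x << 8) & 0x300F00F
    x = (x | x << 4) & 0x30C30C3
    x = (x | x << 2) & 0x9249249
    return x

def morton3d(ix: int, iy: int, iz: int) -> int:
    return interleave(ix) | interleave(iy) << 1 | interleave(iz) << 2

# Sort atom positions (quantized to a 1024^3 grid) by Morton key so that
# neighboring atoms in space end up close together in the output stream.
atoms = [(0.12, 0.90, 0.33), (0.11, 0.91, 0.30), (0.95, 0.05, 0.70)]
keys = [morton3d(int(x * 1023), int(y * 1023), int(z * 1023)) for x, y, z in atoms]
print([a for _, a in sorted(zip(keys, atoms))])
```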

36 citations



Journal Article
고륜호, 윤병주, 이훈철, 김성대, 유상조 
TL;DR: In this article, a watermarking scheme based on a polar-coordinates shape-adaptive discrete transform (PSADT) and log-polar coordinates is proposed for arbitrarily shaped image objects, providing robustness against geometric attacks such as rotation and scaling.
Abstract: In this paper we propose a new watermarking technique based on the polar-coordinates shape-adaptive discrete transform (PSADT), applicable to image objects of arbitrary shape. Using log-polar coordinates together with the shape-adaptive discrete transform, the proposed technique can be applied to arbitrarily shaped image objects such as MPEG-4 VOPs, and it offers better robustness against geometric attacks such as rotation and scaling than existing techniques.
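
The robustness claim rests on a standard property of log-polar resampling: rotation and scaling of the image become shifts of the resampled grid. A small sketch of that mapping, with illustrative grid sizes that are not the paper's implementation:

```python
# Sketch: log-polar resampling, which maps rotation and scaling of an image
# to cyclic/linear shifts -- the property the watermarking scheme relies on.
import numpy as np

def log_polar(img: np.ndarray, n_rho: int = 64, n_theta: int = 64) -> np.ndarray:
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rho = np.exp(np.linspace(0, np.log(min(cx, cy)), n_rho))   # log-spaced radii
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + rho[:, None] * np.sin(theta)).round().astype(int).clip(0, h - 1)
    xs = (cx + rho[:, None] * np.cos(theta)).round().astype(int).clip(0, w - 1)
    return img[ys, xs]   # rows = log-radius, columns = angle

img = np.random.rand(128, 128)
lp = log_polar(img)
# Rotating img by an angle shifts lp cyclically along axis 1;
# scaling img by s shifts lp along axis 0 by log(s) in rho-bins.
```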

31 citations


Journal ArticleDOI
TL;DR: A Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain, extending previous work on homogeneous Markov chains to more general and applicable hidden Markov models.
Abstract: This paper describes a Bayesian approach to determining the order of a finite state Markov chain whose transition probabilities are themselves governed by a homogeneous finite state Markov chain. It extends previous work on homogeneous Markov chains to more general and applicable hidden Markov models. The method we describe uses a Markov chain Monte Carlo algorithm to obtain samples from the (posterior) distribution for both the order of Markov dependence in the observed sequence and the other governing model parameters. These samples allow coherent inferences to be made straightforwardly in contrast to those which use information criteria. The methods are illustrated by their application to both simulated and real data sets.
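
For intuition, the fully observed special case admits a closed-form marginal likelihood when each transition row gets a symmetric Dirichlet prior, so the posterior over the order can be computed directly. The sketch below shows that simpler calculation; the paper's MCMC is needed for the hidden case.

```python
# Sketch: log marginal likelihood of order-r Markov dependence for a fully
# observed chain with symmetric Dirichlet(1) priors on each transition row.
from collections import Counter
from math import lgamma

def log_marginal(seq, r, n_states):
    # Count (length-r context, next state) transitions.
    trans = Counter(zip(*(seq[i:len(seq) - r + i] for i in range(r + 1))))
    ctx = Counter()
    for key, c in trans.items():
        ctx[key[:r]] += c
    ll = 0.0
    for key, c in trans.items():           # per-cell Gamma(1 + n_sj) terms
        ll += lgamma(1 + c) - lgamma(1)
    for key, c in ctx.items():             # per-row normalizing terms
        ll += lgamma(n_states) - lgamma(n_states + c)
    return ll

seq = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0]
for r in (0, 1, 2):
    print(f"order {r}: log marginal likelihood = {log_marginal(seq, r, 2):.2f}")
```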

29 citations


Journal ArticleDOI
TL;DR: Experiments show that by using the proposed distributed DLB scheme for Structured Adaptive Mesh Refinement applications on distributed systems, the execution time can be reduced by at least 9% compared to using a parallel DLB scheme which does not consider the heterogeneous and dynamic features of distributed systems.
Abstract: Dynamic load balancing (DLB) for parallel systems has been studied extensively; however, DLB for distributed systems is relatively new. To efficiently utilize computing resources provided by distributed systems, an underlying DLB scheme must address both heterogeneous and dynamic features of distributed systems. In this paper, we propose a DLB scheme for Structured Adaptive Mesh Refinement (SAMR) applications on distributed systems. While the proposed scheme can take into consideration (1) the heterogeneity of processors and (2) the heterogeneity and dynamic load of the networks, the focus of this paper is on the latter. The load-balancing processes are divided into two phases: global load balancing and local load balancing. We also provide a heuristic method to evaluate the computational gain and redistribution cost for global redistribution. Experiments show that by using our distributed DLB scheme, the execution time can be reduced by at least 9% compared to using a parallel DLB scheme which does not consider the heterogeneous and dynamic features of distributed systems.
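
A hedged sketch of the gain-versus-cost test behind such a global redistribution heuristic; the cost model and numbers here are illustrative, not the paper's.

```python
# Sketch of the gain-versus-cost decision in global load balancing:
# redistribute only when the predicted time saved exceeds the predicted
# cost of moving the data.  All names and numbers are illustrative.
def should_rebalance(loads, speeds, bytes_to_move, bandwidth):
    """loads: work units per site; speeds: relative processor speeds."""
    t_now = max(l / s for l, s in zip(loads, speeds))   # current makespan
    t_even = sum(loads) / sum(speeds)                   # ideal balanced time
    gain = t_now - t_even                               # predicted saving
    cost = bytes_to_move / bandwidth                    # redistribution cost
    return gain > cost

# Two sites: a slow overloaded one and a fast lightly loaded one.
print(should_rebalance(loads=[900, 100], speeds=[1.0, 2.0],
                       bytes_to_move=2e8, bandwidth=1e7))   # 20 s cost -> True
```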

Journal ArticleDOI
TL;DR: The Grid Resource Broker is a grid portal that allows trusted users to create and handle computational/data grids on the fly exploiting a simple and friendly web-based GUI and provides location-transparent secure access to Globus services.
Abstract: Portals to computational/data grids provide the scientific community with a friendly environment in order to solve large-scale computational problems. The Grid Resource Broker (GRB) is a grid portal that allows trusted users to create and handle computational/data grids on the fly exploiting a simple and friendly web-based GUI. GRB provides location-transparent secure access to Globus services, automatic discovery of resources matching the user's criteria, selection and scheduling on behalf of the user. Moreover, users are not required to learn Globus and they do not need to write specialized code or to rewrite their existing legacy codes. We describe GRB architecture, its components and current GRB features addressing the main differences between our approach and related work in the area.

Journal ArticleDOI
TL;DR: The portal lets grid application programmers script complex distributed computations and package these applications with simple interfaces for others to use, and has been tested with various applications, including the distributed simulation of chemical processes in semiconductor manufacturing and collaboratory support for X-ray crystallographers.
Abstract: This paper describes the design and prototype implementation of the XCAT Grid Science Portal. The portal lets grid application programmers script complex distributed computations and package these applications with simple interfaces for others to use. Each application is packaged as a notebook which consists of web pages and editable parameterized scripts. The portal is a workstation-based specialized personal web server, capable of executing the application scripts and launching remote grid applications for the user. The portal server can receive event streams published by the application and grid resource information published by Network Weather Service (NWS) [35] or Autopilot [16] sensors. Notebooks can be published and stored in web based archives for others to retrieve and modify. The XCAT Grid Science Portal has been tested with various applications, including the distributed simulation of chemical processes in semiconductor manufacturing and collaboratory support for X-ray crystallographers.

Journal ArticleDOI
TL;DR: The results show that prediction of dynamic network performance is key to efficient scheduling and that tunability allows for production runs of on-line parallel tomography in Computational Grid environments.
Abstract: Tomography is a popular technique to reconstruct the three-dimensional structure of an object from a series of two-dimensional projections. Tomography is resource-intensive and deployment of a parallel implementation onto Computational Grid platforms has been studied in previous work. In this work, we address on-line execution of the application where computation is performed as data is collected from an on-line instrument. The goal is to compute incremental 3-D reconstructions that provide quasi-real-time feedback to the user. We model on-line parallel tomography as a tunable application: trade-offs between resolution of the reconstruction and frequency of feedback can be used to accommodate various resource availabilities. We demonstrate that application scheduling/tuning can be framed as multiple constrained optimization problems and evaluate our methodology in simulation. Our results show that prediction of dynamic network performance is key to efficient scheduling and that tunability allows for production runs of on-line parallel tomography in Computational Grid environments.
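
The tunability idea can be sketched as a tiny constrained optimization: pick the largest reconstruction resolution whose predicted compute-plus-transfer time meets the feedback deadline. All rates and sizes below are illustrative assumptions, not the paper's model.

```python
# Sketch: choosing reconstruction resolution under a feedback deadline,
# given a predicted network bandwidth.  Constants are illustrative only.
def best_resolution(deadline_s, bandwidth_Bps, compute_rate_voxels_s,
                    resolutions=(64, 128, 256, 512)):
    best = None
    for r in resolutions:                       # resolutions in increasing order
        voxels = r ** 3
        t = voxels / compute_rate_voxels_s + voxels * 4 / bandwidth_Bps
        if t <= deadline_s:                     # compute + float32 transfer time
            best = r                            # keep the largest feasible size
    return best

print(best_resolution(deadline_s=60, bandwidth_Bps=5e6,
                      compute_rate_voxels_s=2e6))   # -> 256 on these numbers
```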

Journal ArticleDOI
TL;DR: The GDMP client-server software system is a generic file replication tool that replicates files securely and efficiently from one site to another in a Data Grid environment using Globus Grid tools.
Abstract: The GDMP client-server software system is a generic file replication tool that replicates files securely and efficiently from one site to another in a Data Grid environment using Globus Grid tools. In addition, it manages replica catalogue entries for file replicas and thus maintains a consistent view on names and locations of replicated files. Files to be transferred can be of any particular file format and GDMP treats them all in the same way. However, for Objectivity database files a particular plug-in exists. All files are assumed to be read-only.

Journal ArticleDOI
TL;DR: KappaPi is described as an example of a static automatic performance analysis tool, and also a general environment based on parallel patterns for developing and dynamically tuning parallel/distributed applications.
Abstract: Performance analysis and tuning of parallel/distributed applications are very difficult tasks for non-expert programmers. It is necessary to provide tools that automatically carry out these tasks. These can be static tools that carry out the analysis in a post-mortem phase, or tools that tune the application on the fly. Both kinds of tools have their target applications. Static automatic analysis tools are suitable for stable applications, while dynamic tuning tools are more appropriate for applications with dynamic behaviour. In this paper, we describe KappaPi as an example of a static automatic performance analysis tool, and also a general environment based on parallel patterns for developing and dynamically tuning parallel/distributed applications.

Journal ArticleDOI
TL;DR: Measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
Abstract: When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

Journal ArticleDOI
TL;DR: In this paper, a low-level hardware monitoring facility combined with a comprehensive visualization tool enables the generation of memory access histograms capable of showing all memory accesses across the complete address space of an application's working set.
Abstract: Shared memory applications running transparently on top of NUMA architectures often face severe performance problems due to bad data locality and excessive remote memory accesses. Optimizations with respect to data locality are therefore necessary, but require a fundamental understanding of an application's memory access behavior. The information necessary for this cannot be obtained using simple code instrumentation due to the implicit nature of the communication handled by the NUMA hardware, the large amount of traffic produced at runtime, and the fine access granularity in shared memory codes. In this paper an approach to overcome these problems and thereby to enable an easy and efficient optimization process is presented. Based on a low-level hardware monitoring facility in coordination with a comprehensive visualization tool, it enables the generation of memory access histograms capable of showing all memory accesses across the complete address space of an application's working set. This information can be used to identify access hot spots, to understand the dynamic behavior of shared memory applications, and to optimize applications using an application specific data layout resulting in significant performance improvements.

Journal ArticleDOI
TL;DR: An HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data, with model structures derived based on the Bayesian model selection methodology.
Abstract: Hidden Markov Models (HMM) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, in other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism is available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
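
A minimal sketch of the K-means initialization idea, assuming scikit-learn and hmmlearn rather than the paper's own code: cluster centroids seed the emission means, and EM then refines the remaining parameters.

```python
# Sketch: initializing Gaussian-HMM emission means with K-means before EM.
# Assumes scikit-learn and hmmlearn; data is a toy two-mode sequence.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(-3, 1, 300), rng.normal(3, 1, 300)]).reshape(-1, 1)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# init_params="stc" re-initializes startprob/transmat/covars but keeps means_.
hmm = GaussianHMM(n_components=2, init_params="stc", n_iter=50, random_state=0)
hmm.means_ = km.cluster_centers_        # seed means from K-means centroids
hmm.fit(X)
print(hmm.means_.ravel(), hmm.transmat_.round(2))
```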

Journal ArticleDOI
TL;DR: Two novel HMM based techniques that segregate a speech segment from its concurrent background that can be reliably used in clean environments while the second method, which makes use of the wavelets denoising technique, is effective in noisy environments.
Abstract: The goal of the speech segments extraction process is to separate acoustic events of interest (the speech segment to be recognised) in a continuously recorded signal from other parts of the signal (background). The recognition rate of many voice command systems is very much dependent on speech segment extraction accuracy. This paper discusses two novel HMM based techniques that segregate a speech segment from its concurrent background. The first method can be reliably used in clean environments while the second method, which makes use of the wavelets denoising technique, is effective in noisy environments. These methods have been implemented and shown superiority over other popular techniques, thus, indicating that they have the potential to achieve greater levels of accuracy in speech recognition rates.
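
A sketch of the kind of wavelet denoising the second method builds on, assuming the PyWavelets package; the wavelet choice, decomposition level, and threshold rule here are common defaults, not the paper's settings.

```python
# Sketch: soft-threshold wavelet denoising of a noisy 1-D signal (pywt).
import numpy as np
import pywt

def denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.4 * np.random.randn(1024)
clean = denoise(noisy)
```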

Journal ArticleDOI
Arun Iyengar1, Daniela Rosu1
TL;DR: The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.
Abstract: Web site applications are some of the most challenging high-performance applications currently being developed and deployed. The challenges emerge from the specific combination of high variability in workload characteristics and of high performance demands regarding the service level, scalability, availability, and costs. In recent years, a large body of research has addressed the Web site application domain, and a host of innovative software and hardware solutions have been proposed and deployed. This paper is an overview of recent solutions concerning the architectures and the software infrastructures used in building Web site applications. The presentation emphasizes three of the main functions in a complex Web site: the processing of client requests, the control of service levels, and the interaction with remote network caches.

Journal ArticleDOI
TL;DR: Two different strategies for routing method invocations are investigated, namely call forwarding and referrals.
Abstract: Mobile Objects in Java provides support for object mobility in Java. Similarly to the RMI technique, a notion of client-side stub, called startpoint, is used to communicate transparently with a server-side stub, called endpoint. Objects and associated endpoints are allowed to migrate. Our approach takes care of routing method calls using an algorithm that we studied in earlier work. The purpose of this paper is to present and evaluate the implementation of this algorithm in Java. In particular, two different strategies for routing method invocations are investigated, namely call forwarding and referrals. The result of our experimentation shows that the latter can be more efficient by up to 19%.
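
To make the two routing strategies concrete, here is a toy sketch (classes and names invented for illustration, not the paper's Java API): call forwarding chases the object through every old location on each call, while referrals let the caller update its startpoint after the first redirect.

```python
# Sketch contrasting call forwarding and referrals for a migrated object.
class Moved(Exception):
    def __init__(self, target): self.target = target

class Endpoint:
    def __init__(self, name): self.name, self.moved_to = name, None
    def call(self):
        if self.moved_to is None:
            return f"served at {self.name}"
        raise Moved(self.moved_to)

def forwarding_call(ep):
    # Each old location forwards the call one hop further (chains persist).
    try:
        return ep.call()
    except Moved as m:
        return forwarding_call(m.target)

class Startpoint:
    def __init__(self, ep): self.ep = ep
    def referral_call(self):
        # The caller retries and *remembers* the new location (chains shrink).
        while True:
            try:
                return self.ep.call()
            except Moved as m:
                self.ep = m.target

a, b = Endpoint("hostA"), Endpoint("hostB")
a.moved_to = b                      # the object migrated from A to B
print(forwarding_call(a))           # routed through A on every call
sp = Startpoint(a)
print(sp.referral_call())           # first call updates sp.ep to B
```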

Journal ArticleDOI
TL;DR: CX, a network-based computational exchange, is presented, and the system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination.
Abstract: CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

Journal ArticleDOI
TL;DR: This work reports on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of $10^{12}$ in space and time, which allows the properties of the first stars which form in the universe to be resolved.
Abstract: As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.

Journal ArticleDOI
Bruce Greer1, John Harrison1, Greg Henry1, Wei Li1, Peter Tang1 
TL;DR: This paper gives an overview of the most relevant architectural features of Itanium and provides illustrations of how these features are used in both low-level and high-level support for scientific and engineering computing, including transcendental functions and linear algebra kernels.
Abstract: The 64-bit Intel® Itanium® architecture is designed for high-performance scientific and enterprise computing, and the Itanium processor is its first silicon implementation. Features such as extensive arithmetic support, predication, speculation, and explicit parallelism can be used to provide a sound infrastructure for supercomputing. A large number of high-performance computer companies are offering Itanium®-based systems, some capable of peak performance exceeding 50 GFLOPS. In this paper we give an overview of the most relevant architectural features and provide illustrations of how these features are used in both low-level and high-level support for scientific and engineering computing, including transcendental functions and linear algebra kernels.

Journal Article
TL;DR: This paper proposes an algorithm that efficiently detects shot-change boundaries in MPEG streams.
Abstract: This paper proposes an algorithm that effectively detects shot-change boundaries in MPEG stream data. To this end, the histogram difference value and the pixel difference value, each measuring the degree of change between consecutive frames, are treated as signals. A median filter is applied to each signal, and the MFD (median filtered difference), the difference between the filtered value and the original signal, is computed. A large MFD value indicates a shot change and can therefore serve as the criterion for cut detection. In addition, an artificial neural network is used to determine the MFD threshold that marks a cut boundary. The proposed algorithm is shown to detect cuts properly in video with strong motion and in video containing frames that brighten abruptly. Experimental results demonstrate the performance of the proposed algorithm.
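
A hedged sketch of the MFD measure described above, using SciPy's median filter; the fixed threshold here stands in for the neural network the paper uses to choose it.

```python
# Sketch of MFD cut detection: a frame-difference signal minus its
# median-filtered version, thresholded to flag shot changes.
import numpy as np
from scipy.signal import medfilt

def detect_cuts(diff_signal: np.ndarray, ksize: int = 5, thresh: float = 3.0):
    mfd = np.abs(diff_signal - medfilt(diff_signal, kernel_size=ksize))
    return np.flatnonzero(mfd > thresh)

# Toy histogram-difference signal: smooth motion with a sharp cut at frame 50.
sig = np.random.rand(100) * 0.5
sig[50] = 8.0                      # abrupt scene change
print(detect_cuts(sig))            # -> [50]
```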

Journal Article
TL;DR: The structure and convergence characteristics of the IMADF algorithm are analyzed; its convergence matches the Sign algorithm and is superior to the MADF algorithm.
Abstract: The structure and convergence analysis of an improved multiplication-free adaptive digital filter (IMADF) for removing intersymbol interference (ISI) in data transmission channels are presented. The convergence characteristics of the IMADF using a fractionally spaced equalizer (FSE) are analyzed under zero-mean white noise. Experimental results show that the convergence characteristics of the IMADF algorithm are the same as those of the Sign algorithm but superior to those of the MADF algorithm; it is particularly useful when the input signals are highly correlated.
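
For orientation, a sketch of the plain sign-error LMS equalizer, the baseline in the family compared above (Sign, MADF, IMADF); the channel, step size, and tap count are illustrative, and this is not the paper's IMADF.

```python
# Sketch: sign-error LMS adaptive equalizer removing ISI from a toy channel.
import numpy as np

def sign_lms(x, d, n_taps=11, mu=0.005):
    """x: received samples, d: desired symbols; returns equalized output."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]          # tap-delay line
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * np.sign(e) * u           # sign-error update (multiplier-light)
    return y

rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], 5000)
channel = np.array([0.3, 1.0, 0.3])        # simple ISI channel
rx = np.convolve(sym, channel, mode="same") + 0.05 * rng.standard_normal(5000)
eq = sign_lms(rx, sym)                     # output sign should track sym
```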

Journal ArticleDOI
TL;DR: In this paper, porting a sequential global code to a shared-memory computing system is discussed; several efficient strategies to optimize the code are reported; well-optimized scientific libraries are used; detailed parallel implementation of the global model is reported; performance data are analyzed.
Abstract: The objective of our investigation is to establish robust inverse algorithms to convert GRACE gravity and ICESat altimetry mission data into global current and past surface mass variations. To assess separation of global sources of change and to evaluate spatio-temporal resolution and accuracy statistically from full posterior covariance matrices, a high performance version of a global simultaneous grid inverse algorithm is essential. One means to accomplish this is to implement a general, well-optimized, parallel global model on massively parallel supercomputers. In our present work, an efficient parallel version of a global inverse program has been implemented on the Origin 2000 using the OpenMP programming model. In this paper, porting a sequential global code to a shared-memory computing system is discussed; several efficient strategies to optimize the code are reported; well-optimized scientific libraries are used; detailed parallel implementation of the global model is reported; performance data of the code are analyzed. Scaling performance on a shared-memory system is also discussed. The parallel version software gives good speedup and dramatically reduces total data processing time.

Journal ArticleDOI
TL;DR: This paper presents a methodology for developing on-line tools with MIMO, and uses a distributed, CORBA-based application, which represents a test case with high performance requirements and an integrated tool environment for observing and steering the image reconstruction application.
Abstract: Software development is getting more and more complex, especially within distributed middleware-based environments. A major drawback during the overall software development process is the lack of on-line tools, i.e. tools applied as soon as there is a running prototype of an application. The MIMO MIddleware MOnitor provides a solution to this problem by implementing a framework for an efficient development of on-line tools. This paper presents a methodology for developing on-line tools with MIMO. As an example scenario, we choose a distributed medical image reconstruction application, which represents a test case with high performance requirements. Our distributed, CORBA-based application is instrumented for being observed with MIMO and related tools. Additionally, load balancing mechanisms are integrated for further performance improvements. As a result, we obtain an integrated tool environment for observing and steering the image reconstruction application. By using our rapid tool development process, the integration of on-line tools proves to be very convenient and enables an efficient tool deployment.

Journal Article
이인재, 김용호, 김중규, 이명호, 안치득 
TL;DR: An efficient region segmentation method for non-rigid objects such as clouds and smoke, using the watershed algorithm for still images and intra-/inter-frame segmentation for video.
Abstract: MPEG-4, proposed as a multimedia standard, is an object-based coding scheme, and segmenting objects efficiently is a key issue in MPEG-4. Research in this area has so far mainly targeted rigid objects; in this paper we study an efficient region segmentation method for non-rigid objects, in particular objects such as clouds and smoke. Non-rigid objects are irregular in shape and size and deform over time, so segmenting them accurately is not easy. To cope with this, we segment non-rigid objects in still images using the watershed algorithm. For video, the boundaries of the objects of interest in successive frames are extracted automatically through intra-frame and inter-frame segmentation. By separately weighting the boundary information and the region information in the image, the desired object is extracted more accurately.
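
A minimal sketch of marker-based watershed segmentation on a toy image, assuming scikit-image; the seeds and thresholds are invented for illustration and do not reproduce the paper's video pipeline.

```python
# Sketch: still-image segmentation with the watershed transform, flooding a
# gradient-magnitude relief from background and object seed markers.
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

img = np.zeros((80, 80))
img[20:60, 20:60] = 1.0
img += 0.1 * np.random.rand(80, 80)        # a noisy blob as a toy "cloud"

edges = sobel(img)                         # gradient magnitude as the relief
markers = np.zeros_like(img, dtype=int)
markers[img < 0.2] = 1                     # background seed
markers[img > 0.8] = 2                     # object seed
labels = watershed(edges, markers)         # flood from the markers
print(np.unique(labels))                   # -> [1 2]
```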