
Showing papers by "Peter Bajcsy published in 2010"


Journal ArticleDOI
TL;DR: In this paper, the authors applied a set of data mining tools to predict weekly nitrate-N concentrations at a gauging station on the Sangamon River near Decatur, Illinois.
Abstract: Agricultural nonpoint source pollution has been identified as one of the leading causes of surface water quality impairment in the United States. Such an impact is important, particularly in predominantly agricultural areas, where application of agricultural fertilizers often results in excessive nitrate levels in streams and rivers. When nitrate concentration in a public water supply reaches or exceeds drinking water standards, costly measures such as well closure or water treatment have to be considered. Thus, having accurate nitrate-N predictions is critical in making correct and timely management decisions. This study applied a set of data mining tools to predict weekly nitrate-N concentrations at a gauging station on the Sangamon River near Decatur, Illinois. The data mining tools used in this study included artificial neural networks, evolutionary polynomial regression and the naive Bayes model. The results were compared using seven forecast measures. In general, all models performed reasonably well, but no single model achieved the best score on every measure, suggesting that a multi-tool approach is needed. In addition to improving forecast accuracy compared with previous studies, the tools described in this study demonstrated potential for error analysis, input selection and ranking of explanatory variables, thereby supporting the design of cost-effective monitoring networks.
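
The abstract names the models but not the comparison mechanics. The sketch below is our own illustration, not the study's code: it fits two stand-in regressors (an MLP for the neural network, plain polynomial regression in place of evolutionary polynomial regression; the naive Bayes model is omitted) on synthetic weekly data and scores them with several error measures. All data, features and names here are assumptions.

```python
# Sketch (not the authors' code): comparing several regressors for weekly
# nitrate-N forecasting with multiple error measures, as the paper advocates.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Hypothetical predictors: lagged nitrate-N, weekly flow, precipitation.
X = rng.random((520, 3))
y = 5 + 10 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 0.5, 520)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                      random_state=0)),
    # Plain polynomial regression stands in for evolutionary polynomial regression.
    "PolyReg": make_pipeline(PolynomialFeatures(2), LinearRegression()),
}

for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    # The paper compares seven forecast measures; RMSE, MAE and R^2 stand in here.
    print(name,
          f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.3f}",
          f"MAE={mean_absolute_error(y_te, pred):.3f}",
          f"R2={r2_score(y_te, pred):.3f}")
```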

33 citations


Journal ArticleDOI
TL;DR: This paper proposes the use of a novel image-based machine-learning (IBML) approach to reduce the number of user interactions required to identify promising calibration solutions involving spatially distributed parameter fields (e.g., hydraulic conductivity parameters in a groundwater model).
Abstract: The interactive multiobjective genetic algorithm (IMOGA) is a promising new approach to calibrate models. The IMOGA combines traditional optimization with an interactive framework, thus allowing both quantitative calibration criteria as well as the subjective knowledge of experts to drive the search for model parameters. One of the major challenges in using such interactive systems is the burden they impose on the experts who interact with the system. This paper proposes the use of a novel image-based machine-learning (IBML) approach to reduce the number of user interactions required to identify promising calibration solutions involving spatially distributed parameter fields (e.g., hydraulic conductivity parameters in a groundwater model). The first step in the IBML approach involves selecting a few highly representative solutions for expert ranking. The selection is performed using unsupervised clustering approaches from the field of image processing, which group potential parameter fields based on their...
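
As a rough illustration of the IBML selection step (our sketch, not the paper's implementation), one can cluster candidate parameter fields treated as images and present the expert only the field nearest each cluster centre; the grid size, cluster count and synthetic fields below are assumptions.

```python
# Sketch of the IBML selection step: cluster candidate parameter fields
# treated as images, then show the expert one exemplar per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# 60 hypothetical hydraulic-conductivity fields on a 20x20 grid.
fields = rng.random((60, 20, 20))
flat = fields.reshape(len(fields), -1)  # each field becomes a feature vector

kmeans = KMeans(n_clusters=5, n_init=10, random_state=1).fit(flat)

representatives = []
for k in range(kmeans.n_clusters):
    members = np.flatnonzero(kmeans.labels_ == k)
    # Pick the member field closest to the centroid as the cluster's exemplar.
    dists = np.linalg.norm(flat[members] - kmeans.cluster_centers_[k], axis=1)
    representatives.append(members[np.argmin(dists)])

print("Fields to show the expert:", representatives)
```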

8 citations


Journal ArticleDOI
01 Dec 2010
TL;DR: An approach to designing and prototyping an evaluation framework for decisions based on image inspection is presented; the framework allows users to analyze storage and computational costs of information gathering as a function of information granularity and then to assess the potential value of decision process reconstructions.
Abstract: Challenges related to documenting and reconstructing computer-assisted decision processes include the selection of information granularity, design of information gathering and reconstruction mechanisms, evaluation of reconstruction value, and storage and computational costs. This article surveys these challenges and explicates an approach to designing and prototyping an evaluation framework for decisions based on image inspection. The framework explored here allows users to analyze storage and computational costs of information gathering as a function of information granularity and then assesses the potential value of decision process reconstructions. We illustrate how evaluations of decision process reconstructions could potentially improve our understanding of future archival needs by simultaneously documenting, preserving and reconstructing computer-assisted decision processes, and evaluating and forecasting computational and storage requirements of the documentation and reconstruction processes over time.
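
To make the granularity/cost trade-off concrete, here is a back-of-the-envelope sketch with invented numbers (the article does not publish these figures): storage per decision session at a few hypothetical documentation granularities.

```python
# Illustrative cost model (our assumption, not the article's): storage cost of
# documenting an image-inspection decision session at several granularities.
LEVELS = [
    # (granularity, events per session, bytes per event)
    ("final decision only",         1,         256),
    ("every user click",          200,         128),
    ("clicks + viewed subimages", 200,      65_536),
    ("full session screen video",   1, 500_000_000),
]

for name, events, size in LEVELS:
    mb = events * size / 1e6
    print(f"{name:>28}: {mb:12.3f} MB per session")
```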

6 citations


Proceedings ArticleDOI
29 Mar 2010
TL;DR: A set of e-services is prototyped that serves as a framework for understanding content preservation, automation and computational requirements for the preservation of electronic records, and that can increase the productivity of the digital preservation community and other users of digital files.
Abstract: This paper addresses the workshop question: "Can data generated from the infancy of the digital age be ingestible by software today?" We have prototyped a set of e-services that serve as a framework for understanding content preservation, automation and computational requirements for the preservation of electronic records. The framework consists of e-services for (a) finding file format conversion software, (b) executing file format conversions using available software, and (c) evaluating information loss across conversions. While the target audience for the technology is the US National Archives, these basic e-services are of interest to any manager of electronic records and to all citizens trying to keep their files current with rapidly changing information technology. The novelty of the framework is in organizing the information about file format conversions, in providing services for finding file format conversion paths, in prototyping a general architecture for reusing existing third-party software with import/export capabilities, and in evaluating information loss due to file format conversions. The impact of these e-services is in the widely accessible conversion software registry (CSR), conversion engine (Polyglot) and comparison engine (Versus), which can increase the productivity of the digital preservation community and other users of digital files.
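
The e-service for finding conversion paths can be pictured as graph search over a registry of converters. The sketch below is a minimal stand-in for the CSR/Polyglot idea; the registry contents, tool names and formats are invented.

```python
# Sketch of the conversion-path idea behind the CSR/Polyglot services.
from collections import deque

# Hypothetical registry: which formats each converter can read and write.
REGISTRY = {
    "OpenOffice": {"doc": ["odt", "pdf", "txt"], "odt": ["pdf", "txt"]},
    "ImageTool":  {"bmp": ["png", "jpg"], "png": ["jpg"]},
}

def conversion_path(src, dst):
    """Breadth-first search for a chain of conversions from src to dst."""
    edges = {}
    for tool, io in REGISTRY.items():
        for s, outs in io.items():
            for d in outs:
                edges.setdefault(s, []).append((d, tool))
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, path = queue.popleft()
        if fmt == dst:
            return path
        for nxt, tool in edges.get(fmt, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(tool, fmt, nxt)]))
    return None

print(conversion_path("doc", "txt"))  # [('OpenOffice', 'doc', 'txt')]
```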

5 citations


Book ChapterDOI
01 Feb 2010
TL;DR: The focus is on the problems of building hazard aware spaces (HAS) to alert innocent people, similar to the problem addressed by swimming pool surveillance systems that prevent drowning, and on setting up such a system to achieve desired accuracy and operating it to achieve reliable performance with or without human intervention.
Abstract: Wireless sensing devices are frequently used in smart spaces, ubiquitous and proactive computing, and situation awareness applications (Satyanarayanan 2001), (Vildjiounaite, Malm et al.), (Weiser), (Ilyas & Mahgoub). One could list a plethora of applications suitable for the use of wireless sensor networks and other sensing instruments, for instance, health care (wellness systems for aging), environmental monitoring (pollution of air, water, and soil), atmospheric science (severe weather prediction), structural health monitoring (equipment or material fatigue detection), military surveillance (vehicle movement detection), facility monitoring (security and life-cycle of a facility), wildlife monitoring (animal migration), or intelligent vehicle design (obstacle detection) (Dishman), (Gupta & Kumar), (Mainwaring, Polastre et al.), (Wang, Estrin et al.), (Roush, Goho et al.), (East), (Román, Hess et al.), (Abowd), (Dey), (Kidd, Orr et al.). The list of on-going projects that include wireless sensor networks and other sensing instrumentation is also growing every day (see NSF, NIST and DARPA projects such as NSF NEON, LOOKING, SCCOOS, ROADNet, USArray, TeraBridge, ORION, CLEANER, NIST SHIELD or DARPA Active Networks, Connectionless Networks, DTT). All these projects have in common the fact that they represent multi-instrument and multi-sensor systems that can be characterized as smart outdoor, indoor or embedded spaces. The challenge is to build smart spaces that can intelligently sense environments, gather information, integrate information across disparate sensing systems over time, space and measurement, and finally detect and recognize events of interest to trigger event-driven actions. We have been interested in hazard awareness application scenarios (Bajcsy, Johnson et al. 2008), (Bajcsy, Kooper et al. 2006) that concern humans due to (a) natural disastrous events, (b) failures of human hazard attention or (c) intentional harmful behaviors of humans. Our focus is on the problems related to building hazard aware spaces (HAS) to alert innocent people, similar to the problem addressed by swimming pool surveillance systems that prevent human drowning (e.g. Poseidon developed by Vision IQ). While building a real-time HAS system, one has to address the issues of (1) setting up the system to achieve desired accuracy and (2) operating it to achieve reliable performance with or without human intervention. In order to set up a HAS system, one ought to find ways...

4 citations


Journal ArticleDOI
01 Jan 2010
TL;DR: The framework integrates state-of-the-art web application technologies, semantic content management capabilities and an exploratory scientific workflow system to enable rich, focused interactions with computational research products.
Abstract: As researchers work with larger and higher dimensional computational data and perform more complex chains of analysis, it becomes increasingly difficult to share and reproduce analyses and scientific results with fellow researchers and interested stakeholders. A framework for the dynamic web publication of data products and computational services has been developed to address these issues and to enable rich, focused interactions with computational research products. The framework integrates state-of-the-art web application technologies, semantic content management capabilities and an exploratory scientific workflow system. Its goal is to support the full scientific life cycle from the design and development of computational procedures, analyses and visualisations to the publication of annotated results and executable services on the web. The published data are available within web pages displaying live maps, graphs, tables and other visualisations that enable significant in situ exploration and further an...
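
As a loose illustration of the publication idea (the paper realizes it with a full web application stack, semantic content management and a workflow system, not anything this small), the sketch below exposes a toy analysis as a live HTTP endpoint returning an annotated JSON result; the endpoint, port and analysis are our assumptions.

```python
# Minimal sketch (our construction, not the published framework) of exposing a
# computational result as a live web service that re-runs a small analysis.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analysis():
    # Stand-in for a published workflow step; a real service would invoke the
    # underlying scientific workflow system.
    data = [1.0, 2.5, 3.5]
    return {"mean": sum(data) / len(data), "n": len(data),
            "annotation": "hypothetical example result"}

class ResultHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(analysis()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Visiting http://localhost:8000 returns the annotated JSON result.
    HTTPServer(("localhost", 8000), ResultHandler).serve_forever()
```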

3 citations


16 May 2010
TL;DR: In this paper, the authors present an optimization framework for automating the placement of multiple stereo cameras in an application-specific manner, eliminating ad hoc experimentation and sub-optimal camera placements by letting end applications run the authors' simulation code.
Abstract: With the advent of virtual spaces, there has been a need to integrate the physical world with virtual spaces. The integration can be achieved by real-time 3D imaging using stereo cameras followed by fusion of virtual and physical space information. Systems that enable such information fusion over several geographically distributed locations are called tele-immersive and should be easy to deploy. The optimal placement of 3D cameras becomes the key to achieving high quality 3D information about physical spaces. In this paper, we present an optimization framework for automating the placement of multiple stereo cameras in an application-specific manner. The framework eliminates ad hoc experimentation and sub-optimal camera placements for end applications by running our simulation code. The camera placement problem is formulated as an optimization problem over continuous physical space, with an objective function based on 3D information error and a set of constraints that generalize application-specific requirements. The novelty of our work lies in developing the theoretical optimization framework under spatially varying resolution requirements and in demonstrating improved camera placements with our framework in comparison with other placement techniques.
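
A toy version of the formulation (our construction; the paper's objective models stereo 3D reconstruction error and richer constraints) can be written as a bounded continuous optimization over camera coordinates with spatially varying resolution weights:

```python
# Toy sketch of placement-as-optimization; distance stands in for the paper's
# 3D information error, and weights encode varying resolution requirements.
import numpy as np
from scipy.optimize import minimize

# Points of interest in a 10m x 10m room, with spatially varying resolution
# weights (higher weight = finer resolution required there).
points = np.array([[2.0, 2.0], [5.0, 8.0], [8.0, 3.0]])
weights = np.array([1.0, 3.0, 1.0])

def objective(x):
    cams = x.reshape(-1, 2)  # two cameras, (x, y) each
    # Each point is served by its nearest camera; error grows with distance.
    d = np.linalg.norm(points[:, None, :] - cams[None, :, :], axis=2)
    return float(np.sum(weights * d.min(axis=1) ** 2))

x0 = np.array([1.0, 1.0, 9.0, 9.0])   # initial guess
bounds = [(0.0, 10.0)] * 4            # cameras must stay inside the room
res = minimize(objective, x0, bounds=bounds, method="L-BFGS-B")
print("camera positions:", res.x.reshape(-1, 2), "error:", res.fun)
```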

2 citations


Proceedings ArticleDOI
25 Oct 2010
TL;DR: This paper designs a task of moving a virtual ball from an initial center position to a basketball basket close to the top of a closed virtual cuboid space, and concludes that the most important parameter is the frame rate, followed by the presence of a human rendering rather than just an avatar representation of hands.
Abstract: Tele-immersive systems enable 3D interactive experience in a virtual space by bringing together objects born in physical and virtual reality environments that are geographically distributed. The immersive and interactive experiences are achieved by fusing real-time color plus depth video of physical scenes from multiple stereo cameras, displaying 3D reconstructions of physical and virtual objects, and tracking intersections of objects in a virtual space to facilitate interactions between objects. While tele-immersive (TI) systems have been attracting a lot of attention as a possible advancement of current 2D video-based communication systems and of virtual reality spaces, the challenges lie in the limited resources available to satisfy the many requirements of a usable TI system. One of the TI challenges remains the quality of real-time 3D reconstruction of physical scenes. Our work is motivated by the lack of understanding of how various TI system parameters affect this quality, an understanding that is critical for allocating limited resources to make TI systems useful. This paper investigates the impact of different quality parameters of TI systems on the accuracy and speed of task executions in TI environments. We designed a task of moving a virtual ball from an initial center position to a basketball basket close to the top of a closed virtual cuboid space. Successful completion of the task involves localization, orientation and motion coordination for a human in a TI environment. We report quantitative and qualitative measurements collected during task executions under simulated variations of 3D reconstruction quality parameters. Based on our experimental results with 29 subjects under 13 quality variations, we concluded that the most important parameter is the frame rate, followed by the presence of a human rendering rather than just an avatar representation of hands.
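
The paper's statistical analysis is not reproduced here, but a test of the kind that would single out frame rate as the dominant parameter might look like the following sketch, with entirely hypothetical completion times:

```python
# Illustrative analysis only (the study's real data are not public here):
# comparing task-completion times across simulated frame-rate conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical completion times (seconds) at three simulated frame rates.
times = {
    "4 fps": rng.normal(55, 8, 10),
    "10 fps": rng.normal(40, 8, 10),
    "20 fps": rng.normal(32, 8, 10),
}

f, p = stats.f_oneway(*times.values())
print(f"one-way ANOVA across frame rates: F={f:.2f}, p={p:.4f}")
```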

1 citation


Book ChapterDOI
01 Feb 2010
TL;DR: The objective of the book chapter is to describe methodologies for contemporary document processing, visual exploration, grouping and integrity verification, as well as to include computational scalability challenges and solutions.
Abstract: This book chapter describes problems related to contemporary document analyses. Contemporary documents contain multiple digital objects of different types. These digital objects have to be extracted from document containers, represented as data structures, and described by features suitable for comparing digital objects. In many archival and machine learning applications, documents are compared using multiple metrics, checked for integrity and authenticity, and grouped based on similarity. The objective of our book chapter is to describe methodologies for contemporary document processing, visual exploration, grouping and integrity verification, as well as to cover computational scalability challenges and solutions.
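
Two of the chapter's themes, integrity verification and similarity-based grouping, can be miniaturized as follows (our own example; the hash choice, documents and features are assumptions):

```python
# Sketch of integrity verification via hashing and grouping by feature
# similarity; not the chapter's implementation.
import hashlib
import numpy as np

def digest(payload: bytes) -> str:
    """Integrity fingerprint for a digital object extracted from a document."""
    return hashlib.sha256(payload).hexdigest()

docs = {"a.doc": b"quarterly report", "b.doc": b"quarterly report",
        "c.doc": b"meeting notes"}
prints = {name: digest(data) for name, data in docs.items()}
print("a.doc == b.doc ?", prints["a.doc"] == prints["b.doc"])  # True: identical

# Hypothetical per-document feature vectors (e.g., word or image statistics).
feats = np.array([[0.9, 0.1], [0.85, 0.15], [0.1, 0.95]])
norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
sim = norm @ norm.T  # cosine similarity matrix used for grouping
print(np.round(sim, 2))
```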

1 citation


Proceedings ArticleDOI
25 Oct 2010
TL;DR: Three communication media (3D tele-immersive video, 2D Skype video and face-to-face) are studied in a collaborative environment of a remote product development scenario, and scope for improvement in each is proposed.
Abstract: Institutions worldwide, whether economic, social or political, are relying increasingly on communication technology to perform a variety of functions: holding remote business meetings, discussing design issues in product development, enabling consumers to remain connected with their families and children, and so on. In this environment, where geographic and temporal boundaries are shrinking rapidly, electronic communication media are playing an important role. With recent advances in 3D sensing, computing on new hardware platforms, high bandwidth communication connectivity and 3D display technology, the vision of 3D video-teleconferencing and of tele-immersive experience has become very attractive. These advances lead to tele-immersive communication systems that enable 3D interactive experience in a virtual space consisting of objects born in physical and virtual environments. This experience is achieved by fusing real-time color plus depth video of physical scenes from multiple stereo cameras located at different geographic sites, displaying 3D reconstructions of physical and virtual objects, and performing computations to facilitate interactions between objects. While tele-immersive (TI) systems have been attracting a lot of attention, the advantages of the enabled interactions and the delivered 3D content for viewing, as opposed to current 2D high definition video, have not been evaluated. In this paper, we study the effectiveness of three different types of communication media on remote collaboration in order to document the pros and cons of new technologies such as TI. The three communication media are 3D tele-immersive video, 2D Skype video and face-to-face interaction, used in a collaborative environment of a remote product development scenario. Through a study of 90 subjects, we discuss the strengths and weaknesses of the different media and propose scope for improvement in each of them.

1 citation


Proceedings ArticleDOI
TL;DR: This work discovered what salient characteristics made an artist different from others, enabled statistical learning about individual and collective authorship, and applied the framework implementation to the face illustrations in Froissart's Chronicles drawn by two anonymous authors.
Abstract: We addressed the problem of finding salient characteristics of artists from two-dimensional (2D) images of historical artifacts. Given a set of 2D images of historical artifacts by known authors, we discovered what salient characteristics made an artist different from others, and then enabled statistical learning about individual and collective authorship. The objective of this effort was to learn what is unique about the style of each artist, and to provide quantitative results about salient characteristics. We accomplished this by exploring a large search space of low-level image descriptors. The motivation behind our framework was to assist humanists in discovering salient characteristics through automated exploration of the key image descriptors. By employing our framework we not only saved the time of art historians but also provided quantitative measures for incorporating their personal judgments and bridging the semantic gap in image understanding. We applied the framework implementation to the face illustrations in Froissart's Chronicles drawn by two anonymous authors. We report the salient characteristics to be the triple (HSV, histogram, k-nearest neighbor) among the 55 triples considered with 5-fold cross-validation. These low-level characteristics were confirmed by the experts to correspond semantically to the face skin colors.
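
One cell of the triple search, a (colour space, descriptor, classifier) combination scored with 5-fold validation, might be sketched as below; the images and labels are synthetic stand-ins, and RGB histograms replace the HSV conversion a real run would use:

```python
# Sketch of scoring one (colour space, descriptor, classifier) triple with
# 5-fold validation; data are synthetic stand-ins for the Froissart faces.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def hist_descriptor(img, bins=8):
    """Per-channel histogram, the kind of low-level descriptor searched over."""
    return np.concatenate([np.histogram(img[..., c], bins=bins,
                                        range=(0, 1))[0] for c in range(3)])

# 40 tiny synthetic 'illustrations'; artist 1 drawn slightly darker than artist 0.
imgs = rng.random((40, 16, 16, 3))
labels = np.repeat([0, 1], 20)
imgs[labels == 1] *= 0.7

X = np.array([hist_descriptor(im) for im in imgs])
scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=5)
print("5-fold accuracy for (RGB, histogram, kNN):", scores.mean())
```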

Proceedings ArticleDOI
TL;DR: The motivation for synchronizing all signals is to support studies of human interaction in a decision support environment, studies that have so far been limited by the difficulty of automatically processing observations made during decision-making sessions.
Abstract: This paper addresses the problem of robust and automated synchronization of multiple audio and video signals. The input signals are a set of independent multimedia recordings coming from several camcorders and microphones. While the camcorders are static, the microphones are mobile, as they are attached to people. The motivation for synchronizing all signals is to support studies of human interaction in decision support environments, studies that have so far been limited by the difficulty of automatically processing observations made during decision-making sessions. The data sets for this work were acquired during training exercises of response teams, rescue workers, and fire fighters at multiple locations. The developed synchronization methodology for a set of independent multimedia recordings is based on introducing aural and visual landmarks with a bell and room light switches. Our approach to synchronization is based on detecting the landmarks in the audio and video signals per camcorder and per microphone, and then fusing the results to increase the robustness and accuracy of the synchronization. We report synchronization results that demonstrate the accuracy of synchronization based on video and audio.
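
A simplified version of landmark-based audio alignment (our sketch; the paper fuses audio and video landmark detections rather than relying on cross-correlation alone) estimates the offset between two recordings that both capture the bell:

```python
# Sketch of landmark-based alignment: estimate the offset between two
# recordings by cross-correlating around a shared impulsive landmark (a bell).
import numpy as np

rng = np.random.default_rng(4)
fs = 1000                      # toy sample rate (Hz)
bell = np.hanning(50)          # stand-in for the bell's waveform

a = rng.normal(0, 0.05, 5000)
b = rng.normal(0, 0.05, 5000)
a[1200:1250] += bell           # bell heard at t = 1.20 s on recorder A
b[1500:1550] += bell           # same bell at t = 1.50 s on recorder B

corr = np.correlate(b, a, mode="full")
offset = (np.argmax(corr) - (len(a) - 1)) / fs
print(f"estimated offset of B relative to A: {offset:.3f} s")  # ~0.300 s
```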