
Showing papers by the MITRE Corporation published in 2006


Proceedings ArticleDOI
17 Jul 2006
TL;DR: This paper used temporal reasoning as an over-sampling method to dramatically expand the amount of training data, resulting in predictive accuracy on link labeling as high as 93% using a Maximum Entropy classifier on human annotated data.
Abstract: This paper investigates a machine learning approach for temporally ordering and anchoring events in natural language texts. To address data sparseness, we used temporal reasoning as an over-sampling method to dramatically expand the amount of training data, resulting in predictive accuracy on link labeling as high as 93% using a Maximum Entropy classifier on human annotated data. This method compared favorably against a series of increasingly sophisticated baselines involving expansion of rules derived from human intuitions.
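The over-sampling idea, expanding the training set by temporal-closure inference, can be sketched as follows. This is only an illustration of the principle: the paper applies a richer rule set over annotated temporal links, and the relation vocabulary and event names below are hypothetical.

```python
def closure_expand(links):
    """Expand a set of labeled temporal links by transitivity of BEFORE:
    (a BEFORE b) and (b BEFORE c) imply (a BEFORE c).
    Illustrative only -- the paper's closure uses a richer rule set."""
    before = {(a, b) for a, b, rel in links if rel == "BEFORE"}
    changed = True
    while changed:
        changed = False
        for a, b in list(before):
            for c, d in list(before):
                if b == c and (a, d) not in before:
                    before.add((a, d))  # inferred link becomes extra training data
                    changed = True
    return [(a, b, "BEFORE") for a, b in sorted(before)]
```

Each inferred link is a new labeled training instance that was not annotated by hand, which is how closure acts as an over-sampling method.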

293 citations


Proceedings ArticleDOI
05 Dec 2006
TL;DR: This work analytically establishes the optimality of LLREF and shows that the algorithm has bounded overhead, a bound that is independent of time quanta (unlike Pfair).
Abstract: We present an optimal real-time scheduling algorithm for multiprocessors -- one that satisfies all task deadlines when the total utilization demand does not exceed the utilization capacity of the processors. The algorithm, called LLREF, is designed based on a novel abstraction for reasoning about task execution behavior on multiprocessors: the Time and Local Execution Time Domain Plane (or T-L plane). LLREF is based on the fluid scheduling model and the fairness notion, and uses the T-L plane to describe fluid schedules without using time quanta, unlike the optimal Pfair algorithm (which uses time quanta). We show that scheduling for multiprocessors can be viewed as repeatedly occurring T-L planes, and that feasibly scheduling on a single T-L plane results in an optimal schedule. We analytically establish the optimality of LLREF. Further, we establish that the algorithm has bounded overhead, and that this bound is independent of time quanta (unlike Pfair). Our simulation results validate our analysis of the algorithm overhead.

262 citations


Proceedings ArticleDOI
25 Apr 2006
TL;DR: The history, motivation, and construction of MBOC signals are provided, various performance characteristics are shown, and their status in GALILEO and GPS signal design is summarized.
Abstract: This paper describes the Multiplexed Binary Offset Carrier (MBOC) spreading modulation that has been recommended by the GPS-GALILEO Working Group on Interoperability and Compatibility. The MBOC(6,1,1/11) power spectral density is a mixture of the BOC(1,1) spectrum and the BOC(6,1) spectrum that would be used by GALILEO for its Open Service (OS) signal at the L1 frequency, and also by GPS for its modernized L1 Civil (L1C) signal. A number of different time waveforms can produce the MBOC(6,1,1/11) spectrum, allowing flexibility in implementation, although interoperable waveforms remain an objective for GALILEO and GPS. The time-multiplexed BOC (TMBOC) implementation interlaces BOC(6,1) and BOC(1,1) spreading symbols in a regular pattern, whereas composite BOC (CBOC) uses multilevel spreading symbols formed from the weighted sum of BOC(1,1) and BOC(6,1) spreading symbols, interplexed to form a constant-modulus composite signal. This paper provides information on the history, motivation, and construction of MBOC signals. It then shows various performance characteristics and summarizes their status in GALILEO and GPS signal design.
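The 10/11-to-1/11 spectral mixture can be sketched numerically. The BOC PSD below is the standard sine-phased formula for an even subcarrier-to-chip-rate ratio (Betz's formula); the normalization and the evaluation band are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

F0 = 1.023e6  # GNSS reference frequency (Hz); BOC(m, n) means fs = m*F0, fc = n*F0

def boc_psd(f, m, n):
    """Normalized baseband PSD of a sine-phased BOC(m, n) signal
    (standard formula for an even subcarrier-to-chip-rate ratio 2*fs/fc)."""
    fs, fc = m * F0, n * F0
    return fc * (np.tan(np.pi * f / (2 * fs))
                 * np.sin(np.pi * f / fc) / (np.pi * f)) ** 2

def mboc_psd(f):
    """MBOC(6,1,1/11): 10/11 of the power in BOC(1,1), 1/11 in BOC(6,1)."""
    return (10.0 / 11.0) * boc_psd(f, 1, 1) + (1.0 / 11.0) * boc_psd(f, 6, 1)
```

Integrating the mixture over a wide two-sided band recovers close to unit power, and the 1/11 BOC(6,1) term adds the characteristic high-frequency content near ±6 x 1.023 MHz that distinguishes MBOC from plain BOC(1,1).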

223 citations


Proceedings ArticleDOI
30 Oct 2006
TL;DR: A novel quantitative metric for the security of computer networks that is based on an analysis of attack graphs is presented, which measures the security strength of a network in terms of the strength of the weakest adversary who can successfully penetrate the network.
Abstract: A security metric measures or assesses the extent to which a system meets its security objectives. Since meaningful quantitative security metrics are largely unavailable, the security community primarily uses qualitative metrics for security. In this paper, we present a novel quantitative metric for the security of computer networks that is based on an analysis of attack graphs. The metric measures the security strength of a network in terms of the strength of the weakest adversary who can successfully penetrate the network. We present an algorithm that computes the minimal sets of initial attributes the weakest adversary must possess in order to successfully compromise a network, given a specific network configuration, a set of known exploits, a specific goal state, and an attacker class (represented by the set of all initial attacker attributes). We also demonstrate, by example, that diverse network configurations are not always beneficial for network security in terms of penetrability.
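The minimal-attribute-set computation can be sketched as a brute-force search over attacker attributes. The paper's algorithm operates on attack graphs rather than exhaustive subset enumeration, and the exploit and attribute names below are hypothetical.

```python
from itertools import combinations

def reachable(initial, exploits):
    """Forward closure: repeatedly apply any exploit whose preconditions hold.
    Each exploit is a (preconditions, postconditions) pair of attribute sets."""
    attrs = set(initial)
    changed = True
    while changed:
        changed = False
        for pre, post in exploits:
            if pre <= attrs and not post <= attrs:
                attrs |= post
                changed = True
    return attrs

def weakest_adversaries(attacker_class, exploits, goal):
    """Minimal subsets of the attacker-class attributes that still reach the goal.
    Brute force by increasing subset size; real attack-graph tools are smarter."""
    minimal = []
    for k in range(len(attacker_class) + 1):
        for subset in combinations(sorted(attacker_class), k):
            s = set(subset)
            if any(m <= s for m in minimal):
                continue  # a smaller sufficient set already covers this one
            if goal in reachable(s, exploits):
                minimal.append(s)
    return minimal
```

The "weakest adversary" metric then corresponds to the strength of the cheapest of these minimal sets under whatever attribute ordering the analyst chooses.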

160 citations


Book ChapterDOI
04 Mar 2006
TL;DR: In this paper, Dolev-Yao style symbolic analysis is used to assert the security of cryptographic protocols within the universally composable (UC) security framework; for mutual authentication, the symbolic criterion is similar to the traditional Dolev-Yao criterion.
Abstract: Symbolic analysis of cryptographic protocols is dramatically simpler than full-fledged cryptographic analysis. In particular, it is simple enough to be automated. However, symbolic analysis does not, by itself, provide any cryptographic soundness guarantees. Following recent work on cryptographically sound symbolic analysis, we demonstrate how Dolev-Yao style symbolic analysis can be used to assert the security of cryptographic protocols within the universally composable (UC) security framework. Consequently, our methods enable security analysis that is completely symbolic, and at the same time cryptographically sound with strong composability properties. More specifically, we concentrate on mutual authentication and key-exchange protocols. We restrict attention to protocols that use public-key encryption as their only cryptographic primitive and have a specific restricted format. We define a mapping from such protocols to Dolev-Yao style symbolic protocols, and show that the symbolic protocol satisfies a certain symbolic criterion if and only if the corresponding cryptographic protocol is UC-secure. For mutual authentication, our symbolic criterion is similar to the traditional Dolev-Yao criterion. For key exchange, we demonstrate that the traditional Dolev-Yao style symbolic criterion is insufficient, and formulate an adequate symbolic criterion. Finally, to demonstrate the viability of our treatment, we use an existing tool to automatically verify whether some prominent key-exchange protocols are UC-secure.

131 citations


Book ChapterDOI
Joseph Mitola
01 Jan 2006
TL;DR: This chapter reviews the substantial changes in use cases that drive cognitive wireless architecture and develops five complementary perspectives of cognitive radio architecture, called CRA-I through CRA-V, each building on the previous in capability.
Abstract: This chapter develops five complementary perspectives of cognitive radio architecture (CRA), called CRA-I through CRA-V, each building on the previous in capability. Architecture is driven top-down by market needs and bottom-up by available, affordable technologies. Taking the top-down perspective requires some attention to the use cases that the functions are intended to realize. This chapter therefore reviews the substantial changes in use cases that drive cognitive wireless architecture. Often, technical architectures of this kind accelerate the state of practice by catalyzing work across the industry on plug-and-play, teaming, and collaboration. The thought is that to propel wireless technology from limited spectrum awareness toward valuable user awareness, an architecture is needed. The CRA articulates the functions, components, and design rules of next-generation stand-alone and embedded wireless devices and networks.

103 citations


Proceedings ArticleDOI
02 Mar 2006
TL;DR: A fine-grained decomposition of situation awareness is presented, with which UAV interaction designers can specify SA needs and analysts can evaluate a UAV interface's SA support with greater precision and specificity than can be attained using other SA definitions.
Abstract: This paper presents a fine-grained decomposition of situation awareness (SA) as it pertains to the use of unmanned aerial vehicles (UAVs), and uses this decomposition to understand the types of SA attained by operators of the Desert Hawk UAV. Since UAVs are airborne robots, we adapt a definition previously developed for human-robot awareness after learning about the SA needs of operators through observations and interviews. We describe the applicability of UAV-related SA for people in three roles: UAV operators, air traffic controllers, and pilots of manned aircraft in the vicinity of UAVs. Using our decomposition, UAV interaction designers can specify SA needs and analysts can evaluate a UAV interface's SA support with greater precision and specificity than can be attained using other SA definitions.

86 citations


Journal ArticleDOI
TL;DR: The simulation studies and implementation measurements reveal that GUS performs close to, if not better than, the existing algorithms in the cases to which they apply; several properties of GUS are also established analytically.
Abstract: This paper presents a uni-processor real-time scheduling algorithm called the generic utility scheduling algorithm (which we refer to simply as GUS). GUS solves a previously open real-time scheduling problem: scheduling application activities that have time constraints specified using arbitrarily shaped time/utility functions and that have mutual-exclusion resource constraints. A time/utility function is a time-constraint specification that describes an activity's utility to the system as a function of that activity's completion time. Given such time and resource constraints, we consider the scheduling objective of maximizing the total utility that is accrued by the completion of all activities. Since this problem is NP-hard, GUS heuristically computes schedules with a polynomial-time cost of O(n^3) at each scheduling event, where n is the number of activities in the ready queue. We evaluate the performance of GUS through simulation and by an actual implementation on a real-time POSIX operating system. Our simulation studies and implementation measurements reveal that GUS performs close to, if not better than, the existing algorithms in the cases to which they apply. Furthermore, we analytically establish several properties of GUS.
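The utility-accrual objective can be illustrated with a greedy sketch in the spirit of GUS: at each step, run the pending activity with the highest potential utility density. This is not the paper's algorithm -- GUS additionally handles mutual-exclusion resource dependencies -- and the activity data below is made up.

```python
def greedy_utility_schedule(now, activities):
    """Greedy utility-accrual sketch: repeatedly pick the pending activity with
    the highest 'potential utility density' -- its utility at projected completion
    divided by its remaining execution cost. activities maps a name to
    (cost, tuf), where tuf is a time/utility function of completion time."""
    pending = dict(activities)
    order, t, total = [], now, 0.0
    while pending:
        best = max(pending,
                   key=lambda a: pending[a][1](t + pending[a][0]) / pending[a][0])
        cost, tuf = pending.pop(best)
        t += cost                # activity runs to completion
        total += tuf(t)          # utility accrued at its completion time
        order.append(best)
    return order, total
```

With step-shaped time/utility functions this reduces to deadline-like behavior; arbitrarily shaped functions are what make the general problem NP-hard.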

82 citations


Book ChapterDOI
01 Jan 2006
TL;DR: In this paper, the authors make the case that a traditional systems engineering (TSE) approach does not scale to the AOC and, consequently, that TSE does not scale to the "enterprise".
Abstract: Using the current instantiation of the Air and Space Operations Center (AOC), and the desired evolution of it, the AOC is shown to be best thought of as a complex system. Complex systems are alive and constantly changing. They respond to and interact with their environments, each causing impact on (and inspiring change in) the other. We make the case that a traditional systems engineering (TSE) approach does not scale to the AOC; consequently, we do not believe TSE scales to the "enterprise".

76 citations


Journal ArticleDOI
John A. Stine
TL;DR: It is concluded that SCR provides the best support for smart antenna exploitation with the added benefits that there is no requirement for all nodes to be equipped with the same antenna technologies and that smart antennas can be combined with channelization technologies to provide even higher capacities.
Abstract: Smart antennas can increase the capacity of mesh networks and reduce the susceptibility of individual nodes to interception and jamming, but creating the conditions that allow them to be effective is difficult. In this article we provide a broad review of antenna technologies and identify their capabilities and limitations. We review mechanisms used by medium access control schemes to arbitrate access. These reviews let us identify a small set of conditions that are necessary for smart antenna exploitation. We then review the most common MAC approaches, carrier sense multiple access, slotted ALOHA, and time-division multiple access, and evaluate their suitability for exploiting smart antennas. We demonstrate that they are not capable of creating the complete set of antenna exploitation conditions while retaining a contention nature. We follow with a discussion of the synchronous collision resolution (SCR) MAC scheme and describe how it creates all the exploitation conditions. We conclude that SCR provides the best support for smart antenna exploitation, with the added benefits that there is no requirement for all nodes to be equipped with the same antenna technologies and that smart antennas can be combined with channelization technologies to provide even higher capacities.

74 citations


Journal ArticleDOI
TL;DR: No-reference image quality measures, which quantify quality inherent to a single image, are assessed; two families of quality measures were most effective: one based on Natural Scene Statistics and one originally developed to measure distortion caused by image compression.
Abstract: Neuroimagery must be visually checked for unacceptable levels of distortion prior to processing. However, inspection is time-consuming, unreliable for detecting subtle distortions and often subjective. With the increasing volume of neuroimagery, objective measures of quality are needed in order to automate screening. To address this need, we have assessed the effectiveness of no-reference image quality measures, which quantify quality inherent to a single image. A data set of 1001 magnetic resonance images (MRIs) recorded from 143 subjects was used for this evaluation. The MRI images were artificially distorted with two levels of either additive Gaussian noise or intensity nonuniformity created from a linear model. A total of 239 different quality measures were defined from seven overall families and used to discriminate images for the type and level of distortion. Analysis of Variance identified two families of quality measure that were most effective: one based on Natural Scene Statistics and one originally developed to measure distortion caused by image compression. Measures from both families reliably discriminated among undistorted images, noisy images, and images distorted by intensity nonuniformity. The best quality measures were sensitive only to the distortion category and were not significantly affected by other factors. The results are encouraging enough that several quality measures are being incorporated in a real world MRI test bed.
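To give a flavor of what a no-reference measure looks like, here is one simple sharpness/noise proxy computed from a single image. It is illustrative only and is not claimed to be among the 239 measures the paper evaluated.

```python
import numpy as np

def laplacian_variance(img):
    """A simple no-reference quality proxy: variance of a discrete Laplacian
    response. Additive noise raises it sharply, so it can flag noisy scans;
    real screening pipelines combine many such measures."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())
```

A measure like this needs no reference image, which is exactly the property that makes automated screening of large archives feasible.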

Proceedings Article
01 Jan 2006
TL;DR: This paper offers an initial exploratory data analysis of candidate features for blogger age prediction, drawn from the text and metadata of blog entries.
Abstract: Accurate prediction of blogger age from evidence in the text and metadata of blog entries would be valuable for marketing, privacy, and law enforcement concerns. This paper offers an initial exploratory data analysis of candidate features for blogger age prediction.

Patent
Amlan Kundu, Linda Van Guilder, Tom Hines, Ben Huyck, Jon Phillips
29 Nov 2006
TL;DR: In this article, an over-segmentation-relabeling algorithm places segments considered to be diacritics or small strokes so that they immediately precede or follow a segment of the associated main character body.
Abstract: A cursive character handwriting recognition system includes image processing means for processing an image of a handwritten word of one or more characters and classification means for determining an optimal string of one or more characters as composing the imaged word. The processing means segments the characters such that each character is made up of one or more segments and determines a sequence of the segments using an over-segmentation-relabeling algorithm. The system also includes feature extraction means for deriving a feature vector to represent feature information of one segment or a combination of several consecutive segments. The over-segmentation-relabeling algorithm places certain segments considered as diacritics or small segments so as to immediately precede or follow a segment of the associated main character body. Additionally, the system includes classification means that processes each string of segments and outputs a number of optimal strings that can be matched against a given lexicon.

Proceedings ArticleDOI
02 Mar 2006
TL;DR: This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general and HRI in particular.
Abstract: There is growing interest in mining the world of video games to find inspiration for human-robot interaction (HRI) design. This paper segments video game interaction into domain-independent components which together form a framework that can be used to characterize real-time interactive multimedia applications in general and HRI in particular. We provide examples of using the components in both the video game and the Unmanned Aerial Vehicle (UAV) domains (treating UAVs as airborne robots). Beyond characterization, the framework can be used to inspire new HRI designs and compare different designs; we provide an example comparison of two UAV ground station applications.

Proceedings ArticleDOI
11 Oct 2006
TL;DR: An approach to integrating the distributable threads programming model with the Real-Time Specification for Java is presented and the ramifications for composing distributed, real-time systems in Java are discussed.
Abstract: The Distributed Real-Time Specification for Java (DRTSJ) is under development within Sun's Java Community Process (JCP) as Java Specification Request 50 (JSR-50), led by the MITRE Corporation. We present the engineering considerations and design decisions settled by the Expert Group, the current and proposed form of the Reference Implementation, and a summary of open issues. In particular, we present an approach to integrating the distributable threads programming model with the Real-Time Specification for Java and discuss the ramifications for composing distributed, real-time systems in Java. The Expert Group plans to release an initial Early Draft Review (EDR) for previewing the distributable threads abstraction in the coming months, which we describe in detail. Along with that EDR, we will make available a demonstration application from Virginia Tech, and a DRTSJ-compatible RTSJ VM from Apogee.

Proceedings ArticleDOI
J.J. Rushanan
09 Jul 2006
TL;DR: A family of binary sequences of prime length, called Weil sequences, is described; the sequences are derived from the single quadratic-residue-based Legendre sequence using a "shift-and-add" construction.
Abstract: We describe a family of binary sequences, called Weil sequences, which have prime length. The sequences are derived from the single quadratic-residue-based Legendre sequence using a "shift-and-add" construction. The Weil sequences have correlation sidelobes bounded by 2√p + 5, where p is the length. Thus they are asymptotically within a factor of two of the Welch bound. The sequences are optimally balanced when p is congruent to 3 modulo 4, and well-balanced otherwise. The family size is (p-1)/2.
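The shift-and-add construction can be sketched directly: XOR the binary Legendre sequence with a cyclic shift of itself, one sequence per shift. The value assigned to the Legendre bit at t = 0 is a convention that varies across papers; the choice below is an assumption of this sketch.

```python
def legendre_sequence(p):
    """Binary Legendre sequence of odd prime length p: bit t is 0 if t is a
    quadratic residue mod p, 1 otherwise. The bit at t = 0 is a convention;
    we use 1 here."""
    residues = {(x * x) % p for x in range(1, p)}
    return [1] + [0 if t in residues else 1 for t in range(1, p)]

def weil_family(p):
    """Weil sequences via shift-and-add: the Legendre sequence XORed with each
    of its cyclic shifts k = 1 .. (p-1)/2, giving a family of size (p-1)/2."""
    l = legendre_sequence(p)
    return [[l[t] ^ l[(t + k) % p] for t in range(p)]
            for k in range(1, (p - 1) // 2 + 1)]
```

Mapping bits to ±1 and computing cyclic correlations would let one check the 2√p + 5 sidelobe bound numerically.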

Journal ArticleDOI
10 Jul 2006
TL;DR: This work proposes a new, less powerful nondeterminism-resolution mechanism for PIOAs, consisting of tasks and local schedulers, and illustrates the potential of the task-PIOA framework by outlining its use in verifying an oblivious transfer protocol.
Abstract: Modeling frameworks such as probabilistic I/O automata (PIOA) and Markov decision processes permit both probabilistic and nondeterministic choices. In order to use such frameworks to express claims about probabilities of events, one needs mechanisms for resolving the nondeterministic choices. For PIOAs, nondeterministic choices have traditionally been resolved by schedulers that have perfect information about the past execution. However, such schedulers are too powerful for certain settings, such as cryptographic protocol analysis, where information must sometimes be hidden. Here, we propose a new, less powerful nondeterminism-resolution mechanism for PIOAs, consisting of tasks and local schedulers. Tasks are equivalence classes of system actions that are scheduled by oblivious, global task sequences. Local schedulers resolve nondeterminism within system components, based on local information only. The resulting task-PIOA framework yields simple notions of external behavior and implementation, and supports simple compositionality results. We also define a new kind of simulation relation, and show it to be sound for proving implementation. We illustrate the potential of the task-PIOA framework by outlining its use in verifying an oblivious transfer protocol.

Proceedings ArticleDOI
04 Jun 2006
TL;DR: A solution to the problem of matching personal names in English to the same names represented in Arabic script is presented by augmenting the classic Levenshtein edit-distance algorithm with character equivalency classes.
Abstract: This paper presents a solution to the problem of matching personal names in English to the same names represented in Arabic script. Standard string comparison measures perform poorly on this task due to varying transliteration conventions in both languages and the fact that Arabic script does not usually represent short vowels. Significant improvement is achieved by augmenting the classic Levenshtein edit-distance algorithm with character equivalency classes.
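The augmented edit distance can be sketched as follows: the classic Levenshtein dynamic program, with substitution made free for characters in the same equivalency class. The paper's actual classes for English/Arabic transliteration are not reproduced here, so the Latin-letter classes in the example are purely illustrative.

```python
def class_levenshtein(a, b, equiv_classes):
    """Levenshtein distance where substituting characters drawn from the same
    equivalency class is free. equiv_classes: iterable of sets of characters."""
    def sub_cost(x, y):
        if x == y:
            return 0
        return 0 if any(x in c and y in c for c in equiv_classes) else 1

    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return d[m][n]
```

With classes like {o, u} and {e, a}, common transliteration variants of the same name collapse to a much smaller distance than plain edit distance would assign.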

Proceedings ArticleDOI
25 Sep 2006
TL;DR: A novel approach to predicting sector capacity for airspace congestion management is presented: a set of primary flow patterns is identified for each sector of interest through cluster analysis, and the sector capacity for each pattern is established based on observed system performance transition behavior.
Abstract: En route sector congestion exists when the air traffic demand on an en route sector exceeds the sector capacity. A metric of sector capacity is needed that (1) is a good approximation of the amount of traffic that can be effectively handled in the sector, (2) can be predicted at look-ahead times of 30 minutes to several hours, and (3) can include the impact of convective weather on available capacity. This paper presents a novel approach to predicting sector capacity for airspace congestion management. First, a set of primary flow patterns for each sector of interest is identified through cluster analysis. Second, the sector capacity for each pattern is established based on observed system performance transition behavior. Finally, future sector capacity for a given prediction look-ahead time can be predicted through pattern recognition. Quantifying sector capacity as a function of traffic flow pattern also provides a basis for capturing weather impact on sector capacity.

Journal ArticleDOI
TL;DR: This work presents efficient algorithms to maintain k-nearest neighbor and spatial join queries in this domain as time advances and updates occur, and experimentally compares these new algorithms with more straightforward adaptations of previous work to support updates.
Abstract: Cars, aircraft, mobile cell phones, ships, tanks, and mobile robots all have the common property that they are moving objects. A kinematic representation can be used to describe the location of these objects as a function of time. For example, a moving point can be represented by the function p(t) = x0 + (t - t0)v, where x0 is the start location, t0 is the start time, and v is the velocity vector. Instead of storing the location of the object at a given time in a database, the coefficients of the function are stored. When an object's behavior changes enough so that the function describing its location is no longer accurate, the function coefficients for the object are updated. Because the location of each object is represented as a function of time, spatial query results can change even when no transactions update the database. We present efficient algorithms to maintain k-nearest neighbor and spatial join queries in this domain as time advances and updates occur. We assume no previous knowledge of what the updates will be before they occur. We experimentally compare these new algorithms with more straightforward adaptations of previous work to support updates. Experiments are conducted using synthetic, uniformly distributed data and real aircraft flight data. The primary metric of comparison is the number of I/O disk accesses needed to maintain the query results and the supporting data structures.
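Evaluating the kinematic representation and a query over it can be sketched as follows. Note the paper's contribution is maintaining such query results incrementally with few disk I/Os; this sketch simply recomputes the answer from the stored coefficients at a given time.

```python
import heapq

def position(obj, t):
    """Evaluate the stored kinematic function p(t) = x0 + (t - t0)*v.
    obj is a (x0, t0, v) triple of coefficients, with x0 and v as tuples."""
    x0, t0, v = obj
    return tuple(x + (t - t0) * vi for x, vi in zip(x0, v))

def knn_at(query, objects, t, k):
    """k-nearest neighbors of a moving query object at time t, recomputed
    directly; the paper's algorithms maintain this result incrementally."""
    def d2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    qp = position(query, t)
    return heapq.nsmallest(k, objects, key=lambda o: d2(position(o, t), qp))
```

Because positions are functions of time, the same stored database yields different nearest neighbors at different query times, with no update transaction in between.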

Journal ArticleDOI
01 Oct 2006
TL;DR: In this paper, similarities and differences in the collision avoidance function, and the necessity of developing various models of environmental and system components in the collision avoidance functional chain, are discussed.
Abstract: For Unmanned Aircraft to be routinely used in civil airspace, an effective collision avoidance function is one area deemed essential for safe operation. Like manned aircraft, avoiding collisions with transponder-equipped, or "cooperative" traffic is among the primary hazards. This paper discusses similarities and differences in the collision avoidance function and the necessity of developing various models of environmental and system components in the collision avoidance functional chain. Potential sensitivities and shortcomings of the TCAS collision avoidance system for unmanned aircraft are discussed. The analysis method of fast-time simulation can develop a rich sample of collision encounter events from the numerous statistical distributions. This provides an established means to demonstrate system compliance with safety targets, when they are established.

Journal ArticleDOI
TL;DR: The proposed multiuser bit-loading algorithm is shown to be optimal when the interference among users is nonexistent or strong, and it finds a near-optimal solution for very-high-speed digital subscriber line systems.
Abstract: A multiuser bit-loading problem is investigated in multicarrier communication systems. Assuming knowledge of all the channel gains, we propose a multiuser bit-loading algorithm that attempts to minimize the total power to transmit a target rate-sum of all users. It is shown that this algorithm is optimal when the interference among users is nonexistent or strong. The simulation results show that the proposed algorithm finds a near-optimal solution for very-high-speed digital subscriber line systems.
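The single-user analogue of this problem has a classic greedy (Hughes-Hartogs-style) solution, sketched below. The paper's multiuser algorithm must additionally account for crosstalk among users; the power model and gap factor here are assumptions of the sketch.

```python
def greedy_bitload(gains, target_bits, gamma=1.0):
    """Greedy bit loading on one user's subchannels: repeatedly add one bit
    where the incremental power is smallest. With power gamma*(2**b - 1)/g to
    carry b bits on a subchannel of gain g, the cost of the next bit on a
    subchannel currently carrying b bits is gamma * 2**b / g."""
    bits = [0] * len(gains)

    def next_bit_power(i):
        return gamma * (2 ** bits[i]) / gains[i]

    for _ in range(target_bits):
        i = min(range(len(gains)), key=next_bit_power)
        bits[i] += 1
    return bits
```

The greedy rule is optimal for this separable single-user cost; interference couples the users' costs together, which is what makes the multiuser version hard.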

Proceedings ArticleDOI
23 Jul 2006
TL;DR: The authors argue that the production of a consensual specification on multilingual lexicons can be a useful aid for the various NLP actors.
Abstract: Optimizing the production, maintenance, and extension of lexical resources is one of the crucial aspects impacting Natural Language Processing (NLP). A second aspect involves optimizing the process leading to their integration in applications. In this respect, we believe that the production of a consensual specification on multilingual lexicons can be a useful aid for the various NLP actors. Within ISO, one purpose of LMF (ISO-24613) is to define a standard for lexicons that covers multilingual data.

Proceedings ArticleDOI
23 Oct 2006
TL;DR: Recent changes to the JTRS program are explained, along with its new approach to delivering wireless networking capabilities to the warfighter.
Abstract: The Joint Tactical Radio System (JTRS) is one of the Department of Defense's (DoD) core transformational programs. The Joint Program Executive Office (JPEO) JTRS manages the acquisition of this critical new capability. The mission of the JPEO JTRS is to develop and produce a family of interoperable, affordable software defined radios at moderate risk which provide secure, wireless networking communications capabilities for Joint forces. This paper will explain recent changes to the JTRS program and its new approach to delivering wireless networking capabilities to the warfighter.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This work proposes a transceiver architecture with rate and modulation agility that is built with COTS components, supports data-rate adjustability, and can switch modulation formats between differential phase-shift keying (DPSK), binary pulse-position modulation (BPPM), and on-off keying (OOK).
Abstract: Free-space optical (FSO) communication links are susceptible to a tremendous amount of variability and offer a real challenge for efficient, robust system design. Whether changes in link margin are predictable (e.g., weather conditions or mission profiles) or truly random (e.g., atmospheric-turbulence-induced scintillation or boundary-layer-induced tracking errors), FSO communication systems will experience a large dynamic range of performance through most mission scenarios. Recent system designs within commercial, academic, and military organizations have focused on leveraging COTS technology from the fiber optic telecommunications industry. These systems are typically based on fixed designs that use set data rates and modulation formats. Utilizing such modalities in a mobile free-space environment can lead to systems that are either overly conservative or have a high rate of failure. To maximize overall system efficiency, we propose a transceiver architecture with rate and modulation agility. The presented transceiver is built with COTS components, supports data rate adjustability, and can switch modulation formats between Differential Phase-Shift Keying (DPSK), Binary Pulse-Position Modulation (BPPM), and On-Off Keying (OOK). A prototype system illustrating adaptive operation is presented and experimental results are shown.

Proceedings ArticleDOI
John A. Stine
23 Oct 2006
TL;DR: This work argues that the MANET architecture model is not only unsuitable for exploiting cross-layer effects but also violates the very intent of the IP architecture, and proposes an alternative standardization effort that would preserve the opportunity for innovation while ensuring the integration of MANET subnetworks into larger integrated heterogeneous IP networks.
Abstract: The current Internet protocol (IP) architecture model for mobile ad hoc network (MANET) routing protocol development ignores cross-layer effects by seeking to emulate as closely as possible the wireline architecture. Nevertheless, cross-layer effects are unavoidable, and it is actually desirable to exploit these interactions to achieve greater performance. Further, support for cross-layer information flow is necessary for many of the applications envisioned for MANETs. We review the purpose of the IP architecture and argue that the MANET architecture model is not only unsuitable for exploiting cross-layer effects but also violates the very intent of the IP architecture. By focusing the standardization effort on routing solutions and placing them at the point of integration, just above IP in the protocol stack, the current model effectively stifles the IP development goals of supporting local subnetwork optimization and long-term innovation. We review issues of cross-layer design and then propose an alternative standardization effort that would preserve the opportunity for innovation while ensuring the integration of MANET subnetworks into larger integrated heterogeneous IP networks. Our proposal places MANET into its own subnetworking layer and then divides standardization into four parts: the interface to the MANET subnetwork, a heterogeneous routing protocol, mechanisms for cross-layer information flow, and a combined logical and spatially hierarchical addressing scheme. We identify several more radical MANET design proposals that depart substantially from the current model. All could be integrated into a larger heterogeneous IP network using our protocol approach.

Proceedings ArticleDOI
21 Apr 2006
TL;DR: The panel will explore both the pros and cons of a separate field of HII, with a diversity of perspectives from several disciplines and research traditions including cognitive modeling and the study of human cognition, information science, information architecture, personal information management, ethnography and anthropology.
Abstract: The past few years have seen increasing discussion of the need for, even the inevitability of, a field of human-information interaction (HII) - as either a major sub-branch of human-computer interaction (HCI) or as a separate field altogether. The "I" in HII implies a focus on information rather than computing technology. But what does this mean? Is there any way to focus on information without also considering the supporting tools, applications, and gadgets that are enabled by computing technology? The panel will explore both the pros and cons of a separate field of HII. Panelists provide a diversity of perspectives from several disciplines and research traditions including cognitive modeling and the study of human cognition, information science, information architecture, personal information management, ethnography and anthropology.

Proceedings ArticleDOI
03 Apr 2006
TL;DR: A task model for schema integration is provided that facilitates the interoperation of research prototypes for schema matching with commercial schema mapping tools and provides a common representation so that these tools can more rapidly be combined.
Abstract: A key aspect of any data integration endeavor is establishing a transformation that translates instances of one or more source schemata into instances of a target schema. This schema integration task must be tackled regardless of the integration architecture or mapping formalism. In this paper we provide a task model for schema integration. We use this breakdown to motivate a workbench for schema integration in which multiple tools share a common knowledge repository. In particular, the workbench facilitates the interoperation of research prototypes for schema matching (which automatically identify likely semantic correspondences) with commercial schema mapping tools (which help produce instance-level transformations). Currently, each of these tools provides its own ad hoc representation of schemata and mappings; combining these tools requires aligning these representations. The workbench provides a common representation so that these tools can more rapidly be combined.
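A minimal sketch of the kind of common knowledge repository such a workbench could share between matching and mapping tools. The class names, the path-string representation of schema elements, and the threshold filter are assumptions for illustration, not the paper's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class SchemaElement:
    schema: str  # which schema this element belongs to
    path: str    # element location, e.g. "Customer/Address/Zip"

@dataclass
class Correspondence:
    # Produced by a schema matcher, consumed by a schema mapping tool.
    source: SchemaElement
    target: SchemaElement
    confidence: float

@dataclass
class Repository:
    # Shared store so multiple tools work against one representation.
    correspondences: list = field(default_factory=list)

    def add(self, c: Correspondence) -> None:
        self.correspondences.append(c)

    def above(self, threshold: float) -> list:
        """Correspondences strong enough to hand to a mapping tool."""
        return [c for c in self.correspondences if c.confidence >= threshold]

repo = Repository()
repo.add(Correspondence(SchemaElement("src", "Customer/Zip"),
                        SchemaElement("tgt", "Client/PostalCode"), 0.85))
repo.add(Correspondence(SchemaElement("src", "Customer/Name"),
                        SchemaElement("tgt", "Client/Phone"), 0.20))
print(len(repo.above(0.5)))  # 1
```

The point of the shared store is that a research matcher can write correspondences and a commercial mapper can read them without each tool maintaining its own ad hoc format.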

Patent
24 Jul 2006
TL;DR: Tools and methods for schema matching that generate schema graphs, populate match matrices, and display both; a set of match voters scores each potential match, a vote merger combines the scores into a single confidence value, and filters display each confidence value as a link on a graphical user interface.
Abstract: Tools and methods for schema matching that generate schema graphs, populate match matrices, and display the schema graphs and the match matrices. These tools and methods characterize potential matches between disparate schemata in terms of both the strength and the amount of evidence indicating the potential match. A number of match voters generate a set of match scores for each potential match, and these match scores are combined by a vote merger to form a single confidence value for each potential match. A number of filters display the confidence value for each potential match as a link on a graphical user interface. Machine-learning techniques may be employed to adaptively determine confidence values based on previously established matches.
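The voter/merger scheme can be sketched briefly: several independent voters score a candidate pair of schema elements, and a merger combines the scores into one confidence value. The specific voters (trigram name similarity, type equality) and the weighted-average merger here are illustrative assumptions, not the patent's actual voters.

```python
def name_voter(a: str, b: str) -> float:
    """Crude lexical similarity: Jaccard overlap of character trigrams."""
    grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)} or {s}
    ga, gb = grams(a.lower()), grams(b.lower())
    return len(ga & gb) / len(ga | gb)

def type_voter(a_type: str, b_type: str) -> float:
    """Structural evidence: do the declared datatypes agree?"""
    return 1.0 if a_type == b_type else 0.0

def merge_votes(scores, weights):
    """Vote merger: weighted average of the individual match scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Score one candidate correspondence with two voters, then merge.
scores = [name_voter("PostalCode", "postal_code"),
          type_voter("string", "string")]
confidence = merge_votes(scores, weights=[0.7, 0.3])
print(round(confidence, 2))  # 0.68
```

A filter would then surface only pairs whose merged confidence clears some threshold, shown as links in the user interface; the machine-learning extension mentioned in the abstract would adjust the weights from previously confirmed matches.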

Patent
07 Apr 2006
TL;DR: In this article, the authors provide dynamic and independent data processing and dissemination at individual sensor nodes in a wireless sensor network, among other parameters, network traffic conditions, network connectivity conditions, conditions at the sensor node, and data characteristics and QOS (Quality of Service) requirements of the data being processed and/or disseminated.
Abstract: Methods and systems for smart data processing and dissemination in wireless sensor networks are provided herein. In one aspect, the present invention provides dynamic and independent data processing and dissemination at individual sensor nodes in a wireless sensor network. In another aspect, the present invention provides data processing and/or dissemination methods at a sensor node that are responsive to, among other parameters, network traffic conditions, network connectivity conditions, conditions at the sensor node, and the data characteristics and QoS (Quality of Service) requirements of the data being processed and/or disseminated. In yet another aspect, data processing and/or dissemination rules according to the present invention are easily configurable and modifiable depending on the specific sensor networking application.
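One way to picture such configurable per-node rules is a small ordered rule table evaluated against the reading, the node's own state, and the observed network conditions. The rule names, condition fields, and thresholds below are invented for this sketch, not taken from the patent.

```python
def decide(reading, node, network, rules):
    """Apply the first matching rule; each rule maps a condition to an action."""
    for rule in rules:
        if rule["when"](reading, node, network):
            return rule["action"]
    return "forward"  # default behaviour when no rule fires

# Rules are plain data, so they can be reconfigured per application.
rules = [
    # High-QoS data always goes out immediately, even on a busy link.
    {"when": lambda r, n, net: r["qos"] == "high", "action": "forward"},
    # Under congestion, aggregate low-priority readings locally.
    {"when": lambda r, n, net: net["congestion"] > 0.8, "action": "aggregate"},
    # On low battery, drop routine readings to conserve energy.
    {"when": lambda r, n, net: n["battery"] < 0.1, "action": "drop"},
]

network = {"congestion": 0.9}
node = {"battery": 0.5}
print(decide({"qos": "high"}, node, network, rules))  # forward
print(decide({"qos": "low"}, node, network, rules))   # aggregate
```

Because each node evaluates the table independently, behaviour adapts locally to traffic, connectivity, battery, and QoS requirements without central coordination, which is the "dynamic and independent" aspect the abstract highlights.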