
Showing papers in "AI Magazine in 2008"


Journal ArticleDOI
TL;DR: This article introduces four of the most widely used inference algorithms for classifying networked data and empirically compares them on both synthetic and real-world data.
Abstract: Many real-world applications produce networked data such as the world-wide web (hypertext documents connected via hyperlinks), social networks (for example, people connected by friendship links), communication networks (computers connected via communication links) and biological networks (for example, protein interaction networks). A recent focus in machine learning research has been to extend traditional machine learning classification techniques to classify nodes in such networks. In this article, we provide a brief introduction to this area of research and how it has progressed during the past decade. We introduce four of the most widely used inference algorithms for classifying networked data and empirically compare them on both synthetic and real-world data.

2,937 citations
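As a concrete illustration of the kind of collective inference these algorithms perform, here is a minimal Python sketch in the spirit of iterative classification; the toy graph, seed labels, and majority-vote update are invented for illustration and are not the article's actual algorithms or datasets.

# Relabel unknown nodes from their neighbors' current labels until stable.
graph = {  # adjacency list: node -> neighbors
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e"], "e": ["d"],
}
labels = {"a": "spam", "b": "spam", "e": "ham"}  # known labels
unknown = [n for n in graph if n not in labels]
for n in unknown:
    labels[n] = "ham"  # default guess before iterating

for _ in range(10):  # fixed-point iteration
    changed = False
    for n in unknown:
        votes = [labels[m] for m in graph[n]]
        new = max(set(votes), key=votes.count)  # majority label (ties arbitrary)
        if new != labels[n]:
            labels[n], changed = new, True
    if not changed:
        break

print(labels)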


Journal ArticleDOI
TL;DR: The Kiva warehouse-management system creates a new paradigm for pick-pack-and-ship warehouses that significantly improves worker productivity by using movable storage shelves that can be lifted by small, autonomous robots.
Abstract: The Kiva warehouse-management system creates a new paradigm for pick-pack-and-ship warehouses that significantly improves worker productivity. The Kiva system uses movable storage shelves that can be lifted by small, autonomous robots. By bringing the product to the worker, productivity is increased by a factor of two or more, while simultaneously improving accountability and flexibility. A Kiva installation for a large distribution center may require 500 or more vehicles. As such, the Kiva system represents the first commercially available, large-scale autonomous robot system. The first permanent installation of a Kiva system was deployed in the summer of 2006.

633 citations


Journal ArticleDOI
TL;DR: This paper reviews the latest developments in automated preference-based planning, various approaches for preference representation, and the main practical approaches developed so far.
Abstract: Automated planning is an old area of AI that focuses on the development of techniques for finding a plan that achieves a given goal from a given set of initial states as quickly as possible. In most real-world applications, users of planning systems have preferences over the multitude of plans that achieve a given goal. These preferences make it possible to distinguish plans that are more desirable from those that are less desirable. Planning systems should therefore be able to construct high-quality plans, or at the very least they should be able to build plans that have a reasonably good quality given the resources available. In the last few years we have seen a significant amount of research that has focused on developing rich and compelling languages for expressing preferences over plans. At the same time, we have seen the development of planning techniques that aim at finding high-quality plans quickly, exploiting some of the ideas developed for classical planning. In this paper we review the latest developments in automated preference-based planning. We also review various approaches for preference representation and the main practical approaches developed so far.

89 citations
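As a toy illustration of the core idea (not any system from this survey), the sketch below enumerates goal-achieving plans in an invented two-action domain and ranks them by a soft-preference penalty:

# Toy preference-based planning: all plans reach the goal; preferences
# pick the best among them. Domain, actions, and penalties are invented.
from itertools import product

moves = {"walk": 1, "drive": 2}     # distance covered per action
penalty = {"walk": 0, "drive": 3}   # soft preference: the user likes walking
start, goal = 0, 4

plans = []
for length in range(1, 5):
    for plan in product(moves, repeat=length):
        if start + sum(moves[a] for a in plan) == goal:
            plans.append((sum(penalty[a] for a in plan), plan))

cost, best = min(plans)             # cheapest goal-achieving plan
print(best, "penalty:", cost)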


Journal ArticleDOI
TL;DR: The benefits of preferences for AI systems are explained and a picture of current AI research on preference handling is drawn to provide an introduction to the topics covered by this special issue on preference handling.
Abstract: This article explains the benefits of preferences for AI systems and draws a picture of current AI research on preference handling. It thus provides an introduction to the topics covered by this special issue on preference handling.

83 citations


Journal ArticleDOI
TL;DR: Some analyses of common areas of design pitfalls are provided and a set of design guidelines is derived that assists the user in avoiding these problems in three important areas: user preference elicitation, preference revision, and explanation interfaces.
Abstract: We address user system interaction issues in product search and recommender systems: how to help users select the most preferential item from a large collection of alternatives. As such systems must crucially rely on an accurate and complete model of user preferences, the acquisition of this model becomes the central subject of our paper. Many tools used today do not satisfactorily assist users to establish this model because they do not adequately focus on fundamental decision objectives, help them reveal hidden preferences, revise conflicting preferences, or explicitly reason about tradeoffs. As a result, users fail to find the outcomes that best satisfy their needs and preferences. In this article, we provide some analyses of common areas of design pitfalls and derive a set of design guidelines that assist the user in avoiding these problems in three important areas: user preference elicitation, preference revision, and explanation interfaces. For each area, we describe the state-of-the-art of the developed techniques and discuss concrete scenarios where they have been applied and tested.

81 citations


Journal ArticleDOI
TL;DR: An introduction to preference handling in combinatorial domains in the context of collective decision making is given, and it is shown that the considerable body of work on preference representation and elicitation that AI researchers have been working on for several years is particularly relevant.
Abstract: In both individual and collective decision making, the space of alternatives from which the agent (or the group of agents) has to choose often has a combinatorial (or multi-attribute) structure. We give an introduction to preference handling in combinatorial domains in the context of collective decision making, and show that the considerable body of work on preference representation and elicitation that AI researchers have been working on for several years is particularly relevant. After giving an overview of languages for compact representation of preferences, we discuss problems in voting in combinatorial domains, and then focus on multiagent resource allocation and fair division. These issues belong to a larger field, known as computational social choice, that brings together ideas from AI and social choice theory, to investigate mechanisms for collective decision making from a computational point of view. We conclude by briefly describing some of the other research topics studied in computational social choice.

69 citations
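For a taste of the fair-division side, here is a minimal Python sketch of a round-robin picking sequence over indivisible goods, one simple mechanism of the kind studied in this field; the agents and valuations are invented examples, not anything from the article:

# Round-robin allocation: agents take turns picking their most-valued
# remaining item. Agents, items, and valuations are illustrative only.
valuations = {
    "alice": {"car": 9, "bike": 4, "book": 2},
    "bob":   {"car": 7, "bike": 6, "book": 5},
}
items = {"car", "bike", "book"}
allocation = {agent: [] for agent in valuations}

turn_order = ["alice", "bob"]
while items:
    for agent in turn_order:
        if not items:
            break
        pick = max(items, key=lambda i: valuations[agent][i])
        allocation[agent].append(pick)
        items.remove(pick)

print(allocation)  # {'alice': ['car', 'book'], 'bob': ['bike']}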


Journal ArticleDOI
TL;DR: The challenge to develop an integrated perspective of embodiment in communication has been taken up by an international research group hosted by Bielefeld University's Center for Interdisciplinary Research from October 2005 through September 2006.
Abstract: The challenge to develop an integrated perspective of embodiment in communication has been taken up by an international research group hosted by Bielefeld University's Center for Interdisciplinary Research (ZiF -- Zentrum für interdisziplinäre Forschung) from October 2005 through September 2006. An international conference was held there on 12-15 January 2005 to define a research agenda that will explicitly address embodied communication in humans and machines.

66 citations


Journal ArticleDOI
TL;DR: An overview of the multifaceted relationship between nonmonotonic logics and preferences is given and formalisms which explicitly represent preferences on properties of belief sets are commented on.
Abstract: We give an overview of the multifaceted relationship between nonmonotonic logics and preferences. We discuss how the nonmonotonicity of reasoning itself is closely tied to preferences reasoners have on models of the world or, as we often say here, possible belief sets. Selecting extended logic programming with the answer-set semantics as a "generic" nonmonotonic logic, we show how that logic defines preferred belief sets and how preferred belief sets allow us to represent and interpret normative statements. Conflicts among program rules (more generally, defaults) give rise to alternative preferred belief sets. We discuss how such conflicts can be resolved based on implicit specificity or on explicit rankings of defaults. Finally, we comment on formalisms which explicitly represent preferences on properties of belief sets. Such formalisms either build preference information directly into rules and modify the semantics of the logic appropriately, or specify preferences on belief sets independently of the mechanism to define them.

65 citations
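A tiny sketch of the specificity idea using the classic birds-and-penguins example: when two applicable defaults conclude opposite values, the more specific (higher-ranked) default decides. The encoding below is an illustrative assumption, not the article's formalism:

# Resolve conflicting defaults by explicit ranking: higher rank = more
# specific, and the most specific applicable default decides each atom.
defaults = [
    # (rank, premise, (atom, value))
    (1, "bird",    ("flies", True)),
    (2, "penguin", ("flies", False)),
]
facts = {"bird", "penguin"}

applicable = [d for d in defaults if d[1] in facts]
applicable.sort(key=lambda d: d[0])   # apply least specific first
belief = {}
for _, _, (atom, value) in applicable:
    belief[atom] = value              # more specific defaults override

print(belief)  # {'flies': False}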


Journal ArticleDOI
TL;DR: This article describes recent work on building systems that use models of the Blogosphere to recognize spam blogs, find opinions on topics, identify communities of interest, derive trust relationships, and detect influential bloggers.
Abstract: Social media systems such as weblogs, photo- and link-sharing sites, Wikis and on-line forums are currently thought to produce up to one third of new Web content. One thing that sets these "Web 2.0" sites apart from traditional Web pages and resources is that they are intertwined with other forms of networked data. Their standard hyperlinks are enriched by social networks, comments, trackbacks, advertisements, tags, RDF data and metadata. We describe recent work on building systems that use models of the Blogosphere to recognize spam blogs, find opinions on topics, identify communities of interest, derive trust relationships, and detect influential bloggers.

58 citations


Journal ArticleDOI
TL;DR: Why fractal patterns are an appropriate model for web systems and how semantic web technologies can be used to design scalable and interoperable systems are discussed.
Abstract: In the past, many knowledge representation systems failed because they were too monolithic and didn’t scale well, whereas other systems failed to have an impact because they were small and isolated. Along with this trade-off in size, there is also a constant tension between the cost involved in building a larger community that can interoperate through common terms and the cost of the lack of interoperability. The semantic web offers a good compromise between these approaches as it achieves wide-scale communication and interoperability using finite effort and cost. The semantic web is a set of standards for knowledge representation and exchange that is aimed at providing interoperability across applications and organizations. We believe that the gathering success of this technology is not derived from the particular choice of syntax or of logic. Its main contribution is in recognizing and supporting the fractal patterns of scalable web systems. These systems will be composed of many overlapping communities of all sizes, ranging from one individual to the entire population that have internal (but not global) consistency. The information in these systems, including documents and messages, will contain some terms that are understood and accepted globally, some that are understood within certain communities, and some that are understood locally within the system. The amount of interoperability between interacting agents (software or human) will depend on how many communities they have in common and how many ontologies (groups of consistent and related terms) they share. In this article we discuss why fractal patterns are an appropriate model for web systems and how semantic web technologies can be used to design scalable and interoperable systems.

46 citations


Journal ArticleDOI
TL;DR: It is described how AI techniques such as abstraction, explanation generation, machine learning, and preference elicitation can be useful in modelling and solving soft constraints.
Abstract: We review constraint-based approaches to handle preferences. We start by defining the main notions of constraint programming, then give various concepts of soft constraints and show how they can be used to model quantitative preferences. We then consider how soft constraints can be adapted to handle other forms of preferences, such as bipolar, qualitative, and temporal preferences. Finally, we describe how AI techniques such as abstraction, explanation generation, machine learning, and preference elicitation, can be useful in modelling and solving soft constraints.
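As a concrete toy instance of a soft-constraint (weighted CSP) model, the sketch below scores complete assignments by the penalties of violated soft constraints and keeps the cheapest one; the variables and costs are invented:

# Brute-force weighted CSP: every complete assignment gets a penalty
# from the soft constraints it violates; keep the cheapest assignment.
from itertools import product

domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

def cost(a):
    c = 0
    c += 0 if a["x"] != a["y"] else 2   # strongly prefer x != y
    c += 0 if a["y"] == a["z"] else 1   # mildly prefer y == z
    return c

names = list(domains)
best = min(
    (dict(zip(names, values)) for values in product(*domains.values())),
    key=cost,
)
print(best, "cost:", cost(best))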

Journal ArticleDOI
TL;DR: This survey considers preference handling in applications such as recommender systems, personal assistant agents, and personalized user interfaces, and gives an outlook on potential benefits and challenges.
Abstract: Interactive artificial intelligence systems employ preferences in both their reasoning and their interaction with the user. This survey considers preference handling in applications such as recommender systems, personal assistant agents, and personalized user interfaces. We survey the major questions and approaches, present illustrative examples, and give an outlook on potential benefits and challenges.

Journal ArticleDOI
TL;DR: The importance of assessing numerical utilities rather than qualitative preferences is argued for, and several utility elicitation techniques from artificial intelligence, operations research, and conjoint analysis are surveyed.
Abstract: The effective tailoring of decisions to the needs and desires of specific users requires automated mechanisms for preference assessment. We provide a brief overview of recent direct preference elicitation methods: these methods ask users to answer (ideally, a small number of) queries regarding their preferences and use this information to recommend a feasible decision that would be (approximately) optimal given those preferences. We argue for the importance of assessing numerical utilities rather than qualitative preferences, and survey several utility elicitation techniques from artificial intelligence, operations research, and conjoint analysis.
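One classic direct method of this kind is the standard-gamble query: find the probability p at which the user is indifferent between a sure outcome and a lottery over the best and worst outcomes; that p is the outcome's utility. The sketch below binary-searches for p, with a simulated user (and an assumed hidden utility value) standing in for real responses:

# Standard-gamble elicitation via binary search on the lottery probability.
TRUE_UTILITY = 0.62              # hidden; elicitation should recover it

def prefers_lottery(p):
    """Simulated user: prefers the lottery iff p exceeds the utility."""
    return p > TRUE_UTILITY

lo, hi = 0.0, 1.0
for _ in range(20):              # binary search on p
    p = (lo + hi) / 2
    if prefers_lottery(p):
        hi = p                   # indifference point lies below p
    else:
        lo = p
print("elicited utility ~", round((lo + hi) / 2, 3))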

Journal ArticleDOI
TL;DR: The article provides an overview of three key results related to k-optimality, sketches algorithms for k-optimality, and provides some experimental results for 1-, 2-, and 3-optimal algorithms for several types of DCOPs.
Abstract: In many cooperative multiagent domains, the effect of local interactions between agents can be compactly represented as a network structure. Given that agents are spread across such a network, agents directly interact only with a small group of neighbors. A distributed constraint optimization problem (DCOP) is a useful framework to reason about such networks of agents. Given agents’ inability to communicate and collaborate in large groups in such networks, we focus on an approach called k-optimality for solving DCOPs. In this approach, agents form groups of one or more agents until no group of k or fewer agents can possibly improve the DCOP solution; we define this type of local optimum, and any algorithm guaranteed to reach such a local optimum, as k-optimal. The article provides an overview of three key results related to k-optimality. The first set of results gives worst-case guarantees on the solution quality of k-optima in a DCOP. These guarantees can help determine an appropriate k-optimal algorithm, or possibly an appropriate constraint graph structure, for agents to use in situations where the cost of coordination between agents must be weighed against the quality of the solution reached. The second set of results gives upper bounds on the number of k-optima that can exist in a DCOP. These results are useful in domains where a DCOP must generate a set of solutions rather than a single solution. Finally, we sketch algorithms for k-optimality and provide some experimental results for 1-, 2-, and 3-optimal algorithms for several types of DCOPs.
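A minimal sketch of the k = 1 case: agents flip their own value whenever that unilateral change raises total payoff, and any fixed point is, by definition, 1-optimal. The constraint graph and payoff table are invented, and this is a plain sequential loop rather than one of the article's distributed algorithms:

# Toy DCOP: binary variables, one shared payoff table per edge.
edges = {("a", "b"), ("b", "c")}
payoff = {(0, 0): 1, (0, 1): 5, (1, 0): 5, (1, 1): 2}  # prefer to differ

values = {"a": 0, "b": 0, "c": 0}

def total(vals):
    return sum(payoff[(vals[i], vals[j])] for i, j in edges)

improved = True
while improved:                    # terminates at a 1-optimum
    improved = False
    for agent in values:
        for v in (0, 1):
            trial = dict(values, **{agent: v})
            if total(trial) > total(values):
                values = trial
                improved = True

print(values, "payoff:", total(values))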

Journal ArticleDOI
TL;DR: This article describes past and current work on the Meta-Cognitive Loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems, with the goal of making artificial systems more robust and less dependent on their human designers.
Abstract: Humans learn from their mistakes. When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don’t even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our past and current work on the Meta-Cognitive Loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust and less dependent on their human designers.

Journal ArticleDOI
TL;DR: Some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment are outlined, along with continued research to address some of the concerns raised.
Abstract: Software personal assistants continue to be a topic of significant research interest. This article outlines some of the important lessons learned from a successfully deployed team of personal assistant agents (Electric Elves) in an office environment. In the Electric Elves project, a team of almost a dozen personal assistant agents was continually active for seven months. Each elf (agent) represented one person and assisted in daily activities in an actual office environment. This project led to several important observations about privacy, adjustable autonomy, and social norms in office environments. In addition to outlining some of the key lessons learned, we outline our continued research to address some of the concerns raised.

Journal ArticleDOI
TL;DR: The proposed approach is called analog genetic encoding (AGE) and realizes an implicit genetic encoding of analog networks, permitting the evolution of human-competitive solutions to real-world analog network design and identification problems.
Abstract: A large class of systems of biological and technological relevance can be described as analog networks, that is, collections of dynamical devices interconnected by links of varying strength. Some examples of analog networks are genetic regulatory networks, metabolic networks, neural networks, analog electronic circuits, and control systems. Analog networks are typically complex systems which include nonlinear feedback loops and possess temporal dynamics at different time scales. Both the synthesis and reverse engineering of analog networks are recognized as knowledge-intensive activities, for which few systematic techniques exist. In this paper we will discuss the general relevance of the analog network concept and describe an evolutionary approach to the automatic synthesis and the reverse engineering of analog networks. The proposed approach is called analog genetic encoding (AGE) and realizes an implicit genetic encoding of analog networks. AGE permits the evolution of human-competitive solutions to real-world analog network design and identification problems. This is illustrated by some examples of application to the design of electronic circuits, control systems, learning neural architectures, and the reverse engineering of biological networks.
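In the same spirit, though far simpler than AGE itself, the sketch below runs a minimal evolutionary loop that tunes the two parameters of a one-node "analog network" to match a target response; the representation, fitness function, and mutation scheme are illustrative assumptions:

# Minimal evolutionary synthesis: evolve (w, b) of y = tanh(w*x + b)
# to match a target response. Not AGE; a generic toy evolutionary loop.
import math, random

random.seed(0)
TARGET = lambda x: math.tanh(2.0 * x - 1.0)     # behavior to match
XS = [i / 10 for i in range(11)]

def output(genome, x):
    w, b = genome
    return math.tanh(w * x + b)

def fitness(genome):                            # negative squared error
    return -sum((output(genome, x) - TARGET(x)) ** 2 for x in XS)

pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(20)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                           # truncation selection
    pop = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
           for _ in range(20)]
    pop[:len(parents)] = parents                # elitism

best = max(pop, key=fitness)
print("best genome:", [round(g, 2) for g in best])  # ~ [2.0, -1.0]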

Journal ArticleDOI
TL;DR: This paper presents some of the most successful graph-based representations and algorithms used in language processing and tries to explain how and why they work.
Abstract: Over the last few years, a number of areas of natural language processing have begun applying graph-based techniques. These include, among others, text summarization, syntactic parsing, word-sense disambiguation, ontology construction, sentiment and subjectivity analysis, and text clustering. In this paper, we present some of the most successful graph-based representations and algorithms used in language processing and try to explain how and why they work.
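A minimal sketch of one such technique, TextRank-style keyword ranking: words co-occurring within a small window become neighbors in a graph, and PageRank-like scores rank them as keywords. The text, window size, and damping are toy choices, not the paper's experiments:

# Build a word co-occurrence graph (window of 2) and score words with
# a PageRank-style power iteration; top-scoring words act as keywords.
text = ("graph based methods rank words by building a graph of word "
        "co occurrence and scoring words with a random walk").split()

graph = {}
for i, w in enumerate(text):
    for j in range(max(0, i - 2), min(len(text), i + 3)):
        if i != j:
            graph.setdefault(w, set()).add(text[j])

scores = {w: 1.0 for w in graph}
for _ in range(30):                       # power iteration, damping 0.85
    scores = {
        w: 0.15 + 0.85 * sum(scores[v] / len(graph[v]) for v in nbrs)
        for w, nbrs in graph.items()
    }

for w, s in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
    print(round(s, 2), w)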

Journal ArticleDOI
TL;DR: This article is a brief chronicling of a light-hearted and occasionally bittersweet presentation on “Whatever Happened to AI?” at the Stanford Spring Symposium, and a textual snapshot of a discussion with friends and colleagues, rather than a scholarly article.
Abstract: On March 27, 2006, I gave a light-hearted and occasionally bittersweet presentation on “Whatever Happened to AI?” at the Stanford Spring Symposium, to a lively audience of active AI researchers and formerly-active ones (whose current inaction could be variously ascribed to their having aged, reformed, given up, redefined the problem, etc.). This article is a brief chronicling of that talk, and I entreat the reader to take it in that spirit: a textual snapshot of a discussion with friends and colleagues, rather than a scholarly article. I begin by whining about the Turing Test, but only for a thankfully brief bit, and then get down to my top-10 list of factors that have retarded progress in our field, that have delayed the emergence of a true strong AI.

Journal ArticleDOI
TL;DR: The architecture and AI technology behind an XML-based AI framework designed to streamline e-government form processing, used to implement an AI module for one of the busiest immigration agencies in the world, are described.
Abstract: This article describes the architecture and AI technology behind an XML-based AI framework designed to streamline e-government form processing. The framework performs several crucial assessment and decision support functions, including workflow case assignment, automatic assessment, follow-up action generation, precedent case retrieval, and learning of current practices. To implement these services, several AI techniques were used, including rule-based processing, schema-based reasoning, AI clustering, case-based reasoning, data mining, and machine learning. The primary objective of using AI for e-government form processing is of course to provide faster and higher quality service as well as ensure that all forms are processed fairly and accurately. With AI, all relevant laws and regulations as well as current practices are guaranteed to be considered and followed. The framework has been used to implement an AI module for one of the busiest immigration agencies in the world.

Journal ArticleDOI
TL;DR: WebCrow is introduced, an automatic crossword solver in which the needed knowledge is mined from the web: clues are solved primarily by accessing the web through search engines and applying natural language processing techniques.
Abstract: Crosswords are very popular and represent a useful domain of investigation for modern artificial intelligence. In contrast to solving other celebrated games (such as chess), cracking crosswords requires a paradigm shift towards the ability to handle tasks for which humans require extensive semantic knowledge. This article introduces WebCrow, an automatic crossword solver in which the needed knowledge is mined from the web: clues are solved primarily by accessing the web through search engines and applying natural language processing techniques. In competitions at the European Conference on Artificial Intelligence (ECAI) in 2006 and other conferences, this web-based approach enabled WebCrow to outperform its human challengers. Just as chess was once called “the Drosophila of artificial intelligence,” we believe that crossword systems can be useful Drosophila of web-based agents.
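Whatever candidates the web provides must still satisfy the grid's hard constraints; the sketch below shows that filtering step, with a hard-coded candidate list standing in for words mined from search-engine results (the clue, slot, and pattern are invented):

# Keep only candidates that fit the slot length and the letters already
# fixed by crossing words ("." marks an open cell in the pattern).
import re

candidates = ["paris", "london", "madrid", "parma"]  # toy clue: "EU city"
slot_len = 5
pattern = "pa..."        # crossings fixed the first two letters

fits = [w for w in candidates
        if len(w) == slot_len and re.fullmatch(pattern, w)]
print(fits)              # ['paris', 'parma']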

Journal ArticleDOI
TL;DR: This special issue of AI Magazine is dedicated to the proposition that problems populate the path to insight, implying that the experiences and lessons learned should be shared.

Abstract: This special issue of AI Magazine is dedicated to the proposition that problems populate the path to insight, implying that the experiences and lessons learned should be shared.

Journal ArticleDOI
TL;DR: The goal of the Electric Elves project was to develop software agent technology to support human organizations, and a variety of applications of the Elves were developed, including scheduling visitors, managing a research group, and monitoring travel.
Abstract: The goal of the Electric Elves project was to develop software agent technology to support human organizations. We developed a variety of applications of the Elves, including scheduling visitors, managing a research group (the Office Elves), and monitoring travel (the Travel Elves). The Travel Elves were eventually deployed at DARPA, where things did not go exactly as planned. In this article, we describe some of the things that went wrong and then present some of the lessons learned and new research that arose from our experience in building the Travel Elves.

Journal ArticleDOI
TL;DR: More than 30 image learning systems have been deployed on seven fishing vessels in Norway and Iceland over the past three years; the use of CogniSight has significantly reduced the number of crew members needed on the boats (by up to six persons) and has shortened time at sea by 15 percent.

Abstract: A generic image learning system, CogniSight, is being used for the inspection of fish before filleting offshore. More than 30 systems have been deployed on seven fishing vessels in Norway and Iceland over the past three years. Each CogniSight system uses four neural network chips (a total of 312 neurons) based on a natively parallel, hard-wired architecture that performs real-time learning and nonlinear classification (RBF). These systems are trained by the ship crew using Image Knowledge Builder, a "show and tell" interface that facilitates easy training and validation. Fishers can reinforce the learning at any time, as needed. The use of CogniSight has significantly reduced the number of crew members needed on the boats (by up to six persons), and the time at sea has been shortened by 15 percent. The fast and high return on investment (ROI) to the fishing fleet has significantly increased the market share of Pisces Industries, the company integrating CogniSight systems into its filleting machines.
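A software sketch of the RBF-style decision rule such chips implement: each stored prototype carries a class label and an influence radius, and an input takes the label of a prototype whose field covers it. The feature vectors, radii, and labels below are invented toy values:

# Nearest-prototype RBF-style classification: label an input by the
# first stored prototype whose influence radius covers it.
prototypes = [
    # (feature vector, radius, label)
    ((0.9, 0.8), 0.3, "good fillet"),
    ((0.2, 0.1), 0.4, "defect"),
]

def classify(x):
    for center, radius, label in prototypes:
        dist = sum((a - b) ** 2 for a, b in zip(x, center)) ** 0.5
        if dist <= radius:
            return label
    return "unknown"            # outside all influence fields

print(classify((0.85, 0.75)))   # good fillet
print(classify((0.5, 0.5)))     # unknown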

Journal ArticleDOI
TL;DR: This work presents the 6S peer network, which uses machine learning techniques to learn about the changing query environment and shows that simple reinforcement learning algorithms are sufficient to detect and exploit semantic locality in the network, resulting in efficient routing and high-quality search results.
Abstract: Collaborative query routing is a new paradigm for Web search that treats both established search engines and other publicly available indices as intelligent peer agents in a search network. The approach makes it transparent for anyone to build their own (micro) search engine, by integrating established Web search services, desktop search, and topical crawling techniques. The challenge in this model is that each of these agents must learn about its environment (the existence, knowledge, diversity, reliability, and trustworthiness of other agents) by analyzing the queries received from and results exchanged with these other agents. We present the 6S peer network, which uses machine learning techniques to learn about the changing query environment. We show that simple reinforcement learning algorithms are sufficient to detect and exploit semantic locality in the network, resulting in efficient routing and high-quality search results. A prototype of 6S is available for public use and is intended to assist in the evaluation of different AI techniques employed by the networked agents.
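A minimal sketch of the kind of simple reinforcement learning the abstract mentions, applied to routing one topic's queries: keep a per-peer quality estimate, choose peers epsilon-greedily, and nudge the estimate toward each observed result quality. The peers, hidden qualities, and reward simulation are invented:

# Epsilon-greedy query routing: learned estimates converge toward the
# (hidden) result quality of each peer for a single topic.
import random

random.seed(1)
peers = ["peer_a", "peer_b", "peer_c"]
true_quality = {"peer_a": 0.2, "peer_b": 0.8, "peer_c": 0.5}  # hidden
q = {p: 0.5 for p in peers}         # learned estimates

for step in range(500):
    if random.random() < 0.1:                        # explore
        peer = random.choice(peers)
    else:                                            # exploit best estimate
        peer = max(peers, key=q.get)
    reward = random.random() < true_quality[peer]    # simulated result
    q[peer] += 0.05 * (reward - q[peer])             # running update

print({p: round(v, 2) for p, v in q.items()})  # peer_b ranks highest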

Journal ArticleDOI
TL;DR: The authors designed, built, and deployed an interdisciplinary virtual observatory: an online service providing access to what appears to be an integrated collection of scientific data, as if all resources were organized, stored, and retrieved or used in a common way.
Abstract: Our work is aimed at enabling a new style of virtual, distributed scientific research. We have designed, built, and deployed an interdisciplinary virtual observatory—an online service providing access to what appears to be an integrated collection of scientific data. The Virtual Solar-Terrestrial Observatory (VSTO) is a production semantic web data framework providing access to observational data sets from fields spanning upper atmospheric terrestrial physics to solar physics. The observatory allows virtual access to a highly distributed and heterogeneous set of data that appears as if all resources are organized, stored, and retrieved or used in a common way. The end-user community includes scientists, students, and data providers. We will introduce interdisciplinary virtual observatories and their potential impact by describing our experiences with VSTO. We will also highlight some benefits of the embedded semantic web technology and provide evaluation results after the first year of use.

Journal ArticleDOI
Srinivas Krovvidy
TL;DR: Custom DU® is an automated underwriting system that enables mortgage lenders to build their own business rules that facilitate assessing borrower eligibility for different mortgage products, allowing for centralized control of underwriting policies and procedures even if lenders have decentralized operations.
Abstract: Custom DU is an automated underwriting system that enables mortgage lenders to build their own business rules that facilitate assessing borrower eligibility for different mortgage products. Developed by Fannie Mae, Custom DU has been used since 2004 by several lenders to automate the underwriting of numerous mortgage products. Custom DU uses rule specification language techniques and a web-based, user-friendly interface for implementing business rules that represent business policy. By means of the user interface, lenders can also customize their underwriting findings reports, test the rules that they have defined, and publish changes to business rules on a real-time basis, all without any software modifications. The user interface enforces structure and consistency, enabling business users to focus on their underwriting guidelines when converting their business policy to rules. Once lenders have created their rules, loans are routed to the appropriate rule sets, and customized, but consistent, results are always returned to the lender. Using Custom DU, lenders can create different rule sets for their products and assign them to different channels of the business, allowing for centralized control of underwriting policies and procedures—even if lenders have decentralized operations.

Journal ArticleDOI
TL;DR: The AAAI video archive is a central source of information about AI-related videotapes and films that are stored digitally on other sites or physically in institutional archives.

Abstract: The AAAI video archive is a central source of information about AI-related videotapes and films that are stored digitally on other sites or physically in institutional archives. For each video, the archive includes a brief description of the contents and personae, one or more representative short clips for classroom or individual use, and the location of the archival copy (for example, at a university library).

Journal ArticleDOI
TL;DR: Although the commercial venture failed, the team learned a lot, had fun, and is trying again with a new company; to others who aspire to commercialize their AI technology, the author says: "Take a chance!"

Abstract: Extempo Systems, Inc. was founded in 1995 to commercialize intelligent characters. Our team built innovative software and novel applications for several markets. We had some early-adopting customers during the Internet boom, but the company was not quite able to survive the significant downturn in corporate IT spending when the bubble burst. In 2004, Extempo ceased operations and was formally liquidated. Although our commercial venture failed, we learned a lot, had fun, and are trying again with a new company. To others who aspire to commercialize their AI technology, I say: "Take a chance!"

Journal ArticleDOI
TL;DR: Learning from noisy data is very difficult, but when a given method fails, people often simply try again instead of trying to understand why it failed.
Abstract: Learning from noisy data is very difficult. But if a certain method fails, people often try again instead of trying to understand why it fails.