Author

David S. Vogel

Bio: David S. Vogel is an academic researcher from the University of Central Florida. The author has contributed to research in topics: Mean squared error & Linear model. The author has an h-index of 7 and has co-authored 11 publications receiving 1,179 citations. Previous affiliations of David S. Vogel include Florida State University College of Arts and Sciences.

Papers
Journal ArticleDOI
TL;DR: It was found that across people and situations, games and interactive simulations are more dominant for cognitive gain outcomes, however, consideration of specific moderator variables yielded a more complex picture.
Abstract: Substantial disagreement exists in the literature regarding which educational technology results in the highest cognitive gain for learners. In an attempt to resolve this dispute, we conducted a me...

842 citations

Journal ArticleDOI
TL;DR: It was found that across people and situations, the right hemisphere is the more dominant for spatial processing, however, consideration of specific moderator variables yielded a more complex picture.

211 citations

Proceedings Article
28 Jun 2009
TL;DR: The KDD Cup 2009 focused on identifying data mining techniques capable of rapidly building predictive models and scoring new entries on a large CRM database; the results of the challenge were discussed at the KDD conference (June 28, 2009).
Abstract: We organized the KDD Cup 2009 around a marketing problem with the goal of identifying data mining techniques capable of rapidly building predictive models and scoring new entries on a large database. Customer Relationship Management (CRM) is a key element of modern marketing strategies. The KDD Cup 2009 offered the opportunity to work on large marketing databases from the French telecom company Orange to predict the propensity of customers to switch provider (churn), buy new products or services (appetency), or buy upgrades or add-ons proposed to them to make the sale more profitable (up-selling). The challenge started on March 10, 2009 and ended on May 11, 2009. This challenge attracted over 450 participants from 46 countries. We attribute the popularity of the challenge to several factors: (1) A generic problem relevant to industry (a classification problem), but presenting a number of scientific and technical challenges of practical interest, including: a large number of training examples (50,000) with a large number of missing values (about 60%) and a large number of features (15,000), unbalanced class proportions (fewer than 10% of the examples in the positive class), noisy data, and the presence of categorical variables with many different values. (2) Prizes (Orange offered 10,000 Euros in prizes). (3) A well-designed protocol and web site (we benefited from past experience). (4) An effective advertising campaign using mailings and a teleconference to answer potential participants' questions. The results of the challenge were discussed at the KDD conference (June 28, 2009). The principal conclusions are that ensemble methods are very effective and that ensembles of decision trees offer off-the-shelf solutions to problems with large numbers of samples and attributes, mixed types of variables, and lots of missing values. The data and the platform of the challenge remain available for research and educational purposes at http://www.kddcup-orange.com/.

58 citations
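The abstract above credits ensembles of decision trees as off-the-shelf solutions for data with many samples, mixed variable types, and heavy missingness. The sketch below is not the organizers' or any winner's pipeline; it only illustrates that kind of tree-ensemble baseline, assuming scikit-learn and pandas, with the file name and the "churn" target column as hypothetical placeholders.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_csv("orange_small_train.csv")   # hypothetical file name
y = df.pop("churn")                          # hypothetical binary target (1 = churner)

cat_cols = list(df.select_dtypes(include="object").columns)

# Encode high-cardinality categoricals as integers; numeric columns pass through
# untouched. Histogram-based gradient boosting handles remaining NaNs natively,
# which matters for data with roughly 60% missing values.
prep = ColumnTransformer(
    [("cat", OrdinalEncoder(handle_unknown="use_encoded_value",
                            unknown_value=-1, encoded_missing_value=-2), cat_cols)],
    remainder="passthrough",
)
model = Pipeline([
    ("prep", prep),
    ("gbt", HistGradientBoostingClassifier(max_iter=300, learning_rate=0.05)),
])

# AUC copes better than accuracy with unbalanced class proportions (<10% positives).
print(cross_val_score(model, df, y, cv=5, scoring="roc_auc").mean())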

Journal ArticleDOI
TL;DR: The paper describes the architecture of a classification system that uses a web directory to identify the subject context in which the query terms are frequently used; the system received the Runner-Up Award for Query Categorization Performance of the KDD Cup 2005.
Abstract: The performance of search engines crucially depends on their ability to capture the meaning of a query most likely intended by the user. We study the problem of mapping a search engine query to those nodes of a given subject taxonomy that characterize its most likely meanings. We describe the architecture of a classification system that uses a web directory to identify the subject context that the query terms are frequently used in. Based on its performance on the classification of 800,000 example queries recorded from MSN search, the system received the Runner-Up Award for Query Categorization Performance of the KDD Cup 2005.

51 citations
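The system described above maps a query to taxonomy nodes by using a web directory to learn which subjects the query terms usually appear in. The toy sketch below only illustrates that idea with a term-frequency profile per node; the directory contents, node names, and scoring are invented for illustration and are not the KDD Cup 2005 system.

from collections import Counter

# Hypothetical web directory: taxonomy node -> documents filed under that node.
directory = {
    "Computers/Software": ["download free antivirus software", "open source text editor"],
    "Health/Medicine": ["flu symptoms and treatment", "antivirus drugs for influenza"],
}

# Term-frequency profile of each node, built from its directory documents.
profiles = {node: Counter(w for doc in docs for w in doc.lower().split())
            for node, docs in directory.items()}

def classify(query, top_k=2):
    """Rank taxonomy nodes by how strongly the query terms occur in their profiles."""
    terms = query.lower().split()
    scores = {node: sum(prof[t] for t in terms) / (sum(prof.values()) or 1)
              for node, prof in profiles.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

print(classify("antivirus"))   # the ambiguous query scores against both subjects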

Journal ArticleDOI
TL;DR: A case study of how artificial intelligence (AI) can be used for a high-quality predictive modeling process, and how this process is used to improve the quality and efficiency of healthcare.
Abstract: Predictive modeling in healthcare has been gaining more interest and utilization in recent years. The tools for doing this have become more sophisticated, with increasingly higher accuracy. We present a case study of how artificial intelligence (AI) can be used for a high-quality predictive modeling process, and how this process is used to improve the quality and efficiency of healthcare. In this case study, MEDai, Inc. provides the analytical tools for the predictive modeling, and Sentara Healthcare uses these predictions to determine which members can be helped the most by actively looking for ways to prevent future severe outcomes. Most predictive methodologies implement rule-based systems or regression techniques. There are many pitfalls of these techniques when applied to medical data, where many variables and many interacting variable combinations exist, necessitating modeling with AI. When comparing the R² statistic (the commonly accepted measurement of how accurate a predictive model is) of traditional techniques versus AI techniques, the resulting accuracy more than doubles. The cited publications show a range of raw R² values from 0.10 to 0.15. In contrast, the R² value obtained from AI techniques implemented at Sentara is 0.34. Once the predictions are generated, data are displayed and analytical programs are utilized for data mining and analysis. With this tool, it is possible to examine sub-groups of the data, or data mine down to the member level. Risk factors can be determined and individual members or member groups can be analyzed to help make decisions about what changes can be made to improve the level of medical care that people receive.

50 citations
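The abstract above compares models by the R² statistic (raw R² of 0.10 to 0.15 for the cited traditional techniques versus 0.34 at Sentara). As a reminder of what that number measures, the snippet below computes R² as the fraction of outcome variance explained by the predictions; the outcome values are invented for illustration and are unrelated to the study's data.

import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = np.array([1200.0, 400.0, 9800.0, 150.0, 2300.0])   # hypothetical outcomes
mean_only = np.full_like(y_true, y_true.mean())             # constant prediction
model_pred = np.array([1000.0, 600.0, 8000.0, 300.0, 2500.0])

print(r_squared(y_true, mean_only))   # 0.0: explains none of the variance
print(r_squared(y_true, model_pred))  # ~0.95: explains most of the variance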


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
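The fourth category above (applications customized per user) is easiest to see with the mail-filtering example: the system learns a user's own filtering rules from messages that user has kept or rejected. The toy sketch below uses a simple Naive Bayes scorer over invented messages; it is only an illustration of that idea, not any particular mail filter.

from collections import Counter
import math

kept = ["project meeting at noon", "draft of the report attached"]
rejected = ["win a free prize now", "free offer claim your prize"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.lower().split())

kept_c, rej_c = word_counts(kept), word_counts(rejected)
vocab = set(kept_c) | set(rej_c)

def log_score(msg, counts, n_class):
    # log P(class) + sum over words of log P(word | class), with add-one smoothing
    score = math.log(n_class / (len(kept) + len(rejected)))
    denom = sum(counts.values()) + len(vocab)
    for w in msg.lower().split():
        score += math.log((counts[w] + 1) / denom)
    return score

def is_rejected(msg):
    return log_score(msg, rej_c, len(rejected)) > log_score(msg, kept_c, len(kept))

print(is_rejected("claim your free prize"))     # True
print(is_rejected("meeting about the report"))  # False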

Book
19 Nov 2008
TL;DR: This book presents a synthesis of meta-analyses of the contributions from the home, the school, and the curricula to create a picture of visible teaching and visible learning in the post-modern world.
Abstract: Preface Chapter 1 The challenge Chapter 2 The nature of the evidence: A synthesis of meta-analyses Chapter 3 The argument: Visible teaching and visible learning Chapter 4: The contributions from the student Chapter 5 The contributions from the home Chapter 6 The contributions from the school Chapter 7 The contributions from the teacher Chapter 8 The contributions from the curricula Chapter 9 The contributions from teaching approaches - I Chapter 10 The contributions from teaching approaches - II Chapter 11: Bringing it all together Appendix A: The 800 meta-analyses Appendix B: The meta-analyses by rank order References

6,776 citations

Journal ArticleDOI
TL;DR: This article summarizes the research on the positive effects of playing video games, focusing on four main domains: cognitive, motivational, emotional, and social, and proposes some candidate mechanisms by which playing videoGames may foster real-world psychosocial benefits.
Abstract: Video games are a ubiquitous part of almost all children's and adolescents' lives, with 97% playing for at least one hour per day in the United States. The vast majority of research by psychologists on the effects of "gaming" has been on its negative impact: the potential harm related to violence, addiction, and depression. We recognize the value of that research; however, we argue that a more balanced perspective is needed, one that considers not only the possible negative effects but also the benefits of playing these games. Considering these potential benefits is important, in part, because the nature of these games has changed dramatically in the last decade, becoming increasingly complex, diverse, realistic, and social in nature. A small but significant body of research has begun to emerge, mostly in the last five years, documenting these benefits. In this article, we summarize the research on the positive effects of playing video games, focusing on four main domains: cognitive, motivational, emotional, and social. By integrating insights from developmental, positive, and social psychology, as well as media psychology, we propose some candidate mechanisms by which playing video games may foster real-world psychosocial benefits. Our aim is to provide strong enough evidence and a theoretical rationale to inspire new programs of research on the largely unexplored mental health benefits of gaming. Finally, we end with a call to intervention researchers and practitioners to test the positive uses of video games, and we suggest several promising directions for doing so.

1,546 citations

Journal ArticleDOI
TL;DR: In this article, the authors used meta-analytic techniques to investigate whether serious games are more effective in terms of learning and more motivating than conventional instruction methods (learning: k = 77, N = 5,547; motivation: k = 31, N = 2,216).
Abstract: It is assumed that serious games influence learning in two ways: by changing cognitive processes and by affecting motivation. However, until now research has shown little evidence for these assumptions. We used meta-analytic techniques to investigate whether serious games are more effective in terms of learning and more motivating than conventional instruction methods (learning: k = 77, N = 5,547; motivation: k = 31, N = 2,216). Consistent with our hypotheses, serious games were found to be more effective in terms of learning (d = 0.29, p < .05) than conventional instruction methods. Additional moderator analyses on the learning effects revealed that learners in serious games learned more, relative to those taught with conventional instruction methods, when the game was supplemented with other instruction methods, when multiple training sessions were involved, and when players worked in groups.

1,199 citations
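The d = 0.29 reported above is a standardized mean difference (Cohen's d): the gap between the game group's and the comparison group's means divided by their pooled standard deviation. The numbers below are invented to show the arithmetic and are not the study's data.

import math

def cohens_d(m1, s1, n1, m2, s2, n2):
    # pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# hypothetical post-test scores: serious-game condition vs. conventional instruction
print(round(cohens_d(m1=74.0, s1=10.0, n1=40, m2=71.0, s2=10.5, n2=40), 2))  # ~0.29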

Journal ArticleDOI
TL;DR: Results suggest games show higher learning gains than simulations and virtual worlds, and for simulation studies, elaborate explanation type feedback is more suitable for declarative tasks whereas knowledge of correct response is more appropriate for procedural tasks.
Abstract: The purpose of this meta-analysis is to examine the overall effect, as well as the impact of selected instructional design principles, in the context of virtual reality technology-based instruction (i.e., games, simulations, virtual worlds) in K-12 or higher education settings. A total of 13 studies (N = 3081) in the category of games, 29 studies (N = 2553) in the category of simulations, and 27 studies (N = 2798) in the category of virtual worlds were meta-analyzed. The key inclusion criteria were that the study came from K-12 or higher education settings, used experimental or quasi-experimental research designs, and used a learning outcome measure to evaluate the effects of the virtual reality-based instruction. Results suggest games (FEM = 0.77; REM = 0.51), simulations (FEM = 0.38; REM = 0.41), and virtual worlds (FEM = 0.36; REM = 0.41) were effective in improving learning outcome gains. The homogeneity analysis of the effect sizes was statistically significant, indicating that the studies were different from each other. Therefore, we conducted moderator analysis using 13 variables used to code the studies. Key findings included that games show higher learning gains than simulations and virtual worlds. For simulation studies, elaborate explanation type feedback is more suitable for declarative tasks, whereas knowledge of correct response is more appropriate for procedural tasks. Students' performance is enhanced when they conduct the game play individually rather than in a group. In addition, we found an inverse relationship between the number of treatment sessions and learning gains for games. With regard to virtual worlds, we found that repeated measurement of students deteriorates their learning outcome gains. We discuss the results to highlight the importance of considering instructional design principles when designing virtual reality-based instruction. Highlights: A comprehensive review of virtual reality-based instruction research. Analysis of the moderation effects of design features in a virtual environment. Using an advanced statistical technique of meta-analysis to study the effects. Virtual reality environments are effective for teaching in K-12 and higher education. Results can be used by instructional designers to design virtual environments.

1,040 citations
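The FEM and REM figures above are fixed-effect and random-effects pooled estimates. The sketch below shows standard inverse-variance pooling for both, with a DerSimonian-Laird estimate of the between-study variance; the per-study effect sizes and variances are invented and are not the values from this meta-analysis.

import numpy as np

d = np.array([0.90, 0.55, 0.30, 1.10, 0.40])   # hypothetical study effect sizes
v = np.array([0.04, 0.06, 0.03, 0.08, 0.05])   # hypothetical sampling variances

w = 1.0 / v                                    # fixed-effect weights
fem = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird estimate of the between-study variance tau^2
q = np.sum(w * (d - fem) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(d) - 1)) / c)

w_star = 1.0 / (v + tau2)                      # random-effects weights
rem = np.sum(w_star * d) / np.sum(w_star)

# When studies disagree, the random-effects estimate weights small studies relatively more.
print(round(fem, 2), round(rem, 2))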