Topic

Decision tree model

About: Decision tree model is a research topic. Over the lifetime, 2256 publications have been published within this topic receiving 38142 citations.


Papers
Journal ArticleDOI
TL;DR: In this article, a decision tree model is used to identify the variables most strongly associated with global house-price declines in unbalanced annual panel data for 58 countries over 1976-2018, and the top 15 variables by importance are used to compare countries' exposure to a price correction.
Abstract: This study uses a decision tree model, a type of machine learning, to analyze the variables most strongly associated with declines in global real-estate prices. The analysis covers 1976 to 2018 and applies a three-stage procedure to unbalanced annual panel data for 58 countries. It identifies the 15 explanatory variables with the highest importance for global real-estate price-decline events. Comparing the averages of these 15 variables during global price-decline episodes with the corresponding indicators for major countries in 2016 shows that, around 2016, the likelihood of a sharp correction in Korean real-estate prices was relatively low compared with other countries. The study is expected to serve as a useful reference for short-term real-estate price forecasting through international comparison.
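The following is a minimal sketch, not the paper's code, of how a decision tree can rank candidate variables by importance on country-year panel data; the file name and column names are hypothetical placeholders.

```python
# Hedged sketch only (not the paper's code): ranking macro variables by importance
# for a binary "house-price decline" event with a decision tree on country-year
# panel data. File name and column names are hypothetical placeholders.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

panel = pd.read_csv("housing_panel.csv")   # assumed layout: 58 countries, 1976-2018, annual rows
feature_cols = [c for c in panel.columns if c not in ("country", "year", "price_decline")]

X = panel[feature_cols].fillna(panel[feature_cols].median())
y = panel["price_decline"]                 # 1 = price-decline event in that country-year, else 0

tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# Rank explanatory variables by impurity-based importance and keep the top 15,
# mirroring the paper's top-15 variable selection step.
importance = pd.Series(tree.feature_importances_, index=feature_cols)
print(importance.sort_values(ascending=False).head(15))
```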
Patent
25 Jun 2014
TL;DR: In this paper, a domain feature tree modeling method based on the i* framework is presented. The model is constructed in a semi-automated mode, and the mapped i* elements are guaranteed to be more comprehensive than those produced by the original methods.
Abstract: The invention discloses a domain feature tree modeling method based on an i* framework. The domain feature tree modeling method includes the steps of firstly, reading a tel file or a q7 file or an XML file of the i* framework, obtaining participant information, and sequentially placing participants into a participant queue in the access sequence; secondly, mapping an i* element in each participant into a feature tree model, and building repulsion relations between the features in the feature tree models; thirdly, building dependence relations between the features according to different dependence chains, and automatically associating the multiple feature tree models generated by the different participants; finally, improving a domain feature tree model, and outputting information of the final domain feature tree model. According to the domain feature tree modeling method, it is guaranteed that the mapped i* elements are more comprehensive than mapped i* elements in original methods, and the domain feature tree model is constructed in a semi-automation mode.
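A toy sketch of the mapping step described above is given below; the classes, fields and input format are illustrative assumptions rather than the patented implementation or its tel/q7/XML parsers.

```python
# Illustrative toy sketch of the mapping step; class and field names are assumptions,
# not the patented method's actual data model or file parsers.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    children: list = field(default_factory=list)
    excludes: set = field(default_factory=set)    # "repulsion" relations between features
    depends_on: set = field(default_factory=set)  # dependence relations along dependency chains

def build_feature_trees(participants):
    """Map each participant's i* elements to its own feature tree, in access order."""
    queue = deque(participants)                   # participant queue in access sequence
    trees = {}
    while queue:
        actor = queue.popleft()
        root = Feature(actor["name"])
        for element in actor.get("elements", []): # goals, tasks, resources from the i* model
            root.children.append(Feature(element))
        trees[actor["name"]] = root
    return trees

# Usage with toy data standing in for the parsed participant information.
trees = build_feature_trees([
    {"name": "Customer", "elements": ["Place order", "Track delivery"]},
    {"name": "Retailer", "elements": ["Fulfil order"]},
])
trees["Customer"].children[0].depends_on.add("Fulfil order")   # cross-tree dependence relation
print(trees["Customer"])
```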
Journal Article
TL;DR: Modifications for constructing geometric decision tree classifiers using adaptive boosting are presented and applied to a standard data set from the UCI repository.
Abstract: Decision trees have proved to be valuable tools for the description, classification and generalization of data. Work on constructing decision trees from data exists in multiple disciplines such as statistics, pattern recognition, decision theory, signal processing, machine learning and artificial neural networks. This paper proposes a modified geometric decision tree algorithm using a boosting process. Decision trees are considered one of the most popular approaches for performing classification, and the classification trees they produce are simple to understand and interpret. This paper presents modifications for constructing geometric decision tree classifiers using adaptive boosting. In this algorithm the tree is constructed using the geometric structure of the data: the best angle bisector (hyperplane) is evaluated as the split rule for the decision tree structure, and the resulting multidimensional hyperplanes help to build small decision trees with better performance. Furthermore, to enhance performance, we propose an adaptive boosting method that creates multiple small geometric decision trees with the aim of improving predictive performance in terms of accuracy, confusion matrix, precision and recall. The proposed modification to the geometric decision tree is applied to a standard data set from the UCI repository.
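The sketch below shows the boosting wrapper under stated assumptions: the paper's geometric angle-bisector split rule is not available in scikit-learn, so a standard axis-parallel tree stands in as the small weak learner, a bundled benchmark set stands in for the UCI data, and the `estimator=` keyword assumes scikit-learn 1.2 or later (older versions use `base_estimator=`).

```python
# Hedged sketch: an AdaBoost ensemble of many small trees, with an axis-parallel
# DecisionTreeClassifier standing in for the paper's geometric (angle-bisector) splits.
from sklearn.datasets import load_breast_cancer            # stands in for a UCI benchmark set
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

booster = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=2),          # many small trees, as in the paper
    n_estimators=50,
    random_state=0,
)
booster.fit(X_tr, y_tr)

pred = booster.predict(X_te)
print(confusion_matrix(y_te, pred))                          # confusion matrix, as evaluated in the paper
print(classification_report(y_te, pred))                     # accuracy, precision, recall
```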
01 Jan 2014
TL;DR: The intersection of communication complexity and distributed computing is studied; number-in-hand models, in which each player sees only her own input, apply immediately to distributed computing problems and have been studied extensively.
Abstract: In these notes, we study the intersection of communication complexity and distributed computing. To understand why distributed computing researchers care about communication complexity tools and results, we briefly turn our attention to distributed computing. In traditional distributed computing models, local computation is free, and communication between parties is expensive. Typical complexity measures include the number of messages sent (message complexity), the total number of bits sent (bit complexity), and the total number of rounds of computation in synchronous models (round complexity). Communication complexity is interested in these same complexity measures, and the most common communication complexity models are very similar (in some cases identical) to distributed computing models. In part, this is because communication complexity emerged from the study of distributed computing, and early interest in its models was driven by applications in distributed computing. For example, number-on-forehead models, wherein each player can see only the inputs of other players, were initially considered unrealistic (and less interesting) because they did not coincide with distributed computing models. In contrast, number-in-hand models, wherein each player sees only her own input, can be applied immediately to distributed computing problems, and have been studied extensively.
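As a toy illustration of the bit-complexity measure in a number-in-hand setting (not an example from the notes), the sketch below counts the bits exchanged by a trivial two-player EQUALITY protocol in which Alice simply sends her whole input.

```python
# Toy illustration: a trivial number-in-hand protocol for EQUALITY between two players,
# counting the bits exchanged (the bit-complexity measure described above).
# Local computation is treated as free, as in the models discussed.
def equality_protocol(x: int, y: int, n_bits: int):
    """Alice holds x, Bob holds y; Alice sends all n_bits of x, Bob replies with one bit."""
    message = format(x, f"0{n_bits}b")                   # Alice -> Bob: n_bits bits
    answer = int(message == format(y, f"0{n_bits}b"))    # Bob compares locally, for free
    bits_sent = n_bits + 1                               # Alice's message plus Bob's 1-bit answer
    return answer, bits_sent

print(equality_protocol(0b1011, 0b1011, 4))   # (1, 5)
print(equality_protocol(0b1011, 0b0011, 4))   # (0, 5)
```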
Proceedings ArticleDOI
22 Sep 2013
TL;DR: A probabilistic search algorithm for rigid-body protein-protein docking that employs a machine learning model to score bound configurations prior to subjecting promising configurations to local optimization with a sophisticated force field is presented.
Abstract: We present a probabilistic search algorithm for rigid-body protein-protein docking. The algorithm is a realization of the basin hopping framework for sampling low-energy local minima of a given energy function. To save computational resources, the algorithm employs a machine learning model to score bound configurations prior to subjecting promising configurations to local optimization with a sophisticated force field. The machine learning model is a decision tree trained on known native dimers to learn features that constitute true interaction interfaces. The FoldX force field is employed only on sampled dimeric configurations determined by the decision tree model to contain true interaction interfaces. The preliminary results are promising and motivate us to further investigate such an informatics-driven approach to protein-protein docking.
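A schematic sketch of the screening idea, with assumed placeholder features and a stubbed force field, is shown below; it reduces basin hopping to simple random restarts for brevity and is not the authors' pipeline.

```python
# Schematic sketch of the filtering idea: a cheap decision-tree filter screens sampled
# configurations, and only configurations predicted to contain a true interface reach the
# expensive force-field scoring (FoldX in the paper; a stub here). Features, energies and
# the search loop are placeholders, not the authors' implementation.
import random
from sklearn.tree import DecisionTreeClassifier

def interface_features(config):
    # Placeholder features describing the candidate interaction interface.
    return [config["contact_area"], config["hbond_count"]]

def expensive_force_field(config):
    # Stand-in for the costly force-field evaluation of a dimeric configuration.
    return -config["contact_area"] + 0.1 * random.random()

def filtered_search(sample_config, filter_tree, n_iter=100):
    best, best_energy = None, float("inf")
    for _ in range(n_iter):
        config = sample_config()                                   # random rigid-body placement
        if filter_tree.predict([interface_features(config)])[0] != 1:
            continue                                               # tree rejects: skip costly scoring
        energy = expensive_force_field(config)
        if energy < best_energy:
            best, best_energy = config, energy
    return best, best_energy

# Train the filter on labelled toy examples (the paper trains on known native dimers).
train_X = [[8.0, 5], [7.5, 4], [1.0, 0], [0.5, 1]]
train_y = [1, 1, 0, 0]
filter_tree = DecisionTreeClassifier(max_depth=2).fit(train_X, train_y)

def random_config():
    return {"contact_area": random.uniform(0.0, 10.0), "hbond_count": random.randint(0, 6)}

print(filtered_search(random_config, filter_tree, n_iter=50))
```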

Network Information
Related Topics (5)
Cluster analysis: 146.5K papers, 2.9M citations, 80% related
Artificial neural network: 207K papers, 4.5M citations, 78% related
Fuzzy logic: 151.2K papers, 2.3M citations, 77% related
The Internet: 213.2K papers, 3.8M citations, 77% related
Deep learning: 79.8K papers, 2.1M citations, 77% related
Performance Metrics
No. of papers in the topic in previous years:
2023: 10 papers
2022: 24 papers
2021: 101 papers
2020: 163 papers
2019: 158 papers
2018: 121 papers