
Answers from top 4 papers

Papers (4) · Insights

Both cross-validation and the accuracy of the algorithm show that SVM performs well.

Proceedings Article (DOI) — Minal Deshpande, Preeti Bajaj, 01 Dec 2016 (16 citations)

The improved performance measure shows satisfactory results upon application of SVM.

Experimental results show that the prediction accuracy of conventional SVM may be improved significantly by using our model.

Book Chapter (DOI) — 02 Jul 2001 (18 citations)

Compared to single SVMs, the multi-SVM classification system exhibits promising accuracy on well-known data sets.

See what other people are reading

What is Dropout in deep learning?
4 answers
Dropout is a widely used regularization technique in deep learning that randomly deactivates a fraction of a layer's units at each training step, preventing overfitting by discouraging units from co-adapting. It was introduced by Hinton et al. in 2012 and has since become standard practice in neural network training. Beyond overfitting, dropout can also mitigate underfitting when it is applied only at the start of training, because it reduces the directional variance of gradients across mini-batches and keeps them better aligned with the gradient of the whole dataset. This early dropout, switched off after the initial training phase, can improve final model performance, whereas late dropout, activated only later in training, can effectively regularize models that are overfitting. Dropout has also proven effective in few-shot learning tasks such as object detection and image classification, where it enhances the generalization of pre-trained models.
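As a rough illustration of the mechanism described above, the sketch below implements standard inverted dropout on a batch of activations with NumPy; the keep probability, array shapes, and training flag are illustrative assumptions rather than details drawn from the papers summarized here.

```python
import numpy as np

def dropout(activations: np.ndarray, p: float = 0.5, training: bool = True) -> np.ndarray:
    """Inverted dropout: randomly zero units with probability p during training.

    Scaling the surviving units by 1/(1 - p) keeps the expected activation
    unchanged, so the layer can act as an identity at inference time.
    """
    if not training or p == 0.0:
        return activations
    keep_prob = 1.0 - p
    mask = (np.random.rand(*activations.shape) < keep_prob).astype(activations.dtype)
    return activations * mask / keep_prob

# Example: apply dropout to a batch of hidden activations during training.
hidden = np.random.randn(4, 8)            # (batch, units)
dropped = dropout(hidden, p=0.5, training=True)
```

The early and late dropout schedules mentioned above amount to toggling the training behavior of such a layer on or off at different points in the training run.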
How do machine learning and natural language processing (NLP) contribute to the development of new languages?
5 answers
Machine learning and natural language processing (NLP) play crucial roles in the development of new languages by enabling machines to understand, interpret, and generate human language. These technologies rely on algorithms and models that analyze text, speech, and documents, supporting tasks such as sentiment analysis, text classification, and question answering. In the context of creating new languages for machines, one proposed model aims to let machines develop their own language using machine learning algorithms and artificial intelligence. By combining NLP with machine learning, such systems can learn to communicate in a language of their own, much as humans do, improving efficiency and enabling effective machine-to-machine communication. Together, NLP and machine learning allow machines to generate new sounds, classify them, and ultimately develop their own languages, in a process resembling human language development.
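To make tasks like sentiment analysis and text classification concrete, here is a minimal scikit-learn sketch of a text classifier; the toy examples, labels, and choice of TF-IDF features with logistic regression are illustrative assumptions, not the pipeline of any cited work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentiment-classification data; real systems train on large corpora.
texts = [
    "great product, works well",
    "terrible, broke after a day",
    "absolutely love it",
    "waste of money",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features plus logistic regression: a common text-classification baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["works great", "broke immediately"]))
```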
Is subgrouping for chronic low back pain working?
4 answers
Subgrouping for chronic low back pain (CLBP) shows promise but with limitations. Various classification schemes like the Impact Stratification Score (ISS) and tools such as the Subgroups for Targeted Treatment (STarT) Back Screening Tool have been successful in predicting outcomes and guiding therapy. Studies have identified subgroups within CLBP based on factors like pain interference, psychosocial aspects, and physical disability, showing differences in treatment responses. However, challenges exist in accurately classifying individuals with CLBP, especially when distinguishing them from pain-free controls. While some subgroups may benefit more from specific treatments like passive physical or psychological interventions, the overall clinical significance of subgrouping remains limited due to small subgroup sizes and modest additional benefits observed. Further research optimizing variables and refining subgrouping methods is needed to enhance the effectiveness of subgrouping strategies for CLBP.
Are there feedback and assessment frameworks?
5 answers
Various frameworks exist to enhance feedback and assessment practices in educational settings. One such framework focuses on integrating digital tools into assessment design to improve assessment quality and relevance to a digitally mediated society. A feedback literacy model tailored for e-assessments aims to promote effective feedback practices in digital learning environments, emphasizing the importance of student engagement with feedback. In addition, a data mining-based framework analyzes student feedback to enhance the learning process by automatically detecting feedback polarity, classifying assessment issues, and correlating those issues with student performance. These frameworks give educators guidance for creating or modifying assessments, stimulating critical thinking about assessment design and offering practical support.
What are the primary advantages and disadvantages of using Support Vector Machines (SVM) in classification and regression tasks?
4 answers
Support Vector Machines (SVM) offer advantages such as robustness in handling large numbers of predictors with limited sample sizes, enhanced generalization ability when incorporating known feature-evolution priors, and good performance in terms of accuracy, sparsity, and flexibility in regression tasks. However, SVMs have limitations like sensitivity to noise and outliers, which can affect the model's boundary. Additionally, traditional SVMs may not fully utilize training data, leading to potential loss of information and local accuracy issues. To address these limitations, researchers have proposed variants and extensions of SVM, such as modified SVR models that extract more information from existing training data to improve performance without requiring additional points.
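As a concrete reference point for the classification and regression settings discussed above, the following scikit-learn sketch fits an SVM classifier and an SVM regressor on synthetic data; the datasets, kernels, and hyperparameters are placeholder assumptions, not values from the cited papers.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, SVR

# --- Classification: SVC with an RBF kernel ---
Xc, yc = make_classification(n_samples=200, n_features=20, random_state=0)
Xc_tr, Xc_te, yc_tr, yc_te = train_test_split(Xc, yc, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xc_tr, yc_tr)
print("classification accuracy:", clf.score(Xc_te, yc_te))

# --- Regression: epsilon-SVR ---
Xr, yr = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)
Xr_tr, Xr_te, yr_tr, yr_te = train_test_split(Xr, yr, random_state=0)
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(Xr_tr, yr_tr)
print("regression R^2:", reg.score(Xr_te, yr_te))
```

The C and epsilon parameters control how much the model tolerates misclassified or noisy points, which relates directly to the sensitivity to noise and outliers noted above.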
What are the different levels of machine supervision to learn from data with disagreement?
5 answers
Different levels of machine supervision for learning from data with disagreement include traditional supervised learning, which collapses annotations with majority voting, and jury learning, which explicitly resolves label disagreements by defining which groups of annotators determine the classifier's prediction. Semi-supervised learning methods based on belief functions address the uncertainties encountered in such learning tasks by making effective use of limited supervised information to classify data. For risk management in healthcare decision-making, models such as Gaussian processes are important for providing accurate uncertainty estimates, especially when working with self-supervised labels and a risk of overfitting. These approaches cater to different scenarios in which disagreement among annotations must be managed effectively.
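The contrast between collapsing annotations by majority vote and keeping the disagreement visible can be sketched as follows; the annotator labels and helper functions are hypothetical illustrations, not an implementation of jury learning or the other cited methods.

```python
from collections import Counter

# Rows = items, columns = annotators; three annotators who sometimes disagree.
annotations = [
    ["toxic", "toxic", "ok"],
    ["ok", "ok", "ok"],
    ["toxic", "ok", "ok"],
]

def majority_vote(labels):
    """Collapse annotator disagreement into a single label (traditional supervision)."""
    return Counter(labels).most_common(1)[0][0]

def label_distribution(labels):
    """Keep the full label distribution so a model can learn from disagreement."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

for row in annotations:
    print(majority_vote(row), label_distribution(row))
```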
What is the Associative Network Theory in context of consumer behavior?
5 answers
The Associative Network Theory in consumer behavior involves utilizing network analysis to understand the interconnected mental representations consumers have, like brand images. This theory combines Human Associative Memory models with network analytic approaches to delve deeper into consumer cognition. By analyzing associative networks at different levels, insights into salient brand image components, strategic brand dimensions, and overall network structures can be gained. This approach allows for the identification of elements that can be influenced through short-term marketing efforts, defining brand characteristics for medium-term strategies, and managing brand imagery in the long term. The theory aids in comprehending how consumers mentally connect different aspects of products or services, providing valuable insights for marketing and brand management.
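A minimal sketch of the network-analytic view described above, using networkx on an invented brand-association graph; the nodes, association strengths, and choice of centrality measure are made-up assumptions for illustration only.

```python
import networkx as nx

# Hypothetical brand-association network: edge weights = association strength.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Brand X", "reliable", 0.8),
    ("Brand X", "expensive", 0.5),
    ("reliable", "quality", 0.7),
    ("expensive", "premium", 0.6),
    ("quality", "premium", 0.4),
])

# Centrality highlights which associations are most connected in the network.
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
```

Highly central nodes would correspond to the salient brand image components discussed above.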
Can machine learning models predict potential issues with building ventilation systems, and if so, how accurate are these predictions?
5 answers
Machine learning models can effectively predict potential issues with building ventilation systems. Several studies have applied algorithms such as deep neural networks (DNN), support vector regression (SVR), and random forests (RF) to predict ventilation rates. Fault detection and diagnosis (FDD) techniques, particularly those based on Artificial Neural Networks (ANNs), have also shown promise for identifying faults in Heating, Ventilation, and Air Conditioning (HVAC) systems. Reported accuracy is generally high; in one comparison, the DNN outperformed SVR and RF, achieving a Mean Absolute Percentage Error (MAPE) of 20.1%. An LSTM model has likewise been used for maintenance prediction in ventilator systems, reaching 82% accuracy in predicting failures. Collectively, these findings highlight the effectiveness of machine learning models for predicting issues in building ventilation systems.
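The sketch below shows, in hedged form, how one such setup might look: a random forest regressor predicting a ventilation rate from sensor features, evaluated with MAPE. The synthetic data and feature interpretation are assumptions; they do not reproduce the cited studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensor readings (e.g. CO2, temperature, occupancy, fan speed).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Hypothetical ventilation rate as a noisy function of the features.
y = 10 + 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2] * X[:, 3] + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
print(f"MAPE: {mape:.1%}")
```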
What work has been done on spoken audio content for data analysis?
5 answers
Various works have been conducted in the field of spoken audio content for data analysis. One approach involves exploring the potential of audio feature analysis for spoken word collections, aiming to extract extra-semantic information directly from audio files. Additionally, sentiment analysis on speaker-discriminated speech transcripts has been proposed to detect emotions in conversations, focusing on developing efficient algorithms for speaker discrimination and sentiment analysis. Furthermore, the use of weakly supervised learning frameworks, particularly multiple instance learning algorithms, has been suggested for audio event detection tasks, enabling the detection of audio events through weak labels with reduced annotation effort and expense. These studies collectively contribute to advancing the analysis of spoken audio content for various purposes.
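As a hedged illustration of the audio feature analysis mentioned above, this sketch extracts MFCC features from an audio file with librosa; the file path is a placeholder and the feature choice is an assumption, not the specific pipeline used in the cited studies.

```python
import librosa
import numpy as np

# Placeholder path; substitute a real recording from a spoken-word collection.
audio_path = "speech_sample.wav"
signal, sr = librosa.load(audio_path, sr=16000)

# MFCCs are a common starting point for speech, sentiment, and event-detection features.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

# Summarize over time so each clip becomes one fixed-length feature vector.
clip_features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(clip_features.shape)  # (26,)
```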
How do net neutrality rules and legitimate traffic management practices affect the operation of blockchain platforms?
5 answers
Net neutrality rules and legitimate traffic management practices play a crucial role in shaping how blockchain platforms operate. Provably neutral clouds, for instance, are regarded as essential for enhancing blockchain scalability. In Intelligent Transportation Systems (ITS), blockchain-enabled crowdsensing frameworks use Deep Reinforcement Learning (DRL) and Distributed Alternating Direction Method of Multipliers (DIADEM) algorithms to optimize traffic management while accounting for blockchain safety and latency. However, challenges such as oracle issues, limited interoperability between blockchain platforms, and the absence of measures for handling private key loss hinder the full utilization of blockchain-based securities. Introducing blockchain middleware has been proposed as a way to address these limitations and improve the effectiveness of blockchain-based securities.
Is there a positive relationship between traffic congestion and urban areas?
5 answers
Yes, there is a positive relationship between traffic congestion and urban areas. Urban areas experience increasing traffic congestion due to factors like unplanned road networks, high vehicle volume, and critical congestion areas. This congestion leads to adverse effects on the environment and economy, impacting citizens through air pollution, fuel consumption, noise pollution, accidents, and loss of time. Efforts to manage congestion through public policies can indirectly affect road safety, making cities more sustainable in terms of travel time and accident probability. Predicting traffic flow in urban spaces is crucial for congestion control, with advanced methods like Vision Transformers showing promise in accurately forecasting traffic congestion and aiding in energy conservation.