Answers from top 7 papers

The selection of SVM parameters strongly influences the classification accuracy of the SVM.
The experimental results demonstrate that, compared with the cross-validation method, the proposed method improves both the accuracy of SVM parameter selection and the generalization performance of the SVM.
Proceedings article — Lucas K. C. Lai, James N. K. Liu, 11 Jul 2010 (22 citations)
SVM performs well in short-term forecasting.
The simulation results show that the proposed approach can effectively tune the parameters of the LS-SVM, and that the improved LS-SVM with a wavelet kernel provides better precision.
The proposed classifier combines the advantages of principal component analysis and SVM to improve the classification performance of the traditional SVM.
The proposed MS-SVM method exhibits superior low-order harmonic performance, comparable to that of SHE, and provides the same fast and flexible control capability as the conventional SVM.
Open-access proceedings article — 14 Aug 2009 (10 citations)
Experiments demonstrated that the integrated GA-SVM approach is superior to conventional SVM applications.
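
To make the parameter-selection point above concrete, here is a minimal sketch (using scikit-learn and a synthetic dataset, both illustrative assumptions, not any of the cited papers' methods) showing how cross-validated accuracy shifts with the SVM's C and gamma:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in data; any labeled dataset would do.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Score a small, illustrative grid of (C, gamma) pairs by 5-fold CV
# to see how strongly the parameter choice moves accuracy.
for C in (0.1, 1.0, 10.0):
    for gamma in (0.01, 0.1, 1.0):
        clf = SVC(kernel="rbf", C=C, gamma=gamma)
        score = cross_val_score(clf, X, y, cv=5).mean()
        print(f"C={C:<5} gamma={gamma:<5} CV accuracy={score:.3f}")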

Related Questions

Does the Relevance Vector Machine (RVM) model have hyperparameter tuning issues?
5 answers
The Relevance Vector Machine (RVM) model does have hyperparameter tuning issues. Several papers discuss the importance of selecting appropriate hyperparameters for RVM models and propose various methods to address the issue, including grid search, random search, Bayesian optimization, simulated annealing, particle swarm optimization, and evolutionary algorithms. They compare the performance of different search algorithms and find that some, such as trees of Parzen estimators and particle swarm optimization, can select better hyperparameters with only a slight increase in computation time compared to grid search. Overall, the papers emphasize the importance of tuning the hyperparameters of RVM models to achieve better performance and provide insights into the different methods that can be used for this purpose.
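
As an illustration of the tree-of-Parzen-estimators search mentioned above, here is a minimal sketch using Optuna (whose default sampler is TPE); since RVM is not part of scikit-learn's core, an SVM is used purely as a stand-in estimator:

import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

def objective(trial):
    # Log-uniform search ranges are illustrative assumptions.
    C = trial.suggest_float("C", 1e-3, 1e3, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e1, log=True)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Optuna's default sampler is TPE (trees of Parzen estimators).
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)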
What is the role of tuning parameters in machine learning algorithms?
5 answers
Tuning parameters play a crucial role in machine learning algorithms: models perform well only when these parameters take suitable values. Hyperparameter tuning is the process of determining the most effective parameter settings for a learning algorithm. Methods such as grid search, random search, Bayesian optimization, genetic algorithms, SHERPA, and Optuna can be used to tune hyperparameters, with the objective of finding the best tuning method for each algorithm. Grid search is a commonly used approach for identifying ranges of tuning parameters that lead to good models across different datasets; the process involves exploring different optimization algorithms to select a model within the tuning-parameter space. Hyperparameter tuning can significantly improve the performance of algorithms such as multinomial logistic regression, support vector machines, and artificial neural networks.
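
A minimal sketch contrasting two of the methods named above, grid search and random search, via scikit-learn; the grid and distributions are illustrative only:

from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Grid search: exhaustive over a small, fixed grid.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-3, 1e-2, 1e-1]}, cv=5)
grid.fit(X, y)

# Random search: samples the same region from continuous distributions.
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-1, 1e1), "gamma": loguniform(1e-3, 1e-1)},
    n_iter=20, cv=5, random_state=0,
)
rand.fit(X, y)
print("grid:", grid.best_params_, "random:", rand.best_params_)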
What is the best way to tune hyperparameters?
4 answers
The best way to tune hyperparameters is to adopt established best practices from AutoML, such as separating tuning seeds from testing seeds and running principled hyperparameter optimization (HPO) across a broad search space. In deep reinforcement learning (RL), this approach has been shown to significantly affect an agent's final performance and sample efficiency. Comparisons between several state-of-the-art HPO tools and hand-tuned counterparts have shown that HPO approaches often achieve higher performance with lower compute overhead. Efficient formulas for the gradient and Hessian of approximate leave-one-out cross-validation have also been derived, allowing second-order optimizers to be applied to hyperparameter search. Hyperparameter optimization is more nuanced than previously believed, however, and which specific optimizers should be endorsed remains an open question.
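
The seed-separation practice described above can be sketched as follows; train_and_evaluate is a hypothetical placeholder for a real training run, not a library call:

import random

def train_and_evaluate(config, seed):
    # Hypothetical placeholder: a real version would train an RL agent
    # with this seed and return its final score.
    rng = random.Random(seed)
    return -abs(config["lr"] - 3e-4) + rng.uniform(-0.01, 0.01)

TUNING_SEEDS, TEST_SEEDS = range(0, 5), range(5, 10)
candidates = [{"lr": lr} for lr in (1e-4, 3e-4, 1e-3)]

# Tune only on the tuning seeds...
best = max(candidates,
           key=lambda c: sum(train_and_evaluate(c, s) for s in TUNING_SEEDS))
# ...then report performance on seeds the search never saw.
test_score = sum(train_and_evaluate(best, s) for s in TEST_SEEDS) / len(TEST_SEEDS)
print(best, test_score)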
How to perform SVM?
6 answers
How to tune a regression model?
7 answers
How to tune a transistor radio?
8 answers

See what other people are reading

What are the current models used for combating imbalanced data and interpretability in money laundering?
6 answers
The challenge of combating imbalanced data and enhancing interpretability in the context of money laundering detection has led to the development and application of various models and techniques. Deep learning models, despite their increasing application in high-stakes fields like money laundering detection, often struggle with imbalanced data, which exacerbates their black-box nature and complicates the interpretability of their decisions. To address these issues, researchers have proposed frameworks and techniques tailored to modern deep networks that can uncover data complexities commonly found in imbalanced datasets, such as class overlap, sub-concepts, and outlier instances.

One approach to improving the reliability of imbalanced classification, specifically for fraudulent bank transactions, involves using the Tomek links resampling algorithm combined with machine learning models like TPOT and RandomForest; this method has shown promising results in detecting fraudulent transactions with high accuracy. Similarly, the literature review on the application of deep learning (DL) techniques for Anti-Money Laundering (AML) emphasizes the need for incorporating Explainable Artificial Intelligence (XAI) to enhance model interpretability, despite the limited research in this area. Strategies for optimizing supervised learning algorithms against imbalanced datasets include various resampling techniques, with oversampling techniques in particular achieving significant improvements in model performance on the minority class.

Additionally, novel methodologies like the Cost-Sensitive Cosine Similarity K-Nearest Neighbor (CoSKNN) and the K-modes Imbalance Classification Hybrid Approach (K-MICHA) have been developed to tackle the imbalance problem by adjusting the classification process to be more sensitive to the minority class. The DELATOR framework, which utilizes graph neural networks to learn from large-scale temporal graphs, represents an innovative approach to detecting money laundering activities; by adapting concepts from the GraphSMOTE framework and incorporating multi-task learning, DELATOR effectively addresses the challenges of learning from heavily imbalanced graph data.

Lastly, the optimization of classifiers based on Support Vector Machine (SVM) combined with Random Under Sampling (RUS) techniques has also been shown to effectively detect fraud in financial transactions by handling imbalanced datasets. These models and techniques collectively represent the current state of the art in combating imbalanced data and enhancing interpretability in the domain of money laundering detection.
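
As a small illustration of the Tomek-links resampling step mentioned above (not the cited papers' pipelines), here is a sketch using the imbalanced-learn library on synthetic data standing in for transaction records:

from collections import Counter
from imblearn.under_sampling import TomekLinks
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic 95/5 class imbalance standing in for transaction data.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("before:", Counter(y))

# Tomek links: remove majority-class points that form cross-class
# nearest-neighbor pairs, cleaning the class boundary.
X_res, y_res = TomekLinks().fit_resample(X, y)
print("after:", Counter(y_res))

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)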
How to best use random forest using transcriptome data for toxicity prediction?
4 answers
Random forest, a powerful machine learning method, can be effectively utilized for toxicity prediction by incorporating transcriptome data alongside chemical structural descriptions. This integration enhances prediction accuracy significantly. By leveraging genome-wide variants and preconditioned random forest regression (PRFR), researchers have successfully predicted urinary toxicity symptoms in prostate cancer patients undergoing radiotherapy, highlighting the potential of this approach for clinical applications. Additionally, in the context of chemical pollution and freshwater ecosystems, random forest analysis identified key predictors of toxicity, emphasizing the importance of factors like the chemical substance, fish species, and environmental conditions. Therefore, to optimize random forest for toxicity prediction, combining chemical and transcriptome data is crucial for improving accuracy and applicability in various toxicological studies.
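
A minimal sketch of the feature-level integration described above, with synthetic arrays standing in for real chemical-descriptor and gene-expression matrices:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
chem = rng.normal(size=(200, 50))            # stand-in structural descriptors
transcriptome = rng.normal(size=(200, 500))  # stand-in gene-expression values
y = rng.normal(size=200)                     # stand-in toxicity endpoint

# Feature-level integration: simply concatenate the two blocks.
X = np.hstack([chem, transcriptome])
model = RandomForestRegressor(n_estimators=500, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())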
How to best use random forest classification using transcriptome data for toxicity prediction?
4 answers
Random forest classification can be optimized for toxicity prediction by integrating transcriptome data with chemical descriptors. This combination enhances prediction accuracy by capturing both structural and biological information, improving model performance. Utilizing a heterogeneous dataset comprising chemical descriptors, protein target descriptors, and cytotoxicity data can significantly enhance predictive models, outperforming traditional approaches. Additionally, recursive random forests have shown efficiency in variable selection and model development for liver toxicity prediction, reducing redundant descriptors and highlighting the importance of specific factors like octanol-water partition coefficients and CYP2A2 endpoints. By leveraging these insights, incorporating transcriptome data into random forest models can lead to more robust and accurate toxicity predictions, benefiting drug development and public health.
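
The recursive variable-selection idea above can be loosely mirrored with recursive feature elimination driven by random-forest importances; this is a generic sketch, not the cited recursive-random-forest method:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=200, n_informative=10,
                           random_state=0)

# Drop 20% of the remaining features each round, ranked by forest importance.
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=20, step=0.2)
selector.fit(X, y)
print("kept features:", np.flatnonzero(selector.support_))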
What methods are currently being developed to improve the prediction and calculation of frost-free periods in agricultural planning?
5 answers
Various methods are being developed to enhance the prediction and calculation of frost-free periods in agricultural planning. One approach involves spatial interpolation based on climate data from existing weather stations, digital elevation models, and vegetation index data to estimate minimum temperatures. Machine learning algorithms, such as deep neural networks, convolutional neural networks, and random forest models, are also being employed for accurate frost-event prediction at the point scale, with promising results in reducing frost damage and improving minimum-temperature forecasts. Additionally, the integration of different machine learning methods has been proposed to provide users with a comprehensive frost-forecast system, combining the strengths of multiple approaches for enhanced prediction capabilities. These advancements aim to support farmers in making informed decisions and implementing preventive measures to mitigate frost-related risks in agriculture.
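
As a sketch of the point-scale prediction idea above, here is a toy random-forest frost-event classifier; the features and data are entirely hypothetical:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical predictors: evening temperature, humidity, wind speed, elevation.
X = rng.normal(size=(1000, 4))
# Hypothetical frost label driven mostly by the temperature column.
y = (X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=1000) < -1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))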
What is the importance of popular songs research?
5 answers
Research on popular songs holds significant importance across various fields. It aids in enhancing language learning by incorporating memory strategies and phonological aspects into teaching methods. Furthermore, studying popular music in relation to memory and cultural heritage broadens the horizons of popular music studies, deepening historical and geo-cultural perspectives while shedding light on neglected cultural forms and practices. Additionally, the use of popular songs in mass advertising is a common method to connect with audiences, emphasizing the importance of matching songs with brands to convey advertising messages effectively. Despite attempts to correlate song popularity with physical parameters using statistics and machine learning, no significant relationships were found, suggesting the need for alternative approaches in analyzing song popularity. Overall, research on popular songs contributes to language learning, cultural heritage exploration, effective advertising, and guides future music analysis and artificial intelligence endeavors.
How does the difficulty of balancing schooling with other commitments contribute to the dropout rate among students?
4 answers
The challenge of balancing schooling with other commitments significantly contributes to the dropout rate among students. Imbalanced data in Student Dropout Prediction (SDP) problems reduces the efficiency of Machine Learning classifiers, leading to inaccurate predictions of dropout students. Educational data mining techniques like Balanced Bagging, when used to predict dropout rates, highlight the importance of addressing this issue. Students facing difficulties in managing their academic responsibilities alongside other commitments are more likely to be classified as potential dropouts, emphasizing the impact of this struggle on educational outcomes. Research on student dropout in Greece also underscores the complexities faced by individuals who have left school prematurely, shedding light on the challenges and consequences of this decision.
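
The Balanced Bagging technique mentioned above is available in the imbalanced-learn library; a minimal sketch on synthetic data standing in for student records:

from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Synthetic 90/10 imbalance standing in for dropout vs. non-dropout records.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# Each bootstrap sample is re-balanced before fitting its base estimator.
clf = BalancedBaggingClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())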
How hidden Markov chains are used for diagnosis and prognosis of faults in battery packs for electrical vehicles ?
9 answers
The diagnosis and prognosis of faults in battery packs for electric vehicles (EVs) are critical for ensuring safety and reliability. While the provided contexts do not explicitly mention the use of hidden Markov chains, they offer a comprehensive overview of various methodologies and technologies applied in the fault diagnosis and prognosis of EV battery packs. These methodologies can be conceptually linked to the principles underlying hidden Markov models (HMMs), which are used for modeling systems where the state is only partially observable and the underlying processes are stochastic.

Entropy algorithms, as discussed by Zhang, Hong, and Xiaoming, provide a basis for diagnosing battery faults by analyzing abnormal fluctuations and entropy features, which could be analogous to the state transitions in HMMs. Similarly, the real-time fault diagnosis scheme using normalized discrete wavelet decomposition captures early frequency-domain features of fault signals, akin to observing emissions in an HMM that indicate underlying states. The probability analysis model and the Analytic Hierarchy Process (AHP) method for calculating the probability of battery failure based on real-time monitoring can be related to the probabilistic nature of HMMs, where the likelihood of transitioning to a particular state is calculated from observed data. Neural network models, including the Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks, demonstrate techniques for detecting and fixing battery problems, and could be integrated with HMMs for enhanced prediction and diagnosis by learning from sequential data.

Advanced methods, such as combining variational mode decomposition (VMD) with edit distance for fault diagnosis, addressing data redundancy in fault detection, and employing sensor topology and signal processing for fault diagnostics, all contribute to the broader landscape of battery pack fault diagnosis and prognosis. While these methodologies do not directly employ hidden Markov chains, they embody the essence of HMMs through their focus on analyzing sequential or temporal data, extracting features, and making probabilistic assessments based on observed and latent variables. Integrating these approaches with HMMs could potentially enhance the accuracy and efficiency of diagnosing and prognosing battery faults in EVs by leveraging the strengths of each method to account for the stochastic nature of fault progression and manifestation.
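
To ground the HMM machinery discussed above, here is a minimal sketch using the hmmlearn library on a synthetic 1-D signal; reading the three states as healthy/degraded/faulty is an assumption for illustration, not a validated diagnosis model:

import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Synthetic stand-in for a monitored battery signal (e.g. a voltage residual)
# drifting through three regimes.
signal = np.concatenate([rng.normal(0.0, 0.1, 300),
                         rng.normal(0.5, 0.2, 200),
                         rng.normal(1.5, 0.3, 100)]).reshape(-1, 1)

model = hmm.GaussianHMM(n_components=3, covariance_type="full", random_state=0)
model.fit(signal)
states = model.predict(signal)  # most likely hidden-state path (Viterbi)
print(np.bincount(states))      # how many samples fall in each latent state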
How artificial intelligence support additive manufacturing ?
5 answers
Artificial intelligence (AI) plays a crucial role in supporting additive manufacturing (AM) by enhancing various aspects of the process. AI enables designers and operators to optimize AM technologies, improve efficiency, and monitor production in real-time. Digital twins, utilizing AI and sensor data, aid in fault detection, reducing the need for physical prototypes and enhancing product development. The integration of AI and the Internet of Things in AM enhances tool path generation, generative design, support generation, and component inspection, leading to improved manufacturing processes. Moreover, AI techniques like the Global Herding Algorithm-based Neural Network optimize temperature profile predictions in AM processes, ensuring the production of high-quality parts with increased accuracy. Overall, AI's application in AM facilitates customization, efficiency, fault detection, and quality improvement in additive manufacturing processes.
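
As a generic stand-in for the neural-network temperature-prediction models mentioned above (not the cited Global Herding Algorithm), here is a sketch using a plain feed-forward regressor on hypothetical process parameters:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical process inputs: laser power, scan speed, layer height, position.
X = rng.uniform(size=(500, 4))
# Hypothetical temperature response with noise.
y = 200 + 80 * X[:, 0] - 40 * X[:, 1] + rng.normal(scale=5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
print("R^2 on held-out data:", mlp.score(X_te, y_te))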
What is AVNN of the HRV outcomes?
5 answers
The AVNN (the average of all the NN intervals) is a crucial HRV feature that can be accurately inferred from a shorter ECG interval of about 1 minute, with a mean error of less than 5% of the computed HRV features. This highlights the potential for real-time monitoring using a deep learning-based system, specifically a recurrent neural network, to estimate HRV features efficiently. Additionally, HRV, particularly the standard deviation of N-N intervals (SDNN), has been shown to be negatively associated with inflammatory markers like high-sensitive C-reactive protein (hs-CRP) and interleukin-6 (IL-6). These findings emphasize the importance of HRV in predicting stroke outcomes and its relationship with inflammatory markers, shedding light on its potential role in assessing autonomic nervous system function and overall health.
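
Since AVNN is simply the mean of the NN intervals, it can be computed directly; a tiny sketch with made-up interval values, also showing SDNN for comparison:

import numpy as np

# Made-up NN intervals in milliseconds; real values come from ECG beat detection.
nn_intervals_ms = np.array([812, 798, 805, 790, 820, 815, 801])

avnn = nn_intervals_ms.mean()       # AVNN: average of all NN intervals
sdnn = nn_intervals_ms.std(ddof=1)  # SDNN: standard deviation of NN intervals
print(f"AVNN = {avnn:.1f} ms, SDNN = {sdnn:.1f} ms")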
What is effect by cutting classes?
5 answers
Cutting classes has significant negative consequences on academic performance. Research shows that skipping classes harms exam scores, while attending preexam reviews and taking optional quizzes positively impact exam scores. Additionally, weight cutting in sports negatively affects reaction time but has mixed effects on power and accuracy in striking performance. On the other hand, reducing class sizes does not show statistically significant improvements in academic test scores or non-cognitive skills. Addressing class cutting in high schools is crucial, as it often stems from disengagement and alienation, leading to course failure and dropout rates. Schools can combat class cutting by understanding students' concerns, collaborating with them, and engaging in institutional self-scrutiny to create a more supportive environment.
How do machine learning algorithms predict sales in e-commerce platforms?
4 answers
Machine learning algorithms play a crucial role in predicting sales on e-commerce platforms. Various studies have highlighted the significance of advanced techniques like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) models, and DAE-SVM models for accurate sales forecasting. These algorithms leverage historical data from online trade markets, consider factors like promotion campaigns and product competition, and employ methods such as sentiment analysis of customer reviews to enhance prediction accuracy. By incorporating preprocessing steps, feature selection, and model training, machine learning models can effectively handle diverse data types and provide valuable insights that help e-commerce businesses optimize their marketing strategies and ensure long-term success.
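
A minimal sketch of the LSTM approach named above, using Keras on a synthetic sales-like series; the window length and architecture are illustrative assumptions:

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Synthetic seasonal series standing in for daily sales.
series = np.sin(np.arange(400) / 10) + rng.normal(scale=0.1, size=400)

WINDOW = 20  # illustrative look-back length
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),   # next-period sales
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-step forecast:", model.predict(X[-1:], verbose=0).ravel())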