
Kate A. Smith

Researcher at Monash University, Clayton campus

Publications - 129
Citations - 4718

Kate A. Smith is an academic researcher from Monash University, Clayton campus. The author has contributed to research in topics including Artificial neural network & Association rule learning. The author has an h-index of 32, co-authored 129 publications receiving 4402 citations. Previous affiliations of Kate A. Smith include Walter and Eliza Hall Institute of Medical Research & University of Melbourne.

Papers
Journal Article

Characteristic-Based Clustering for Time Series Data

TL;DR: This paper proposes a method for clustering time series based on their structural characteristics, which reduces the dimensionality of the time series and is much less sensitive to missing or noisy data.
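A minimal sketch of the feature-based clustering idea, assuming each series is replaced by a small vector of global characteristics (trend, autocorrelation, skewness, kurtosis) that are then clustered with k-means; the specific features and names below are illustrative, not the exact measures from the paper.

```python
import numpy as np
from scipy import stats
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def series_features(x):
    """Summarise one series by a few structural characteristics."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x))
    slope, _, r, _, _ = stats.linregress(t, x)        # linear trend strength
    acf1 = np.corrcoef(x[:-1], x[1:])[0, 1]           # lag-1 autocorrelation
    return np.array([slope, r ** 2, acf1, stats.skew(x), stats.kurtosis(x)])

def cluster_series(series_list, n_clusters=3, random_state=0):
    """Cluster series on their feature vectors instead of raw observations."""
    feats = np.vstack([series_features(s) for s in series_list])
    feats = StandardScaler().fit_transform(feats)      # put features on one scale
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=random_state).fit_predict(feats)

# Example: noisy sine waves and trending series separate into two groups.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
series = [np.sin(t) + 0.1 * rng.standard_normal(200) for _ in range(3)] + \
         [0.5 * t + 0.1 * rng.standard_normal(200) for _ in range(3)]
print(cluster_series(series, n_clusters=2))
```

Because the feature vector has a fixed length, series of different lengths or with gaps can still be compared, which is where the robustness to missing or noisy data comes from.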
Journal Article

Neural Networks for Combinatorial Optimization: a Review of More Than a Decade of Research

TL;DR: It has been over a decade since neural networks were first applied to solve combinatorial optimization problems; this review presents the current standing of neural networks for combinatorial optimization by considering each of the major classes of combinatorial optimization problems.
Journal Article

On learning algorithm selection for classification

TL;DR: This paper introduces a new method for learning algorithm evaluation and selection, with empirical results based on classification, using the rule-based learning algorithm C5.0 to generate rules that describe which types of algorithms are suited to solving which types of classification problems.
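A minimal sketch of the meta-learning idea, assuming each dataset is described by a few meta-features and labelled with its best-performing algorithm from prior experiments. C5.0 is not available in scikit-learn, so a CART decision tree stands in for the rule learner here; the meta-features and numbers are purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical meta-feature table: [n_instances, n_attributes, class_entropy]
X_meta = np.array([
    [100,    5, 0.3],
    [5000,  20, 0.9],
    [200,   50, 0.8],
    [10000,  8, 0.5],
])
# Label = which algorithm performed best on that dataset in prior experiments.
y_best = np.array(["naive_bayes", "neural_net", "svm", "neural_net"])

meta_learner = DecisionTreeClassifier(max_depth=3, random_state=0)
meta_learner.fit(X_meta, y_best)

# Inspect the induced rules and recommend an algorithm for a new dataset.
print(export_text(meta_learner,
                  feature_names=["n_instances", "n_attributes", "class_entropy"]))
print(meta_learner.predict([[3000, 12, 0.7]]))
```

The printed tree plays the role of the human-readable rules: each path from root to leaf reads as a rule of the form "if a dataset has these characteristics, this algorithm tends to perform best".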
Journal Article

Neural networks in business: techniques and applications for the operations researcher

TL;DR: An overview is presented of the different types of neural network models that are applicable to solving business problems, as well as their historical and current use in business.
Journal Article

On chaotic simulated annealing

TL;DR: A new approach to chaotic simulated annealing with guaranteed convergence and minimization of the energy function is suggested by gradually reducing the time step in the Euler approximation of the differential equations that describe the continuous Hopfield neural network.
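A minimal sketch of the time-step annealing idea, assuming a continuous Hopfield network integrated with the Euler method: a large step size makes the discrete dynamics oscillatory or chaotic (useful for exploring the energy landscape), and shrinking the step recovers convergent gradient-like behaviour. The small symmetric weight matrix, bias vector, and decay schedule below are illustrative only, not the paper's parameters.

```python
import numpy as np

def hopfield_euler_anneal(W, b, steps=500, dt0=2.0, decay=0.99, tau=1.0, seed=0):
    """Euler-integrate du/dt = -u/tau + W v + b while shrinking the time step."""
    rng = np.random.default_rng(seed)
    u = 0.1 * rng.standard_normal(len(b))      # internal states
    dt = dt0
    for _ in range(steps):
        v = 1.0 / (1.0 + np.exp(-u))           # neuron outputs (sigmoid)
        u = u + dt * (-u / tau + W @ v + b)    # Euler update with current step size
        dt *= decay                            # gradually reduce the time step
    v = 1.0 / (1.0 + np.exp(-u))
    energy = -0.5 * v @ W @ v - b @ v          # Hopfield energy of the final state
    return v, energy

# Example: a tiny symmetric weight matrix standing in for an optimization problem.
W = np.array([[ 0.0, -2.0,  1.0],
              [-2.0,  0.0,  1.0],
              [ 1.0,  1.0,  0.0]])
b = np.array([0.5, 0.5, -0.5])
v, E = hopfield_euler_anneal(W, b)
print(np.round(v, 3), round(E, 3))
```

As the step size decays the Euler map approaches the continuous Hopfield dynamics, whose energy decreases monotonically, which is the intuition behind the convergence guarantee described in the abstract.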