
Adam Krzyżak

Researcher at Concordia University

Publications: 264
Citations: 8,136

Adam Krzyżak is an academic researcher at Concordia University. He has contributed to research on topics including support vector machines and radial basis function networks, has an h-index of 37, and has co-authored 244 publications receiving 7,631 citations. His previous affiliations include the West Pomeranian University of Technology and the Wrocław University of Technology.

Papers
Journal Article

Contour-based handwritten numeral recognition using multiwavelets and neural networks

TL;DR: A descriptor for handwritten numeral recognition is developed from the character contour using a multiwavelet orthonormal shell decomposition; multiwavelets possess properties that let them outperform scalar wavelets in some applications, e.g. signal denoising.
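To make the idea concrete, here is a minimal, hedged sketch of a contour-based descriptor: it uses a plain orthonormal (scalar Haar) wavelet decomposition of a centered contour signature as a simplified stand-in for the paper's multiwavelet orthonormal shell decomposition, and it omits the neural-network classifier entirely. The function names and the detail-energy feature choice are illustrative assumptions, not the authors' construction.

```python
# Illustrative sketch only: a contour descriptor from an orthonormal (Haar)
# wavelet decomposition of the contour signature; NOT the paper's multiwavelet
# shell decomposition or its neural-network classifier.
import numpy as np

def haar_step(signal):
    """One level of the orthonormal Haar transform: averages and details."""
    pairs = signal.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    det = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return avg, det

def contour_descriptor(contour_xy, levels=4):
    """Descriptor from a closed contour sampled at a power-of-two number of points.

    contour_xy: (N, 2) array of contour points.
    Returns detail energies per scale plus the coarse approximation coefficients.
    """
    # Complex contour signature, centered for translation invariance.
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    z = z - z.mean()
    signal = np.abs(z)                           # radial signature of the contour
    features = []
    for _ in range(levels):
        signal, detail = haar_step(signal)
        features.append(np.linalg.norm(detail))  # detail energy at this scale
    features.extend(signal)                      # coarse approximation coefficients
    return np.asarray(features, dtype=float)

# Example: a noisy circle sampled at 64 contour points.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * np.random.randn(64, 2)
print(contour_descriptor(contour).shape)
```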
Journal Article

Nonparametric regression estimation using penalized least squares

TL;DR: This work presents multivariate penalized least squares regression estimates and, using Vapnik-Chervonenkis theory, shows strong consistency of the truncated versions of the estimates without any conditions on the underlying distribution.
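As a rough illustration of penalized least squares regression (not the paper's multivariate estimator), the sketch below fits a univariate regression on a fixed design by penalizing squared second differences of the fitted values, then forms a truncated version by clipping the fit at a level beta_n; the penalty weight, design grid, and truncation level are assumptions.

```python
# A minimal sketch, assuming a fixed univariate design and a discrete
# second-difference roughness penalty; the truncated estimate simply clips
# the fitted values at beta_n.
import numpy as np

def penalized_ls_fit(y, lam):
    """Fitted values minimizing ||y - f||^2 + lam * ||D2 f||^2."""
    n = len(y)
    # Second-difference operator D2, shape (n - 2, n).
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

def truncate(f_hat, beta_n):
    """Truncated estimate: clip the fitted values to [-beta_n, beta_n]."""
    return np.clip(f_hat, -beta_n, beta_n)

# Example: noisy samples of a smooth regression function on a grid.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
f_hat = truncate(penalized_ls_fit(y, lam=50.0), beta_n=2.0)
print(f_hat[:5])
```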
Journal Article

On estimation of a class of nonlinear systems by the kernel regression estimate

TL;DR: The estimation of a multiple-input single-output discrete Hammerstein system that contains a nonlinear memoryless subsystem followed by a dynamic linear subsystem is studied, and the distribution-free pointwise and global convergence of the estimate is demonstrated.
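The sketch below is an assumed, simplified single-input illustration of the kernel-regression idea for Hammerstein systems: when the inputs are i.i.d., E[y_k | u_k = u] equals the static nonlinearity up to scale and an additive constant, so a Nadaraya-Watson regression of outputs on inputs recovers it. The Gaussian kernel, bandwidth, and simulated system are illustrative choices, not the paper's exact setup.

```python
# Hedged sketch: recover the nonlinearity of a simulated Hammerstein system
# by kernel regression of outputs on inputs (single-input case).
import numpy as np

def kernel_regression(u_train, y_train, u_query, h):
    """Nadaraya-Watson estimate with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((u_query[:, None] - u_train[None, :]) / h) ** 2)
    return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-12)

# Simulate: static nonlinearity m(.) followed by a linear dynamic part,
# driven by i.i.d. inputs, with observation noise.
rng = np.random.default_rng(1)
m = lambda u: np.tanh(2.0 * u)                  # unknown nonlinearity
u = rng.uniform(-2, 2, size=5000)               # i.i.d. inputs
v = m(u)
y = np.zeros_like(v)
for k in range(len(v)):                         # linear dynamics: y_k = v_k + 0.5 v_{k-1}
    y[k] = v[k] + (0.5 * v[k - 1] if k > 0 else 0.0)
y += 0.1 * rng.standard_normal(y.shape)         # observation noise

grid = np.linspace(-2, 2, 9)
print(np.round(kernel_regression(u, y, grid, h=0.2), 2))  # approx m(grid) + const
```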
Journal Article

Global convergence of the recursive kernel regression estimates with applications in classification and nonlinear system estimation

TL;DR: It is shown, using the martingale device, that weak, strong and complete L₁ consistencies are equivalent, and that the conditions on a certain smoothing sequence are necessary and sufficient for strong L₁ consistency of the recursive kernel regression estimate.
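A minimal sketch of a recursive kernel regression estimate follows: on a fixed evaluation grid, a running numerator and denominator are updated one observation at a time, so past data never need to be revisited. The Gaussian kernel and the smoothing sequence h_n = n^(-1/5) are assumptions for illustration; the paper's results concern which smoothing sequences yield L₁ consistency.

```python
# Hedged sketch of a recursive (online) kernel regression estimate on a grid.
import numpy as np

class RecursiveKernelRegression:
    def __init__(self, grid):
        self.grid = np.asarray(grid, dtype=float)
        self.num = np.zeros_like(self.grid)   # running sum of Y_i * K_i
        self.den = np.zeros_like(self.grid)   # running sum of K_i
        self.n = 0

    def update(self, x_i, y_i):
        """Incorporate one observation (X_i, Y_i) with bandwidth h_i."""
        self.n += 1
        h_i = self.n ** (-1.0 / 5.0)          # assumed smoothing sequence
        k = np.exp(-0.5 * ((self.grid - x_i) / h_i) ** 2)
        self.num += y_i * k
        self.den += k

    def estimate(self):
        return self.num / np.maximum(self.den, 1e-12)

# Stream observations of Y = m(X) + noise and refine the estimate online.
rng = np.random.default_rng(2)
reg = RecursiveKernelRegression(np.linspace(0, 1, 11))
for _ in range(2000):
    x = rng.uniform(0, 1)
    reg.update(x, np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal())
print(np.round(reg.estimate(), 2))
```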
Journal Article

A fast SVM training algorithm

TL;DR: A fast support vector machine (SVM) training algorithm is proposed within SVM's decomposition framework by effectively integrating kernel caching, digest and shrinking policies, and stopping conditions; its promising scalability opens a way to tackle larger-scale learning problems in other domains such as data mining.
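The paper's algorithm is not reproduced here, but the same ingredients (kernel caching, the shrinking heuristic, and a stopping tolerance) are exposed by widely used decomposition solvers; the snippet below shows them as scikit-learn SVC options, purely as a usage illustration.

```python
# Usage illustration only: scikit-learn's SVC (LIBSVM-based decomposition
# solver) exposes the kernel cache size, the shrinking heuristic, and the
# stopping tolerance as parameters.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
clf = SVC(
    kernel="rbf",
    C=1.0,
    cache_size=500,   # kernel cache in MB: a larger cache avoids recomputing kernel rows
    shrinking=True,   # shrink the working problem by freezing variables at their bounds
    tol=1e-3,         # stopping condition on the optimality (KKT) violation
)
clf.fit(X, y)
print(clf.n_support_)
```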