Institution

Australian National University

Education · Canberra, Australian Capital Territory, Australia
About: Australian National University is an education organization based in Canberra, Australian Capital Territory, Australia. It is known for research contributions in the topics of Population and Galaxy. The organization has 34,419 authors who have published 109,261 publications receiving 4,315,448 citations. It is also known as The Australian National University and ANU.


Papers
Book
15 Jul 1998
TL;DR: The core of the book is Pienemann's Processability Theory, which spells out which second language forms are processable at which developmental stage; the theory is based on recent research into language processing and is formalised within Lexical-Functional Grammar.
Abstract: This book marks a new development in the field of second language acquisition research. It explores the way in which language processing mechanisms shape the course of language development. Language Processing and Second Language Development thus adds one major psychological component to the search for a theory of second language acquisition. The core of the book is Pienemann's Processability Theory, which spells out which second language forms are processable at which developmental stage. The theory is based on recent research into language processing and is formalised within Lexical-Functional Grammar. The predictions of the theory are applied to the second language development of English, German, Japanese and Swedish. The theory is also tested in on-line experiments. In addition, Processability Theory has major implications for interlanguage variation (including task variation) and age-related differences in language acquisition. All of these issues are explored from a processing perspective with theoretical and empirical rigor.

597 citations

Journal ArticleDOI
TL;DR: In this paper, an alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view, and it is shown that optimal convergence rates are achieved by the PCA technique in certain circumstances.
Abstract: In functional linear regression, the slope "parameter" is a function. Therefore, in a nonparametric context, it is determined by an infinite number of unknowns. Its estimation involves solving an ill-posed problem and has points of contact with a range of methodologies, including statistical smoothing and deconvolution. The standard approach to estimating the slope function is based explicitly on functional principal components analysis and, consequently, on spectral decomposition in terms of eigenvalues and eigen-functions. We discuss this approach in detail and show that in certain circumstances, optimal convergence rates are achieved by the PCA technique. An alternative approach based on quadratic regularisation is suggested and shown to have advantages from some points of view.

597 citations
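The PCA-based estimator described in the abstract above can be illustrated with a short numerical sketch. The code below is a minimal, hypothetical implementation of truncated functional principal components regression on curves sampled on a regular grid; the function name fpca_slope_estimate, the discretisation, and the toy data are our own assumptions, not material from the paper.

```python
import numpy as np

def fpca_slope_estimate(X, y, n_components=4):
    """Estimate the slope function b(t) in the functional linear model
    y_i = a + integral of b(t) X_i(t) dt + noise, via truncated functional PCA.

    X : (n, T) array of curves sampled on a regular grid of T points over [0, 1]
    y : (n,) array of scalar responses
    """
    n, T = X.shape
    dt = 1.0 / T                              # quadrature weight for the integral

    Xc = X - X.mean(axis=0)                   # centred curves
    yc = y - y.mean()

    # Discretised empirical covariance operator and its spectral decomposition.
    K = (Xc.T @ Xc) / n * dt
    eigvals, eigvecs = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]         # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    eigvecs = eigvecs / np.sqrt(dt)           # eigenfunctions with unit L2 norm

    # Principal component scores and the truncated spectral estimator:
    # b_hat(t) = sum_{j<=m} cov(score_j, y) / lambda_j * psi_j(t)
    scores = Xc @ eigvecs[:, :n_components] * dt
    cross_cov = scores.T @ yc / n
    return eigvecs[:, :n_components] @ (cross_cov / eigvals[:n_components])

# Toy usage: recover a smooth slope function from simulated rough curves.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
X = rng.normal(size=(200, 100)).cumsum(axis=1) / 10
b_true = np.sin(2 * np.pi * t)
y = X @ b_true / 100 + rng.normal(scale=0.05, size=200)
b_hat = fpca_slope_estimate(X, y, n_components=4)
```

The truncation level n_components acts as the smoothing parameter here; roughly speaking, the quadratic-regularisation alternative discussed in the abstract replaces this hard truncation with a ridge-type penalty.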

Journal ArticleDOI
TL;DR: The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison; a fully annotated dataset with per-frame annotations and several visual attributes is also introduced, making the resulting evaluation the largest benchmark to date.
Abstract: This paper addresses the problem of single-target tracker performance evaluation. We consider the performance measures, the dataset and the evaluation system to be the most important components of tracker evaluation and propose requirements for each of them. The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison. The ranking-based methodology addresses tracker equivalence in terms of statistical significance and practical differences. A fully-annotated dataset with per-frame annotations with several visual attributes is introduced. The diversity of its visual properties is maximized in a novel way by clustering a large number of videos according to their visual attributes. This makes it the most carefully constructed and annotated dataset to date. A multi-platform evaluation system allowing easy integration of third-party trackers is presented as well. The proposed evaluation methodology was tested on the VOT2014 challenge on the new dataset and 38 trackers, making it the largest benchmark to date. Most of the tested trackers are indeed state-of-the-art since they outperform the standard baselines, resulting in a highly-challenging benchmark. An exhaustive analysis of the dataset from the perspective of tracking difficulty is carried out. To facilitate tracker comparison, a new performance visualization technique is proposed.

596 citations
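To make the kind of measures such an evaluation relies on concrete, here is a small, hypothetical sketch of two widely used single-target tracking measures: per-frame bounding-box overlap (intersection over union) averaged into an accuracy score, and zero-overlap frames counted as failures. The Box class and function names are ours, and the sketch omits the re-initialisation step a full VOT-style protocol applies after a failure.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left corner
    y: float
    w: float  # width
    h: float  # height

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union overlap between two axis-aligned boxes."""
    ix = max(0.0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0.0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union > 0 else 0.0

def evaluate_sequence(predicted, ground_truth):
    """Mean per-frame overlap (accuracy) and zero-overlap failure count
    for one tracker on one sequence; both lists hold one Box per frame."""
    overlaps = [iou(p, g) for p, g in zip(predicted, ground_truth)]
    failures = sum(1 for o in overlaps if o == 0.0)
    accuracy = sum(overlaps) / len(overlaps) if overlaps else 0.0
    return accuracy, failures

# Usage: rank trackers by higher accuracy and fewer failures across sequences.
gt   = [Box(10, 10, 50, 50), Box(12, 11, 50, 50)]
pred = [Box(12, 12, 48, 48), Box(80, 80, 40, 40)]
acc, fails = evaluate_sequence(pred, gt)   # about 0.46 accuracy, 1 failure
```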

Journal ArticleDOI
TL;DR: This review focuses on applications and protocols of recent studies where docking calculations and molecular dynamics simulations were combined to dock small molecules into protein receptors, and is structured to lead the reader from the simpler to more compute‐intensive methods.
Abstract: A rational approach is needed to maximize the chances of finding new drugs, and to exploit the opportunities of potential new drug targets emerging from genomic and proteomic initiatives, and from the large libraries of small compounds now readily available through combinatorial chemistry. Despite a shaky early history, computer-aided drug design techniques can now be effective in reducing costs and speeding up drug discovery. This happy outcome results from development of more accurate and reliable algorithms, use of more thoughtfully planned strategies to apply them, and greatly increased computer power to allow studies with the necessary reliability to be performed. Our review focuses on applications and protocols, with the main emphasis on critical analysis of recent studies where docking calculations and molecular dynamics (MD) simulations were combined to dock small molecules into protein receptors. We highlight successes to demonstrate what is possible now, but also point out drawbacks and future directions. The review is structured to lead the reader from the simpler to more compute-intensive methods. Thus, while inexpensive and fast docking algorithms can be used to scan large compound libraries and reduce their size, more accurate but expensive MD simulations can be applied when a few selected ligand candidates remain. MD simulations can be used: during the preparation of the protein receptor before docking, to optimize its structure and account for protein flexibility; for the refinement of docked complexes, to include solvent effects and account for induced fit; to calculate binding free energies, to provide an accurate ranking of the potential ligands; and in the latest developments, during the docking process itself to find the binding site and correctly dock the ligand a priori.

595 citations
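The staged workflow the review describes, cheap docking scores to prune a large library followed by expensive MD-based refinement of the few survivors, can be summarised as a pipeline sketch. The code below is purely schematic and ours alone: score_by_docking and refine_with_md are hypothetical placeholders returning pseudo-random values where a real pipeline would call a docking engine and an MD package.

```python
import random

def score_by_docking(ligand: str) -> float:
    """Placeholder for a fast docking score (lower is better); returns a
    deterministic pseudo-random value in place of a real docking engine."""
    return random.Random(ligand).uniform(-12.0, -2.0)

def refine_with_md(ligand: str, docking_score: float) -> float:
    """Placeholder for an expensive MD-based rescoring step (e.g. a binding
    free energy estimate) applied only to compounds surviving the docking stage."""
    return docking_score + random.Random(ligand + "md").uniform(-1.5, 1.5)

def screening_funnel(library, keep_fraction=0.05):
    """Two-stage virtual screening funnel:
    1) score every compound with the cheap method and keep the best fraction,
    2) rescore only the survivors with the expensive refinement step."""
    docked = {lig: score_by_docking(lig) for lig in library}
    n_keep = max(1, int(len(docked) * keep_fraction))
    survivors = sorted(docked, key=docked.get)[:n_keep]
    refined = {lig: refine_with_md(lig, docked[lig]) for lig in survivors}
    return sorted(refined.items(), key=lambda kv: kv[1])

# Usage on a toy "library" of compound identifiers.
ranking = screening_funnel([f"compound_{i}" for i in range(1000)])
```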


Authors

Showing all 34925 results

Name  |  H-index  |  Papers  |  Citations
Cyrus Cooper  |  204  |  1869  |  206782
Nicholas G. Martin  |  192  |  1770  |  161952
David R. Williams  |  178  |  2034  |  138789
Krzysztof Matyjaszewski  |  169  |  1431  |  128585
Anton M. Koekemoer  |  168  |  1127  |  106796
Robert G. Webster  |  158  |  843  |  90776
Ashok Kumar  |  151  |  5654  |  164086
Andrew White  |  149  |  1494  |  113874
Bernhard Schölkopf  |  148  |  1092  |  149492
Paul Mitchell  |  146  |  1378  |  95659
Liming Dai  |  141  |  781  |  82937
Thomas J. Smith  |  140  |  1775  |  113919
Michael J. Keating  |  140  |  1169  |  76353
Joss Bland-Hawthorn  |  136  |  1114  |  77593
Harold A. Mooney  |  135  |  450  |  100404
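As a reminder for readers of the table above, an author's H-index is the largest h such that at least h of their papers have at least h citations each. A minimal sketch of the computation (our own, purely illustrative):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 3 and 0 times give an h-index of 3.
assert h_index([10, 8, 5, 3, 0]) == 3
```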
Network Information
Related Institutions (5)
University of Oxford: 258.1K papers, 12.9M citations, 92% related
University College London: 210.6K papers, 9.8M citations, 91% related
Pennsylvania State University: 196.8K papers, 8.3M citations, 91% related
University of Edinburgh: 151.6K papers, 6.6M citations, 91% related
University of Cambridge: 282.2K papers, 14.4M citations, 91% related

Performance Metrics
No. of papers from the Institution in previous years

Year  Papers
2023  280
2022  773
2021  5,261
2020  5,464
2019  5,109
2018  4,825