scispace - formally typeset
Author

Tanya Schmah

Bio: Tanya Schmah is an academic researcher from University of Ottawa. The author has contributed to research in topics: Symplectic geometry & Geometric mechanics. The author has an h-index of 17, co-authored 40 publications receiving 1089 citations. Previous affiliations of Tanya Schmah include École Polytechnique Fédérale de Lausanne & Bryn Mawr College.

Papers
Book
04 Oct 2009
TL;DR: Holm, Schmah and Stoica provide a unified viewpoint of Lagrangian and Hamiltonian mechanics in the coordinate-free language of differential geometry, in the spirit of the Marsden-Ratiu school.
Abstract: by Darryl D. Holm, Tanya Schmah and Cristina Stoica, Oxford University Press, Oxford, 2009, xi + 515 pp., ISBN: 978-0-19-921290-3. The purpose of the book is to provide the unifying viewpoint of Lagrangian and Hamiltonian mechanics in the coordinate-free language of differential geometry, in the spirit of the Marsden-Ratiu school. The book is similar in content, although less formal, to the book by J. Marsden and T. Ratiu [7]. One can also mention the companion two-volume book by Holm [4,5], written at a more basic level, which can be recommended as introductory reading. The classical treatises on the subject are the books by Abraham-Marsden [1], Arnold [2] and Libermann-Marle [6]. Typical applications are N-particle systems, rigid bodies, and continua such as fluids and electromagnetic systems, which illustrate the power of the adopted point of view. The geometrical structure allows the treatment of both the finite-dimensional conservative case (first part of the book) and the infinite-dimensional situation in the second part. The notion of symmetry is central here, as it allows a reduction of the number of dimensions of a mechanical system and further exploits the conserved quantities (momentum maps) associated with symmetry. Lie group symmetries, Poisson reduction and momentum maps are discussed first. The concepts are introduced in a progressive and clear manner in the first part of the book. The second part, devoted to infinite-dimensional systems, is motivated by the identification of Euler's ideal fluid motion with the geodesic flow on the group of volume-preserving diffeomorphisms. The Euler-Poincaré (EP) variational principle for the Euler fluid equations is exposed in the framework of geometric mechanics, in association with the Lie-Poisson Hamiltonian structure, Noether's theorem and momentum maps.
Original applications of the Euler-Poincaré equations to solitons, computational anatomy, image matching, and geophysical fluid dynamics are given at the end of the second part of the book. Here the first chapter recapitulates the Newtonian, Lagrangian and Hamiltonian
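For readers unfamiliar with the notation, the Euler-Poincaré equations referred to in the review take, in their basic form (reduced Lagrangian ℓ on a Lie algebra, no advected quantities; a standard textbook statement, not a quote from the book under review):

```latex
\frac{d}{dt}\,\frac{\delta \ell}{\delta \xi}
  = \operatorname{ad}^{*}_{\xi}\,\frac{\delta \ell}{\delta \xi},
\qquad \xi(t) \in \mathfrak{g}.
```

For the ideal fluid, $\mathfrak{g}$ is the algebra of divergence-free vector fields and this recovers Euler's equations; the Lie-Poisson Hamiltonian structure arises on the dual $\mathfrak{g}^{*}$ after a Legendre transform.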

254 citations

Journal ArticleDOI
TL;DR: In this paper, it is shown that filtered noise can mimic low-dimensional chaotic attractors when examined by the Grassberger-Procaccia algorithm alone; a criterion is derived for the minimum data accuracy needed to resolve the dimension of an attractor, and a criterion derived by Eckmann and Ruelle [Physica D 56, 185 (1992)] for the minimum number of data points required is used as a further check on these dimension estimates.
Abstract: This contribution presents four results. First, calculations indicate that when examined by the Grassberger-Procaccia algorithm alone, filtered noise can mimic low-dimensional chaotic attractors. Given the ubiquity of signal filtering in experimental investigations, this is potentially important. Second, a criterion is derived which provides an estimate of the minimum data accuracy needed to resolve the dimension of an attractor. Third, it is shown that a criterion derived by Eckmann and Ruelle [Physica D 56, 185 (1992)] to estimate the minimum number of data points required in a Grassberger-Procaccia calculation can be used to provide a further check on these dimension estimates. Fourth, it is shown that surrogate data techniques recently published by Theiler and his colleagues [in Nonlinear Modeling and Forecasting, edited by M. Casdagli and S. Eubanks (Addison Wesley, Reading, MA, 1992)] can successfully distinguish between linearly correlated noise and nonlinear structure. These results, and most particularly the first, indicate that Grassberger-Procaccia results must be interpreted with far greater circumspection than has previously been the case, and that the algorithm should be used in combination with additional procedures such as calculations with surrogate data. When filtered signals are examined by this algorithm alone, a finite noninteger value of ${\mathit{D}}_{2}$ is consistent with low-dimensional chaotic behavior, but it is certainly not a definitive diagnostic of chaos.
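The Grassberger-Procaccia D2 estimate at the center of these results can be sketched in a few lines: compute the correlation sum C(r) (the fraction of point pairs closer than r) and fit the slope of log C(r) against log r over a scaling region. A minimal illustration of the algorithm, not the paper's code:

```python
import numpy as np

def correlation_sum(points, r):
    """C(r): fraction of distinct point pairs closer than r."""
    d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)   # each pair counted once
    return np.mean(d[iu] < r)

def estimate_d2(points, radii):
    """Least-squares slope of log C(r) versus log r."""
    c = np.array([correlation_sum(points, r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

# Sanity check: points uniform on a plane embedded in 3D should give
# an estimate close to 2 (edge effects pull it slightly below).
rng = np.random.default_rng(0)
pts = np.zeros((500, 3))
pts[:, :2] = rng.uniform(size=(500, 2))
d2 = estimate_d2(pts, np.logspace(-1.5, -0.5, 8))
```

The paper's warning applies directly to such a computation: a clean non-integer slope is consistent with low-dimensional chaos but is not a diagnostic of it, so surrogate-data checks should accompany the fit.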

247 citations

Proceedings Article
08 Dec 2008
TL;DR: It is shown that much better discrimination can be achieved by fitting a generative model to each separate condition and then seeing which model is most likely to have generated the data.
Abstract: Neuroimaging datasets often have a very large number of voxels and a very small number of training cases, which means that overfitting of models for this data can become a very serious problem. Working with a set of fMRI images from a study on stroke recovery, we consider a classification task for which logistic regression performs poorly, even when L1- or L2- regularized. We show that much better discrimination can be achieved by fitting a generative model to each separate condition and then seeing which model is most likely to have generated the data. We compare discriminative training of exactly the same set of models, and we also consider convex blends of generative and discriminative training.
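The classification scheme described here — one generative model per condition, with the winning model's likelihood deciding the label — can be sketched with diagonal Gaussians standing in for the paper's models (synthetic data; not the study's fMRI pipeline):

```python
import numpy as np

def fit_diag_gaussian(x):
    """Per-voxel mean and variance (small floor for numerical stability)."""
    return x.mean(axis=0), x.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    """Diagonal-Gaussian log-density of one pattern x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

rng = np.random.default_rng(1)
# Two "conditions": many voxels (200), few training cases (10 each) --
# the regime where discriminative fitting overfits badly.
class_a = rng.normal(0.0, 1.0, size=(10, 200))
class_b = rng.normal(1.0, 1.0, size=(10, 200))
models = [fit_diag_gaussian(class_a), fit_diag_gaussian(class_b)]

test_pattern = rng.normal(1.0, 1.0, size=200)   # drawn from condition B
scores = [log_likelihood(test_pattern, mu, var) for mu, var in models]
pred = int(np.argmax(scores))                   # pick the likelier model
```

Discriminative training of the same models, or a convex blend of the two objectives as the paper considers, would change only how the parameters are fit, not this decision rule.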

82 citations

Patent
26 May 2004
TL;DR: In this article, a system and method for performing a categorical analysis on one or more time dependent dynamic processes is provided, where a reference library of data pertaining to multiple characteristics of time series reflective of the dynamic process is created and used to define selected categories.
Abstract: A system and method for performing a categorical analysis on one or more time dependent dynamic processes is provided. A reference library of data pertaining to multiple characteristics of time series reflective of the dynamic process is created and used to define selected categories for performing the categorical analysis.

59 citations

Posted Content
TL;DR: FAIM is able to maintain both the advantages of higher accuracy and fewer "folding" locations over VoxelMorph, over a range of hyper-parameters (with the same values used for both networks).
Abstract: We present a new unsupervised learning algorithm, "FAIM", for 3D medical image registration. With a different architecture than the popular "U-net", the network takes a pair of full image volumes and predicts the displacement fields needed to register source to target. Compared with "U-net" based registration networks such as VoxelMorph, FAIM has fewer trainable parameters but can achieve higher registration accuracy as judged by Dice score on region labels in the Mindboggle-101 dataset. Moreover, with the proposed penalty loss on negative Jacobian determinants, FAIM produces deformations with many fewer "foldings", i.e. regions of non-invertibility where the surface folds over itself. In our experiment, we varied the strength of this penalty and investigated changes in registration accuracy and non-invertibility in terms of number of "folding" locations. We found that FAIM is able to maintain both the advantages of higher accuracy and fewer "folding" locations over VoxelMorph, over a range of hyper-parameters (with the same values used for both networks). Further, when trading off registration accuracy for better invertibility, FAIM required less sacrifice of registration accuracy. Codes for this paper will be released upon publication.
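The anti-folding penalty described above can be sketched numerically: for a displacement field u, the registration map is phi(x) = x + u(x), and folding occurs wherever the Jacobian determinant of phi is negative. A minimal 2D version (an illustration of the idea, not the FAIM implementation):

```python
import numpy as np

def jacobian_det_2d(u):
    """det of the Jacobian of phi = identity + u, via finite differences.

    u has shape (H, W, 2): u[..., 0] is displacement along axis 0,
    u[..., 1] along axis 1.
    """
    du0_d0, du0_d1 = np.gradient(u[..., 0])
    du1_d0, du1_d1 = np.gradient(u[..., 1])
    return (1.0 + du0_d0) * (1.0 + du1_d1) - du0_d1 * du1_d0

def folding_penalty(u):
    """Mean penalty on negative determinants; zero iff no folding."""
    det = jacobian_det_2d(u)
    return float(np.mean(np.maximum(0.0, -det)))

u_id = np.zeros((8, 8, 2))                     # identity map: det = 1
p_identity = folding_penalty(u_id)

u_fold = np.zeros((8, 8, 2))
u_fold[..., 0] = -2.0 * np.arange(8)[:, None]  # phi_0 = -x0: orientation flips
p_fold = folding_penalty(u_fold)               # det = -1 everywhere
```

In training, a weighted version of this term is added to the registration loss; varying that weight trades registration accuracy against invertibility, as the abstract describes.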

47 citations


Cited by
Journal ArticleDOI
TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Abstract: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.

11,201 citations

Book
01 Jan 2006
TL;DR: The brain's default state: self-organized oscillations in rest and sleep, and perturbation of the default patterns by experience.
Abstract: Prelude. Cycle 1. Introduction. Cycle 2. Structure defines function. Cycle 3. Diversity of cortical functions is provided by inhibition. Cycle 4. Windows on the brain. Cycle 5. A system of rhythms: from simple to complex dynamics. Cycle 6. Synchronization by oscillation. Cycle 7. The brain's default state: self-organized oscillations in rest and sleep. Cycle 8. Perturbation of the default patterns by experience. Cycle 9. The gamma buzz: gluing by oscillations in the waking brain. Cycle 10. Perceptions and actions are brain state-dependent. Cycle 11. Oscillations in the "other cortex:" navigation in real and memory space. Cycle 12. Coupling of systems by oscillations. Cycle 13. The tough problem. References.

4,266 citations

Journal ArticleDOI
10 Jul 2015-PLOS ONE
TL;DR: This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers by introducing a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
Abstract: Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are solving very successfully a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.
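The conservation idea behind pixel-wise decomposition is easiest to see for a linear score f(x) = w·x + b: each pixel's relevance is its term w_i·x_i, and the relevances sum exactly to the score minus the bias. A toy sketch of that special case (not the paper's propagation rules for kernel classifiers or deep networks):

```python
import numpy as np

def relevance_linear(x, w):
    """Per-pixel contributions to the linear score w.x."""
    return w * x

rng = np.random.default_rng(0)
x = rng.normal(size=16)    # a tiny 16-"pixel" input
w = rng.normal(size=16)    # classifier weights
b = 0.1
score = float(w @ x + b)
R = relevance_linear(x, w)  # heatmap values; they sum to score - b
```

The layer-wise rules in the paper generalize exactly this bookkeeping: relevance entering a layer is redistributed to its inputs so that the total is (approximately) conserved down to the pixels.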

3,330 citations

Journal ArticleDOI
01 Jan 1986
TL;DR: The New York Review of Books as mentioned in this paper is now over twenty years old and has attracted controversy since its inception, but it is the controversies that attract the interest of the reader and to which any history, especially an admittedly impressionistic survey, must give some attention.
Abstract: It comes as something of a surprise to reflect that the New York Review of Books is now over twenty years old. Even people of my generation (that is, old enough to remember the revolutionary 1960s but not young enough to have taken a very exciting part in them) think of the paper as eternally youthful. In fact, it has gone through years of relatively quiet life, yet, as always in a competitive journalistic market, it is the controversies that attract the interest of the reader and to which the history (especially an admittedly impressionistic survey that tries to include something of the intellectual context in which a journal has operated) must give some attention. Not all the attacks which the New York Review has attracted, both early in its career and more recently, are worth more than a brief summary. What do we now make, for example, of Richard Kostelanetz's forthright accusation that 'The New York Review was from its origins destined to publicize Random House's (and especially [Jason] Epstein's) books and writers'? Well, simply that, even if the statistics bear out the charge (and Kostelanetz provides some suggestive evidence to support it, at least with respect to some early issues), there is nothing surprising in a market economy about a publisher trying to push his books through the pages of a journal edited by his friends. True, the New York Review has not had room to review more than around fifteen books in each issue and there could be a bias in the selection of

2,430 citations

Journal ArticleDOI
28 Jan 1983-Science
TL;DR: Specialized experiments with atmosphere and coupled models show that the main damping mechanism for sea ice region surface temperature is reduced upward heat flux through the adjacent ice-free oceans resulting in reduced atmospheric heat transport into the region.
Abstract: The potential for sea ice-albedo feedback to give rise to nonlinear climate change in the Arctic Ocean, defined as a nonlinear relationship between polar and global temperature change or, equivalently, a time-varying polar amplification, is explored in IPCC AR4 climate models. Five models supplying SRES A1B ensembles for the 21st century are examined and very linear relationships are found between polar and global temperatures (indicating linear Arctic Ocean climate change), and between polar temperature and albedo (the potential source of nonlinearity). Two of the climate models have Arctic Ocean simulations that become annually sea ice-free under the stronger CO2 increase to quadrupling forcing. Both of these runs show increases in polar amplification at polar temperatures above −5 °C, and one exhibits heat budget changes that are consistent with the small ice cap instability of simple energy balance models. Both models show linear warming up to a polar temperature of −5 °C, well above the disappearance of their September ice covers at about −9 °C. Below −5 °C, surface albedo decreases smoothly as reductions move, progressively, to earlier parts of the sunlit period. Atmospheric heat transport exerts a strong cooling effect during the transition to annually ice-free conditions. Specialized experiments with atmosphere and coupled models show that the main damping mechanism for sea ice region surface temperature is reduced upward heat flux through the adjacent ice-free oceans, resulting in reduced atmospheric heat transport into the region.

1,356 citations